I had to ask myself this week if this is all a little tinfoil-hat. I look at the public applications of the internet of things. I try to communicate what I see happening. It can be hard not to sound a touch paranoid.
I’m still a long way from warning of the innate sinfulness of Greek-style yoghurt (I think). But there are problems ahead. One problem is the chaos of the emerging applications of smart-city technology.
Today is a follow-on from last week’s newsletter. After sending out last week’s note, I fell further down the rabbit-hole of the internet of things crossing into public uses. The smart city is a prime example of the new interface between humans and sensors.
Marc Benioff is fond of reminding us that technology is not innately good or bad; it is the purposes to which it is put that matter. This fallacy omits the values and biases of the technology’s developers. Nor is tech developed in a hermetic bubble: it usually has a goal in mind.
In the context of turning our social world into a large robot, privacy is too narrow a lens. The stories in this week’s newsletter highlight how the internet of things in the public sphere can undermine those spaces.
The whole reason I began writing these notes was that I felt a need to clarify the ‘disruption’ of our own capacities by smart objects. The metaphor I’ll be working with is that they are ‘read-write’: not only reading social data but writing social outcomes.
Being human-centred requires technology to be developed and deployed with respect for limits and boundaries, with responsibility for its behaviour and accountability for bad faith.
What does that mean? It means we can legitimately feel uncomfortable with a technology or its deployment because we worry it undermines us, infantilises us or forces us to act in one way only, and that those concerns should be heeded. That is messy, and people’s interpretations of those words will differ – one interesting take is below.
Absent good values, aren’t we just engaging in one large beta program for social robotics?
Smart Cities and Citizens: You can’t improve what you don’t measure
What does it look like when this Silicon Valley axiom is applied to citizens?
This week’s example comes to us from China’s emerging internet dystopia.
When Big Brother gets God’s Eye: China tries to catch up on AI | South China Morning Post — www.scmp.com
With the chip, a surveillance camera can greatly speed up human facial recognition and spot a criminal suspect in a crowd in just a few seconds. It has proved effective in at least one district in Shenzhen and, according to publicly disclosed information, has helped police crack hundreds of cases and find a number of lost children.
Around 100,000 public surveillance cameras across the city will be using Intellifusion’s chips by the end of this year. They enable police to identify an individual in just a second or two from a database of about 300 million people, and most surveillance cameras in Shenzhen will be fitted with them by next year.
Chen said the AI tide could not be turned back, but better rules were needed.
China, by virtue of scale and growing expertise, is the technological equivalent of the fruit fly. It is capable of speeding through life-cycles and building knowledge far quicker than other countries.
That provides it with huge advantages and fuels the idea that it will ultimately leap beyond the USA. In certain contexts, it also enjoys boundary-free innovation.
In a political or social setting, technology is almost always put to ordering, recording and classifying. Even though I focus on the internet of things, it is impossible not to stray into the discussion of artificial intelligence. There is little sense in creating all these device-to-device connections without being able to do something with that connection.
In China there is a focus on marrying AI (e.g. facial recognition) and ‘citizen intelligence’ (see below) to smart devices. Those devices bring results into real life: they might track your progress through a city and make it easier for police to intercept you, or deny you access to cash or goods because of your social score.
Shenzhen (or China) is a great place to develop and deploy a ‘smart chip’ to add facial recognition to CCTV cameras because it can fail multiple times. The desire of the state to hold this kind of power means there will be plenty of patience.
The CCP are already on a new iteration of their large ‘social credit’ experiment.
Big data, meet Big Brother: China invents the digital totalitarian state | The Economist — www.economist.com
As an almost perfect expression of the fearful capability of the internet of things, this is quite something. Without smart and connected devices these technologies cannot interact with us. The internet of things enables a social credit score to deny privileges, deny access to public goods or speed up apprehension by the authorities.
This goes beyond privacy if we are actively self-policing; it is also about freedom. You cannot improve what you cannot measure.
This is the extreme, the limit case but there are a few reasons this is still relevant:
- China will be far more open to experimenting with technology for control and order. Its willingness to do so lays down development roadmaps for others to follow.
- Many of the devices we will use are going to be produced in China and will reflect, at minimum, some of the prevailing attitudes and trends there. At maximum, they will incorporate the CCP’s learning into our devices. I have already written about how the long tail of Chinese production is a law unto itself – these devices only make me more concerned.
- This can happen in public in China. It serves a purpose there for citizens to know this kind of thing is even possible. It is misguided to think it is a trend unique to China. It is likely happening away from view closer to home. Uber and Facebook are, after all, products of the ‘free world’.
To quickly double back on last week: recall that a 15% false-positive rate counts as a good result in AI and ML systems. That 15% is a high-impact figure when we are talking about policing, rights and privileges.
One important debate has to be had around our willingness to accept that 15%. For what kinds of goods is this an acceptable figure? For what kinds of outcomes is it unacceptable?
We would probably agree that we shouldn’t be jailing people on the basis of a system that might have a 15% failure rate.
What about authorising a loan? What about securing positions on the local council or authority?
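To make the stakes of that 15% concrete, here is a rough base-rate sketch. The population, suspect-pool and true-positive figures are my own illustrative assumptions, not numbers from any of the articles above; only the 15% false-positive rate comes from the discussion.

```python
# Illustrative base-rate arithmetic: what a 15% false-positive rate means
# when a face-matching system scans a large population. All numbers are
# assumptions for illustration, not figures from the article.

def screening_outcomes(population, suspects, fp_rate, tp_rate):
    """Return (false_alarms, true_hits, precision) for one full scan."""
    innocents = population - suspects
    false_alarms = innocents * fp_rate   # innocent people wrongly flagged
    true_hits = suspects * tp_rate       # genuine suspects correctly flagged
    precision = true_hits / (true_hits + false_alarms)
    return false_alarms, true_hits, precision

# Assumed: a roughly Shenzhen-sized city, a small pool of genuine
# suspects, and the 15% false-positive figure discussed above.
false_alarms, true_hits, precision = screening_outcomes(
    population=12_000_000, suspects=30_000, fp_rate=0.15, tp_rate=0.85
)
print(f"{false_alarms:,.0f} innocent people flagged")       # 1,795,500
print(f"{true_hits:,.0f} suspects correctly flagged")       # 25,500
print(f"only {precision:.1%} of flags point at a suspect")  # 1.4%
```

Under these assumed numbers, even a system that catches 85% of real suspects produces nearly two million false alarms, and almost every individual flag is wrong. That is the base-rate problem behind the questions above.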
Secondly, how do people secure themselves against the worst case ‘leviathan’ model of the state?
Gemma Galdon of Eticas writes about the experience from Barcelona in developing a workable idea of technological sovereignty.
Technological Sovereignty? Democracy, Data and Governance in the Digital Era | CCCB LAB — lab.cccb.org
Technological sovereignty must become another pillar on which to gradually construct and consolidate a new technological model that is ethical, responsible and civic.
In recent years, academia and significant parts of civil society have gradually recovered and underlined the rights and values that are suffering under the asphyxiating boot of techno-solutionism.
As we mentioned previously, key concepts such as equity, justice, transparency, privacy, responsibility, redistribution, redress, leadership, and public and citizen value emerge as elements to be safeguarded in data processes.
Being able to use technology responsibly and to expect lawful behaviour from it, however, cannot depend solely on the capacity of users to understand and defend themselves.