Responsibility as an Afterthought

The big companies cannot treat responsibility as an ‘externality’. It is no longer somebody else’s problem. It is theirs.

Not My Problem

Last week’s note talked about responsibility. To my mind, there is a distinction between liability – in the legal sense – and responsibility in the ethical sense.

Liability is a subset of responsibility – more limited and consistent with a law or regulation. Responsibility is more nebulous and has an ethical component.

With regard to smart devices, I detect an attitude to responsibility that makes me uneasy. I am still struggling to clarify it, but it is a disposition I see online that smart devices are helping to migrate into the rest of our world.

At the moment, it reminds me of the climate-change debates of the late 90s and early 2000s. Then, CO2 emissions were a negative externality to society. That is, they reduced overall well-being and we received no compensation for that reduction. CO2 wasn’t properly accounted for by anyone; it cost no one person, and producing it lay heavily on the resources of the commons.

Until suddenly it was an issue and had to be addressed.

The issue of live-broadcasting death and killing on Facebook Live is an instance that has solidified the comparison for me. Facebook has built a network of incredibly powerful tools for disseminating information rapidly throughout their network. They have increasingly automated it, removing taste and nuance in favour of statistical algorithms.

They did so to make it addictive (sticky) for users, enable discovery and make it easy for ads/brands to reach an audience. The lack of friction is part of the design. It is also a factor enabling the dissemination of really disturbing live video by clearly disturbed people.

Taking Responsibility

When traumatic uses of the service arise like this, the large companies (cf. Twitter) like to push back on it as an issue of individual responsibility. Like the corporates in the 90s who wanted to externalise environmental degradation, tech companies want to externalise responsibility. At least I think it is responsibility. This is the part that needs to be developed further.

There is an empathetic dimension to this – it often appears that the companies are so driven by profit, quarterly goals and networks that they lose sight of the people at the heart of the enterprise. In order to sell more ads, the system has had to assume people are like a mechanistic system of levers and pulleys.

When you press here and pull there, they will click on the ad in front of them. A decade of treating users like lab rats to move more inventory has left Facebook bereft of a vision of the people at the heart of their network.

It might be a comforting conclusion to assume that here begins the revolution but I am not that naive. I think some people are already aware of and sick of Facebook (and/or Google, Twitter etc.) Others are fine with the trade-off or have internalised the human nature projected on them by Facebook.

In any event, I am pretty sure that this evasion of responsibility, and the neoliberal instinct for making it the individual’s problem to fix, is why the pattern keeps repeating.

This instinct matters to me because I see it surfacing in the attitudes (design and ethical) underpinning many of the forthcoming smart devices. They will have deeper, physical real-world impact and it is necessary to start demarcating the responsibilities of companies like Facebook and others and hold them to better standards.

Brex for the Border

Technology (especially A.I. and the internet of things) will be proffered as the remedy to post-Brexit border anxiety. Without ethics by design and privacy by design, experience suggests it will be a damaging fudge.

Brexit was a shock. The thought of the Troubles, like a candle flickering back to flame, returned to most of our minds. Like many other Irish folk, we had no idea – and still don’t, if we are being honest – where the Brexit lark will end up and how much damage it might do to decades of slow and erratic peace-building.

Initially the media imagination was captured with the idea of Dublin airport as some Nauru-esque filter for potential UK visitors.

Thankfully, Donald Tusk’s general principles – and Enda Kenny’s stated position – are for no hard border with Northern Ireland.

  • 11. The Union has consistently supported the goal of peace and reconciliation enshrined in the Good Friday Agreement, and continuing to support and protect the achievements, benefits and commitments of the Peace Process will remain of paramount importance. In view of the unique circumstances on the island of Ireland, flexible and imaginative solutions will be required, including with the aim of avoiding a hard border, while respecting the integrity of the Union legal order. In this context, the Union should also recognise existing bilateral agreements and arrangements between the United Kingdom and Ireland which are compatible with EU law.

Despite the clarity of this statement, some measure of tracking will be unavoidable between the two countries. One is inside and the other is outside the European Union. Traversing the boundary between these states will include checks and identification.

I understand why the ‘hard border’ – ‘soft border’ dichotomy would take centre stage. The Irish social imagination can still conjure up images of soldiers at the borders.  A soft border would not be acceptable to the securocratic state in London unless all UK citizens in Northern Ireland consented to a form of quarantine. A hard border could decimate cross-border trade and imperil the settlement in the North.

There is a third increasingly likely resolution of some of the border problems posed by Brexit: ‘smart borders’.

Technology to the rescue.

Smart Borders

Conor O Reilly (Leeds Uni) gave the heads-up on this issue on Twitter this week.

The work he references is a really interesting report in Nature by the head of Eticas. Eticas have been working with the EU since 2013, researching and developing smart-border systems. You may have seen phase one already rolling out as e-gates at various E.U. airports in the past twelve months.

The Telegraph is the only source to have mooted ‘smart borders’ as a way around the hard/soft problem. In time, I expect more work to go into softening the ground for a largely surveillance-based digital border. The Telegraph suggested that a smart border is the solution to the goods and trade problem between Ireland and the North; I expect its scope to expand significantly beyond that.

So we can check the solutionism box. Political actors will once more look to technology to bail them out of a complex human question. In this instance how to track and filter cross-border travel without physically manning the barricades that could well return Ireland to civil war.

Borders as Processes

Travel is one of the most singularly political acts we can engage in. It is pregnant with privilege (see the number of countries my Irish passport grants me visa-free access to versus your, say, Korean passport). It brings us into direct contact with the power of states – upon leaving and upon re-entering. We transgress modernist boundaries in ever more elaborate displays.

It is also an expression of the power conferred on or withheld from citizens by their respective states. My crossing a border successfully speaks to my status and the status of my state. Checking my bags and forcing a delay to verify my entitlement is an expression of the power of the state into which I am travelling.

What I thought a border meant was challenged after reading the Nature article. Consider:

When our personal data are collected and shared before we board an aeroplane, a border ceases to be a line that separates countries or administrative areas. It becomes a process of monitoring, control and automated decisions. The physical border is increasingly irrelevant because our rights, privileges, relations, characteristics and risk levels are checked all the time.

It rings true that a border is a process. An operation.

It is able to see our leaving in advance, anticipate our dispatch and hand us over to another state for arrival. It is a ghost in the machine of the travel and logistics industries. Schengen was a radical reform of a border – choosing invisibility over visibility within a large group of countries.

The removal of human guards from borders by the E.U. and others is the final act, not the first, of depersonalising the process of a border.

When nowhere is the border, everywhere is.

Technology to the rescue

This is only possible through technology that integrates the sensory and the analytic. The internet of things and artificial intelligence/machine learning.

Here then a second box can be checked, the rationalisation of a process by technology. This is a process – and it will be pregnant with assumptions and motivations.

Let’s look at two toys from the IoT toybox and see what might get included in the ‘smart border’ package.

  1. A connected fingerprint reader – like the one on your iPhone – is now a mobile border-enforcement system. The border is wherever observation and enforcement occur, Belfast city centre or Newry A1/M1.
  2. Image recognition and interrogation systems. As smarter image sensors are embedded into public spaces – say our streetlights and upgraded CCTV cameras – image/facial-recognition programmes can scan and identify transgressions of the border in real time. As a camera relays a car registration at the border, it is logged and perhaps the faces of passengers logged too. Multiple databases can be polled to tag the car. Eventually it might extend to the faces as well.
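A toy sketch of that second flow, assuming nothing about real ANPR systems: a camera reading produces a plate string, a handful of databases are polled, and the result is logged as a border ‘event’. Every name, plate and database here is invented purely for illustration.

```python
# Hypothetical sketch: a roadside camera reads a registration, several
# databases are polled, and the crossing is logged as a border event.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BorderEvent:
    plate: str
    location: str
    timestamp: datetime
    tags: list = field(default_factory=list)

# Stand-ins for the registration, customs and watchlist databases.
DATABASES = {
    "vehicle_registry": {"181-D-12345": "registered_ie"},
    "customs_flags":    {"XYZ-789": "undeclared_goods"},
    "watchlist":        {},
}

def log_crossing(plate: str, location: str) -> BorderEvent:
    """Poll each database for the plate and collect any tags."""
    event = BorderEvent(plate, location, datetime.now(timezone.utc))
    for name, db in DATABASES.items():
        if plate in db:
            event.tags.append((name, db[plate]))
    return event

event = log_crossing("181-D-12345", "Newry A1/M1")
print(event.tags)  # [('vehicle_registry', 'registered_ie')]
```

The point of the sketch is how little infrastructure this takes: once the cameras exist, ‘the border’ is just a lookup that can run anywhere.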

It is not hard to see how technology becomes the answer to a problem no one has properly thought about.

This might get presented as a technological panacea for a sticky border problem. It is a great deal more than that. The technology will operate as it is programmed to, and the embedded principles are going to matter a great deal. Technocratic assurances about the trade-offs of deploying smart devices to decentralise the UK/Ireland border across two entire islands will not be enough.

Compared with the cost of maintaining a single physical border, the cost of maintaining a ubiquitous incorporeal border is paradoxically lower. Making everywhere the border is easier than a single point. It also provides for expansionary movements toward greater and greater monitoring as security arguments demand.

Land and Air

It seems beyond doubt that the Brexit smart border is going to be dependent on the internet of things and A.I. There is no way of gathering the kind of data a border now runs upon without them.

The primary Brexit context will be land crossings. The low volume of direct flights between the Republic and Northern Ireland means that airport border control will fall into the larger negotiation. Goods and people, however, traverse the border countless times – a trip from Letterkenny to Dublin can involve up to seven crossings, AFAIK.

Monitoring the crossings will provide ballast for the push for the forthcoming registration plate database. It will also undergird most efforts to expand the database to cover cars temporarily in the state (if it doesn’t already) from outside of Ireland as a state/security/EU issue.

It will wedge open a debate for linking the photographic data held on your driver’s licence or passport to the border cameras as the price of a porous border crossing.

Can They Build It?

Here is the rub. There is already ample evidence that rolling out the e-gates and smart borders has been done in a way that focuses on technology and cost. At the base of the product design pyramid is what technology can do.

The motivation remains capturing as much information as possible on the flow of people – irrespective of whether, as a citizen, you are entitled to a minimally invasive check and an assumption of innocence – and aggregating this material in a database.

The outcome is a discipline system, a space governed by lanes and gates with automatic permission and a permissive openness to surveillance.

In most cases, monitoring people’s movements through digital data — or ‘dataveillance’ — is about keeping gates open rather than closing them. Bona fide travellers should have a seamless experience, free of queueing and distrust — but that is the case only if they preregister to share their personal data and pay for the privilege.

The secondary motivation is depersonalisation. Replacing people with technology makes a system significantly more likely to be seen as unimpeachable even in the event it turns up errors. The technology still carries an aura of objectivity. We should know now that it is no better than the humans who design, build and program it.

Eticas were ‘astonished’ by the disregard for ethics and privacy in the rollout of e-gates within the E.U.

We have been startled by the lack of serious assessment, evaluation, risk analysis or attempt to foresee the potential impacts of such changes. Technologies and finance are the EU’s main concerns. The human rights, civil liberties and societal implications of securing borders through data processing, mining and matching are receiving little consideration.

Any Brexit border ‘smart’ solution will have to be built with ethics and privacy as core pillars. It won’t be possible to retro-fit practices and programmes onto a border apparatus. If the border is built on the assumption that innocence has to be proven rather than assumed, then everything the technology offers to do will be implemented.

When it looks like the Brexit crew can barely put their pants on in the morning, it seems highly unlikely we will end up with a well thought out border programme unless it is pushed on both sides. Instead it could well become a Frankenstein’s monster of available surveillance tech only limited by the imaginations of security apparatus in both states.

Before systems are constructed, an account of respect for the rights, dignity and privacy of travellers will have to be integrated into the design process. Privacy by design will not be sufficient; if the border is going to be this pervasive, everywhere and nowhere, ethics by design will be essential.

In Discipline and Punish, Foucault discusses the prison as essentially a place of discipline and training rather than punishment. Modern sensibility was toward reform of the person. Brexit is the convenient fissure through which the omniscient border can emerge into our world. It could easily become a disciplinary process, embodying enormous exercises of state power, rather than the bureaucratic checking procedure of modern states.

The challenge will be creating a fair system that doesn’t discriminate or blithely take advantage of technology at the expense of assumptions of privacy and innocence.

Identity and Ethics for IoT and AI

Identity and Identification are two different problems. Photo from Pexels.

The Internet of Things needs some pillars if it is going to promote the surveillance model of the internet. One of those pillars is a third-party market for data – so it can sell what it gathers to the highest bidder and extract value from ‘everything-as-a-service’. I looked at this in Monday’s newsletter.

A second pillar forms around the idea of ‘identity’. Adbrain‘s CMO describes the problem for data harvesters thus:

To make the most of the Internet of Things opportunity, it is vital that marketers can understand the identity of this end user.

Adbrain are a company that provide ads based on tracking you around the internet and across devices. The perspective is fairly typical of marketers and ad-tech folks sizing up the Internet of Things. The identity problem they discuss is how well their software can know you are you when you engage with a connected device.

Identification not Identity

It seems more correct to suggest they have two problems. One of identification and one of identity.

Identification is their stock in trade. Online trackers build profiles of the you that browses around (in this case across devices as well as sites) to make sure you cannot escape ads for white goods or weird tricks to be a millionaire.

The prize of identification is that you are a coherent segment for the duration of your presence online (not twenty different persons across twenty different websites). That segment can be sold to marketers consistently across sites and devices, until you cave and install an ad blocker.

Your identity remains fixed. You are you. Offline you persist as the self same person unto death. Online, permanence of self is relatively new and imperfectly applied. As we jump around different sites we variously use our email, twitter/facebook/google or some other identifier to signup or login. Depending on what we use, different sites (and their trackers) have different and partial views of who we are.

To the data they do have, they apply algorithms to infer other characteristics and guess which market segment we might form part of.

Offline, our identity is fixed – I am Cian – and identification is a transactional matter. I may offer my identity to a retailer or insurance company. I have to offer it to police etc.

Online, the holy grail is attaching a fixed and permanent identity to the partially sketched profiles of ad trackers and networks.
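The merging involved can be sketched crudely. This is not any ad-tech vendor’s actual algorithm, just a minimal illustration of how partial profiles collapse into one the moment any identifier links them:

```python
# Toy sketch: partial profiles, each keyed by whatever identifiers a
# site captured, are stitched together once any identifier overlaps.
def merge_profiles(profiles):
    """Union-merge profiles that share at least one identifier."""
    merged = []
    for profile in profiles:
        ids = set(profile["identifiers"])
        match = next((m for m in merged if m["identifiers"] & ids), None)
        if match:
            match["identifiers"] |= ids
            match["traits"].update(profile["traits"])
        else:
            merged.append({"identifiers": ids, "traits": dict(profile["traits"])})
    return merged

partial = [
    {"identifiers": ["cookie_a"], "traits": {"segment": "white_goods"}},
    {"identifiers": ["cookie_a", "email_1"], "traits": {"device": "phone"}},
    {"identifiers": ["cookie_b"], "traits": {"segment": "travel"}},
]
print(len(merge_profiles(partial)))  # 2
```

Real systems handle transitive and probabilistic matches, but even this naive version shows why a single anchored identifier – an email, a face – is so valuable: it fuses otherwise separate profiles into one person.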

A Plague in Real Life

The Internet of Things problem outlined above is also the solution. For ad companies the IoT bridges the problem of identification and identity.

Elon Musk’s brain-computer melding startup might be the long term answer for these companies but in the short term the Internet of Things will suffice.

In real life, a device can pick up your unchanging identity – the physical persona – and record what you do. To take the example from Adbrain: a TV can pick up your physical identity using facial recognition or a personalised account etc. and begin to add the data it is collecting to the online profile that exists for you.

In this ‘holy grail’ model, physical identity is verified and then data is added to your online ‘record’ so it can follow you from screen to object and back selling you ads, services whatever.

Resolving the tension of identity and identification online only was significantly more difficult – ask Trustev for example. Using smart objects as an augmented identifier wraps the online tracking in the persona of a real extant individual.

The Ethics of Respect

Respecting the dignity of a person becomes a vital design step. Otherwise the data ends up contributing to the 3rd party markets whose value is significantly higher with anchored data like this versus inferred data from online only.

You are entitled to ask if your TV reports data that contributes to a burgeoning, anchored portfolio of preferences which are used to sell ads to you online and offline and adjust behaviours and patterns in your life.

It is quite likely that this will be legal to do. The information-gathering and behaviour-adjustment nexus does, however, challenge our ideas of dignity and freedom. It is unlikely that regulation is going to reach so deeply into the Internet of Things until it is fully linked into A.I. and autonomous systems for identifying, collecting, deciding and acting.

At that point it is probably a little late.

This morning Adrian Colyer pointed us to the IEEE paper on Ethical Design of AI systems and I think this is the place to begin:

A key concern over autonomous systems is that their operation must be transparent to a wide range of stakeholders for different reasons (noting that the level of transparency will necessarily be different for each stakeholder). Stated simply, a transparent AI/AS is one in which it is possible to discover how and why the system made a particular decision, or in the case of a robot, acted the way it did.
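As a thought experiment, that transparency requirement could be made concrete as a decision log: every automated decision records its inputs, the rule that fired and a human-readable reason. This is my own minimal sketch, not anything taken from the IEEE report:

```python
# Minimal sketch of 'discoverable' automated decisions: each call
# records enough context to answer "how and why did the system decide?"
from datetime import datetime, timezone

DECISION_LOG = []

def decide_gate(traveller: dict) -> str:
    """Toy e-gate rule: pass preregistered travellers, refer the rest."""
    if traveller.get("preregistered"):
        decision, reason = "open_gate", "traveller preregistered"
    else:
        decision, reason = "refer_to_officer", "no preregistration record"
    DECISION_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": dict(traveller),
        "decision": decision,
        "reason": reason,
    })
    return decision

print(decide_gate({"name": "A. Traveller", "preregistered": True}))  # open_gate
```

Even something this simple gives a stakeholder somewhere to start when asking why a gate stayed shut; the hard part is making the ‘reason’ honest when the rule is a statistical model rather than an if-statement.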

As Adrian notes himself:

Companies should implement ‘ethically aligned design’ programs (from which the entire report derives its title). Professional codes of conduct can support this (there’s a great example from the British Computer Society in this section of the report).

Something I noted last week, asking if we need a Hippocratic oath for the Internet of Things and A.I. The problem for AdTech people is that this lays bare, up front, the conflict between ethical design and solving their identity-identification problem.

We didn’t get that upfront conflict in the way the internet developed; if we can force it with the IoT and A.I. ecosystem, it will be a much safer system for all.

Designing Ethics into The Internet of Things

Finishing up the newsletter on Monday, I mentioned the Internet of Things design manifesto. It is in version 1.0 but it is a good first step to develop a process for integrating good practice into smart objects. 

It led me to think about how ethics has to be integrated into product development and management in the IoT era. 

Process Matters

Any designer will tell you that process is vital. You can come up with a good product without a good process but you probably cannot repeat the magic or scale your success without a good process.

When I look around at some smart devices, it looks like the process was a conventional manufacturing design process with the Internet of Things tacked on.

Sure, there are some startups who have baked ‘smarts’ and ‘intelligence’ into their product design, but the real drivers of the Internet of Things are the established companies with legacy product portfolios and legacy markets.

Sticking a product online is technically complex. It is also ethically complex. The ethics can get lost in the thicket of technical complexities.

Advocating for Customers

Development, especially in big firms, is a technical job. The product requirements are laid out and the teams go ahead and execute the product as best they can. The cycle of product ‘ideation’, verification of a business case and initial development works across departments like business development, marketing and product.

As companies drive into the Internet of Things, the process has to open up to include an ‘ethics owner’. Customer advocacy conventionally means presenting what customers want – from research, interviews etc. – to the development teams and incorporating it into designs.

When these products are supposed to be connected to the internet – de facto gathering data possibly to use toward behaviour modification – it is not enough to advocate for ‘user needs’. Without an explicit responsibility in the design system to advocate for the rights or ethics, the logic of surveillance and people farming takes over.

Good Versus Legal

This is ultimately the problem for smart devices – in the short and long term. What is legal will vary. What is legal may not be sufficient. What is good will succeed and improve lives with less costly trades of freedom and privacy.

Yet, we have law for the very reason that behaving ethically is hard to do and it is not always clear what the right moral action is.

We know that responsibility is indivisible: when everyone is responsible, no one is. When the potential harm from smart objects is so distinctive and clear, there is no excuse for not going beyond legality and integrating ethics into the design process from day one.

For example: The logic of smart connected cars is tracking data for maintenance purposes. What about packaging that data for insurers who can then argue for discriminatory pricing based on your driving?

Or what about the use of biometric data inevitably gathered by VR companies?

Ensuring legal compliance with data is, currently, advantageous until (if?) the law catches up with the Internet of Things (even then, we are not sure that will happen).

Will a label be sufficient to explain how your VR headset is going to be taking over some of your brain’s functions?

Doing the right thing becomes more important in this context. The threshold of staying legal could remain below what would be ‘right’ for a significant time. In that time tens of billions of smart devices will be made and put online.

Professional Ethics

Professional ethics are associated with law and medicine for the precise reason that practitioners work with individuals and have the opportunity to inflict enormous harm on their clients. Professional ethics are a collective effort to enforce standards of behaviour that promote trust, expect higher accountability than law alone and place the customer (person) at the heart of decision-making.

Considering how sensitive the information is and the harm a shit product can cause, professional ethics are unavoidable. I think this is the level that will have to be targeted in order to provide a secure, human-centred range of connected products. Reading the IoT Manifesto as a call to action for professional ethics makes it a very robust starting point.

Does harvesting biometric data require a Hippocratic oath?

Or as Voices of VR asks:

  • Should companies be getting explicit consent for the type of biometric data that they capture, store, and tie back to our personal identities?
  • If companies are able to diagnose medical conditions from these new biometric indicators, then what is their ethical responsibility for reporting this to users?

It might not be as strict as medical ethics but Gry Hasselbalch argues Data Ethics are the New Competitive Advantage.

My initial worries:

  • Is a market led solution going to address the problem? Or act like ‘green-washing’ did with climate change?
  • Can we construct a robust program of ethics that we can advocate for inclusion in product design processes?
  • The reason we have organised religion is because ethics are contested. Can we even agree on ethics for the Internet of Things era?

I would love to hear your feedback and am setting off to work on the above three questions in the month ahead.


You can get a weekly digest of articles and notes on a human-centred Internet of Things by subscribing to my newsletter. Read older editions here.

A Zebra Model for the Internet of Things

The Unicorn Business Model is utterly seductive – it might prove too seductive for the Internet of Things and lead us into building our own panopticon.

The Panopticon by Jay Crum: Celeste

The Building Knows What?

The Internet of Things became real to me one day in March 2014. I was at a seminar on lighting and watching a speaker from Philips. The speaker was taking us through the work they had done on The Edge, Deloitte’s home in The Netherlands.

The moment it hit home was when the sales rep described how intelligent building systems were giving Deloitte the power to track and monitor their staff at work (where they sit, how long they are there, how much power they consume) and to set up a competition to encourage lower energy use on a person-by-person basis.

I was shown a video similar to the one below and an accompanying presentation. It frightened the hell out of me.

What frightened me most at the outset (many things have caused me further worry since) was that, as a former philosophy student, all I could see was an incredible thicket of ethical problems involved in creating a space of this type.

As a person with a business in the same industry, what would my approach be to a client that asked for this kind of performance from its building? My honest answer, then and now, was that I would struggle to get on board. I am not satisfied that enough heavy lifting has been done to flesh out the dilemmas presented by person-facing IoT technologies.

I do not see how legacy companies in lighting, building materials or maintenance advance the ethics sufficiently to put minds at ease. These products are behaviour-monitoring and behaviour-changing.

A Model Problem

My scepticism stems from watching how an unregulated internet evolved toward oligopoly, tracking and harvesting in order to chase ever greater returns. The great folks at Zebras Unite have put together a graphic on the distortions of the current online business model and a better alternative.

From ZebrasUnite

What I see in The Edge is a straight road to creating a new set of oligarchic companies mining our physical rather than digital world.

There are commercial reasons to go with the flow. The reason lighting companies are rushing to the IoT is that they are now congenitally incapable of turning a profit. Digitisation (LED technology) and low-cost production in Asia have destroyed a business model founded on cartels and built-in obsolescence.

Google have a proven cash model; it revolves around data. Companies that make physical products are now in a position to grab a seat on that gravy train thanks to the IoT.

When that compelling unicorn business model is set beside incredibly powerful technology, you can see where my worry comes from.

The Zebra model is important – it gives an alternative set of founding principles which, I believe, lead to a different paradigm of IoT roll out. If I am working for a mutual benefit, I am less likely to harvest people incorrigibly.

Like all philosophy students, we studied the Panopticon. The dystopian invention of Bentham and later a Foucault metaphor, it was like Sauron’s all-seeing eye. It was hard on the day watching the promo video for The Edge not to feel that we had slipped closer to instituting one in real life.

The Panopticon was an exercise in human narcissism: we had the power of causation over others through our use of space. By building an environment of observation, we instituted a culture of docility. It was a postmodern fever dream, a primal yell against modern projects to count, order, organise and distribute humans across space. But it was only an idea.

That was what ran through my head then. It hasn’t really left. Enormously powerful technology is set to be woven into our cities, buildings, hospitals, streets, cars, phones and plenty of other ‘stuff’.

The importance of making sure they aren’t the building blocks for a system of real world surveillance capitalism is pressing.

Behaviour Changing Data

The Edge isn’t the end of the world. It likely remains one-of-a-kind. The technology has already progressed so far since then that our buildings, like our reality, can be composed of sensors and monitors. They process a previously inert world into data.

That data can then be used to reconstitute our reality. It can project on us, or the building’s occupants, desirable and undesirable modes of action. It can suggest, seduce, coerce into doing things we might not otherwise have done.

I might not have wanted to stay at work beyond 4.00 – I have my work done and I need to see my child’s play at school. But the building will know I have left; it can detect occupancy at the desk I was assigned for today. So what do I do?

The very fact that these powers are no longer the realm of Orwell but of real life – and real business – make clear the pressing need for a framework of ethics. These objects are going to affect and change how we act. By definition that changes who we are.

In this 2013 post at Polisblog, Dieter Zinnbauer of Transparency International discussed it in the context of physical, governed space. For him, the challenge is to empower citizens and remove the pervasive watchfulness of government.

Tactics include establishing openness and accountability policies, mobilizing civic action and raising awareness.

His reflections are a helpful and important jumping off point in thinking about IoT ethics:

Returning to the spatial aspect of the Panopticon, important questions arise: Can we shape the built environment in a systematic way to empower citizens vis-à-vis their governments? Can we literally design for transparency and accountability?

His questions carry weight because it is precisely that challenge opening up ahead of the IoT. If we don’t take it up, our world will enhance its power to shape and direct action automatically.

I’ll be taking up the challenge of fleshing out ethics and responsibility in posts to follow but I can’t claim to be first to hit upon the idea. It just strikes me from the inside how desperately the profit motive will override concerns for privacy and dignity.

Sign up for the weekly email – a run down of what caught my eye in the IoT this week and why it worries me – and keep an eye on the blog.

The Bullish Case for Snap

The price of shares in SNAP has dropped substantially since their IPO. Analysts are concerned about the company’s ability to generate money off the back of the excitement its products generate.

I think there is a long term bullish case for Snap. It is their ability to extract data from the real-world engagements of users that sets them apart from other social networks.

If they grasp the always-on nature of the Internet of Things and retain their stickiness, then they are powerful enough to redefine memory and invert successful ad models online.

Snap is a camera company

I think the optimist’s case starts with the first page of the S-1.

Snap is a camera company.

This was the counterintuitive claim that sent a lot of technology writers potty at Silicon Valley’s self-indulgence.

There was widespread scepticism of Snap’s strategy in calling itself a camera (interpreted as hardware) company when its product was, clearly, all the millennials addicted to its platform.

From the widely respected stock market observers at The Motley Fool there is this neat summary:

Ultimately, Snap’s hardware strategy and identity change are misguided. It’s true that the $130 Spectacles have generated quite a bit of buzz and hype around the company, and the reviews are generally quite positive, but there are countless hardware products that can boast the same but then subsequently failed to be commercial successes in the long-term or justify lofty valuations of their companies. Focusing on hardware threatens to distract from the very serious and very immediate need to grow ad revenue faster than cloud infrastructure costs, which is what prospective investors should really be worried about right about now.

The camera could of course refer to Spectacles and there is a strong chance that it does in the S-1. My feeling is that the bull case rests on Snap sticking to the idea of being a camera company and updating what that means in the 21st century.

Spectacles Prove the Concept and the Concept isn’t Glasses

Spectacles are groundbreaking to me for one dramatic reason. They are connected, they are ‘smart’ and they are a defining moment in products that bridge the digital and physical worlds.

This is going to matter to advertisers. It is a sea-change in how they are able to segment, message and reach potential audiences.

The Facebook and Google model, at a philosophical level, relies on induction from enormous user bases to create effective advertising models. They have access to partial information and primarily exist in their digital dimension. Induction here is a metaphor for the process of getting an ad to a person. It requires theory, testing and confirmation before becoming repeatable. It has to move from the particular to the general to be successful. That is prone to error.

Snap can work deductively. Their products place them on the shoulder of their users. Overtly at first through Spectacles, Snap is welcome to accompany a user on their day. It can record and build a profile of people, preferences, places and any other vital data from just being on. Ads don’t need to go through testing for delivery, nor do we have to use a regression program to generalise conclusions about who is viewing an ad.

The categories are known and close to rock-solid. It is the most efficient vector for identifying and delivering ads possible – in theory.

If I have pictures of dinner at Chapter One on my Facebook feed, Facebook infers I am a fan of fine dining. It needs to test that assumption and fine-tune it – it may not be true and is therefore a sunk cost. This can be done efficiently at scale but is categorically different from Snap.

Snap can know directly, with a clever application of A.I. interrogation of my images, whether I like fine dining or merely want people to think I do. Or perhaps I aspire to it but cannot afford it. This is deductive since the data yielded by surveillance and the target – me – are inseparable.
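The contrast can be caricatured in a few lines of Python – a purely illustrative toy under my own made-up function names and signals, not anyone’s real ad stack:

```python
# Inductive (the Facebook/Google model): guess from a proxy signal, then test.
def inductive_target(profile):
    hypothesis = ("fine-dining fan"
                  if "restaurant photo posted" in profile["signals"]
                  else None)
    # The guess may be wrong; it must be A/B tested before it is repeatable.
    needs_testing = hypothesis is not None
    return hypothesis, needs_testing

# Deductive (the Snap thesis): the observation and the person are inseparable.
def deductive_target(observations):
    # Directly observed behaviour, e.g. yielded by an always-on camera.
    if observations.count("ate at fine restaurant") >= 3:
        return "fine-dining fan"
    return None

print(inductive_target({"signals": ["restaurant photo posted"]}))
# ('fine-dining fan', True) – still a hypothesis awaiting confirmation
print(deductive_target(["ate at fine restaurant"] * 3))
# 'fine-dining fan' – known directly, no test needed
```

The threshold of three observations is arbitrary; the point is that the deductive path skips the test-and-confirm loop entirely.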

Snap’s vision as a camera company is not Kodak. It is not going to provide us with endless wedding photos. Spectacles are not the revenue stream. They prove the concept and will become infinitely dispersed among Snap users in multiple forms. Products like Spectacles are vital for Snap to entrench its presence in users’ daily lives. For Snap to become their digital chaperone.

Snap’s vision is to be our memory

I have written on this previously:

Compiling images together into a kind of virtual flip-book supplants memory and cements the place of nostalgia in our culture and shapes the kind of products that are going to be successful.

It is the feeling that you are not really paying attention to the thing-in-itself but the thing-mediated-by-screen.

You have felt that. At a concert recently perhaps, or a party or when you should be awestruck by the Geysir and instead stand around with your phone hoisted in front of you.

The demolition of duration is an essential part of the business model, especially as A.I. becomes a solution in search of more and more problems.

Of course the incumbents try to move outside of their digital dimension. Google Glass, Google Home and Facebook’s phone represent products that might bridge the gap between the digital and the real. They are products that will funnel data which is inseparable from us toward the online databases – that makes the online identity better and yields more effective advertising.

Snap isn’t GoPro

What makes SNAP different from GoPro or FLIP? It is the relationship of connected, permanent access to your memory that it is forging with its customers. The products are always on and are able to:

  1. create an endless series of time-slices that can be crunched by A.I. and recast as memory later
  2. facilitate the playfulness that is a defining feature of the human mind. Word-play and image-play are as old as words and paintings.

The first point needs time since Snap has to reformulate an understanding of memory and build a reliance of customers upon it as an interpreter of memory.

The second is what keeps us addicted as human playfulness is evolutionary, it keeps us alive and so promotes biological rewards to keep us coming back.

What is the product?

Let me illustrate with a store I saw earlier this week. Offline cookies miss the point – they are powerful but they are still inference-based. Snap’s positioning turns it into the holder of deduced data from our real world.

It leaves some options for product and revenue:

Hyper-effective advertising: If you cannot build effective campaigns from this model you are probably in the wrong industry. It should be possible given the qualitative difference in data quality. TV was one-to-all general ads. Facebook and Google appear as one-to-one, but only after a process of iteration and induction. Snap inverts that, giving you all you need to know to pitch directly and hit first time.

Advertising is narcissistic and the more you know your target the better. The incumbents’ networks give you inferential possibilities, but you have to test those. Snap not only has the network, it has direct access to the users’ day (not just what they post). That granularity is the power of the social network leveraged by the Internet of Things.

Information: Nielsen revolutionised TV advertising by inventing a box that tracked accurately what people watched. That information made them very rich and was the key that unlocked TV advertising for decades to follow. It drove up the value of advertising everywhere by quantifying the phenomenon of private TV viewing. The reciprocal nature of the market meant Nielsen became indispensable.

Snap can potentially be both the Nielsen box and the medium. They have the data on your daily life at and away from the screen. With clever background technology they can know with certainty, not probability.

The bullish case involves them seeing that market and creating it. I think Snap know this intuitively (again from their S-1):

In the way that the flashing cursor became the starting point for most products on desktop computers, we believe that the camera screen will be the starting point for most products on smartphones. This is because images created by smartphone cameras contain more context and richer information than other forms of input like text entered on a keyboard

The 21st century camera company isn’t Kodak. Om alluded to this in his New Yorker piece. They have the cash to create exactly what a 21st century camera company looks like. I think it revolves around reframing memory.

It is also possible they are just clever lads in hoodies and going to turn into Twitter.

This is a significant philosophical and ethical change to undertake. I will write further on those aspects in the next few weeks.

If you liked this post, please share it or signup for the weekly newsletter (sent on Mondays).

The Real World Effects of the IoT’s Digital Chaperones

The mesh of connected objects that make up the Internet of Things are digital chaperones. They are present in our physical lives but they increasingly guide us around and carry our data back to servers to pair it with our digital selves.

When Steve Jobs gave us the ‘one more thing…‘ in 2007 that became the iPhone, he did say that it was a product that changes everything. As a man who had a reality named after him he probably would not be surprised at the reality-bending power of the iPhone.

Smart devices are part of the Internet of Things network. They are always online and are able to communicate object-to-object in simple or complex ways. I have, for example, looked at smart devices from the context of lighting. Projects which incorporate ‘smartness’ into lighting fixtures can keep an eye on energy use or track where you are in the office building at any given time.

By 2025 there will be 75 billion devices online.

In 2014, PEW reviewed 20 years of the internet and highlighted this projection:

Experts say the Internet will become ‘like electricity’ over the next decade–less visible, yet more deeply embedded in people’s lives, with many good and potentially bad results

When you look at the way smart devices are integrating themselves into our reality and how their communication networks are ubiquitous, it looks like that prediction is being fulfilled. We are surrounded by Amazon Echos, Fitbits and intelligently sensored city waste systems. The internet and the digital flow through these objects like electric current.

Digital Chaperones

I think of these devices as digital chaperones. They have two dominant functions – to guide or enable and to carry.

In their guiding or enabling roles they help us to get something done. It might be quickly re-ordering something from Amazon, ensuring the house is warm when we arrive home or helping us hit our step count for the day. It might be running our water consumption more efficiently or helping the city run smart grids. They intervene to speed up action and direct it.

They also carry data. These connected devices, digital chaperones, are with us for every physical move we make. Unlike our phones – which are mostly asleep – the chaperone is like a digital demi-step noting each physical move. At its simplest it is Fitbit recording every single actual step, digitising it and posting it to our profile on the server. Alexa digitises things as varied as our arrival home, our dinner, our music preferences.

They are active and passive. They actively turn on lights, heating, electricity etc. They passively monitor and track, paying attention to parts of our lives that even the web could not reach. Once purely physical acts are now augmented.
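The two roles can be sketched in a few lines of Python – an illustrative model of my own, with an invented class name and a stand-in `sync` method rather than any vendor’s real API:

```python
from datetime import datetime, timezone

class DigitalChaperone:
    """Toy model of a connected device's two roles:
    actively doing things for us, and passively logging what we do."""

    def __init__(self, name):
        self.name = name
        self.event_log = []  # every observation, queued for the server

    def act(self, command):
        """Active role: guide or enable (lights, heating, re-ordering)."""
        self._record("action", command)
        return f"{self.name}: {command} done"

    def observe(self, event):
        """Passive role: monitor and track a purely physical act."""
        self._record("observation", event)

    def _record(self, kind, detail):
        self.event_log.append({
            "device": self.name,
            "kind": kind,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def sync(self):
        """Carry the data back to pair it with our digital self."""
        batch, self.event_log = self.event_log, []
        return batch  # in reality: POST to the vendor's cloud

tracker = DigitalChaperone("fitness-band")
tracker.observe("step")            # a single physical step, digitised
tracker.act("turn on hall light")
print(len(tracker.sync()))         # 2 – both roles feed the same log
```

Note that the active and the passive role write to the same log: helping and harvesting are one code path.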

A case in point from my timeline last week:


Aral is right, of course: it is another expression of the logic of surveillance capitalism. It is also something new.

Take the following from researcher Jonathan Albright, looking at the ‘fake news’ ecosystem after Donald Trump’s election win.

“I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages.”

This should be familiar; it is the digital surveillance we now understand as the tradeoff for free services like Facebook and Gmail on the internet. That tracking is undeniably intrusive. It is also something we imagine taking place ‘over there’ or ‘somewhere else’. Even though it is us, even though we are traipsing around all these web pages, it is not ‘here’.

The idea of some fourth dimension in which this activity is taking place springs from the nowhere character of the internet. It is beautifully drawn out in Laurence Scott’s The Four Dimensional Human. Scott argues the fourth dimension is necessarily digital. It extends elsewhere and we are our own mirror, avatar or reflection there. Different sites try hard to fix our identity in place, to bind us to one piece of identifying info around which the rest of our profile can be populated. Some are more successful than others. But there are limits to the extension of that digital identity into our physical life.

Until now.

There is an argument that the smart devices provide a one-dimensional view instead of four. The digital chaperones are with us in our physical world, they are the websites, tracking and digital identities breaking through into our physical world. It is akin to collapsing everything back to a single dimension – our Amazon account or Facebook email – and all information co-ordinated from there.

From the same Scout piece quoted above:

Given its rapid development, the technology community needs to anticipate how AI propaganda will soon be used for emotional manipulation in mobile messaging, virtual reality, and augmented reality.

The effects will be far reaching.

Real World Effects: Socialisation

When reviewing the development of these devices – reading about them for work or compiling reports for technology roadmaps – I have always felt this is sociologically important in a way that is different from, but related to, the internet.

The Washington Post helped me confirm this hunch with a great look at the kind of raw human approach these objects inspire.

For children, the potential for transformative interactions are just as dramatic — at home and in classrooms. But psychologists, technologists and linguists are only beginning to ponder the possible perils of surrounding kids with artificial intelligence, particularly as they traverse important stages of social and language development.

An experiment like this doesn’t get more pure than a young child. Looking at how they are socialised (and respond to these stimuli) frames the new category of objects present in their world compared to ours.

I think that calling it exposure to A.I. is too limiting – the smart objects are going to go far beyond that and have a dramatic impact on the fabric of the reality these children experience, contrasted with the one we had.

They are growing up in a reality with these digital chaperones woven into the fabric. The simple ways that has changed their socialisation – dropping manners, for example – suggest a lot of complex changes are happening for older humans.

The problem, Druin said, is that this emotional connection sets up expectations for children that devices can’t or weren’t designed to meet, causing confusion, frustration and even changes in the way kids talk or interact with adults.

Among the issues, the emotional expectation is a problem. Projecting human qualities on objects is another. Modifying behaviour – dropping manners for example – to better communicate with your object is a third.

In his book Moralizing Technology, Dutch philosopher Peter-Paul Verbeek describes a framework for understanding how humans and technology interact. He calls it a ‘postphenomenology’. It is a

mutual constitution of subject and object, or of humans and their world. Humans and their world are always interrelated.

Objects have a role in constituting the world around us. The example of Alexa and the distinctive interaction with children highlights why creating digital chaperones carries enormous weight.

The moral dimension for design of smart objects sits at a higher threshold than for, say, a hammer. The Echo has a causative impact on the world. It also constitutes a permanent observer of the user. It deduces rather than induces behaviour and records it. From a data perspective it is incredibly powerful compared to the induction of digital tracking and advertising.

As I wrote previously, it is important to create policy direction around the Do Not Track concerns of the new breed of digital chaperones and connected devices.

The habits and practices of the web, and of our digital selves, are coming back to our real world. This is happening faster than we know and it will mean that the same issues worrying people about our digital space will urgently become problems for our physical space.

The vector for delivery of that problem is going to be the IoT and it is incumbent on us to try to shape it in our service not the other way around.

It is also going to be vital to engage with the sociological and philosophical agency these objects exert. Not through their potential to do things (i.e. hammer a nail) but their causal impact on our behaviour.

Where is “Do Not Track” for the Internet of Things?

The Internet of Things is re-creating the same industry of surveillance and tracking as the original web. Do we have a Do Not Track for the real world?

Do Not Track is burgeoning online – what about the real world? – header image from Better, Aral Balkan’s Do Not Track software.

Don’t Track Me Bro

We are more aware than ever that the business models of web businesses involve vast surveillance. The industry that Shoshana Zuboff has called surveillance capitalism makes money by marketing your attention. What Aral Balkan has taken to calling ‘people farming’.

We know that Facebook and Google serve us ads based on our interests. People’s response to this is mixed. If the product being offered is good enough, as many as half of us are willing to play ball on tracking and data collection. Despite that, antipathy at being chased around the internet by offers of ‘puppy portraits’ is broadly felt and it is rising.

Do Not Track is one response to the unhealthy dependence of web businesses on tracking and targeting in the digital space. It is part technology and part policy. The technology sends an instruction to websites not to track and collect data; websites then need to choose to respect it (policy).

It is admirable but also highlights how important it is that culture and ethics point to the importance of respecting a DNT instruction.
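Mechanically, Do Not Track is tiny: a single `DNT: 1` HTTP header that a site may honour or ignore. A minimal sketch of both halves in Python – the `should_track` policy check is my own illustrative function, not any standard API:

```python
import urllib.request

# The technology half: the client attaches the header to every request.
def dnt_request(url):
    req = urllib.request.Request(url)
    req.add_header("DNT", "1")  # "1" means: please do not track me
    return req

# The policy half: the site has to choose to respect the instruction.
def should_track(headers):
    """Return False when the visitor has asked not to be tracked."""
    return headers.get("DNT") != "1"

req = dnt_request("https://example.com")
print(req.get_header("Dnt"))       # urllib capitalises stored header names
print(should_track({"DNT": "1"}))  # False – but only if the site cares
```

The entire weight of the scheme rests on that second function existing, and being called, on someone else’s server.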

There are similarities here to the way that power utilities had to wake up to the value of encouraging energy efficiency to better protect their businesses for the future. It made no short term commercial sense to encourage people to lower the amount of electricity they used but in the longer term it fostered healthier habits and cleaner sources of fuel became viable.

Recasting culture is never easy. As the Internet of Things revs up to swamp us in devices by 2020 or 2025, there is a window now to set out a culture that respects rights, dignity and privacy.

Where is the Do Not Track for the Internet of Things?

My preface above is probably already familiar to you. I am concerned by how the culture of the past twenty years of the internet will bear on the forthcoming culture of connected ‘smart’ devices.

To me, the problems seem to break down like this:

  • What happens when IoT devices facilitate pervasive tracking in our real world?
  • What happens when online tracking follows you ‘offline’ and into your physical world?
  • What does Do Not Track look like when the behaviours of the web break through from the digital world to the physical?
  • What options are there to avoid tracking when you no longer need to be paying attention to be monitored?
  • Have the last twenty years of the internet softened us up to accept surveillance or people farming in real life as a tradeoff for a Fitbit, Alexa or Hue?

The Internet of Things, and the people who are making products to propel into it, cannot avoid dealing with some of the implications of the do not track phenomenon.

It is not impossible that you will have a home with Alexa woven into the architecture. That Alexa will communicate with the internet via Li-Fi (imperceptibly delivered connectivity through your lighting) and be able to co-ordinate the objects in your home – lights, thermostat, fridge, set-top-box, internet router – and package data for analysis.

It isn’t just at home. Employers everywhere will be able to cheaply implement a variety of connected strategies to keep an eye on employees. Wearables and office occupancy monitoring already exist. The experience they create in the workplace is as bad as you would expect.

“OccupEye” sensors were installed “to monitor whether journalists are at their desks”. The implication being that the company wanted to identify any possible slackers.

Pervasive tracking happens in some businesses. That doesn’t make it acceptable, but it makes it easier to resist when it is site-specific. The next ten years will be a completely new departure.

If I want to advertise on Twitter, I can now target behaviours. Think about what that might mean when Twitter can access the actual, real, physical things you do moment-to-moment. It can then sell each moment to me as the ideal point at which you want to engage with and purchase my product. I have no doubt, working close to the space, that the IoT is capable of far greater harm than online tracking.

To coin a phrase from a previous post – it can turn your activity into infinite value.

I don’t trust technology companies alone to safeguard this space. The data is too rich and the development of our internet already proves that data harvesting trumps respectful advertising/business.

Interestingly, almost identical numbers of people in Pew’s 2016 poll felt that real-world and social media surveillance were not acceptable. I don’t want to bulk copy from their report so I have posted a screengrab; click it to follow through to the whole thing.

Headline results of the PEW Internet Poll 2016

If half of the population don’t want to be tracked as they currently are, there are two choices for the development of this technology:

  • be sneaky
  • build ethical policy for design and data

I don’t think the first option is going to be beneficial. It may yield short term bursts of data and money but it will ultimately have to reform.

Building ethical policy is not going to be easy, but it is going to be important. The Internet of Things will go 10x out to 2020–2025 and if the right decisions aren’t made now, path dependency will make it much harder to rein in bad practice then. Ind.ie have a great page breaking down what ethical design looks like for the web.

I am in a phase where, after spending years looking warily at this tech for our lights and just finishing this great book by Laurence Scott, all I can think about is how the web as a separate digital dimension of ourselves is now moving back toward us. It is attempting to reach back through the screen and live with us, follow us, in our physical world.

The habits and practices of the web, and of our digital selves, are coming back to our real world. This is happening faster than we know and it will mean that the same issues worrying people about our digital space will urgently become problems for our physical space.

The vector for delivery of that problem is going to be the IoT and it is incumbent on us to try to shape it in our service not the other way around.


A.I. Image Products set to Conquer Memory

“To perceive means to immobilise… we seize, in the act of perception, something which outruns perception itself.”

– Henri Bergson, Matter & Memory
Credit: Maria Teresa Ortoleva, There Is No Perception That Is Not Full Of Memory (Bergsonian Whiteboard #6), 2014

A.I. as a Nervous System

I regularly read about the nervous system as a metaphor for the system of connected devices. IBM have opted for ‘cognitive’ but there is no difference. In the metaphor, the sensory inputs we receive from our nerves are analogous to the enabled and connected devices rapidly promulgating through our physical world. The high-level processing of those inputs done by our brain is analogous to the cloud-based background processing that pulls the IoT inputs together and colours the big picture.

That cloud increasingly runs on A.I.

One of the most powerful applications of A.I. will be reading and extracting data from images. A.I. will be able to extract and manipulate fragmentary and tangential data from images, much as humans do, and make it possible to cross-reference it. This gives any image depth: it can be cross-referenced with all of the other stored images to embed it in multiple broader meaning networks.
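The cross-referencing step is simple to sketch. Assume some upstream vision model has already extracted tags per image (the tags and filenames below are invented for illustration); an inverted index then embeds each image in its ‘meaning networks’:

```python
from collections import defaultdict

# Pretend an upstream vision model has already tagged each image.
extracted = {
    "img_001.jpg": {"restaurant", "wine", "friends"},
    "img_002.jpg": {"restaurant", "birthday", "friends"},
    "img_003.jpg": {"beach", "friends"},
}

def build_index(tagged_images):
    """Inverted index: tag -> images. The raw material for cross-referencing."""
    index = defaultdict(set)
    for image, tags in tagged_images.items():
        for tag in tags:
            index[tag].add(image)
    return index

def related(image, tagged_images):
    """Every other image sharing at least one tag – one hop in the network."""
    index = build_index(tagged_images)
    hits = set()
    for tag in tagged_images[image]:
        hits |= index[tag]
    hits.discard(image)
    return hits

print(sorted(related("img_001.jpg", extracted)))
# ['img_002.jpg', 'img_003.jpg']
```

Each hop outward from an image adds context; chained across a whole library, every photo is anchored in multiple overlapping networks of meaning.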

On The Information, Sam Lessin suggests:

The rise of AI-driven photo storage and organizing services like Google Photos heralds the return of photos as personal history, rather than as fleeting media.

I am not sure this is correct. Rather than a “return of photos as a personal history”, the dawn of AI photo storage and processing takes us into a new phase of history. The photograph as a physical medium was scarce, it was costly and the moments it captured were sporadic and special.

From Aides-Mémoire to Source of Value

The modern photograph is the constant, time-lapse, motion-capture model. It is permanent photography and that feels different.

Photos taken and processed in the way Lessin outlines are a paradigm shift in the way we have used images to catalogue our lives. In the new mode, we are consistently moving between images. They knit together a sequence of abstractions. These solidify a person from the point of view of the algorithm. They do so with whatever rapidity we allow, down to the millisecond.

Frequently taken and uploaded images are a 180 degree flip from the scarce, golden era of image taking. When I think of photos as life catalogues, I think mostly of the 80s and early 90s family photo-books that I don’t see much of anymore.

The heavy, outsize books laden with photographs held only a fraction of the images on my Galaxy phone. And unless they were the Wedding edition, they spanned years. So images that made it in there were already exalted moments. They stood out. They were acknowledged as primus inter pares of the moments that knit together into our days and weeks.

It is logical that more frequent images lower their overall value to us. We are drowning in images and they are difficult to distinguish in such volume.

If everything is image-worthy then nothing is, on the face of it, special or different. Perhaps we might get special moments developed in order to mark their distinction but even that requires the kind of commitment and discipline (not to mention $$$) that I don’t see in most of my friends and age group.

So if the image’s value to us is lower, for whom is the value increasing? Sam Lessin is on the money here:

The real power is going to be in mining and creating value out of all the data embedded in photos broadly.

Companies like Facebook, Google and, probably, Snap are investing in stitching together your past to recast it and present it back to you.

You are Still the Product

Pushing that back-end together with a consumer product like Spectacles provides incredible power.

These images are a gold mine. It might be nice for us to see the permanent recasting in front of us, each one increasingly relevant as A.I. algorithms improve. For companies it empowers them to sharpen market segmenting to an incredible degree.

Targeting and profiling for ads, like that offered by Facebook or Google, is going to look like amateur hour once the feed of image data from the various nerves gets sent to the A.I. brain for processing. Our images are so rich in content, so present with our likes, our contexts, our preferred partners and our personality types, that digitally reading them will probably prove exponentially more valuable than our tracked browsing and internet data.

While I was reading Sam’s piece, that scene from Minority Report flashed into my head. The one where they swipe through the screen to look for evidence of future crimes in people’s thoughts and actions. I wasn’t struck so much by the potential for pre-screening as by the nature of the ‘flow’ of images.

The image flows like the catalogue formats we see now in our social media and image products. An infinite reel of past and present, ready to hand, waiting to be recalled.

As a Product, You Must be ‘Improved’

This returns us to the comparison between A.I. processing and the brain. Henri Bergson’s writing about time and perception at the turn of the 20th century is a helpful way of scaffolding the idea of images as permanently ready to hand and comprehensively covering our past. It also helps us to compare the experience of A.I.-mediated images with our lived experience.

In Matter and Memory, he gives an account of two different perceptions. There is the perception that relates to ‘mathematical time’ – a series of snapshots, freezing instants for reference later – and there is the perception that relates to ‘duration’, the way we experience time.

The latter category is alive, rich in detail and ribboned throughout our life experience. The former is frozen, it is solid and unmoving. It may bear some feeling and emotion but it is mostly continuous division and recording.

Bergson made this distinction because he thought it was clear that consciousness is where freedom lay and that corresponded to our experience of duration. Philosophy had erred by ‘mixing’ space and time together. His recasting helps us to identify freedom in our ongoing conscious states. Removing things from the experience, then measuring and categorising them for storage, is spatial. It transposes our lived experience (time) into an artefact (space).

An image is an artefact of time, a slice of time transposed into space. Even though we experience time as duration – the unspecified period in which our consciousness is present and cycling through different, overlapping experiences: the infinite time in the barber’s chair or the instant spark when our hearts jump at seeing a loved one – the only way to present time is to fix it in place and slice it up.

Our camera roll represents an infinity of memory slices. They are snatches of an instant in time. They can be combined chronologically but are qualitatively different to our experience of those moments. We live them as ‘duration’, an elongated moment which is indivisible in experience. We measure them as space, divisible into the tiniest sliver of a second.

Images existed once as aides-mémoire. The duration to which they referred was planted in our mind, the image an entry or access point to it. They were irregular; they were chosen for their value.

The Image Becomes Memory

Compiling images together into a kind of virtual flip-book supplants memory, cements the place of nostalgia in our culture, and shapes the kinds of products that are going to be successful.

In his recent reflection on nostalgia in The Guardian, Mohsin Hamid pointed to technology as reproducing our culture of nostalgia:

Nor is the realm of technology resistant to nostalgia. Quite the opposite. On our dominant social networks we are pulled out of the present moment to constantly shape and examine and interact with carefully curated pasts.

This is interesting because the alternative, a firing of imagined futures, is potentially so radical.

Image products and A.I. are likely to get us hooked on a diet of nostalgia and representations of the past, and to further slice the time in which we live into discrete points.

Through technology the past is made real to us in a way that it never has been before. I can see myself five seconds ago, and my first girlfriend five hours ago, and my first child five months ago, and my first dog five years ago, and my first smile in my mother’s arms five decades ago, and I can sift endlessly through these archives of past moments, commingle them with present choices and likes and filters, and craft new past-present hybrids, dancing across time, sometimes alone, sometimes with others, commenting, watching, playing, mesmerising myself as the world outside my screen goes unnoticed for increasingly long interludes.

This is also the road map to success for companies like Google that are looking for successful consumer products to deploy. It means shifting the time in which we live from the present toward the past.

To borrow from Bergson’s terminology, it means spatialising our time. That withdraws us from duration.

It is the feeling that you are not really paying attention to the thing-in-itself but the thing-mediated-by-screen.

You have felt that. At a concert recently perhaps, or at a party, or when you should be awestruck by the Geysir and instead stand around with your phone hoisted in front of you.

The demolition of duration is an essential part of the business model, especially as A.I. becomes a solution in search of more and more problems. It means turning users away from attending to the thing in itself (living in duration) and instead thinking about how to store and recast it (living spatially).

It is ensuring that everything is captured as an image – frozen in time – in order to analyse and process it with A.I. To keep us behaving this way, the A.I. will supplant memory by re-presenting the image to you as memory. A neat trick to accomplish.

Securing a business model on top of this will require us to go along with this inversion of our experience, letting our images become our memory. I would expect that product managers are already moving in this direction, and if they are not, they will surely begin soon.

This is important because it unpicks habit and unpicks the philosophical foundations of freedom – a theme I will be coming back to repeatedly here.

*The metaphor seems like an issue, something to return to in a later post.