Privacy is Doomed

[This article is also available on Medium. If you read there, I'd appreciate some claps. Thanks.]

Here’s a story about the future:

– Technology is increasingly empowering the individual.
– 30 or so years out, some Columbine-killer wannabes will be able to use a virus or dirty bomb.
– The only real solution is the surveillance state. 
– There won’t be good individual counter-measures: trying to block surveillance will only make you stand out.
– One path to that solution is panic and partial collapse of our democratic standards similar to the dynamic of post-9/11 legislation.
– It would be nice to do better than that.

To be clear, if the above is likely…and I think it is…our current dicking around with privacy rules, for all the heat it generates, is simply setting up a house of cards.

We need to think beyond our immediate impulses and head off solutions that could drag in disastrous side consequences.


There’s a class of ideas that I term cockleburrs. These are ideas that work their way into the fabric of my thoughts. And there they sit, irritating and hard to remove.

Privacy’s doom is one of my most/least favorite cockleburrs.

I’ve been tracking the Internet’s privacy implications for the last 15 years; writing about it on my blog and, now, Medium, for half of that.

Privacy concerns wax and wane.  

We are now again, seemingly, at a local maximum. Witness the surge in attempts to address internet privacy through rants, analysis, and (yikes) legislation. Opinions range from insistence that personal privacy must be enforced through government edict (with, naturally, back doors for law enforcement) on over to the claim that it’s already non-existent so get over it.

Much of this discussion becomes irrelevant if we make some reasonable assumptions about how things might play out over a longer time frame.

…30–50 years out any system of individual privacy hammered out in the interim will be hit with at least one major stress test…some actual or threatened 9/11 — amped up a few orders of magnitude.

(Btw, the podcast TWIG is a great resource to keep tabs on the ongoing evolution of the technical, legal, and social sides of the issue(s). A great example of productive discussion along these lines is the conversation between Mike Elgan and Jeff Jarvis in Episode 493. Occasionally, otoh, TWIG seems stuck in a rut. My secret hope for this article is to contribute to moving the discussion forward.)

Stress Testing the System — Privacy’s Inevitable Collapse

Since the Paleolithic we’ve seen a ‘hockey stick’ trend in the lethality of our weapons, matched by a decrease in what it takes to deploy them. The end point, visible in the near to middle future, is the capacity for dirty bombs or custom diseases implementable by disaffected high school kids.


Here’s a 2019 TED talk by Rob Reid on the risks of ‘synthetic biology’. It’s got 1.5 million views and counting. (You can find a bit more depth in the Q&A here on the Kevin Rose Show Podcast, Episode 34 — Rob Reid — the dark side of gene editing and synthetic biology.)

I prefer the video that planted the cockleburr for me back in 2013. (It still had under a million views last I looked.)


Speaking at the US Naval Academy, John West notes the trends and poses the problem with some precision. He offers a disquieting solution. More on that below.

I suggest jumping to minute 7:00 and checking out his thought process so you can catch my dis-ease.

  • 07:00 — Bottom up — the future will be empowering the little guy. We’re heading into a networked world; 95% of the change is at the nodes: the little guy.
  • 13:22 — Disturbing change: uranium enrichment 50 years out will need only low power and a pipe to sea water, making it accessible to small actors.
  • 15:21 — The solution: a global technological immune system

The Likely Result

If any of these or similar risks comes close to being actualized, it is likely that sometime during the next 30 years any system of individual privacy we hammer out will be hit with at least one major stress test…some actual or threatened 9/11, amped up a few orders of magnitude in impact by technological advances…and that will be counter-able only by the type of surveillance society also made possible by recent technological advances.

Versions of this are already happening in response to real or feared system shocks — for example, Broward County in the aftermath of the Parkland shootings:

The 145-camera system, which administrators said will be installed around the perimeters of the schools deemed “at highest risk,” will also automatically alert a school-monitoring officer when it senses events “that seem out of the ordinary” and people “in places they are not supposed to be.”

Meanwhile, nations are pioneering detailed surveillance at a massive scale. China and England are both making arrests in facial recognition sweeps. China is automating a universal social credit score that can determine your ability to travel or get your kids into school. It is being implemented by a branch of the government called, clearly without irony, the Central Leading Group for Comprehensively Deepening Reforms.

If the combination of a high power attack and a surveillance state response is highly probable, then much of the current privacy discussion is froth… a turbulence before we go over the falls.

The question is not will privacy go away, but how.

Back to Mr West

Let’s return to John West’s proposed solution: a “global technological immune system.” I’m going to abbreviate his presentation, but it’s well worth jumping in at 15:20 to hear his full discussion. To quote:

“If someone starts to go down that road, a kid in high school or whatever [building a bomb or viruses], then we give them extra resources, we give them extra support, and extra transparency to keep that kid on the straight and narrow.”

…Basically the way democratic societies accept accelerating transparency…is we’ve created this kind of fish bowl world where everyone is watching everyone else in public spaces.

West mentions the Panopticon…but this is more like the Omnopticon — all watching all. I find his collection of solutions internally inconsistent in places, but certainly thought-provoking.

  • 15:21 — “You’ve got to protect privacy but anonymity goes away.”
  • 16:50 — “The way democratic societies accept transparency is that they need 95% of the cameras being in their hands.”
  • 17:00 — If you resisted watching the video so far, let me invite you to watch the single scariest 30 seconds.

From there the discussion goes in a direction I haven’t seen elsewhere: West assumes surveillance will happen and asks how it can be made compatible with a free democratic society!

First, is he right that, long term, hidden top-down surveillance is incompatible with democracy? He doesn’t tease out the why of this; he simply makes the assumption. But I think he’s justified in doing so.

We’re left with what seems like a highly probable future. The question is not will privacy go away, but how. We need to ask if we can get there in some sort of controlled slide that leaves our core values intact!

His solution: embrace an all-watch-all ‘sousveillance’ society.

Second, what the heck then would that actually look like?!

I grew up in a small town in South Dakota where everyone pretty much knew everyone else’s business. My level of comfort with ‘sousveillance’ is probably higher than most, but this is still a stretch.

Granted, we did spend most of the last 250,000+ years in small bands (occasionally holed up in a single long house structure for the winter) so the ideas of anonymity and privacy are recent and a bit odd in the long sweep of human evolution.

Still, what might ‘law-abiding citizens’ generally fear about losing privacy?

I’m thinking some of the common denominator issues are low level lawlessness and unruly sexuality.

Will your car report you for running a stop light or driving to a suspect location even if you disable auto-pilot? Will your taxes file themselves on auto-pilot as well? Will there be anything like ‘currency privacy’? Clearly, spending needs to be monitored as a key indicator of situations where, as Mr West states, someone might need a little extra ‘support’ and ‘transparency’.

And will porn filters in 2030 guarantee that content actually is porn and not encoded virus recipes? When I recently read a history of modern Britain, it seemed like every wife was someone else’s mistress. Was that public knowledge?

In short, what the fuck would a sousveillance society look like? The answer is left as an exercise for us all.

Postscript 1 — Moral Panic

What’s the connection? If you don’t listen to TWIG, it might not be obvious. It’s part of the dialog on any topic of tech’s potential negative impact, described in greater detail below.

Moral Panic!!!

Moral Panic: the concept is from sociologist Stanley Cohen. It was based on observing the public (over)reaction to the “Mods” vs “Rockers” rivalry in Britain in the ’60s and ’70s. Like teenagers in the US, teenagers in the UK were considered dangerous. (Of course, judged by their impact on mainstream culture summed through the mid-’50s to the mid-’70s, they were.)

Note the term is ‘moral panic’ and not ‘moral concern’ — Cohen described an emotional contagion that overrode rational analysis.

All significant cultural transformations with a big upside have had a corresponding dark shadow.

As noted above, there’s a current freak-out on ‘big tech’ with a variety of proposed solutions. Some are well considered; some are knee jerk; few systematically analyze the possible unintended consequences of the proposed solution. Often they look like a land grab: the Internet is itself generally considered a vehicle of sedition from the viewpoint of those that seek control.

We fear we’ve created a monster. We fear we’ll throw the baby out with the bathwater.

Or what?

Still, I’m not sure moral panic is a useful concept here.

There’s an inherent problem with it. Picture a two-by-two grid: level of distress against quality of analysis. We can, I assume, agree that the weak-analysis row is undesirable: weak-analysis solutions frequently do more harm than good. And even after a deep analysis, our solutions often go sideways.

However, there is a problem with the No Distress / Solid Analysis quadrant: it’s extremely unlikely to actually occur. Analysis is hard work. There’s got to be motivation to start and to put in the effort.

I was an operations guy during my day-job career. The way to work biz operations is to figure out the 3 or 4 ways any critical operation is likely to go sideways and take steps to prevent or mitigate the problem.

My motto: “Panic early!”

Instead of setting a point along the continuum of Distress that triggers a Moral Panic kill switch, it might be more useful (and effective) to ride the beast to an analysis of risk and look for paths to mitigation.

Who do we trust to have these discussions if not us?

We now have a millennia-long, studiable history of the type of changes that delivered mixed social consequences both glorious and dire. Can we use that to get smarter?

A few additional points

The information tech situation doesn’t parallel Mods vs Rockers. It’s bigger…somewhere between the Industrial Revolution and the introduction of the automobile.


Let’s take the invention of crop-based agriculture. Who could argue that’s a bad thing? Well…okay…it started humbly. (In fact, there’s significant evidence that the first agriculture might have been aimed at making beer.)

The upside: it took the species up to its first half-billion in population, making Homo sapiens (until now) a poor candidate for extinction.

But there was a downside. Let’s listen in:

Barney: Hey, this fields thing is going to saddle us with 2500 years of back breaking servitude and brutally short lives!!

Fred: Dude, get a grip. Beer!

Automobiles might be the best example: mobility, freedom, back seat sex (teenagers out of control), ambulances, accidents (a leading cause of death), hearses, global warming, and so on. The impact is mixed. Mitigation has been possible.


We should be able to recognize patterns of longer term impact and risk in ways not available to us at the start of either the agricultural or industrial revolution.

Last, here’s an algorithm in action: a recommendation for yours truly based on my YouTube search for the Drive-By Truckers’ song Gravity’s Gone.

Guess we’re all only 3 degrees of separation from Climate Denial. Could automated surveillance go sideways? Nah.



Privacy Pt 1: Youth Wants To Know

Discussion of privacy on the Internet, and, in particular, in social media, hit a local maximum a short time ago in the nym wars on Google+, against a background of increasing public concern. Pronouncements by pundits and officials, some informed and most ignorant, rock steadily on. None of that had pushed me to add my 2 cents to the mix.

What finally took me over the edge was hearing yet another 20 year old explain that his generation doesn’t have the same concern for privacy that we old folks so quaintly maintain. The implication is that, because they’re from a younger generation, the young are better adapted to current conditions.

There is another possible explanation, however. It follows.

First, let me get my geezer persona on.

Now there, youngsters, I’m from a generation that routinely showed up naked in public and engaged in such low-profile activities as chanting to levitate the Pentagon (see Allen Ginsberg’s exorcism of the Pentagon). This was dropped onto an America barely emerging from the ’50s and massively less receptive to living publicly. Nudity was performance art aimed at letting some sunlight into the cloister.

Allow me to assure you that my concerns are not an attachment to some arbitrary, bygone, meaningless convention like, for example, not ending a sentence with a preposition. They are, instead, the result of years of observation and balanced analysis…primarily of the government. More on that later.

You kids may have a different concept of the value of privacy but that concept is not different from the concept I held as a kid. Experience, one of a few advantages of aging, has made me wiser.

I’d argue that privacy concerns come down not to how you feel about living in public, but to how you feel about powerful institutions (governmental organizations primarily, but any institution large enough to have significant information-gathering resources) having easy access to your life’s details.

I think providing that access is dangerous.

Rather than get into point/counter-point or personal biography, I’d like to perform a thought experiment.

Draw an axis of government benignity. Start in, say, Amsterdam and end in Pyongyang. Route it through a half-dozen or so cities you think span the spectrum of openness. (Let’s pick Berlin, Beijing, Lagos, Sao Paulo, Moscow, San Francisco, Tehran, Istanbul, and London for the purposes of this experiment.) Put them on the axis wherever you think they belong. Now, imagine yourself living in Amsterdam. Assume you and your tribe share thoughts, interesting books and websites, music, purchases, shops and restaurants via a social network or two. Sometimes you flirt; sometimes you trade ideas and opinions; sometimes you blow off steam.

(Exclude for the purposes of this experiment the information streams created by your cell phone and credit card.)

Move the slider along until having all that information public becomes a significant threat to your well-being…until you cross a line where your behavior is now suspect and where, simultaneously, you have revealed who you associate with and much of your daily routine. If you get comfortably all the way to Pyongyang, you need to get a life. Commonly, you’re in danger of loss of income, harassment, discrimination, imprisonment, or disappearance at some point midway along the line, for any of multiple reasons.

Of course, you say, you’re not in Pyongyang but near Amsterdam on the line…safe perhaps in San Francisco with me, and these concerns aren’t appropriate. Or are they?

At issue? These media are primarily built here but deployed globally. What seems benign locally becomes a tool for the police state with a slight shift of context. Regardless of local comfort, our systems would better be measured on a global yardstick.

Specifically, as the creators, early adopters, critics, and evangelists for these systems, we have a responsibility to take predictable consequences quite seriously as we design, create, implement, and sell. We are building history, brick by brick, and broad currents flow through us and our shapings.

We need to take that seriously.

At least now and then.

It will be of general benefit if we lay down cover for our brothers and sisters who are genuinely under siege. Perhaps they want to practice Qigong in Beijing or discuss their Armenian heritage in Istanbul…or draw political cartoons in Amsterdam.

The issues aren’t necessarily easy to engineer. We need to wrestle, for example, with the interface between criminality and dissent, and the interplay of persona and authenticity. But it’s up to us to do that rather than ignore complexity and the likely paths into the future. I have some ideas that might help in this discussion. Probably you do, too.

I’m urging us to design and discuss privacy in a wider scope–spatially, temporally, morally–and remove it from a discussion of personal preference.

In Brian Eno’s terms, we need to think in a broader here and a longer now.

Privacy Pt 2: Python Politics: the Economics of Knowledge Accumulation
Privacy Pt 3: Is there an Engineer in the house?

PS – A few notes in passing.

The US is not necessarily a safe haven.

Consider it in light of the recent National Defense Authorization Act. Or the police raids to preemptively undermine protests at the 2008 Republican convention:

You may not be a Judi Bari (they arrest you and search everywhere you hang out after someone bombs your car) or organizing widespread protest in Minneapolis (they show up the week before and take all your stuff), but note: systems integration is changing the game for law enforcement everywhere. It might not be J. Edgar Hoover you need to worry about but some local middle-aged portly DA with too-tight shoes, a sullen teenager, a nasty mortgage, and a depressed spouse who’s pissed off because some 20-year-old is driving a car that costs his yearly salary.

Finally, here’s a bit of gratuitous Chuang Tzu – the topic is gathering up treasures for thieves:

Privacy Pt 2: Python Politics: the Economics of Knowledge Accumulation

“A constricting snake like a boa or a python kills its prey by suffocation. It uses the momentum of its strike to throw coils around its victim’s body. Then, it squeezes. Every time the prey exhales, the snake squeezes a little more tightly. Soon, the victim can’t breathe any more.”

Essentially, then, constrictors kill not by crushing but by taking in the slack and not giving it back. I view this as relevant to the topic at hand.

History: the Internet Wants to Be Free.

We seem to have generated a cultural lock-in that demands a free Internet. This is the result of a combination of factors: successful initiatives, ideological leanings (sometimes based on the semantic ambiguity between free as unregulated and free as without cost), and significant self-delusion.

There is a component of this freedom that is real. The freedom exists in a commons collectively built and shared. It exists in the group effort to build standards and protocols to connect us and to share and display information. It exists in the collection of individual visions, not always mutually compatible, that maintain and extend this effort. It exists in our ability to design technical hacks to undermine systems of control.

Wikipedia is a good exemplar of this Internet.

Beyond that, it’s not free at all.

Our vision is embodied in components all of which have both capital and maintenance costs.

The initial freedom was built on simple design hacks and the low bandwidth required to move text around. It sat on top of large computing projects in academia and government and survived on crumbs snatched off the table. You might have to pay a bit for your hook to the infrastructure, but content was provided mostly without cost. This was the world of bulletin boards, IRC, Usenet groups, email, and text-based MUDs.

This model has been extended with increasingly elaborate ‘free’ content and services: Google, YouTube, Flickr, Facebook, Pandora, Spotify, Twitter, and so on. Meanwhile, temporary successes in the paid model, e.g. AOL, generally seem to erode back to free.

This apparent continuity masks a simple fact: the old model didn’t scale. We benefit from increasingly costly products. A new funding source…advertising, of course…by and large pays the bill.

Google is the exemplar, here.

Which brings us to the common wisdom: if the product appears free, then you’re the product.

The true coin of this transaction is information about us individually and collectively.

The economics of targeted advertising is pretty simple conceptually. Information is gathered on you to give you advertising you’re likely to respond to. Or perhaps to provide you a more tailored service that will make you more engaged and hence bring you to the advertising more frequently.

With Google this was an add-on that grew from humble beginnings. Now the business plans of start-ups are often designed specifically to elicit information. Products are built around data mining and targeted advertising.

We gain significant benefit from this model and here we, perhaps, become complicit.

This march toward greater public visibility has generated other elements of common wisdom which are more suspect.

You often hear the following: “Privacy is dead. Things are different than in the past. We need to get over it and adjust to new realities. Privacy is already fairly illusory, so upset is unjustified; just look at the targeted direct mail of the past.” And so on.

To my mind, an appeal to the hopelessness of implementing change, regardless of that change’s desirability, somehow lacks punch. We live in a world that’s changing all the time. We live in a world in which an improved technical solution or compelling idea can turn things upside down in considerably less than a decade. And we are inventing that technology every day.

There is a danger that I think it is our responsibility to recognize and attempt to address.

My argument is —

  1. There’s an unremitting economic pressure to create a higher and higher resolution picture of you, what you do, and what you might potentially consume.
  2. We’ll hit a point (and are already there in some locations) where you’ll stand out as suspicious by not being transparent. Against this fine-grained picture, the blurry individuals stand out as very likely criminals or revolutionaries.
  3. The most likely to succeed in hiding from scrutiny behind a false front is the deliberate criminal with a carefully constructed public identity. The most likely to fail is the accidental revolutionary…the well-meaning public citizen who is compelled to move into some sort of rebellion.
  4. Here’s the NYRB on Chinese dissident Fang Lizhi:

Fang’s path through life observed a pattern that is common to China’s dissidents: a person begins with socialist ideals, feels bitter when the rulers betray the ideals, resorts to outspoken criticism, and ends in prison or exile. Liu Binyan, Wang Ruowang, Su Xiaokang, Hu Ping, Zheng Yi, Liu Xiaobo, and many others have followed this pattern.

Let’s return to a police action in Minnesota cited in the PS to Part 1.

I suspect that these raids proved highly effective in disrupting the protest and that it would have been less so had the naive folks raided been up to actual criminal activity and taken steps to keep backup materials and tools hidden away somewhere. In other words, if they had been the folks that justified a disruptive police action then that action would likely have been ineffective.

Is there an Engineer in the house? How about an ESL Instructor?

We are building, brick by brick, a consolidation of power through an increasingly fine-grained consolidation of data. We live in a world where a major portion of the US security apparatus has disappeared behind a curtain and is accountable, apparently, mostly to itself. And a world where the idea of individual political rights to speak, meet, and organize is not widely acknowledged. The result of that intersection creates a problem that is not, I think, insignificant. The constraint on oppressive regimes has often been reach, and we, in our work on information technologies, are eliminating some significant constraints.

What’s the solution? Is there one?

Privacy Pt 1: Youth Wants To Know
Privacy Pt 3: Is there an Engineer in the house?