Privacy is Doomed

[This article is also available on Medium. If you read there, I would appreciate some Claps. Thanks.]

Here’s a story about the future:

– Technology is increasingly empowering the individual.
– 30 or so years out, some Columbine-killer wannabes will be able to use a virus or a dirty bomb.
– The only real solution is the surveillance state. 
– There won’t be good individual counter-measures: trying to block surveillance will only make you stand out.
– One path to that solution is panic and partial collapse of our democratic standards similar to the dynamic of post-9/11 legislation.
– It would be nice to do better than that.

To be clear, if the above is likely…and I think it is…our current dicking around with privacy rules, for all the heat it generates, is simply setting up a house of cards.

We need to think beyond our immediate impulses and head off solutions that could drag in disastrous side consequences.


There’s a class of ideas that I term cockleburrs. These are ideas that work their way into the fabric of my thoughts. And there they sit, irritating and hard to remove.

Privacy’s doom is one of my most/least favorite cockleburrs.

I’ve been tracking the Internet’s privacy implications for the last 15 years; writing about it on my blog and, now, Medium, for half of that.

Privacy concerns wax and wane.  

We are now again, seemingly, at a local maximum. Witness the surge in attempts to address internet privacy through rants, analysis, and (yikes) legislation. Opinions range from insistence that personal privacy must be enforced through government edict (with, naturally, back doors for law enforcement) on over to the claim that it’s already non-existent so get over it.

Much of this discussion becomes irrelevant if we make some reasonable assumptions about how things might play out over a longer time frame.

…30–50 years out any system of individual privacy hammered out in the interim will be hit with at least one major stress test…some actual or threatened 9/11 — amped up a few orders of magnitude.

(Btw, the podcast TWIG is a great resource to keep tabs on the ongoing evolution of the technical, legal, and social sides of the issue(s). A great example of productive discussion along these lines is the conversation between Mike Elgan and Jeff Jarvis in Episode 493. Occasionally, otoh, TWIG seems stuck in a rut. My secret hope for this article is to contribute to moving the discussion forward.)

Stress Testing the System — Privacy’s Inevitable Collapse

Since the Paleolithic, we’ve seen a ‘hockey stick’ trend in the lethality of our weapons matched by a decrease in what it takes to deploy them. The end point, visible in the near to middle future, is the capacity for dirty bombs or custom diseases implementable by disaffected high school kids.


Here’s a 2019 TED talk by Rob Reid on the risks of ‘synthetic biology’. It’s got 1.5 million views and counting. (You can find a bit more depth in the Q&A here on the Kevin Rose Show Podcast, Episode 34 — Rob Reid — the dark side of gene editing and synthetic biology.)

I prefer the video that planted the cockleburr for me back in 2013. (It still had under 1000k views last I looked.)


Speaking at the US Naval Academy, John West notes the trends and poses the problem with some precision. He offers a disquieting solution. More on that below.

I suggest jumping to minute 7:00 and checking out his thought process so you can catch my dis-ease.

  • 07:00 — Bottom up — the future will be empowering the little guy. We’re heading into a networked world; 95% of the change is at the nodes: the little guy.
  • 13:22 — Disturbing change: uranium enrichment 50 years out will need low power and a pipe to sea water, so it will be accessible to small actors
  • 15:21 — The solution: a global technological immune system

The Likely Result

If any of these or other similar points of risk come close to being actualized, it is likely that some time during the next 30 years any system of individual privacy we hammer out will be hit with at least one major stress test…some actual or threatened 9/11, amped up a few orders of magnitude in impact by technological advances…and counterable only by the type of surveillance society also made possible by recent technological advances.

Versions of this are already happening in response to real or feared system shocks — for example, Broward County in the aftermath of the Parkland shootings:

The 145-camera system, which administrators said will be installed around the perimeters of the schools deemed “at highest risk,” will also automatically alert a school-monitoring officer when it senses events “that seem out of the ordinary” and people “in places they are not supposed to be.”

Meanwhile, nations are pioneering detailed surveillance at massive scale. China and England are both making arrests in facial recognition sweeps. China is automating a universal social credit score that can determine your ability to travel or get your kids into school. It is being implemented by a branch of the government called, clearly without irony, the Central Leading Group for Comprehensively Deepening Reforms.

If the combination of a high power attack and a surveillance state response is highly probable, then much of the current privacy discussion is froth… a turbulence before we go over the falls.

The question is not will privacy go away, but how.

Back to Mr West

Let’s return to John West’s proposed solution: a “global technological immune system.” I’m going to abbreviate his presentation, but it’s well worth jumping in at 15:20 to hear his full discussion. To quote:

If someone starts to go down that road, a kid in high school or whatever [building a bomb or viruses], then we give them extra resources, we give them extra support, and extra transparency to keep that kid on the straight and narrow.

…Basically the way democratic societies accept accelerating transparency…is we’ve created this kind of fish bowl world where everyone is watching everyone else in public spaces.

West mentions the Panopticon…but this is more like the Omnopticon: all watching all. I find some of his collection of solutions internally inconsistent but certainly thought-provoking.

  • 15:21 — “You’ve got to protect privacy but anonymity goes away.”
  • 16:50 — “The way democratic societies accept transparency is that they need 95% of the cameras being in their hands.”
  • 17:00 — If you resisted watching the video so far, let me invite you to watch the single scariest 30 seconds.

From there the discussion goes in a direction I haven’t seen elsewhere: West assumes surveillance will happen and asks how it can be made compatible with a free democratic society!

First, is he right that, long term, hidden top down surveillance is incompatible with democracy? He doesn’t tease out the why of this, simply makes the assumption, but I think he’s justified in doing so.

We’re left with what seems like a highly probable future. The question is not will privacy go away, but how. We need to ask whether we can get there in some sort of controlled slide that leaves our core values intact!

His solution: embrace an all-watch-all ‘sousveillance’ society.

Second, what the heck then would that actually look like?!

I grew up in a small town in South Dakota where everyone pretty much knew everyone else’s business. My level of comfort with ‘sousveillance’ is probably higher than most, but this is still a stretch.

Granted, we did spend most of the last 250,000+ years in small bands (occasionally holed up in a single long house structure for the winter) so the ideas of anonymity and privacy are recent and a bit odd in the long sweep of human evolution.

Still what might ‘law abiding citizens’ generally fear about losing privacy?

I’m thinking some of the common denominator issues are low level lawlessness and unruly sexuality.

Will your car report you for running a stop light or driving to a suspect location even if you disable auto-pilot? Will your taxes file themselves on auto-pilot as well? Will there be anything like ‘currency privacy’? Clearly, spending needs to be monitored as a key indicator of situations where, as Mr West states, someone might need a little extra ‘support’ and ‘transparency’.

And will porn filters in 2030 guarantee that content actually is porn and not encoded virus recipes? When I recently read a history of modern Britain, it seemed like every wife was someone else’s mistress. Was that public knowledge?

In short, what the fuck would a sousveillance society look like? The answer is left as an exercise for us all.

Postscript 1 — Moral Panic

What’s the connection? If you don’t listen to TWIG, it might not be obvious. It’s part of the dialog on any topic of tech’s potential negative impact, described in greater detail below.

Moral Panic!!!

Moral Panic: the concept is from sociologist Stanley Cohen. It was based on observing the public (over)reaction to the “Mods” vs “Rockers” rivalry in Britain in the ’60s and ’70s. Similar to teenagers in the US, teenagers in the UK were considered dangerous. (Of course, judged by their impact on mainstream culture summed through the mid-’50s to the mid-’70s, they were.)

Note the term is ‘moral panic’ and not ‘moral concern’ — Cohen described an emotional contagion that overrode rational analysis.

All significant cultural transformations with a big upside have had a corresponding dark shadow.

As noted above, there’s a current freak-out over ‘big tech’ with a variety of proposed solutions. Some are well considered; some are knee-jerk; few systematically analyze the possible unintended consequences of the proposed solution. Often they look like a land grab: the Internet is itself generally considered a vehicle of sedition from the viewpoint of those who seek control.

We fear we’ve created a monster. We fear we’ll throw the baby out with the bathwater.

Or what?

Still, I’m not sure moral panic is a useful concept here.

There’s an inherent problem with it. Picture responses on a two-by-two grid of Distress versus quality of Analysis. We can, I assume, agree that the Weak Analysis row is undesirable: Weak Analysis solutions frequently do more harm than good. Even after a deep analysis, our solutions often go sideways.

However, there is a problem with the No Distress / Solid Analysis quadrant: it’s extremely unlikely to actually occur. Analysis is hard work. There’s got to be motivation to start and to put in the effort.

I was an operations guy during my day-job career. The way to work biz operations is to figure out the 3 or 4 ways any critical operation is likely to go sideways and take steps to prevent or mitigate the problem.

My motto: “Panic early!”

Instead of setting a point along the continuum of Distress that triggers a Moral Panic kill switch, it might be more useful (and effective) to ride the beast to an analysis of risk and look for paths to mitigation.

Who do we trust to have these discussions if not us?

We now have a millennia-long, studiable history of the type of changes that delivered mixed social consequences both glorious and dire. Can we use that to get smarter?

A few additional points

The information tech situation doesn’t parallel Mods vs Rockers. It’s bigger…somewhere between the Industrial Revolution and the introduction of the automobile.


Let’s take the invention of crop-based agriculture. Who could argue that’s a bad thing? Well…okay…it started humbly (in fact, there’s significant evidence that the first agriculture might have been aimed at making beer).

The upside: it took the species up to its first half billion in population, making Homo sapiens (until now) a poor candidate for extinction.

But there was a downside. Let’s listen in:

Barney: Hey, this fields thing is going to saddle us with 2,500 years of back-breaking servitude and brutally short lives!!

Fred: Dude, get a grip. Beer!

Automobiles might be the best example: mobility, freedom, back seat sex (teenagers out of control), ambulances, accidents (a leading cause of death), hearses, global warming, and so on. The impact is mixed. Mitigation has been possible.


We should be able to recognize patterns of longer term impact and risk in ways not available to us at the start of either the agricultural or industrial revolution.

Last, here’s an algorithm in action: a recommendation for yours truly based on my YouTube search for the Drive-By Truckers’ song Gravity’s Gone.

Guess we’re all only 3 degrees of separation from Climate Denial. Could automated surveillance go sideways? Nah.



Science Fiction as Erosion – Meditation on Gibson’s Spook Country

I recently reread William Gibson’s Spook Country. I read it first years ago, shortly after publication, and it has lurked in the back of my memory ever since. Its characters, plot elements, and moods would surface even though only vaguely connected to the matters at hand. I wanted to get back into it for another full immersion.

It was a very satisfying re-read.

Now then, I read it on the Kindle, and the Kindle has a dangerous aspect, particularly for insomniacs: you can buy a book on impulse at 2 am and be reading it minutes later. Spook Country (of which I had vivid memories) led to Pattern Recognition (which I remembered hardly at all despite some overlap in characters and vibe) and then on into a mini-sf reading spree of early Gibson, newer Iain Banks, and mop-up Kage Baker.

Noting what remained vivid in memory and what didn’t led to some thinking about what sf does.

I’ve read various theories over the past decades about the pulps and the limits and appropriate role of genre fiction. My theory of sf matches none of that and simultaneously works as a description of why I like all the interrelated genres of sf, fantasy, & horror.

The role of sf is, quite simply, to erode the present.

  • Imagine that we sit down with a new volume by a favorite author on the flat plain of the present.
  • Hills and valleys have been flattened since last time by the familiarity of the everyday world.
  • We read.
  • If the volume is effective the world begins to erode.
  • In the best possible case, like a stroll atop Sheep Mountain in South Dakota, the flat prairie drops away into a deeply eroded landscape of shape and color.

The gradient of how things will move forward has now been changed!

The present flows into the future along an altered channel with different resonance and open possibilities.

And finally that new future feeds back and revises the present. Our world becomes more eerie, more open, more wondrous, more strange.

Kage Baker: An Appreciation

A favorite author of mine, Kage Baker, died young at 57 on 1/31/2010.

I took the news as a call to chase down and read anything I could find of hers that I’d somehow missed. There was very little to find. I’d been following her work assiduously since reading her In the Garden of Iden in 1997. Not much had slipped by me.

I was particularly fond of her popular Company series but I’ve liked everything she’s written.
The Company series began with a premise introduced in Garden of Iden: a secret organization uses a highly constrained version of time travel to ‘guide’ human history. Its agents are recruited (by other agents) from among otherwise doomed young children, who are trained up, given immortality and near invulnerability, and then deployed to rescue cultural treasures and prevent the derailment of human history. Or at least that’s the story the agents are given.

The master organization exists in the 21st–24th centuries and stays there, since time travel is one-way and it would doom the non-immortal company officers to life in a primitive past.

The agents slowly realize things are not necessarily as they seem, factions form, and different strategies constellate, all focused on 2355…the point where something happens and the news-and-instructions channel from the future goes dark.

The beauty of Baker’s writing is that she focuses on the human side of her cyborg agents. She gets to play with a clash of cultural backgrounds and character formation from different epochs, with agents recruited in periods ranging back to the Neolithic but deployed potentially anytime or anywhere forward…although she focuses on relatively recent missions (1500 and on). She deals with the challenge to relationships and the strains on the personalities caused by her agents’ inhuman condition and their alienation from the regular human population, who are, nonetheless, producing the magnificent cultural artifacts the agents are raised to protect and treasure.

It’s a great use of a nerdly science fiction premise to anchor an exploration of personality and culture. Oh, and did I mention that a key thread through all the books is the romance of the two characters introduced in Iden, one of whom repeatedly dies and then reappears?

If I have any criticism, it’s that her later books have too much plot to get through. She seemed to be racing forward faster and faster to cover the full narrative up to the events of 2355 in fewer and fewer pages. The final book in the series appeared in 2007. I don’t know the events of her personal biography: perhaps this rush was related to her illness; or it could be she was simply bored with the series but wanted to give her fans a conclusion and get on with other projects. In any case, I liked it better when she took her time and did a slower dive into culture and character. There she shone, although it was all interesting.

My favorite of her novels is Sky Coyote. Imagine a competent version of Rowling’s Gilderoy Lockhart, surgically altered to appear part coyote, and sent to convince a coastal, mercantile-oriented California tribe to do the Company’s bidding and move from their rich home territory. This is the second book in the Company series. I give it a strong recommendation. If you’re a purist, read Iden as a warm-up.