Privacy is Doomed

[This article is also available on Medium. If you read there, I'd appreciate some claps. Thanks.]

Here's a story about the future:

– Technology is increasingly empowering the individual.
– 30 or so years out, some Columbine-killer wannabes will be able to deploy a virus or a dirty bomb.
– The only real solution is the surveillance state. 
– There won’t be good individual counter-measures: trying to block surveillance will only make you stand out.
– One path to that solution is panic and a partial collapse of our democratic standards, similar to the dynamic of post-9/11 legislation.
– It would be nice to do better than that.

To be clear, if the above is likely…and I think it is…our current dicking around with privacy rules, for all the heat it generates, is simply setting up a house of cards.

We need to think beyond our immediate impulses and head off solutions that could drag in disastrous side consequences.

Cockleburrs

There’s a class of ideas that I term cockleburrs. These are ideas that work their way into the fabric of my thoughts. And there they sit, irritating and hard to remove.

Privacy’s doom is one of my most/least favorite cockleburrs.

I've been tracking the Internet's privacy implications for the last 15 years, and writing about them on my blog and, now, Medium for half of that.

Privacy concerns wax and wane.  

We are now, seemingly, at another local maximum. Witness the surge in attempts to address internet privacy through rants, analysis, and (yikes) legislation. Opinions range from insistence that personal privacy must be enforced through government edict (with, naturally, back doors for law enforcement) over to the claim that it's already non-existent, so get over it.

Much of this discussion becomes irrelevant if we make some reasonable assumptions about how things might play out over a longer time frame.

…30–50 years out any system of individual privacy hammered out in the interim will be hit with at least one major stress test…some actual or threatened 9/11 — amped up a few orders of magnitude.

(Btw, the podcast TWIG (This Week in Google) is a great resource for keeping tabs on the ongoing evolution of the technical, legal, and social sides of the issue(s). A great example of productive discussion along these lines is the conversation between Mike Elgan and Jeff Jarvis in Episode 493. Occasionally, otoh, TWIG seems stuck in a rut. My secret hope for this article is to contribute to moving the discussion forward.)

Stress Testing the System — Privacy’s Inevitable Collapse

Since the Paleolithic we've seen a 'hockey stick' trend in the lethality of our weapons, matched by a decrease in what it takes to deploy them. The end point, visible in the near-to-middle future, is the capacity for dirty bombs or custom diseases deployable by disaffected high school kids.

Risk

Here's a 2019 TED talk by Rob Reid on the risks of 'synthetic biology'. It's got 1.5 million views and counting. (You can find a bit more depth in the Q&A on the Kevin Rose Show podcast, Episode 34 — Rob Reid — the dark side of gene editing and synthetic biology.)

I prefer the video that planted the cockleburr for me back in 2013. (It still had under 1000k views last I looked.)

[Embedded video: John West speaking at the US Naval Academy]

Speaking at the US Naval Academy, John West notes the trends and poses the problem with some precision. He offers a disquieting solution. More on that below.

I suggest jumping to minute 7:00 and checking out his thought process so you can catch my dis-ease.

  • 07:00 — Bottom up — the future will be empowering the little guy. We’re heading into a networked world; 95% of the change is at the nodes: the little guy.
  • 13:22 — Disturbing change: uranium enrichment 50 years out will need only low power and a pipe to sea water, making it accessible to small actors.
  • 15:21 — The solution: a global technological immune system

The Likely Result

If any of these or similar risk points comes close to being actualized, it is likely that sometime during the next 30 years any system of individual privacy we hammer out will be hit with at least one major stress test: some actual or threatened 9/11, amped up a few orders of magnitude in impact by technological advances, and counterable only by the type of surveillance society also made possible by recent technological advances.

Versions of this are already happening in response to real or feared system shocks — for example, Broward County in the aftermath of the Parkland shootings:

The 145-camera system, which administrators said will be installed around the perimeters of the schools deemed “at highest risk,” will also automatically alert a school-monitoring officer when it senses events “that seem out of the ordinary” and people “in places they are not supposed to be.”
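
In engineering terms, that kind of alerting is mundane: a policy table consulted against whatever the vision pipeline emits. Here's a minimal, purely hypothetical Python sketch (the zones, hours, and Detection type are all invented for illustration; the actual Broward system's internals aren't public):

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical detection record, standing in for whatever the camera
# system's computer-vision pipeline would actually emit.
@dataclass
class Detection:
    zone: str   # where a person was seen
    at: time    # time of day of the sighting

# Toy policy table: which zones are open, and during which hours.
ALLOWED_HOURS = {
    "courtyard": (time(7, 0), time(16, 0)),
    "parking_lot": (time(6, 0), time(18, 0)),
}

def should_alert(d: Detection) -> bool:
    """Flag people 'in places they are not supposed to be'."""
    hours = ALLOWED_HOURS.get(d.zone)
    if hours is None:
        return True  # unknown zone: treat as out of bounds
    start, end = hours
    return not (start <= d.at <= end)

print(should_alert(Detection("courtyard", time(22, 30))))   # True: after hours
print(should_alert(Detection("parking_lot", time(9, 0))))   # False: routine
```

Everything interesting, and everything dangerous, hides in what gets to count as "out of the ordinary."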

Meanwhile, nations are pioneering detailed surveillance at a massive scale. China and England are both making arrests in facial recognition sweeps. China is automating a universal social credit score that can determine your ability to travel or get your kids into school. It is being implemented by a branch of the government called, clearly without irony, the Central Leading Group for Comprehensively Deepening Reforms.

If the combination of a high power attack and a surveillance state response is highly probable, then much of the current privacy discussion is froth… a turbulence before we go over the falls.

The question is not whether privacy will go away, but how.

Back to Mr West

Let’s return to John West’s proposed solution: a “global technological immune system.” I’m going to abbreviate his presentation, but it’s well worth jumping in at 15:20 to hear his full discussion. To quote:

"If someone starts to go down that road, a kid in high school or whatever [building a bomb or viruses], then we give them extra resources, we give them extra support, and extra transparency to keep that kid on the straight and narrow."

…Basically the way democratic societies accept accelerating transparency…is we’ve created this kind of fish bowl world where everyone is watching everyone else in public spaces.

West mentions the Panopticon…but this is more like the Omnopticon — all watching all. I find parts of his collection of solutions internally inconsistent but certainly thought-provoking.

  • 15:21 — “You’ve got to protect privacy but anonymity goes away.”
  • 16:50 — “The way democratic societies accept transparency is that they need 95% of the cameras being in their hands.”
  • 17:00 — If you've resisted watching the video so far, let me invite you to watch the single scariest 30 seconds.

West references the Panopticon, and then the discussion goes in a direction I haven't seen elsewhere: he assumes surveillance will happen and asks how it can be made compatible with a free, democratic society!

First, is he right that, long term, hidden top-down surveillance is incompatible with democracy? He doesn't tease out the why of this; he simply makes the assumption. But I think he's justified in doing so.

We're left with what seems like a highly probable future. The question is not whether privacy will go away, but how. We need to ask if we can get there in some sort of controlled slide that leaves our core values intact!

His solution: embrace an all-watch-all 'sousveillance' society.

Second, what the heck then would that actually look like?!

I grew up in a small town in South Dakota where everyone pretty much knew everyone else's business. My level of comfort with 'sousveillance' is probably higher than most, but this is still a stretch.

Granted, we did spend most of the last 250,000+ years in small bands (occasionally holed up in a single longhouse structure for the winter), so the ideas of anonymity and privacy are recent and a bit odd in the long sweep of human evolution.

Still, what might 'law-abiding citizens' generally fear about losing privacy?

I'm thinking some of the common-denominator issues are low-level lawlessness and unruly sexuality.

Will your car report you for running a stop light or driving to a suspect location, even if you disable the autopilot? Will your taxes file themselves on autopilot as well? Will there be anything like 'currency privacy'? Clearly, spending needs to be monitored as a key indicator of situations where, as Mr West states, someone might need a little extra 'support' and 'transparency'.

And will porn filters in 2030 guarantee that content actually is porn and not encoded virus recipes? When I recently read a history of modern Britain, it seemed like every wife was someone else’s mistress. Was that public knowledge?

In short, what the fuck would a sousveillance society look like? The answer is left as an exercise for us all.

Postscript 1 — Moral Panic

What's the connection? If you don't listen to TWIG, it might not be obvious. 'Moral panic' is part of the dialog on any topic involving tech's potential negative impact, as described in greater detail below.

Moral Panic!!!

Moral Panic: the concept comes from sociologist Stanley Cohen, based on observing the public (over)reaction to the "Mods" vs "Rockers" rivalry in Britain in the '60s and '70s. Like teenagers in the US, teenagers in the UK were considered dangerous. (Of course, judged by their impact on mainstream culture summed over the mid-'50s to the mid-'70s, they were.)

Note the term is ‘moral panic’ and not ‘moral concern’ — Cohen described an emotional contagion that overrode rational analysis.


As noted above, there's a current freak-out over 'big tech', with a variety of proposed solutions. Some are well considered; some are knee-jerk; few systematically analyze the possible unintended consequences of the proposed solution. Often they look like a land grab: the Internet is itself generally considered a vehicle of sedition from the viewpoint of those who seek control.

We fear we’ve created a monster. We fear we’ll throw the baby out with the bathwater.

Or what?

Still, I’m not sure moral panic is a useful concept here.

There's an inherent problem with it. Picture a two-by-two grid: distress or no distress on one axis, weak or solid analysis on the other. We can, I assume, agree that the Weak Analysis row is undesirable; weak-analysis solutions frequently do more harm than good. Even after a deep analysis, our solutions often go sideways.

However, there is also a problem with the No Distress / Solid Analysis quadrant: it's extremely unlikely to actually occur. Analysis is hard work. There has to be motivation to start and to put in the effort.

I was an operations guy during my day-job career. The way to work biz operations is to figure out the 3 or 4 ways any critical operation is likely to go sideways, then take steps to prevent or mitigate the problem.

My motto: “Panic early!”

Instead of setting a point along the continuum of Distress that triggers a Moral Panic kill switch, it might be more useful (and effective) to ride the beast to an analysis of risk and look for paths to mitigation.

Who do we trust to have these discussions if not us?


A few additional points

The information tech situation doesn’t parallel Mods vs Rockers. It’s bigger…somewhere between the Industrial Revolution and the introduction of the automobile.

All significant cultural transformations with a big upside have had a corresponding dark shadow.

Let's take the invention of crop-based agriculture. Who could argue that's a bad thing? Well…okay…it started humbly (in fact, there's significant evidence that the first agriculture might have been aimed at making beer).

The upside: it took the species up to its first half-billion in population, making Homo sapiens (until now) a poor candidate for extinction.

But there was a downside. Let’s listen in:

Barney: Hey, this fields thing is going to saddle us with 2500 years of back-breaking servitude and brutally short lives!!

Fred: Dude, get a grip. Beer!

Automobiles might be the best example: mobility, freedom, back-seat sex (teenagers out of control), ambulances, accidents (a leading cause of death), hearses, global warming, and so on. The impact is mixed. Mitigation has been possible.

We now have a millennia-long, studiable history of the type of changes that delivered mixed social consequences both glorious and dire.

Can we use that to get smarter?

We should be able to recognize patterns of longer term impact and risk in ways not available to us at the start of either the agricultural or industrial revolution.

Last, here's an algorithm in action: a recommendation for yours truly based on my YouTube search for the Drive-By Truckers' song 'Gravity's Gone'.
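
Why does a search for an alt-country song surface climate denial? Under the hood, a recommender is largely a walk across a graph of "viewers of X also watched Y" edges. Here's a toy sketch with an entirely invented co-watch graph (none of these edges are real YouTube data):

```python
from collections import deque

# Entirely invented co-watch graph: an edge A -> B means "viewers of A
# were also shown B". Illustrative only; not real YouTube data.
CO_WATCH = {
    "Gravity's Gone": ["southern rock mix", "tour interview"],
    "southern rock mix": ["talk radio clips"],
    "talk radio clips": ["climate denial video"],
}

def hops(graph, start, target):
    """Minimum number of recommendation hops from start to target (BFS)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

print(hops(CO_WATCH, "Gravity's Gone", "climate denial video"))  # -> 3
```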

Guess we’re all only 3 degrees of separation from Climate Denial. Could automated surveillance go sideways? Nah.

