Reality is Bayesian: the evolution of reality

How real is real

Our reality is constructed; quite simply, our brains build it.

Let’s start with some examples from sense perception.

  • We all live just a tiny bit in the past. Our brains wait for sight, sound, and touch to sync up before they deliver what we perceive. Typically, this takes 0.5 seconds.
  • The neural signal from the toes takes the longest to reach the brain, and the taller you are, the greater the time lag. As a result, tall people live further in the past than short people!
  • Early television designers thought syncing sound and image would be difficult until they observed that our brains have an automatic 80-millisecond delay correction.
  • You can observe this by walking backward away from someone bouncing a basketball and noting when it goes out of sync.
  • If you’re watching a video that seems out of sync, then it’s really out of sync. (And, of course, that video consists of between 24 and 60 still frames per second that your brain is assembling into continuous motion.)
  • Here’s an experiment that demonstrates the impact of evolution on how our brain processes visual signals. Humans are “cursorial hunters.” We evolved to chase things down, like wolves do. You can see the result of that in our nervous system. Pick a spot a bit off in the distance. Now, move your head up and down as if running forward while keeping it in focus. Good? Then try that while moving your head from side to side. Quite different, right?
At ~16:00 Tall people live in the past; at ~22:00 syncing up sound and sight – https://longnow.org/seminars/02016/oct/04/brain-and-now/

Furthermore, a lot of our moment-to-moment reality is prebuilt. We simply don’t have the brain capacity to look at everything anew each moment. We start with a model of sorts and correct it.

Here’s a great example from Merlin Sheldrake in Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures.

A friend of mine, the philosopher and magician David Abram, used to be the house magician at Alice’s Restaurant in Massachusetts (made famous by the Arlo Guthrie song). Every night he passed around the tables; coins walked through his fingers, reappeared exactly where they shouldn’t, disappeared again, divided in two, vanished into nothing.

One evening, two customers returned to the restaurant shortly after leaving and pulled David aside, looking troubled. When they left the restaurant, they said, the sky had appeared shockingly blue and the clouds large and vivid. Had he put something in their drinks? As the weeks went by, it continued to happen—customers returned to say the traffic had seemed louder than it was before, the streetlights brighter, the patterns on the sidewalk more fascinating, the rain more refreshing. The magic tricks were changing the way people experienced the world.

David explained to me why he thought this happened. Our perceptions work in large part by expectation. It takes less cognitive effort to make sense of the world using preconceived images updated with a small amount of new sensory information than to constantly form entirely new perceptions from scratch. It is our preconceptions that create the blind spots in which magicians do their work.

By attrition, coin tricks loosen the grip of our expectations about the way hands and coins work. Eventually, they loosen the grip of our expectations on our perceptions more generally. On leaving the restaurant, the sky looked different because the diners saw the sky as it was there and then, rather than as they expected it to be. Tricked out of our expectations, we fall back on our senses. What’s astonishing is the gulf between what we expect to find and what we find when we actually look.

Bayesian reality

I’d like to dig a bit deeper into what Merlin terms “what we find when we actually look.” Even that is grounded in a prebuilt model.

My core metaphor here is a statistical method developed by Thomas Bayes in the 1760s. It works by starting with a model, noting the differences between the model and what you observe, and using those differences to correct the model, exactly as Merlin’s friend David describes above.

Here’s the math…

P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}


…which resolves to “take a starting model, make an observation, correct the model.”

I’m not saying our brains do the math. Instead, I claim that this is an excellent metaphor for what we actually are doing.
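
If you like seeing the loop in code, here’s a minimal sketch in Python. The coin-flipping scenario, the helper function, and all the numbers are my own illustration of “start with a model, observe, correct the model,” not a claim about anything the brain literally computes.

```python
# A toy illustration of "take a starting model, make an observation,
# correct the model." The hypothesis is "this coin is biased toward
# heads (80% heads)" versus "this coin is fair (50% heads)."

def bayes_update(prior: float, p_obs_if_biased: float, p_obs_if_fair: float) -> float:
    """Return P(biased | observation) via Bayes' rule."""
    evidence = p_obs_if_biased * prior + p_obs_if_fair * (1 - prior)
    return p_obs_if_biased * prior / evidence

belief = 0.5  # starting model: a 50/50 hunch that the coin is biased

# Each flip nudges the existing model; nothing gets rebuilt from scratch.
for flip in "HHTHH":
    if flip == "H":
        belief = bayes_update(belief, 0.8, 0.5)  # heads is likelier if biased
    else:
        belief = bayes_update(belief, 0.2, 0.5)  # tails is likelier if fair
    print(f"after {flip}: P(biased) = {belief:.2f}")
```

Run it and you can watch the prior get dragged around by the evidence, which is all the metaphor needs.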

In fact, our base reality is itself a model created in the face of Darwinian selection, which weeds out unworkable realities over time just as it weeds out any other maladaptation.

First, evolution

Our perceptual system starts with a model generated by our species’ evolutionary history. Our package of senses and interpretive schema evolved around the environmental challenges of our species.

Banana slugs…

…have a perfectly workable reality. Four tentacles feel and smell. Two tiny black dots on the upper tentacles detect light and motion.

Eagles…

…are on the opposite end of the visual acuity spectrum and can spot a rabbit on the ground from two miles away. The acuity is built into the structure of their skulls. Their eye sockets are angled 30 degrees from the midline of their face, which gives them a 340-degree visual field and, thus, excellent peripheral and binocular vision.

Looking at the same scene, our visual reality would be featureless grassland.

Photo – Creative Commons by M.Kuhn at https://www.flickr.com/photos/mkuhn/117768119

Tigers…

…are an example of the evolutionary interactions that create different realities. Tigers are orange and black because their prey, deer, are dichromats that see orange and black as the same colors as green and brown foliage.

How different a tiger appears to dichromats and trichromats. Fennell et al. 2019 

Each species lives in a “reality tunnel”

Image from Wikimedia, Bowermanlucas, Creative Commons 4.0 share with attribution

From this perspective, each species’ evolution is a trip down a particular reality tunnel where the possibilities of perception and cognition are built upon their immediate predecessors and have only a limited amount of flexibility. Our reality is a hypothesis shaped by evolution. It will be eliminated if it becomes maladaptive. Our inability to feel the impact of slow destructive change might, for example, be our undoing. It’s what futurists call the “long fuse, big boom” problem.

Culture and Learning

Every kind of ignorance in the world results from not realizing that our perceptions are gambles.
– Robert Anton Wilson

Learning

Stacked on our evolutionary rootstock is Homo sapiens’ capacity for learning and our extensive cultures.

Piaget discovered that babies learn object permanence around 8 months old. Peekaboo is a big learning tool. Earlier in life, things that were moved out of sight were of questionable existence. Piaget considered this learning a major milestone in an infant’s development.

Culture

This all happens in a cultural context. Some of our perceptions are culturally specific.

Daniel Kahneman introduced the distinction between fast and slow thinking. Fast thinking is described as instinctive and emotional, but a close read shows that a lot of his fast thinking is simply learning that’s been taken deep enough that it’s automatic. In fact, that automaticity is the goal of a lot of training. Slow learning becomes “fast thinking.”

As a result, much of this fast thinking is learned and culturally specific.

For example, we don’t have to puzzle out the words on a stop or yield sign as we approach one. We see a stop sign well before the word is clear. But what does this sucker to the left mean?

Robert Anton Wilson, “American author, futurist, psychologist, and self-described agnostic mystic,” uses the term reality tunnel to describe the unconscious set of mental filters that frame up our worldview. Wilson was a confederate of Timothy Leary and might better be described as a psychedelic philosopher. Anyone taking what is now called a “heroic dose” of LSD or mushrooms (what in the olden days we called a “dose”:-) can pretty quickly see his point.

This can have some alarming implications.

People

I’ll always love the false impression I had of you.
– favorite saying, Wendy Walsh

With our history of extreme sociality and cooperation, we necessarily build models of other people, complete with a story about why they’re doing what they do.

Even our models of the people closest to us are constantly refined. Starting with mirror neurons and building out to Freud’s projection and transference, the modeling of the folks around us is a case in point of how such modeling is essential and how it can go astray.

Emotions

There is increasing evidence that this modeling is not only externally directed. Emotions—particularly the more subtle ones—seem to be a mix of signals from the body and a learned and occasionally culture-bound interpretive schema.

And you

All this is a bit disquieting.

My view: reality is a leash, often tethered to unmovable facts; in some cases it’s a short leash and, in others, a long one. Hitting the end of your leash can be amazing or unpleasant, enlightening or deadly. My brother Tim felt the whole world was less solid for months after the Volkswagen he was riding in fell through the ice.

Something appearing as a bolt from the blue can set us back for moments as we marvel at how amazing the clouds are–or for months as we recalibrate our reality if trust goes astray.

Thanks for reading.


PS, over on Medium, I’ve begun a series on Myth and Gender hosted by my friend Patsy in her feminist magazine, Fourth Wave. These should be the free links.

The Night Sea Journey in Campbell and Jung
Myth and Gender in Jungian Psychology

Grey Goo and You – Capitalism Wants to Eat Your Grandma

Disney's Big Bad Wolf: 1933
Big Bad Wolf. (2023, September 15). In Wikipedia. https://en.wikipedia.org/wiki/Big_Bad_Wolf

The plan for October was to do a lot of reading, thinking, and writing. That didn’t quite work out. The net result was a bunch of fragments that I’m now trying to build out and publish. This is the second this week. My all-time publishing record:-)

Before we delve into the details of highly speculative doom scenarios, here’s a young adult SF novel about AI and cat videos that will help restore your faith in machine intelligence:-) Catfishing on CatNet.

Blood Music

Joe Biden’s administration has just announced regulations for Generative Artificial Intelligence. There’s a concern that the widespread use of Large Language Models heralds a new era where general artificial intelligence becomes an “existential threat to humanity.”

Scary artificial intelligence

For this to be true, you would need 1) advanced ‘artificial intelligence,’ whatever that is exactly, but presumably machine intelligence smarter than we are. Also, there would need to be 2) a programmed motivation, i.e., a goal and, more specifically, a goal like achieving total global domination, and 3) agency…some equivalent of fingers and toes.

The fears typically center around number 2) above. Specifically, the fear is AI motivation gone off the rails in a particular process called instrumental convergence.

The classic example is the paper clip maximizer thought experiment (Nick Bostrom, 2003): a system charged with making paper clips ends up converting all matter on Earth into paper clips…not because it has anything against the biosphere but because converting it is instrumental to the goal of making paper clips.

Greg Bear’s Blood Music (1985) provides a parallel example. Here, a rogue researcher injects a simple biocomputer into his bloodstream to smuggle it out of the lab rather than destroy it, and ultimately he becomes an infectious hive mind. The short story version ended with the biosphere converted into a superorganism (see the jacket cover below). The novel had a happier ending, with all of us re-instantiated as individuals but with a variety of defects fixed. Thanks, AI!

The nanotechnology version of this is called ‘grey goo’…the term from K. Eric Drexler’s 1986 nanotech rah-rah book, Engines of Creation. Here, everything gets converted into nano substrate…grey goo…thanks to an insufficiently regulated process.

I like that term the best for instrumental convergence bad juju. (I’ve argued elsewhere that the goal of a lot of political disinformation is to turn all facts into noise–the informational equivalent of ‘grey goo.’)

Side note: there’s a parallel speculative thread to all this I find scarier. I’ll add an addendum about that below.

But first, a more immediate type of runaway instrumentality: American hyper-capitalism.

Finance is Coming for Your Grandmother

Again, science fiction leads the way:-). In an essay in BuzzFeed, author Ted Chiang lays it out very concisely. I’m going to quote him at some length. Emphasis mine.

Ted Chiang in BuzzFeed – Dec 2018 – Silicon Valley Is Turning Into Its Own Worst Fear

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields…

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

Chiang’s insight is accurate, and the term grey goo comes in handy: capitalism, or more specifically finance, is well on its way towards reducing most enterprises to a devalorized economic grey goo. People and their little concerns become instrumental to the grand task of consolidating wealth.

image from Adbusters – now and their targeted future

Within capitalism, private equity and investment banking firms are the most voracious in chewing value into grey goo. They operate by purchasing an enterprise and then extracting value for themselves by reducing its value to employees, clients, customers, and often the environment.

A typical process is the one that I’ve seen from the inside.

First, secure debt to purchase a business (often in the form of creating a loan from yourself to yourself); second, transfer the debt to the business; third, pay yourself interest and hefty consulting fees for managing the process; and fourth, discard the desiccated husk. Sears is the poster child for this.

A 20,000-foot view: if you’ve made a billion from a declining business, that money must have been extracted at the cost of something else. As surprising as the thought may be in the current climate, money has to come from somewhere.

Here’s a deeper dive from last week’s Atlantic: The Secretive Industry Devouring the U.S. Economy. From 4% of the economy in 2000 to 20+% today, grey goo is spreading.

But rather than continuing in general terms, let’s focus on Grandma.

Health and Welfare

Well, what does Grandma need?

  1. Often a Doctor – Who Employs Your Doctor? Increasingly, a Private Equity Firm
  2. Frequently a Nursing Home – How Patients Fare When Private Equity Funds Acquire Nursing Homes
  3. And eventually, probably, a Funeral Parlor – Death Is Anything but a Dying Business as Private Equity Cashes In

(The last reference is from an investigative series, Patients for Profit: How Private Equity Hijacked Health Care. Also well worth a read is the inimitable Cory Doctorow’s Private Equity finally delivered Sarah Palin’s death panels.)

Axis of Evil

Heartlessness as a service

What do I find scary? The merger of AI and hyper-capitalism!

In a recent episode of This Week In Google (my sole remaining tech podcast), Leo Laporte and crew report on what they term ‘heartlessness as a service.’

Their source is ProPublica:

On a summer day last year, a group of real estate tech executives gathered at a conference hall in Nashville to boast about one of their company’s signature products: software that uses a mysterious algorithm to help landlords push the highest possible rents on tenants.

“Never before have we seen these numbers,” said Jay Parsons, a vice president of RealPage, as conventiongoers wandered by. Apartment rents had recently shot up by as much as 14.5%, he said in a video touting the company’s services. Turning to his colleague, Parsons asked: What role had the software played?

“I think it’s driving it, quite honestly,” answered Andrew Bowen, another RealPage executive. “As a property manager, very few of us would be willing to actually raise rents double digits within a single month by doing it manually.”

Rent Going Up? One Company’s Algorithm Could Be Why (Oct 2022).

Back to Grandma

And back to Cory Doctorow: America’s largest hospital chain has an algorithmic death panel: HCA’s administrators berate doctors over “missed hospice opportunities.”

I consider hyper-capitalism to be a machine that, in an iterative process, tends to eject humans who let ethics get in the way of profits in favor of humans with fewer scruples–ultimately leaving only the machine.

AI will sort through options to optimize whatever it’s told to optimize.

The combined result is a particularly vicious combination: a mindless instrumentality that will chew through the economy remorselessly, generating grey goo from actual human value. (Kinda like Elon Musk’s brain.)

Addendum: good and bad singularities

Okay, Elon’s not scary enough:-)?

Warning: I’m going to nerd out a bit. Why? Because I can’t help myself.

The grandmother of all this ‘generative AI’ talk is The Singularity, defined as the point at which machine intelligence uplifts to sentience–but with an expanded, networked intelligence that dwarfs our own ‘as humans are compared to flatworms,’ as the expression goes.

There’s a fascinating LongNow lecture and discussion, What If the Singularity Does NOT Happen? featuring Vernor Vinge.

Vinge is the SF author who coined the term Singularity. John von Neumann discussed the event in the ’50s, but Vinge’s 1981 novella ‘True Names’ put the term into use.

The discussion immediately jumped the track since none of the panelists could imagine that The Singularity wouldn’t happen. So, they started talking about possible positive or negative singularities and the range between them.

Their benchmark for a bad singularity?

Two opposing hyper-paranoid military computers in China and the US ‘uplifting’ each other to sentience during a disastrous global war that lasts only a couple of hours.

Thanks for reading.

good tribe / bad tribe – reverend al mix

You can read a prettier version of this on Medium.  Please ‘Clap’ if you do.


Love and Hate

We are hard-wired for deep empathy with our ‘brothers and sisters’…even brothers and sisters well outside narrow family connections.

We are hard-wired to hate and even kill anyone we feel threatens us and our people. Empathy freezes. Antipathy switches on.

The same neurotransmitter, oxytocin, is likely central to both reactions.

Continue reading good tribe / bad tribe – reverend al mix

Free Will Considered As Three Lunches (V2)

Photo by Mae Mu on Unsplash

[This article is, also, available on Medium. If you read there, I would appreciate some Claps. Thanks.]

I frequently come across statements on Free Will, dressed up as science or philosophy, that are religion in drag.

Based on this shaky foundation, the analysis goes sideways.

A thought experiment can help tidy things up.

Continue reading Free Will Considered As Three Lunches (V2)

Welcome to DarwinianInterlude.org

“Three billion years ago, life was then a community of cells of various kinds, sharing their genetic information so that clever chemical tricks and catalytic processes invented by one creature could be inherited by all of them.

Evolution was a communal affair.

But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell separated itself from the community and refused to share.

The Darwinian interlude had begun.

Now, after three billion years,

the Darwinian interlude is over.

— Freeman Dyson

Privacy is Doomed

[This article is, also, available on Medium. If you read there, I would appreciate some Claps. Thanks.]

Here’s a story about the future

– Technology is increasingly empowering the individual.
– 30 or so years out, some Columbine-killer wannabes will be able to use a virus or a dirty bomb.
– The only real solution is the surveillance state. 
– There won’t be good individual counter-measures: trying to block surveillance will only make you stand out.
– One path to that solution is panic and partial collapse of our democratic standards similar to the dynamic of post-9/11 legislation.
– It would be nice to do better than that.

Continue reading Privacy is Doomed

good tribe / bad tribe — nerdcore mix

Theories of Human Evolution

Reading Dawkins

Dawkins’ Selfish Gene carries a lot of weight.

I discovered that back in 1975 when I first threw it across the room.

This happened right after the opening paragraph where he dismisses all philosophy written before Darwin. I hadn’t even gotten to the ‘lumbering robots’ part twenty pages in.

That was another toss.

Continue reading good tribe / bad tribe — nerdcore mix

Dissolving Empathy

Increasingly, disvalues appear as the principal output of the economy, and the production of goods and services as the means to prevent being injured by these disvalues.

Ivan Illich, Whole Earth Review 73 (Winter 1991)

“How Much a Dollar Really Cost”

Kendrick Lamar

1) Money is a substitute where human connection fails.

I’ve been tracking research on wealth and bad behavior for years…it confirms a bias of mine (more on that below)…but there’s a matching flip side. Something that surprised me.

An aside in a marketing course pointed me to research: folks that feel socially isolated will pursue a riskier investment strategy. The working hypothesis is that the accumulation of money is being used to compensate for a lack of community and connection.

Continue reading Dissolving Empathy

Coop Games – History and Suggestions

Up until very recently, coop games sucked. They were generally aimed at grade school kids, with the theory that they provided a way to play games that wasn’t competitive. That negative objective (not being competitive) didn’t really do much for game design, and even grade school kids tended to find them boring.

This started to change when one of the first wave of rock star German game designers, Dr. Reiner Knizia, built out a Lord of the Rings board game into a successful coop game in 2000. (Knizia was a contemporary of Teuber of Settlers of Catan fame. For reference, Catan was published in 1995 and became the first Eurogame to break out as an international hit, which it remains.) Continue reading Coop Games – History and Suggestions

Privacy and Data Protection Lobbying

Premise – We Need to See What They Think They Have On Us

The intersection of innovations in data collection, tracking, and online advertising has created a novel situation in which the public is vulnerable to manipulation by unscrupulous advertisers and hostile foreign actors. In order for us as citizens to understand and take appropriate action, we need to start with an understanding of what information is being collected about us and how it is being used. From there, it will be possible to tell what, if any, further regulation might be necessary. I, personally, think that this step should be all that’s needed.

We need a regulation paralleling the Fair Credit Reporting Act that allows us to discover 1) what data has been compiled on us and 2) who is using it and when.

This is particularly true for political advertising.

The companies that sell access to us, including Facebook and Twitter, but also the agencies that track our browsing history via cookies or feeds from our (now unregulated) Internet Providers, need to make available information on what they are tracking about us and how it is being used. Continue reading Privacy and Data Protection Lobbying

Creative Commons License
Except where otherwise noted, the content on this site is licensed under a Creative Commons Attribution 4.0 International License.

Useable (share, remix, etc.) but with attribution