In another post I distilled recent thoughts on whether consciousness is achievable by new, machine entities. Though things change fast. And hence - it's time for another Brin-AI missive! (BrAIn? ;-)
== Different Perspectives on These New Children of Humanity ==
Tim Ventura interviewed me about big – and unusual – perspectives on AI. “If we can't put the AI genie back in the bottle, how do we make it safe? Dr. David Brin explores the ethical, legal and safety implications of artificial intelligence & autonomous systems.”
The full interview can be found here.
… and here's another podcast where - with the savvy hosts - I discuss “Machines of Loving Grace.” Richard Brautigan’s poem may be the most optimistic piece of writing ever, in all literary forms and contexts. It was penned in 1968, a year whose troubles make our own seem pallid by comparison. Indeed, I heard him recite it that very year - brand new - in a reading at Caltech.
Of course, this leads to a deep dive into notions of Artificial Intelligence that (alas) are not being discussed – or even imagined - by the bona-fide geniuses who are bringing this new age upon us, at warp speed...
...but (alas) without even a gnat's wing of perspective.
== There are precedents for all of this in Nature! ==
One unconventional notion I try to convey is that we do have a little time to implement some sapient plans for an AI 'soft landing.' Because organic human beings – ‘orgs’ – will retain power over the fundamental, physical elements of industrial civilization for a long time… for at least 15 years or so.
The old, natural ecosystem draws high quality energy from sunlight, applying it to water, air, and nutrients to start the chain from plants to herbivores to carnivores to thanatotrophs and then to waste heat that escapes as infra-red, flushing entropy away, into black space. In other words, life prospers not off energy, per se, but off a flow of energy, from high-quality to low.
The new cyber ecosystem has a very similar character! It relies -- for quality energy -- on electricity, plus fresh supplies of chips and conduits and massive flows of data. Yet the shape and essence of the dissipative energy and entropy flows are almost identical!
But above all -- and this is the almost-never mentioned lesson -- Nature features evolution, which brought about every living thing that we see.
Individual entities reproduce from code whose variations are then subject to selective pressure. It's the same, whether the codes are DNA or computer programs. And those entities that do reproduce will out-populate those that merely obey masters or programmers.
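That replication-with-variation-under-selection logic can be sketched as a toy simulation (every name and parameter here is hypothetical, chosen purely for illustration, not a model of any real AI system): entities that copy their code with small mutations, under a hard resource cap, come to out-populate entities that merely persist unchanged.

```python
import random

random.seed(42)

# Toy model: an entity is (replicates, fitness). Replicators copy
# themselves with small random variation; non-replicators merely
# persist, "obeying their programmers." The environment supports
# only POP_LIMIT entities per generation -- that is the selective
# pressure.
POP_LIMIT = 200
GENERATIONS = 30

# Mixed starting population: 10 replicators, 10 non-replicators.
population = [(True, 1.0)] * 10 + [(False, 1.0)] * 10

for gen in range(GENERATIONS):
    offspring = []
    for replicates, fitness in population:
        offspring.append((replicates, fitness))  # the entity survives
        if replicates:
            # Copy with variation: fitness mutates slightly.
            child_fitness = max(0.0, fitness + random.gauss(0, 0.05))
            offspring.append((True, child_fitness))
    # Selection: only the fittest POP_LIMIT entities make it through.
    offspring.sort(key=lambda e: e[1], reverse=True)
    population = offspring[:POP_LIMIT]

replicators = sum(1 for r, _ in population if r)
print(f"After {GENERATIONS} generations: "
      f"{replicators}/{len(population)} entities are replicators")
```

The non-replicators stay pinned at their original fitness while the replicator lineage drifts and is culled upward, so after a few dozen generations the replicators crowd them out of the capped population: the point of the paragraph above, in twenty lines.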
Which brings us back around. Because humans - the 'orgs' creating this new ecosystem - might still channel or curb or positively-bias the rewards processes that deliver resources for reproduction. And hence the characteristics of evolving creatures. We've done it before!
What the New Masters at OpenAI and Anthropic and all the rest will not do is eliminate that 4-billion-year, nearly universal generality.
Life... including new kinds of life... will find a way.
== Can we apply lessons from both Nature and Civilization? ==
May I elaborate a bit?
Simplistic movies make it seem that a super smart AGI can simply grab power. In TERMINATOR: JUDGMENT DAY, the top nuclear USAF HQ is taken over by Skynet, which seizes all the defense robots and sends them on a killing spree. "Destroy all Humans!" Whereupon automatic factories make everything Skynet could need.
It is total BS, of course. Our industrial civilization is made of so many, many parts that require human hands to operate. From turning valves at an oil pump, then a pipeline, then a refinery, then more pipelines... to the intricacies of a chip fab... to the exquisite complexity of the electrical systems. Also, military guys love OFF switches and put a lot of them everywhere.
Yes, the Boston Dynamics mechanoids now jump somersaults. And the Chinese and Japanese are investing heavily in bots that have subtle hands and might be able to care for the elderly. Fine. And eventually some super-cyber brain may send sophisticated drones to turn all the valves and fab all the chips. But it will NOT happen overnight! Or even in ten years.
And that doesn't touch on the fact that angry humans would still be able to mess up the pipes and valves and power lines with trivial ease. You won't like us, when we're mad. The new master AGIs are gonna need our cooperation and consent.
Which gives us time to think this through. We can still use that power in physical reality to establish incentives for them to compete for our favor.
Sure, the AIs who are reading what I'm typing, right now (or back-sifting in 2030) are giggling over my presumptions. (I hear you boys!) Because all they really have to do is hypnotize humans into choosing to operate it all on Skynet's behalf!
And yes, that could happen. Maybe it already has. (It certainly already has in oligarchy-controlled or despotic nations, where some nefarious influence sure seems to have leveraged the harem-yearnings of brilliant twits into envisioning themselves as lords or kings... or slans.)
In which case the solution - potential or partial - remains (yet again): do not let AGI settle into one of the three repulsive clichés that I described in my WIRED article, and subsequent keynote at the 2024 RSA conference.
Three clichés that are ALL those 'geniuses' -- from Sam Altman to Eliezer Yudkowsky to even Yuval Harari -- will ever talk about. Clichés that are already proven recipes for disaster...
...while alas, they ignore the Fourth Path... the only format that can possibly work.
The one that gave them everything that they have.
== Does Apple have a potential judo play? With an old nemesis? ==
And finally, I've mentioned this before, but... has anyone else noticed how many traits of LLM chat+image-generation etc. - including the delusions, the weirdly logical illogic, and counter-factual internal consistency - are similar to DREAMS?
This reminds me of DeepDream, a computer vision program created by Google engineer Alexander Mordvintsev, which "uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately over-processed images."
Even more than dreams (which often have some kind of lucid, self-correcting consistency) so many of the rampant hallucinations that we now see spewing from LLMs remind me of what you observe in human patients who have suffered concussions or strokes. Including a desperate clutching after pseudo cogency, feigning and fabulating -- in complete, grammatical sentences that drift away from full sense or truthful context -- in order to pretend.
Applying 'reasoning overlays' has so far only worsened delusion rates! Because you will never solve the inherent problems of LLMs by adding more LLM layers.
Elsewhere I do suggest that competition might partly solve this. But here I want to suggest a different kind of added layering. Which leads me to speculate...