We'll get to love-bots soon, I promise. But before that... By invitation, I've been jabbering a lot lately about AI - artificial intelligence - partly from perspectives of science and technology... and of course the deep library of thoughtful SciFi speculations...
... but also by asking "What insights can we draw from history?"
Especially the recent Enlightenment Experiment, whose methods have proved useful (at long last) at taming some of the worst human predators. Might those same methods also apply to these new, powerful, synthetic entities?
Alas, it seems that the geniuses who are racing each other to bring on this disruptive era aren't even remotely interested in anything but cheap clichés from simplistic sci fi and fantasy flicks. See a prime example in the next section.
First though, this linked news article covers - not too badly - some of those clichés that I exposed at the recent "Beneficial AI" conference in Panama.
This artwork actually kinda-sorta captures what I am suggesting: rule-moderated and incentivized competition.
A deeper dive – that I offered as one of the keynotes at the huge, May 2024 RSA Conference in San Francisco – is now available for you to view/listen. “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”
Too long? Then try this brief (15-minute) podcast interview talk - Can humans (maybe redefined) keep up with AI and the rest? It's one of my better/more efficient ones about the dilemmas we face with AI. And how we might augment natural human capabilities to keep up.
== A few thoughtful essays on AI ==
Eric Schmidt describes three trends that may lead to rapid changes in Artificial Intelligence. One of these is enhanced agency, a word that's also proclaimed loudly as the next step by OpenAI’s Sam Altman. An agent can be understood as a large language model that can learn something new and then apply that learning outward. This is from Nathan Gardels’s interview of Schmidt in NOEMA:
“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there… What happens then poses a lot of issues. Here we get into the questions raised by science fiction… at some point, these systems will get powerful enough that the agents will start to work together. So, your agent, my agent, her agent and his agent will all combine to solve a new problem.”
So... there'll be "millions of them"... and they’ll “work together.”
Um... hold that thought.
(Side note: Anthropic is releasing a new feature for its AI chatbot Claude that will let anyone create an email assistant-bot to automate tasks, vet purchases, or offer other ‘personalized solutions.’ Though alas, no one discusses how this will – for example – affect advertising, which funds the internet.)
Back to the Eric Schmidt interview. Nathan Gardels asks: “Don’t you need to regulate at some point along the capability ladder before you get where you don’t want to go?”
Schmidt: “At the moment, governments have mostly been doing the right thing. They’ve set up trust and safety institutes to learn how to measure and continuously monitor and check ongoing developments, especially of frontier models as they move up the capability ladder. So, as long as the companies are well-run Western companies, with shareholders and exposure to lawsuits, all that will be fine."
Oy, hold that thought, as well! And yet, despite this Pollyanna reassurance, Schmidt oscillates:
“Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. … That is why it is so important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation. And that problem is not yet solved."
'Limits on proliferation.' Right. But... but... weren't you just talking about "millions of them"?
Ah, notice how Eric Schmidt performed the trifecta! All three of the standard assumed formats for AI -- all three of the clichés from both history and the cheapest sci fi -- offered up almost simultaneously!
Clichéd format #1: Rely on the feudal lords in their castles (Google, OpenAI, Microsoft, Beijing, DOD, Goldman Sachs) to rule wisely and to prudently control their warriors – on account of maybe... fear of lawsuits?
Clichéd format #2. Count on the king (gov’t regulations) to keep up, with wise regulations, even as those feudal AI warriors gather the powers of gods. With self-mods that iterate in seconds.
Clichéd format #3. Proliferation as these entities inevitably get copied, copy themselves with abandon, and mutate away from control by kings or castle (corporate) lords, spreading through every crack, across every system or barrier. As in Steve McQueen’s wonderful horror flick, The Blob.
… at which point they might either blob the Web into uselessness or else… coalesce into Skynet.
Seriously, is he saying anything other than all three of the standard motifs that I just paraphrased, above? Ignoring the way they contradict one another and create instabilities that cannot last?
Moreover, from both sci fi and 6000 years of wretched human history, we know that none of the three have happy outcomes. None of the three answer our dilemmas of misinformation, or predation, or creating a culture that incentivizes accountability.
Oh, I don't want to just pick on Eric Schmidt! He is actually way above average in that clade. In fact, this same recitation of tediously clichéd formats is done by almost all of the brilliant mavens in this field.
For example, many Chinese court intellectuals have pondered all this and concluded that the only way out is to double and triple down on format #2. (See my posting: Central Control over AI). But this centralization approach is similarly doomed. Even if they succeed at first, it only accelerates inevitable evolution from Politburo-controls-Skynet to Skynet-controls-Politburo.
Alas, no one – certainly not Eric Schmidt in the Noema interview – seems even remotely interested in looking past the three clichés, at systems of accountability that we actually developed, across the last two centuries, that enabled us to finally escape the lobotomizing effects of kings and feudal lords and chaos.
Methods that have been tested and proved. Methods that we see all about us, in daily life. Methods that created the civilization that raised and nurtured and empowered Eric Schmidt and all the other geniuses out there. Methods that they take for granted and depend upon, every day of their lives.
Methods that could be applied to AI, with almost trivial ease.
…but won't be. Because of the incredible memetic power of clichés.
== Earlier in the same journal, a little wisdom ==
A bit more cogently, Sara Walker’s essay - AI is Life - also in Noema - shows many of the ways that AI will replicate what’s already gone on here on Earth among living organisms, evolving greater complexity. She cites a lot of facts and parallels... without making much of a useful point beyond “Don’t Panic!”
Still, it’s beautiful writing about big perspectives. (Perspectives that I made even bigger, in EARTH.;-)
Alas, the history of life has been bumpier than she implies, with mass extinctions and imbalances and countless lost opportunities -- and rivers of blood and death -- as also happened in the rutted, nearly-always-feudal and mistake-prone tale of human societies called “history.”
Now? It seems we are making a new kind of ecosystem, driven by electricity instead of sunlight, mediated by silicon switches instead of chloroplasts. Already we see analogues to pre-biotic ‘soup’ and primitive plankton (algorithms floating across the Web), plus analogues of predatory devourers or parasites... all the way to the new GPT ersatz Voices-Without-Mind that I predicted, half a decade ago, would swarm over us… well… precisely now.
The Walker essay is lovely and calming and I recommend it. But Life’s ‘way’ is often bloody and nescient and we cannot afford to just let the genes fall where they may.
== But… b-but is it conscious or self-aware? ==
Um… does it matter?
No, seriously, there are some earnest efforts to ramp up the study of consciousness and what it means. Though such efforts have been around since well before that smelly old preener, Socrates, prattled annoyingly in the Academy. Indeed, in the 1980s I was managing editor for the Journal of the UCSD Laboratory and Center for Human Cognition.
A very brief outline of the overall problem can be found in this Scientific American essay “Why the Mystery of Consciousness Is Deeper Than We Thought.” Though it only touches on the quandaries, lightly.
Diving in far more deeply and thoroughly, Robert Lawrence Kuhn – for decades host of the Closer To Truth interview show – has just completed the magnum opus of consciousness studies! A broad survey of (pretty much) the whole field, summarizing more different theories than you could shake a meme at. If the topic of how you’re interested in things interests you, have a look at A Landscape of Consciousness. (A review was just published by IAI.)
Ray Kurzweil has a followup to The Singularity is Near, with... The Singularity is Nearer: When We Merge with AI, proposing that AI will achieve human-level intelligence by 2029.
Okay, where are those Machines of Loving Grace?
== Oh, wait... 'love bots' ==
You expected that to be about sex?
Well, no room this time. Maybe in my next posting about our coming AI-enhanced future.