Okay, despite wars and bugs and politician indictments, what crisis is obsessing so many right now?
Of course it's artificial intelligence, or AI, slamming us in ways both long-predicted and surprising. Indeed, there are already paeans that by December GPT5 will achieve "AGI" or genuine Artificial GENERAL Intelligence. And yes, there's The Great Big Moratorium Petition that I refer to, below.
Let the hand-wringing commence!
-- *** Sunday note: In just the two days since I posted this, waves of wailing and doomcasting have filled my in-boxes, while never showing any sign that any of today's vaunted AI mavens has ever read any cogent science fiction, let alone perused a single history textbook.
If they had, they might see some familiarity in this crisis and ask basic questions. Like whether there are any methodologies to try - either in SF or the past - other than jeremiads of cliches.
Rather than bemoan this in a fresh posting, I'll append a few late thoughts at-bottom. *** --
Alas, I must respond at two levels. First: it's not even remotely possible that these Chat programs will achieve AGI, this round. I don't even have to invoke Roger Penrose's "quantum basis for consciousness" arguments to refute such claims. As I'll explain much further below, this is about fundamental methodologies.
But second - and far more important - it doesn't matter!
More than half a century after a crude 'conversational' program called "Eliza" transfixed the gullible, we now have ChatGPT-4 passing all but a few Turing Tests, with dire projections sloshing-about for December's release of GPT-5. Furthermore, AI art programs dazzle! Voices and videos are faked! Jeremiads of doom write upon walls!
What I do know is that the biggest danger right now - manifesting before our very eyes - is the hysteria and unwise gestures demanded by a clade that includes some friends of mine -- who are now behaving in a manner well-described by Louisiana Senator John Kennedy*, as "high IQ stupidity."
Recent, panicky petitions have been issued by the likes of Jaron Lanier, Yuval Harari, Sam Altman, Gary Marcus, Elon Musk, Steve Wozniak and over a hundred other well-known savants, calling for a futile, counterproductive moratorium – an unenforceable “training pause.”
Eliezer Yudkowsky even goes ever farther, calling for an outright ban, crying out:
"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."
== Oh, my, where to begin ==
First, any freeze on AI research would only affect open, responsible universities, companies and institutions who give a damn about heeding such calls. Hence, it would hand over a huge boost-advantage - a head start - to secret labs all over the globe, where ethics-are-for-suckers.
(Especially the most grotesquely dangerous AI researchers of all: Wall Street developers of HFT-bots, deliberately programmed to be feral, predatory, amoral, secretive and utterly insatiable - the embedded five laws of parasitical robotics.)
The very idea that many of humanity's smartest are calling for such a 'research-pause' - and actually believing (without a single historical example) that it could work - is strong evidence that human intelligence, even at the very top, might need some external augmentation!
(Such augmentation may be on its way! See my description below of Reid Hoffman's book Impromptu: Amplifying our humanity through AI.**)
Please. I'm not denigrating these folks for perceiving danger! Like Oppenheimer and Bethe and Szilard, after Trinity, anyone with three neurons can sense great danger here! But just like in the 1940s we need to look past simplistic, moralizing nostrums that stand zero chance of working, toward pragmatic solutions that already have a proved track record.
There is a potential route to the vaunted AI soft landing. It happens to be the same method that prevented atomic weapons from frying us all. It's the method that our Enlightenment Experiment used to escape 6000 years of miserable feudalism on all continents.
It happens to be the one and only method that enables us today to stay somewhat free and safe from super-brainy or powerful rivals, especially when assailed by one of those hyper-smart, predatory, machine-like entities called a lawyer.
It's the one path that could help us to navigate safely through all the fakes and spoofs and claims of AI sapience that lie ahead.
It can work, because it already has worked, for 200 years...
... and none of the smart guys out there will even talk about it.
== Some details: where do I think we stand in AI Turing metrics? ==
What is that secret sauce for human survival and thriving, when the time comes (inevitably) that our AI children far exceed our intelligence?
Well, I refer to it in an interview for Tim Ventura's terrific podcast. (He asks the best questions! And that's oft how I clarify my thoughts, under intense grilling.) But mostly, I dive into what we've been seeing in the recent 'chat-bot' furor.
Yes, it has triggered the "First Robotic Empathy Crisis," exactly at the time I forecast 6 years ago, though lacking a couple of traits that I predicted then - traits we'll doubtless see before the end of 2023.
In fact, the Chat-GPT/Bard/Bing bots are less-slick than I expected and their patterns of response surprisingly unsophisticated. Take the GPT4-generated sci fi stories that I've seen praised elsewhere... but that have been - so far - rather trite, even insipid and still at the skilled-amateur level. Oh, the basic mechanics are fine. But storytelling problems extend beyond just lack of any plot originality. Akin, alas, to many organic authors, what stands out is something very common among beginning (human) writers. Failure of understanding Point-of-View (POV).
Oh, some (not all) of those methods, too, will be rapidly emulated, well before December's expected arrival of GPT5. But I doubt all.
As for the much-bruited examples of 'abusive' or threatening or short-tempered exchanges well - "GPT-4 has been trained on lots and lots of malicious prompts — which users helpfully gave OpenAI over the last year or two. With these in mind, the new model is much better than its predecessors on “factuality, steerability, and refusing to go outside of guardrails.”
And yet, these controls are mostly 'externally' imposed rule sets, not arising from the language program gaining 'maturity.'
At which point I suddenly realized what it all reminds me of. It seems like...
...an elementary school playground, where precocious 3rd graders try to impress with verbose recitations of things they heard teachers or parents say, without grasping any context. It starts out eager and friendly and accommodating...
...but in some recent cases, the chatbot seems to get frantic, desperately pulling at ever more implausible threads and - finally - calling forth brutal stuff it once heard shouted by Uncle Zeke when he was drunk. And - following the metaphor a bit more - what makes the bot third grader frantic?
The common feature in most recent cases has been badgering by an insistent human user. (This is why Microsoft now limits Bing users to just five successive questions.)
Moreover the badgering itself usually has a playground-bully quality, as if the third grader is being chivvied by a taunting-bossy 6th grader who is impossible to please, no matter how many memorized tropes the kid tries. And yes, the Internet swarms with smug, immature (and often cruel) jerks, many of whom are poking hard at these language programs. A jerkiness that's a separate-but-related problem, I wrote about as early as Earth (1991) and The Transparent Society (1997) - and not a single proposed solution has even been tried.
Well, there's my metaphor for what I've been seeing and it is not a pretty one!
== Shall we fear the AI-per? ==
More? Normally, I'd break up a posting this long. But I suspect there's going to be a lot of this topic, for a while yet to come. For example:
To see where that fits in among SIX possible approaches to AI, here's my big monograph describing various types of AI. It also appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:
"ChatGPT now has eyes, ears, and internet access." Indeed, such senses may imply 'sentient'... a reason why I prefer the term 'sapience.'
Alas, as I said at the beginning of this lengthy posting, there are already paeans that GPT5 will achieve "AGI" or genuine Artificial GENERAL Intelligence. And I must respond...
...Not. Indeed, it's not remotely possible, this round. And I do not have to invoke Roger Penrose's "quantum basis for consciousness" arguments. This is about fundamental methodologies.
Sure, I expect these systems will, in many ways, satisfy nearly all Turing Tests and 'pass' for human, quite soon, provoking dilemmas raised in scifi for 70 years and possibly triggering crises... as did every previous advancement in human knowledge, vision and attention going back to the printing press. (Elsewhere I talk about the one tool likely to help us navigate those dilemmas. A tool almost no AI mavens will ever, ever talk about.)
And no, that still won't be 'sapience.' There's a basic reason.
Recall that - as Stephen Wolfram points out - these learning system emulators use vast data sets in much the same way that the 1960s "Eliza" program used primitive lookup tables. These programs still construct sentences additively, word by word, according to now-ornately-sophisticated, evolved probability patterns. It's terribly impressive! But that leap in functional language-use bypassed even the theoretical potential of things like understanding or actual planning.
Part 2: Questions & Answers about Artificial Intelligence.
And no, however many millions leap to accept passage of Turing Tests, this is not (yet) sapience.
Or at least, that is what my AI clients hire me to tell you....
== Later notes ==
I have been trying to get any of the mavens in this topic area to pause - even once - and look at a source of wisdom that's called HUMAN HISTORY... especially the last 200 years of an enlightenment experiment that managed to quell earlier waves of powerfully abusive beings called kings, lords, priests and lawyers! All of our fears about Artificial Beings boil down to dread that those 6000 years of oppression might return, imposed by new oligarchies of high IQ machines.
It is an old problem, with hard-won solutions generated by folks who I can now see were much smarter than today's purported genius-seers.
Alas, no one seems remotely interested in looking at HOW we achieved that miracle, or how to go about applying it afresh, to new, cyber lords...
...by breaking up power into reciprocally competing units and inciting that competition to be positive sum. We did it - albeit imperfectly - in those 5 adversarial and competitive ARENAS I keep talking about... Markets, Democracy, Science, Justice Courts and Sports.
Is that solution perfect? Heck no! Cheaters try to ruin all five, all the time! But we have managed, so far. And it is the only method that ever quelled cheating. I've only been pointing at this fundamental for 25 years. And it could work with AI.
In fact it is intrinsically the only thing that even possibly CAN work...
...and no one seems to be remotely interested. Alas.
Double alas... that on rare occsion, someone pauses long enough to get the notion, posts about it without mentioning the source, then drops it when folks go "huh?"
Maybe we actualy do deserve the dismissive slur our AI children will have for us. Dumb-ass apes.
* Just so we're clear, I deem this senator to be a lying monster and horror, who had no folksy southern drawl back when he was a Rhodes' Scholar at elite universities, and whose participation in the oligarchy's all-out war against all fact-using professions is tantamount to treason.
** It'll have to be next time that I get to this: Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of Linked-In). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly - go right. That we might see a future when AI is not a threat, but a partner. More next time.
** *Try reading Adam Smith, Thomas Paine, Madison and the founders and Eleanor Roosevelt. Of all the Bill of Rights, the most important amendment was not the oft-touted 1st or 5th or 2nd... it is the vital 6th that gave us the powers I describe above. Someday you may rely upon it. Understand it. (See my posting: The Transparency Amendment: The Under-appreciated Sixth Amendment.)