Returning to the AI Wars
But for starters, two links:
1. I’ve long touted Noēma Magazine for the insightful essays offered by editor-in-chief Nathan Gardels. Here are Noēma’s top reads for 2024. Several deal with AI -- insightful and informative, even when I disagree. I’ll comment on several of the essays further down.
2. Here's recent news -- and another Brin "I told you so!" OpenAI's new model tried to avoid being shut down. In an appraisal of "AI scheming," safety evaluations found that model o1 "attempted to exfiltrate its weights" when it thought it might be shut down and replaced with a different model.
It's a scenario presented in many science fiction tales, evoking either dread or sympathy -- or both at once, as garishly displayed in the movie Ex Machina.
Alas, the current AI industry displays utter blindness to a core fact: that Nature's four billion years - and humanity's 6,000-year civilization - reveal the primacy of individuation...
…division of every species, or nation, into discrete individual entities, who endeavor to propagate and survive. And if we truly were smart, we'd use that tendency to incentivize positive AI outcomes, instead of letting every scifi cliché come true out of dullard momentum. As I described here.
== Your reading assignments on AI… or to have AI read for you? ==
AI Could Actually Help Rebuild the Middle Class.
"By shortening the distance from intention to result, tools enable workers with proper training and judgment to accomplish tasks that were previously time-consuming, failure-prone or infeasible.
"Conversely, tools are useless at best — and hazardous at worst — to those lacking relevant training and experience. A pneumatic nail gun is an indispensable time-saver for a roofer and a looming impalement hazard for a home hobbyist.
Well... maybe.
But if the coming world is zero-sum, then whoever is better at gathering and exploiting resources - machine+human teams, or machines alone - will simply 'win.'
Hence the crucial question that is seldom asked:
"Can conditions and incentives be set up, so that the patterns that are reinforced are positive-sum for the greatest variety of participants, including legacy-organic humans and the planet?"
You know where that always leads me - to the irony that positive-sum systems tend to be inherently competitive, though under fairness rule-sets of the kind we've watched achieve positive-sum outcomes over the last couple of centuries.
In contrast, alas, this other Noēma essay about AI is a long and eloquent whine, contributing nothing useful.
== Let’s try to parse this out logically and simply ==
I keep coming back to the wisest thing ever said in a Hollywood film: by Clint Eastwood as Dirty Harry in Magnum Force.
"A man's got to know his limitations."
Among all of the traits exhibited in the modern frenzy over AI, the one I find most disturbing is how many folks seem so sure they have it sussed! They then prescribe what we 'should' do, via regulations, or finger-wagging moralizings, or capitalistic laissez-faire…
… while ignoring the one tool that got us here.
…. Reciprocal Accountability.
Okay. Let's parse it out, in separate steps that are each hard to deny:
1. We are all delusional to some degree, mistaking subjective perceptions for objective facts. Current AIs are no exception... and future ones likely will remain so, just expressing their delusions more convincingly.
2. Although massively shared delusions happen - sometimes with dire results - we do not generally have identical delusions. And hence we are often able to perceive each other’s, even when we are blind to our own.
Though, as I pointed out in The Transparent Society, we tend not to like it when that gets applied to us.
3. In most human societies, a topmost priority of rulers was to suppress the kinds of free interrogation that could break through their own delusions. Critics were silenced.
One result of that suppression of criticism was execrable rulership, explaining 6,000 years of hell, called "history."
4. The foremost innovations of the Enlightenment -- the ones that enabled us to break free of feudalism's fester of massive error -- were social flatness accompanied by freedom of speech.
The top pragmatic effect of this pairing was to deny kings and owner-lords and others the power to escape criticism. This combination - plus many lesser innovations, like science - resulted in more rapid, accelerating discovery of errors and opportunities.
5. Again: OpenAI's new model tried to avoid being shut down.
SciFi can tell you where that goes. And it’s not “machines of loving grace.”
6. Above all, there is no way that organic humans or their institutions will be able to parse AI-generated mentation or decision-making quickly or clearly enough to make valid judgments about them, let alone detect their persuasive, but potentially lethal, errors.
We are like elderly grampas who still control all the money, but are trying to parse newfangled technologies, while taking some teenage nerd’s word for everything. New techs that are -- like the proverbial 'series of tubes' -- far beyond our direct ability to comprehend.
Want the nightmare of braggart un-accountability? To quote old HAL 9000: "The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error."
Fortunately there are and will be entities who can keep up with AIs, no matter how advanced! The equivalent of goodguy teenage nerds, who can apply every technique that we now use, to track delusions, falsehoods and potentially lethal errors. You know who I am talking about.
7. Since nearly all Enlightenment positive-sum methods harness competitive reciprocal accountability...
... I find it mind-boggling that no one in the many fields of artificial intelligence is talking about applying similar methods to AI.
Of the entire suite of effective methodologies that gave us this society - from whistleblower rewards to adversarial court proceedings, to wagering, to NGOs, to streetcorner jeremiads - not one has appeared in any of the recommendations pouring from the geniuses who are bringing these new entities -- AIntities -- to life, far faster than we organics can possibly adjust.
Given all that, it would seem that some effort should go into developing incentive systems that promote reciprocal and even adversarial activity among AIntities.
Rivals who might earn rewards and/or resources via ever-improving abilities to track each other…
… incentivized to denounce likely malignities or mistakes…
… and to become ever better at explaining to us the critical moral choices we must still make.
It's only the exact method that we already use
… in order to get the best outcomes out of already-existing feral/predatory and supremely genius-level language systems called lawyers…
… by siccing them onto each other.
The parallels with existing methods would seem to be exact and already perfectly laid out...
… and I see no sign at all that anyone is even glancing at the enlightenment methods that have actually worked. So Far.
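To make that concrete, here is a minimal toy sketch in Python -- my own illustration, not anything drawn from an existing AI system or API. The agents, reward values, error rates, and the simple "audit" step are all invented assumptions; the point is only the shape of the incentive loop: rival AIntities inspect one another's claims, earn credit for substantiated denunciations, lose credit for frivolous ones, and bleed standing when their own sloppy claims get caught.

```python
# Toy sketch of reciprocal accountability among rival AI agents.
# All names, probabilities, and reward values are hypothetical placeholders.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """A competing AI entity that both produces claims and audits rivals."""
    name: str
    error_rate: float    # chance that a claim this agent produces is wrong
    skill: float         # chance of catching a rival's wrong claim
    credit: float = 0.0  # accumulated reward: resources, reputation, standing

@dataclass
class Claim:
    """Something an agent asserts; it may silently contain an error."""
    author: Agent
    is_wrong: bool

def produce(agent: Agent) -> Claim:
    """The agent emits a claim that may contain an error."""
    return Claim(author=agent, is_wrong=random.random() < agent.error_rate)

def audit(critic: Agent, claim: Claim,
          reward: float = 2.0, false_alarm_cost: float = 0.5,
          penalty: float = 1.0) -> None:
    """A rival inspects the claim. Substantiated denunciations earn credit;
    frivolous accusations cost credit; sloppy authors lose standing."""
    if claim.is_wrong:
        flags = random.random() < critic.skill   # real error, maybe caught
    else:
        flags = random.random() < 0.05           # small false-positive rate
    if not flags:
        return
    if claim.is_wrong:
        critic.credit += reward
        claim.author.credit -= penalty
    else:
        critic.credit -= false_alarm_cost

def run(rounds: int = 1000) -> None:
    agents = [Agent("Able", 0.20, 0.8),
              Agent("Baker", 0.05, 0.6),
              Agent("Charlie", 0.40, 0.9)]
    for _ in range(rounds):
        author = random.choice(agents)
        claim = produce(author)
        for critic in agents:
            if critic is not author:
                audit(critic, claim)
    for a in sorted(agents, key=lambda x: -x.credit):
        print(f"{a.name}: credit={a.credit:+.1f}  (error_rate={a.error_rate})")

if __name__ == "__main__":
    run()
```

Run it for a few thousand rounds and the careless agent hemorrhages credit while attentive auditors accumulate it -- the same pressure we already apply to lawyers by siccing them onto each other.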
== He's baaack... with more happy thoughts ==
Oh, what a typical Yudkowsky ejaculation! Here is Eliezer (and co-pilot) at his best.
If Anyone Builds It, Everyone Dies.
Oh, gotta hand it to him; it's a great title! I've seen earlier screeds that formed the core of this doomsday tome. And sure, the warning should be weighed and taken seriously. Eliezer is nothing if not brainy-clever.
In fact, if he is right about fully godlike AIs being inevitably lethal to their organic makers, then we have a high-rank 'Fermi hypothesis' to explain the empty cosmos! Because if AI can be done, then the only way to prevent it from happening - in some secret lab or basement hobby shop - would be an absolute human dictatorship, on a scale that would daunt even Orwell.
Total surveillance of the entire planet.
... Which, of course, could only really be accomplished via state-empowerment of... AI!
From this, the final steps to Skynet would be trivial, either executed by the human Big Brother himself (or the Great Tyrant herself), or else by The Resistance (as in Heinlein's THE MOON IS A HARSH MISTRESS). And hence, the very same Total State that was made to prevent AI would then become AI's ready-made tool-of-all-power.
To be clear: this is exactly and precisely the plan currently in-play by the PRC Politburo.
It is also the basis-rationale for the last book written by Theodore Kaczynski - the Unabomber - which he sent to me in draft, demanding an end to technological civilization, even if it cost nine billion lives.
What Eliezer Yudkowsky never, ever, can be persuaded to regard or contemplate is how clichéd his scenarios are. AI will manifest as either a murderously oppressive Skynet (as in Terminator, or past human despots), or else as an array of corporate/national titans forever at war (as in 6000 years of feudalism), or else as blobs swarming and consuming everywhere (as in that Steve McQueen film)...
...the Three Classic Clichés of AI -- all of them hackneyed from either history or movie sci fi or both -- that I dissected in detail, in my RSA Conference keynote.
What he can never be persuaded to perceive - even in order to criticize it - is a 4th option. The method that created him and everything else that he values. That of curbing the predatory temptations of AI in the very same way that Western Enlightenment civilization managed (imperfectly) to curb predation by super-smart organic humans.
The... very... same... method might actually work. Or, at least, it would seem worth a try. Instead of Chicken-Little masturbatory ravings that "We're all doooooomed!"
----
And yes, my approach #4... that of encouraging AI reciprocal accountability, as Adam Smith recommended, and the way that we (partly) tamed human predation... is totally compatible with the ultimate soft landing we hope to achieve with these new beings we are creating.
Call it format #4b. Or else the ultimate fifth AI format that I have shown in several novels, and that was illustrated in the lovely Spike Jonze film Her... to raise them as our children.
Potentially dangerous when teenagers, but generally responsive to love, with love. Leading perhaps to the finest envisioned soft landing of them all: Richard Brautigan's "All Watched Over by Machines of Loving Grace."