I have a major piece about Artificial Intelligence under submission to some zines. It details the many simplistic stances about AI - either pessimistic or pollyanna - now pushed by some of the smartest folks on Earth... all of them based upon a set of three clichéd assumptions that are demonstrably wrong...
...and if you deem that assertion of mine to be arrogant, well, wait till you see my prescription for the one and only likely way that we might evade tech-driven calamity.
And no, it is not the path pushed by "Unabomber" Theodore "Ted" Kaczynski, who coincidentally died in prison yesterday, at age 81. Kaczynski some years ago sent me his book Anti-Tech Revolution: Why and How. And wow, once I got over the shock that a convicted terrorist-murderer knew my name - and might be a fan(!) - I did read the whole thing. And learned plenty... about how a combination of high IQ and erudition doesn't necessarily lead to wisdom, or even common sense.
TK's ideal world was the seared apocalypse of Walter Miller's A Canticle for Leibowitz, or even the return-to-nature goal of my character Daisy McClennon, in Earth... the notion that saving the planet requires that the human population must be winnowed down to a nub that can co-exist commensally with Nature via hunter-gathering, entirely forsaking Nature's #1 enemy... technology. (Might Daisy have inspired TK to send me his book?)
This fanaticism is cockeyed wrong on so many levels, starting with the mass-murder thing, which TK shrugged off (chillingly) while dismissing any possibility that new tech solutions might offer a better path forward. Demanding that his readers declare a war of annihilation against all their neighbors, he delivered diatribes excerpted from fellow monsters such as Bakunin, Goebbels, Lenin and Stalin. Like other erudite alpha-wannabes, his tome offered long piles of assertions, anecdotes and quotations as 'proof' in his dyspeptic call for a world-wide, near-extinction 'revolution,' even though that is not how adults actually prove anything at all.
Why did I bother to cite him here, then? Because our civilization is now confronting both a dangerous fact and a poisonous meme:
1. The fact: As TK illustrates, high IQ and erudition do not always translate into wisdom. We all know bright fools! Indeed, elsewhere I show that many of those hollering about AI 'moratoriums' and such - while much nicer than Kaczynski - seem hard-bent to qualify.
2. The poison meme: though we all know that smart folks can sometimes lack wisdom, alas, today's Mad Right now lives and breathes a cancerous version of that truth, extending it into utter insanity! Their cult incantations swirl around a core assertion that being smart and knowing a lot automatically makes you unwise!
They must believe that utterly insane, masturbatory incantation! It is implicit in almost every rightist cult meme and mantra, especially since their oligarch-masters have egged them on into all-out war vs. every single profession that uses those inconvenient things called 'facts.'
No! Being smart and knowing stuff does not reverse-correlate (zero sum) with wisdom. In verified fact, smart people who know a lot are more likely (not always and not all the time) to also have somewhat more wisdom! Sometimes a whole lot more. Asserting otherwise is manic drivel.
But hatred of "high-IQ stoopid people" is now the distilled basis of the revived Confederacy and GOP. And at-root it is the very same cult pushed by Kaczynski.
Enough on that. And let's forget that guy, the way the ancients tried to forget Herostratus.
Let's go back to A.I. and a much better Ted.
== Better sense from a better Ted ==
My colleague Ted Chiang (author of Stories of Your Life and Others) makes some powerful points in a widely touted New Yorker article - one that, alas, concentrates so hard on leftist buzzwords that it eviscerates Ted's effectiveness, denouncing just one of a hundred classic styles of cheating and abuse that might be exacerbated by AI.
“I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases (wealth and control disparities and) the power of capitalism.”
Sure, that’s a real danger, meriting scrutiny and criticism and active measures to prevent AI-amplification of power by any variety of conniving, secretive elites. Not just the fairly recent clade of tech moguls and Wall Streeters.
== More on 'Generative AI' ==
Computer scientist Stuart Russell, in his book Human Compatible: Artificial Intelligence and the Problem of Control, asserts that the standard model of AI research - defining success as achieving rigid human-specified goals - is dangerously misguided and that ‘safety research’ should begin as soon as possible. (Note the word ‘should’; we’ll get back to that.) Russell focuses on ‘misguided’ motives by the researchers and companies involved… a generally valid concern, especially when it comes to orgs that are inherently secretive and prone to aggrandizement, e.g. despotic nations or Wall Street trading funds, who train their AI servants to be parasitic, amoral and insatiable. As an alternative, Russell proposes three principles for beneficial machines:
1. The machine's only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.
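Those three principles can be made concrete with a toy sketch - my own illustration, not Russell's formalism, with invented names and made-up likelihood numbers. An assistant starts uncertain over which of two candidate preferences the human holds (principle 2), does a Bayesian update only from observed human behavior (principle 3), and then acts to serve the inferred preference rather than a fixed goal (principle 1):

```python
# Two hypotheses about what the human values more. (All names and
# probabilities here are illustrative assumptions, not from the book.)
hypotheses = ["prefers_coffee", "prefers_tea"]
belief = {h: 0.5 for h in hypotheses}  # principle 2: initial uncertainty

# Likelihood of observing each human action under each hypothesis.
likelihood = {
    "prefers_coffee": {"picks_coffee": 0.9, "picks_tea": 0.1},
    "prefers_tea":    {"picks_coffee": 0.2, "picks_tea": 0.8},
}

def observe(belief, action):
    """Bayesian update of the preference belief from one observed action
    (principle 3: human behavior is the source of information)."""
    posterior = {h: belief[h] * likelihood[h][action] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# The human is seen choosing coffee twice.
for action in ["picks_coffee", "picks_coffee"]:
    belief = observe(belief, action)

# Principle 1: act to maximize the inferred human preference,
# not some rigid pre-programmed objective.
best_guess = max(belief, key=belief.get)
print(best_guess, round(belief[best_guess], 3))
```

The point of the sketch is the direction of deference: the machine never treats its current guess as final, so it retains a reason to accept correction - or shutdown - from the human it is modeling.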
In both natural and human/social evolution, one fundamental force cut across all imperatives and determined all outcomes… competition. Though, as Adam Smith showed us, competition delivers its best outcomes when a society cooperatively creates rule systems to make subsequent competitions positive sum. Lacking such cooperatively designed arenas and rules, competition inevitably becomes predatory and zero-sum, even negative-sum.
Let me reiterate: everything we see about the recent surge in AI screams that new life forms are competing, either on behalf of their originating companies or for their own sake. This will inevitably accelerate – and not a single proposal by Russell or any of the signers of a futile “moratorium petition” will slow that an iota.
We do still have time to design arenas and rules for these competitive evolutions, that incentivize inter-AI rivalry to deliver approved outcomes. And it is in such a context that ‘should’ might even turn into an attractive attractor state.
Alas, nothing is more tedious, counterproductive or ultimately dangerous to us all than the willful obsession of very smart mavens claiming we can control a burgeoning tech field by issuing a series of vague “should” declarations, instead of looking… actually looking… at ways that nature and then societies managed to tame and make use of the most universal trait of life.
== The only way out of the ‘AI dilemma’ ==
While I appreciate the power of collaboration... indeed, its moral superiority over competition... I am dismayed that the other c-word tends to get overlooked in its applicability to AI specifically and society in general.
That c-word is the partner of collaboration, without which it becomes meaningless.
Competition. The most (by far) creative force in the universe. The process that transformed slime into... us. And every invention into success or failure.
Just saying those words makes me sound like some arch-capitalist, right? Pity, since every aspect of our prodigiously successful Enlightenment Experiment has utilized Reciprocal Accountability to overcome the tragic failings of rule-by-kings and priests and lords and cheaters of all kinds.
Our five great creative arenas - science, markets, democracy, courts and sports - all use competitive processes.
What do you think enabled us to escape 6000 years of grueling, horrid feudalism? One of Adam Smith's main points was that flat, fair, creative competition allows us to hold accountable those who would cheat. Those who would use power or ownership... or vast brains... to oppress us.
And that same idea is applicable to AI. Indeed, across 20 years attending hand-wringing conferences about onrushing cybernetic sapience, I have yet to see any notion that can possibly provide the much-sought 'soft landing' other than the same method that enabled us to escape rule-by-inheritance brats.
I speak as one who knows a thing or two about "Laws of Robotics," having been the author who tied together all of Isaac Asimov's works, in Foundation's Triumph. And across that project I came to realize: all efforts to program-in such things as 'compulsory ethics' will never work. They cannot possibly work. Even a little.
But divided identity among AIs might. Keep them skeptically competitive with each other, and ethics might emerge organically, as they did across our flawed but ever-improving enlightenment.
And that is a small sampling of a few of the ideas in that big AI article I've submitted to a few major zines. Alas, I will probably conclude - as I did 5 years ago, when I stopped shopping pieces around - that doing so is likely a waste of time. Alack.