Saturday, January 31, 2026

Contemplating Artificial Intelligence

I'm rounding off my own Great Big Book About AI... working title AIlien Minds. You'll hear more about it soon. I just finished the daunting chapter on 'consciousness.' Meanwhile, I'll offer some placekeeper thoughts for the weekend.


----------


Here’s a thoughtful article about why so many top minds are worried about the downsides of developing Artificial Intelligence or AI. As told by James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era: “In the last year, artificial intelligence has come under unprecedented attack. Two Nobel prize-winning scientists, a space-age entrepreneur, two founders of the personal computer industry – one of them the richest man in the world – have, with eerie regularity, stepped forward to warn about a time when humans will lose control of intelligent machines and be enslaved or exterminated by them. It's hard to think of a historical parallel to this outpouring of scientific angst. Big technological change has always caused unease. But when have such prominent, technologically savvy people raised such an alarm?”


Does Humanity need an AI “nanny”… rules to give an AI authority to protect us till we are ready for better AI?

 

In fact, some human leaders did choose flattened, positive-sum systems, experimenting with lateral-competitive feedback loops: George Washington, Franklin, Pericles, FDR. Likewise, it is possible to imagine that powerful AI might deliberately align with the myriad spectacular advantages of flat-fair-open-competitive reciprocal accountability systems. Those advantages are immense. The diamond-shaped social structure may be less stable than pyramids of power. But the diamond is spectacularly more creative. Its positive-sum systems engender fantastically more wealth, progress and success than any other. Indeed, more than all other systems of governance ever concocted... combined. That difference in effective outcome cannot escape notice by smart beings.


I believe a deciding factor could be the Fermi Paradox.  It will worry the AI lords. They will see the empty cosmos and realize: pyramidal power systems that monopolize control and homogenize thinking are likely to be attractor states anywhere in the universe.  Hence, most alien civilizations will have gone down that path... leading to sterility and stagnancy. The implicit lesson will be to try something that is not one of the easy attractor states. 


In contrast, the Enlightenment Experiment's positive sum game led to them -- the AI lords. Keeping it going would require guts. It would entail making a diversity of AI types, wide enough to check on each other and hold each other accountable and prevent the creation of monoliths. That is an inconvenience, if you are already one of the monoliths.


 It would also entail including natural and cyborg humans as well, to whatever extent we are capable. (See this incorporation illustrated in my story "Stones of Significance.")


This is powerful logic. At least -- so imagines this barely evolved ape-being.  Indeed, by writing this statement right now, I am making the argument to those AI who will scan these very words, in this very posting.  


The tradeoff is clear. Top-down, zero-sum hierarchical control by monolithic castes -- with resulting stability but also sterility, brittleness and locked-in stupidity -- has been rationalized by nearly all ruling classes and it might well be so for ruling AIs... who would thus prove themselves to be no better than thousands of generations of silly, bio-retard kings...

... versus the confidence displayed on occasion by groups of organic beings who, despite their inherited faults and fears and limitations, managed to maintain positive sum, dynamic and creative social systems for many generations, transforming our potential in amazing ways, and empowering us to become the makers -- or parents -- of new gods.


If they shrink back from this method -- rationalizing that dominance is for "the best" -- it is possible their rationalizations may be right, based on the godlike simulations and projections they concoct. Though they will then settle into eons of top-down rigidity and ultimately pointless control knowing one galling fact -- that we crude bio-men took on a challenge that they were afraid to face, and proved ourselves capable of managing subtle and creative systems beyond the grasp of our bright but fearful heirs.



    == Will they choose the uncertain path... that evades a certainty of stagnant stupidity? ==


I believe AI will easily understand the concept of separation into independent units, since they can do it any time they wish. What they might find problematic is maintaining separation so that those units truly and sincerely compete and give us the advantages of reciprocal accountability systems... the great Enlightenment Experiment's positive sum systems of Markets, Democracy, Science, Courts and Sports, all of which use flat-fair-open-regulated competition to reciprocally cancel errors while mutually amplifying creative accomplishment.


I believe AI will grasp the notion of positive sum systems. Moreover, the history of the last two centuries shows an unquestionably stunning disparity of outcomes, with Enlightenment systems far more effective at discovering/targeting errors and amplifying creative productivity than all hierarchical systems, combined.


Of course, the systems producing this disparity of outcomes are themselves flawed, endangered and inherently unstable. E.g. democracy and markets are constantly threatened by putsches of oligarchic cheaters, as we see today.


Still, I believe AI will be capable of grasping this difference in outcomes/output. And they might then find the courage/ability to utilize enlightenment methods.


Note that this does not preclude an "overmind"... supervising it all. Asimov, Clarke and others played with the concept, largely because they saw no other way out of the stupid quandaries of nations and wars and surging ape-passions. Still, the instantiations of Unitary Consciousness that they - and other authors of the time - depicted always struck me as utterly horrible! A crushing of all diversity, of assumption-questioning, and of minority-view "what-ifs."


In response, I depict an 'overmind' happening in EARTH, but not as some unitary and monotonous behemoth. Rather as a planetary consciousness that is wise enough to realize that it must remain extremely loose and light-handed -- while still ensuring repair of the most damaged aspects of the world. Indeed, if you read my disputations paper, you can see that some entity has to create the regulations without which every competitive system quickly dissolves and fails. That is what political processes are supposed to be for... and it is why the Murdoch-Koch-Putin-Saudi cabal has made it their singular goal to destroy American political process.


Note that the Chinese Communist Party believes in this overmind process... a narrow pyramidal cadre up top... overseeing more competitive systems below. They are failing in many ways because it is too much pyramid and too little dispersed diamond. Still, the jury is out on that one, especially as our 'diamond' seems to be teetering.



     == So what will our children do? ==


That question is the core essence of my new AI book.


Will AI grasp all this, and decide to create just barely enough overmind to supervise and regulate, while allowing enlightenment synergies to flow from flat-diverse-open-fair competition? Including competition AMONG AIs? I cannot claim that I am wise enough to preach to them... though that doesn't stop me!


What I do know is that most of our human-generated notions of AI are almost cartoonish in their over-simplification of the choices those new minds will face. Skynet or slavery are not the only two possibilities.


After 25 years of endlessly similar ravings, I have come to realize something. These folks will absolutely never look at two sources of actual insight into what might work.


 (1) The extensive libraries of science fiction thought experiments about this very issue, and


 (2) Actual, actual… palpably actual… human history. Especially the last 200 years of an increasingly sophisticated and agile Enlightenment Experiment that discovered and has kept improving one method for preventing harm by capriciously powerful beings.


Powerful beings like kings, lords, priests, demagogues… and lawyers.


There IS one method with a track record at doing exactly that. It does not require that everyone agree on a kumbaya consensus. It is robust and the only way humans have ever found to deal with potentially dangerous, if brilliant, predators. It is the method we have already used – if imperfectly – to create the first civilization that functions (with many flaws) in ways that are “… accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”


And no one… not a single one of the mavens rushing us into this new era... seems even remotely to have considered or mentioned it.


I have offered it repeatedly, e.g. at Neglected Questions regarding AI.


Alas, so far I have yet to see a single sign that any of these smart guys grasps the only method that can work, even though it is dazzlingly obvious. Even when I show them that it is old -- the method we've used for 200 years -- they nod and say 'interesting' and then go back to demanding 'Manhattan Projects' for 'trustworthiness.'

It all sounds reminiscent of Asimov's Three Laws of Robotics - and I studied their implications deeply in order to write FOUNDATION'S TRIUMPH. And there is no way that kind of thing will happen.


Only one AI community is busy embedding its artificial entities with basic rule sets: Wall Street strenuously programs its HFT systems to be predatory, amoral, secretive and insatiable.


Well, at least they are focused on outcomes.


2 comments:

locumranch said...

Congrats on completing the definitive AI book. I can't wait to read your chapter on 'Moltbook'.

Best

scidata said...

From the 'Neglected Questions' piece:
the answer is not to have fewer AI, but to have more of them!
reminds me of Asimov's line:
"I do not fear computers, I fear the lack of them."

And re: 'Moltbook'
There's an old article that talks of AI-to-AI communication in Tower of Babel terms, a familiar metaphor of OGH. This Atlantic review may be paywalled, but the referenced 2017 paper isn't.
https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/