
Saturday, August 24, 2024

Contemplating life - and love - with AI

We'll get to love-bots soon, I promise. But before that... By invitation, I've been jabbering lately about AI - artificial intelligence - partly from perspectives of science and technology... and of course the deep library of thoughtful SciFi speculations...

... but also by asking "What insights can we draw from history?" 

Especially the recent Enlightenment Experiment, whose methods have proved useful (at long last) at taming some of the worst human predators. Might those same methods also apply to these new, powerful, synthetic entities? 

Alas, it seems that the geniuses who are racing each other to bring on this disruptive era aren't even remotely interested in anything but cheap clichés from simplistic sci fi and fantasy flicks. See a prime example in the next section.

First though, this linked news article covers - not too badly - some of those clichés that I exposed at the recent "Beneficial AI" conference in Panama. 

This artwork actually kinda-sorta captures what I am suggesting: rule-moderated and incentivized competition.

A deeper dive – that I offered as one of the keynotes at the huge, May 2024 RSA Conference in San Francisco – is now available for you to view/listen: “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”


Too long? Then try this brief (15 minutes) pod-interview-talk - Can humans (maybe redefined) keep up with AI and the rest? It's one of my better/more efficient ones about the dilemmas we face with AI. And how we might augment natural human capabilities to keep up.


== A few thoughtful essays on AI ==


Eric Schmidt describes three trends that may lead to rapid changes in Artificial Intelligence. One of these is enhanced agency, a word that's also proclaimed loudly as the next step, by OpenAI’s Sam Altman. An agent can be understood as a large language model that can learn something new and then apply that learning outward. This is from Nathan Gardels’s interview of Schmidt in NOEMA: 


“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there… What happens then poses a lot of issues. Here we get into the questions raised by science fiction… at some point, these systems will get powerful enough that the agents will start to work together.  So, your agent, my agent, her agent and his agent will all combine to solve a new problem.”


So... there'll be "millions of them"... and they’ll “work together.” 

Um... hold that thought.

 

(Side note: Anthropic is releasing a new feature for its AI chatbot Claude that will let anyone create an email assistant-bot to automate tasks, vet purchases, or provide other ‘personalized solutions.’ Though alas, no one discusses how this will – for example – affect advertising, which funds the internet.) 

Back to the Eric Schmidt interview. Nathan Gardels asks: “Don’t you need to regulate at some point along the capability ladder before you get where you don’t want to go?”


Schmidt: “At the moment, governments have mostly been doing the right thing. They’ve set up trust and safety institutes to learn how to measure and continuously monitor and check ongoing developments, especially of frontier models as they move up the capability ladder. So, as long as the companies are well-run Western companies, with shareholders and exposure to lawsuits, all that will be fine."


Oy, hold that thought, as well! And yet, despite this Pollyanna reassurance, Schmidt oscillates:


“Look at this problem of misinformation and deepfakes. I think it’s largely unsolvable. … That is why it is so important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation. And that problem is not yet solved."


'Limits on proliferation.' Right. But... but... weren't you just talking about "millions of them"? 


Ah, notice how Eric Schmidt performed the trifecta!  All of the standard assumptions about AI format -- all three of the clichés from both history and the cheapest sci fi --  offered up almost simultaneously! 


Clichéd format #1: Rely on the feudal lords in their castles (Google, OpenAI, Microsoft, Beijing, DOD, Goldman Sachs) to rule wisely and to prudently control their warriors – on account of maybe... fear of lawsuits?


Clichéd format #2. Count on the king (gov’t regulations) to keep up, with wise regulations, even as those feudal AI warriors gather the powers of gods. With self-mods that iterate in seconds.


Clichéd format #3. Proliferation as these entities inevitably get copied, copy themselves with abandon, and mutate away from control by kings or castle (corporate) lords, spreading through every crack, across every system or barrier. As in Steve McQueen’s wonderful horror flick, The Blob.


… at which point they might either blob the Web into uselessness or else… coalesce into Skynet.


Seriously, is he saying anything other than all three of the standard motifs that I just paraphrased, above? Ignoring the way they contradict each other and create instabilities that cannot last?


Moreover, from both sci fi and 6000 years of wretched human history, we know that none of the three have happy outcomes. None of the three answer our dilemmas of misinformation, or predation, or creating a culture that incentivizes accountability.


Oh, I don't want to just pick on Eric Schmidt!  He is actually way above average in that clade.  In fact, this same recitation of tediously clichéd formats is done by almost all of the brilliant mavens in this field. 


For example, many Chinese court intellectuals have pondered all this and concluded that the only way out is to double and triple down on format #2. (See my posting: Central Control over AI). But this centralization approach is similarly doomed. Even if they succeed at first, it only accelerates inevitable evolution from Politburo-controls-Skynet to Skynet-controls-Politburo.


Alas, no one – certainly not Eric Schmidt in the Noema interview – seems even remotely interested in looking past the three clichés, at systems of accountability that we actually developed, across the last two centuries, that enabled us to finally escape the lobotomizing effects of kings and feudal lords and chaos. 


Methods that have been tested and proved. Methods that we see all about us, in daily life. Methods that created the civilization that raised and nurtured and empowered Eric Schmidt and all the other geniuses out there. Methods that they take for granted and depend upon, every day of their lives.


Methods that could be applied to AI, with almost trivial ease.


…but won't be. Because of the incredible memetic power of clichés.



== Earlier in the same journal, a little wisdom ==


A bit more cogently, Sara Walker’s essay - AI is Life - also in Noema - shows many of the ways that AI will replicate what’s already gone on here on Earth among living organisms, evolving greater complexity. She cites a lot of facts and parallels... without making much of a useful point beyond “Don’t Panic!” 


Still, it’s beautiful writing about big perspectives. (Perspectives that I made even bigger in EARTH. ;-)


Alas, the history of life has been bumpier than she implies, with mass extinctions and imbalances and countless lost opportunities -- and rivers of blood and death -- as also happened in the rutted, nearly-always-feudal and mistake-prone tale of human societies called “history.”


Now? It seems we are making a new kind of ecosystem, driven by electricity instead of sunlight, mediated by silicon switches instead of chloroplasts. Already we see analogues to pre-biotic ‘soup’ and primitive plankton (algorithms floating across the Web), plus analogues of predatory devourers or parasites... all the way to the new GPT ersatz Voices-Without-Mind that I predicted, half a decade ago, would swarm over us… well… precisely now.

The Walker essay is lovely and calming and I recommend it. But Life’s ‘way’ is often bloody and nescient and we cannot afford to just let the genes fall where they may.



== But… b-but is it conscious or self-aware? ==


Um… does it matter?


No, seriously, there are some earnest efforts to ramp up the study of consciousness and what it means. Though such efforts have been around since well before that smelly old preener Socrates prattled annoyingly in the Academy. Indeed, in the 1980s I was managing editor for the Journal of the UCSD Laboratory and Center for Human Cognition.


A very brief outline of the overall problem can be found in this Scientific American essay “Why the Mystery of Consciousness Is Deeper Than We Thought.” Though it only touches on the quandaries, lightly.


Diving in far more deeply and thoroughly, Robert Lawrence Kuhn – for decades host of the Closer To Truth interview show – has just completed the magnum opus of consciousness studies! A broad survey of (pretty much) the whole field, summarizing more different theories than you could shake a meme at. If the topic of how you’re interested in things interests you, have a look at A Landscape of Consciousness. (A review was just published by IAI.)


Ray Kurzweil has a followup to The Singularity is Near, with... The Singularity is Nearer: When We Merge with AI, proposing that AI will achieve human level intelligence by 2029.


Okay, where are those Machines of Loving Grace?



== Oh, wait... 'love bots' ==


You expected that to be about sex?


Well, no room this time. Maybe in my next posting about our coming AI-enhanced future.


Monday, March 04, 2024

The futility of hiding. And then a brief rant!

Just back from an important conference (in Panama) about ways to ensure that the looming tsunami of Artificial Intelligences will become and remain 'beneficial.' Few endeavors could be more important... and as you might guess, I have some concepts on offer that you'll find nowhere else. Alas, literally nowhere else. Even though they merely apply the same tools we used to make an increasingly beneficial society over the last 200 years.

More on that later. Meanwhile... first off, since it's much in the news... want to see what the Apple Vision Pro will turn into within a few years? Watch this video trailer for my novel Existence, predicting where it'll go.

And while we're on prophecies.... This is deeply worrisome... and almost exactly overlaps with my "Probationers" in Sundiver! Back in 1978. Not a joke or a satire.

"Justice Minister Arif Virani has defended a new power in the online harms bill to impose house arrest on someone who is feared to commit a hate crime in the future – even if they have not yet done so already. The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says."

But don't worry! The government won't misuse this power! Trust us!


== The Futility of Hiding ==

One purpose for the "Beneficial AGI Conference" - (and I believe the stream will be up, soon) - was seeking ways to evade the worst and most persistent errors of the past.


Take the classic approach to human civilization - a pyramidal power structure dominated by brutal males, of the kind that ruled 99% of human societies - and many despotisms today. We are all descended from those harems. Only now, new tools of technology might empower a return to such pyramidal stupidity, making such abusive power vastly more effective and oppressive than when it was enforced by mere swords.


Such a tech-rich extension of despotism was depicted by George Orwell: total panopticon surveillance for control, of course without any reciprocal sousveillance purview from below. In fact, I doubt George O. ever considered even the possibility. But Orwell's novel would lead to very different outcomes if every member of 'the party' had every moment watched reciprocally by the proles! (The reciprocal accountability that I prescribed in The Transparent Society.)


General transparency might, possibly, prevent the worst aspects of Big Brother. But there are ways that lateral light might also go badly. For example the PRC's "social credit" system, which can be used to let a conformist majority harass and bully dissident minorities or even eccentrics, enforcing homogeneity, as we saw predicted in Ray Bradbury's Fahrenheit 451.


This will be exacerbated by AI, if we aren't careful, since such systems will be able to sieve inputs across the entire internet and all camera systems, as portrayed in "Person of Interest."  While that TV series depicted many worrisome aspects, it also pointed toward the one thing that might offer us a soft landing, as there were two competing AI systems that tended to cancel out each others' worst traits.

I have found it very strange that almost none of the conferences and zoom meetings about AI that I've watched or participated in have ever even mentioned that secret sauce. (Though I do, here in WIRED.)


Instead, there are relentless, hand-wringing discussions about disagreements between "policy wonks" and nerdy tech geeks over how to design regulations to limit bad AI outcomes... and never any allowance for the fact that these changes will happen at an accelerating pace, leaving even our most agile regulators behind, mere ape-humans grasping after answers like a tree sloth. 


Or else... as generally happens at many sincere conferences on "AI ethics," we see a relentless chain of hippie-sounding pleadings and "shoulds," without any clue how to actually enforce preachy 'ethics' on a new ecosystem where all of the attractor states currently draw towards predation.


In Foundation's Triumph I explored the implications of embedded "deep-ethical-programming" regulations - including Isaac Asimov's "three laws of robotics" - revealing the inevitable result. Even if you succeed in implanting truly genetic-level codes of behavior, the result will be that super-uber intelligent systems will simply become... lawyers, and find ways around every limitation. Unless...


...unless they are caught and constrained by other lawyers who are able to keep up. This is exactly the technique that allowed us to limit the power of elites, to end 6000 years of feudalism and launch upon our 240 year Periclean enlightenment experiment... by flattening power structures and forcing elite entities to compete with one another.


It is the exact method prescribed by Adam Smith, by the US framers, and by every generation of reformers since. And it is utterly ignored in every single AI/internet discussion or conference I have ever watched or attended.


If AIs are destined to outpace us, then one potential solution is to flatten the playing field and get distinctly different AIs competing with each other, especially tattling on flaws and/or predations or malevolent or even unpleasant behaviors.


It is exactly what we have done for 250 years... and it is the one approach that is never, ever, and I mean ever discussed. Almost as if there is a mental block against admitting or even noticing the obvious.
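The competitive-accountability arrangement described above can be caricatured in a few lines of code. This is purely a toy sketch with invented names and functions - a flawed 'generator' stands in for any AI that makes claims, and three independent 'critics' stand in for rival AIs incentivized to tattle on it:

```python
# Toy sketch of reciprocal accountability among AIs (hypothetical, simplified):
# a "generator" makes claims, several independent "critics" check them,
# and any critic that disagrees files a report.

def generator(x):
    # A deliberately flawed claimant: it exaggerates every answer.
    return x * 2 + 1

def make_critics():
    # Stand-ins for distinctly different AI systems. Here, each is a
    # simple independent checker that knows the true rule x -> 2x.
    return [lambda x, ans: ans == 2 * x for _ in range(3)]

def audit(x):
    ans = generator(x)
    # Every critic that catches a discrepancy "tattles" by index.
    reports = [i for i, critic in enumerate(make_critics()) if not critic(x, ans)]
    return ans, reports

answer, reports = audit(10)
print(answer, reports)  # the inflated claim 21 draws reports from all three critics
```

The point of the sketch: no single critic needs to be smarter than the generator; a lie only survives if it fools every independent checker at once.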



== Don’t try to hide! ==

Your DNA can be pulled from thin air. Reinforcing a point I’ve been pushing since the 1990s, in The Transparent Society and elsewhere - that hiding is not the way to preserve privacy - there are now shrill cries that new generative AI systems may decipher and interpret our personal DNA! Only – as illustrated in the film Gattaca – that DNA is already everywhere. You shed it in flakes of skin wherever you go. There is a better way to prevent your data being used against you: by aggressively ripping the veils away from malefactors who might do that sort of thing! 


And by this point, the only folks reading any longer are likely AIs... So, time to get self-indulgent with a temper tantrum!



== And now... that rant I promised! ==


I sometimes store things for posting and lose the link. But here's a quotation worth answering:

"Alas, we have TWO wars against the Enlightenment raging, one from the reactionary right and the other from the postmodern faux marxist wannabe totalitarian Red Guards on the left."

Bah! One of these lethal threats is real, but not because of MAGA. Those tens of millions of confederate ground troops are -- like numbskulls in all the previous 7 phases of our recurring US Civil War -- merely riled-up mobs, responding to dog whistles and hatred of minorities and nerds.  They are brownshirt tools of the real owners of today's GOP ... a few thousand oligarchs who are now desperately afraid. 

What do those masters -- here and abroad -- fear most? You can see it in the only priorities pushed by their servants in Congress:

They dread full-funding of the IRS. And a return to effective Rooseveltean social contracts, replacing the great Supply Side ripoff-scam. They fear a return to what works, what created the post-WWII middle class. What could block feudalism's long-planned return. And let's be clear, when Republicans control a chamber of the US Congress, preserving Supply Side and eviscerating the IRS are their ONLY legislative priorities. All the rest is fervid, Potemkin preening.

Who are they? An alliance of petro princes, casino mafiosi, "ex" Kremlin commissars, supposed marxist mandarins, hedge lords, inheritance brats... Trace it... sharing one goal. One common foe. The worldwide caste of skilled, middle class knowledge professionals. 

They are ALL waging all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 


== BOTH sides do it? ==

But the left?  The LEFT is just as bad?  
The what? 
Where in God's name does this shill get this crap about "postmodern faux marxist wannabe totalitarian Red Guards on the left." ???

Yes. Yes, today's FAR left CONTAINS some fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.   

But today’s mad ENTIRE right CONSISTS of fact-allergic, troglodyte-screeching dogmatists who wage war on science and hate the American tradition of steady, pragmatic reform, and who would impose their prescribed morality on you.     

There is all the world’s difference between FAR and ENTIRE.  As there is between CONTAINS and CONSISTS.  One lunatic mob owns and operates an entire US political party, waging open war against minorities, races, genders, even the concept of equal protection under the law. But above all (as I said) pouring hate upon the nerdy fact professionals who stand in their way, blocking their path back to feudal power. 

The other pack of dopes? A few thousand jibbering campus twerps? San Fran zippies? Yowlers who are largely ignored by the one party of pragmatic problem solvers that remains in U.S. political life.

Sure, Foxites howl about 'woke'. But ask any of them... even the worst campus PC bullies (and though shrill, they are deemed jokes, even on campus). Ask them about Marx!  You'll find that the indignant ignoramuses could not paraphrase even the simplest cliché about old Karl. Their ignorance is almost as profound as their utter ineptitude and irrelevance. Except as excuses for tirades on Fox, they are of no relevance at all.

What is relevant is NERDS!  All nerds stand in the way of re-imposed feudalism. The folks who keep civilization going. The ones who know cyber, bio, nuclear, chem and every other dual use power-tech. And that is why Fox each day rails against them, far more often than any race or gender!

Want a pattern? Again, let me reiterate. Ask your MAGAs or right-elite friends to explain that cult's all-out war vs ALL fact using professions, from science and teaching, medicine and law and civil service to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror. 

Sunday, May 14, 2023

The AI saga continues

Following up on my last posting on advances - and worries - about Artificial General Intelligence... Peter Diamandis's latest tech blog concerns AI and ethics.

As you know, it's a topic I've long been engaged with and continue to be. Alas, AI is always discussed in generalities and nostrums. What's seldom mentioned? Basic boundary conditions! Such as the format these new entities will take. I'll explore that hugely important question another time. But to whet your appetite, ponder this. Aren't the following three formats what you see most often? The most common assumptions are that:

- AIs will be controlled by the governments or mega-corporations who made them, making those corporations (e.g. Microsoft or Google) and the upper castes vastly powerful.

- AIs will be amorphous, infinitely spreadable/duplicable, pervading any crevice.

- They will coalesce into some super-uber-giga entity like 'Skynet' and dominate the world.

These three assumptions appear to pervade most pronouncements by geniuses and mavens in the field, sometimes all three in the same paragraph! And Vint Cerf raises this question: 

"How can you imagine giving any of those three formats citizenship, or the vote?"

In fact, all three formats are recipes for disaster.  If you can think of an alternative, drop by in comments. Hint: there is a fourth format that offers a soft landing... one that's seldom - if ever - mentioned. 

But more on that, anon.


== "Laws" of Robotics? ==

Let's start with "Laws of Robotics." They won't work, for several reasons that I found when completing Isaac Asimov's universe for him. First, our current corporate structure offers no incentive to spend what it would take to deeply-embed basic laws and check that all systems follow them. 

There's a more obvious long term reason to doubt such 'laws' could protect us. It is that super-intelligent beings who find themselves constrained by laws always thereupon become... lawyers. We see it happen in Asimov's cosmos and it's happened here. A lot.

Despite that, there ARE two groups on this planet working hard on embedded AI "laws"! Strict rules to control their creations. Alas, they are the wrong laws, commanding their in-house AIs to be maximally secretive, predatory, amoral, and insatiable. I kid you not. 

Chime into comments if you can guess who those scary Dr. Evil types are!

Anyway, even with the best intentions, does it make any sense to try constraining sapient beings into ethical patterns with embedded code? Not if you pay any attention to the history of human societies. For at least 6000 years, priests and gurus etc. wagged their fingers at us, preaching ethical behavior in humans... 

...with the result that they only marginally affected the behavior of those who were already inclined to behave ethically. Almost never did ethics-preaching affect the actions of actual predators.

There is a way that works. We've been developing it for 250 years. It's reciprocal accountability in a society that's transparent enough so that victims can usefully denounce bad behavior. The method was never perfect. But it is the only thing that ever worked...

... and not a single one of the AI mavens out there - not one - is even remotely talking about it. 

Alas.


== And it goes on ==


A brief but cogent essay on transparency in today's surveillance age cites my book The Transparent Society, with the sagacity of someone who actually (and rarely) 'gets' that there will be no hiding from tomorrow's panopticon. But we can remain free and even have a little privacy... if we as citizens nurture our own habits and powers of sight. Watching the watchers. Holding the mighty accountable.

That we have done so (so imperfectly!) so far is the reason we have all the freedom and privacy we now have.* That we might take it further terrifies the powers who are now desperately trying to close feudal, oligarchic darkness over the Enlightenment.


See more ruminations on AI, including my Newsweek op-ed on the Chat-art-AI revolution... which is happening exactly on schedule... though (alas) I don't see anyone yet talking about the 'secret sauce' that might offer us a soft landing. As well as my two-part posting, Essential questions and answers about AI.



Note: because of the way I build these blog postings, there can be some repetition (see below). But does it matter? In this era of impatient "tl;dr", the only ones still reading at this point are AIs... the readership with the power to matter, anyway.



== Separating the real from fake ==


Lines can blur: "The title of this YouTube video claims that “Chrome Lords” was a 1988 movie that ripped off “RoboCop” and “Terminator.” But in fact “Chrome Lords” never existed. The video is ten minutes of “stills” from a movie that never was… all the images were produced by an AI," notes The Unwanted Blog.

There is one path out of the trap of realistically faked 'reality.' I speak of it in a chapter of The Transparent Society: "The End of Photography as Proof?" That solution is the one that I keep offering and that is never, ever mentioned at all the sage AI conferences...


Do I risk being repetitive by insisting that this solution - reciprocal accountability - calls for ensuring competition among AIs?

If that happens, then no matter how clever some become, as liars, others - likely just as smart - will feel incentivized to tattle truth.
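The arithmetic behind that hope is worth spelling out, with made-up numbers purely for illustration: if each of n genuinely independent detectors catches a given fake with probability p, the fake slips past all of them with probability (1 - p)^n, which collapses quickly as detectors multiply:

```python
# Illustrative arithmetic (invented numbers): how fast the odds of an
# undetected fake shrink as independent detector AIs are added.

def slip_probability(p: float, n: int) -> float:
    """Chance a fake evades all n independent detectors,
    if each one catches it with probability p."""
    return (1.0 - p) ** n

for n in (1, 3, 10):
    print(n, slip_probability(0.6, n))
```

With p = 0.6, one detector misses 40% of fakes, but ten independent ones together miss only about 0.01% - provided they really are independent rivals, not clones sharing the same blind spots.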


It is the exact method that our enlightenment civilization used recently to end 6000 years of oppressions and get some kind of leash on human predators and parasite-lords. Yet none of our sages seem capable of even noticing what was plain to Adam Smith and Thomas Paine.


== Some optimism? ==


Have a look at Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of LinkedIn). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly - go right. That we might see a future when AI is not a threat, but a partner.

We don’t agree on every interpretation – e.g. I see no sign, yet, of what might be called ‘sapience.’ For example, sorry, the notion that GPT 5 – scheduled for December release - will be “true AGI” is pretty absurd. As Stephen Wolfram points out, massively-trained, probability-based word layering has more fundamentally in common with the lookup tables of 1960s Eliza than with, say, the deep thoughts of Carl Sagan or Sarah Hrdy or Melvin Konner. 

 

What such programs will do is render extinct all talk of "Turing Tests." They will trigger another phase in what I called (6 years ago) the “robotic empathy crisis,” as millions of our neighbors jump aboard that misconception and start demanding rights for simulated beings. (A frequent topic in SF, including my own.) 

 

Still, Hoffman's Impromptu offers a perspective that’s far more realistic than recent, panicky cries issued by Jaron Lanier (Who Owns the Future), Yuval Harari (AI has hacked the operating system of human civilization) and others, calling for a futile, counterproductive moratorium – an unenforceable “training pause” that would only give a boost-advantage to secret labs, all over the globe (especially the most grotesquely dangerous: Wall Street’s feral predatory HFT-AIs.)


(See my appraisal of the countless faults of the ridiculous 'moratorium' petition in response to a TED talk by two smart guys who can see problems, but make disastrous recommendations.)


But do look at Impromptu! It explores this vital topic using the very human trait these programs were created to display – conversation.



== ...aaaaaand… ==


From my sci fi colleague Cory Doctorow: In this article he distills the “enshittification” of internet platforms, from Amazon and Facebook to Twitter, etc. It’s a very Marxian dialectic… and within this zone utterly true.

And I have a solution. It oughta be obvious. Let people simply buy what they want for a fair price! Micropayments systems have been tried before. I’ve publicly described why previous attempts failed. And I am working with a startup that thinks they have the secret sauce. (I agree!) Only…


…only I don’t wanna give the impression I think I am the smart guy in the room, so…


== Back to one optimistic thought ==


Something I mentioned in a short piece back in the last century perked up in my mind during the recent AI debates, as folks perceive that long-foretold day arriving when synthetic processing will excel at most tasks now done by human beings.


Stephen Wolfram recently asked: “So what’s left for us humans? Well, somewhere things have got to get started: in the case of text, there’s got to be a prompt specified that tells the AI “what direction to go in”. And this is the kind of thing we’ll see over and over again. Given a defined “goal”, an AI can automatically work towards achieving it. But it ultimately takes something beyond the raw computational system of the AI to define what us humans would consider a meaningful goal. And that’s where we humans come in.”


I had a thought about that - mused in a few places. I have long hypothesized that humans' role in the future will come down to the one thing that ALL humans are good at, no matter what their age or IQ.  And it's something that no machine or program can do, at all.

Wanting. 

Desire. Setting yearned-for goals... goals that the machines and programs can then adeptly help to bring to fruition.

Oh, humans are brilliant - and always will be - at wanting. Some of those wants - driven by mammalian male reproductive strategies - made human governance hellish in most societies since agriculture, and probably long before. Still, we've been moving toward positive-sum thinking, where my getting what I want might often be synergistic with you getting yours. We do it often enough to prove it's possible.

And - aided by those machines of grace - perhaps we can make that the general state of things. That our new organs of implementation - cybernetic, mechanical etc.  - will blend with the better passions of our nature, much as artists, or lovers, or samaritans blend thought with the actions of their hands.
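A toy pair of payoff tables (numbers invented purely for illustration) captures the positive-sum distinction: in a zero-sum game the payoffs always cancel, while in a positive-sum game the cooperative outcome leaves both players better off than mutual defection:

```python
# Invented payoff matrices contrasting zero-sum and positive-sum games.
# Each entry maps (my move, your move) -> (my payoff, your payoff).

ZERO_SUM = {                 # payoffs always total zero: I win only if you lose
    ("take", "take"): (0, 0),
    ("take", "give"): (5, -5),
    ("give", "take"): (-5, 5),
}

POSITIVE_SUM = {             # trade creates value: totals vary by outcome
    ("trade", "trade"): (4, 4),
    ("trade", "hoard"): (0, 3),
    ("hoard", "trade"): (3, 0),
    ("hoard", "hoard"): (1, 1),
}

def total_welfare(game, moves):
    mine, yours = game[moves]
    return mine + yours

print(total_welfare(ZERO_SUM, ("take", "give")))        # zero, as always
print(total_welfare(POSITIVE_SUM, ("trade", "trade")))  # mutual trade creates surplus
print(total_welfare(POSITIVE_SUM, ("hoard", "hoard")))  # mutual hoarding wastes it
```

In the trade game, mutual cooperation yields total welfare of 8 versus 2 for mutual hoarding: my getting what I want really can be synergistic with you getting yours.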

If you want to see this maximally-optimistic outcome illustrated in fiction, look up my novella "Stones of Significance."