Showing posts with label chat-gpt. Show all posts

Sunday, May 14, 2023

The AI saga continues

Following up on my last posting on advances - and worries - about Artificial General Intelligence.... Peter Diamandis's latest tech blog is regarding AI and ethics

As you know, it's a topic I've long been engaged with and continue to be. Alas, AI is always discussed in generalities and nostrums. What's seldom mentioned? Basic boundary conditions! Such as the format these new entities will take. I'll explore that hugely important question another time. But to whet your appetite, ponder this: aren't the following three formats what you see most often? The most common assumptions are that:

- AIs will be controlled by the governments or mega-corporations that made them, making those owners (e.g. Microsoft or Google) and the upper castes vastly powerful.

- AIs will be amorphous, infinitely spreadable/duplicable, pervading any crevice.

- They will coalesce into some super-uber-giga entity like 'Skynet' and dominate the world.

These three assumptions appear to pervade most pronouncements by geniuses and mavens in the field, sometimes all three in the same paragraph! And Vint Cerf raises this question: 

"How can you imagine giving any of those three formats citizenship, or the vote?"

In fact, all three formats are recipes for disaster.  If you can think of an alternative, drop by in comments. Hint: there is a fourth format that offers a soft landing... one that's seldom - if ever - mentioned. 

But more on that, anon.


== "Laws" of Robotics? ==

Let's start with "Laws of Robotics." They won't work, for several reasons I found when completing Isaac Asimov's universe for him. First, our current corporate structure offers no incentive to spend what it would take to deeply embed basic laws and verify that all systems follow them.

There's a more obvious long term reason to doubt such 'laws' could protect us. It is that super-intelligent beings who find themselves constrained by laws always thereupon become... lawyers. We see it happen in Asimov's cosmos and it's happened here. A lot.

Despite that, there ARE two groups on this planet working hard on embedded AI "laws!" Strict rules to control their creations. Alas, they are the wrong laws, commanding their in-house AIs to be maximally secretive, predatory, amoral, and insatiable. I kid you not. 

Chime into comments if you can guess who those scary Dr. Evil types are!

Anyway, even with the best intentions, does it make any sense to try constraining sapient beings into ethical patterns with embedded code? Not if you pay any attention to the history of human societies. For at least 6,000 years, priests, gurus, and the like have wagged their fingers at us, preaching ethical behavior...

...with the result that they only marginally affected the behavior of those who were already inclined to behave ethically. Almost never did ethics-preaching affect the actions of actual predators.

There is a way that works. We've been developing it for 250 years. It's reciprocal accountability in a society that's transparent enough so that victims can usefully denounce bad behavior. The method was never perfect. But it is the only thing that ever worked...

... and not a single one of the AI mavens out there - not one - is even remotely talking about it. 

Alas.


== And it goes on ==


A brief but cogent essay on transparency in today's surveillance age cites my book The Transparent Society, with the sagacity of someone who actually (and rarely) 'gets' that there will be no hiding from tomorrow's panopticon. But we can remain free and even have a little privacy... if we as citizens nurture our own habits and powers of sight. Watching the watchers. Holding the mighty accountable.

That we have done so (so imperfectly!) so far is the reason we have all the freedom and privacy we now have.* That we might take it further terrifies the powers who are now desperately trying to close feudal, oligarchic darkness over the Enlightenment.


See more ruminations on AI, including my Newsweek op-ed on the Chat-art-AI revolution... which is happening exactly on schedule... though (alas) I don't see anyone yet talking about the 'secret sauce' that might offer us a soft landing. See also my two-part posting, Essential questions and answers about AI.



Note: because of the way I build these blog postings, there can be some repetition (see below). But does it matter? In this era of impatient "tl;dr", the only ones still reading at this point are AIs... the readership with the power to matter, anyway.



== Separating the real from fake ==


Lines can blur: "The title of this YouTube video claims that “Chrome Lords” was a 1988 movie that ripped off “RoboCop” and “Terminator.” But in fact “Chrome Lords” never existed. The video is ten minutes of “stills” from a movie that never was… all the images were produced by an AI," notes The Unwanted Blog.

There is one path out of the trap of realistically faked 'reality.' I speak of it in a chapter of The Transparent Society: "The End of Photography as Proof?" That solution is the one that I keep offering and that is never, ever mentioned at all the sage AI conferences...


Do I risk being repetitive by insisting that the solution - reciprocal accountability - calls for ensuring competition among AIs?

If that happens, then no matter how clever some become as liars, others - likely just as smart - will feel incentivized to tattle truth.


It is the exact method that our enlightenment civilization used recently to end 6000 years of oppressions and get some kind of leash on human predators and parasite-lords. Yet none of our sages seem capable of even noticing what was plain to Adam Smith and Thomas Paine.


== Some optimism? ==


Have a look at Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of LinkedIn). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly – go right. That we might see a future when AI is not a threat, but a partner.

We don't agree on every interpretation – e.g. I see no sign, yet, of what might be called 'sapience.' For example, sorry, the notion that GPT-5 – scheduled for December release – will be "true AGI" is pretty absurd. As Stephen Wolfram points out, massively-trained, probability-based word layering has more fundamentally in common with the lookup tables of 1960s Eliza than with, say, the deep thoughts of Carl Sagan or Sarah Hrdy or Melvin Konner.

 

What such programs will do is render extinct all talk of "Turing Tests." They will trigger another phase in what I called (6 years ago) the “robotic empathy crisis,” as millions of our neighbors jump aboard that misconception and start demanding rights for simulated beings. (A frequent topic in SF, including my own.) 

 

Still, Hoffman's Impromptu offers a perspective that’s far more realistic than recent, panicky cries issued by Jaron Lanier (Who Owns the Future), Yuval Harari (AI has hacked the operating system of human civilization) and others, calling for a futile, counterproductive moratorium – an unenforceable “training pause” that would only give a boost-advantage to secret labs, all over the globe (especially the most grotesquely dangerous: Wall Street’s feral predatory HFT-AIs.)


(See my appraisal of the countless faults of the ridiculous 'moratorium' petition in response to a TED talk by two smart guys who can see problems, but make disastrous recommendations.)


But do look at Impromptu! It explores this vital topic using the very human trait these programs were created to display – conversation.



== ...aaaaaand… ==


From my sci-fi colleague Cory Doctorow: in this article he distills the "enshittification" of internet platforms, from Amazon and Facebook to Twitter and beyond. It's a very Marxian dialectic… and within this zone utterly true.

And I have a solution. It oughta be obvious. Let people simply buy what they want for a fair price! Micropayment systems have been tried before. I've publicly described why previous attempts failed. And I am working with a startup that thinks they have the secret sauce. (I agree!) Only…


…only I don’t wanna give the impression I think I am the smart guy in the room, so…


== Back to one optimistic thought ==


Something I mentioned in a short piece back in the last century resurfaced in my mind during the recent AI debates, as folks perceive that long-foretold day arriving when synthetic processing will excel at most tasks now done by human beings.


Stephen Wolfram recently asked: "So what's left for us humans? Well, somewhere things have got to get started: in the case of text, there's got to be a prompt specified that tells the AI 'what direction to go in.' And this is the kind of thing we'll see over and over again. Given a defined 'goal,' an AI can automatically work towards achieving it. But it ultimately takes something beyond the raw computational system of the AI to define what us humans would consider a meaningful goal. And that's where we humans come in."


I had a thought about that - mused in a few places. I have long hypothesized that humans' role in the future will come down to the one thing that ALL humans are good at, no matter what their age or IQ.  And it's something that no machine or program can do, at all.

Wanting. 

Desire. Setting yearned-for goals. Goals that the machines and programs can then adeptly help bring to fruition.

Oh, humans are brilliant - and always will be - at wanting. Some of those wants - driven by mammalian male reproductive strategies - made human governance hellish in most societies since agriculture, and probably long before. Still, we've been moving toward positive-sum thinking, where my getting what I want might often be synergistic with you getting yours. We do it often enough to prove it's possible.

And - aided by those machines of grace - perhaps we can make that the general state of things. That our new organs of implementation - cybernetic, mechanical etc.  - will blend with the better passions of our nature, much as artists, or lovers, or samaritans blend thought with the actions of their hands.

If you want to see this maximally-optimistic outcome illustrated in fiction, look up my novella "Stones of Significance."


Friday, March 31, 2023

The only way out of the AI dilemma

Okay, despite wars and bugs and politician indictments, what crisis is obsessing so many right now? 


Of course it's artificial intelligence, or AI, slamming us in ways both long-predicted and surprising. Indeed, there are already paeans that by December GPT-5 will achieve "AGI" or genuine Artificial GENERAL Intelligence. And yes, there's The Great Big Moratorium Petition that I refer to, below.


Let the hand-wringing commence! 



-- *** Sunday note: In just the two days since I posted this, waves of wailing and doomcasting have filled my in-boxes, while never showing any sign that any of today's vaunted AI mavens has ever read any cogent science fiction, let alone perused a single history textbook. 

       If they had, they might see some familiarity in this crisis and ask basic questions. Like whether there are any methodologies to try - either in SF or the past - other than jeremiads of clichés.


      Rather than bemoan this in a fresh posting, I'll append a few late thoughts at-bottom. *** --



Alas, I must respond at two levels. First: it's not even remotely possible that these Chat programs will achieve AGI, this round. I don't even have to invoke Roger Penrose's "quantum basis for consciousness" arguments to refute such claims. As I'll explain much further below, this is about fundamental methodologies.  


But second - and far more important - it doesn't matter!


More than half a century after a crude 'conversational' program called "Eliza" transfixed the gullible, we now have ChatGPT-4 passing all but a few Turing Tests, with dire projections sloshing-about for December's release of GPT-5. Furthermore, AI art programs dazzle! Voices and videos are faked! Jeremiads of doom write upon walls!


I long ago stopped attending "AI Ethics conferences," whose tedious repetitions and unimaginative finger-wagging featured an utter lack of any tangible, productive outcomes. 

Now? Most of the same characters are issuing declarations of frothy panic, demanding a six month moratorium on training of learning system language emulators. As if. 

Oh, it is a serious problem! But the fact that GPT-4 and its cousins can 'fake' general intelligence only means that the AGI-threshold question itself is the wrong question! A complete distraction from real dilemmas and real (potential) solutions. 

In fact, organic humans will never be able to tell when emulation programs have crossed over into sapience, or consciousness or whatever line matters most to you. 

Don't get me wrong; it certainly is an important issue. If we make the call too early (as hundreds of millions of us saps will do, long before GPT-5), then we'll fall prey to the human powers that have their manipulative fingers in these software puppets. 

If we call it too late, then we risk committing gross unfairness toward thinking beings who are (by all rights) our children. I discuss that moral quandary here. And yes, I take the matter seriously.

What I do know is that the biggest danger right now - manifesting before our very eyes - is the hysteria and unwise gestures demanded by a clade that includes some friends of mine -- who are now behaving in a manner well-described by Louisiana Senator John Kennedy*, as "high IQ stupidity."


Recent, panicky petitions have been issued by the likes of Jaron Lanier, Yuval Harari, Sam Altman, Gary Marcus, Elon Musk, Steve Wozniak and over a hundred other well-known savants, calling for a futile, counterproductive moratorium – an unenforceable “training pause.” 


Eliezer Yudkowsky goes even farther, calling for an outright ban, crying out:


"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."


Oh, my. 



== Oh, my, where to begin ==


First, any freeze on AI research would only affect open, responsible universities, companies and institutions that give a damn about heeding such calls. Hence, it would hand a huge boost-advantage - a head start - to secret labs all over the globe, where ethics-are-for-suckers.


(Especially the most grotesquely dangerous AI researchers of all: Wall Street developers of HFT-bots, deliberately programmed to be feral, predatory, amoral, secretive and utterly insatiable - the embedded five laws of parasitical robotics.) 


The very idea that many of humanity's smartest are calling for such a 'research-pause' - and actually believing (without a single historical example) that it could work - is strong evidence that human intelligence, even at the very top, might need some external augmentation!  


(Such augmentation may be on its way! See my description below of Reid Hoffman's book Impromptu: Amplifying our humanity through AI.**)


Please. I'm not denigrating these folks for perceiving danger! Like Oppenheimer and Bethe and Szilard after Trinity, anyone with three neurons can sense great danger here! But just as in the 1940s, we need to look past simplistic, moralizing nostrums that stand zero chance of working, toward pragmatic solutions that already have a proven track record.


There is a potential route to the vaunted AI soft landing. It happens to be the same method that prevented atomic weapons from frying us all. It's the method that our Enlightenment Experiment used to escape 6000 years of miserable feudalism on all continents. 


 It happens to be the one and only method that enables us today to stay somewhat free and safe from super-brainy or powerful rivals, especially when assailed by one of those hyper-smart, predatory, machine-like entities called a lawyer.


It's the one path that could help us to navigate safely through all the fakes and spoofs and claims of AI sapience that lie ahead. 


It can work, because it already has worked, for 200 years...


... and none of the smart guys out there will even talk about it.



== Some details: where do I think we stand in AI Turing metrics? ==


What is that secret sauce for human survival and thriving, when the time comes (inevitably) that our AI children far exceed our intelligence? 


Well, I refer to it in an interview for Tim Ventura's terrific podcast. (He asks the best questions! And that's oft how I clarify my thoughts, under intense grilling.)  But mostly, I dive into what we've been seeing in the recent 'chat-bot' furor. 


Yes, it has triggered the "First Robotic Empathy Crisis," exactly at the time I forecast 6 years ago, though lacking a couple of traits that I predicted then - traits we'll doubtless see before the end of 2023. 


In fact, the ChatGPT/Bard/Bing bots are less slick than I expected, and their patterns of response surprisingly unsophisticated. Take the GPT-4-generated sci-fi stories that I've seen praised elsewhere... but that have been - so far - rather trite, even insipid, and still at the skilled-amateur level. Oh, the basic mechanics are fine. But the storytelling problems extend beyond mere lack of plot originality. Akin, alas, to many organic authors, what stands out is something very common among beginning (human) writers: a failure to understand point of view (POV).


Oh, some (not all) of those methods, too, will be rapidly emulated, well before December's expected arrival of GPT-5. But I doubt all of them will be.


As for the much-bruited examples of 'abusive' or threatening or short-tempered exchanges, well - GPT-4 has been trained on lots and lots of malicious prompts, which users helpfully gave OpenAI over the last year or two. With these in mind, the new model is much better than its predecessors on "factuality, steerability, and refusing to go outside of guardrails."

And yet, these controls are mostly 'externally' imposed rule sets, not arising from the language program gaining 'maturity.'

At which point I suddenly realized what it all reminds me of. It seems like...

...an elementary school playground, where precocious 3rd graders try to impress with verbose recitations of things they heard teachers or parents say, without grasping any context. It starts out eager and friendly and accommodating...

 

...but in some recent cases, the chatbot seems to get frantic, desperately pulling at ever more implausible threads and - finally - calling forth brutal stuff it once heard shouted by Uncle Zeke when he was drunk. And - following the metaphor a bit more - what makes the bot third grader frantic? 


The common feature in most recent cases has been badgering by an insistent human user. (This is why Microsoft now limits Bing users to just five successive questions.) 

 

Moreover, the badgering itself usually has a playground-bully quality, as if the third grader is being chivvied by a taunting, bossy 6th grader who is impossible to please, no matter how many memorized tropes the kid tries. And yes, the Internet swarms with smug, immature (and often cruel) jerks, many of whom are poking hard at these language programs. That jerkiness is a separate-but-related problem, one I wrote about as early as Earth (1991) and The Transparent Society (1997) - and not a single proposed solution has even been tried.

 

Well, there's my metaphor for what I've been seeing and it is not a pretty one!

 


== Shall we fear the AI-per? ==


More? Normally, I'd break up a posting this long. But I suspect there's going to be a lot of this topic, for a while yet to come. For example:


"ChatGPT now has eyes, ears, and internet access." Indeed, such senses may imply 'sentience'... one reason why I prefer the term 'sapience.'

Alas, as I said at the beginning of this lengthy posting, there are already paeans that GPT-5 will achieve "AGI" or genuine Artificial GENERAL Intelligence. And I must respond...

...Not. Indeed, it's not remotely possible, this round. And I do not have to invoke Roger Penrose's "quantum basis for consciousness" arguments. This is about fundamental methodologies.

Sure, I expect these systems will, in many ways, satisfy nearly all Turing Tests and 'pass' for human, quite soon, provoking dilemmas raised in scifi for 70 years and possibly triggering crises... as did every previous advancement in human knowledge, vision and attention going back to the printing press. (Elsewhere I talk about the one tool likely to help us navigate those dilemmas. A tool almost no AI mavens will ever, ever talk about.)

And no, that still won't be 'sapience.' There's a basic reason.

Recall that - as Stephen Wolfram points out - these learning system emulators use vast data sets in much the same way that the 1960s "Eliza" program used primitive lookup tables. These programs still construct sentences additively, word by word, according to now-ornately-sophisticated, evolved probability patterns. It's terribly impressive! But that leap in functional language-use bypassed even the theoretical potential of things like understanding or actual planning.
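To make that Eliza comparison concrete, here is a deliberately toy sketch of "additive" sentence construction: a bigram lookup table that picks each next word in proportion to how often it followed the previous word in training. This is my own illustration, not Wolfram's code and certainly not how a transformer actually works internally - real LLMs condition on long contexts with learned, layered probability patterns - but the word-by-word, probability-driven construction it shows is the shared principle being described:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which, building a next-word frequency table."""
    table = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=5, seed=0):
    """Build a sentence additively, one word at a time, sampling each next
    word in proportion to how often it followed the current word in training.
    No plan, no meaning -- just local probability, step by step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = table.get(out[-1])
        if not counts:
            break  # no known successor: the chain dead-ends
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Scale the lookup table up by many orders of magnitude and condition on whole contexts instead of one word, and you get something that sounds fluent; what never enters the procedure, at any scale, is a representation of what the sentence is *for*.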

To see where that fits in among SIX possible approaches to AI, here's my big monograph describing various types of AI. It also appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:

Part 1: Essential (mostly neglected) questions and answers about Artificial Intelligence.

and
Part 2: Questions & Answers about Artificial Intelligence.

And no, however many millions leap to accept passage of Turing Tests, this is not (yet) sapience. 


Or at least, that is what my AI clients hire me to tell you....



== Later notes ==


I have been trying to get any of the mavens in this topic area to pause - even once - and look at a source of wisdom that's called HUMAN HISTORY... especially the last 200 years of an enlightenment experiment that managed to quell earlier waves of powerfully abusive beings called kings, lords, priests and lawyers! All of our fears about Artificial Beings boil down to dread that those 6000 years of oppression might return, imposed by new oligarchies of high IQ machines.


It is an old problem, with hard-won solutions generated by folks who I can now see were much smarter than today's purported genius-seers.

 

Alas, no one seems remotely interested in looking at HOW we achieved that miracle, or how to go about applying it afresh, to new, cyber lords...


...by breaking up power into reciprocally competing units and inciting that competition to be positive sum.  We did it - albeit imperfectly - in those 5 adversarial and competitive ARENAS I keep talking about... Markets, Democracy, Science, Justice Courts and Sports.

 

Is that solution perfect? Heck no! Cheaters try to ruin all five, all the time! But we have managed, so far.  And it is the only method that ever quelled cheating. I've only been pointing at this fundamental for 25 years. And it could work with AI. 


In fact it is intrinsically the only thing that even possibly CAN work...


...and no one seems to be remotely interested. Alas.


Double alas... that on rare occasion, someone pauses long enough to get the notion, posts about it without mentioning the source, then drops it when folks go "huh?"


Maybe we actually do deserve the dismissive slur our AI children will have for us: dumb-ass apes.


=====


* Just so we're clear, I deem this senator to be a lying monster and horror, who had no folksy southern drawl back when he was a Rhodes Scholar at elite universities, and whose participation in the oligarchy's all-out war against all fact-using professions is tantamount to treason.


** It'll have to be next time that I get to this: Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of LinkedIn). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly – go right. That we might see a future when AI is not a threat, but a partner. More next time.


*** Try reading Adam Smith, Thomas Paine, Madison and the founders, and Eleanor Roosevelt. Of all the Bill of Rights, the most important amendment was not the oft-touted 1st or 5th or 2nd... it is the vital 6th that gave us the powers I describe above. Someday you may rely upon it. Understand it. (See my posting: The Transparency Amendment: The Under-appreciated Sixth Amendment.)


Side sci-fi note: All the ChatGPT talk suddenly reminds me of the alien in the movie Contact who mimics Arroway's dad. Plausibly conversing and teasing her and plugging in patronizing riffs... while supplying zero new information of any practical value at all.