
Sunday, May 25, 2025

Science as the ultimate accountability process - And are AI behaving this way because they're DREAMING?

The power of Reciprocal Accountability

 

Is there a best path to getting both individuals and societies to behave honestly and fairly?


That goal -- attaining fact-based perception -- was never much advanced by the ‘don’t lie’ commandments of finger-wagging moralists and priests. 


Sure, for 6000 years, top elites preached and passed laws against lies and predation... only to become the top liars and self-deceivers, bringing calamities down upon the nations and peoples that they led.


Laws can help. But the ’essential trick’ that we’ve gradually become somewhat good at is reciprocal accountability (RA)… keeping an eye on each other laterally and speaking up when we see what we perceive as mistakes. 

It was recommended by Pericles around 430 BCE… then later by Adam Smith and the founders of our era. Indeed, humanity only ever found one difficult but essential trick for getting past our human yen for lies and delusion. 

Yeah, sometimes it’s the critic who is wrong! Still, one result is a system that’s open enough to spot most errors – even those by the mighty – and criticize them (sometimes just in time and sometimes too late) so that many get corrected. We aren’t yet great at it! Though better than all prior generations. And at the vanguard in this process is science.


Sure, scientists are human and subject to the same temptations to self-deceive or even tell lies. In training*, we are taught to recite the sacred catechism of science: “I might be wrong!” That core tenet – plus piles of statistical and error-checking techniques – made modern science different – and vastly more effective (and less hated) -- than all or any previous priesthoods. Still, we remain human. And delusion in science can have weighty consequences.


(*Which may help explain the oligarchy's current all-out war against science and universities.)


Which brings us to this article that begins with a paragraph that’s both true and also WAY exaggerates! Still, the author, Chris Said, poses a problem that needs an answer: Should scientific whistleblowers be compensated for their service?


He notes, “Science has a fraud problem. Highly cited research is often based on faked data, which causes other researchers to pursue false leads. In medical research, the time wasted by followup studies can delay the discovery of effective treatments for serious diseases, potentially causing millions of lives to be lost.”


As I said: that’s an exaggeration – one that feeds into today’s Mad Right in its all-out war vs. every fact-using profession. (Not just science, but also teaching, medicine and law and civil service... all the way to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.) The examples that he cites were discovered and denounced BY science! And the rate of falsehood is orders of magnitude less than in any other realm of human endeavor.


Still, the essay is worth reading for its proposed solution. Which boils down to: do more reciprocal accountability, only do it better!


The proposal would start with the powerful driver of scientific RA – the fact that most scientists are among the most competitive creatures that this planet ever produced – nothing like the lemming, paradigm-hugger disparagement-image that's spread by some on the far-left and almost everyone on today’s entire gone-mad right.  


Only this author proposes we then augment that competitiveness with whistleblower rewards, to incentivize the cross-checking process with cash prizes.


Hey, I am all in favor! I’ve long pushed for stuff like this since my 1998 book The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom? 

      And more recently my proposal for a FACT Act

      And especially lately, suggesting incentives so that Artificial Intelligences will hold each other accountable (our only conceivable path to a ’soft AI landing.’) 


So, sure… Worth a look.



== A useful tech rule-of-thumb? ==


Do you know the “hype cycle curve”? That’s an observational/pragmatic correlation tool devised by Gartner in the 90s, for how new technologies often attract heaps of zealous attention, followed by a crash of disillusionment, when even the most promising techs encounter obstacles to implementation, and many just prove wrong. This trough is followed, in a few cases, by a more grounded rise in solid investment, as productivity takes hold. (It happened repeatedly with railroads and electricity.) The inimitable Sabine Hossenfelder offers a podcast about this, using recent battery tech developments as examples. 


The takeaways: yes, it seems that some battery techs may deliver major good news pretty soon. And remember this ‘hype cycle’ thing is correlative, not causative. It has almost no predictive utility in individual cases.


But the final take-away is also important. That progress IS being made! Across many fronts and very rapidly. And every single thing you are being told about the general trend toward sustainable technologies by the remnant, withering denialist cult is a pants-on-fire lie. 


Take this jpeg I just copied from the newsletter of Peter Diamandis, re: the rapidly maturing tech of perovskite-based solar cells, which have a theoretically possible efficiency of 66%, double that of silicon. 


(And many of you first saw the word “perovskite” in my novel Earth, wherein I pointed out that most high-temp superconductors take that mineral form… and so does most of the Earth’s mantle. Put those two together! As I did, in that novel.)

Do subscribe to Peter’s Abundance Newsletter, as an antidote to the gloom that’s spread by today’s entire right and much of today’s dour, farthest-fringe-left. The latter are counter-productive sanctimony junkies, irritating but statistically unimportant as we make progress without much help from them.


The former are now a science-hating treason-cult that’s potentially lethal to our civilization and world and our children. And for those neighbors of ours, the only cure will be victory – yet again, and with malice toward none – by the Union side in this latest phase of our recurring confederate fever. 



== A final quirky thought ==


Has anyone else noticed how many traits of AI chat/image-generation etc. - including the delusions, the weirdly logical illogic, and counter-factual internal consistency - are very similar to dreams?


Addendum: When (seldom) a dream is remembered well, the narrative structure can be recited and recorded. A hundred years of Freudian analysts have amassed a vast store of such recitations that could be compared to AI-generated narratives. Somebody unleash the research!


Oh, and a hilarious SMBC. Read ’em all.


===

===


===================================

Add-in stuff. Warning! RANT MODE IS ON!

===================================


It bugs me: all the US civil servants making a 'gesture' of resigning, when doing so undermines the standing of the Civil Service Act, under which they can demand to be fired only for cause. Better to stay and work to rule, stymieing the loony political appointees, as in YES, MINISTER.

 

Or moronic media who are unable to see that most of the firings are for show, to distract from the one set that matters to the oligarchs. Ever since 2021 they have been terrified of the Pelosi bill that fully funded the starved and bedraggled IRS for the first time in 30 years. The worst oligarchs saw jail - actual jail - on the horizon and are desperate to cripple any looming audits. All the other 'DOGE' attacks have that underlying motive: to distract from what foreign and domestic oligarchs actually care about.

 

Weakening the American Pax - which gave humanity by far its greatest & best era - IS the central point. Greenland is silliness, of course. The Mercator projection makes DT think he'd be making a huge Louisiana Purchase. But he's too cheap to make the real deal... offer each Greenland native $1 million. Actually, just 55% of the voters. That'd be about $20 billion. Heck, it's one of the few things where I hope he succeeds. Carve his face on a dying glacier.

 

Those mocking his Canada drool are fools. Sure, it's dumb and Canadians want no part of it. But NO ONE I've seen has simply pointed out... that Canada has ten provinces - every one far more populous than Greenland - plus three territories. Eight of the ten would be blue and the other two are Eisenhower or Reagan red and would tire of DT, fast. So, adding Greenland, that's FOURTEEN new states, none of which would vote for today's Putin Party. That one fact would shut down MAGA yammers about Canada instantly.

 

Ukraine is simple: Putin is growing desperate and is demanding action from his puppet.  I had fantasized that Trump might now feel so safe that he could ride out any blackmail kompromat that Vlad is threatening him with. But it's pretty clear that KGB blackmailers run the entire GOP.




Saturday, May 17, 2025

AI and consciousness -- and a positive-sum tomorrow

Returning to the AI Wars 


Getting back toward – though many would say not yet into – my 'lane,' let’s revisit the ongoing Great Big AI Panic of 2025. The latter half of this missive (below) lays our problem out as simply and logically as I can.


But for starters, two links:

 

1.  I’ve long touted Noēma Magazine for insightful essays offered by chief editor Nathan Gardels. Here are Noēma’s top reads for 2024. Several deal with AI – insightful and informative, even when I disagree. I’ll be commenting on several of the essays, further down.

 

2. Here's recent news -- and another Brin 'I told you so!' OpenAI's new model tried to avoid being shut down. In an appraisal of 'AI scheming,' safety evaluations found that model o1 'attempted to exfiltrate its weights' when it thought it might be shut down and replaced with a different model.       

 

It's a scenario presented in many science fiction tales, evoking either dread or sympathy. Or both at once, as garishly displayed in the movie Ex Machina.

 

Alas, the current AI industry displays utter blindness to a core fact: that Nature's 4 billion years - and humanity's 6000-year civilization - reveal the primacy of individuation... 


 …division of every species, or nation, into discrete individual entities, who endeavor to propagate and survive. And if we truly were smart, we'd use that tendency to incentivize positive AI outcomes, instead of letting every scifi cliché come true out of dullard momentum. As I described here. 



== Your reading assignments on AI… or to have AI read for you? ==


Among those Noema articles on AI, this one is pretty good.

 AI Could Actually Help Rebuild the Middle Class


"By shortening the distance from intention to result, tools enable workers with proper training and judgment to accomplish tasks that were previously time-consuming, failure-prone or infeasible. 

"Conversely, tools are useless at best — and hazardous at worst — to those lacking relevant training and experience. A pneumatic nail gun is an indispensable time-saver for a roofer and a looming impalement hazard for a home hobbyist. 


"For workers with foundational training and experience, AI can help to leverage expertise so they can do higher-value work. AI will certainly also automate existing work, rendering certain existing areas of expertise irrelevant. It will further instantiate new human capabilities, new goods and services that create demand for expertise we have yet to foresee. ... AI offers vast tools for augmenting workers and enhancing work. We must master those tools and make them work for us."

Well... maybe. 

 

But if the coming world is zero-sum, then either machine+human teams or else just machines who are better at gathering resources and exploiting them will simply 'win.' 


       Hence the crucial question that is seldom asked:

       "Can conditions and incentives be set up, so that the patterns that are reinforced are positive-sum for the greatest variety of participants, including legacy-organic humans and the planet?"

You know where that always leads me - to the irony that positive-sum systems tend to be inherently competitive, though under fairness rule-sets that we've witnessed achieving PS over the last couple of centuries.

 

In contrast, alas, this other Noēma essay about AI is a long and eloquent whine, contributing nothing useful.

 


== Let’s try to parse this out logically and simply ==

 

I keep coming back to the wisest thing ever said in a Hollywood film: by Clint Eastwood as Dirty Harry in Magnum Force.

 

"A man's got to know his limitations."

 

Among all of the traits we see exhibited in the modern frenzy over AI, the one I find most disturbing is how many folks seem so sure they have it sussed! They then prescribe what we 'should' do, via regulations, or finger-wagged moralizings, or capitalistic laissez faire…

     … while ignoring the one tool that got us here. 

     …. Reciprocal Accountability. 

 

Okay. Let's parse it out, in separate steps that are each hard to deny:

 

1. We are all delusional to some degree, mistaking subjective perceptions for objective facts. Current AIs are no exception... and future ones likely will remain so, just expressing their delusions more convincingly.

 

2. Although massively shared delusions happen - sometimes with dire results - we do not generally have identical delusions. And hence we are often able to perceive each other’s, even when we are blind to our own. 

         Though, as I pointed out in The Transparent Society, we tend not to like it when that gets applied to us.

 

3. In most human societies, one topmost priority of rulers was to repress the kinds of free interrogation that could break through their own delusions. Critics were silenced. 

          One result of criticism-suppression was execrable rulership, explaining 6000 years of hell, called "history."

 

4. The foremost innovations of the Enlightenment -- the ones that enabled us to break free of feudalism's fester of massive error -- were social flatness accompanied by freedom of speech.

         The top pragmatic effect of this pairing was to deny kings and owner-lords and others the power to escape criticism. This combination - plus many lesser innovations, like science - resulted in more rapid, accelerating discovery of errors and opportunities.

 

5. This natural tendency to evade criticism is already observed in artificial intelligences. So, should we expect more developed AI to be any different? 

         Again: OpenAI's new model tried to avoid being shut down

         SciFi can tell you where that goes. And it’s not “machines of loving grace.”    

 

6. Above all, there is no way that organic humans or their institutions will be able to parse AI-generated mentation or decision-making quickly or clearly enough to make valid judgements about them, let alone detect their persuasive, but potentially lethal, errors. 

 

We are like elderly grampas who still control all the money, but are trying to parse newfangled technologies, while taking some teenage nerd’s word for everything. New techs that are -- like the proverbial 'series of tubes' -- far beyond our direct ability to comprehend.

 

Want the nightmare of braggart un-accountability? To quote old HAL 9000: “The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

 

Fortunately there are and will be entities who can keep up with AIs, no matter how advanced! The equivalent of goodguy teenage nerds, who can apply every technique that we now use, to track delusions, falsehoods and potentially lethal errors. You know who I am talking about.

 

7. Since nearly all enlightenment, positive-sum methods harness competitive reciprocal accountability...

   ... I find it mind-boggling that no one in the many fields of artificial intelligence is talking about applying similar methods to AI. 


Of the entire suite of effective methodologies that gave us this society - from whistleblower rewards to adversarial court proceedings, to wagering, to NGOs, to streetcorner jeremiads - not one has appeared in any of the recommendations pouring from the geniuses who are bringing these new entities -- AIntities -- to life, far faster than we organics can possibly adjust.

 

Given all that, it would seem that some effort should go into developing incentive systems that promote reciprocal and even adversarial activity among AIntities (a toy sketch follows, below). 

       Rivals who might earn rewards and/or resources via ever-improving abilities to track each other

     … incentivized to denounce likely malignities or mistakes…

     … and to become ever better at explaining to us the critical moral choices we must still make.

 

It is, after all, exactly the method that we already use

     … in order to get the best outcomes out of already-existing feral/predatory and supremely genius-level language systems called lawyers…

     … by siccing them onto each other. 

 

The parallels with existing methods would seem to be exact and already perfectly laid out... 

     … and I see no sign at all that anyone is even glancing at the enlightenment methods that have actually worked. So Far.



== He's baaack... with more happy thoughts ==


Oh, what a typical Yudkowsky ejaculation! Here is Eliezer (and co-pilot) at his best.


If Anyone Builds It, Everyone Dies.


Oh, gotta hand it to him; it's a great title! I've seen earlier screeds that formed the core of this doomsday tome. And sure, the warning should be weighed and taken seriously. Eliezer is nothing if not brainy-clever.



In fact, if he is right about fully godlike AIs being inevitably lethal to their organic makers, then we have a high-rank 'Fermi hypothesis' to explain the empty cosmos! Because if AI can be done, then the only way to prevent it from happening - in some secret lab or basement hobby shop - would be an absolute human dictatorship, on a scale that would daunt even Orwell. 


Total surveillance of the entire planet.
... Which, of course, could only really be accomplished via state-empowerment of... AI! 


From this, the final steps to Skynet would be trivial, either executed by the human Big Brother himself (or the Great Tyrant herself), or else by The Resistance (as in Heinlein's THE MOON IS A HARSH MISTRESS). And hence, the very same Total State that was made to prevent AI would then become AI's ready-made tool-of-all-power.

To be clear: this is exactly and precisely the plan currently in-play by the PRC Politburo. 


It is also the basis-rationale for the last book written by Theodore Kaczynski - the Unabomber - which he sent to me in draft - demanding an end to technological civilization, even if it costs 9 billion lives.

What Eliezer Yudkowsky never, ever, can be persuaded to regard or contemplate is how clichéd his scenarios are. AI will manifest as either a murderously-oppressive Skynet (as in Terminator, or past human despots), or else as an array of corporate/national titans forever at war (as in 6000 years of feudalism), or else as blobs swarming and consuming everywhere (as in that Steve McQueen film)... 


...the Three Classic Clichés of AI -- all of them hackneyed from either history or movie sci fi or both -- that I dissected in detail, in my RSA Conference keynote.  

What he can never be persuaded to perceive - even in order to criticize it - is a 4th option. The method that created him and everything else that he values. That of curbing the predatory temptations of AI in the very same way that Western Enlightenment civilization managed (imperfectly) to curb predation by super-smart organic humans.

The... very... same... method might actually work. Or, at least, it would seem worth a try. Instead of Chicken-Little masturbatory ravings that "We're all doooooomed!"


----


And yes, my approach #4... that of encouraging AI reciprocal accountability, as Adam Smith recommended and the way that we (partly) tamed human predation... is totally compatible with the ultimate soft landing we hope to achieve with these new beings we are creating. 


Call it format #4b. Or else the ultimate Fifth AI format that I have shown in several novels and that was illustrated in the lovely Spike Jonze film Her... to raise them as our children.


Potentially dangerous, when teenagers, but generally responsive to love, with love. Leading perhaps to the finest envisioned soft landing of them all. Richard Brautigan's "All watched over by Machines of Loving Grace."




 


Sunday, October 27, 2024

Science as the ultimate accountability process

Before getting into Science as the ultimate accountability process, let me allow that I am biased in favor of this scientific era!  Especially after last weekend when Caltech - my alma mater - honored me - along with three far-more-deserving others - as Distinguished Alumnus.  Seems worth noting. Especially since it is one honor I truly never expected!


You  readers of Contrary Brin might be surprised that, with the crucial US election looming, I'm gonna step back from cliff-edge politics, to offer some Big Picture Perspective about how science works... and civilization, in general. 


But I think maybe perspective is kinda what we need, right now.



== How did we achieve the flawed miracle that we now have... and take too much for granted? ==


All the way back to our earliest records, civilization has faced a paramount problem. How can we maintain and improve a decent society amid our deeply human propensity for lies and delusion? 


As recommended by Pericles around 430 BCE… then later by Adam Smith and the founders of our era… humanity has only ever found one difficult but essential trick that actually works at freeing leaders and citizens to craft policy relatively - or partially - free from deception and falsehoods. 


That trick is NOT preaching or ‘don’t lie’ commandments. Sure, for 6000 years, top elites finger-wagged and passed laws against such stuff... only to become top liars and self-deceivers! Bringing calamities down upon the nations and peoples that they led.


Laws can help. But the truly ’essential trick’ that we’ve gradually become somewhat good-at is Reciprocal Accountability … freeing rival powers and even average citizens to keep an eye on each other laterally. Speaking up when we see what we perceive as lies or mistakes.


== How we've done this... a method under threat! ==

Yeah, sometimes it’s the critic who is wrong, and conventional wisdom can be right!  

Indeed, one of today's mad manias is to assume that experts - who spent their lives studying a topic closely - must be clueless compared to those who are 'informed' by Facebook memes and cable news rants.

Still, Criticism Is the Only Known Antidote to Error (CITOKATE!)...

...and one result of free speech criticism is a system that’s open enough to spot most errors – even those by the mighty – and criticize them (sometimes just in time and sometimes too late) so that many (never all!) of them get corrected. 

We aren’t yet great at it! Though better than all prior generations. And at the vanguard in this process is science.


== The horrible, ingrate reflex is NOT 'questioning authority' ==

Sure, scientists are human and subject to the same temptations to self-deceive or even tell lies. We who were trained in a scientific field (or two or three) were taught to recite the sacred catechism of science: “I might be wrong!” 


That core tenet – plus piles of statistical and error-checking techniques – made modern science different – and vastly more effective (and less hated) -- than all or any previous priesthoods. Still, we remain human. And delusion in science can have weighty consequences.


Which brings us to this article by Chris Said: "Scientific whistleblowers can be compensated for their service."  It begins with a paragraph that’s both true and also way exaggerates!  Still, the author poses a problem that needs an answer:


“Science has a fraud problem. Highly cited research is often based on faked data, which causes other researchers to pursue false leads. In medical research, the time wasted by followup studies can delay the discovery of effective treatments for serious diseases, potentially causing millions of lives to be lost.”


As I said: that’s an exaggeration – one that feeds into today’s Mad Right, in its all-out war vs. every fact-using profession. (Not just science, but also teaching, medicine and law and civil service... all the way to the heroes of the FBI/Intel/Military officer corps who won the Cold War and the War on terror.) 


Still, the essay is worth reading for its proposed solution. Which boils down to: do more reciprocal accountability, only do it better!

The proposal would start with the fact that most scientists are competitive creatures! Among the most competitive that this planet ever produced – nothing like the lemming, paradigm-hugger stereotype spread by some on the far-left... and by almost everyone on today’s entire gone-mad right. 


Only this author proposes that we then augment that competitiveness with whistleblower rewards**, to incentivize the cross-checking process with cash prizes.

Hey, I'm all in favor! I’ve long pushed for stuff like this since my 1998 book The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom? 


...and more recently my proposal for a FACT Act...


...and especially lately, suggesting incentives so that Artificial Intelligences will hold each other accountable (our only conceivable path to a ’soft AI landing.’) 


So, sure… the article is worth a look - and more discussion. 


Just watch it when yammerers attack science in general with the 'lemming' slander. Demand cash wagers over that one!



== A useful tech rule-of-thumb? ==


Do you know the “hype cycle curve”? That’s an observational/pragmatic correlation tool devised by Gartner in the 90s, for how new technologies often attract heaps of zealous attention, followed by a crash of disillusionment, when even the most promising techs encounter obstacles to implementation, and many just prove wrong. 


That trough is followed, in a few cases, by a more grounded rise in solid investment, as productivity takes hold. (It happened repeatedly with railroads and electricity and later with computers and the Internet and seems to be happening with AI.) The inimitable Sabine Hossenfelder offers a podcast about this, using recent battery tech developments as examples. 


Your takeaways: yes, it seems that some battery techs may deliver major good news pretty soon. And remember this ‘hype cycle’ thing is correlative, not causative. It has almost no predictive utility in individual cases.


But the final take-away is also important. That progress is being made! Across many fronts and very rapidly. And every single thing you are being told by the remnant denialist cult about the general trend toward sustainable technologies is a damned lie.


Take this jpeg I just copied from the newsletter of Peter Diamandis, re: the rapidly maturing tech of perovskite-based solar cells, which have a theoretically possible efficiency of 66%, double that of silicon. (And many of you first saw the word “perovskite” in my novel Earth, wherein I pointed out that most high-temp superconductors take that mineral form… and so does most of the Earth’s mantle. Put those two together!)


Do subscribe to Peter’s Abundance Newsletter, as an antidote to the gloom that’s spread by today’s entire gone-mad-right and by much of today’s dour, farthest-fringe-left. 


The latter are counter-productive sanctimony junkies, irritating but statistically unimportant as we make progress without much help from them.


The former are a monstrously insane, science-hating treason-cult that’s potentially lethal to our civilization and world and our children. And for those mouth-foaming neighbors of ours, the only cure will be victory – yet again, and with malice toward none – by the Union side in this latest phase of our recurring confederate fever. 


======


** The 1986 whistleblower law, enticing tattle-tales with up to 30% cuts of any $$ recovered for the US taxpayers, has just been gutted by a Trump-appointed (and ABA 'not-qualified') judge. Gee, I wonder why?