While striving to find time to finish a first draft of my big new book on AI - (isn't everyone writing one?) - I figure we'll take a break from political/social upheavals, in order to talk about a topic that's on (or inside) everyone's minds.
== Are we amid a thought revolution right now? ==
As usual, Nathan Gardels offers a thought-provoking essay in Noema Magazine, this time about how AI may enable humans to clamber up to higher levels of consciousness, as appeared to happen during a ‘leap’ in philosophical awareness called the “Axial Age” – when Buddha, Socrates, and other sages all seemed to emerge within roughly a single generation, across Eurasia. And possibly other areas, without historical note. Gardels suggests we may be embarking on another, greater leap, if AI tools allow us to augment our prodigious neo-cortexes to new levels of awareness and sensibility.
In fact, the two events cited by Gardels… a past “Axial Age” and a looming "AI-xial Age"... are clearly not alone. For example, about 40,000 years ago humanity's tool sets -- and presumably language and art -- took huge leaps ahead in just a few centuries, maybe less.
Then there's Julian Jaynes. Although his brilliantly entertaining tome The Origin of Consciousness in the Breakdown of the Bicameral Mind is now dismissed by scholars as quaint, that’s unfair, since clearly something happened in the era of which he wrote, including the till-now ill-explained Bronze Age Collapse.
In my novel EXISTENCE I argue that we've had at least seven such major self-reprogrammings, and that we were already in another one, even without AI.
Gardels continues with an excellent thought: "AI-driven planetary-scale computation has enabled us to conceive of climate change — a phenomenon that could not be apprehended by the human mind alone"...
Absolutely! In fact, I deem it likely that failure to perceive ecological self-destruction may be one of the top contending theories for the Fermi Paradox. Perceiving and cancelling such failure modes in time may be rare... as Jared Diamond discusses in his great book COLLAPSE.
"Perhaps an approach that explores how consciousness expands and unfolds..."
We know the answer to that. It unfolds under competitive pressure - NATURE's tool for 4 billion years. Our enlightenment tames competition to be less bloody, and more productive and rapid, than raw nature. Nevertheless, the underlying fact remains: competition drives the unfolding.
So how do we get that tool – reciprocally vigorous competition - to work peacefully and well for us? Well enough to deliver sane, decent AI consciousnesses? I discuss that elsewhere.
== More on the hottest topic: Can we replicate consciousness… and should we? ==
Want other Noema articles on AI? This one - The Danger of Superhuman AI is not what you think - is a long and eloquent whine, contributing absolutely nothing useful.
OTOH this one is much better! AI Could Actually Help Rebuild the Middle Class.
"By shortening the distance from intention to result, tools enable workers with proper training and judgment to accomplish tasks that were previously time-consuming, failure-prone or infeasible. Conversely, tools are useless at best — and hazardous at worst — to those lacking relevant training and experience. A pneumatic nail gun is an indispensable time-saver for a roofer and a looming impalement hazard for a home hobbyist.
"For workers with foundational training and experience, AI can help to leverage expertise so they can do higher-value work. AI will certainly also automate existing work, rendering certain existing areas of expertise irrelevant. It will further instantiate new human capabilities, new goods and services that create demand for expertise we have yet to foresee. ... AI offers vast tools for augmenting workers and enhancing work. We must master those tools and make them work for us."
Well... maybe. But if the coming world is zero-sum, then either machine+human teams, or else just machines that are better at gathering and exploiting resources, will simply 'win.' Hence the question that is never asked -- and that is the only crucial one -- is:
"Can conditions and incentives be set up, so that the patterns that are reinforced are positive-sum for the greatest variety of participants, including legacy-organic humans and the planet?"
You know where that always leads me - to the irony that positive-sum (PS) systems tend to be inherently competitive, though under fairness rule-sets that we've witnessed achieving PS over the last couple of centuries.
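To make the zero-sum/positive-sum contrast concrete, here is a tiny toy sketch in Python - purely my illustration, with made-up numbers and an assumed 'pie growth' rule, not drawn from any cited source:

```python
# Toy model: two agents share a resource "pie."
# Zero-sum: one side's gain is exactly the other's loss; the total never grows.
# Positive-sum: a fairness rule-set lets exchange grow the pie before splitting it.

def zero_sum_round(pie, grab_share):
    # One agent grabs a share; the other gets the remainder. Total is fixed.
    a = pie * grab_share
    b = pie - a
    return a, b

def positive_sum_round(pie, growth=0.25):
    # Fair, rule-bound exchange grows the pie (assumed growth rate), then
    # splits it evenly - both sides end up with more than a pure 50/50 grab.
    pie *= (1 + growth)
    return pie / 2, pie / 2

az, bz = zero_sum_round(100, 0.7)
ap, bp = positive_sum_round(100)
print(az + bz)  # 100.0 -- total unchanged; winning means someone else lost
print(ap + bp)  # 125.0 -- total grew; competition under fair rules pays both sides
```

The point of the sketch: under zero-sum conditions the only strategy is to out-grab, while positive-sum rule-sets reward participation by the widest variety of players.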
Which leads us to: A new study by Caltech and UC Riverside uncovers the hidden toll that AI exacts on human health throughout its production and use, from chip manufacturing to data center operation.
And this re my alma mater: Caltech researchers have developed brain–machine interfaces that can interpret data from neural activity even after the signal from an implant has become less clear.
(Side note: I’ll keep letting slip – by accident – that Caltech gave me this honor. Hey. Credibility is relevant.)
Though… my kids remind me that the server farms delivering “AI” LLMs and other golem pre-entities are rapidly approaching bitcoin mining in their deleterious effects, heating up the planet.
== And more on the topic? ==
A new consciousness book by Micah Blumberg -- "Bridging Molecular Mechanisms and Neural Oscillatory Dynamics" -- introduces "Self Aware Networks: Theory of Mind," a new framework for understanding how the experience of being someone arises from neural activity.
And sci fi history scholar Tom Lombardo is running one of his extensive, multi-part deep-dives into a topic, this time highly pertinent: The Future Evolution of Consciousness Webinar.
Conscium claims to be “the world’s first applied AI consciousness research organisation.” From it, my friend from the UK (via Panama) Calum Chace sent me this long paper re “Principles for Responsible AI Consciousness Research”. He is deeply involved with PRISM – “The Partnership for Research Into Sentient Machines” (promoting responsible research into AI consciousness) – which recruited me for their board.
Though in fact I truly cannot say which – if any – of these endeavors is on a ‘right track’ toward humanity’s soft and happy landing with our new children of the mind. While detailed and illuminating, that PDF – for example - remains vague about the most important things, like how to tell ‘consciousness’ from a system that feigns it… and whether that matters.
Broadening a bit into speculation… Take the exploratory dive into “what is consciousness?” that you’ll find in Peter Watts’s novel “Blindsight”… wherein Watts makes the case that a sense of self is not necessarily needed in order for a being to behave in a way that is actively intelligent, communicative and even ferociously self-interested.
All you need is evolution. And an overall system in which evolution remains (as in nature) zero-sum.
== And again, my own highly… different … takes on AI and dangerous clichés… like how best to navigate the future by using tools we already have! ==
While some – like Nathan Gardels and even more so Reid Hoffman and Ray Kurzweil – suggest a coming AI-propelled apotheosis – or AIpotheosis (did I coin it first?) – from merging and expanding what it means to be human…
… others, like Eliezer Yudkowsky, publish jeremiads of hand-wringing like the self-explanatorily titled If Anyone Builds It, Everyone Dies.
What is NOT helpful is the dismal mental laziness of Sam Altman and all the other Masters of The World, each of whom believes he (always ‘he’) is the Robur of Robots (look up the Verne reference!). A clade of tech wizards whose lazy assumptions about AI format could wind up proving Eliezer right.
And so, I conclude by reiterating…
My WIRED article breaks free of the three standard 'AI-formats' that can only lead to disaster, suggesting instead a 4th: that AI entities can only be held accountable if they have individuality... even 'soul'...
And still more pertinent than ever… my related NEWSWEEK op-ed dealt with 'empathy bots' that feign sapience and personhood.
Want it laid out in more vivid detail? My keynote at the huge May 2024 RSA Conference in San Francisco is now available online: “Anticipation, Resilience and Reliability: Three ways that AI will change us… if we do it right.”
And you'll get it all, plus a dozen missing contexts that the robo-roburs have been utterly ignoring, in my coming AI book. That is, if those new children of humanity allow it to appear in time.
1 comment:
The obvious solution to 'heating up the world' is to build space-based data and compute centers. Unbridled scaling and automation are becoming our new gods.