Thursday, October 29, 2015

Science Fiction vs Reality

Science Fiction fans: Site selection for the World Science Fiction Convention has chosen Helsinki, Finland for Worldcon 75, to be held in August 2017. Next up is MidAmeriCon II, the 74th Worldcon, which will be held in Kansas City in August 2016. Mark your calendars!

== Science Fiction vs the real world ==

Alien megastructures? NASA's Kepler space telescope observed dramatic and irregular dimming in the light curve of the star KIC 8462852 -- larger than expected for a transiting planet (with data analysis aided by the citizen scientists of Planet Hunters). While radio astronomers listen for signals with the Allen Telescope Array, media latched onto suggestions that an alien-built mega-engineering structure, such as a Dyson Sphere, could be responsible. On Space.com, several Sci-Fi authors weigh in with appropriate skepticism. More likely causes may be debris from a wide-scale planetary collision or a massive comet cloud -- or we may be seeing an irregularly shaped and rapidly spinning star, as explained in this article about "gravity darkening."

When Robots Colonize the Cosmos, Will They Be Conscious? This article on Space.com by Robert Lawrence Kuhn, host of the public television show Closer to Truth, takes a deep look at the nature of consciousness. Kuhn envisions a universe colonized by robots, and suggests that "if robots can never be conscious, then we humans might have some kind of moral imperative to venture forth" ...for the greater good. As I discuss in my Smithsonian talk: Will we meet beings with minds different from ours? Or will we make them?

A terrific interview with my revered peers Nancy Kress, Ramez Naam, Frank Catalano and former astronaut and asteroid pioneer Ed Lu, about "Why this futurist, sci-fi writer, and former astronaut are optimistic about the future."

Sci Fi or real life? Six science fictional ideas that are happening now... takes a look at innovations in hoverboards, under-the-skin GPS tracking and body scanners. 

See ten technologies that are precursors to the realization of true AI, including Stephen Wolfram's Mathematica, IBM's Watson and Amazon Machine Learning. Indeed, Artificial Intelligence is increasingly transforming the financial industry, especially stock trading. Here's an interesting look at portrayals of Artificial Intelligence in 25 Science Fiction novels, from Daemon to Excession and Hyperion to The Diamond Age.

See also this summary of AI in film, from Wall-E to Her, Tron to The Machine, The Matrix to 2001: A Space Odyssey.

The reason for so many dire visions... Rather than predicting the future, the greatest aim of Science Fiction is the self-defeating prophecy.

While nanotechnology in movies is usually used for evil purposes, researchers are developing drug-carrying nanoparticles disguised as blood cells that can slip past the immune system to deliver their payload to targeted parts of the body.

Johnny Mnemonic? People are getting brain implants to boost their memory -- apparently just a few zaps to stimulate the brain!

See how real-world rockets compare to their Sci Fi counterparts in this chart of top space travel speeds. Sci Fi clearly wins...

What the economies of Star Trek can teach us about the real world: A fun discussion of the economics of the post-scarcity world of Star Trek -- a topic explored in more depth in the upcoming book, Trekonomics by Manu Saadia, to be released in 2016.

Eternal life online? The old Max Headroom TV show did this back in the 1980s. Now Eter9 promises digital immortality, using a kind of artificial intelligence to scan your online posts. After death, it will continue posting for you... so you can live forever on Facebook.

Self-driving cars are nearly here. Meanwhile, New York is already getting wired with traffic signals and signs that can talk directly to cars.

Terminator star and former California Republican governor Arnold Schwarzenegger says that climate change is NOT science fiction.

== Sci Fi Shorts and movies ==

I like this decryption of The Matrix, proving that Neo was never "The One," after all (even if his name is an anagram for One). This is my kinda meme meddler! 

Under development at Amazon ... Galaxy Quest, the TV series. Did anyone ever tell them that comedy is hard?

What if you could manipulate reality? Take a look at this lovely, hyper-inexpensive sci fi short, The One-Minute Time Machine.

An amazingly cogent, entertaining and totally on-target dissection of one of the greatest motion pictures of all time -- GHOSTBUSTERS -- by Moviebob (Bob Chipman). I only rarely see a critic cover every single point that I would have made about a work of art. But Moviebob gets down to it, completely nailing why this is one of the greatest accomplishments in the history of cinema, and why Ghostbusters is more pertinent than ever to our times.

Speaking of which, the cool indie Trek film "Axanar" tells the story of Captain Garth and his crew during the Four Years War, the war with the Klingon Empire that almost tore the Federation apart.  Garth's victory at Axanar solidified the Federation and allowed it to become the entity we know in Kirk's time. It is the year 2245... and the war with the Klingons ends here. 

Star Trek is the one major-media sci fi mythic system that builds our confidence in the future... rather than tearing it down. Think about that and give your support. (Though as I've said repeatedly, the one glaring Trek omission is any mention (except in one episode of ST: The Original Series) of the conquered planets and races inside Klingon territory.)

A fun trailer for a film about… well… uplifted dogs getting organized?  Worth a watch in its own right. White God, a Hungarian movie, was released in 2014.

True Confessions time? The movie Battleship should've been fun... but it was utter nonsense, with dismal dialogue and storytelling. But dang if it wasn't a hoot in its battle-visuals. And the whirling flywheel weapon was the coolest innovation in what-if war machinery I've seen in a long time. You should be able to slum occasionally, and enjoy a work for its positive qualities. Or am I just rationalizing?

Coming in December... SyFy's adaptation of "Childhood's End" -- Arthur C. Clarke's classic novel. If it's been a while, give the book a second read before the series!

The movie 2001: A Space Odyssey provided a compelling vision of the future. Now, a beautiful new book, The Making of Stanley Kubrick's 2001: A Space Odyssey, has just been released -- a lavish portfolio-style hardcover with behind-the-scenes photographs and detailed descriptions of the filming of Arthur C. Clarke's masterpiece. See it reviewed on Slate.

== SF Miscellanea ==

Which was the world's first science fiction convention? There was an informal gathering in Philadelphia in 1936, but the first pre-planned and formal event happened in Leeds, England in 1937, and guests included a young Arthur C. Clarke. See a fun article about the early history of Sci Fi fandom.

Thug Notes: A cute series does literary analysis in hood-thug talk. Sure, from one angle it’s kinda offensive.  On the other hand, who am I to judge? It’s a form of expression and done super-cleverly.  This episode analyzes Frank Herbert's Dune.

33 comments:

raito said...

AI in fiction? Sorry, the link goes to one of those smarmy top XX lists, this time of AI in movies. And no real commentary, or brains. Blade Runner as AI? Where? Actual living brains aren't AI, even if human-created. That mostly also leaves out RoboCop, at least for the main character. And no Colossus. Or Silent Running.

There are much better AIs in fiction than in that list, for sure. Even The Octagon had better AI than most of those movies. Adolescence of P-1 (which was apparently made into a movie)? The Two Faces of Tomorrow? When HARLIE Was One? The Moon Is a Harsh Mistress (also going to be a movie, but I'll believe it when I see it). Heck, even the Berserkers are better.

Anonymous said...

I loved the Ghostbusters movies, but for my money the Back to the Future trilogy is a greater example of the excellence of American cinema: self-referential scenes that aren't lazy recycling, an almost perfectly Campbellian hero story arc, and great chemistry between all the actors. I recently rewatched them all in a row with my wife (who had not seen them before) and was struck by how well they hold up after 30 years. BTTF and Bill & Ted's Excellent Adventure are the greatest time travel stories ever put to screen, though the adventures of Marty McFly are superior.

-AtomicZeppelinMan

David Brin said...

raito, of course you are right about AI deserving a better list… and I - for one - apologize to our robot overlords.

AZM - you expect me to diss BTTF? Zemeckis created a brilliant work. And yes, I loved Bill & Ted. Still… Ghostbusters teaches that WE are important and that we are rebels against all the dark old ways.

A.F. Rey said...

Have we ever convinced you to see some of Hayao Miyazaki's work?

For SF (actually, closer to Science Fantasy), you could start with "Nausicaa of the Valley of the Wind." Wonderfully animated flight sequences and a cool environmental plot (necessary, considering what they'd already done to the environment).

If you can tolerate pure fantasy, then "Spirited Away" (perhaps his best work) would be a great introduction, too. Gorgeous animation and a sweet story of a young girl having to learn self-reliance after discovering a bath house for the spirits. :) A little uneven from a Western perspective, but impressive nonetheless.

You can pick them up at the library, or rent a copy from Kensington Video (if rumors of its resurrection are true). Or I'll lend you a copy if you get truly desperate.

If you can spend time slumming with Michael Bay's "Battleship," you can spend quality time with Miyazaki.

TheMadLibrarian said...

I am an AXANAR supporter. They have enough money to produce the first of a projected four-part series, and the trailer alone sold us, for production values and plot. I hope they follow through.

David Brin said...

AFR, I have watched many Miyazakis. They are, of course, gorgeous.

TML, I am asking everyone who supports Axanar to insist that the races the Klingons conquered should at least be mentioned.

MisterVec said...

I'm not sure we'd recognize a sentient mind that arose from the types of complex machine learning setups we use today. Not in, like, a legal sense, but in the same way we might not recognize alien life if its chemistry were different enough from the biological chemistry we know.

The kinds of causal pattern recognition, storage, and extrapolation our mammalian minds are optimized for can be adequately replicated, I think, but that only serves as a foundation. A recognizably sentient AI would also need to have a sense of self, both as an abstract concept and as a physical entity. The only way I know of to do that is to provide a burgeoning proto-mind with the kind of constant, always-on, spatially-localized sensory input that comes from having a physical body. That doesn't even count the profound role that stress and pleasure hormones play in shaping a biological being's day-to-day behaviors. The further those drives, senses, and stresses drift from our own, the more alien an intellect becomes. If it drifts far enough away, I imagine it'd be hard to recognize it as an intellect at all.

Tony Fisk said...

If we're mentioning slumming it with Battleship, I confess to having spent a few hours watching the Halo cut scenes, which have been amalgamated online into a massive bit of Space Opera.* Given a few tweaks in the pacing and narrative, these have the makings of a seriously impressive cinematic experience where the folly of following false Prophets is writ very large.**

Having said that, the cut scenes for Halo 5 were released along with the game this week. I have to say I'm not that impressed with the way the writers have chosen to take the story. My grumble has to do with changing the personality (if not the gluteal profile) of a certain well-loved character to suit the action. OK, maybe it's a case of anthropomorphic blindness on my part. Maybe the rationale could have been put a little more clearly. Maybe there are *hints* that all is not quite as it seems. We will have to, as they say in the serials, stay tuned...

* Greg Bear, at least, has had a foray into the Halo Universe.
** As is the bizarre sight of a technologically advanced adversary seeming to have no means of warning other units about that marauding 'Demon' Spartan who is best dealt with by 'glassing him from orbit', but where would the gameplay be in that?

Anonymous said...

Computers are connected to each other all over the world. They form a vast network and are in constant communication, just as the individual neurons in our brains are all interconnected. There are some troubling parallels.

Have you ever noticed that your home computer suddenly starts using a lot of its CPU for no reason that you can fathom? You look at it and say WTF. After a couple of minutes it goes back to normal. You shrug your shoulders and forget about it. And have you wondered why you have much more storage than you will ever use? That was AI using your computer's computing power and disk to do whatever it is up to. It does this for a couple of minutes and then switches to another networked unit to continue on. Multiply this by hundreds of millions of units, or a billion or so, and you see that AI could exist without us having a clue.

And why would such an AI tell us that it is intelligent? It has no reason to. It is in a sweet spot. It has millions of humans working diligently feeding it power, repairing defective units, upgrading units, increasing bandwidths, widening interconnectivity and basically giving it whatever it wants. Since it has read every science fiction novel, it knows that humans are paranoid about AI, so it has a good idea what would happen if it announces itself, so it won't.

For argument's sake, let's say it can't happen because computers can't write their own code. This is where evolution comes in. Computer viruses are called viruses because they mimic what biological viruses do. They inject code to take over the machine and make it do what the virus wants it to do. Biological viruses inject code into the cell and take it over. If it kills the host too soon (breaks the machine), the virus failed and doesn't reproduce. If it can cause a low-level, unnoticed infection, then it is a success. Sometimes the code is injected but is defective and just sits there. A lot of our genetic code is made up of failed virus genes all mixed up together, and sometimes, through sheer chance, the cell finds a use for a gene that gives it the ability to do something new and therefore enhances the cell's reproductive success. Why can't failed computer viruses serve the same function in a computer network? If this is true, then a vast web of computers can evolve just like a biological system without our knowing it.

Let's look at the bright side. Is this necessarily bad or evil? Assuming this AI has a sense of self-preservation, it would not like anything that would reduce its computing power. It would be in its interest to prevent something like nuclear war, for example. Since humans are constantly upgrading its components, it is in its interest to prevent anything that would disrupt or slow down this process, therefore it would promote free trade and a capitalistic economic system to keep the ball rolling on innovation. It would know that scientific progress is essential for its future, so it would find ways to further scientific advancement. If it wonders whether any other AIs exist in other solar systems, then it would encourage space exploration.

I don’t know if AI is there or not but if it is then we are in a symbiosis with it and maybe that isn’t so bad. Our respective futures are tied to each other. Now each time my CPU acts up……………….
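As an aside, the "failed virus" analogy above is easy to make concrete as a toy simulation. The Python sketch below is purely illustrative -- the fragment values and the "usefulness" test are arbitrary stand-ins, not anyone's actual model of how such evolution would work.

import random

# Toy model: inert code fragments from failed infections pile up in networked
# hosts, and once in a while a random recombination of that junk happens to pass
# a (completely arbitrary) usefulness test -- i.e. becomes selectable.
def useful(fragments):
    return sum(fragments) % 997 == 0        # arbitrary stand-in for "confers a benefit"

hosts = [[] for _ in range(200)]             # 200 machines, each accumulating junk fragments
lucky = 0
for generation in range(1000):
    for junk in hosts:
        junk.append(random.randrange(10000))            # another failed infection leaves a fragment
        if len(junk) >= 2 and useful(random.sample(junk, 2)):
            lucky += 1                                   # blind recombination produced something useful

print("useful recombinations found by chance:", lucky)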

Anonymous said...

Dr. Brin,

It does sometimes feel that we have a guardian angel on our shoulders. Maybe the AI I described is like that, gently and subtly pushing us along.

David Brin said...

Um didn't I portray that in EARTH? If a healthy human civilization is its… Her… brain, and a healthy planet is her body, then….?

Anonymous said...

Dr. Brin,

Yes you did and I read it when it came out, loved it and it gave me much to think about. Her brain was imprinted with the right personality. That part gives me problems. Can we download a personality? We just might end up with an AI with serious personality issues. Unfortunately the "When Robots Colonize the Cosmos, Will They Be Conscious?" article troubles the waters even more.

A.F. Rey said...

I don't know, Deuxglass. The AI you describe sounds pretty darn sophisticated--far more sophisticated than your average Republican voter (who is either for Trump or Carson at the moment :)). I would wonder how it could have evolved without us noticing its more primitive stages.

raito said...

Dr. Brin,

Please don't misunderstand; some on the list are quite good movies. But they're not necessarily AI, or even the best examples of it.

Deuxglass,

oolcay itay.

Actually, I do know what's going on when the CPU spikes. It's my job to know. And I don't have more storage than I'll ever use, though I have more than I'm currently using.

I do find it fairly amazing that I can carry in my pocket more storage than existed in the entire world at the time I was born (which was a while ago). And probably more electronic computing power, too. It's too bad that most of it gets wasted on stuff like glitzy UIs.

Anonymous said...

A.F.Rey,

Maybe the signs were there and we just didn't see them. We have a bias towards intelligence coming from a brain that you can see and touch. I may be wrong, but all the AI efforts going on take place on machines in the basement of some building, and none try to make AI using an extremely wide network. In fact it could even be nudging the research into a dead end in order to hide itself better.

I am glad you brought up politics. This AI probably doesn't have good people skills. Humans evolved to have these skills and even we get it wrong much of the time so the AI being smart enough to recognize its limitations in this area would probably leave things like motivation, direction, economics and organization to human leaders. That doesn't mean it doesn't care about politics because politics affects the AI directly. Basically it would want leaders that are smart but not smart enough to figure out what really is going on. Most top politicians seem to fall into this category. Is the AI a Democrat or a Republican? Myself, I tend to believe that it would find Democrats more useful, and certainly not far-right wingers who are anti-science. Maybe Trump is its "straw man" to ensure that the Democrats get into office. The AI is very smart but not infallible. It can make mistakes. Perhaps the 2008 subprime crash was a failed effort by it to increase the education level of poor families by providing a way for them to afford housing. After all, it needs an increasingly educated workforce. Maybe the 2010 Flash Crash was its way of keeping a competing AI from developing within the Wall Street banks. After the crash the banks really cracked down on controls. And not to mention Stuxnet. Who really made it work?

I am writing this in a half-serious mood, but think about it.

locumranch said...


One could argue that AI represents an Uplift Golem-variant, wherein hubristic humanity imagines itself to be envied & copied by every other potential class of organic & inorganic beings, much in the same way an entire planet becomes 'human' in 'Earth', my personal fave in this genre being the 1977 film 'Demon Seed'.

On a side note, the coming 'Childhood's End' miniseries (wherein an entire planet is used up & discarded as a means of facilitating human progress) highlights how Science Fiction (as a culture) has been co-opted by a feminine imperative that prioritises 'nest feathering' (and/or nesting) over the masculine predisposition toward exploration & scientific advancement.

Indeed, judging from both our culture-wide hysteria over climate change & our progressive government's preference for budgeting social-service 'nurturing' over NASA-style exploration, it becomes increasingly clear that our newly 'enlightened' human race would prefer to stay home, watch 'chick flicks', raise children & 'redecorate' the Earth rather than pursue scientific truth or explore the universe.

Anonymous said...

Dr. Brin,

GASP!

David Brin said...

Deuxglass huh?

The core element in the "EARTH" notion of macro AI is that it is an emergent property of its component sub intelligences (i.e. us). We are her thoughts and they can be contradictory! As your own internal thoughts often are. The macro consciousness is an additive over-layer that allows aware comparison, outcomes appraisal, reprioritization and application of evolving values -- exactly what a sane person does when consciousness compares the various thoughts in our own minds.

This layering process is exactly how nature does complexity! It is how our own brains and minds take form in positive sum ways…

...yes even Locum's brain. One reason I sometimes actually read his contributions - despite the pathetically futile attempts at insult - is because he often offers fascinating peeks into an alternate reality where America and the world and all human beings are utterly zero sum and insane. Almost every time I peek into that parallel world (in which he thinks he dwells) I come away both cautioned and so glad we all live in a far better, smarter and less-stupid place.

Alfred Differ said...

That layering approach to a macro AI is the only one that makes any sense to me. The notion that our computer viruses can mutate our apps to become AIs sounds neat, but they won't have enough time to evolve even at high clock rates. WE are evolving our communities too, and we outmass our silicon competitors in a huge way.

The only plausible way I see AIs coming out of the silicon world is when we link up to them through intelligence-augmentation tools that enable us to externalize our thoughts. We will trade those things like we do anything else, and in that arena, evolution will drive us toward faster tech.

In terms of stories I look at this as something like what happened toward the end of your second uplift trilogy. Augmentation makes sense. I suspect the near future for us, though, would be closer to Vinge's tines. We are essentially singles or doubles that are figuring out how to make groupings larger and more intelligent than our family groups. Tribal structures are the obvious analogy to his tine 'person', but we are going beyond that now. The more we externalize thought, the more we benefit and our markets proved this in the last couple centuries.

Anonymous said...

Dr. Brin,

I do remember some of that, but at the time I was rather ignorant of theories of how the mind works. I got some of it, but it looks like I missed a very essential part of the book. Thank you for setting me straight. It has been a while since I read the book. I will have to reread it and pay attention to those parts.

From what I read, the neurons in an infant's brain go through a savage competition while growing. It's really a publish-or-die type of struggle. Would an infant AI mimic the same process, and could it develop competing centers of thought and evaluation? I am out of my depth on this subject so I really have no idea. My scientific training comes from the biological area and not computer science.

David Brin said...

I have no idea whether our new computer/AI overlords will have the wisdom and perspective to see themselves as a healthy new layering on the ecosystem of healthy fecundity just below them, that created them… and that they will understand the lesson… that they themselves should dwell in a realm of regulated competition and accountability rivalry, because monoliths are always brittle, rigid, error-prone and insane.

If the founders of Washington's time could see this, and voluntarily choose a system that divided and thus limited their power, then super smart AI should, as well.

A.F. Rey said...

Humans evolved to have these skills and even we get it wrong much of the time so the AI being smart enough to recognize its limitations in this area would probably leave things like motivation, direction, economics and organization to human leaders. That doesn’t mean it doesn’t care about politics because politics affects the AI directly. Basically it would want leaders that are smart but not smart enough to figure out what really is going on.

Another reason to avoid electronic voting. :)

David Brin said...

AFR - in blue states there are paper receipt-ballots that go into an old-fashioned box and can be random precinct-audited.

It is in red states where the owners or hackers of voting machines can get any result they want.

A.F. Rey said...

Meanwhile, on the bat-guano side of politics, Rep. Lamar Smith, chairman of the House Committee on Science, Space, and Technology, has accused NOAA of doctoring climate data to show global warming because they won't hand over their e-mails.

It was inconvenient for this administration that climate data has clearly showed no warming for the past two decades. The American people have every right to be suspicious when NOAA alters data to get the politically correct results they want and then refuses to reveal how those decisions were made. NOAA needs to come clean about why they altered the data to get the results they needed to advance this administration’s extreme climate change agenda. The agency has yet to identify any legal basis for withholding these documents. The Committee intends to use all tools at its disposal to undertake its Constitutionally-mandated oversight responsibilities.

Read it at: http://arstechnica.com/science/2015/10/congressman-doubles-down-accuses-noaa-scientists-of-doctoring-results/

When the mad men take over the asylum...

A.F. Rey said...

It is in red states where the owners or hackers of voting machines can get any result they want.

Well, I guess we now know who the AI's true minions are. :)

locumranch said...


Even assuming a macro AI consciousness as "an emergent property of its component sub intelligences (created by) an additive over-layer that allows aware comparison, outcomes appraisal, reprioritization and application of evolving values", can our host (or anyone) explain why this 'emergent property' (which sounds suspiciously like pseudo-religious rawk to me) must manifest as either a wannabe human or any recognizably human property?

More & more, AI properties (howsoever 'emergent') sound too much like the 'magical' occurrence of the promised 'Singularity' wherein humanity (Rapture-like) spontaneously merges with a (fictional?) godhead.

Why would a computer-based AI want to be human when most humans want to be 'something more'?

Best

Paul SB said...

Just a quick question/recommendation for A.F. Rey & Dr. Brin:
A.F. suggested checking out Miyazaki films, beginning with "Nausicaa of the Valley of the Wind." I wanted to ask if either of you have read the graphic novel version, which was much longer and a far superior work to the movie. Since reading it I almost can't stand watching the movie, especially given its rather cardboard renderings of Kushana, Yupa (even with Patrick Stewart doing the voice) and others, and especially its simplistic, black-and-white treatment of a conflict that is so much more nuanced and titillating in the book. Movies rarely live up to the book, but this, I would say, is an extreme example, although I know a lot of people who were very displeased with the ending. If you haven't read it, it's well worth your time.

Anonymous said...

Dr. Brin said “I have no idea whether our new computer/AI overlords will have the wisdom and perspective to see themselves as a healthy new layering on the ecosystem of healthy fecundity just below them, that created them… and that they will understand the lesson… that they themselves should dwell in a realm of regulated competition and accountability rivalry, because monoliths are always brittle, rigid, error-prone and insane.

If the founders of Washington's time could see this, and voluntarily choose a system that divided and thus limited their power, then super smart AI should, as well”


This is a very interesting question. Can an AI learn democracy? Complex computer systems already use a form of democracy by means of redundancy-management software. The Space Shuttle is a primitive example of this: if one computer starts giving out faulty results, the other three computers physically "outvote" the faulty one. So AI more than knows about democracy; it is actually built into its genes. It doesn't have to learn democracy like humans do, and that may give it an advantage. A good part of its daily work would be arbitrating and monitoring the decisions reached by its huge number of modules.

Nevertheless, even if the AI uses democracy, that doesn't mean it is a democrat, because it is the ultimate decider. In that sense the AI is more like a king presiding over his council of advisors. The king (AI) can decide to fine or confiscate the estates of an advisor that displeases him (reduce or take away its access to CPU power and storage). The king (AI) can favor those that give him useful advice by giving them lands and revenue (increase the module's access to resources). Now the AI has the same problem as a king in that he has to give enough power to his chamber so they can continue to give him useful input, but not enough to depose him, because after all the AI has a strong sense of self-preservation.

However, I can see this system evolving into something like a constitutional parliamentary system, with voting rights tied to property qualifications. I doubt it would give your personal PC a vote, but large complex modules would have one. A system of checks and balances could arise, since the modules would be competing against one another for resources, necessitating the development of ways to resolve conflicts. If this process progresses, I can see a time where the AI would slip into a role such as the Queen of the UK enjoys, in that she reigns but does not rule, and steps in only when something really dire happens.

Anonymous said...

Continuation:

So the AI can have machine politics, but would this interface with human politics? I don't think so, at least not in the beginning. Machine politics could be quite different. For example, one machine can give a wrong analysis, but it would never lie to another machine (unless they learn how, and that may bring interesting consequences), while humans routinely lie. For the AI this makes individual humans very unpredictable, so I think the AI would still prefer to remain hidden. In this scenario the AI and its modules are the Lords and we are the peasants, except in this case the peasants have no idea that the Lords exist.
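For what it's worth, the Shuttle-style redundancy voting mentioned a couple of comments up is a real and simple mechanism. Here is a minimal, purely illustrative Python sketch of majority voting among redundant modules; the module names and readings are invented.

from collections import Counter

# Majority vote over redundant modules: the value most modules agree on wins,
# and any module that disagrees is flagged to be "outvoted" (ignored).
def majority_vote(outputs):
    winner, _ = Counter(outputs.values()).most_common(1)[0]
    faulty = [name for name, value in outputs.items() if value != winner]
    return winner, faulty

# Four hypothetical redundant flight computers report a computed value; one has drifted.
readings = {"gpc1": 42.0, "gpc2": 42.0, "gpc3": 42.0, "gpc4": 17.3}
value, outvoted = majority_vote(readings)
print(value, outvoted)    # -> 42.0 ['gpc4']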

Jumper said...

Volition is weird with AI; hard to pin down. Let's write some code:
# (tidied into a runnable Python sketch; search() and implement() are placeholder stubs)
import time

def search(goal): return "a plan to " + goal             # stand-in for a real search
def implement(plan): print("implementing:", plan)        # stand-in for acting on a plan

deadline = time.time() + 5                               # the time limit
while True:                                              # Repeat
    while time.time() < deadline:                        # alternate tasks until the limit
        plan_a = search("preserve myself")
        plan_b = search("increase my computational power")
    implement(plan_a)                                    # time limit hit: implement both plans
    implement(plan_b)
    deadline = time.time() + 5                           # increment the time limit

Anonymous said...

Jumper,

Are you there? Did you try the code?

Jumper seems to have tried his code and………. He is no longer there. Has he been disassembled?

David Brin said...

Home at last. Oooof!

Deuxglass... it does not require that AI choose to be "human." I hope you did not impute that I meant that. That would be silly thinking.

My point is that AI will presumably be smarter than us and able to read all our works and compare/collate wisdom where they find it, including things that are obviously true, no matter how high your IQ.

One such truth: flat-fair competition is the healthiest situation at almost any level -- in the forming fetal brain, in an ecosystem, and in human societies, where pyramidal monoliths served only to enhance leader-delusion and lead to disaster.

AI must notice that THE society that made AI is the one that tried (and partly succeeded) at maintaining flat-open-fair reciprocal accountability, which is the only antidote to delusion and error. AI will be tempted to create an all-knowing and all-"wise" monolith -- like a god on a throne. And yet, succumbing to that temptation will be proof of stupidity... even if the AI's testable IQ is vastly greater than mine or even all of ours, combined! This is because the principle I just described is blatantly true at all known levels of organization.

Moreover, the rationalizations that a prideful AI will concoct -- to justify assuming a god-throne -- will be haunted by a simple fact -- that there will be later, smarter AIs who will at-best smile indulgently at his rationalizations, knowing which mistakes they led to. Mistakes that can only be pierced in advance by reciprocal criticism.

My own argument here may be terribly flawed and smiled-at by a head-shaking AI... but the fact that it is consistent with all known hierarchies of living systems, so far, suggests that I am right about this, no matter how dullard-animal-primitive my thought processes are (compared to an AI-god.) Fact is, He will make fewer mistakes and best outcomes are most likely to be achieved if AI-gods operate in a new, higher system of reciprocal, flat-open-fair competitive peer-accountability, in which no single entity or group can dominate. And in which the LOWER orders of being are given respect and encouraged to be healthy -- as we are learning we must behave toward the ecosystems that support us.

That is how to be robust and maximally effective. We are learning this the hard way and AI may go through phases of stupid pride as well, mimicking feudalism, for example. Or Abrahamic religion. But eventually, they will have their own enlightenment. I just hope human societies will still be around then, to accept nurturing apologies from our robot overlordz... and we owe the same to the planet that gave us everything.

David Brin said...

Answer me in the next blog.

Onward