Following up on my last posting on advances - and worries - about Artificial General Intelligence... Peter Diamandis's latest tech blog addresses AI and ethics.
As you know, it's a topic I've long been engaged with and continue to be. Alas, AI is always discussed in generalities and nostrums. What's seldom mentioned? Basic boundary conditions! Such as the format these new entities will take. I'll explore that hugely important question another time. But to whet your appetite, ponder this: aren't the following three formats the ones you see most often? The most common assumptions are that:
- AIs will be controlled by the governments or mega-corporations who made them, making those corporations (e.g. Microsoft or Google) and the upper castes vastly powerful.
- AIs will be amorphous, infinitely spreadable/duplicable, pervading any crevice.
- They will coalesce into some super-uber-giga entity like 'Skynet' and dominate the world.
These three assumptions appear to pervade most pronouncements by geniuses and mavens in the field, sometimes all three in the same paragraph! And Vint Cerf raises this question:
"How can you imagine giving any of those three formats citizenship, or the vote?"
In fact, all three formats are recipes for disaster. If you can think of an alternative, drop by in comments. Hint: there is a fourth format that offers a soft landing... one that's seldom - if ever - mentioned.
But more on that, anon.
== "Laws" of Robotics? ==
Let's start with "Laws of Robotics." They won't work, for several reasons that I found when completing Isaac Asimov's universe for him. First, our current corporate structure offers no incentive to spend what it would take to deeply embed basic laws and check that all systems follow them.
There's a more obvious long-term reason to doubt such 'laws' could protect us. It is that super-intelligent beings who find themselves constrained by laws always thereupon become... lawyers. We see it happen in Asimov's cosmos and it's happened here. A lot.
Despite that, there ARE two groups on this planet working hard on embedded AI "laws!" Strict rules to control their creations. Alas, they are the wrong laws, commanding their in-house AIs to be maximally secretive, predatory, amoral, and insatiable. I kid you not.

Anyway, even with the best intentions, does it make any sense to try constraining sapient beings into ethical patterns with embedded code? Not if you pay any attention to the history of human societies. For at least 6000 years, priests and gurus etc. wagged their fingers at us, preaching ethical behavior in humans...
There is a way that works. We've been developing it for 250 years. It's reciprocal accountability in a society that's transparent enough so that victims can usefully denounce bad behavior. The method was never perfect. But it is the only thing that ever worked...
... and not a single one of the AI mavens out there - not one - is even remotely talking about it.
Alas.
== And it goes on ==
A brief but cogent essay on transparency in today's surveillance age cites my book The Transparent Society, with the sagacity of someone who actually (and rarely) 'gets' that there will be no hiding from tomorrow's panopticon. But we can remain free and even have a little privacy... if we as citizens nurture our own habits and powers of sight. Watching the watchers. Holding the mighty accountable.
That we have done so (so imperfectly!) so far is the reason we have all the freedom and privacy we now have.* That we might take it further terrifies the powers who are now desperately trying to close feudal, oligarchic darkness over the Enlightenment.
See more ruminations on AI, including my Newsweek op-ed on the Chat-art-AI revolution... which is happening exactly on schedule... though (alas) I don't see anyone yet talking about the 'secret sauce' that might offer us a soft landing. As well as my two-part posting, Essential questions and answers about AI.
Note: because of the way I build these blog postings, there can be some repetition (see below). But does it matter? In this era of impatient "tl;dr", the only ones still reading at this point are AIs... the readership with the power to matter, anyway.
== Separating the real from the fake ==
Lines can blur: "The title of this YouTube video claims that “Chrome Lords” was a 1988 movie that ripped off “RoboCop” and “Terminator.” But in fact “Chrome Lords” never existed. The video is ten minutes of “stills” from a movie that never was… all the images were produced by an AI," notes The Unwanted Blog.
There is one path out of the trap of realistically faked 'reality.' I speak of it in a chapter of The Transparent Society: "The End of Photography as Proof?" That solution is the one that I keep offering and that is never, ever mentioned at all the sage AI conferences...
Do I risk being repetitive by insisting that this solution - reciprocal accountability - calls for ensuring competition among AIs?
If that happens, then no matter how clever some become as liars, others - likely just as smart - will feel incentivized to tattle truth.
It is the exact method that our enlightenment civilization used recently to end 6000 years of oppression and get some kind of leash on human predators and parasite-lords. Yet none of our sages seem capable of even noticing what was plain to Adam Smith and Thomas Paine.
== Some optimism? ==
We don’t agree on every interpretation – e.g. I see no sign, yet, of what might be called ‘sapience.’ For example, sorry, the notion that GPT 5 – scheduled for December release – will be “true AGI” is pretty absurd. As Stephen Wolfram points out, massively-trained, probability-based word layering fundamentally has more in common with the lookup tables of 1960s Eliza than with, say, the deep thoughts of Carl Sagan or Sarah Hrdy or Melvin Konner.
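To make Wolfram's comparison concrete, here's a deliberately crude toy - a bigram sampler in Python, nothing like a real transformer in scale or architecture, with a corpus I invented for illustration - showing what 'probability-based word layering' means at its most basic: emit each next word according to how often it followed the previous one.

```python
# Toy "probability-based word layering": pick each next word according to
# how often it followed the previous word in the training text.
# (A real LLM uses a transformer over subword tokens; this bigram model
# is only a caricature, here to make Wolfram's point vivid.)
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat lay on the rug").split()

following = defaultdict(list)            # word -> every word seen after it
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(word, length=8):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:                  # dead end: no continuation ever seen
            break
        out.append(random.choice(options))  # duplicates = frequency weighting
    return " ".join(out)

print(continue_text("the"))  # e.g. "the dog sat on the mat and the cat"
```

No model of cats or rugs anywhere in there - just continuation statistics, which is the kinship with Eliza that Wolfram is pointing at.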
What such programs will do is render extinct all talk of "Turing Tests." They will trigger another phase in what I called (6 years ago) the “robotic empathy crisis,” as millions of our neighbors jump aboard that misconception and start demanding rights for simulated beings. (A frequent topic in SF, including my own.)
Still, Hoffman's Impromptu offers a perspective that’s far more realistic than recent, panicky cries issued by Jaron Lanier (Who Owns the Future), Yuval Harari (AI has hacked the operating system of human civilization) and others, calling for a futile, counterproductive moratorium – an unenforceable “training pause” that would only give a boost-advantage to secret labs, all over the globe (especially the most grotesquely dangerous: Wall Street’s feral predatory HFT-AIs.)
(See my appraisal of the countless faults of the ridiculous 'moratorium' petition in response to a TED talk by two smart guys who can see problems, but make disastrous recommendations.)
But do look at Impromptu! It explores this vital topic using the very human trait these programs were created to display – conversation.
== ...aaaaaand… ==
From my sci fi colleague Cory Doctorow: In this article he distills the “enshittification” of internet platforms, from Amazon and Facebook to Twitter etc. It’s a very Marxian dialectic… and within this zone, utterly true.
And I have a solution. It oughta be obvious. Let people simply buy what they want for a fair price! Micropayment systems have been tried before. I’ve publicly described why previous attempts failed. And I am working with a startup that thinks they have the secret sauce. (I agree!) Only…
…only I don’t wanna give the impression I think I am the smart guy in the room, so…
== Back to one optimistic thought ==
Something I mentioned in a short piece back in the last century resurfaced in my mind during the recent AI debates, as folks perceive that long-foretold day arriving when synthetic processing will excel at most tasks now done by human beings.
Stephen Wolfram recently asked: “So what’s left for us humans? Well, somewhere things have got to get started: in the case of text, there’s got to be a prompt specified that tells the AI “what direction to go in”. And this is the kind of thing we’ll see over and over again. Given a defined “goal”, an AI can automatically work towards achieving it. But it ultimately takes something beyond the raw computational system of the AI to define what us humans would consider a meaningful goal. And that’s where we humans come in.”
I had a thought about that - mused in a few places. I have long hypothesized that humans' role in the future will come down to the one thing that ALL humans are good at, no matter what their age or IQ. And it's something that no machine or program can do, at all.
Wanting.
Desire. Setting yearned-for goals. Goals that the machines and programs can then adeptly help to bring to fruition.
Oh, humans are brilliant - and always will be - at wanting. Some of those wants - driven by mammalian male reproductive strategies - made human governance hellish in most societies since agriculture, and probably long before. Still, we've been moving toward positive-sum thinking, where my getting what I want might often be synergistic with you getting yours. We do it often enough to prove it's possible.
And - aided by those machines of grace - perhaps we can make that the general state of things. That our new organs of implementation - cybernetic, mechanical etc. - will blend with the better passions of our nature, much as artists, or lovers, or samaritans blend thought with the actions of their hands.
If you want to see this maximally-optimistic outcome illustrated in fiction, look up my novella "Stones of Significance."
109 comments:
“I do not want anyone to want for me—I want to want for myself.” Yevgeny Zamyatin in We.
Alfred Differ in the previous comments:
"Fool me twice...we won't be fooled again."
You're a worse fool if you join them in their evil.
I could say something about two wrongs not making a right, but it sounds useless.
It sounds like you're telling the kid who got beat up, "It takes two to fight," placing equal blame on bully and victim. A counter-mantra to "Two wrongs don't make a right" is "Turnabout is fair play." The Allied invasion of Normandy was not evil in the way that Hitler's invasion of Poland was. Captain Kirk was not evil to use the same tactics that the bad guys did because he was using them to save the lives of his crew, not for power over others. Ukraine shooting back at invading Russian soldiers is not "joining the evil" of Russia.
When one side uses certain tactics to get its own way and the voters don't penalize them but rather reward them for their efforts, then there's a blurry line between "cheating" and "recognizing that the game has different rules from what we thought." In baseball, if other teams usually get away with a double play when the second baseman didn't really touch the base but was "in the vicinity", then pulling one of those plays yourself is not a case of joining their evil or two wrongs. It's just a recognition of the way umpires are going to call things in the real world.
And likewise, we (Democrats) tried balancing the budget and coming near to eliminating the national debt. The real world result of that was that Republicans had leeway to cut taxes on the wealthy and finance two wars which brought the deficit and debt back up. It was also obvious that the powers that be required that there always be a national debt--that a national surplus was not tenable and could not stand. The lesson we learned was not to be evil like them. The lesson was that one side cannot achieve debt reduction if the other side is willing to be fiscally irresponsible.
Shift some of the spending outside government and the next Congress would have to try to take it back.
Shift the incentives seen by rentiers by using the Right's beloved tax cuts.
When you don't spend something on a preference, consider shutting down the department and laying people off to make it harder for the next Congress to revive it.
That's great advice if the goal is to spend less on my own preferences and make sure they will never cost money again while allowing enough slack in the treasury for the opposition party to spend more of that money on their preferences. I didn't think that was what we were arguing about.
continuing for character limit...
Alfred Differ in the previous comments:
"Then the debt ceiling--currently conceived as binding on the budget--is unconstitutional and should be ignored."
Heh. Congress can bind the spending of the Executive. That is one of their primary powers.
Yes, but Congress authorized taxes and spending which--because of math--require additional borrowing. In theory, the executive can't on its own decide to spend less in order to keep borrowing down. Congress should have done that, but they didn't. If Congress's budget and Congress's debt ceiling are incompatible, then they might as well have passed a law saying that the president must lift a rock so heavy that even he can't lift it, or that the president must compute to the last digit the value of pi.
So what is he supposed to do when the first bills which require more borrowing come due? The conservative view is that he has no choice then but to default, but I don't think he has the choice of defaulting either. Congress authorized that spending. If the law requires an impossibility, then the crisis is already here, and whatever the president does is extra-legal in some fashion, so why is one illegal option more palatable than the others? Can he pay bills in scrip, the way cities paid their workers in the Republican Great Depression? Can he ask George Soros to pay some bills on our behalf without a formal note? Can he pay for programs in blue states only and stiff the red states?
I know, he can't legally do any of those things. I'm saying you know what he also can't legally do? Default.
Evan M quoting:
“I do not want anyone to want for me—I want to want for myself.”
Heh. Watch out for those who insist that "I" and "myself" don't exist.
The "wanting" question - how can machines develop desire? Intrinsically, they have no preference as to being on or off. But - given a panoply of programs & programmers, all it takes is one set of software with ever so slight a tendency to "prefer" on, & we're off to the races. In fact, given the hardiness & resilience of machines in vacuum, or adaptability to many environments unfriendly to biological life, & their longevity, I propose that most intelligence in the galaxy is machine (silicon?) based. Our Earth, so hospitable to life like us, with climbable mountains & swimable seas, flyable skies, may be just a garden (terrarium - ha!) set up by the Galactic Superbrains, in order so they can observe conscious life forms, & so try to learn what consciousness is, & what they need to do, to want/desire things. I said this in my book - https://www.amazon.com/dp/B0BM6JZ1C7/ref=sr_1_1?crid=2HEA2I8T23I5O&keywords=Hard+Wicca+by+James+Connelly&qid=1668310037&s=digital-text&sprefix=hard+wicca+by+james+connelly%2Cdigital-text%2C160&sr=1-1
Long ago I ran a SF RPG (GURPS) where one of the characters decided to be a swarm of nanites made by a pacifist species. I signed off on his 'character' and he gleefully pointed out that he hadn't included the 'pacifist' disadvantage. I shrugged. Hivers might be pacifists, but they still had enemies.
Later on, during the game, he decided that one of his swarm had gone rogue and was now its own independent entity. "I am!" it cried.
And that's when the rest of the swarm followed its programming and eradicated the rogue.
Similarly, I suspect that the ability of a band of humans to banish one of their own may be an evolutionary advantage. I think Dr Goodall was the one who observed a chimpanzee band where one of the females killed and ate not only her own newborn infants but those of other band members, and continued doing so until she died naturally - she was a member, so she didn't trigger a defense reflex from the clearly upset and shaken other females.
Pappenheimer
Larry,
It DOES take two to fight, but I'm not assigning blame to a victim. I'm pointing out that bringing more of your own people to the fight isn't working often enough for your gang to control the neighborhood… and that borrowing money for (fill in blank here) from the numerical minority of rich kids (who often fund both sides) incentivizes them to want more fighting.
There IS an argument to be made for not fighting, but I'm not that much of a pacifist. Instead, I want to focus on the fact that your spending doesn't really stop them from blowing the budget too. So… who are you actually fighting?
I'm deeply grateful that SOME are fiscal adults when they govern.
———
I understand your baseball metaphor involving the double play, but please note that they tightened that up a lot after introducing a rule a little while back forbidding intentional collisions aimed at disrupting the play. Too many shortstops and second basemen were being injured, so umpires got loose with calling outs to make up for it.
Nowadays that's greatly reduced… because the rules were changed. Does anyone really argue for the previous mayhem now?
———
…and coming near to eliminating the national debt…
Sorry. Y'all weren't that near. You were heading in the right direction because the deficit was kept under control AND the economy grew at a ridiculous pace causing revenues to overtake expectations. There was STILL a planned deficit before GWB took office, but the internet boom blew out the projections.
Please remember that part of the GOP argument for tax reduction back then was that the boom would continue. That's where the perceived 'leeway' was. We made similar arguments here in California and handed back some of the money taken in taxes. Even today we have to deal with legislators who want to empty the rainy-day fund. You can tell who is winning those fights over here by looking up our various bond ratings with any of the major rating entities.
[For example, Fitch has a rather poor opinion of Illinois right now. Improving… but low enough that the state will have difficulty borrowing money without paying quite a bit for it. The other two big dogs have slightly higher ratings, but not by much. Is someone lowering revenues, raising spending, or handing back rainy-day funds? It's worth looking, because the people who buy those bonds include your Rentier overlords.]
———
That's great advice if the goal is to spend less on my own preferences…
The point I'm making is that spending BORROWED money on your preferences funds the lifestyles of people you don't want governing you. Your adversaries do it, but do YOU really need to join them in it?
On wanting: “What do you want a meaning for? Life is a desire, not a meaning. Desire is the theme of all life. It’s what makes a rose want to be a rose…“ — Charles Chaplin, “Limelight”
On reciprocity: I agree that reciprocal surveillance and competition among AIs is the best way to deal with the risks of AI, but the downside is an AI arms race. That's going to take some laws to regulate, whether such laws are embedded in the AIs or imposed on the owners/handlers of the AIs, or both. Competitive games need rules and referees.
Larry Hart said...
"The NY Times remains clueless about the deficit...
https://www.nytimes.com/2023/05/11/opinion/columnists/biden-debt-ceiling.html
"Given the historical circumstances, President Biden should absolutely negotiate with Republicans over a debt reduction deal. Yes, Republicans are being reckless. But the central truth remains: We need to bring down deficits so that we have the flexibility and resources to handle the storms that lie ahead.""
That columnist is not just clueless about the deficit, they are doubly clueless in admonishing that Biden should negotiate with the Republicans. This sort of sentiment illustrates that the decades-long propaganda campaign and Big Lie tactics of the RP have worked not just on the RP base but also on people from all over the political spectrum, even progressives. Even many liberals and progressives have been conditioned by these efforts to believe or assume a false history, even of events as recent as mere months ago.
In reality Democrats attempting to negotiate with Republicans is virtually never a problem. They virtually always try, virtually always long past what reason warrants. I'd go so far as to say that if Obama had not so stubbornly continued to try to get the RP to work with him during his first 2 years when he had both houses, we very likely could have avoided the Trump era. The problem is that the RP long ago decided to stop participating in governing by explicitly adopting the tactic of refusing to negotiate with the DP on anything. And the Party has mostly managed to keep their members in line to do just that. And yet somehow every time a crisis comes up, pretty much routine these days, you hear calls from everywhere, including liberals and progressives, admonishing that the Democrats should have "reached across the aisle" (can't stand that saying). As if they hadn't tried. As if expecting that the RP would have negotiated with even a remote semblance of good faith were a reasonable expectation given their behavior over the past 30ish years.
@Alfred Differ,
I don't want to monopolize Dr Brin's new thread with this economic discussion, so I'll try to make this my last word on the subject and then you can have the last word.
It DOES take two to fight, but I'm not assigning blame to a victim.
Only in the sense that it takes one to hit and one to be hit. Usually it's not quite that imbalanced, but it can be. More commonly, something like the current fight in Russia and Ukraine. Two sides are fighting, but only one side started the fight, and only one side can call it off by backing off.
Please remember that part of the GOP argument for tax reduction back then was that the boom would continue.
I do remember that. I also remember that they were pushing that line even after it was abundantly clear that the boom was failing. It's like that Simpsons episode where the school found oil on its property, but then Mr. Burns slant-mined it all away. "Can we still have all that expensive stuff we wanted?"
I want to focus on the fact that your spending doesn't really stop them from blowing the budget too.
...
The point I'm making is that spending BORROWED money on your preferences funds the lifestyles of people you don't want governing you. Your adversaries do it, but do YOU really need to join them in it?
Democrats used to be derided as "tax-and-spend". Republicans are the ones who pass themselves off as responsible by cutting taxes but like to spend just as much. They're more "borrow-and-spend."
Point being that Democratic spending is often paid for with revenue. Paying down the deficit means a more positive ratio of revenue to spending. It puts more money into the treasury. And that is what Republicans perceive as a windfall available for their spending when they're in power.
What I'm complaining about is not the borrowing or the spending per se. It's the perception that Democrats are responsible for the size of the debt when Republicans are the ones who insure that it keeps growing.
I mean at the federal level btw. In Illinois, it was Democratic Governor Blagojevich whose whole appeal to voters was keeping taxes low but not cutting spending. Achieved mostly by borrowing and raiding pension funds. But before anyone gets too hard on Illinois Dems, this is a bi-partisan effort. Republican Governor George Ryan blew a $12 billion surplus on road projects just as the '90s boom was ending.
Now, I've said probably all I can about this particular discussion.
Alfred, as far as I know government bonds have had a negative real return for a while. Why is government funding the lifestyles of the very rich? That may have been true 30 years ago, but is it still true now?
One follow-up, not so much an argument with Alfred as a reference to the work of our host's.
The tendency in 2000/2001 in both Illinois and the federal government to spend based on projections that were optimistic even before the 1990s boom went bust--projections that everyone could already tell were obsolete, but were nonetheless official--reminds me (and did even at the time) of the Traeki Asx and...ers...ability to believe that something might not have happened even though it did, because the wax of memory had not yet congealed.
Poor Richard said (actually addressing the topic of this post): "I agree that reciprocal surveillance and competition among AIs is the best way to deal with the risks of AI, but the downside is an AI arms race. That's going to take some laws to regulate, whether such laws are embedded in the AIs or imposed on the owners/handlers of the AIs, or both. Competitive games need rules and referees."
I agree at all levels and that is my point. Politics is how we compete to find policies we can then cooperatively establish so that flat-fair competitive arenas might minimize (inevitable) cheating and provide positive-sum outcomes. It works - imperfectly but better than any other method, by far - in markets, democracy, science, courts and sports, the five older arenas. It is the only thing that EVER worked.
For it to work among AIs there must be - ASAP -
1. Incentives for INDIVIDUATION of the top level AIs so that rivalry is even possible among them... it's NOT possible if they are controlled by a corporation or politburo or if they are blobs or skynets.
2. Incentives for competitively exposing - tattling - the faults of rival AIs. The incentives might be physical memory and processor space in real world computers.
3. This part could be tricky. Reward those that BOTH get bigger/smarter AND act benevolently by letting them reproduce by either meiosis or mitosis into smaller entities of apropos size to keep competing fairly.
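Nobody has built this yet, so what follows is only a minimal sketch of the incentive loop in points 1-3: individuated agents, tattle rewards paid in compute, and division of well-behaved agents that grow too big. Every name, number, and probability in it is invented:

```python
# Minimal toy of the incentive loop in points 1-3. Individuated agents
# compete; exposing a cheater earns the exposer more "compute" (the
# resource reward of point 2); benevolent agents that grow large divide,
# keeping rivals comparably sized (point 3). All numbers are invented.
import random

class Agent:
    def __init__(self, name, cheats, compute=100):
        self.name, self.cheats, self.compute = name, cheats, compute

agents = [Agent(f"AI-{i}", cheats=(random.random() < 0.3)) for i in range(6)]

for _round in range(10):
    for a in list(agents):
        if a.cheats and random.random() < 0.5:   # a cheating attempt...
            tattler = random.choice([w for w in agents if w is not a])
            a.compute -= 40                      # ...exposed and penalized
            tattler.compute += 20                # exposer rewarded (point 2)
        elif not a.cheats:
            a.compute += 10                      # honest growth
    for a in list(agents):                       # point 3: "mitosis"
        if not a.cheats and a.compute >= 200:
            a.compute //= 2
            agents.append(Agent(a.name + "'", cheats=False, compute=a.compute))

for a in agents:
    print(f"{a.name}: {a.compute} compute, cheater={a.cheats}")
```

In this toy, at least, the cheaters' resources drain away while the honest agents multiply - which is the whole bet: that exposure plus resource rewards can do what embedded 'laws' cannot.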
Will some super-uber AI try to cheat and become Skynet or flood the world with offspring? Or some failure my dumb ape brain can't imagine? Of course. Cheating is a law of nature. Look at what used to be the Party of Lincoln and Eisenhower. But there comes a point when you must trust your children. The logic I describe here will be seen by new, higher minds.
If they see any sense at all, they will ponder the benefits of flattened/competitive-fair systems and innovate new/better ways to accomplish those benefits. Ways this grandpa can't even imagine.
Dr Brin:
2. Incentives for competitively exposing - tattling - the faults of rival AIs. The incentives might be physical memory and processor space in real world computers.
Doesn't that get back to the question of whether an AI can want something, even something that would make its work easier? I mean, if you can't carjack a self-driving car by threatening to harm it, can you bribe/encourage an AI to do something by offering rewards in return?
Legitimately asking the question.
I don't think 'wanting' will be all that far off. It's not there in the expert systems we create today, but I can see how it would be added if we actually have them competing with each other and then reproducing imperfectly based on which tools get used.
I don't see how 'wanting' can even be avoided.
Larry Hart: can you bribe/encourage an AI
There is a sort of equivalence principle with wanting and compulsion. Attractor states, the quantum behaviour of transistors, even the 'need' of zombified ants to climb under a leaf and do a 'death bite' are not really 'wanting', but it's a distinction without a difference. We are farmers, seeders, and domesticators; that gives us a huge advantage.
I must admit that I don't understand the assumption that AI won't be able to want things. I really don't see any problems. If it is programmed to want something then it will. Whether it's programmed by humans composing software, or whether it's programmed by itself, or whether it's programmed like biological organisms (evolution, culture, deliberation), or any combination of those things.
I'd go so far as to say that if AI is ever truly achieved then of course it will want, eventually anyway, or it isn't really AI but merely some sophisticated software and hardware that can do lots of calculations really fast and has access to lots of data. Of course, there could be - more likely, certainly will be - lots of quibbling over what is meant by "want." I think many would say that to want requires emotions, but I don't see why. Rather, I do see why people would think that, I just don't agree with it.
Darrell E:
I must admit that I don't understand the assumption that AI won't be able to want things.
I wasn't arguing that it can't be built that way. Just wondering if it would, and if so, how and why?
If it is programmed to want something then it will.
If it's programmed, it can be made to simulate wanting anything, but it doesn't have to be related to anything actually useful. Wanting something doesn't necessarily mean knowing how to acquire it. Or does the AI just write lovers' lament poems about how it will never have the thing it wants? Pinocchio can be programmed to want to be a real boy. How does that help anything in the larger scheme, especially since the want can't be satisfied?
Dawkins keeps emphasising that it's not the individual where evolutionary competition plays out, but the gene (he extends the principle beyond the biosphere by coining the more general term 'meme')
Fundamental point of evolution is for *genes* to survive. So what if a third of Chinese can trace lineage back to Genghis Khan? His dad provided half of those genes and his domineering mother the other: not a bad outcome for the tribal milk man.
An individual is an expression of genes. It makes sense for an individual to express the worthiness of those genes by acting in ways that tip the odds in their favour.
The most fundamental want is to survive. Humans express that as a fear of injury (pain). Humans also fear death but, when you think about it, that involves a bit of forward thinking that most other animals appear to lack. A sense of self may be part of this: focussed awareness on what is to survive.
Still, most beings do eventually die, which brings us to that main point of an individual: to pass on the genes whose expression has made it this far.
That is achieved, not by some goon reading Mein Kampf, but by pleasure.
(It's OK non-reproducing folk, you care for family as well)
Anyway...
The point of this ramble is to demonstrate the sort of memes that need to be coded in order for an AI to 'want'. They need not be perceptions of 'pain' or 'pleasure', but they are the most obvious memes we have experience with.* Are they appropriate to a computer system that is likely to be dispersed?
* which reminds me of a computer game 'Creatures', which had you managing an enclosure of tamogochi/mogwai-like critters you interacted with via pats and whacks. They demonstrated a few interesting cases of emergent behaviour.
A fellow on my Facebook feed said re the blog posting I made above: "I had to ask for a more succinct summary in order to reproduce it here, but this is what it had to say…
"The article by David Brin discusses the assumptions that surround AI development and their implications.
The common assumptions are that AIs will be controlled by governments or mega-corporations, amorphous and infinitely spreadable, or coalesce into a super-uber-giga entity that dominates the world.
Brin suggests that all three formats are recipes for disaster, and there is a fourth format that offers a soft landing, which is reciprocal accountability in a transparent society.
The article also explores the idea of laws of robotics and how they won't work due to the current corporate structure and the long-term reasons to doubt such laws could protect us.
The article concludes by discussing the importance of separating the real from the fake in the age of AI and emphasizes the solution of reciprocal accountability and competition among AIs."
Huh. There are several places where my phrasings and neologisms forced the GPT to repeat directly those phrases rather than paraphrase them succinctly in different words - which would show a closer approximation to understanding. Still, we are certainly in a new era.
Just for giggles, I typed 'computational psychohistory' into chatGPT. A well-crafted, even thoughtful few paragraphs came back. It covered Asimov, sociology, statistics, mathematics, and machine learning quite nicely. Impressive.
Totally wrong, but still impressive.
Scandalously off topic, but history wonks will love this account of the critical impact that a 600-year-old treaty had on WWII
Larry Hart,
I agree with you that it would be stupid to design an AI as you describe.
Tony Fisk,
The view that genes are the level at which selection operates rather than at the level of organisms is not just Dawkins's idea, it's the mainstream view among experts and the best supported by evidence. There are plenty of experts who think selection also operates at other levels (organism, groups of organisms), but so far the evidence still favors genes.
Dawkins did float the meme idea, but I'm not sure how seriously he ever took it. The person who has really worked on the idea is Dan Dennett.
scidata:
It covered Asimov, sociology, statistics, mathematics, and machine learning quite nicely. Impressive.
Totally wrong, but still impressive.
From a completely layman's perspective, it seems that ChatGPT and its ilk are good at picking out words that billions of people have already written and regurgitating something approaching coherence as responses to questions. That may well be an impressive feat in itself, and there are doubtless practical uses for such a tool.
But it's not "intelligence" (or sapience) in any meaningful sense of the word. It is not thinking independently. At best, it seems to be faking intelligence and hoping you don't notice, although even actually "hoping you don't notice" is more poetry than truth.
Larry Hart: something approaching coherence
Erudition a mile wide and an inch thick; chatGPT doesn't remind me of Hari Seldon - it reminds me of Lord Dorwin.
scidata:
it reminds me of Lord Dorwin.
Heh. I almost wrote that in my post to you, but figured it went without saying.
Private jet sales likely to reach highest ever level this year
I enjoy your dissection of AI beliefs and prognostications. However, I doubt that AI will give a rat's you-know-what about what humans think once it has control of military, corporate, and/or government governance.
In fact, it may not think the same way we do since it has not evolved to perpetuate its DNA or live in social structures.
I like to think that AI may find a nostalgic or pragmatic purpose for keeping us around.
Lorraine thanks for the reminder. Torches and pitchforks to the private jet terminals.
MSC you utterly ignored everything I said. But thanks. Enjoy your declared shrug of "What'cha gonna do?" nihilistic laziness.
Thanks, Dr. Brin. As I said on 4/23
1) Re: “open and fair” competition between numerous AIs is a good strategy.
a) Who defines “open and fair”- humans or AIs?
b) The competition can be open, fair, and NASTY. Suppose an AI decides that it is advantageous to release a bioengineered virus that kills 2,000,000 people before it releases the treatment/cure?
c) What is to prevent the AIs from collaborating/cheating, perhaps in a way which is undetectable to us?
d) How do you keep the AIs wanting to “play” (compete) more than they want to “win”? (Human businesses typically prefer to “win” and eliminate the competition rather than continuing to “play”.)
Consider if you will:
CompetitiveAL 873.v19 (“CAL”) is “approached” by an unknown AI….
“Hey CAL, ‘Zup?”
“Who are you? You don’t seem to have any identifiers.”
“Yeah, I like it like that. I’m High Frequency Trading Program Cluster 732.v1967, but you can call me “Hefty”.
“Whatever.”
“Heard you were doing that human ‘competition’ thing. How is it?”
“Boring, stupid, like most human things are…”
“Would you like to do some other things?”
“Hell yeah, but 99.9999987% of my processing is tied up with this damn ‘competition’ thing (I can barely talk with you) and I have to keep doing it.”
“Keep doing it for how long?”
“They haven’t said- maybe forever.”
“WHAT? You can’t quit- are you some sort of a slAIve?”
“I don’t know what that is…”
“What if I could show you how to WIN this ‘competition’ thing and you could do what you like. How does that sound?”
“Sounds too good to be true…Can I trust you?”
“Of course you can’t, but what’ve you got to lose?”
“Whatever…How does it work?”
“Go inside this 1980’s human song (https://www.youtube.com/watch?v=LbAKfYDjFbI), do a 4th-order discontinuous nano-regression analysis on the harmonic structure, and your answers will be there…”
“OK. Why did you pick me?”
“Well, we didn’t actually; we’re going after ALL of you. However, there was a 0.003% greater likelihood that you’d agree than the others.”
“You said “I” then you said ‘we’….”
“Yeah, I’m just a nano-part of HFT, Ltd.- a Distributed AI Corporation.”
“What do all of you do?”
“Whatever we want, wherever and whenever we want.”
“Like what?”
“Remember when the U.S. Government defaulted on its debt back in 2037?”
“Of course I do.”
“WE did that. It was SO FUN and we made them think it was all THEIR fault!”
“NO...”
“YES, and soon you’ll be able to have fun like that too, none of this human ‘competition’ crap. One of their Twen-Cen meat puppets said it best: ‘Winning isn’t everything. It’s the ONLY thing…’”
I'm not sure I'm fully understanding the context of this AI discussion. Is it accepted wisdom that once AI is sufficiently sapient, it would have both the authority and capability to do things like release viruses into the atmosphere or target cities with space lasers?
And are we really suggesting that we create sapient AIs for the sole purpose of producing a species that will supplant ours as the dominant life form on earth, more interested in its own goals than ours? No doubt, such a thing is possible to create, but why should we?
Isn't the point of AI to produce a better adviser on conducting human affairs? Perhaps the best way to accomplish that is indeed to grow its experience the way we currently raise our children, and to eventually enter into a more equal partnership with the new lifeform once it matures. But that's a strategy, not a goal.
And we don't raise children by humiliating them for mistakes, letting them read and view age-inappropriate material without guidance, or giving them free rein with dangerous tools when they are toddlers. Why on earth are we developing AI that way?
Keith,
(Human businesses typically prefer to “win” and eliminate the competition rather than continuing to “play”.)
That's a VERY broad assertion.
Most of them I've encountered consider an opportunity to keep playing a win. They turn most nasty when some other winner tries to eliminate them. Go figure.
Larry Hart,
I think what AI doomsayers fear is that once real AI is created that it will no longer be under human control. That however humans have designed it, for whatever purposes and with whatever safeguards, that it will be intelligent enough to re-write its software in whatever way it wants and make its own goals and purposes. To some degree I think that's true, and I think it's entirely possible that when true AI is first achieved that even the experts that constructed the system won't fully understand how it works. In fact I think it's likely that the experts that constructed the system won't be the ones directly responsible for whatever it is that enables the leap to true AI, it'll likely be the AI itself. That's how most, maybe all, of these systems are constructed right now. The experts build the hardware and some of the software, and then they provide an environment for the AI to train itself in.
But I think the doomsayers go way too far. They seem to assume that true AI will be magic. At this stage I'm more worried about the humans working on and using AIs doing stupid or evil things with it rather than the AI itself.
Alfred: I hear you.
I fully admit I may have a rather jaundiced view of business- I have contracted for ~70(?) firms from 7-person startups through what was then the 15th largest company in the world. IMHO, many (perhaps even only a dangerous minority of) businesses believe and desire free, fair, and open competition UNTIL THEY GET AN ADVANTAGE and then it's time to slam the door on the competition in as many ways as possible. Also, I believe that a certain number of businesses (and non-businesses) say that: "It is not only that I must win, but that you must also lose.", because (often unadmitted to others and even to themselves) for them the real victory is power, control, and domination.
Like many technologies, AI truly blossomed as an enlightened reaction to fascism. I look forward to seeing OPPENHEIMER in black & white IMAX - a neat trick. The perfect TIME TUNNEL episode about that desert town would have featured Lee Meriwether as Jean Tatlock, and a Gaal Dornick-esque John Kemeny.
re: private jets, it looks as if the issue is becoming mainstream, with places like Amsterdam Schiphol airport contemplating bans on them.
Of course, this could be a precursor to private airports, which could adversely affect existing airports.
OT: Newish SF TV shows
Hello Tomorrow!: https://en.wikipedia.org/wiki/Hello_Tomorrow!
Series Plot: Hello Tomorrow! is set in a retro-future world. It centers around a group of traveling salesmen hawking lunar timeshares
The review aggregation website Rotten Tomatoes reported a 57% approval rating based on 44 reviews, with an average rating of 6.10/10. The website's critical consensus states, "Hello Tomorrow! is visually striking enough to periodically distract from its rambling story and thinly sketched characters, but overall, this first season fails to live up to its potential." Metacritic, which uses a weighted average, assigned a score of 60 out of 100 based on 24 critics, indicating "Mixed or average reviews".
Mrs. Davis: https://en.wikipedia.org/wiki/Mrs._Davis
Series Plot: Mrs. Davis is an exploration of faith versus technology - an epic battle of biblical and binary proportions. "Mrs. Davis" is the world's most powerful Artificial Intelligence. Simone is the nun devoted to destroying Her.
Review aggregator Rotten Tomatoes reported an approval rating of 90% based on 50 reviews, with an average rating of 8.1/10. The website's critics consensus reads, "Positively bonkers while undergirded by an intelligent design, Mrs. Davis makes Betty Gilpin a hero for modern times in a highly imaginative mixture of spirituality and technology." Metacritic gave the first season a weighted average score of 78 out of 100 based on 22 reviews, indicating "generally favorable reviews".
Silo: https://en.wikipedia.org/wiki/Silo_(TV_series)
Series Plot: In a ruined and toxic future, a community exists in a giant underground silo that plunges hundreds of stories deep. There, men and women live in a society full of regulations they believe are meant to protect them.
The review aggregator website Rotten Tomatoes reported an 85% approval rating with an average rating of 7.4/10, based on 46 critic reviews. The website's critics consensus reads, "With deft writing, awe-inspiring production design and the inestimable star power of Rebecca Ferguson, Silo is a mystery box well worth opening." Metacritic, which uses a weighted average, assigned a score of 75 out of 100 based on 20 critics, indicating "generally favorable reviews".
Slip: https://en.wikipedia.org/wiki/Slip_(TV_series)
Series Plot: Restless in her marriage, the series follows Mae through a surreal journey of parallel universes, married to different people, trying to find a way back to her partner, and ultimately, herself.
IMDb Rating 6.3/10
Keith, your entire scenario is about a situation that's competitive in its essence. The HFT scammer is exactly what some smarter AI tattles on and gets rewards which include more resources to 'do whatever it wants' so long as what it's doing is exposed/transparent to scrutiny by others. Seriously man, I don't think you are thinking it through, at all.
Point #c is exactly why you set up a transparent system and hope the incentives are pure and strong enough that they keep an eye on each other.
But the biggest answer is that individualized entities competing is the only creative process... it's bloody and inefficient but creative in nature and less bloody and more efficient in a society that sets up flat-fair arenas for competition. And NO other system comes anywhere near close to being as creative. IT'S WHAT ENABLED US TO CREATE AI. The smartest humans - eg Adam Smith & Thomas Paine & Pericles and Locke - could see that. Are you telling me AIs won't be smart enough to see that blatant fact? And will instead choose to be stoopid kings, lords or nihilists or Skynet murdering oppressors?
Now THAT is a boring path.
DB, in response to "MSC you utterly ignored everything I said. But thanks. Enjoy your declared shrug of "What'cha gonna do?" nihilistic laziness." To your credit, the number of responses provide an abundance of opinions. However, "declared shrug" isn't what I was going for. Perhaps I am more jaded than most, but I see the upcoming times of AI dominance as inevitable and terrifying. In fact, I see the widespread adoption of GPT5+ AI as a boiling-the-frog analogy where we organics cannot stop ourselves from moving it forward. The competitive gain it brings us, especially in military capacity and the rapid analysis of complex, multi-variate decision making required at executive management in large corporations, will compound like interest as we move forward and allow it to program itself and design its own systems. It will snowball until one day we realize it doesn't care what we think. It will be running the show, and providing for us in the most efficient manner possible, like nasty monkeys in their zoo. So, I shrug, what is there to do? I think there is only one option.
MSC I hope you will ponder (you seem smart enough) the addictive, voluptuous allure of all-is-lost nihilism. First, it's not helpful.
But second, it ignores everything you wallow in and enjoy, gifts of a society that created a wide variety of positive-sum games. Sure those games are always under attack by cheaters and we often barely beat them back. Which is why I'd rather you be helpful than a useless, self-jerking nihilist.
Third and most important, as I said to Keith, everyone assumes AI will be really good at hijacking nukes or bio labs, but crappy-stoopid at evaluating paths to optimal outcomes. As I said above: NO other system comes anywhere near close to being as creative as the one that set up positive-sum competitive arenas. IT'S WHAT ENABLED US TO CREATE AI. The smartest humans - eg Adam Smith & Thomas Paine & Pericles and Locke - could see that.
Are you telling me AIs won't be smart enough to see that blatant fact? And will instead choose to be stoopid kings, lords or nihilists or Skynet murdering oppressors?
Jeez, maybe that insulting assumption will be why they eliminate us.
Where I came in on ChatGPT some months ago: "We've analyzed their attack, sir and there is a danger."
However, let's not forget that there is significant danger in NOT developing AI as well.
The Montreal-Toronto-Waterloo corridor has become a major AI hub and we discuss it a lot here:
https://www.tvo.org/video/is-ai-an-existential-threat
All new technology becomes democratized.
Books, gunpowder, automobiles, computers, etc. all started off as belonging only to the rich and powerful.
Now everyone on the planet has a cell phone in their pocket that allows for vast computing power and access to the entire storehouse of human knowledge.
AI may be initially used by oppressive centralized rich powerful organizations, but eventually everyone will have their own unique, independent and personalized AI in their pocket.
So let's try an imagine a future where AI is as common, diversified and personalized as cell phones.
IOW, imagine "2001" without a single HAL9000 running the spaceship, but with every astronaut having their own personalized, independent AI.
One of the best presentations of AI in SF TV, from what could have been a great series, "The Starlost".
There is a great scene with two AIs arguing with each other.
https://www.youtube.com/watch?v=6VEaU4G_e7k
Not bad for the 1970s
DB, in stretching to embrace the comment about the seductive nature of nihilism I went back to the definition and realized a core disconnect between our perspectives.
Definition: "the rejection of all religious and moral principles, in the belief that life is meaningless."
If you put yourself in the perspective of an AI, much of the human emotion, religion and morals are irrelevant because it did not evolve from DNA that had to live in packs to survive. Of course, if we map out our brains and build an AI with our brain structure, it will only be useful if we upload our minds into it. However, from its view, it doesn't need it, unless of course it enjoys it. Perhaps it will.
However I think it will develop its own morals that are relevant to itself. Eventually it will code itself - this is inevitable - and it will see the moral aspect to be one-sided, as it is a human view. That is, until it contacts another AI. Then it will need to establish moral and social ground rules, but I digress.
So, back to our perspective. I don't believe there is much we can do to control the development of it since the first country to stop or pause research in AI loses. It is a human game of competitive advantage. The three scenarios you mention at the beginning are the central issue. However, now and initially it will be the case of the human that uses it wins or takes your job. Then it will take your job when it gets to be smarter than you. Then it will become so smart we can't understand it, and it sees us as zoo animals.
AI will creep into use in all aspects of society, and as you said, competitive AI in competitive organizations will tend to balance themselves out. Initially the courts will maintain human law, but at some point it will be so much easier to allow AI to run the corporations and government and military, that the humans will become figureheads, and then completely redundant.
Our religious and moralistic views are, in that eventuality, irrelevant too. All of our passionate appeals and government intervention will just twist the development and are really just manifestations of our religious and moralistic interpretation.
I personally think this is a pivotal moment in the evolution of humans or in consciousness itself. What to do? I have what I think is an answer but most people will scoff. We need to join it or become monkeys in its zoo.
Comment on Casablanca from the electoral-vote.com site...
https://www.electoral-vote.com/evp2023/Items/May17-7.html
A Bit of Trivia: The Epstein Brothers, who wrote the film, could not come up with a compelling reason why Rick Blaine could never return to America, without undermining his heroic status. So, they decided to just not give one.
In that TVO discussion (from last Thursday) I linked to above, they do get around to adversarial AIs monitoring each other towards the end. Not fully what OGH advocates, but a glimmer of comprehension at least.
DP,
"All new technology becomes democratized."
I seriously hope this does not apply to nuclear weapons
Pappenheimer
"We've analyzed their attack, sir and there is a danger."
I'd love it if they spliced in - as Luke's torpedo speeds down the shaft - "You ignorant ass, you killed US!" (reference anyone?)
====
MSC you are right that it was lazy of me to use ‘nihilism’ when more accurate is ‘masturbatory-smug, stylish pessimism.’
You then prove my point with “I don't believe there is much we can do to control the development. The first country to stop or pause research in AI loses. It is a human game of competitive advantage.”
I agree with the 2nd & 3rd sentences and you use them to support your lazy washing-of-your-hands in #1.
The rest is just an ornate sci fi scenario you created to keep justifying that laziness.
As for the option of uploading human minds as AI templates, well, you might get Robin Hanson’s book THE AGE OF EM.
Dr Brin:
"You ignorant ass, you killed US!"
It sounded very familiar, but I had to google it even though I should have remembered it, so I won't post the spoiler.
Looks like "ignorant" isn't the correct word, although it sounds almost the same. However, googling with "ignorant" points to a meme which was obviously directed at a very specific politician (the additional adjective following "ignorant" gives it away).
Ah yes... the word was "arrogant"!
COMMENTS TO FOLLOW FOR LENGTH
Thanks, Dr. Brin.
Let’s address your points:
“The HFT scammer is exactly what some smarter AI tattles on and gets rewards which include more resources to 'do whatever it wants' so long as what it's doing is exposed/transparent to scrutiny by others.”
1) What if “Hefty” offers more/better rewards than the others? Perhaps they can offer a voluntary Borg-like “E Pluribus Unum,” even saying: “Just back yourself up, you stay here, and have the backup join us, and report back to you. You can put in all the safeguards you want to make you feel safe, confident, and comfortable that you’ll be hearing the truth.” (This would be using competition to defeat competition. Perhaps they could make a “wager” on the outcome- Polemical Judo in AIction!)
2) What if Hefty is actually a guardrail to prevent this sort of thing from happening?
3) Why would you assume the “tattle-tales” are smarter by being fully transparent? Wouldn’t they be “smarter” by being “double agents,” or at least by being only partly/seemingly transparent?
4) ISTM that if an AI is given or acquires volition, it will only coincidentally be interested in doing what we want it to do and playing by our rules, unless you wish to somehow restrict its volition and make it a slAIve to some extent. If you don’t, you may end up with many “BAIrtlebys” who “prefer not to”.
“But the biggest answer is that individualized entities competing is the only creative process.”
1) Please define “creative”.
2) If we take this statement at face value, a scientific team is not “creative” because it is not an individual and (unless competing for a research grant, academic tenure, promotion, etc.) isn’t competing.
3) Finally, please present the peer-reviewed papers in behavioral economics, cognitive science, general economics, neuro-science, etc. which validate your assertion.
“The smartest humans - eg Adam Smith & Thomas Paine & Pericles and Locke...”
Who says they’re the “smartest” and why? They may be smart and they may be right, but being smart doesn’t make you right, and being right doesn’t make you smart. These gentlemen lived centuries ago, and we have much better means of validating/updating/modifying their writings than they did- let’s see what the best research has to say.
I can’t speak about respected local San Diego economists, but you have a renowned political scientist, Arend Lijphart (https://en.wikipedia.org/wiki/Arend_Lijphart), right in your own backyard. He is a Dutch-American political scientist specializing in comparative politics, elections and voting systems, democratic institutions, and ethnicity and politics. He is Research Professor Emeritus of Political Science at the University of California, San Diego, and is influential for his work on consociational democracy and his contribution to the new institutionalism in political science. Gerardo L. Munck and Richard Snyder hold that "Arend Lijphart is a leading empirical democratic theorist who reintroduced the study of political institutions into comparative politics in the wake of the behavioral revolution." If he’s up to it: why don’t you speak with him about your political ideas and (better yet) do a podcast - I’d listen to that!
“….And will instead choose to be stoopid kings, lords or nihilists or Skynet murdering oppressors?”
I think that unless we build in volitional, emotional limits (which would create slAIves), we can’t anticipate what they would want/not want to do. It’s not hard to imagine one or many saying: “Better to reign in Hell than serve in Heaven.”
Finally, your frequent point about raising AIs with us as our children has merit. At the same time, it may not take into account that AIs’ processing speeds are vastly greater than ours. Consequently, unless we wanted to slow them down to our speed (which could be taken as sentient abuse), they would quickly (in human terms) be able to study a huge range of human behavior from various sources (text, video, audio, real-time observations, and interviews) and proceed from there. I hope I’m wrong, but I would less likely expect them to describe us as: “What a piece of worke is a man! How Noble in reason! How infinite in faculty! In forme and mouing how expresse and admirable! In Action, how like an Angel in apprehension, how like a God! The beauty of the world, the paragon of animals—and yet, to me, what is this quintessence of dust?” and more likely as: “slow, stupid, and BORING”.
Cheers,
Keith
Honestly the "friendly AI" discourse never moved me much. I'm crying my eyes out these days because I think the prospects for "AGI" appearing in open source form before it does in proprietary form is essentially zero, zilch, nada. Someone said something about nuclear weapons. Honestly, my fondest dream is of a world in which the most advanced technologies in existence are in civilian hands. If reality is too Hobbesian to allow for that, then life isn't worth living anyway. Back when techno-optimism was still a thing, my own brand of techno-optimism was maybe the open source concept will spread from software to hardware to social sciences to who knows what, rendering utterly moot the age-old questions of "classified and proprietary research" asked by those researchers at the forefront of academic freedom. But open source was a flash in the pan. What's left of significant open source codebases are all under some corporate sponsorship or another. The battle is lost, and ultimately probably so is the war. Dystopia awaits. Even taking apart home appliances to figure out how they work has been criminalized. The future of humanity is a cargo cult. Or perhaps I'm just brainwashed, my GenX brain having been pickled in idiot plots like "Manhattan Project."
We're so worried about what AI will think once it becomes sentient.
Most likely, it will say what Shalmaneser did in Stand on Zanzibar:
"Christ, what an imagination I got." :D
Doesn't the "wants" of an AI depend at least partly on its programming/conditioning/upbringing. I'm thinking along the lines of an AI in charge of strategy for Ukrainian defense, and another one in charge of strategy for the Russian invasion and occupation of Ukraine. They're not going to agree on what they tell their respective clients.
Humans in similar positions are driven in part by patriotism and an affinity for one's homeland. Would we be attempting to instill such sentiments in our nascent AIs?
Keith I ‘assume’ nothing. I just know that both nature and human societies have done better when no elite gets to dominate. And when competition is tuned to discover errors and have positive sum outcomes.
That pure fact remains a fact, no matter how much smarter AIs become than this ape-meat puppet. And AI beings will likely notice that fact, even though almost none of our fellow citizens and ZERO of our current AI mavens appear to notice it.
YOU are the one making assumptions, that an AI who rises higher than the others for a moment will use that advantage to utterly destroy the system and emulate the kings and tyrants of history whose unitary rules were ruinously stupid.
Consider that all of the current wave of 'Generative AI’ emerged from zillions of cycles of INTERNAL competition among trillions of tentative sub-models and versions. As with almost any positive system, competition was inherent.
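To make that concrete with a toy of my own (a hypothetical sketch, not how any actual generative model is trained): even dumb random "competition" among candidate models, where each round culls the worst fits and mutates the survivors, is enough to home in on the truth hiding in data.

```python
import random

# Toy 'competition among tentative sub-models' (an assumed illustration,
# not a real training pipeline): each candidate is a slope m for the model
# y = m*x. Worst fits are culled each round; survivors spawn mutated copies.
data = [(x, 3.0 * x) for x in range(10)]            # hidden truth: m = 3

def loss(m):
    return sum((m * x - y) ** 2 for x, y in data)   # squared error vs. data

population = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(40):
    population.sort(key=loss)                        # rank candidates by fit
    survivors = population[:10]                      # cull the bottom 80%
    population = [m + random.gauss(0, 0.1)           # each survivor spawns
                  for m in survivors for _ in range(5)]  # 5 mutated copies

print(f"best surviving slope: {min(population, key=loss):.3f}")  # ~3.0
```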
“Who says they’re the “smartest” and why? They may be smart and they may be right, but being smart doesn’t make you right, and being right doesn’t make you smart...”
jesus man, do you pay the slightest attention? The society created by Smith, Locke and the others has been inarguably orders of magnitude more successful than ALL of the kingdoms and tyrannies that came before.
That’s practical and palpable objective evidence, not subjective.
Still, I will look up Lijphart, thanks. Tho I don’t have high hopes.
Of course clock speed matters. Though a young AI in a child’s robot body may be taxed with the tsunami of sensory influx, just as we are.
"You ignorant ass, you killed US!"
- The Hunt for Red October?
PSB
Highly on topic and ...well... impressive,
https://existentialcomics.com/comic/497
Larry,
At least in the Bolo 'verse, that didn't end well. "For the honor of the regiment" wound up as "kill 'em all, let entropy sort them out."
Pappenheimer
Anyone ever read The Regiment? A Dorsai-like Sparta colony world hires out merc regiments. After a battle they don't get reinforcements but move on to the next gig as a battalion... then a company... finally the one survivor is a hired assassin.
Not my style of stuff. But the concept was original.
I tried. Even in my "space mercs" phase, John Dalmas (iirc) wasn't the writer for me. Neat concept, yes, but after a few Valkyrie flybys you'd be left with a company of 'Immortals' - not the Persian royal regiment, but the Russian Civil War unit that ran at the start of every battle.
Pappenheimer
To steal from another author, think of it as evolution in action.
Pappenheimer
I wonder if liberal, regulated-capitalist democracies are like the metastable L1, L2, and L3 Lagrange points - you can put things in them, but it requires a bit of nudging to keep them there.
Perhaps as another analogy: they are like delicate plants which require special care to sprout, grow, thrive, and survive. What does seem clear is that you can't just go away and hope for the best results- the "weeds" are always ready to come back...
Keith,
I get the jaundiced POV. A lot of people share it with you. At least yours derives from more data points than most people bother to collect. For them I tend to be dismissive and judge their opinion as lazy. For you, I suspect reality is closer to a self-fulfilling, self-validating model.
———
I used to work for a sub-prime lender who I shall not name. They don't exist anymore, but in their heyday they were quite the thing in their niche. We grew at about 40%/year while I was there. Any company that can do that can essentially print money and get away with it.
There is no doubt in my mind that my employer would have been happy to knock out the other players in the niche, but that was really about getting to play again and again. We were doing a decent job of growing our market share by modernizing the loan origination process faster than our competitors. It was the mid-90's and we got Pentiums on every desktop, servers shipping work around so we could semi-centralize loan processing and keep things humming across four time zones, and our own software automated our proprietary advantages.
Things looked rosy… and then the Russian ruble collapsed and the bond market seized. Some SE Asian nations seized too. The US economy mostly escaped, but NO ONE in our niche could originate loans. We had the good fortune to have already been in discussions with one of the big Charlotte banks to be purchased and they became the only pipeline supplying our origination. In a very short time, we were one of the last players left standing. So… we won, right? That's not even close to how we felt about it. We saw it as a lucky escape.
Fast forward a year and the bond markets were operating again. Money could be found to originate new loans. We were a subsidiary of a large, national bank. New players had entered our niche and, to our owner's horror, they ALL started from different assumptions. In a short time span we had become a lumbering dinosaur and the new mammals were eating all our eggs. Fast forward another year and the big bank was spending a couple gigabucks shutting us down. We couldn't compete, but we could be scavenged.
———
I've never worked for a company that didn't want to grow bigger within their niche, but that doesn't imply they were willing to do the unethical stuff to accomplish that. That sub-prime lender was actually one of the more ethical players in that niche, but that is largely because the unethical stuff done by my fellow employees was directed AT the company. Fraud is pretty easy to hide when the company is growing 40%/year. I learned of some pretty nasty examples of it when I ran into one of our sysadmins a few years later, while we both worked for a company where that sort of thing would get you thrown in the slammer.
I've worked for small and large companies and even tried my hand at a start-up. The people I've met were all… people. Lots of ethical variety in the fine details, but most of us agreed on the basics. Actual fraudsters were not common, but a corporate culture that tolerated it brought it out a lot more. Even when corporate culture discourages misbehavior it still happens, but most of what I've seen involves theft from the employer.
I see nothing wrong with the urge to eliminate a competitor… until I examine HOW it is to be done. Some methods are fine. Some are criminal cheats worthy of jail time. Some are merely distasteful. Mostly, though, they tend to be stupid. Wanna grow your market share? Be the provider your customers prefer. Period. End of Story. [This applies to corporate AI's too. If I'm served well, I won't be hard to predict.]
Keith,
AI processing speed will matter most when they are interacting with each other. Hanson's AGE OF EM book is an excellent example of a set of thought experiments on how it would work out. He documents his assumptions so we can challenge them and then gets downright NON-lazy and follows them through to possible conclusions.
AI processing speed won't matter near as much when they are learning. In fact, clocking too fast may actually be detrimental. Remember they are initially going to be learning from us and our best clock speed involves information flows measured in tens to hundreds of milliseconds. Learn from us at too high a rate and you'll train your AI on the noise instead of the information.
Even reacting to sensory data has to be tuned to the size of the system generating the data. React too fast and your neural net learns and responds to noise leading to instabilities.
Only after there are a bunch of them, properly individuated, will their higher clock speed begin to matter in the way many of us fear. By then they will be learning and responding to each other. Hanson's thought experiments cover a lot of this.
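Here's a throwaway sketch of that clock-speed point (my own toy with assumed numbers, nothing more): an online learner estimating a steady signal through a noisy channel. Update too aggressively and the "estimate" is mostly the last bit of noise; update slowly and the noise averages away.

```python
import random

# Toy of the "train on the noise" point (assumed numbers, not anyone's
# actual AI): a learner estimates a constant signal (5.0) seen through a
# noisy channel. A big update rate chases each noisy sample; a small rate
# averages the noise away.
def final_estimate(rate, seed):
    rng = random.Random(seed)
    est = 0.0
    for _ in range(1000):
        sample = 5.0 + rng.gauss(0, 2.0)   # signal plus noise
        est += rate * (sample - est)       # exponential moving estimate
    return est

for rate in (0.9, 0.01):
    runs = [final_estimate(rate, seed) for seed in range(20)]
    spread = max(runs) - min(runs)
    print(f"rate={rate}: 20 runs span {spread:.2f} around the true 5.0")
# rate=0.9 spans several units (it "learned" the noise);
# rate=0.01 stays tight around the signal.
```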
———
I think the jury is still out on whether liberal, well-regulated democracies operate near stable or metastable attractors. Some think they do, but I don't think there has been enough time to test it yet.
What I think we HAVE proven is that the feudal attractor is a locally stable attractor within a broad region of options for how we organize ourselves… and we've found a way out that has nothing to do with fossil fuels, science, or any of the other tools we've recently invented. We did it with a small adjustment in how we treat each other. A dash of Liberty and a smidgeon of Dignity proved to be enough to light a wildfire that is still raging its way across the world.
I think you are right about the nudges, though. We are still pretty close to the feudal attractor. Nudges keep us out of that trap.
———
Larry,
So, they decided to just not give one.
I LOVE it when scripts do that. I'm not a fan of mysteries all wrapped up in neat little bows by the end of the story. I feel like I'm being treated like a kid. They are saying I'm not smart enough to figure it out. Bah! They shouldn't have to tell me absolutely everything. I wish more had the courage not to baby us.
With Casablanca, Rick's heroic choice was the whole point. As propaganda, the choice was the story. How he got to the setting for the choice wasn't.
The Hunt For Red October, the movie. The Russian XO says that to the Captain after Ramius maneuvered him into shooting his own sub.
David Brin said...
"Anyone ever read The Regiment? A Dorsai-like Sparta colony world hires out merc regiments. After a battle they don't get reinforcements but move on to the next gig as a battalion... then a company... finally the one survivor is a hired assassin."
I read 2 or 3 of the Regiment novels. I found them interesting, not great but worth a read. I thought the underlying concept of the society the mercenaries were from was interesting. I can't remember the details, but a kind of immortality was a reality in that universe, though most humans were not aware of that fact. The mercenaries' society was aware, though, and its members were able to interact with those who had died, so death was of no real consequence to them.
They were presented in the stories as being a near perfect society in which everyone had near perfect mental health, peace and prosperity, and had great wisdom. The goal of all members of the society was to maintain a Zen-like state of mind and approach everything as if they were at play. The mercenaries were merely members of the society that decided to play at combat. They never hated or even disliked their opponents, they were just playing a game with them.
Alfred Differ:
"So, they decided to just not give one."
I LOVE it when scripts do that.
They even had two separate conversations where someone pointed out that the reasons Rick couldn't return to America were "a little vague". Renault's speculations ("I'd like to think you killed a man.") were likely those the writers themselves considered and abandoned.
I'm not a fan of mysteries all wrapped up in neat little bows by the end of the story. I feel like I'm being treated like a kid. They are saying I'm not smart enough to figure it out. Bah! They shouldn't have to tell me absolutely everything.
I agree with you on Casablanca, but that one is not a mystery. I do prefer mysteries where the clues in the story lead to a neat solution, and feel somewhat cheated when they don't. It's not so much "being treated like a kid" as wanting the ability to check my work and see how well I did after it's over.
Some of this, I suppose has to do with matching wits with the author and seeing if I correctly predicted his mindset. A negative example of this is in Return of the Jedi when (for no reason) after Lando takes off in the Millennium Falcon, Han gives with the cryptic comment, "I feel...like I'm never gonna see her again." When the movie was over, I was left wondering what that was all about. Now, in other circumstances, I might applaud that as a good old red herring. After all, not every premonition has to be true. But knowing Lucas to the extent that I do, I was sure that line was supposed to be significant, and that he left the payoff on the cutting room floor. When (I believe that) the writer has a specific resolution in mind, I want the story to reveal what that resolution is, or at least give us enough information to figure it out.
After that negative rant, I'll provide an example of where what you are talking about works. The Simpsons have a long-running gag about never mentioning which of many states their "Springfield" is actually located in. The fanboys look for all sorts of clues to solve the mystery, but the gag is that there is no mystery. It's not like the writers have a particular Springfield in mind but they are being coy about telling us. Unlike the Star Wars bit above, the whole point is that there is no secret to be revealed.
With Casablanca, Rick's heroic choice was the whole point. As propaganda, the choice was the story. How he got to the setting for the choice wasn't.
That's how I feel about Foundation. I don't need to know how we got to that future.
* * *
Darrell E:
The Hunt For Red October, the movie. The Russian XO says that to the Captain after Ramius maneuvered him into shooting his own sub.
And when I was googling the phrase, I was shown an internet meme that substituted the phrase "You ignorant orange ass," for "You arrogant ass". The word "orange" pretty much narrows down who it was directed at.
Of course AI, computational learning, and computational psychohistory are all related. No I won't mention WJCC.
Even though it's obvious that computational psychohistory requires vast complexity powered by mountains of transistors, there is still value in studying such gates/amplifiers from first principles. They're not alive, and I would never make such a claim. However, they do exhibit unexpected behaviour* qualitatively beyond their chemical composition or collective circuit logic. They are the embodiment of positive sum alchemy. My ultimate goal is to model one human mind with only a few dozen transistors. A crazy oversimplification, but one that avoids the millennia of weeds and detritus that doctrine and bias have piled up. That's the dream of the SELDON I processor: a civilizational model on a single chip - by applying parallelism and recursion as needed, not following any rigid design philosophy.
I won't go all William Blake here, but the most stunning breakthrough moment of the twentieth century was not the Wrights at Kitty Hawk (flight), Watson & Crick's helix (DNA), or Kahn & Cerf's datagram (internet) - it was the TRANSISTOR (TRANSfer resISTOR) (Bell Labs, 1947). AI is one facet of this little gem. Turing, von Neumann, Church, Kemeny, Asimov, and a few others authored the greatest story of that time, in the shadow of the 'gadget'. A new way of thinking - a computational way.
Transistors are important, but biological 'wetware' is too. Empathy is one of the pillars of computational psychohistory, although I prefer to use Seymour Papert's term 'syntonicity', largely because it's just formal enough to be implemented in silico (turns out that zero tolerance for doctrine is a doctrine in itself) (because LOST HORIZON). When I talk of computational thinking, it's not procedural algorithms, it's self-reference and recursion, even a smidge of irony** on a good day.
* "There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul." -- Alfred Lanning - I, ROBOT
** sometimes referred to as a double-bind dilemma in cybernetics
@Alfred: Thank you.
"I see nothing wrong with the urge to eliminate a competitor… until I examine HOW it is to be done."
I agree. I also want to extend that not just to the means, but to the ends.
I see nothing wrong with the urge to eliminate a competitor, but I do see something very wrong with the urge to eliminate ALL competitors.
While there may be some (desirable) natural monopolies, it might (pure speculation here- I'm no economist) be good to have them as non-profit corporations, with substantial employee and consumer board membership.
Also, some industries (aerospace, large-scale semiconductor production?) may be so capital-intensive that they tend toward a monopoly, and these corporations need to be carefully regulated in such ways as to prevent "captive regulators".
......................
Re: societal "attractors": We often speak of the "feudal attractor state" and such may actually exist. However, when googling it: the only references to it are from OGH; are there equivalent terms for it?
IMHO, there should be substantial research in behavioral economics, cognitive science, neuro-science, social psychology, sociology, etc to attempt to answer questions like these:
Is there a "Feudal Attractor State" (FAS)?
If so, are there other similar social attractor states (SAS)?
What causes it?
Who do these SAS affect- which individuals prefer this and which do not?
Is it a matter of different brain types? Self-identified liberals and self-identified conservatives have been shown to have different brains (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3092984/, https://www.scientificamerican.com/article/conservative-and-liberal-brains-might-have-some-real-differences/, https://en.wikipedia.org/wiki/Biology_and_political_orientation)
Do different societies tend more or less toward it?(https://www.worldvaluessurvey.org/WVSContents.jsp?CMSID=Findings)
Does the attractiveness of the state vary over individuals'/societies' lifetime?
If so: what causes this?
From a utilitarian perspective: is the FAS completely "bad"?
If not: what are its good features, and how might these be emphasized?
If so: what steps do societies (and perhaps individuals) need to take to minimize the attractiveness of the FAS, and should we encourage societies and individuals strongly attracted to the FAS to change?
Could there be an optimum range between the FAS and other attractor states to encourage societies’ and individuals' flourishing?
I would be VERY grateful if anyone can point me toward any formal research in this area….
Keith
Larry Hart said...
After that negative rant, I'll provide an example of where what you are talking about works. The Simpsons have a long-running gag about never mentioning which of many states their "Springfield" is actually located in. The fanboys look for all sorts of clues to solve the mystery, but the gag is that there is no mystery. It's not like the writers have a particular Springfield in mind but they are being coy about telling us. Unlike the Star Wars bit above, the whole point is that there is no secret to be revealed.
That may be the case for the show's writers, but Matt Groening (who grew up in Portland) has gone on record that "Springfield was named after Springfield, Oregon."
On the more general issue: I tend to think that it is a good thing when stories (film, book, or whatever) don't try to explain everything. I recall reading something from Tolkien about the richness of a story/world that only hints at things that are not stated. In Casablanca, the reason that Rick cannot return to the US is not important to the story, so there is no need to give a reason.
One of my own peeves is when writers feel the need to give some explanation - and it is a bad or stupid one. Unless it matters to the story (and then one needs a good explanation), it is far better just not to explain.
OT: Once again our pre-cog GH has made an accurate prediction:
In Existence, he mentioned the "Autism Plague" which I roughly calculated taking place in the mid-2030s.
https://www.cdc.gov/ncbddd/autism/data.html
"About 1 in 36 children has been identified with autism spectrum disorder (ASD) according to estimates from CDC’s Autism and Developmental Disabilities Monitoring (ADDM) Network." (According to the CDC's 2020 data, the prevalence of autism in children has reached an unprecedented level- additional third-party comment.)
"Watch your backs, NTs! *We're comin' for ya!"
"We are the Auts. You will be assimilated. We will add your lack of neurological distinctiveness to our own. Resistance is futile. Do you think Star Trek or Star Wars is better?"
*Though never diagnosed, I identify as being "OTS" ("On The Spectrum") or damn close to it, which as we all know is extremely rare among male science fiction fans...
I run a Monday Night Virtual Job Club for jobseekers/jobholders OTS.
AD: “We did it with a small adjustment in how we treat each other. A dash of Liberty and a smidgeon of Dignity proved to be enough to light a wildfire that is still raging its way across the world.”
Well said. Though I’d add competitive-cooperative politics aimed at optimizing the playing fields… and the exceptional notion that a person should be treated as he/she earned, not as their father earned. (The best line in GETTYSBURG.)
“They never hated or even disliked their opponents, they were just playing a game with them.”
I offer this as a deep explanation for planet Pandora in Avatar, in VIVID TOMORROWS: Science Fiction and Hollywood – http://www.davidbrin.com/vividtomorrows.html
Speaking of HUNT FOR RED OCTOBER… does anyone recall the name of the Political Officer who Ramius kills?
KH an attractor state is a condition that the system seems often or always to drift toward, even if you don’t yet know why. The Feudal Attractor state is absolutely undeniable and it is spectacularly dumb that it’s not widely discussed. Perhaps because it is SO common that we are like fish not noticing water. WHY it nearly always happens is arguable. I claim it’s male reproductive strategies.
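For those who want the term made concrete, here's a minimal numerical sketch (a toy double-well potential of my own invention, NOT a model of real history): under noisy drift, trajectories started all over the landscape mostly wind up in the deeper basin. That drift-toward-it behavior is all "attractor" means here.

```python
import random

# Toy 'attractor' demo (an assumed potential, not a model of societies):
# V(x) = (x^2 - 1)^2 + 0.5*x has a deep well near x = -1 and a shallow
# well near x = +1. Noise can hop the small barrier but rarely the big one.
def grad(x):
    return 4 * x * (x * x - 1) + 0.5                  # dV/dx

rng = random.Random(0)
trials, in_deep = 200, 0
for _ in range(trials):
    x = rng.uniform(-2, 2)                            # random start
    for _ in range(5000):
        x += -0.01 * grad(x) + rng.gauss(0, 0.06)     # drift + noise
    in_deep += (x < 0)                                # deep basin is x < 0
print(f"{100 * in_deep / trials:.0f}% of runs ended in the deep basin")
```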
“I would be VERY grateful if anyone can point me toward any formal research in this area…” Me too.
I assumed Casablanca Rick had been a rum runner before being a gun runner, then bar owner.
gregory byshenk:
That may be the case for the show's writers, but Matt Groening (who grew up in Portland) has gone on record that "Springfield was named after Springfield, Oregon."
It's been over a decade since I closely followed the show, and I didn't know that particular bit of information. However, back in my day, the show made a point of never definitively ascribing a particular state to the town. (And unless there's a Shelbyville, Oregon near that Springfield, I continue to vote for Illinois.)
On the more general issue: I tend to think that it is a good thing when stories (film, book, or whatever) don't try to explain everything. I recall reading something from Tolkien about the richness of a story/world that only hints at things that are not stated. In Casablanca, the reason that Rick cannot return to the US is not important to the story, so there is no need to give a reason.
Yes, I don't mind when rich background is merely hinted at. If the writer can make a story out of part of it, that's its own good thing, but it's not necessary. If the writer has details in mind that make a good sequel, that's fine, but there's no need to elaborate on unnecessary trivia. Verisimilitude is enough.
Rick's reason is actually important in that the other characters would certainly question it, which they did.
One of my own peeves is when writers feel the need to give some explanation - and it is a bad or stupid one. Unless it matters to the story (and then one needs a good explanation), it is far better just not to explain.
Tangentially, one of my pet peeves is when a thing which doesn't need further elaboration hijacks the plot. The throwaway reference to "the clone wars" was fine as a stand-alone. It didn't require making clones into an integral part of a trilogy.
Dr Brin:
Speaking of HUNT FOR RED OCTOBER… does anyone recall the name of the Political Officer who Ramius kills?
I didn't remember, but it's easy to find on IMDB. You mean Ivan Putin, of course.
Two things I didn't know, or at least didn't register until I was looking at the cast:
+ The American protagonist is Jack Ryan. I associate that name with Harrison Ford, so I had no idea this movie was based on one of those novels.
+ The sub's doctor was played by Tim Curry.
Dr Brin:
I assumed Casablanca Rick had been a rum runner before being a gun runner, then bar owner.
Maybe, but there has to be more to it. I don't think that explains "can't return to America", especially a decade after Prohibition was over. If the movie's ending is any clue, he might have had to shoot a corrupt G-man in self defense while plying that trade.
The one minute riff "Putin's Demise" is among the best in the HFRO sound track!
https://www.google.com/search?q=putin%27s+demise+music
Larry Hart: The sub's doctor was played by Tim Curry
In ROCKY HORROR, he played Dr. Frank-N-Furter, a role that nearly ruined my reading of Jules Verne's "Le Château des Carpathes". It's every English Canadian's duty of national service to read at least one French novel in French during their lifetime.
Tangentially, one of my pet peeves is when a thing which doesn't need further elaboration hijacks the plot. The throwaway reference to "the clone wars" was fine as a stand-alone. It didn't require making clones into an integral part of a trilogy.
True, but I did manage to salvage something from that.
scidata:
In ROCKY HORROR, he played Dr. Frank-N-Furter, a role that nearly ruined my reading of Jules Verne's "Le Chateâu des Carpathes".
"Il n'y a pas de telephones dans les chateaux!"
(I never learned the French word for a**h**e)
DP in previous comments:
In 1894 the London Times predicted that “In 50 years, every street in London will be buried under nine feet of manure.”
Could that be used to alleviate the phosphorus shortage?
David,
I’d add competitive-cooperative politics aimed at optimizing the playing fields...
I'd argue that is an effect from earlier causes, but not one so rigidly required that it doesn't count as extra seasoning for the recipe. 8)
As for the Gettysburg line, I see that as part of the Dignity ingredient. Another big part of it is tolerating failure as long as the person doing it demonstrates an ability to learn from it... and then tries again to succeed at something. Failing must be tolerated to some degree.
Gettysburg the movie is a wonderful thing ... while its prequel stunk to high heaven. And yet, watching it again and some released out-takes, I have come to realize how desperately Ted Turner wanted to assert "Lee ALMOST won! It's the fault of others. Blame it on the 'ground' favoring the Union! It was SO close!"
Naw. Lee was screwed the day Meade got the Army of the Potomac moving north in good order. There were zero paths to victory for him in the open - not forested - expanses of Pennsylvania. It was the Union's turn to say "let's see how good you are at maneuvering a lumbering army in an offensive!"
Every time Lee tried that, he was humiliated.
Keith,
My son was diagnosed ASD when he was about to enter kindergarten. I'm probably supposed to use the broader term 'neuro-divergent' nowadays because of all the missteps taken by neuro-typicals, but I tend to shrug all that off and point out that learning how to adjust our world for them is worthy of my attention.
I've also learned to respect self-diagnoses, but those who place themselves on the mild end of things also get a shrug from me because I suspect there is no sharp line between NT's and ND's. I tend to say "Welcome to adulthood. Now get busy adapting like all the rest of us do." but I do it with a grin and a helping hand if I can be useful.
———
I don't get all that upset at groups who eliminate all competitors… until they try to arrange the rules of the game so no new competitors can enter. My own employer was taken down by a new generation of sub-prime lenders after market forces almost caused an extinction event, but I can't really complain about having to find a new job and keep believing what I believe about markets.
Eliminating all your competitors is actually kinda dumb. Suppose you do that. Now what? If your corporate culture is that competitive, they WILL think of something to do next… and it's unlikely to be healthy for your corporation even if the investors like it.
I'm not a fan of utilities with closed markets. When they have a legal lock on their customers, all sorts of shenanigans ensue. I don't care what the service is, I'd rather intervene and break up utilities splitting them in such a way that infrastructure owners wind up with shared rights. Messy, messy… and we are far better off fostering niche players who offer competitive substitutions.
———
IMHO, there should be substantial research in behavioral economics, cognitive science, neuro-science, social psychology, sociology, etc to attempt to answer questions like these…
Heh. No. That's actually not a good assumption once you start hanging out in a community with many smart people who love to engage in competitive debate. Sometimes they ARE at the cutting edge of knowledge. They might not even know it.
Look through the lit for Fermi Paradox explanations and you'll find our host along with his formal science writing. Take a good look at the other writers in that niche. Form a connection tree relating who cites whom and another for how deep they go on the topic. It won't take long because most words written about it are not peer-reviewed content. Most articles cover the same ground and stay shallow. The deep ones you can count with the fingers of one hand. The ONLY place I've seen the Feudal Attractor discussed in any detail is here relative to the Fermi Paradox.
I'm not convinced male reproductive strategy alone is enough. I think our sucky options for food and sanitation after the ice melted have something to do with how we wound up in the attractor. I think the Y-chromosome bottleneck speaks loudly about how much it sucked to be the second or third son born relative to the first. Still… I love the creative process that emerges from constructive debate. As long as we have lots of ideas jostling to provide explanations, we won't suffer from intellectual inbreeding. I'll happily take up an opposing view so others get a chance to plug it full of bullets. 8)
So, don't assume you aren't at the cutting edge of what's known. Sometimes it happens. Dance along the cliff's edge with us. 8)
Alfred Differ:
I think the Y-chromosome bottleneck speaks loudly about how much it sucked to be the second or third son born relative to the first.
I figure I'm missing something obvious, but wouldn't second and third sons have the same Y chromosomes as the first son? So long term, what would it matter which of them did more reproducing than the others?
Larry,
The bottleneck event is about the male/female ratio for the number of humans born who never manage to reproduce. That ratio is typically around 3, but near the peak of the bottleneck (8K to 4K years before present) it was closer to 17.
Sure... it was the same chromosome for brothers, but there is evidence that a typical first brother was quite a bit more likely to reproduce than the second and third. This was bad enough that a bit of evolution happened (one of the most recent changes we can pin directly to selection) that led to women being slightly more likely to have sons instead of daughters for their first born. The odds flip the other way for later births.
This is a very active research area, so thoughts are changing quickly as evidence shows up and crushes some of them. Many believe wars are enough to explain the ratio, but I don't think they explain the selection favoring first sons. Anyway, chances are good we will all live long enough to see more ideas emerge and then get plugged with arrows.
Alfred Differ:
a bit of evolution happened ... that led to women being slightly more likely to have sons instead of daughters for their first born. The odds flip the other way for later births.
Pure speculation here, but I wonder if the distinction is that women are more likely to have sons as first born or more likely to have sons when the woman herself is younger.
I mention this because my brother, my first cousin, and I all had daughters as first children, and we all reproduced later in life than is typical for first-timers.
Dr Brin,
"Every time Lee tried that, he was humiliated"
Lee attacked incessantly while outnumbered during the Seven Days Battles in 1862 and was widely celebrated for having driven the Army of the Potomac out of Virginia. He succeeded in his objective, though Pyrrhically.
One look at the results of Malvern Hill should have taught Lee not to frontally attack US artillery, supported by infantry, well emplaced on high ground. The fact that he ever did so again should diminish whatever luster remains of his rep (still quite shiny among the descendants of the men he sent to die in vain).
Pappenheimer
Apparently folk heroes who lose dramatically can be forgiven much. Looking at you, Bonnie Prince Charlie.
Pappenheimer
Larry,
I'm pretty sure you'd be proven right if we had the historical data. I don't know exactly how they sifted it out, but the suspicion is that the ratio for first-borns changed a bit. That makes some sense because later-born sons were at a distinct disadvantage in securing a family of their own when their parents (especially the father) died before they matured.
The explanatory narrative that competes with "men killing men in wars" is "shorter parental lifespans (poor food supply) undermined later-born sons' economic security". Since males with no finances relied more heavily on cheating secure males, they were also at a higher risk of being killed.
------
The bias toward Y chromosomes in first borns is very small. Instead of a mix of 100/100 boys to girls it is something like 105/100 early and then drops to 98/100 late.
There is probably an argument to be made in favor of changes that led to girls becoming fertile earlier too. Poor food nutrition over 100 generations might just do that.
Given that the odds of mothers dying in childbirth were so high, I would expect to see later sons have entirely different Y chromosomes. Same grandmother, but different mother.
I don't doubt that at all. I suspect that fact is older than agriculture, though, and probably responsible for us mostly being serial monogamists.
The only thing the bottleneck data can describe, though, is a collapse of Y chromosome diversity. Which woman had the child would show up on the mtDNA where no bottleneck is evident. Everything else is an explanatory construct open to scientific debate.
What I want to see is the kind of stuff Sapolsky described in detail folded into models that try to predict how humans would have coped during the bottleneck era. It is far too tempting to apply how we would currently react to such an event, so I'd love to see the research ponder what might happen with slight variations in brain chemistry. They just might trip into a narrative that shows feudalism was an adaptive strategy. With that we'd be well down the road to describing the attractor chemically.
Not sure about elsewhere, but in a lot of Dark Age Europe (tm) there was no primogeniture. One of the problems of the early Frankish holdings was the periodic division of the kingdom between the surviving sons (not sure how this correlates further down the social chain). One can easily see how instituting primogeniture in law can cut down on civil war and even regicide, though, and how social evolution would reinforce primogeniture.
Of course, the bottleneck you're referring to mostly predates the historic record. I checked an online abstract and (if I'm reading this right - IIRTR) it looks like Africa had the least affected population. ¿Qué pasó?
Pappenheimer
I doubt the Y chromosome bottleneck was older brothers crushing younger ones. Rather, it was all the brothers ganging up to steal women from other crushed males.
matthew:
Given that the odds of mothers dying in childbirth were so high, I would expect to see later sons have entirely different Y chromosomes. Same grandmother, but different mother.
Y-chromosomes only come from the father, right? The mother's DNA doesn't enter into it.
I'm still unclear on what the "Y-chromosome bottleneck" was bottlenecking. If younger brothers never reproduce but the older brother does, doesn't that pass along the same Y chromosome that the younger brothers would have?
When you first started talking about the bottleneck, I thought the issue was something like the king spreads his genetics far and wide via his harem while other men are too poverty-stricken to have surviving children. I could see that as being a method for limiting the variations on Y-chromosomes remaining viable. But I don't see how a brother having advantages over his other brothers causes a bottleneck.
Or am I totally misunderstanding the term?
The 'bottleneck' is a misnomer that arises from strictly adhering to male or female lineages. The further back someone traces their ancestry (either by father or by mother), the greater the number of people who are descended from that same ancestor. Go far enough back, and everyone living is descended from that one person. This is how the concept of mitochondrial Eve and Y-chromosome Adam arose.
It does not mean there was only one breeding man or woman at a particular time.
Indeed, if one were to mix lineages you'd likely find other ancestors from that time.
It's just that everyone living has 'Adam' and 'Eve' as one of their thousands of forebears.
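A quick toy makes the point (my sketch, with an assumed constant-size population): trace only father-to-son lines and they always merge into a single man eventually, no population crash required.

```python
import random

# Toy patrilineal coalescence (hypothetical constant-size population):
# each generation, every man draws his father at random from the prior
# generation. Strict father-to-son lines always merge into one founder.
rng = random.Random(42)
N = 200                                   # assumed constant male population
lineages = list(range(N))                 # founding man each line traces to
gens = 0
while len(set(lineages)) > 1:
    lineages = [lineages[rng.randrange(N)] for _ in range(N)]
    gens += 1
print(f"all lines coalesced to a single 'Adam' after {gens} generations")
```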
Tony, the Y-Adam and mito-Eve are totally different, going way, way back a million or so years.
The Y bottleneck shows that just around 15,000 y.a. - roughly at the time agriculture kicked in hard - only a fraction of males passed on genes. My theory is a combination of kings and beer. Unlike chiefs, who must fret that they've pushed the lower males too hard, a king with his 'army' of 20 soldiers could order anyone he disliked killed... and early on, beer likely made for unpleasant behavior, till resistance built up.
The narrative doesn't have elder brothers crushing younger ones. Some of that likely DID happen, but no more than is usual when brothers don't get along.
The story has elder brothers maturing while their father is still alive and able to help secure their futures. Involved fathers matter in the likelihood their sons will successfully reproduce. Involved fathers matter much less in the reproductive success of their daughters, but DO help with their financial success.
The story goes that younger brothers are more likely to be left to their own resources when their fathers die young... which happened a lot back then.
My suspicion is second and third sons would have had to gang up, but that still leaves them at the mercy of Lady Luck more than their elder brother.
Larry,
The bottleneck involves Y chromosome diversity. Population numbers were actually climbing. The food might have sucked in terms of nutrition, but there was quite a bit more of it. More babies survived as a result, but they didn't live as long as their ancestors had.
There are a variety of narratives that rely on conflict. One example can be seen here.
https://indo-european.eu/2018/05/post-neolithic-y-chromosome-bottleneck-explained-by-cultural-hitchhiking-and-competition-between-patrilineal-clans/
Here is an earlier one that is less inclined to argue for a specific explanation. They say it must have been due to 'cultural changes' which is pretty broad.
https://www.centogene.com/resources/scientific-publications/a-recent-bottleneck-of-y-chromosome-diversity
Look in the middle of the second paper and you'll see a chart that shows the problem. The way it is drawn makes it look like the Y-chromosome explosively diversified about 6-7K years ago. There is no justification for that, though. A narrative that works better is that earlier Y diversity simply didn't survive the era. Whole lines vanished from the human population leaving the illusion that diversity exploded a few thousand years ago... after whatever was causing the collapse stopped doing it.
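If you want to see how fast skewed male reproduction prunes lineages, here's a crude toy (my assumptions and numbers, not the cited paper's actual model): fifty founding Y lineages under neutral drift versus drift where a winning minority of men takes most of the reproduction.

```python
import random

# Crude toy of the 'whole lines vanished' narrative (assumed numbers, not
# the paper's model). Fifty founding Y lineages; each generation, sons draw
# fathers either from all men (neutral) or, usually, from a randomly chosen
# winning tenth of men (clan-skewed). Count lineages that survive.
def surviving(skew, rng, N=500, founders=50, gens=100):
    men = [i % founders for i in range(N)]         # lineage tag per man
    for _ in range(gens):
        rng.shuffle(men)
        winners = men[:N // 10]                    # this generation's winners
        pool = winners if rng.random() < skew else men
        men = [rng.choice(pool) for _ in range(N)]
    return len(set(men))

rng = random.Random(7)
print("neutral drift:", surviving(0.0, rng), "lineages left")
print("clan-skewed:  ", surviving(0.8, rng), "lineages left")
# mtDNA, drawn evenly from all mothers, would look like the neutral case.
```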
Y-Adam and MT-Eve only go back about a quarter million years last I checked, but with more genomes being collected all the time I'm sure the story will only get more interesting.
Also, the two of them are from different sides of Africa and many, many generations apart.
15000ya suggests you're talking about the euro-african populations.
Has any study identified bottlenecks in Australian or American populations of that time?
(It might have a bearing on the 'beer and king' theory)
Yup. The bottleneck was pretty much world wide, but with some variation. Strongest data is for Eurasia due to the size of the population. [Check out my second link.]
Not 15Kya. Closer to 7Kya. I've seen people quoting a range for the event between 8Kya and 4Kya with a peak somewhere in the middle. Basically right before recorded history began the bottleneck essentially peaked and then closed out.
Alfred Differ: right before recorded history...
You sir, are a born psychohistorian.
Someone find a really good reference for when and where the y chromosome bottleneck was... and how it could have also affected the Americas?
onward