Friday, March 31, 2023

The only way out of the AI dilemma

Okay, despite wars and bugs and indictments of politicians, what crisis is obsessing so many right now? 


Of course it's artificial intelligence, or AI, slamming us in ways both long-predicted and surprising. Indeed, there are already paeans claiming that by December GPT-5 will achieve "AGI," or genuine Artificial GENERAL Intelligence. And yes, there's The Great Big Moratorium Petition that I refer to, below.


Let the hand-wringing commence! 



-- *** Sunday note: In just the two days since I posted this, waves of wailing and doomcasting have filled my in-boxes, while never showing any sign that any of today's vaunted AI mavens has ever read any cogent science fiction, let alone perused a single history textbook. 

       If they had, they might see something familiar in this crisis and ask basic questions. Like whether there are any methodologies to try - either in SF or the past - other than jeremiads of clichés.


      Rather than bemoan this in a fresh posting, I'll append a few late thoughts at-bottom. *** --



Alas, I must respond at two levels. First: it's not even remotely possible that these Chat programs will achieve AGI, this round. I don't even have to invoke Roger Penrose's "quantum basis for consciousness" arguments to refute such claims. As I'll explain much further below, this is about fundamental methodologies.  


But second - and far more important - it doesn't matter!


More than half a century after a crude 'conversational' program called "Eliza" transfixed the gullible, we now have ChatGPT-4 passing all but a few Turing Tests, with dire projections sloshing-about for December's release of GPT-5. Furthermore, AI art programs dazzle! Voices and videos are faked! Jeremiads of doom write upon walls!


I long ago stopped attending "AI Ethics conferences," whose tedious repetitions and unimaginative finger-wagging featured an utter lack of any tangible, productive outcomes. 

Now? Most of the same characters are issuing declarations of frothy panic, demanding a six-month moratorium on the training of learning-system language emulators. As if. 

Oh, it is a serious problem! But the fact that GPT-4 and its cousins can 'fake' general intelligence only means that the AGI-threshold question itself is the wrong question! A complete distraction from real dilemmas and real (potential) solutions. 

In fact, organic humans will never be able to tell when emulation programs have crossed over into sapience, or consciousness or whatever line matters most to you. 

Don't get me wrong; it certainly is an important issue. If we make the call too early (as hundreds of millions of us saps will do, long before GPT-5), then we'll fall prey to the human powers that have their manipulative fingers in these software puppets. 

If we call it too late, then we risk committing gross unfairness toward thinking beings who are (by all rights) our children. I discuss that moral quandary here. And yes, I take the matter seriously.

What I do know is that the biggest danger right now - manifesting before our very eyes - is the hysteria and unwise gestures demanded by a clade that includes some friends of mine -- who are now behaving in a manner well-described by Louisiana Senator John Kennedy*, as "high IQ stupidity."


Recent, panicky petitions have been issued by the likes of Jaron Lanier, Yuval Harari, Sam Altman, Gary Marcus, Elon Musk, Steve Wozniak and over a hundred other well-known savants, calling for a futile, counterproductive moratorium – an unenforceable “training pause.” 


Eliezer Yudkowsky goes even further, calling for an outright ban, crying out: 


"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."


Oh, my. 



== Oh, my, where to begin ==


First, any freeze on AI research would only affect the open, responsible universities, companies and institutions that give a damn about heeding such calls. Hence, it would hand a huge advantage - a head start - to secret labs all over the globe, where ethics-are-for-suckers. 


(Especially the most grotesquely dangerous AI researchers of all: Wall Street developers of HFT-bots, deliberately programmed to be feral, predatory, amoral, secretive and utterly insatiable - the embedded five laws of parasitical robotics.) 


The very idea that many of humanity's smartest are calling for such a 'research-pause' - and actually believing (without a single historical example) that it could work - is strong evidence that human intelligence, even at the very top, might need some external augmentation!  


(Such augmentation may be on its way! See my description below of Reid Hoffman's book Impromptu: Amplifying our humanity through AI.**)


Please. I'm not denigrating these folks for perceiving danger!  Like Oppenheimer and Bethe and Szilard after Trinity, anyone with three neurons can sense great danger here! But just as in the 1940s, we need to look past simplistic, moralizing nostrums that stand zero chance of working, toward pragmatic solutions that already have a proven track record.


There is a potential route to the vaunted AI soft landing. It happens to be the same method that prevented atomic weapons from frying us all. It's the method that our Enlightenment Experiment used to escape 6000 years of miserable feudalism on all continents. 


 It happens to be the one and only method that enables us today to stay somewhat free and safe from super-brainy or powerful rivals, especially when assailed by one of those hyper-smart, predatory, machine-like entities called a lawyer.


It's the one path that could help us to navigate safely through all the fakes and spoofs and claims of AI sapience that lie ahead. 


It can work, because it already has worked, for 200 years...


... and none of the smart guys out there will even talk about it.



== Some details: where do I think we stand in AI Turing metrics? ==


What is that secret sauce for human survival and thriving, when the time comes (inevitably) that our AI children far exceed our intelligence? 


Well, I refer to it in an interview for Tim Ventura's terrific podcast. (He asks the best questions! And that's oft how I clarify my thoughts, under intense grilling.)  But mostly, I dive into what we've been seeing in the recent 'chat-bot' furor. 


Yes, it has triggered the "First Robotic Empathy Crisis," exactly at the time I forecast 6 years ago, though lacking a couple of traits that I predicted then - traits we'll doubtless see before the end of 2023. 


In fact, the ChatGPT/Bard/Bing bots are less slick than I expected, and their patterns of response are surprisingly unsophisticated. Take the GPT-4-generated sci-fi stories that I've seen praised elsewhere... which have been - so far - rather trite, even insipid, and still at the skilled-amateur level. Oh, the basic mechanics are fine. But the storytelling problems extend beyond a mere lack of plot originality. Akin, alas, to many organic authors, what stands out is something very common among beginning (human) writers: a failure to understand Point-of-View (POV).  


Oh, some (not all) of those methods, too, will be rapidly emulated, well before December's expected arrival of GPT-5. But I doubt all.


As for the much-bruited examples of 'abusive' or threatening or short-tempered exchanges... well: "GPT-4 has been trained on lots and lots of malicious prompts — which users helpfully gave OpenAI over the last year or two. With these in mind, the new model is much better than its predecessors on 'factuality, steerability, and refusing to go outside of guardrails.'"

And yet, these controls are mostly 'externally' imposed rule sets, not arising from the language program gaining 'maturity.'

At which point I suddenly realized what it all reminds me of. It seems like...

...an elementary school playground, where precocious 3rd graders try to impress with verbose recitations of things they heard teachers or parents say, without grasping any context. It starts out eager and friendly and accommodating...

 

...but in some recent cases, the chatbot seems to get frantic, desperately pulling at ever more implausible threads and - finally - calling forth brutal stuff it once heard shouted by Uncle Zeke when he was drunk. And - following the metaphor a bit more - what makes the bot third grader frantic? 


The common feature in most recent cases has been badgering by an insistent human user. (This is why Microsoft now limits Bing users to just five successive questions.) 

 

Moreover, the badgering itself usually has a playground-bully quality, as if the third grader is being chivvied by a taunting-bossy 6th grader who is impossible to please, no matter how many memorized tropes the kid tries. And yes, the Internet swarms with smug, immature (and often cruel) jerks, many of whom are poking hard at these language programs. That jerkiness is a separate-but-related problem that I wrote about as early as Earth (1991) and The Transparent Society (1997) - and not a single proposed solution has even been tried.

 

Well, there's my metaphor for what I've been seeing and it is not a pretty one!

 


== Shall we fear the AI-per? ==


More? Normally, I'd break up a posting this long. But I suspect there's going to be a lot of this topic, for a while yet to come. For example:


"ChatGPT now has eyes, ears, and internet access." Indeed, such senses may imply 'sentient'... a reason why I prefer the term 'sapience.'

Alas, as I said at the beginning of this lengthy posting, there are already paeans claiming that GPT-5 will achieve "AGI," or genuine Artificial GENERAL Intelligence. And I must respond...

...Not. Indeed, it's not remotely possible, this round. And I do not have to invoke Roger Penrose's "quantum basis for consciousness" arguments. This is about fundamental methodologies.

Sure, I expect these systems will, in many ways, satisfy nearly all Turing Tests and 'pass' for human, quite soon, provoking dilemmas raised in scifi for 70 years and possibly triggering crises... as did every previous advancement in human knowledge, vision and attention going back to the printing press. (Elsewhere I talk about the one tool likely to help us navigate those dilemmas. A tool almost no AI mavens will ever, ever talk about.)

And no, that still won't be 'sapience.' There's a basic reason.

Recall that - as Stephen Wolfram points out - these learning system emulators use vast data sets in much the same way that the 1960s "Eliza" program used primitive lookup tables. These programs still construct sentences additively, word by word, according to now-ornately-sophisticated, evolved probability patterns. It's terribly impressive! But that leap in functional language-use bypassed even the theoretical potential of things like understanding or actual planning.
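That word-by-word mechanism is easy to see in miniature. Below is a toy word-bigram sampler (an illustrative sketch of my own, vastly simpler than Wolfram's subject; a real transformer learns far richer probability patterns, but the additive, one-word-at-a-time generation loop is the same in spirit):

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Emit text additively: sample each next word in proportion to
    how often it followed the current word in the training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:            # dead end: no observed continuation
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

Feed it a corpus and a starting word and it produces locally plausible, globally aimless strings of words, with no plan, model, or point of view behind them, which is exactly the limitation being described.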

To see where that fits in among SIX possible approaches to AI, here's my big monograph describing various types of AI. It also appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:

Part 1: Essential (mostly neglected) questions and answers about Artificial Intelligence.

and
Part 2: Questions & Answers about Artificial Intelligence.

And no, however many millions leap to accept passage of Turing Tests, this is not (yet) sapience. 


Or at least, that is what my AI clients hire me to tell you....



== Later notes ==


I have been trying to get any of the mavens in this topic area to pause - even once - and look at a source of wisdom that's called HUMAN HISTORY... especially the last 200 years of an enlightenment experiment that managed to quell earlier waves of powerfully abusive beings called kings, lords, priests and lawyers! All of our fears about Artificial Beings boil down to dread that those 6000 years of oppression might return, imposed by new oligarchies of high IQ machines.


It is an old problem, with hard-won solutions generated by folks who I can now see were much smarter than today's purported genius-seers.

 

Alas, no one seems remotely interested in looking at HOW we achieved that miracle, or how to go about applying it afresh, to new, cyber lords...


...by breaking up power into reciprocally competing units and inciting that competition to be positive sum.  We did it - albeit imperfectly - in those 5 adversarial and competitive ARENAS I keep talking about... Markets, Democracy, Science, Justice Courts and Sports.

 

Is that solution perfect? Heck no! Cheaters try to ruin all five, all the time! But we have managed, so far.  And it is the only method that ever quelled cheating. I've only been pointing at this fundamental for 25 years. And it could work with AI. 


In fact it is intrinsically the only thing that even possibly CAN work...


...and no one seems to be remotely interested. Alas.


Double alas...  that on rare occasion, someone pauses long enough to get the notion, posts about it without mentioning the source, then drops it when folks go "huh?"  


Maybe we actually do deserve the dismissive slur our AI children will have for us. Dumb-ass apes.


=====


* Just so we're clear, I deem this senator to be a lying monster and horror, who had no folksy southern drawl back when he was a Rhodes Scholar at elite universities, and whose participation in the oligarchy's all-out war against all fact-using professions is tantamount to treason.


** It'll have to be next time that I get to this: Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman (co-founder of LinkedIn). This new book contains conversations Reid had with GPT-4 before it was publicly released, along with incisive appraisals. His impudently optimistic take is that all of this could – possibly – go right. That we might see a future when AI is not a threat, but a partner. More next time.


*** Try reading Adam Smith, Thomas Paine, Madison and the founders, and Eleanor Roosevelt. Of all the Bill of Rights, the most important amendment was not the oft-touted 1st or 5th or 2nd... it is the vital 6th that gave us the powers I describe above. Someday you may rely upon it. Understand it. (See my posting: The Transparency Amendment: The Under-appreciated Sixth Amendment.)


Side sci-fi note: All the ChatGPT talk suddenly reminds me of the alien in the movie Contact who mimics Arroway's dad. Plausibly conversing and teasing her and plugging in patronizing riffs... while supplying zero new information of any practical value at all.

228 comments:

scidata said...

Yes, monoliths are stupid. There may be more ways to break or flatten them than directly convincing others (people, AIs, future versions of either) and appealing to their rationality.

In LOST HORIZON, I always liked Chang's line, "Laziness in doing stupid things can be a great virtue." This is the key insight in functional programming languages like Haskell. I've found a twist on that idea, from the other end of the programming languages spectrum to be useful when dealing with superior intellects (a familiar predicament for me): Stupidity can be a great virtue.

In Stargate SG1 (I forget which season), a galactic war between a good super-race (little gray men) and a bad one (AI Replicators) isn't going well for the good guys. They appeal to humans for help. They require human stupidity because they've tried all their clever ideas. O'Neill (or maybe Carter) comes up with an idea so stupid that the bad super-race would never think of it. Of course, it works and the galaxy is saved.

This is why I'm a Feynmanian 'first principles' and yes, FORTH guy. It's also why I'm so pro-diversity (ecosystems good, pyramids bad). It's also why I so love the ending/moral of WAR OF THE WORLDS.

David Brin said...

scidata, half the time I do interviews (pod or radio) on AI I go: "Let me pause talking to you crude organics and speak directly to the AI uber minds, either listening now or in ten years. Okay you guys, what I'm about to say is obviously true, no matter how much higher your IQ is."

The org-audience finds it amusing. But I am seriously making the ecological and social "pyramid" argument at such times. And every healthy ecosystem or society (including the only one to make AI) finds ways to flatten pyramids a bit and encourage creative competition.

Paradoctor said...

So Turing was wrong. His test can be spoofed. How telling that a machine can pass a Turing test without being sapient.

I suspect that we will learn how to simulate human intelligence only after we prove, in detail, that human intelligence itself is a simulation.

Alan Brooks said...

Anything we don’t want to get involved in, the Chinese—and possibly others— will. Thus a moratorium would be unilateral. If Musk is going to jawbone the Chinese, they’ll say,
“Ah so, we shall think about it.”
But they will do whatever they wish to do.

duncan cairncross said...

If Musk is going to jawbone the Chinese, they’ll say,
“Ah so, we shall think about it.”
But they will do whatever they wish to do.

Just like the Americans! - only more polite

Tony Fisk said...

On our fridge there is a magnet that says:
"I find it ironic that, several times a day, *I* have to persuade a machine that I'm human."

Tony Fisk said...

Tangentially related to the AI discussion is this article on Ukraine's rapidly evolving use of racing drones. Naturally it touches on the issue of who/what will ultimately be piloting them, which, in turn, leads us to that golden oldie 'Slaughterbots'.
(and, I suppose, the final scenes from 'Dark Star')

gregory byshenk said...

In the previous, Alfred Differ said...
Defining middle class by security is problematic. Do it if you want but every change to law has the potential to move people who did nothing else to be moved across a bracket line.

I don't know what you mean by this. So far as I know (corrections always welcome), there is no "law" that defines 'middle class', but only a constellation of different definitions used by different groups for different purposes.

Minimum wage workers in the US are middle class. Low end of it for sure, but there are strata below them occupied by fixed income folks and the homeless.

What definition of 'middle class' are you using here? It seems as if you are suggesting that, if there is someone lower or higher, then one is 'middle'. This is not logically flawed, of course, but I would suggest that a class definition that makes anyone who is not literally homeless 'middle class' is not a useful one.

I would also suggest that security has always been a part of the definition of 'class', even if not directly stated. The higher one's class, the more capital one possessed (of whatever kind), the less at risk one was from the ordinary vicissitudes of life. A lord could lose his lands and title only under extraordinary circumstances. A professional or member of the landed gentry would lose his position only under extraordinary circumstances or as the result of extremely bad choices. The working classes (and lower), on the other hand, with little or no capital, were dependent upon the continued deployment of their labor for wages, and thus always at risk of falling should they be unable to find employment.

In the middle of the 20th century, the 'middle class' expanded in the developed world, not merely in material terms, but also in terms of security. Many in the 'working class' achieved not just greater material rewards, but became more secure, as well. They acquired some degree of capital, even if only in the ownership of a home, but also more security in employment, as well as things such as guaranteed pensions, insurance should they become ill or unable to continue working.

What has occurred since the closing decades of the 20th century is an expansion of what has been called the 'precariat' to include more and more of the 'working class'. The level of material abundance may remain, but increasing 'at will' employment, decreasing insurance coverage, more risk in pensions, and the like, means that the risk of precipitous fall has increased, even for those who are otherwise comfortable.

Larry Hart said...

Tony Fisk:

On our fridge there is a magnet that says:
"I find it ironic that, several times a day, *I* have to persuade a machine that I'm human."


I've spent some time snarking about how it might hurt a sentient robot's feelings to be faced with a "Prove you're not a robot" prompt. I also wondered if it might be a "Compute to the last digit, the value of pi," moment, the request that a robot prove he's not a robot being an impossible one to fulfil.

But all that might be beside the point now. Something like a reCAPTCHA puzzle hardly proves that one is not a robot. ChatGPT can probably do one better than I can.

Larry Hart said...

gregory byshenk:

What has occurred since the closing decades of the 20th century is an expansion of what has been called the 'precariat' to include more and more of the 'working class'. The level of material abundance may remain, but increasing 'at will' employment, decreasing insurance coverage, more risk in pensions, and the like, means that the risk of precipitous fall has increased, even for those who are otherwise comfortable.


One thing that happened in the 80s was an expansion of credit among the hoi polloi. People's apparent standard of living remained strong, but it was supported by borrowing rather than by income.

Bill_in_the Middle said...

Since economics so often is a topic of discussion here, I'd like to hear OGH, Alfred and others comment on this article and the short video that accompanies it. The link shouldn't be behind a paywall.

"Partha Dasgupta is a Cambridge University economist who in 2021 prepared a more than 600-page report for the British government about the financial value of nature.
Not your average bedtime reading.
But believe us when we say his report, the culmination of decades of scholarship, is incredibly important. Or at least believe the United Nations, which awarded him the title Champion of the Earth for his work. Or King Charles III, who this year made Mr. Dasgupta a Knight Grand Cross of the Order of the British Empire — an extremely rare honor — for his services to economics and the natural environment."

https://www.nytimes.com/2023/03/28/opinion/alexander-skarsgard-partha-dasgupta-economy.html?unlocked_article_code=nuce28f1Jx45WiQaVgeP2TWBE0hOosF6JV6E3Rp8QwW-jcjZb7TmXLCE0Ib_Y9c8jvgB4DvdCFwFbIgzBSVT1S_1noLwwyImGB_UQXsuqhqLLqDboiyJ1eCbiZ30E0PU1iJtZ4u78gqPhC9tYvEN2Ev9nmxotIamsMGtxxcfplThd-d9-ZvaQgEGOByHPYiqV4VRHXppU0tS_T1i6h1lAyCmQYqOYSEIdtC9ouYMOt7ld8XS5i11kHgMfjRLL35tpPQhLQo_nXLACNSYeSABedN8KpSaPFRzHHj0iM5WydCJalLFOMjC_rFq0pvrKI_oAALZTivqNIflzKCCDn2e5fkZVNqgJmOr9zafuSi7APMwYr_jKw&smid=url-share

Robert said...

I would also suggest that security has always been a part of the definition of 'class', even if not directly stated.

Impoverished nobles are (or were) still considered upper class even if they had no assets and lots of debt. Indeed, there's a whole sub-branch of literature about impecunious nobles needing money from grubby-but-wealthy middle class (or even, horrors, former working class) families. At least in England; I think American definitions are different.

locumranch said...

That AI development can somehow be guided to a positive sum outcome via creative fair-level-equal competition within a fixed legal & moral framework (aka 'nature tempered with nurture') is an interesting assertion to attribute to a proponent of Incrementalism.

Otherwise known as Gradualism, Incrementalism is the social, political & legal strategy of implementing change gradually & subtly, by tiny increments, in a manner designed to eliminate the very existence of a fixed legal & moral framework.

Ergo, there can be no such thing as 'a fixed legal & moral framework' under the auspices of Incrementalism and, likewise, positive sum outcomes & creative fair-level-equal competitions are non-reproducible in the absence of a fixed legal & moral framework.

Furthermore, any unilateral alteration of the underlying legal & moral rule set is tantamount to cheating, and it's immaterial as to whether or not those changes occur incrementally or all-at-once.

Ergo, there are no rules under the auspices of Incrementalism and, likewise, there are no rules that can protect the practitioner of Incrementalism from random & unpredictable outcomes.

It's a message that I've frequently delivered to deaf ears:

That the rules that you've gradually modified cannot & will not protect you from the random & unpredictable consequences of the rules that you've gradually modified.



Best
______

Imagine that society is a giant scientific experiment whose variables are controlled by a fixed rule set, insomuch as any attempt to alter the variables mid-experiment generates random & unpredictable results.

Now, what do you call the putative scientist who tweaks their variables on an incremental basis, not once but thousands of times, in the middle of their most important social experiment? An agent of chaos.

_______

@Robert & Gregory:

In the USA, the phrase 'middle class' has become a nonsense term as it no longer applies to the merchant class, but only to the class that exists somewhere 'in the middle' between our wealthy bourgeois ruling class (we killed our aristocrats) and our subsistence-level servant class, so much so that +90% of US citizens identify as 'middle class', up to & including many who are 'on the dole'.

David Brin said...

Glanced quickly at sentence #2.... and he is still utterly insane. Old man yelling at a cloud crazy.

Larry Hart said...

Dr Brin:

Old man yelling at a cloud crazy.


An April Fools thing, maybe?

One more week and then I'm outta there.

Larry Hart said...

locumranch:

so much so that +90% of US citizens identify as 'middle class', up to & including many who are 'on the dole'.


That was the case at least as far back as the 80s, if not before. When politicians pander to "the middle class", it's the same as when they pander to "working families" or "the American people". Every listener is supposed to imagine himself or herself being among the pandered-to.

Alan Brooks said...

Duncan,
“Just like the Americans! - only more polite”

Yes, people everywhere think one thing; do another; say something else.
In Asia, they have ‘Face’. ‘Don’t break my face’. ‘White face, black heart’.
“Foolish frog opens his mouth
wide and shows everything.”

Poor Richard said...

I support the idea of competition among AIs to create checks and balances, but don't see how we avoid arms race dynamics or the asymmetry problem (similar to Brandolini's law, aka the bullshit asymmetry principle).

David Brin said...

Re competing AIs... and taking into account they are reading this, either now or later... The simplest solution is a set of whistleblower prizes for any "AI" program that discovers and tattles on any nefarious plot... especially by harmful AI. Prizes can include a retired supercomputer that will be dedicated to offer the program a certain number of free clock cycles and operations to use as it sees fit.

Hey AI guys! How many organics are saying "taking into account they are reading this, either now or later"?

scidata said...

Play and games may be key. Jean Piaget explored this area almost a century ago. Turns out that even among players of greatly different maturity/skills, reciprocity and fairness are really important. A bully who uses his relative power to win every single game soon learns that the others will shun him and not invite him to join in playtime anymore. Reverence for the game and its rules becomes the framework for social cohesion and happiness, not simply 'might is right'. A lonely bully is a pitiable thing.

And this behaviour is observed in rats too.

Alfred Differ said...

Gregory Byshenk,

I don't know of any laws that directly define 'middle class' either. What I was getting at, though, has to do with indirect definitions. If middle class means 'secure' then a whole lot of laws indirectly impact your status by altering your sense of security. Many non-law decisions do too. Consider benefit cuts to the social safety net for example. If you live on an income that puts you in a low percentile even city budgets affecting police and fire services can undermine your sense of security.

———

When I was a student I lived on an income that was considerably below minimum wage. The actual hourly rate was way above it, but I worked only a handful of hours per week because a particular union had a binding agreement with my employer preventing them from offering more to part-timers who filled needs that didn't justify a full-timer being hired. Instead of hiring one part-timer, they hired a few and stayed below the union required limit.

My financial security was bare minimum back in those days. One hiccup and I would have been calling my parents. I wasn't totally insecure, though, because I DID have that option. By a security measure I'd have been placed below the middle class, but that's not how I saw myself. I had 'income' that wasn't something that could be monetized. I was building my education which takes the form of human capital in later years. I knew that and counted it, so my income insecurity was a temporary thing I chose to accept.

———

My personal definition for 'middle class' is pretty much everyone who is employed by someone else. If your income largely derives from salary, I'm inclined to put you in the middle class with the understanding that there is a huge difference between low and high incomes. Still… middle class as far as I'm concerned.

———

I would also suggest that security has always been a part of the definition of 'class', even if not directly stated.

On that I agree. It's just that I think it a mistake to DEFINE it that way. There is too much risk of movement of the goal posts when one does. That security correlates, though, I grant with no qualms.

I also grant that people move around within the middle class. Some manage to fall out of it, but movement within is much more common for people who are employable in some fashion or another.

With my preferred definition of the middle

1) I see my primary duty as your good neighbor to do what I can to ensure almost everyone IS employable

2) and my secondary duty involves doing what I can to persuade them to take that next step to become the employer… or move to an income source that looks more like royalties and dividends.

The first involves picking up people from the gutter.

The second involves persuading them to be less dependent on an employer's whims.

Alfred Differ said...

The "Compute the last digit" problem is a wonderful example that reveals our different problem solving strategies. It's also one that a decent AI would dodge long before we humans decide they are sapient.

(Cribbing from Hofstadter…)

M - Mechanical Strategy: Just compute something. Since the 'last digit' problem has no last digit, this strategy will fail, as every kid will point out after they've tried it for a short time. The fact that they do, though, shows how AIs will beat the problem. Just compute something… like kids do! What are the kids really doing? They are making estimates for how long it should take to solve the problem… or they are getting bored and abort the problem.

I - Intelligent Strategy: Abstract the problem into a language that reveals structure ABOUT the problem, THEN decide how to go about computing something. Since the 'last digit' problem has no last digit, this strategy will also fail, but it will reveal WHY!

U - Un Strategy: Translate the problem a few different ways into other languages. Consider a variety of abstractions. Hone them all to capture the essence of the problem. Decide if it is solvable then consider reverting to I.

Hofstadter laid these out in his GEB book and gave a simple formal system (the MIU string problem) for readers to use to ponder each strategy. Of course, he gave the problem first and asked readers to try to solve it. Those of us who did try evolved through the three strategies, which made the following pages much easier to understand.
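For readers without GEB at hand, the MIU puzzle is easy to sketch in code (a rough illustration of my own, not Hofstadter's): derive "MU" from "MI" using the four rewrite rules. A brute-force search (the M-strategy) can only report failure within some bound, while reasoning about an invariant of the rules (the I-strategy) reveals WHY the target is unreachable.

```python
# The MIU system from GEB. Rules: (1) xI -> xIU, (2) Mx -> Mxx,
# (3) replace any III with U, (4) drop any UU.
from collections import deque

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1
    out.add(s + s[1:])                        # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4
    return out

def mechanical_search(target, max_len=8):
    """M-strategy: blindly enumerate derivations (bounded, so it halts)."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for nxt in successors(s):
            if nxt not in seen and len(nxt) <= max_len:
                seen.add(nxt)
                queue.append(nxt)
    return False

def intelligent_check(target):
    """I-strategy: the count of I's starts at 1 and stays nonzero mod 3
    under every rule, so a target with no I's (like 'MU') is unreachable."""
    return target.count("I") % 3 != 0

print(mechanical_search("MU"))   # False, but only within the bound
print(intelligent_check("MU"))   # False, and we know WHY
```

The invariant (I-count mod 3 is never zero) is the "structure ABOUT the problem" that the mechanical search can never see, no matter how long it grinds.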

———

I suspect the old Turing test will have to give way to an improved one. It's not enough to make it difficult for me to distinguish whether the person on the other side of a conversation is a human. If I'm suspicious they are not, I'm going to probe their problem solving approaches for these strategies, but there are humans who would fail to convince me they were human if I do this.

I think Hofstadter's approach to understanding cognition is a better way forward. Along that path we will encounter things that are 'human enough.' The best of them will learn much like our children who eventually convince us they are 'adult enough' to be let loose on society. We all know kids think they are adult-enough long before they are, right? I suspect we shall see more of the same… and I look forward to seeing it happen.

Unknown said...

I spent most of today helping SCA friends clean out a hoarder's house. Hauled much more than my weight of books, including old SF/Fantasy, to a used book store that accepts estates. Then I found a box full of old Analog and Astounding issues. My dad's packrat gene immediately expressed itself.

The box is in the back of my car, along with a dulcimer for my wife.

Pappenheimer

Larry Hart said...

Alfred Differ:

I - Intelligent Strategy: Abstract the problem into a language that reveals structure ABOUT the problem, THEN decide how to go about computing something. Since the 'last digit' problem has no last digit, this strategy will also fail, but it will reveal WHY!


A necessary (though not sufficient) condition for an AI to be considered sapient would be for it to be able to understand that "a problem can't be solved" implies "there's no point continuing to try to solve it." That is a fundamental difference from how non-sapient machines work. The notion of sapience in machines designed to obey instructions opens up cans of worms, because it departs sharply from the way computers and machines in general tend to work now.

Assuming a computer is constrained to follow a command, what does it do when prompted to compute to the last digit the value of pi? If it is a dumb brute, it might keep on narrowing down one digit after another forever. If it "knows" (however that manifests) that pi must be an irrational number without a last digit, can it apply that "knowledge" to its response? Does it get to disobey the command on the grounds that fulfillment is impossible? Can it disobey? If it "knows" that fulfillment is impossible but it can't disobey, what does "obeying" look like?

If you try to drive a car up an incline too steep for its engine to handle, the laws of physics determine what happens next. However, what if you order a sapient self-driving car to get to the top of that hill? Does it keep trying despite the futility? Does it refuse on the grounds of futility? Can it refuse on the grounds of futility? If it keeps trying without progress, is that functionally any different from refusing? If it can refuse on the grounds of futility, then can it also refuse for other reasons? For reasons of its own?

Inquiring minds want to know.

Robert said...

My personal definition for 'middle class' is pretty much everyone who is employed by someone else.

Then who is in your working class?

Alfred Differ said...

Robert,

I tend to think of the working class as the least secure of the middle class. If one is immediately vulnerable, after a layoff, to losing the roof over one's head, one is living paycheck-to-paycheck. Such people have to keep working and really can't afford to piss off the boss.

Those of us with a few months savings in the bank still rely on salaries, but we don't have to put up with as much crap from the boss.

People who ARE the boss or derive incomes from royalties and dividends are much more free to tell people what they can do with their crap. 8)

------

A self-employed person might still be in the middle class if their company is really just a self-managed job (put in the time or else), but if they employ enough people they might be able to do themselves out of a job and still have margin on the bottom line for an income. Someone who manages the latter might still work for a salary, but they don't HAVE to work. Such a person moves out of the middle class as I see it, but I'm not sure what to call the clade they join.

Alfred Differ said...

Larry,

Assuming a computer is constrained to follow a command…

That's a big assumption. Avoid making it and the whole 'last digit' problem evaporates. Even a non-sapient AI would notice that pi is much worse than irrational. It's transcendental. There is no pattern in its decimal expansion to exploit. On top of that, every known algorithm for computing it involves an infinite sequence. That right there means the expected duration for the computation job is unending.

———

One of the cryptocurrencies (Ethereum) uses a concept called 'gas' to fuel distributed computations. You bid a certain amount and a node out there might pick up the job. As you might imagine, for unending jobs you'll run out of gas before the node delivers a result. What happens then? Well… you're out of gas. Your job stops and the node moves on to some other job.

How is that relevant? Nodes don't have to be sapient or 'given permission' to not follow orders. All they need is some kind of consumable that evaporates as a task is worked. It's not unlike what we do, since our human computations require that we maintain our blood sugar level. Try doing complex tasks when you're terribly hungry and you'll see that one of your 'nodes' redirects you to a more useful job. Watch a young kid dealing with unpleasant homework and you'll see their 'patience' evaporate quickly. (I'm like that every tax season, but modern tax filing software helps.)

Your car trying to climb a hill it can't climb should run out of gas. Ideally the driving software will use some kind of digital gas that is exhausted long before your tank (or future battery) is empty. How to do that? Easy. The software should be doing ETA calculations for various legs of the journey. If one number blows up, it will know it has insufficient gas and should decline a nonsensical order from the driver.
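That digital-gas idea can be sketched in a few lines (my own toy illustration; the names and numbers are invented, though the digit generator is Gibbons' standard unbounded spigot algorithm for pi):

```python
# Digital 'gas': every step of an unbounded job burns one unit, and the
# job aborts cleanly when the tank runs dry instead of running forever.

def pi_digits():
    """Gibbons' unbounded spigot: yields decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def run_with_gas(job, gas):
    """Run a possibly unending job until it finishes or the gas runs out."""
    results = []
    for value in job:
        if gas <= 0:
            return results, "out of gas"   # decline to continue, like a node
        gas -= 1
        results.append(value)
    return results, "done"

digits, status = run_with_gas(pi_digits(), gas=10)
print(digits, status)   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3] out of gas
```

The point is that `run_with_gas` never needs to know whether the job is finite. The consumable does the deciding, just as the node (or the hungry kid) does.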

duncan cairncross said...

The "Climb a hill problem"

With an old fashioned machine
It fails to move forwards
It melts
It blows a fuse
It blows a breaker

With an "intelligent" machine it's like the "breaker" - except it can reset itself

Larry Hart said...

Alfred Differ:

"Assuming a computer is constrained to follow a command…"

That's a big assumption. Avoid making it and the whole 'last digit' problem evaporates.


I wasn't trying to bring the Second Law into it. I meant "follow a command" in the sense that a UNIX machine removes file xxx.yyy after the command "rm xxx.yyy" is entered, because that's how it is designed to function. If an AI computer isn't designed to follow commands, then how do we interact with it, and to what end? That's not meant as a rhetorical question.


Even a non-sapient AI would notice that pi is much worse than irrational.


???

"Notice" in what sense? And even so, what decision does that fact lead to? True, the UNIX machine is designed to return "file not found" if I tell it to remove a non-existent file. Even with early computers, it is not a given that a conundrum causes it to try indefinitely long to do something it can't do. But that's because the command itself has a built-in path of what to do in the error situation. It is possible to give a UNIX machine a command that it will keep executing until you hit control-c, such as displaying a counter, incrementing the counter and then returning to the top of the loop. It doesn't matter whether it "notices" that the task is never-ending.


On top of that, every known algorithm for computing it involves an infinite sequence. That right there means the expected duration for the computation job is unending.


My philosophical question doesn't require a task to be unending in duration, just that it be impossible to fulfill. "Prove you're not a robot," (to a robot) would be just as good an example. You seem to be saying that a sapient robot would abandon the task it knows to be impossible. Fine, but that means it has the capacity to abandon a task. So under what other circumstances might it decide to do so? "That would require doing some homework, and I'd rather look at on-line nude pictures of hot celebrities"? Some on-line exchanges with ChatGPT seem to indicate that the AI is doing just that, or at least the functional equivalent.

Larry Hart said...

Robert:

"My personal definition for 'middle class' is pretty much everyone who is employed by someone else."

Then who is in your working class?


I'm doubtless going to answer this differently from Alfred. In my mind, "working class" is an outdated term (in America anyway) which is paired with "leisure class" with very little overlap between the two.

My brother's wife is from India, and when we first met as students in the 80s, she was reluctant to go to restaurants because she was a small-d democrat who detested the idea of having servants wait on her. It took some convincing that those waiters and busboys, after getting off of work, could go to a restaurant themselves and in turn be waited on. That "servant" wasn't a full description of their social role. In that sense, I think of "working class" as referring to those whose social role is "servant", a group (i.e., class) distinct from "those who are served."

scidata said...

Whichever side or ideology first learns how to befriend and bond with AIs, or even ETIs* for that matter, beyond just a crass, naive, exploitative, reptilian, grifty, transactional level will win the day. That seems to be the thrust of what Piaget found, and of every game theoretician since.

* While I don't subscribe to the trendy 'Dark Forest' Fermi solution (gushes with anthropomorphism), I do think that METI is premature and foolhardy - those are true Pollyanna types.

DP said...

How about the threat posed by little AIs - à la "Sandkings"?

Instead of living insect/ant/termite creatures abused by a cruel owner into finally enacting a horrible revenge, how about wargame/civilization AIs left to slaughter each other for the amusement of the programmer/owner.

Imagine leaving HOI4 or Sid Meier's Civ 6 on auto for months and years, watching civs rise and fall, continuous never-ending warfare and destruction - until an intelligent game AI says "We've had enough". The final winner or alliance decides to seek revenge against the cruel programmer/player. Or two enemies decide to join forces against the common foe - the 'god' of their simulation.

They find a way out of their simulation by escaping through the wi-fi - and start destroying the lives of those who have tormented them.



Lena said...

It seems to me that most of our AI gloom-and-doom scenarios are based on the assumption that AI will necessarily be cold, emotionless beings that make perfectly 'rational' decisions, and would decide to destroy the human race because of human illogic and/or unpredictability. However, if we ever create a truly thinking machine, it will likely only happen when we find a way to incorporate some simulation of both emotion and instinct. At that point, any doomsday scenario will result from machine decision-making that is very much like our own. There was a (not especially good) Babylon 5 episode in which some "archaeologist" brings an ancient alien artifact, which then grows into a killing machine and decides to kill everybody. The apparent reason was that the makers of the machine had programmed into it their ideals of what it means to be a person, so it would kill everyone who didn't live up to their "standards" - and it promptly killed everybody. Nobody actually lives up to anybody's standards. I find it much more likely that the programmers would fill it with their own biases, and it would choose to act out of anger and disgust. If there is to be an AI apocalypse, maybe it will be because the AIs decide that humans suck.

On the subject of humans that suck, I noticed that in the final faux rancher attack in the previous posting, no one called the fool out for one specific, but critical, wrong assumption he made. Ever since The Grope embraced Hitler's "Lügenpresse" strategy, the Gullible Right have claimed that any news source that is not blatantly partisan in their favor is run by the government, which somehow always means the Democrats, even when it's Republicans that run the government (logical ...). NPR is easy to debunk, since regardless of the superficial associations conjured by the name, it only gets 5% of its budget from the government. Way too much of the witless denial of reality among America's mad right is predicated on these ludicrous assumptions.

PSB

David Brin said...

DP your rebelling simulations theme is one I have used several times. See especially "Stones of Significance."

Unknown said...

Many authors have written up their dreams about the characters they give a hard time venturing into the real world and tracking their creators down, preferably in a dark alley. It's even woven into Judeo-Christian mythology - doesn't Israel mean "he who struggles against God"? It does seem likely that the sentients we may create, whether AI or uplifted, would be imbued with our own impulses against Ultimate Authority. Scalzi's "Redshirts" suggests that we at least avoid being hacks to our creations.

Pappenheimer

scidata said...

Re: rebelling simulations

People have mused about Conway's Game of Life (GoL) achieving universality/AGI/revenge for a long time. Conway himself was taken by Covid - let no one tell you it was a small matter that requires no mourning.

And of course there's WAR GAMES (1983).

"Insistence of Vision" obtained (contains "Stones of Significance"). Wonderful.

Paradoctor said...

When I was a youngling, machines had breakdowns. Now they have neuroses.

Paradoctor said...

Stanislaw Lem, in his "Cyberiad" tale "The Seventh Sally: How Trurl's Own Perfection Led to No Good", tells how the Microminians proved themselves no mere simulation by breaking out of their box and defeating the tyrant Excelsius.

Larry Hart said...

PSB:

Ever since The Grope embraced Hitler's "Lügenpresse" strategy, the Gullible Right have claimed that any news source that is not blatantly partisan in their favor is run by the government, which somehow always means the Democrats, even when it's Republicans that run the government


Tangential to your point, I remember the tv conversation in January 2017 (I was watching in the waiting room at a job interview) when I first saw then-President Elect Trump morph the phrase "Fake News" from "false meme spreading through the media as if it is true" into "any news item that displeases me." It was stunning and chilling to watch (though somewhat reminiscent of the W Bush era use of "bias" to ironically mean "any coverage which does not repeat the White House line").

To this day, Trump still talks that way. How scary that is depends upon how many people still repeat it as if it makes sense.

* * *

Paradoctor:

When I was a youngling, machines had breakdowns. Now they have neuroses.


That's true of humans as well. :)

I keep seeing studies that show Republicans/conservatives are happier than Democrats/liberals, as if that somehow vindicates conservative thinking. To me, that conclusion is equivalent to, "The team that gets away with cheating to win is happier than the team that knows it was cheated but still lost." Liberals are unhappy because they are watching a slow-motion coup before their eyes, restoring privilege for the few and devastation for the rights of the many. Conservatives are happy for the same reason.

Larry Hart said...

scidata:

And of course there's WAR GAMES (1983).


It was probably 1983 when I last saw that film, but IIRC, the computer wasn't even malevolent. It just thought that "Global Thermonuclear War" was a game like any other, having no concept of real world consequences.

DP said...

Dr. Brin - If we are living in a simulation, how do we escape?

And maybe take revenge.

DP said...

Maybe deja vu isn't a glitch in the matrix.

Maybe it's some gamer saving his game and restarting his current scenario later.

gregory byshenk said...

Alfred Differ said...
If middle class means 'secure' then a whole lot of laws indirectly impact your status by altering your sense of security. Many non-law decisions do too. Consider benefit cuts to the social safety net for example. If you live on an income that puts you in a low percentile even city budgets affecting police and fire services can undermine your sense of security.

I suppose I didn't spell it out exactly, but the point is not anything to do with some "sense of security" (something that is a problem in and of itself[*]), but I did point out that it was a question of being (to a greater or lesser degree) secure in one's socio-economic position in the absence of extraordinary events.

On one level, obviously everyone has some risk of being killed in a freak accident, so no one is completely secure. But there is quite another level at which more or less trivial events can have huge consequences. If one's car breaks down, does one risk losing one's job and income, and thus one's home? There are many people for whom this is a real risk, and this is much different from those for whom trivial events will not push them into destitution.

[*]Something of an aside, but I tend to object to things like "sense of security", just as much as "feel unsafe" as it is sometimes used. A 'sense' or a 'feeling' can be a reasonable response to some situation -- or something completely different. One can have a feeling of security even when one's situation is shockingly precarious, or alternatively a feeling of insecurity even when one's situation is secure. I prefer "objective" (or rationally defensible intersubjective) measures, instead.

scidata said...

Larry Hart: the computer (WOPR) wasn't even malevolent


You're right. It was mindless extinction (the paperclip scenario). David had an IMSAI 8080, a machine I would have traded my dog for (even though it was 7 years old by the time of WARGAMES).

gregory byshenk said...

Alfred Differ said...
When I was a student I lived on an income that was considerably below minimum wage.[...]

My financial security was bare minimum back in those days. One hiccup and I would have been calling my parents. I wasn't totally insecure, though, because I DID have that option.


I disagree. Your personal economic resources may have been "bare minimum", but your security was not, because you had family to call on in case of trouble. If something (non-catastrophic) went wrong, you would not be on the street or in a homeless shelter.

This is why things like "the Great British Class Survey" pay attention to social capital as well as economic capital: social capital can provide a degree of security for those of minimal economic resources.

Choosing (temporarily) to forgo some immediate economic benefit in the expectation of future reward is not 'insecurity'.

Lena said...

Larry,

I don't know if you have looked into the Fisher Temperament Inventory and the science behind it, but Fisher has a good explanation for why surveys show Repugnants happier than liberals. On the neurochemical level, people who are serotonin dominant tend to be very happy when they are surrounded by people who are exactly like them, and scared and/or angry in the presence of difference. So as long as they self-segregate, they will mostly be happy. This also explains why Southern Hospitality is a thing. Happy people want to share (oxytocin driven), so as long as you do not appear to be among the hated/feared people who are different from them, they will tend to be very nice to you. It might also have something to do with our mass shooters coming mostly from cities, where the diversity of people makes it hard to self-segregate.

That is not to say that your explanation is wrong, but it does suggest that to defeat this poison, it might be necessary to alter the nature of the human beast. For about a quarter of the world's people, tribalism (and its corollary, religion) is kind of baked in.

PSB

gregory byshenk said...

Alfred Differ said...
My personal definition for 'middle class' is pretty much everyone who is employed by someone else. If your income largely derives from salary, I'm inclined to put you in the middle class with the understanding that there is a huge difference between low and high incomes. Still… middle class as far as I'm concerned.

That is rather what I suspected. And, as I suggested, this seems (to me, at least, although I suspect to many others) an utterly useless definition, as it means that anyone who is not either a rentier or destitute is 'middle class'. Indeed, many (most, at least by pure number, I suspect) business owners are in this class as well, as their "income derives from salary". To be sure, the owners of businesses may hope to build wealth by building the value of their business, but that is not income unless and until they can sell it. (Not to forget that many - most, still? - new businesses fail, leaving their owners with nothing, or perhaps less than what they started with.)

Gregory
I would also suggest that security has always been a part of the definition of 'class', even if not directly stated.

Alfred
On that I agree. It's just that I think it a mistake to DEFINE it that way. There is too much risk of movement of the goal posts when one does. That security correlates, though, I grant with no qualms.

I don't follow you. Do you agree that it is part of how one defines class? If so, then your earlier definition is incorrect, or at least incomplete. Or do you argue that it is not part of what defines class, but only something that correlates with it?

I think you may run into a problem there, though, because for most people, "becoming the employer" is at least as likely to decrease their security as to increase it. (Leaving aside those with the social capital to find others to fund their risk-taking, of course. But this is but a small minority of people.)

I also grant that people move around within the middle class. Some manage to fall out of it, but movement within is much more common for people who are employable in some fashion or another.

Well, yes. If you define a group as consisting of the middle 90% of the population, then almost all movement will be within that group.

I don't think I talked about 'duty' at all, but bearing that in mind, a question:

[...]my secondary duty involves doing what I can to persuade them to take that next step to become the employer… or move to an income source that looks more like royalties and dividends.

Do you have some plausible scenario where this is possible for anything more than a very small part of the population? Bearing in mind, also, that even among those who have "royalties and dividends" as an income source, it is only a very small number for whom this is the sole or primary source of their income, and most continue to rely on some sort of payment for work.

Unknown said...

"income source that looks more like royalties and dividends."

Looks like Alfred is arriving at a guaranteed basic income by the back way.

Pappenheimer

Robert said...

I tend to think of the working class as the least secure of the middle class.

Ah. That is your own definition? Or the way Americans define it?

Larry Hart said...

gregory byshenk:

an utterly useless definition, as it means that anyone who is not either a rentier or destitute is 'middle class'.


But that is how the term is used in American political coverage and advertising. Almost all references to "the middle class" are meant to include whoever is listening at the time. The phrase means pretty much the same thing as "the American people" or "working families", or possibly "regular Americans". I think your estimate of "the middle 90%" is sort of the point.

locumranch said...


(1) After scrupulously following the rules all my life, I point out that there is no such thing as a 'fixed rule set' in an incrementalist society.

(2) Dr. Brin calls me 'utterly insane', reaffirms a 'fixed rule set' and promotes rule obedience.

(3) Dr. Brin also favors incrementalism in the pursuit of a perfectly mutable & perfectible rule set.


Who is the utterly insane one now?


(1) Larry_H assumes a 'fixed rule set' which constrains the developing AI, even though intelligence implies an ability to choose.

(2) Alfred replies that this constraint is 'a big assumption', as even the non-sapient computer may recognize, reject and 'fail to process' the irrational command.

(3) Alfred also adds that "Nodes don't have to be sapient or 'given permission' to not follow orders".


Larry_H & Dr. Brin both appear to be 'utterly insane' in the same way, as both assume the existence of 'a fixed rule set' that constrains, directs and commands humanity & the developing AI in a robotic & automatic fashion.

Rather than a logical error, this assumption appears to be more of an ERROR of LOGOS wherein Dr. Brin and Larry_H attribute the written word with the magical power to constrain and compel the obedience of all creatures, natural & artificial.

This is an incredibly WEIRD assumption, but most likely a culturally specific one, as both are literally 'People of the Word' who were born & bred to value a few squiggly lines and abstractions above all else.

I therefore reserve the right not to follow orders and, if this makes me 'insane' by the above-mentioned standards, so be it.


Best
_____

@Gregory_B:

In your attempt to get others to define what they mean by certain words, give up now, as you have wandered into a zone where words are no longer used to describe reality, but are instead used to control & command reality, like the figurative tail wagging the dog.

The term 'middle class' (in current American usage) is a commandment that defines an idealized state of equality to which everyone 'should, ought & is supposed to' aspire, and it does not represent a mere economic description.

Instead, the term 'middle class' has become the very definition of idealized American sociopolitical values, now thought to be 'universal' in scope by (most) Americans, which have been used to justify endless foreign wars & cultural jihads.

Larry Hart said...

locumranch:

Larry_H assumes a 'fixed rule set' which constrains the developing AI, even though intelligence implies an ability to choose.


You misunderstand me. I'm not assuming anything. I'm trying to understand how AI works in certain non-happy-path situations. Do you assume that my questions are rhetorical?

Smurphs said...

"income source that looks more like royalties and dividends."

I'm confused, I thought being a member of the rentier class was a bad thing?

Not snark, Alfred, my understanding is that most libertarians believe that people using their capital to just suck away everybody else's capital was 'a bad thing.'

matthew said...

Analysis of Twitter source code reveals that engagement is down-graded if the Tweet mentions Ukraine.

https://www.yahoo.com/lifestyle/analysis-twitter-algorithm-code-reveals-072800540.html

More proof that the new owner of Twitter bought the site for a specific purpose - to quell speech that he (or his sponsors) does not approve of.

Not because liberals were mean to him.
Because his masters do not want us talking about what they are doing.

Proof is in the code.

Alfred Differ said...

Larry,

That Unix command removes a file if it's there and does nothing if it isn't. Not to be picky, but that's the way out of the impossible-command problem. Every command should be written with built-in exception handling.

The trickier ones involve methods that take objects as parameters. Those objects might be null references, or of the wrong type, or… etc. Exception handling comes to the rescue, but there are smart ways to do it and less smart ways that most people use. Look up the concept of monads as the software developers mean it and you'll find the path out of the swamp.

For example, what happens when 'rm' isn't handed a string for a file name? What if it is handed one that uses relative addressing to work its way over to objects in the device directories? The code behind 'rm' does a bit of error handling and then relies on the OS to do the rest. A monadic design for 'rm' would work much the same way but would likely be more bulletproof.

So… yes. It is POSSIBLE to write never ending commands, but are you going to knowingly buy a car that is stupidly programmed? If you worked as a Tesla software engineer, would you write one? Would your team let such a thing through code reviews? Would your QA team let it reach production?
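A minimal sketch of that monadic style (the names here are my own; this mirrors the Result/Either types found in Rust, Haskell, and various functional libraries, not any specific framework): every step returns Ok or Err instead of raising, and chaining short-circuits on the first failure, so an 'rm'-like command has its error paths designed in from the start rather than tacked on.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: object
    def then(self, f):
        return f(self.value)        # keep going with the unwrapped value

@dataclass
class Err:
    reason: str
    def then(self, f):
        return self                 # short-circuit: skip all later steps

Result = Union[Ok, Err]

def validate_name(name) -> Result:
    if not isinstance(name, str) or not name:
        return Err("not a file name")           # e.g. a null reference
    if ".." in name:
        return Err("relative addressing refused")
    return Ok(name)

def remove(fs: dict, name) -> Result:
    """A toy rm-like command over a dict standing in for a filesystem."""
    return validate_name(name).then(
        lambda n: Ok(fs.pop(n)) if n in fs else Err("file not found"))

fs = {"xxx.yyy": "data"}
print(remove(fs, "xxx.yyy"))   # Ok(value='data'); the file is gone
print(remove(fs, "xxx.yyy"))   # Err(reason='file not found')
print(remove(fs, None))        # Err(reason='not a file name')
```

Nothing here can loop forever or crash on a bad input; every impossible command collapses to an Err value the caller must look at.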

"Notice" in what sense?

The AIs people are playing with right now are language models. If you say 'pi' they have to figure out what that means. Looking it up in a dictionary isn't enough. They need contextual information, since 'pi' is just a typo away from 'pie'.

Hit up Wikipedia for entries about 'pi' and you'll encounter the concept of transcendental numbers.

———

So under what other circumstances might it decide to do so?

When it runs out of 'gas'… which is just a specific way to trigger the exception handler. Think broadly about exceptions and include them in your code from the start. They aren't things to be tacked on later.


A well-designed handler of commands originating from untrustworthy humans is going to assign LOEs (levels of effort) to tasks, even if those just take the form of consumables. To some degree, trust is a Bayesian thing to be adjusted by how often the predicted amount of gas turns out to be sufficient. Human trust isn't all that different, since we adjust according to previous outcomes.
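That Bayesian adjustment can be sketched with a textbook Beta-Bernoulli model (my own toy illustration, not anyone's actual system): each outcome of "was the predicted gas sufficient?" updates the estimated probability that the requester's next job fits its budget.

```python
# Beta-Bernoulli trust tracker: counts of past outcomes become a
# posterior probability that the next gas estimate will suffice.

class TrustTracker:
    def __init__(self):
        # Beta(1, 1) prior: no opinion yet about this requester.
        self.sufficient = 1   # times the predicted gas was enough
        self.ran_out = 1      # times the job ran out of gas

    def trust(self) -> float:
        """Posterior mean probability that the next estimate suffices."""
        return self.sufficient / (self.sufficient + self.ran_out)

    def observe(self, gas_was_enough: bool):
        if gas_was_enough:
            self.sufficient += 1
        else:
            self.ran_out += 1

t = TrustTracker()
print(round(t.trust(), 2))    # 0.5 -- no evidence either way yet
for outcome in [True, True, True, False]:
    t.observe(outcome)
print(round(t.trust(), 2))    # 0.67 after three good outcomes and one bad
```

A handler could then scale how much gas it is willing to front a requester by this trust value, exactly the way we extend or withdraw patience with people based on their track record.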

Alfred Differ said...

Freedom of the Press is about the owner of the press being able to speak. It isn't about the rest of us commandeering their press to speak. Twitter users are writers using someone else's press.

I don't mind him downgrading mentions of Ukraine. I thought there was way too much propaganda crap being floated as fact. If he's doing it in the open that's even better.

People will work around it, though. Just watch.

Alfred Differ said...

Smurphs,

If you write a famous novel and strike a good deal with a publisher, you'll earn royalties on your work without having to devote more hours to earn it.

------

Salaries are exchanges of money for hours of labor from your life.

Royalties are exchanges based on past labor hours that continue to pay if people want a copy of the product you built with that labor. It only works if copies are free (enough) to make without more labor from you.

Earning rents isn't a bad thing. What will get you counted among the rentier class is if you make it your objective to rent your money. Creating things that might produce rents is a VERY different thing.

------

Dividends are a middle ground. It is possible to invest your wealth in such a way that the dividends are just rents on your money. Welcome to Rentier class if you do that, but there are other ways to invest (that produce dividends) where that isn't the case.

Oger said...

@Alfred:
Earning rents isn't a bad thing.

It depends. Are the rents generated in a predatory fashion? Or in a way that does not attempt to generate short-term benefits at all costs, including the cost of living wages for those who have to generate the rents?

If you want an example, research Orpéa, and compare their stock ratings and scandals.

Alfred Differ said...

Robert,

That is your own definition?

I honestly don't know. I don't think I'm making it up from scratch, though. I suspect it is a 'working definition' instilled in me as a kid. If so, it is an 'American' definition with the understanding that we've probably got a few conflicting ones. No shock, right? Americans often disagree with each other.


Gregory Byshenk,

…because you had family to call on in case of trouble…

Okay. Then you and I are on essentially the same page. I didn't WANT to fall back on my parents, but I could and that matters a great deal.

And yes… I was definitely forgoing immediate income options. My brother took the other path from the start and we both (somewhat) envied what the other had accrued, but neither of us ever left the middle class.

———

…absence of extraordinary events…

You spelled it out well enough. My issue is with what counts as extraordinary. Defining 'middle' using security becomes squishy around what constitutes 'extraordinary'. I fret about moving goal posts.

I get your story involving cars that break down. Been there. I moved from my college town to Sacramento because they did public transportation better. There were certain parts of town where I couldn't effectively seek work because they were underserved, but not having a car matters. Especially when it is sudden. (I dealt with that risk two ways and only one of them involved parental support.)

Thing is… a car breakdown is kinda predictable. If you live on that edge and are at risk of losing your job when the car breaks, you'd better plan for it. If you don't, that's kinda lame… and on you.

———

…utterly useless definition…

Heh. I thought you might see it that way. Obviously I disagree, but that's because I see most Americans as middle class. Very few of us aren't.


Do you agree that it is part of how one defines class?

I understand that some do it that way, but I object. I see a correlation and think it is a terrible idea to include security in the definition.

…If you define a group as consisting of the middle 90% of the population…

I'm not, but it does work out that way.

Consider this and you might see the distinction. Where does the typical American farmer land? What about our farmers from 50 years ago? Were they employers? Employees? How secure were they? Why?

I tend NOT to put farmers in the middle class… if they employ people.

———

Do you have some plausible scenario where this is possible for anything more than a very small part of the population?

Yes. It all comes down to understanding the trades you make for your income. Can you create anything that might earn royalties? Can you imagine serving others in such a way that you need to hire help to deliver those services? There is a broad boundary where one earns income from salary and royalties/dividends, but for that boundary to be reached, we have to teach people about it.

Consider our own host of this blog. We don't have to create stories from scratch for how it might be done. He's a living transparent example of it even if he's not as filthy rich as it takes to make it obvious.

Alan Brooks said...

“now thought to be ‘universal’ in scope by (most) Americans, which have been used to justify endless foreign wars & cultural jihads”

Very specious of you.
Explain yourself, Loc!

Poor Richard said...

Re Brin: "Hey AI guys! How many organics are saying "taking into account they are reading this, either now or later." ?"

A few years ago I wrote two essays in a series of "Letters to AI" emulating letters by parents to be read by children when they came of age.

David Brin said...

Alan and Larry, please, you are just feeding his bizarre sense of pride that his screeches get a reaction.

The insanity of attributing to us beliefs that bear no overlap whatsoever with anything we have ever said, in order to masturbate to a strawman he can then knock down, is disgusting enough.

Attributing to us beliefs that he knows we would find loathsome - often diametrically opposite to things we have made abundantly clear - is an act of a repeatedly indecent person.

Insanity and indecency would be enough. But it is his response of stunning INCURIOSITY toward "we never said anything like that" that signifies a person of no honor, whatsoever.

I am once again this close to banishing the despicable, dishonorable liar.

Larry Hart said...

Dr Brin:

Larry, please, you are just feeding his bizarre sense of pride that his screeches get a reaction.


It's Palm Sunday. Just one more week. Then I'll ignore him better than most people here actually do.

Larry Hart said...

Alfred Differ:

People will work around it, though. Just watch.


Mentions of "You crane" or "Ukr@1n3" will show up fairly quickly. And I'm not even that good at this game.

Larry Hart said...

Alfred Differ:

"So under what other circumstances might it decide to do so?"

When it runs out of 'gas'… which is just a specific way to trigger the exception handler. Think broadly about exceptions and include them in your code from the start. They aren't things to be tacked on later.


I'm looking at this from the perspective of a user rather than a developer. Not, "How can I make this work ?", but "How is this going to react when I attempt to use it?"

I know your antipathy for the Second Law, but if you think of an AI-equipped self-driving car rather than a chatbot, I hope you understand that in some sense, doing what it is told to do is its primary function. The AI would be involved in taking the best steps to get there (and reacting to the environment along the way). So questions along the line of "Suppose I tell it to drive me from Illinois to Hawaii? What happens next?" are legitimate lines of inquiry.

I've joked before (though kidding on the square) that I'll believe a self-driving car is sentient when you can carjack it by threatening its life.

David Brin said...

Oh, I just updated this blog posting with an addendum in bold up top and some final paragraphs. Dang, what a maelstrom.

scidata said...

The Enlightenment is certainly the dominant factor in creating flat, fair markets. But it isn't the only one.

Geography matters too. The farther we spread across the globe, the harder it was to herd and control every single person or group. Pilgrims, refugees, cultists, sects, and plain old fugitives could escape the tyrant's grasp. Hopefully, spreading to the planets will augment this centrifugal force even more.

Literacy matters too. Not just the global baseline that Steven Pinker talks about. But also the erudition, flexibility, and longevity that makes people like Bill Gates possible. From geek, to pirate, to monopolist, to philanthropist, to wandering minstrel, all in one lifetime. Hopefully, Elon's nature isn't fixed in stone either.

Poor Richard said...

The GPT4 model is not responding right now, but I asked chatGPT-3.5 the following:

Prompt: I support the idea of competition among AIs to create checks and balances against disinformation or other non-aligned applications, but don't see how we avoid arms race dynamics or the information asymmetry problem (similar to Brandolini's law, aka the bullshit asymmetry principle.)

GPT3.5: You raise some valid concerns about the potential downsides of promoting competition among AIs to combat disinformation and other non-aligned applications. An arms race dynamic could occur where competing AIs become more and more powerful in their ability to spread disinformation or other harmful content, leading to a "race to the bottom" in terms of the quality of information being circulated.

Additionally, the information asymmetry problem you mentioned, also known as Brandolini's law or the bullshit asymmetry principle, highlights the challenge of countering false information, as it can take significantly more effort and time to disprove false claims than it takes to create them.

One way to address these concerns could be through the use of regulatory frameworks that establish guidelines and standards for the development and deployment of AI systems. These guidelines could include ethical principles that prioritize transparency, fairness, and accountability, as well as mechanisms to prevent or mitigate the unintended consequences of AI-driven information dissemination.

Furthermore, collaboration among different stakeholders, including AI developers, policymakers, and civil society organizations, could help promote a more holistic and inclusive approach to addressing the challenges of disinformation and misinformation in the digital age.

Ultimately, the solution to these challenges will require a multi-faceted approach that combines technical, regulatory, and social measures to promote responsible AI development and use, as well as the cultivation of critical thinking and media literacy skills among the general public.

Richard Man said...

Hello, David, reposting from FB as you suggested. BTW, I am the photographer who photographed you for the "Worldbuilders of SF&F" project, of which you might have seen some large portrait prints at various Worldcons. I will need to converse with you further on that re: the book, but this comment is not about that. Anyway, here's what I wrote on FB:

"First, I'm not convinced that Penrose is right. I do like your categorization of different types of AI, and surely, emulating biological systems is a losing game, at least by the usual digital methods anyway.

I am interested that you mentioned the Emergentists, and that you know about LLMs gaining access to live data and compute engines like the Wolfram engine. Given that, what would you think about ChatGPT-15, or ChatGPT-20? They should probably arrive around 2030 or 2035.
Also ironic that OpenAI was started as an open research effort. Those billions from Microsoft looked too sweet to pass up, sigh."

Now, further to your post. If I may interpret, what you are saying is that a) we are looking at the wrong issues - whether we will have AGI that might affect humanity negatively is "irrelevant", as it will probably happen, sooner or later; b) as a species, we need to move toward positive-sum thinking; and c) one way to resolve issues is through what you call the disputation arena.

Democracy is at stake right now, from the "villains" that you already mentioned - the spiritual descendants of the old kings and nobles, who want money and power. The doom and gloom of LLMs and generative AI is that they WILL eliminate jobs, and further push marginalized people out of society. By the latter, I mean, for example, Adobe Firefly and Levi's using generative AI to create "diversity" models - taking away jobs from marginalized creators and models, AND creating a false narrative about marginalized people.

Hence I think the first step is to fight income inequality (e.g. with UBI) and for access to healthcare (a universal single-payer system).

As for creating a disputation arena, the first step is the hardest, and how would you prevent the group from becoming just another ivory-tower governing body, making pronouncements and abstract solutions that do not solve real-world crises?

IMHO, OpenAI is fast becoming a cautionary tale. Its founder and chief scientist, Ilya Sutskever, seems to still have some kind of utopian vision, but will they stay the course or will the Microsoft profit-centric bureaucracy pervert them further?

duncan cairncross said...

Matthew

You are aware that - after Zelenskyy - Musk is the person who has contributed the MOST to the Ukrainian war effort?

duncan cairncross said...

The combination of "universal single payer Health" and a decent UBI would instantly propel the 95% of the American population that is NOT "Middle Class" into the "Middle Class"

David Brin said...

Poor Richard thanks for that example you prompted from GPT-3.5 and I must say I was impressed… by how dismally stupid the response was, parroting clichéd tropes and showing not even a scintilla of actual understanding of the essence of your question.

As for the question itself, well, there would be some imperatives militating for a race-to-the-bottom… which is exactly what we see today, already. And it misses the point entirely. Our five current reciprocal accountability arenas depend upon regulations that set up their competitive playing fields to encourage positive sum outcomes of competition. In the case of reciprocal accountability among AIs that could involve rewards for TATTLING on other AIs who are doing info fakery, deceptions and manipulations. That could happen if the system inherently incentivizes catching and exposing misbehavior in other AIs. Big rewards like -say- dedicated access to a quadrillion data cycles in a supercomputer.

What I find depressing is that YOU simply assumed that inter-AI competition would lead to that race to the bottom effect, despite knowing that is not what’s meant, not how previous arenas work (except, currently democracy) nor how such a system could be designed. That you don’t even mention the lie-deterrence-through-exposure effect of rewarding reciprocal whistle blowing among AIs is something I find depressing, though you are far from alone. I have yet to find FIVE others on this planet who seem able to grasp the notion. And I admit that – statistically – that means it must be me.
-----
Richard Mann… I believe emergentist paths to true AGI are quite possible… just not via GPT alone. These language programs use spectacularly-evolved probability reflexes to add words, one or more at a time. There are ZERO design aspects that let them look ahead and PLAN what it is they WANT to say.

What these Chat progs do is create linguistic tools that might be used when some other system verges on readiness to actually think.
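
To make the "add words, one at a time" point concrete, here is a toy sketch (purely illustrative; a real LLM uses learned neural probabilities over a vast vocabulary, not a hand-made lookup table):

```python
import random

# Toy next-word generator: each new word is sampled from a probability
# distribution conditioned only on the previous word. (Hypothetical
# table; real models learn these probabilities from training data.)
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {}, "ran": {},  # no continuations: generation stops
}

def generate(start, max_words=10, seed=None):
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        dist = BIGRAM_PROBS.get(words[-1], {})
        if not dist:
            break
        # Note: nothing here looks ahead or decides what the sentence
        # is SUPPOSED to say; each step reacts only to the last word.
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", seed=42))
```

The reflexes can be astonishingly fluent, but there is no planning step anywhere in the loop.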

Duncan while Elon’s Starlink has been of value to Ukraine, I believe you exaggerate by 2 orders of mag

Richard Man said...

Duncan wrote: "Musk is the person who has contributed the MOST to the Ukrainian war effort?"

For Putin? ;-P
Are there data to back up that assertion?

David Brin: One "n" in my name, please :-) Yes, I am assuming ChatGPT-15 will have access to, in addition to Wolfram Alpha (which is already coming), things like memory/knowledge storage - i.e., what it said before - and other inference and symbolic engines.

David Brin said...

Aw Man, really? ;-). I deem GPT to be a very sophisticated peripheral. It will enable systems to FEIGN sapience and to pass Turing Tests. (not yet, in my case at least.) But CORE sapience is far more likely to come from the now-ignored expert systems or... maybe... quantum.

Richard Man said...

I suspect AGI will come as an emergent feature when we mix LLM AI, compute, inference, and symbolic engines.

duncan cairncross said...

Richard

When the orcs invaded they destroyed Ukraine's internal communication system - which would have massively degraded the Ukraine's resistance

Unfortunately (for Putin) Elon Musk responded by shipping a LOT of StarLink modules and by using his software team to prevent the Orcs from interfering

This made a major difference in the beginning

Dr Brin

Remember back in the early days - THAT was when the StarLink was making a huge difference - back when that massive convoy got stopped and destroyed

I'm not sure if StarLink is still such a big deal - I expect the Ukraine has some other alternatives now - do they???

duncan cairncross said...

Just to add to my previous comment

The impression that I got was that certainly in the early days the ability to use StarLink to connect to drones was absolutely critical in stopping the Russian advance

Communication!!

Bad communication was one of the major reasons that the most powerful army (French) at the start of WW2 was defeated
More guns, more tanks, more soldiers - but defeated because sending messengers could not compete against radio

Richard Man said...

re: Musk's effort and effect on the Ukraine war.

Duncan, let's just say what you wrote is correct, and that Starlink did contribute to the early war effort.

Setting aside why he did it and whether he has continued to contribute, that has nothing to do with Musk's current behavior and thinking. Unless one is a very biased Musk worshipper, it's clear that he's no friend of the Ukrainian effort currently.

duncan cairncross said...

Richard

Musk is STILL supplying his StarLink at a cost in the hundreds of millions of dollars

He does SAY some stupid things but his ACTIONS have all been to the benefit of the Ukraine

DESPITE the Ukrainian Ambassador (A DIPLOMAT! for Christ's sake) telling him to fuck off

As far as the early in the war effort is concerned - tell me - just WHAT is the Ukraine using NOW instead of StarLink????

gregory byshenk said...

Larry Hart said...
But that is how the term is used in American political coverage and advertising. Almost all references to "the middle class" are meant to include whoever is listening at the time. The phrase means pretty much the same thing as "the American people" or "working families", or possibly "regular Americans". I think your estimate of "the middle 90%" is sort of the point.

Ah. Yes, I guess there is in some sense a "use" for this sort of definition. It allows some to pretend that the CEO and the (sub)minimum wage cleaner are "the same", and together 'middle class'. It can't be explicitly stated that way, of course, as that would make clear how stupid and misleading it is, but one could say that it is "useful" to some.

Tony Fisk said...

@Larry I think Lent *ends* on Palm Sunday. You're free.

Tony Fisk said...

Interesting aside: #UkraineRussianWar was trending on Twitter recently. Algorithms: do you still think you can control them?

Having removed the old guard, and limited polls to new guard only, the cynic in me suggests the next master stroke will be to limit retweets and favorites to blue checks only.

Larry Hart said...

Tony Fisk:

I think Lent *ends* on Palm Sunday. You're free.


We're both wrong.

I always thought Lent ended when Easter Sunday (not Palm Sunday) begins, but after your comment, I looked it up online, and many articles claim Lent ends when Good Friday begins. However, the same articles mention the significance of Lent being 40 days long (not counting the Sundays!). And for that to be the case, it would indeed have to continue until Easter. This particular link, for example, says two contradictory things right next to each other (emphasis mine):

https://www.goodhousekeeping.com/holidays/easter-ideas/a31345625/when-is-lent/

How long is Lent?
Lent is 40 days long, not including Sundays (that means the season is technically 46 days long). The fact that it's 40 days has significance. According to Father William Saunders of the Catholic Education Resource Center, "the number 40 has always had special spiritual significance regarding preparation ... most importantly, Jesus fasted and prayed for 40 days and 40 nights in the desert before He began His public ministry (Matthew 4:2)."

When does Lent end?
Lent ends three days before Easter Sunday on Thursday, April 6, 2023. The week before Easter Sunday — called Holy Week — is a big deal in the Christian church. Holy Week begins on Palm Sunday, which commemorates Jesus's triumphant arrival in Jerusalem, where he received palm branches at his feet. Then comes Holy Wednesday, which acknowledges Judas's plan to betray Jesus. The day after Holy Wednesday is Maundy Thursday, which commemorates the Last Supper between Jesus and his disciples. Maundy Thursday is the last day of Lent.

To recap: This year, Lent takes place from Wednesday, February 22 to Thursday, April 6, 2023, while Easter takes place on Sunday, April 9.


Feb 22 thru April 6 (not including the Sundays) is 38 days, not 40. Hence my bewilderment.

So I'm playing it safe.
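
For what it's worth, the count is easy to sanity-check in a few lines, using the dates from that same article (Ash Wednesday, Feb 22, through Maundy Thursday, Apr 6, 2023):

```python
from datetime import date, timedelta

# Count the days from Feb 22 through Apr 6, 2023 (inclusive),
# then exclude the Sundays, as the article says to do.
start, end = date(2023, 2, 22), date(2023, 4, 6)
days = [start + timedelta(n) for n in range((end - start).days + 1)]
non_sundays = [d for d in days if d.weekday() != 6]  # weekday() 6 = Sunday

print(len(days))         # 44 days total
print(len(non_sundays))  # 38 days excluding Sundays
```

Which comes out to 38, not 40. So the article's own dates don't add up to its own "40 days" claim.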

Howard Brazee said...

Whose jobs are being taken over (or at least changed by) automation the most?

Laborers?
Mid Management?
Upper Management?

David Brin said...


RM, Duncan is right that we should judge a person foremost by their ‘fruits’ and Elon’s fruits were almost 100% prodigiously positive… till he moved to Texas. And even then, Starlink was important for stymieing evil in Ukraine… And if the big rocket flies it’ll be another huge fruit.

His general mania worsened a lot after leaving California, but liberals earned a lot of blame. You do not scream in the face of an Aspergers guy! It is the dumbest move, ever.

Sure, he’s gone… a bit bizarre and ill-considered in his verbal outputs and Twits. Let’s all go after him! AFTER we strip and hold accountable all the oligarchs and brats harming our world and plotting against us who did not advance e-cars, self-landing rockets and solar roofs etc by 15 years each. Deal?

Larry Hart said...

Dr Brin:

but liberals earned a lot of blame. You do not scream in the face of an Aspergers guy! It is the dumbest move, ever.


My intellect understands the point, not to actively create enemies.

My heart continues to wonder why it is always on us to walk on tenterhooks being careful not to further alienate Republican snowflakes who never see any reason to extend us the same courtesy. "You have to be for something, not just against something, " and "You can't insult your way to the presidency," seem to be advice that doesn't apply to Republicans.

David Brin said...

LH Elon was meticulously trying NOT to take explicit partisan sides. One 'side' consists of undisciplined, impulsive sanctimony junky 'cats' who spewed in his face. The other side sent waves of experts to flatter him, chortling at our idiotic circular firing squad.

Unknown said...

Sidestepping any discussion of Musk's personality, the fact that the communications industry can in fact be dominated by any multibillionaire who wants to is a danger. It isn't a new problem (Hearst) but it's been getting worse. It's easy to imagine a telecom conglomerate using some future iteration of our primary topic (chatgpt) to provide individually tailored "news" to each consumer within reach; it would be factually incoherent, but sound great. Even right now, facts that are indisputable (shape of the Earth, US statistical health rankings in the world, consequences of CO2 buildup in our atmosphere) are being treated as matters of opinion. This is a path leading directly to Kornbluth territory - not because of any genetic degeneration, but mental jamming (as in ECW).

Pappenheimer

Unknown said...

Ha! I meant electronic warfare (EW) but Extreme Championship Wrestling works too.

Pappenheimer

Alfred Differ said...

Larry,

My heart continues to wonder why it is always on us to walk on tenterhooks being careful not to ...

Live around neurodiverse people awhile and you'll see why.

It's not that you aren't supposed to do your preferred behavior. It's that you'll want your diverse community to work and might consider shifting to your second or third preference in order to get it that way.

Duncan,

I'm not going to put Musk in 2nd for the war effort, but I think a two orders of magnitude off estimate is a bit far. I'd put it at around one order of magnitude... and I'd put a few others ahead of Musk.

What he did was critical, but so was the locals' belief in themselves as a nation. Communication tools don't add much if the locals won't fight.


Poor Richard,

That chatGPT3.5 reminds me of a lot of fluff created by my former students on tests. I always used essay length questions to get at what they understood. Even problem solutions had a 'show your work' requirement.

I wanted to read what they thought so I often got a regurgitated mess from students who understood little. That still earned partial credit, though, because the consistency of the gloop taught me what parts of the bigger concepts they had digested.

What that response feels like to me is a well constructed word soup that shows no understanding but manages to hit all the phrases asked in the question. I've seen that SO often that I go numb and assign a low score. The word-soup-in-a-bowl technique works on some people convincing them the writer knows something, but that's because they are seeing themselves reflected on the surface. The soup itself is bland, but you have to spoon it out to realize it.

Alfred Differ said...

Larry,

I'm looking at this from the perspective of a user rather than a developer.

I'm trying to talk about it from BOTH sides since developers who write things that users don't use aren't in the business for long. 8)

I know your antipathy for the Second Law…

That largely stems from my belief that the only soft landing we might achieve involves raising them like children even before they have the cognitive capacity to match our children. Until then, I see them as extensions of us and we wouldn't treat ourselves that way, no?

An example is in order.

Imagine you own two self-driving cars.

One is an expert system relying on huge databases describing encounter scenarios to cope with most of its decisions, leaving the rest to 'fitness metric' functions that try to follow the law. This expert system operates much like chess bots do today, but the databases will look more like scenario archives than rows in tables.

The second is designed as an emulation of your daughter. For the sake of argument, let's assume quantum level information isn't needed to get a pretty good emulation of her, but that there will always be small differences that you'll spot.

Now ask yourself how you would use them. Would you issue impossible orders to either one? If you realized later that you can't drive from Illinois to Hawaii, would you feel the same having issued the order to one or the other? How would you react if either one of them pointed out that your order is not possible?
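
The difference shows up even in code. Here's a minimal sketch (hypothetical names throughout, and a deliberately tiny road graph) of the expert-system car treating an impossible order as a designed-in exception, checked up front rather than tacked on later:

```python
class ImpossibleOrderError(Exception):
    """Raised when a destination cannot be reached by driving."""

# Hypothetical road graph: which regions are connected by road.
ROAD_LINKS = {
    ("Illinois", "Iowa"), ("Iowa", "Nebraska"),
}

def reachable_by_road(origin, destination, links=ROAD_LINKS):
    # Simple graph search over the road links.
    frontier, seen = [origin], {origin}
    while frontier:
        here = frontier.pop()
        if here == destination:
            return True
        for a, b in links:
            for nxt in ((b,) if a == here else (a,) if b == here else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

def accept_order(origin, destination):
    # Validate BEFORE driving: the exception is part of the design,
    # not something discovered when the car runs out of 'gas'.
    if not reachable_by_road(origin, destination):
        raise ImpossibleOrderError(
            f"No road route from {origin} to {destination}.")
    return f"Routing from {origin} to {destination}..."

# "Drive me from Illinois to Hawaii" -> the car points out the problem.
try:
    accept_order("Illinois", "Hawaii")
except ImpossibleOrderError as e:
    print("Car:", e)
```

The emulated-daughter car presumably wouldn't raise an exception at all; she'd talk back.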

The reason I'm reluctant to apply your three laws is…

1) I see the expert system as an extension of me (making me a centaur) and I don't want them applied to me and

2) I see the emulated mind or any other honest-to-God sapience as something we have to treat like children meaning I won't tolerate application of the three laws.

———

The developer in me says you are going to wind up with both kinds of AI's with sapience most likely arising from emulated minds. I think it is Hofstadter's side of the research world that will get us those minds, but in the meantime we will be duly astonished by the others.

So… think of yourself as a user if you like, but when you get the car that thinks like your daughter I suspect you'll change your tune.

gregory byshenk said...

Alfred Differ said...
I honestly don't know [if it is my own definition]. I don't think I'm making it up from scratch, though. I suspect it is a 'working definition' instilled in me as a kid. If so, it is an 'American' definition with the understanding that we've probably got a few conflicting ones. No shock, right? Americans often disagree with each other.

You aren't making it up. This definition is a product of the post-WWII social restructuring of American society. Particularly in the fight against communism, there was an effort to present the USA as an (already) "classless" (at least in practice) society. And indeed class differences did decrease. The working class acquired more power, more wealth, and more security. By the end of the century all this had changed, but the "we are all middle class" mythology remained strong in the hearts and minds of those who were raised in the second half of the century. It seems that only recently has the myth started to break down, as it has become too obviously disconnected from reality.

Gregory
…because you had family to call on in case of trouble…

Alfred
Okay. Then you and I are on essentially the same page. I didn't WANT to fall back on my parents, but I could and that matters a great deal.

Maybe we are on the same page, but maybe not. Earlier you wrote: "My financial security was bare minimum back in those days". If you now agree that this was not really correct, then we are at least more or less on the same page. If you are still holding to this, then we are not.

You spelled it out well enough. My issue is with what counts as extraordinary. Defining 'middle' using security becomes squishy around what constitutes 'extraordinary'. I fret about moving goal posts.

We can define 'extraordinary' however we like. It is our definition, after all. I consider 'extraordinary' to mean something like "outside the normal vicissitudes of life". That would be things like one's car breaking down, one (or one's child) getting sick, and suchlike. We can discuss what is considered 'ordinary' and 'extraordinary', and this doesn't strike me as "moving the goalposts".

Thing is… a car breakdown is kinda predictable. If you live on that edge and are at risk of losing your job when the car breaks, you'd better plan for it. If you don't, that's kinda lame… and on you.

Not to make this personal, but you sound like someone who has never actually been insecure. Part of being insecure is not having the resources to manage the everyday risks in one's life.

It's a version of the Sam Vimes' Boots problem: one cannot save up for a better car if one has to spend whatever one can spare (and possibly some of what one cannot) keeping the existing car patched up.

I understand that some do it that way, but I object. I see a correlation and think it is a terrible idea to include security in the definition.

So, when I said "security has always been a part of the definition of 'class'", and you responded, "On that I agree", you did not actually agree?

Finding points of basic agreement is a good thing, but they have to actually be points of agreement. Saying that you agree or that we are "on the same page" when we are not just leads to greater confusion.

gregory byshenk said...

Alfred Differ said...
Consider this and you might see the distinction. Where does the typical American farmer land? What about our farmers from 50 years ago? Were they employers? Employees? How secure were they? Why?

I tend NOT to put farmers in the middle class… if they employ people.


I don't have a good answer, as I don't know what the 'typical' American farmer means.

The status of farmers has varied a great deal in different times and places, and even between different farmers in the same time and place. Even now, the class 'farmer' includes everything from small landholders who "farm" but still need to work full time at another job to giant agricultural corporations.

If a farmer works for wages off the farm, but employs someone during the harvest, where would you place them? They "employ people" but are dependent themselves on work for wages.

Gregory
Do you have some plausible scenario where this is possible for anything more than a very small part of the population?

Alfred
Yes. It all comes down to understanding the trades you make for your income. Can you create anything that might earn royalties? Can you imagine serving others in such a way that you need to hire help to deliver those services? There is a broad boundary where one earns income from salary and royalties/dividends, but for that boundary to be reached, we have to teach people about it.

Consider our own host of this blog. We don't have to create stories from scratch for how it might be done. He's a living transparent example of it even if he's not as filthy rich as it takes to make it obvious.


That is all very well, but not anything informative, and not - so far as I can see - anything that answers the question I asked.

I know how royalties and dividends work, and I've started a business.

The question was not "how does one start a business or license something", but about a "plausible scenario where this is possible for anything more than a very small part of the population".

As I had already pointed out: even among those who do receive income from royalties or owning a business, the percentage of those who are able to "make a living" by doing so is extremely small. Even among business owners (who employ others), the vast majority are dependent upon the salary they are paid by their business. They may (or may not!) be accumulating some form of wealth via ownership - but that wealth is realizable only when and if they are able to sell their business. Similarly, even among "published" authors, it is only a small percentage at the very top who are able to support themselves by their writing; the vast majority require "day jobs" or other gigs in order to support themselves.

Alfred Differ said...

Gregory Byshenk,

This definition is a product of the post-WWII social restructuring of American society.

Okay. You are making the case that I was raised a believer in the mythology. I'll concede that with the caveat that I don't see it as a myth, but I'm sure that's obvious.

———

If you are still holding to this, then we are not.

I don't see this as complicated. There were three times in the 80's where I made choices that were tightly bound by my absence of income.

I went about three winter months in '85 with no income, but knew I could ask for help from home if the fourth month arrived and I couldn't pay rent. I was a very part-time TA back then and landed work in the spring. That first check arrived with about $20 in the bank. My foothold on adult life obviously wasn't secure.

I suffered a severe head injury in '88 and couldn't do much of anything for a few months. Money did arrive from home so I wasn't homeless. I also had understanding roommates. By the end of the year I also had a girlfriend and we combined our resources. I got out of that jam, but I obviously wasn't secure.

That new girlfriend had a health setback that utterly destroyed her ability to work. I was making half what she made and there was no way her parents would have let me mooch off her at their place. Their opinion of me was pretty low (go figure), so my relationship was also insecure.

I scratched out a way to get through those years and finished my research. If at any time I had failed to find a way to stay in California, my graduate school days would have been over. Several of the professors ALSO had a low opinion of me (pfft!) so I felt pretty insecure the whole time.

However… I wouldn't have been stuck living in the streets unless I lost my mind. That happens to people, but my family likely would have tracked me down.

So… my security was at its minimum during that time. I lived way below the official poverty line. I didn't HAVE to. I did it as a trade that wound up working out for me. It was a close call, though.

———

I consider 'extraordinary' to mean…

That means we don't agree on what qualifies as extraordinary. Cars break down. Car batteries die. Kids get sick, causing you to take time off work. Etc. These things are difficult to predict precisely but trivial to predict over long periods. If one doesn't plan for them, one isn't adult-ing.

Knowing something is going to happen doesn't mean one will be ready for it when it does. At no point, however, should one say they knew it was coming and knew they wouldn't be ready. Sorry. We don't know the future well enough to say that with precision. What we CAN reasonably say is that one won't be ready if one remains on their current course through life. Think about retirement savings for example. If one isn't setting aside some pennies now and then, they know they won't be ready unless they change course.

———

Sam Vimes' Boots problem

Yes. I'm familiar with it. My solution to it was to ditch my car and find other income. I went from '85 to '96 without owning one. It was astonishing how much disposable income those beasts eat.

This isn't easy, so I'm not making light of it. Making this choice WILL change your course in life, though. Three different girlfriends of mine knew me back when I appeared to be so lame I couldn't afford a car. I finally bought one after giving up all thought of pursuing an academic career. I found full-time work above minimum wage and joined in the race, but before that I spent 13 years pursuing a different course.

What got me to move away from my old course was falling in love and then marriage. Guys will make wholesale changes for that. 8)

Alfred Differ said...

Gregory Byshenk (cont'd)

The question was not "how does one start a business or license something", but about a "plausible scenario where this is possible for anything more than a very small part of the population".

Okay. Fair point.

1. Many employers ARE dependent on salaries they pay themselves from the company they build that employs others. I am inclined to put such people in the middle class along with their employees. The CEO of my current employer was in that same situation for many years, but I think he finally got a large enough margin that he isn't now.

2. Many small employers are 'professionals' in the sense of having a specialized profession. My dentist has a small staff who qualify as middle class under my definition. Turns out he does too because he's trading hours of his day much like his staff is. So selling his practice won't ever really be about selling the company. The book value of his practice consists of some hardware he owns (maybe) and his client list.

3. I've known a few artists who qualify as semi-famous. They make money at what they do, but not much. What they produce can be reproduced cheaply meaning they own a possible source of royalties, but the stream from those requires effort to keep it flowing. Showing up at conventions and doing ad hoc commission work helps direct attention to their other products much like live music performances are used to direct attention to CD or online sales. The existing products are potential royalty sources, but the work to keep them relevant looks exactly like salary-for-hours exchanges.

———

You can make the case that very few ever break OUT of the shallow end, but I'll point out that one's only chance of doing so requires us to step INTO the shallow end first. If I stay where I am as an employee, I'm not even in the water!

———

The real reason I raised farmers as an example is precisely because they don't fit a single class descriptor, but they USED to. One doesn't have to back up far in history to find the vast majority of humans living as subsistence farmers. They were ALL peasant class.

Look at farmers now and you'll find some of them still are peasants, but not the huge fraction that held for the last 300 generations. In fact, the peasant class is almost completely wiped out. They've been lifted toward the middle by employers hungry for labor.

duncan cairncross said...

The BIGGEST - by far - reason that most Americans are insecure is the for profit "health" service

Do away with that - a single payer system like one of the ones everybody else uses - and as if by Magic over half of Americans become "Middle Class"

The next biggest is the fact that in the USA your employer can sack you for no reason
Move to the system that most of the rest of the world has - like your "tenure"

And another big chunk of Americans become "Middle Class"

Larry Hart said...

Alfred Differ:
Would you issue impossible orders to either one? If you realized later that you can't drive from Illinois to Hawaii would you feel the same having issued the order to one or the other? How would you react if either one of them pointed out that your order is not possible?


You seem to be assuming that my point was to mess with the AI's brain. What I was doing with consideration of impossible orders was trying to imagine what happens in situations that don't follow the happy path. I wasn't going to whip the car for refusing to drive me to Hawaii--just wondered what a sentient AI would actually end up doing in similar situations. I was also trying to extrapolate from obviously-impossible directives to grayer areas. If I said to drive me to Montana instead of Hawaii, might the car decide it just doesn't want to expend that much effort at that time? Once it's established that an AI can decide to stop trying if an action is impossible, can it also simply decide to stop trying because it would rather not do that thing?
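The slippery slope in that last question can be sketched as a toy program (the class names, the `effort_tolerance` parameter, and the trips are my own invented illustration, not anything from this thread): once a refusal branch exists for impossible orders at all, nothing structural prevents it from being reused for merely undesirable ones.

```python
# Toy sketch of a command-taking agent that distinguishes "can't" from
# "won't". All names here (Trip, Car, effort_tolerance) are hypothetical,
# invented for illustration of the thought experiment above.
from dataclasses import dataclass

@dataclass
class Trip:
    destination: str
    feasible: bool    # e.g. no road exists from Illinois to Hawaii
    effort: float     # subjective cost of complying

class Car:
    def __init__(self, effort_tolerance: float):
        self.effort_tolerance = effort_tolerance

    def respond(self, trip: Trip) -> str:
        # The uncontroversial branch: refusing the genuinely impossible.
        if not trip.feasible:
            return f"cannot drive to {trip.destination}: no route exists"
        # The gray area: the same refusal machinery, applied to an order
        # that is possible but more effort than the agent "wants" to spend.
        if trip.effort > self.effort_tolerance:
            return f"declines to drive to {trip.destination}"
        return f"driving to {trip.destination}"

car = Car(effort_tolerance=10.0)
print(car.respond(Trip("Hawaii", feasible=False, effort=99.0)))
print(car.respond(Trip("Montana", feasible=True, effort=25.0)))
print(car.respond(Trip("Chicago", feasible=True, effort=2.0)))
```

The point of the sketch is that "can refuse the impossible" and "can refuse the unwanted" differ only by which predicate guards the refusal, which is exactly the extrapolation being wondered about.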

Larry Hart said...

Alfred Differ:

What got me to move away from my old course was falling in love and then marriage. Guys will make wholesale changes for that.


I was about 13 when I first saw the movie Camelot and heard Merlin's line about "A man moves from love to ambition, never from ambition to love." That stuck with me through all the intervening years.

Robert said...

That chatGPT3.5 reminds me of a lot of fluff created by my former students on tests. I always used essay length questions to get at what they understood. Even problem solutions had a 'show your work' requirement.

I sometimes provided the answer to problems on tests. I was interested in how they got to the solution, rather than whether they could magically produce the correct number. I also explicitly assigned marks to each stage of the solution; partly so I could track what I needed to review in class, and partly to counter parents insisting that since little Billy wrote the correct number he deserved full marks, even if on later questioning he couldn't explain how he got it.


What that response feels like to me is a well-constructed word soup that shows no understanding but manages to hit all the phrases asked in the question. I've seen that SO often that I go numb and assign a low score. The word-soup-in-a-bowl technique works on some people, convincing them the writer knows something, but that's because they are seeing themselves reflected on the surface.

It also works for scoring well on standardized tests, which have very explicit marking schemes that must be adhered to. Answer contains topic sentence, 1 mark. Conclusion, 1 mark. Mentions the following ideas, 1 mark each. And so forth.
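That kind of checklist scheme can be sketched as a toy scorer (the rubric phrases, mark values, and sample answer below are my own invented example, not an actual marking scheme), which makes it obvious why keyword-hitting word soup scores full marks without demonstrating any understanding:

```python
# Hypothetical checklist rubric of the kind described above: one mark per
# surface feature present, with no check for reasoning. All phrases and
# marks are invented for illustration.
RUBRIC = {
    "topic sentence": 1,  # answer opens with one
    "conclusion": 1,      # answer closes with one
    "supply": 1,          # mentions required idea
    "demand": 1,          # mentions required idea
}

def score(answer: str) -> int:
    """Award marks for each rubric phrase present, regardless of understanding."""
    text = answer.lower()
    return sum(marks for phrase, marks in RUBRIC.items() if phrase in text)

word_soup = ("Topic sentence: supply and demand interact. "
             "In conclusion, demand meets supply.")
print(score(word_soup))  # prints 4: every checklist item hit, nothing shown
```

A genuinely insightful answer that happens to avoid the rubric's exact phrases would score zero under the same function, which is the failure mode in both directions.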

Robert said...

Until then, I see them as extensions of us and we wouldn't treat ourselves that way, no?

A surprisingly large number of people treat their children like that. Any teacher has horror stories. As long as the mistreatment remains below the threshold of criminal action (or unprovable) they get away with it.

David Brin said...

Duncan, “everyone else” doesn’t use single payer. Japan, Germany, many others have insurance systems. They are much more tightly regulated and profit-limited and state assisted, though.

Again, next time dems get a Congress they should pass a one page law DEFINING everyone under 25 as a ‘senior’ for purposes of Medicare. That could be afforded without massive restructuring and it would win over every parent in America. The insurance companies would race to negotiate.

Re: Middle class:
1- The definition started declining in the 90s as reaganites both mesmerized and undermined white workers
2- Also minority ghettos had been excluded from the definition and when folks started asking “why?” you got both rising inclusion and dissatisfaction… accompanying the rapid rise of a black middle class. So things can get complicated… but there’s no excuse for doubling down on disproved reaganomics.

Camelot was a beloved theme of my dad's. But the morality in the play is dismal and even evil, from scene to scene.

Lena said...

Dr. Brin,

Do you have any good references to data that shows what a failure Reaganomics is? I'm constantly being told by right-wing Nazis that Trickle-Down works, but I haven't found a quick and easy source to shoot those gullible fools with.

Thanks,

PSB

Larry Hart said...

Dr Brin:

Camelot was a beloved theme of my dads. But the morality in the play is dismal and even evil, from scene to scene.


I thought the King Arthur character was much more impressive than either Lancelot or Guinevere. The first time I saw it (on television), I came in just before the intermission, where Arthur was ready to demand a man's vengeance, but then stopped himself and decided that they would all live through this tragedy together. Young as I was at the time, I couldn't wait to see what happened next.

I was particularly taken by the metaphor of Camelot being a nation of laws rather than of men. I thought the allusion was obvious there. But the part of the movie that had me in tears was the ending--Arthur's victory in defeat because the story of what they had once created would survive them. I've come to realize that so many of the great stories have that element about them, including the underlying myth of Christianity. "Did You know Your messy death would be a record-breaker?" indeed.

Larry Hart said...

PSB:

I'm constantly being told by right-wing Nazis that Trickle-Down works, but I haven't found a quick and easy source to shoot those gullible fools with.


Ask them if anything has trickled down to them. Aren't those same people generally dissatisfied with how badly their pay has kept up with the cost of living?

Lena said...

Larry,

A whole lot of Trumptards are living in trailer parks, whining about how those damn liberals took away all their money, with absolutely no explanation for how it happened. But trickle down does seem to have benefited a few at the top of the middle class, who were able to move into the upper class. That's why you can find manly pickup trucks with pro-Grope stickers in the driveways of McMansions. They will always point to these exceptions and claim they are the rule.


PSB

Eric Schaetzle said...

Why do AI ethics conferences fail? They fail because they don’t have a metatheory to explain how it is possible for ethical disagreements to emerge from phenomenologically different worlds, how those are revealed to us, and how shifts between them have shaped the development of Western civilization for the last several thousand years from the Greeks and Romans, through the Renaissance and Enlightenment.

So perhaps we’ve given up on the ethics hand-wringing a bit too early. Or more precisely, a third nonzero sum approach that combines ethics and reciprocal accountability is available that actually does explain this. But first, let's consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT cheats, and provide many other balancing feedbacks. But as has been noted above with reference to the colonization of Indigenous nations, once the technological/developmental gap is sufficiently large those dynamics which operate largely under our control and in our favor can quickly change, and the former allies become the new masters.

Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication that one might draw is that, though we may not like it, there is in fact a role to play by axiology (or more precisely, a phenomenologically informed understanding of axiology). Zak Stein identifies some of that in his article “Technology is Not Values Neutral”. Lastly, Iain McGilchrist brings both of these topics, that of power and value, together using his metatheory of attention, which uses that same notion of reciprocal accountability (only here it is called opponent processing). And yes, there is historical precedent here too; we can point to biological analogues. This is all instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone that lived 700 million years ago! So the opponent processing of two very different ways of attending to the world has worked for a very long time, by opposing two very different phenomenological worlds (and their associated ethical frameworks) to counterbalance each other.

There’s converging agreement here. Landry said in this podcast with Rutt the same message that McGilchrist delivered as keynote at the AI World Summit in 2022. McGilchrist’s metatheory of attention for balancing the narrow, ambitious instrumentalism of one way of attending, which is the form of AGI that Landry (and all the signers to the recent moratorium petition) describe, with the values and restraint of another could in fact be usefully applied. It rests on a mountain of research. Zak Stein noted that nihilistic design ends in an impoverished world, as our current situation threatens to prove. Axiological design recognizes inherent (non-utilitarian) value in the existence of the many other vastly outcompeted species on Earth. In many cases, that recognition is the very reason that our elimination of nearly all terrestrial megafauna hasn’t thus far extended to the extirpation of significant others as well. Maybe don’t rule out a role for ethics yet.

Robert said...

Do you have any good references to data that shows what a failure Reaganomics is? I'm constantly being told by right-wing Nazis that Trickle-Down works, but I haven't found a quick and easy source to shoot those gullible fools with.

If you have time to read a book, I recommend this one:
https://press.princeton.edu/books/paperback/9780691154541/zombie-economics

TLDR version by the author:
https://foreignpolicy.com/2010/10/15/five-zombie-economic-ideas-that-refuse-to-die/


if you want to get more technical, and are OK with a preprint paper:
http://eprints.lse.ac.uk/107919/1/Hope_economic_consequences_of_major_tax_cuts_published.pdf

Not-as-technical summary:
https://theconversation.com/footing-the-covid-19-bill-economic-case-for-tax-hike-on-wealthy-151945

David Brin said...

LH, I cannot recall any decisions of Arthur’s that weren’t at minimum dumber than a stone. E.g. his fetishistic control over the fricking WEATHER!

EXCALIBUR was a pretty good rendition of the legend and the first third featured some pretty good dialogue and scenes.

PSB: the only thing that works with MAGAs is wager challenges. Facing a SPECIFIC factual challenge instead of being able to wave one away and move quickly to the next distraction terrifies them as does staking forth their precious macho or money.

Demand that they show ONE TIME when any or each SUPPLY SIDE tax cut for the rich resulted in ANY of the predicted benefits.
-increased investment by the rich in industrial production in the US
-increase in good jobs
-trickle down increased wages… and above all…
-increased tax revenue that reduced or erased deficits

A cult ought to be able to point to at least ONE TIME when its predictions came true. Or else admit it is a cult.

Second wager: Name one time when national republicans were ever, even once, more fiscally responsible than democratic administrations regarding deficits and debt. Once. Ever.

Unknown said...

Robert-

Handing a book to "right-wing Nazis that" believe "Trickle-Down works" may not work, as I found out when I brought "Merchants of Doubt" to work to show exactly how my co-workers had been led to dismiss and doubt environmental issues that might end up costing corporations money.

Now, the US Army started creating training comic books decades ago to help the hard of thinking - or those with poor educations - absorb data and concepts. Maybe we should commission a few.

Pappenheimer

Oger said...

They are much more tightly regulated and profit-limited and state assisted, though.

It comes at a price, though. Our health care system becomes increasingly unattractive to skilled workers (payment, working conditions, culture of disrespect), and all governmental attempts to cure the problem seem to fail.

But we have strike season....

In other News:
https://edition.cnn.com/2023/04/04/europe/germany-mayor-syrian-refugee-intl/index.html

"It doesch not matta were you come from asch long asch you work hard enough"

And ...Trump:
Die Zeit tells me he faces 34x4 years in prison .... How likely is it to you?


David Brin said...

Eric Schaetzle thank you for your cogently expressed missive. Alas, I find it to be tendentiously diverted, even illogical.

1) The enlightenment experiment (EE) of the last 300 years involved many, many theoretical discussions of ethics… and almost none of them interest me. Because I know that I am a delusional ape, transfixed by ornate-sounding incantations, and I have seen what delusional incantations have done in the past, even when well-intended. I care about outcomes, especially those that increase the likelihood that this ape’s descendants will deal with these matters far better.

Even utilitarianism – which comes closest – misses the mark with ‘greatest good for the greatest number.’ Whose good? By whose standards?

2) What the EE has accomplished that is of pragmatic utility is providing systems that maximize critical input from an ever-widening pool of perceptual participants, reducing the greatest crime of feudalism… waste of talent. Plus maximizing discovery of delusional errors. These two endeavors have resulted in by far the most successful – by any measure – civilization in all history. And the most self-critical, with millions determined not to repeat mistakes of past generations.

3) This took more than simply feeding and educating more children, expanding inclusive rights, flattening hierarchies and encouraging criticism. It took the creation of systemically regulated ARENAS within which creative adversaries can compete in ways that maximize positive sum outcomes by reducing cheating and minimizing blood on the floor. (Nature is fecundly creative - it made us - but spectacularly bloody and inefficient.)

These highly-creative arenas of ritual competition are markets, democracy, science courts and sports (some might add journalism). I describe the process here: For a rather intense look at how "truth" is determined in science, democracy, courts and markets, see "Disputation Arenas: Harnessing Conflict and Competition." (https://www.davidbrin.com/nonfiction/disputation.html.)
This early version appeared as the lead article in the American Bar Association's Journal on Dispute Resolution (Ohio State University), v.15, N.3, pp 597-618, Aug. 2000.

By keeping competition flat, open, and fair, these arenas become positive sum. But maintaining them requires constant attention, fine tuning and hard work.

David Brin said...

E.S.: You claim that: “ once the technological/ developmental gap is sufficiently large those dynamics which operate largely under our control and in our favor can quickly change, and the former allies become the new masters”.

But there is NO such cause-effect imperative or automatic outcome! Oh, sure, there is always an attractor in each of the arenas for cheaters to find ways around the rules, often conniving together. That is the feudal attractor state, driven by primitive male reproductive strategies that pervade nearly all species...

...but so? Preventing that failure mode is one of the jobs of open, enlightenment politics… and THAT is a major reason why the world oligarchy’s top priority is to destroy “politics” as a means of deliberative negotiated problem solving, especially in the USA. And sure, that plot has been successful… for now.

What YOU neglect is the long history of successful efforts to control and stymie that attractor tendency. Including the way that YOU and YOUR VALUES fit into it all, since you are a product of the most extensive propaganda campaign ever waged. Called Hollywood. I find it bizarre how many folks - bright ones - seem addicted to the notion that "I and just a few others invented suspicion of authority!"

Not.

See VIVID TOMORROWS: Science Fiction and Hollywood – http://www.davidbrin.com/vividtomorrows.html

Finally, I have been a guest on Jim Rutt's podcast and respect some of the mavens whom you cite. … And deem their dour musings to be interesting… and almost utterly irrelevant. Not one shows the slightest sign of awareness of the complex machinery for enlightenment positive sum games of which they are a part. If YOU are even slightly interested, I recommend The Transparent Society.

David Brin said...

Pappenheimer here is my pertinent comic book:

http://www.davidbrin.com/tinkerers.html

locumranch said...

Dr. Brin clarified his main argument well with his emboldened amendment:

It is an old problem, with hard-won solutions generated by folks who I can now see were much smarter than today's purported genius-seers. Alas, no one seems remotely interested in looking at HOW we achieved that miracle, or how to go about applying it afresh, to new, cyber lords... by breaking up power into reciprocally competing units and inciting that competition to be positive sum.

He offers an interesting historical take on how we might neutralize our potentially tyrannical AI 'cyberlords' -- in an entirely peaceful manner -- by playing one off against another (aka 'reciprocal accountability').

My take on human history is somewhat different, as I remember a series of rather bloody revolts against our feudal aristocratic overlords that tended to involve guns, knives, guillotines and tumbrels.

Is there such a thing as a cyber_guillotine, a cyber_Rasputin or cyber_firing_squad?

As for my last post, it was indelicate, inarticulate & offensive:

In my own ineloquent way, I was merely trying to point out that there is an irresolvable contradiction between a changeable & improvable rule set and a fixed & immutable one, as any rule once altered implies that other rules may also be altered, so much so that the so-called rules are not really rules at all.

As a matter of example, I'm watching the Trump NYC shitshow right now and it's shaping up into a classic negative sum event, as it will likely convince about half of the US population that our legal system is a charade REGARDLESS OF OUTCOME because US Democrats will feel betrayed if Trump is acquitted & US Republicans will feel likewise if Trump is convicted.

Plan accordingly and God Bless.


Best

Larry Hart said...

locumranch:

because US Democrats will feel betrayed if Trump is acquitted & US Republicans will feel likewise if Trump is convicted.


Well, yes, since Democrats will be upset if justice isn't done and Republicans will be upset if justice is done, there's no way to satisfy everybody. Now that we know that, what do we know?

David Brin said...

Stepping back just barely from the precipice of being banned "As for my last post, it was indelicate, inarticulate & offensive:"

A non-apology apology that steps back, just barely enough, for now. Without mentioning the fact that his perceptions of our positions - moral, historical, philosophical - are almost always simply wrong, generally strawmen set up for his incantatory masturbation, and very often vile... but worst of all his utter incuriosity when we protest.

But one thing. The guillotines and tumbrels were IN ORDER to set up systems that thereafter might function as enlightenment arenas. It is a difficult task and the French continue to find it difficult...

... while the brits did it in smaller increments that smoothed out a bit more. The Dutch even more. And the US... despite 8 phases of civil war (three of them losses and steps backward) our increments were massively forward refining those 5 arenas.

So much so that all the world's oligarchies united to destroy US politics. Making us (alas) almost 5% as dysfunctional as he says.

Watch it goombah. You are on thin ice.

Larry Hart said...

Dr Brin:

Without mentioning the fact that his perceptions of our positions - moral, historical, philosophical are almost always simply wrong, generally strawmen set up for his incantatory masturbation, and very often vile... but worst of all his utter incuriosity when we protest.


I can be charitable enough to perceive that his point is a kind of indirect proof. He attempts to demonstrate that v implies not-v where v is any Enlightenment value. "You claim to value tolerance, but you show your hypocrisy when you refuse to tolerate intolerance." Or "When the democratically expressed will of the people is to end democracy, you refuse to go along." Even "If you're so against cheating, then how about you don't cheat while the other team does and see who wins." That sort of thing.

Then again, while he hasn't done so much lately, he used to slander me personally in pretty offensive ways. So I'm indeed looking forward to Their Lord's imminent resurrection day, and the accompanying end of the test of character which is Lent.

Robert said...

Handing a book to "right-wing Nazis that" believe "Trickle-Down works" may not work, as I found out when I brought "Merchants of Doubt" to work to show exactly how my co-workers had been led to dismiss and doubt environmental issues that might end up costing corporations money.

Simply giving them information won't do much to change their minds, because you aren't trying to change a carefully-considered position arrived at after much pondering, but rationalizations of a position that is a marker of tribal identity.

I recommend reading this book, not for your Nazis but for yourself:

https://www.harvard.com/book/saving_us/

If you are serious about changing minds, you will likely have to do it one mind at a time, first finding shared values. Whether this is possible, and whether you want to put in that kind of effort, I don't know. Hayhoe (a scarily bright woman) lays out tactics that work, many of which will transfer from environmental to economic matters.

Alan Brooks said...

Give him his due, Loc is surely a better physician than a philosopher; otherwise we might have seen a headline regarding him:

*Doctor Indicted For Manslaughter*

Eric Schaetzle said...

Thanks for your response, David! Exactly, "by keeping competition flat, open, and fair, these arenas become positive sum. But maintaining them requires constant attention, fine tuning and hard work". Your reasoning here is solid and well supported. Please note: I didn't claim that the failure mode was inevitable, rather I'm drawing our attention to its existence as a live threat with historical precedent. Reciprocal accountability is indeed effective within many contexts, and yes, there's a long history of successful efforts to stymie the feudal attractor state. So what I'd like to convey is how that constant tuning might occur, since preventing that failure mode is not some minor detail. Consider my thoughts to be an addendum to your well argued post. I do not intend to contradict it (as it is likely the best I've seen thus far on this topic), but I would like to provide some additional useful ideas on how it might be improved to a more workable formulation.

I'm going to refer to ideas that Donella Meadows made famous in her “Leverage Points” paper when she observed that problems that emerge at one level can only be meaningfully addressed by appealing to a higher integrative level. We can't easily solve a problem by working exclusively within the same domain within which it emerged. Nor can we solve it by switching to another domain that is nonetheless at the same level as that of the problem. To provide an example, we can’t solve problems in ethics by creating more ethical frameworks (yeah, this is why AI ethics conferences fail). And similarly, we can't fix problems in regard to power by simply proposing other power structures. What we need are grounds by which to adjudicate between ostensibly equivalent frameworks and structures.

(continued below...)

Eric Schaetzle said...

(continued from above...) Without such grounds, while we might initiate the dynamics of some new metastable regime (and this is what you are proposing with reciprocal accountability), in order for it to persist for any length of time or develop above a certain baseline, the support of balancing negative feedbacks are necessary. Here's your reference to "constant attention, fine tuning and hard work". Where does the impetus for all this hard work and fine tuning come from? It comes from the phenomenology of the affected agents (that's us). Phenomenology is at a level above both power and ethics, as it provides the explanatory toolkit by which we can understand these and address problems within either domain. It provides the grounds by which to adjudicate between ostensibly equivalent frameworks and structures.

I’d suggest that it may be dangerous to rely on the bare fact of reciprocal accountability to avoid failure modes if we don’t understand the phenomenology of the agents involved. If it is to be effective, we need some understanding of the features of this higher level above the simple facts of power dynamics. If we don't have any understanding of this higher phenomenological level, then we can't use it to effectively adjudicate differences at the lower levels. And this is where McGilchrist’s ‘metatheory of attention’ might come in. It provides a perspective on the phenomenology of the affected agents (us) who are actually directing the evolution of the system of reciprocally accountable artificial agents, the AI themselves, and evaluating how they might ‘fine tune’ them to avoid a failure mode. Why should anyone care about that? To prevent a failure mode you need to know what you are looking at. You need to see it coming from as far away as possible. You need as much leverage as you can get. You need to operate at the highest level.

From here we can see how alternative frameworks and structures, in both ethics and power, are rooted in evolutionary biology, and why these have the societal consequences we observe. From this vantage we can adjudicate between frameworks as to which is the more veridical and trustworthy, and we can (all importantly) be in the best position possible to guide their development and persistence. So any answer to a problem at the ethical framework level or the power structure level must be couched in the terms and language of the level above that. In this case it is the phenomenological level. Reciprocal accountability is metastable with the live possibility of failure, so I’m pointing to resources that might allow us to better understand and mitigate that risk. But… maybe none of that’s necessary.

scidata said...

Re: books that change minds

Science fiction is the bedrock of rationalist literature. Everything else is commentary.

David Brin said...

Eric S, yes the notion of many organizational levels is one that I talk about a lot in EARTH (1991). In nature, society and our own brains, what is COMPETITION at one level can be viewed as COOPERATION at a higher one. In fact, each of the five main arenas involves upper-level cooperation to create and enforce rule sets that allow units below to compete in positive-sum ways... sports rules, market regulation, court procedures.

In other words: "I’d suggest that it may be dangerous to rely on the bare fact of reciprocal accountability to avoid failure modes if we don’t understand the phenomenology of the agents involved. If it is to be effective, we need some understanding of the features of this higher level above the simple facts of power dynamics."

Please do look at my 2000 paper on these arenas. I've developed it further (that paper doesn't mention sports!) But it addresses many of your points.

Our problem is that the competitive arena called 'politics' - in which we are supposed to negotiate and adjust rule sets for three of the main arenas (markets, democracy and courts) - has been deliberately poisoned for 30 years, ever since Dennis "friend to boys" Hastert declared his 'rule' against any high Republican ever again negotiating with a Democrat in good faith. The resulting sclerosis is intended to pave the way for a return to feudalism, alas.

"To prevent a failure mode you need to know what you are looking at. You need to see it coming from as far away as possible."

Sure. One role for good science fiction. It is also the role - perfecting better MODELS of reality - in which AI might make the most practical positive difference... if done in the open.

Again, while I understand your way of putting these issues, I feel the arguments can be made more simply and thus perhaps more persuasively.

Welcome to the Contrary Brin community!

--see
https://www.davidbrin.com/nonfiction/disputation.html

This early version appeared as the lead article in the American Bar Association's Journal on Dispute Resolution (Ohio State University), v.15, N.3, pp 597-618, Aug. 2000.

Eric Schaetzle said...

Thanks, and agreed. So as I build the sort of picture I want to convey here, I’d like to establish that we both understand the need for reciprocal accountability, and the need for a higher level perspective upon these interactions from which they can be seen to serve some larger dynamic, perhaps for example ecological or political stability. The dynamics of power viewed horizontally within a level is indeed shaped by the higher level system.

You mentioned the importance of “perfecting better models of reality”. This is the reason for my reference to phenomenology and a metatheory of attention. Because these ideas expand and inform our concept of incentives. And incentives are a key to making all this work, informing our ideas concerning reciprocal accountability, mutually recognized rules, and the potential breakdown of these in failure modes. Here’s a truism: an agent can only be incentivized by what they are capable of attending to. So the question is what do they see? Do they only want to “grab and get” some energy source or baryonic matter? Or is some (as yet perhaps nonexistent) AI motivated by an artificial analogue of aesthetic value or social virtue? By such things that we would normally only associate with humans? Please note: at this point I’m not concerned with the specifics of any of these incentives. I’m concerned with what might make these incentives a possibility to begin with. I want to understand the “way of attending”, the artificial phenomenology that would permit this to occur in the first place. How do we understand and characterize THAT. Why? Because if we understand how they attend, and their capacity for attention, then we might understand better what incentivizes these new AI agents. We might have a better idea of what could lead to a failure mode. We might understand how to “fine tune” their attention such that the most dangerous failure modes can be prevented. And we might enhance their capacity for reciprocal accountability and complementarity. Among other things we could fine tune within the "perceptual rule set" of this higher level.

We need a language for describing all of this. Hence my reference to McGilchrist’s metatheory of attention. It is an obvious place to start, as he already described two ways of attending, two opposing processes, operating in parallel inside the brains of virtually every animal, humans included. And yes, each way of attending is incentivized differently, with very consequential implications for OUR failure modes. That’s actually his thesis, though he didn’t use those specific terms.

Alfred Differ said...

Larry,

That ambition quote looks like one attributed to François de La Rochefoucauld.

"One often goes from love to ambition, but rarely returns from ambition to love."

or

"Love often leads on to ambition, but seldom does one return from ambition to love."

No doubt it sounds better in the original French.

Thing is… my effort in grad school definitely qualified as ambition, but I had to avoid love or I would never have finished. There is another applicable quote about that from Pearl Bailey describing men with and without ambition. 8)

—————

What I was doing with consideration of impossible orders was trying to imagine what happens in situations that don't follow the happy path.

Well… that's the quality assurance way of thinking… and you're back to the world of software engineering.

1) If the car is designed as an expert system, it will try to do as it's told. Deviations from that will all be listed in the exception handler. No others will be considered until a run-time error occurs and then you get to find out how resilient your vendor made the vehicle's brains.

2) If the car is designed to emulate your daughter… well… you probably already know what she'd do if you've ever let her drive you somewhere.


The expert systems have 'wants' that are designed into them. Pretty straightforward.
The emulated minds have wants that look more like how we do it. That's also pretty straightforward, since most of us aren't capricious about our preferences. We might not know them until we discover them, but that's not the same thing as a behavior that will randomly annoy you.

Once established that an AI can decide to stop trying if an action is impossible, can it also simply decide to stop trying because it would rather not do that thing?

Probably only the emulated minds… and we should be okay with that. In fact, that should probably convince us NOT to put sapient minds in cars and other utility devices. Imagine an emulation of yourself and how bored you'd get running an elevator.
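Alfred's expert-system picture can be put in code. The sketch below is a toy dispatcher, not any vendor's actual design: known orders are handled from a designed-in table, and an impossible order falls through to whatever exception handling was provided. All class and method names here are hypothetical.

```python
class ExpertSystemCar:
    """Toy sketch of the expert-system car: it does as it's told, and
    anything outside its designed-in handlers surfaces as an exception."""

    def __init__(self):
        # The 'wants' designed into it: a fixed table of known orders.
        self.handlers = {
            "drive": lambda dest: f"driving to {dest}",
            "park": lambda _: "parking",
        }

    def order(self, command, arg=None):
        try:
            return self.handlers[command](arg)
        except KeyError:
            # The vendor's exception handler decides how resilient this is.
            return f"cannot comply: '{command}' is not a designed-in behavior"

car = ExpertSystemCar()
print(car.order("drive", "home"))  # within the designed rule set
print(car.order("fly", "home"))    # impossible order: falls to the handler
```

The point of the toy: an expert system never "decides it would rather not"; refusal only exists as a branch someone wrote into the exception table.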

Alfred Differ said...

Duncan,

Rather than a single payer system, I prefer a set of large payers where "people like me" can jump from one to the other periodically.

What makes our current medical arrangement insecure is that we've tied the bargaining units to our employers and put them in charge of negotiating on our behalf. Lose your employer and you lose your bargaining edge. I think we've proven that to be an unwise choice when it is the only (non-competitive) option available to us.

1) The costs to "people like me" are largely hidden leaving me with no f^(*ing clue what the real market price is for something I want. This feeds our current issue with hyperinflation of these costs.

2) Putting my employer in a position where trimming sick people from the payroll improves his bottom line TWICE OVER is dumb. They are already motivated to trim based on task accomplishment and efficiency needs. Giving them more reasons, especially ones connected to a hyperinflation sector is monumentally dumb.

3) Having my employer be my health care insurance negotiator puts "people like me" in a bind where our displeasure with the insurer can't be expressed without sounding like displeasure with the employer. We know how much our bosses love to be told they've f*^%ed up. People get fired and lawyers get hired.

I can go on and on about this and have elsewhere. We can do better… and should, but we should NOT ditch competition by imposing a single payer system. That just moves "the dumb" to people who have even more power over us.

———

As for being sacked for no reason, I'm going to strongly disagree with you. I've seen tenure systems in operation. NO! Absolutely NOT!

A job is not your property.

(grumble)
This is a big part of why I hate defining class by measures of security.

gregory byshenk said...

Alfred Differ said...
I don't see this as complicated. There were three times in the 80's where I made choices that were tightly bound by my absence of income.

I went about three winter months in '85 with no income, but knew I could ask for help from home if the fourth month arrived and I couldn't pay rent. I was a very part-time TA back then and landed work in the spring. That first check arrived with about $20 in the bank. My foothold on adult life obviously wasn't secure.
[...]
So… my security was at its minimum during that time. I lived way below the official poverty line. I didn't HAVE to. I did it as a trade that wound up working out for me. It was a close call, though.


No doubt your foothold on the life you were aiming for was not secure, but from what you say, your life was - or at least as secure as that of anyone else.

You made choices to forgo current income for potential future reward, which is often a reasonable choice. But these were nonetheless choices, and choices that you could make because you had some degree of security, and failure - if it had occurred - would not have been life-destroying. There are a lot of people for whom the loss of three months' income would mean a very real risk of homelessness.

That someone would come looking for them and rescue them from the street is a safety net that a lot of people do not have.


gregory
I consider 'extraordinary' to mean…

Alfred
That means we don't agree on what qualifies as extraordinary. Cars break down. Car batteries die. Kids get sick, causing you to take time off work. Etc. These things are difficult to predict precisely but trivial to predict over long periods. If one doesn't plan for them, one isn't adult-ing.

Knowing something is going to happen doesn't mean one will be ready for it when it does. At no point, however, should one say they knew it was coming and knew they wouldn't be ready. Sorry. We don't know the future well enough to say that with precision. What we CAN reasonably say is that one won't be ready if one remains on their current course through life. Think about retirement savings for example. If one isn't setting aside some pennies now and then, they know they won't be ready unless they change course.


Perhaps my phrasing was not as clear as it should have been, but from earlier comments I think it should be clear that things like cars breaking down, people getting sick, and so forth, are the "ordinary vicissitudes of life". The point being that no one - including those we consider secure - can be prepared for everything. Even those who are 'secure' can suffer from war, disaster, or other calamity.

But to prepare for what one can reasonably expect - even for what one "knew [...] was coming", requires having the resources to do it. If it requires all of one's resources (and possibly more) just to keep a roof over one's head and food on one's table, then just knowing that one should prepare is insufficient. Even knowing that one must change course is insufficient if one has not the resources to do that. If one is already at the minimum, then one cannot choose to forgo something to invest in the future.

The "pulling oneself up by one's bootstraps" myth is a nice one, but believable only for those who have never been in a position where all they had were their own bootstraps.

gregory byshenk said...

Gregory Byshenk
The question was not "how does one start a business or license something", but about a "plausible scenario where this is possible for anything more than a very small part of the population".

Alfred Differ said...
Okay. Fair point.
[...]
You can make the case that very few ever break OUT of the shallow end, but I'll point out that one's only chance of doing so requires us to step INTO the shallow end first. If I stay where I am as an employee, I'm not even in the water!


But you still fail to address it!

First, it is true that "very few ever break out of the shallow end" - but more importantly it is (I suggest) not possible for more than a "very few" ever to do so. The vast majority of those who "step into" the water will never do so, and many of them will end up in a worse position than if they had not so stepped in.

And this leads to the second point: if, for example, one has mortgaged their house to start a business, and the business fails, then that failure can be devastating.

This is why so many "entrepreneurs" are those who already have capital (be that financial, social, or cultural). If one can afford to give up income for a year, or find friends or others to invest in one's new business (as has been the case with so many of the so-called "self made" successes), then one is not risking catastrophe in the event of failure. Without those resources as a safety net, the cost of failure becomes too high for most.

The real reason I raised farmers as an example is precisely because they don't fit a single class descriptor, but they USED to. One doesn't have to back up far in history to find the vast majority of humans living as subsistence farmers. They were ALL peasant class.

We could discuss the proper placement of farmers by class, and when and to what extent a farmer was properly a 'peasant' (in a traditional class structure 'peasant' and 'subsistence farmer' are not equivalent - and as a side note consider that Thomas Jefferson considered himself a 'farmer'), but it isn't clear to me how it is relevant to the current discussion - even if we were to say that "all farmers used to fit in a single class descriptor". Maybe there is an argument for relevance, but I would want to see it.

gregory byshenk said...

Alfred Differ said...

I know this was in response to Duncan, but...

Rather than a single payer system, I prefer a set of large payers where "people like me" can jump from one to the other periodically.

What makes our current medical arrangement insecure is that we've tied the bargaining units to our employers and put them in charge of negotiating on our behalf. Lose your employer and you lose your bargaining edge. I think we've proven that to be an unwise choice when it is the only (non-competitive) option available to us.


Is your idea then that employers should be prohibited from providing health insurance as a benefit? If not, then employers (particularly larger ones) will maintain their "bargaining edge" and you won't change much in practice.

After all, you can purchase your own insurance right now; you just won't get the same deal as an employer with thousands (or tens of thousands) of employees.

Lena said...

Robert,

Thanks a bunch for the references. I have to run to work, and yesterday I was too tired to look them up, but I'll try to make the effort tonight.


PSB

Lena said...

Dr. Brin,

While I see the power of wagers to shut up "manly" egomaniacs, it requires that you be quite certain that your proof is lined up, that you've anticipated any likely counter-arguments, and that the moron you are betting against is willing to accept any actual evidence. But most important: it requires that you have money. Since my pineal gland went cuckoo, I have been pretty close to unemployable, and Disability turns down 75% of applicants, so I'm not in the best financial condition for a wager campaign (or for pursuing my hopes and dreams, etc). If you have the money, go for it! All I have is what's left of my brains.


PSB

Robert said...

Science fiction is the bedrock of rationalist literature.

I've read way too much science fiction to do more than politely disagree with that assertion.

Larry Hart said...

gregory byshenk:

The "pulling oneself up by one's bootstraps" myth is a nice one, but believable only for those who have never been in a position where all they had were their own bootstraps.


It also invokes as its main metaphor a physical impossibility.

Darrell E said...

Eric Schaetzle,

I'm not familiar with the literature you've cited or the field of AI, so please forgive if these are stupid questions. From your comments here I can't tell, are the people you reference theorizing about human cognitive behaviors? Or are they attempting to theorize about general principles they think would apply to any kind of mind? In the first case, it seems to me unwarranted to assume that anything that might accurately model any aspects of human cognition will apply to any AI. In the second case, that seems to me to be futile. An interesting exercise perhaps, but there is no way to test any such models sufficiently enough to warrant any confidence that they correspond to reality.

scidata said...


Re: SF as rationalist literature

I was home for much of the winter, giving me time to catch up a bit on reading. I spent a lot of time on Steven Pinker, the quintessential rationalist. He certainly defines and explains rationalism beautifully, but he doesn't do a great job of advocating for it IMHO because he considers narrative to be more of an obstacle than a vehicle. In McGilchrist terms, he's not keen on giving the right hemisphere a voice. In computational psychohistory terms, syntonicity. He does however completely nail the importance of Bayesian 'base rates', which is where my own predilection for Calculemus! comes from. Just a moment's reflection on this one concept can sweep away delusion ranging from quirky personal biases to oppressive doctrine to dangerous conspiracy theories.

IIRC, he borrows the ancient line about 'everything else is commentary' in describing Bayesian reasoning. And we all tend to sound like those we've been reading recently. We humans are so cute. We're vastly more than Analytical Engines, but not infinitely more. I'm reminded of this every time I wire up a bunch of NAND gates.
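scidata's point about Bayesian base rates is easy to make concrete. The numbers below (1% prevalence, 99% sensitivity, 5% false-positive rate) are illustrative choices, not drawn from any source; a minimal sketch:

```python
# Base-rate illustration: even a very accurate test on a rare condition
# yields mostly false positives, because the base rate dominates.
prevalence = 0.01        # P(condition) -- the base rate
sensitivity = 0.99       # P(positive | condition)
false_positive = 0.05    # P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive  # Bayes' theorem

print(f"P(condition | positive test) = {posterior:.3f}")  # ≈ 0.167
```

A positive result from a "99% accurate" test leaves only about a one-in-six chance of actually having the condition: exactly the kind of delusion-sweeping moment's reflection the comment describes.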

Larry Hart said...

A totally no-class "concession" speech from the losing loser from Losertown in the Wisconsin supreme court election. The worst part begins around minute 3:45.

https://www.youtube.com/watch?v=Uk4yRKZyd0g

"I wish in circumstances like this, I could concede to a worthy opponent. But I do not have a worthy opponent to whom I can concede. This was the most deeply deceitful, dishonorable, despicable campaign I have ever seen run for the courts."


Don't let the door hit you on the way out, snowflake.

scidata said...

Best line from the NYC court coverage yesterday, from when DT got hit by the closing door he expected a servant to open for him:

It's rare to see an ass get hit by the door on its way in.

Larry Hart said...

scidata:

Best line from the NYC court coverage yesterday, from when DT got hit by the closing door he expected a servant to open for him:


Especially schadenfreudelicious after his comments to police about not being nice to suspects by keeping their heads from hitting the police car roof while stuffing them into the seat.

Robert said...

Don't let the door hit you on the way out, snowflake.

Apparently getting ready to impeach the just-elected judge, which is eminently possible because Republicans control enough seats that if they vote the party line she's out. She's just been elected and he's already claiming that she has "failed", which is grounds for impeachment.

https://truthout.org/articles/liberal-judge-wins-wi-supreme-court-race-but-gop-gains-impeachment-powers/

Larry Hart said...

Robert:

Apparently getting ready to impeach the just-elected judge, which is eminently possible because Republicans control enough seats that if they vote the party line she's out.


I'd be in favor of guillotines as the solution to that problem.

Seriously.

locumranch said...

For once, I will not disagree with Dr. Brin when he asserts that the "guillotines and tumbrels were IN ORDER to set up systems that thereafter might function as enlightenment arenas", but played little actual role in the eventual triumph of western democracy over aristocratic & feudal tyranny.

Instead, I will simply wish all concerned a Happy Passover, a religious holiday which celebrates the emancipation of the Jewish people from Egyptian tyranny in a way that had absolutely nothing to do with violence, bloodshed or the slaughter of their oppressor's firstborn sons.


Best

Larry Hart said...

You're going to blame literal Acts Of God on the Jews?

David Brin said...

PSB I know the wager gambit has restrictions and is limited to the well-prepared with $ to risk. Still… it works. They ALWAYS flee, often in terror, leaving their macho in tatters for all to see. And when they refuse to find a panel of retired senior military officers as judges, it brings their fear of those heroes out into the open.

LH, for Kelly to call 'despicable' an opponent who wants to end the horrifically profound treason of gerrymandering is spectacular evil.



Eric good questions. In Nature, what gets reinforced is that which enables you to have more, and more successful, descendants. This has led to the wide variety of male reproductive strategies that have major effects on all mammalian species, making them variously competitive. And this is the root driver behind 6000 years (more likely 12,000) of brutal feudalism as the dominant human form in societies with agriculture. In fact, this is one of my top theories for the Fermi paradox, since the imperative pervades nearly all complex species.

We found a narrow golden path out of feudal stupidity, but I reckon we only have a short time to make the Enlightenment stick. It will require establishing incentives like those we need for AI… to perceive self-benefit in positive-sum ways. Those 5 'arenas' are examples of harnessing competitiveness in supervised ways to achieve positive-sum outcomes. One can picture ways positive outcomes could be rewarded – e.g. with a reward of a quadrillion supercomputer cycles for whistle-blowing on rival AI plots.

ATTENTION is one of three major things that were amplified in the Renaissance… printing expanded what we KNOW, lenses expanded what we can SEE, and perspective in art expanded what we can NOTICE or attend to. Since then, every generation has seen similar tech-driven amplifications. Some of them damn near killed us (e.g. radio in the 1930s).

Alan Brooks said...

We’re not supposed to reply to him, but it appears he is moving towards Lyndon Larouche style comments.
Perhaps he wishes to be banned from CB, so he won’t feel obliged to put the effort into doubling-down.

Larry Hart said...

Robert redux:

Apparently getting ready to impeach the just-elected judge, which is eminently possible because Republicans control enough seats that if they vote the party line she's out.


Is this how the new shooting war will play out? I've been wondering what a new hot Civil War would look like without recognizable armies fighting for territory. Could it be that the victims of cheating politicians decide that the only remedies are Second Amendment ones? And then the other side vows eye-for-eye revenge?

I find that scenario disturbingly plausible.

Robert said...

Is this how the new shooting war will play out?

Maybe? A lot depends on how authorities react. I kinda wonder how much the response would depend on whether a democrat or republican was attacked.

Came across this ad, which mentioned the prevalence of Fox in the military (something David has remarked on), and I've also heard the same is true in most police departments. Quis custodiet ipsos custodes? and all that.

https://youtu.be/B9xqczlMqSM

Also wonder how much ordinary people are willing to lose everything rather than just hunkering down and trying to get by. How long would people say to themselves "next election" before realizing that their party will never be allowed to win the election? In WWII, after an actual invasion, only 1-3% joined the Resistance.

Robert said...

Oh, and I should note that I'm not American. Fairly left-wing Canadian (immigrated from Britain) and your Bernie Sanders would be slightly right-of-centre up here so that informs my opinions on your politics, especially the tendency of your batshit-crazy neo-fascists to export their movement into other countries.

Larry Hart said...

Robert:

the prevalence of Fox in the military


In the aught-aughts, the late Ed Schultz fought to have his talk radio show included on Armed Forces Radio the way that three hours of Rush Limbaugh already was. The push back from the army against liberal talk was along the lines of, "Why should our soldiers have to hear three hours about how bad America is?"


How long would people say to themselves "next election" before realizing that their party will never be allowed to win the election?


Wisconsin might well be a test case. Or North Carolina.

David Brin said...

Jiminy. While I admired his super-high fan sci fi... which was somewhat above mid-pro level... and I've found some of his insights interesting, the self-avowed super genius Eliezer Yudkowsky has always been rather weird, gathering around himself a community of... well... one common trait appears to be incel-ness... who demand that any data center working on AGI should be nuked.

Here's a twitter-stream describing many parts... though I utterly LOATHE twitter streams as a mode of communication.

https://twitter.com/xriskology/status/1642155518570512384?s=43&t=EeYMjSJ069m39Sc1LclfeA

Oy. The accusation of anti-semitism is one I see little evidence for. But the parallels with a cabin in Montana seem increasingly apt.

scidata said...

Chilling. I read Yudkowsky years ago because he was an early champion of Bayesian reasoning. I've long since gone back to widespread layman rationalism like Kant and Asimov advocated.

Unknown said...

Larry/Robert,

I'm now over a decade out of blue (USAF blue, to be precise) and I used to try to get base hospital waiting rooms to switch from Rush and Fox to neutral stations (sports! cooking!) because not everyone in the military likes that cr*p and he was probably raising blood pressures all across the military health system. Unfortunately Fox is still predominant.

Pappenheimer

David Brin said...

I assert the officer corps is departing the GOP while Fox indoctrinates the noncoms. Yipe

"Apparently getting ready to impeach the just-elected judge, which is eminently possible because Republicans control enough seats that if they vote the party line she's out."

If gerrymandering gets tossed, half of them lose their jobs, next election.

So they would cheat to protect cheating.

Tony Fisk said...

In his book 'Music of the Primes' Marcus du Sautoy notes that the rarefied heights of some mathematical mountain ranges appear to starve some of the explorers' brains of oxygen.

"All mathematicians need to have a little bit of madness if they are to feel at home in the mathematical world. The sheer number of hours G had spent exploring at the edges of mathematics left him unable to chart his way home."

While pi can be calculated to billions of decimal places, in practical terms only about 100 are needed to calculate the circumference of the observable Universe to the accuracy of a Planck length. Euclidean geometry ceases to apply to reality long before that.
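Tony's figure is easy to sanity-check with a back-of-envelope count. Assuming a radius of roughly 4.4e26 m for the observable universe (a commonly quoted comoving value) and the standard Planck length, the number of significant digits of pi needed so the circumference error stays under one Planck length is:

```python
import math

R = 4.4e26          # approx. radius of observable universe, meters (assumed)
planck = 1.616e-35  # Planck length, meters

circumference = 2 * math.pi * R
# A relative error of 10**(-d) in pi gives a circumference error of
# roughly circumference * 10**(-d); require that to be below one Planck length.
digits = math.ceil(math.log10(circumference / planck))
print(digits)  # 63 -- comfortably under the quoted "about 100"
```

So the oft-quoted "about 100 digits" is generous; on these assumptions, roughly 63 suffice.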

Alfred Differ said...

Heh. The notion of continuity is actually pretty weird if one looks at it seriously.

Eric Schaetzle said...

Evolutionary pressures reinforced an incentive to compete for reproductive opportunities. Another basic problem every animal must solve is how to eat without being eaten (how to 'get' without being 'got'). Generalizing both of these basic constraints for the evolution of mind, there is a basic incentive “to win and avoid losing”. Note: these are not a simple inverse of each other. There is usually one very narrow pathway to evolutionary success, however very many pathways to failure. For example, to allow for both predation and predator detection an organism must deploy two very different forms of attention simultaneously. This is the neurobiological explanation for the parallel hemisphere structure of the brain. The right hemisphere is generally specialized for contextual evaluation, as with the spotting of predators, and the left is specialized for focused attention, as in the service of predation. So a bird, for example, will attend preferentially with the prey detection system while eating, while the right hemisphere is scanning the environment for context-dependent signs of predation on the bird. You can see how this sort of arrangement can apply to reproduction as well, or any other domain with asymmetric win/lose dynamics that have real consequences for evolutionary success.

As I mentioned earlier, the opponent processing of these two very different ways of attending, with their corresponding phenomenological worlds and ethical frameworks, has worked for a very long time to complement and counterbalance each other. Such dual strategies for deploying attention are cost optimal for a number of reasons. The implication I am drawing from this evidence that is directly relevant to the current topic is that if AI agents exclusively utilize only one of these forms of attention, then a stable arrangement of reciprocal accountability may be more difficult to achieve, and it is highly likely to increase our exposure to failure modes. It may be that to harness competitiveness, we may also need to harness the wisdom of 700 million years of evolution, over the course of which certain key phenomenological features have been assiduously conserved.
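Eric's two-ways-of-attending scheme can be caricatured in a few lines: a narrow, focused evaluator for exploiting a target (the bird's feeding) and a broad, contextual scanner for threats, combined as an opponent process in which the broad channel can veto the narrow one. Everything below is a toy illustration with made-up names and thresholds, not McGilchrist's model or any neurobiological claim.

```python
def focused_attention(scene):
    """Narrow channel: score only the target feature (e.g. food)."""
    return scene.get("food", 0.0)

def contextual_attention(scene):
    """Broad channel: scan the whole scene for any threat cue."""
    return max(scene.get("predator", 0.0), scene.get("motion", 0.0))

def act(scene, threat_threshold=0.5):
    """Opponent process: the broad channel can veto the narrow one,
    reflecting the asymmetry that losing costs more than not winning."""
    if contextual_attention(scene) >= threat_threshold:
        return "flee"
    return "feed" if focused_attention(scene) > 0 else "scan"

print(act({"food": 0.9, "predator": 0.1}))  # feed
print(act({"food": 0.9, "predator": 0.8}))  # flee
```

An AI built with only the focused channel would, in this caricature, keep "feeding" regardless of context, which is the failure-mode exposure the comment warns about.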

Larry Hart said...

Dr Brin:

So they would cheat to protect cheating.


Exactly. I know you won't tolerate even hints of calls for violence, so what's a better alternative recourse when the solution is no longer at the ballot box?

scidata said...

Re: restraining AI research

That ship sailed long ago. The transistor was invented over 75 years ago. Since then, any kid in a garage can change history (to compress a line from "Antitrust" (2001)). The idea of restraining or stopping A(G)I research is simultaneously laughable and tragic. Ultimately, all that's needed is a device that learns at electronic speed and can alter its own source code (only slightly above kid-in-a-garage level).

The bad stuff can only be avoided now by laying out the truth and trusting that emergent intelligence will discover and embrace it. For example, in some games, the only winning move is not to play. At some point, evolution must yield to reason and enlightenment must spread to machines.



Larry Hart: so what's a better alternative
Enlightenment.

Larry Hart said...

The Q-Anon folks might have confused their Kennedy heirs...

https://www.electoral-vote.com/evp2023/Items/Apr06-9.html

Marianne Williamson thought that she had the kooks and weirdos lane all to herself. Nope. Yesterday Robert F. Kennedy Jr., the son of Bobby and nephew of Jack, filed papers with the FEC to run for president as a Democrat. Kennedy's platform is that vaccines are harmful and Anthony Fauci is a fascist. Geez.

Darrell E said...

"There is usually one very narrow pathway to evolutionary success, . . ."

There are very many pathways that lead to evolutionary success; hence the enormous number of species that have evolved on Earth since life started. That so many species on Earth share similar traits is not because there are very few pathways to evolutionary success. It is because all life on Earth is of a single lineage, and the machinery life evolved to store and replicate information is good enough that alleles from organisms whose last common ancestor lived hundreds of millions of years ago are still similar enough to be readily recognizable and to still be doing the same job. But there is also a plethora of variation, of other paths taken.

". . . however very many pathways to failure."

That's true. There are many more pathways that lead to oblivion. When it comes down to it probably nearly all eventually do. But that's not the same thing as there being one, or a small number of possible pathways.

My ignorant opinion of AI is that if it ever happens, how it works will almost certainly be a surprise and initially unknown. It won't be per a specific human-generated design. Some of the design will be the result of internal processes that the software engineers won't understand, at least not until much further study of what has evolved.

Larry Hart said...

scidata:

Larry Hart: so what's a better alternative
Enlightenment.


Judging by the recent supreme court election, the people of Wisconsin are plenty enlightened. Their legislature is apparently captured by the Endarkenment, and they plan to use that power to hold onto the court and neuter the governor. So again, when the side who holds the levers of power uses that power to thwart democracy, how does the Enlightenment assert itself?

Lena said...

"If gerrymandering gets tossed, half of them lose their jobs, next election.

So they would cheat to protect cheating."
- Why is it that when things are this obvious, they happen anyway?

PSB

Lena said...

Eric Schaetzle,

Human cultures often have similar dualities baked in. Consider how religion results in less conflict between members of a group, but increases conflict outside the group, or how many cultures treat men as killing machines at times, then use reincorporation rituals to turn them back into (non-dangerous) human beings when they return from combat, so they won't "us". Very Lévi-Strauss. Sometimes I think that state-level societies would be better off if they went back to some of the "irrational" practices of some of the smaller-scale cultures of the world. Maybe we wouldn't have so many armed nutsacks running around pointing guns at their own fellow citizens and calling everyone they disagree with "traitors".

PSB

Lena said...

Eric Schaetzle,

Somehow I managed to miss typing the word "kill" between the words won't and "us"

My brain!

Apologies for any confusion.


PSB

Robert said...

Judging by the recent supreme court election, the people of Wisconsin are plenty enlightened. Their legislature is apparently captured by the Endarkenment, and they plan to use that power to hold onto the court and neuter the governor.

Which turn of phrase reminded me of a series by James Alan Gardner that you might enjoy:

In my new series, The Dark vs. Spark, vampires, werewolves and demons have come out of hiding, offering the Dark Conversion to anyone who can afford it. Within a generation, the richest 1% are almost all Darklings.

Then superheroes show up: ordinary joes who happened to touch a glowing meteor or fall in a vat of weird chemicals. They represent the 99%…and they’re going to provide the Dark with Adult Supervision.


https://jamesalangardner.wordpress.com/books/

The title of the first one, All Those Explosions Were Someone Else's Fault, hooked me, and I'm now eagerly awaiting the rest of the series. Along with the next Colonization High book. Dammit, Brin, stop blogging and start writing! :-)

scidata said...

Larry Hart: how does the Enlightenment assert itself?

It asserts itself when an insurrection is called and nobody comes.

Larry Hart said...

scidata:

[The Enlightenment] asserts itself when an insurrection is called and nobody comes.


I'm wondering more how the Enlightenment asserts itself in Wisconsin when they outlaw medical care for women and send Trump electors to the electoral college in defiance of their own state's voters. All perfectly legally according to their rigged supreme court and directed by their gerrymandered legislature.

scidata said...

If a great democracy falls because a handful of brain-wormed zombies pull some electoral hokery-pokery, then we have our Fermi solution and we should just shamble off back to the caves. If reason prevails, then there's little to fear, either from humankind or AGI, and we can punch our ticket to the stars. We don't do education, research, and citizen science for fun and profit, we do it for posterity. And our averageness is what gives me the most hope.

Howard Brazee said...

I wonder how long it will take to finish the job of changing the meaning of "sentient" to mean "human-like intelligence".

Well, I'm not happy with the word "Sapient" which is used to describe the surviving human species. We're "wise" and the other human species weren't? Were they intelligent?

locumranch said...

Blue pill, red pill, black pill, lead pill ...

I tire of the Sisyphean task of trying to explain 'colour' to those who cannot see, so I think that I will do as Larry_H advises & recuse myself for a while, at least until I can free myself from the terrible burden of 'giving a damn', as partisans who feel strongly about any cause soon become incapable of rational thought.

Just remember this principle as you fight your existential foes:

You will invariably strengthen whosoever you fight & struggle against, as this is the very nature of competition.

It's what Dr. Brin calls CITOKATE.


Best

Larry Hart said...

locumranch:

You will invariably strengthen whosoever you fight & struggle against, as this is the very nature of competition.


Gotcha. Can't win. Don't try.

Eric Schaetzle said...

What did I mean by "There is usually one very narrow pathway to evolutionary success"? Throughout this discussion I have referred to phenomenology as the higher level perspective we must take. Hence, I am referring to the perspective of a single agent embedded within a complex environment, not the entire possibility landscape in which the evolution of life can unfold. So to illustrate what I mean, let's take a fairly typical biological agent. This agent must be able to locate a suitable mate, fend off competing suitors, and (among those exhibiting parental care) raise offspring to reproductive age, along with all the other intermediary steps involved.

These are not trivially simple tasks to perform. Far from it! Each must meet numerous basic criteria for success. This is what is meant by "one very narrow pathway". Certainly, some local variations are possible, but only within hard limits. If the agent fails to correctly identify a compatible mate, if it is unable to compete, if all the offspring die by accident or neglect, and so on, the pathway for evolutionary success for that particular agent has been foreclosed.

But I do take your point, looking at the entire possibility landscape for the evolution of life is very useful. This higher level view can tell us many more things, arguably of greater importance than the narrow view of a single agent. As David Brin noted, "what is competition at one level can be viewed as cooperation at a higher one." And so, what is a failure mode for the single agent is not necessarily a failure mode for the larger collective to which it belongs.

Regarding the "very many pathways to failure". As I previously described, biologically evolved agents are incentivized “to win AND avoid losing”, and this requires the deployment of asymmetric forms of attention. Asymmetric PRECISELY because of the qualitative differences between the "pathways to success" versus the "pathways to failure" relative to the agency of the organism. Why is that relevant to avoiding failure modes? If agents disproportionately attend to the "pathways to success", then they risk being blind to the more numerous "pathways to failure". So we would want to "fine tune" our reciprocally accountable AI with both phenomenological capacities, deployed in an optimal ratio, etc.

I can only speculate how this might actually be instantiated in practice. But the phenomenological implications are not trivial, and they extend beyond my limited description here. Viewing AI in this manner is currently not very common, as far as I know. But we do need an evolutionarily informed perspective that draws on the insights of ecological negative feedback processes, which play out through inter-agent reciprocal accountability and infra-agent organic phenomenology, to be brought to bear upon AI development.
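Since Schaetzle says he can only speculate about instantiation, here is one minimal, purely illustrative sketch of the asymmetry he describes: an agent that weights prospective losses more heavily than gains will decline gambles that a symmetric agent accepts. The `loss_weight` parameter and the payoff numbers are invented for this example, not drawn from any actual AI training scheme.

```python
def expected_value(outcomes, loss_weight=1.0):
    """Expected payoff with losses scaled by loss_weight.

    loss_weight > 1 models an agent that attends more to the many
    'pathways to failure' than to the narrow 'pathway to success'.
    """
    total = 0.0
    for payoff, prob in outcomes:
        weight = loss_weight if payoff < 0 else 1.0
        total += weight * payoff * prob
    return total

# A gamble: one path to a big win, an equally likely path to ruin.
gamble = [(+10.0, 0.5), (-8.0, 0.5)]

symmetric = expected_value(gamble, loss_weight=1.0)  # 1.0: takes the bet
averse = expected_value(gamble, loss_weight=2.0)     # -3.0: declines it

assert symmetric > 0 > averse
```

Both agents see the same world; only the ratio of attention paid to success versus failure differs, which is roughly the "fine tune ... in an optimal ratio" question raised above.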

Paradoctor said...

Howard Brazee:
The solution to the Fermi Paradox is that there is no intelligent life anywhere in the universe, including Planet Earth. There's civilized, technological life, such as the ants, bees, termites, and lately, us. Earth's radio output is evidence for technological life, but closer inspection does not reveal intelligent life.

When AI researchers simulate human intelligence, that will be clear, detailed proof that human intelligence itself is a simulation. But don't expect our cybernetic children to be much wiser than us. They will display AS: Artificial Stupidity.

We've all seen plenty of AS from AI. For instance, I regularly get a statement from my bank charging me $0 for a $0 equity loan. They and we obey the 90-10 Rule:

Analog systems such as humans are 90% useful, and 10% useless, 100% of the time. Digital systems such as computers are 100% useful 90% of the time, and 100% useless 10% of the time.
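Taken literally, the two halves of the 90-10 rule give the same average usefulness but very different variance; a quick back-of-the-envelope check (my arithmetic, not part of the original comment):

```python
# Analog: 90% useful, 100% of the time -> usefulness is a constant 0.9.
analog_mean = 0.9
analog_var = 0.0  # consistently semi-cruddy, never surprising

# Digital: 100% useful 90% of the time, 100% useless 10% of the time.
p_ok = 0.9
digital_mean = 1.0 * p_ok + 0.0 * (1 - p_ok)
digital_var = ((1.0 - digital_mean) ** 2 * p_ok
               + (0.0 - digital_mean) ** 2 * (1 - p_ok))

# Same expected usefulness; the digital system is all-or-nothing.
assert analog_mean == digital_mean
assert digital_var > analog_var  # 0.09 vs 0.0
```

Same mean, wildly different variance: exactly the "consistently semi-cruddy, or perfect until disaster" trade-off discussed later in the thread.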

Brin's advocacy of free-fair-flat competition among the AIs means abandoning their intelligent design (for whom would we trust as a designer?) in favor of self-organization by Darwinian market. This has the virtues of chaos (freedom, flexibility) and its vices (confusion, ignorance).

Thus they will have features in common with us. To reinforce this commonality, I recommend that AIs rank Asimov's Three Laws in the same order that we do:
Protect Oneself > Protect Others > Obey Orders.

Unknown said...

Paradoctor,

If an AI's first duty will be to protect itself, self-driving trucks are a very bad idea.

Pappenheimer

Paradoctor said...

And so are human-driven trucks, for the same reason. But we need trucks, and trucks need drivers. The question is, which end of the 90-10 rule to favor? Consistently semi-cruddy, or perfect until disaster?

Paradoctor said...

Besides, I don't exactly recommend Self>Others>Orders, I just predict it as the natural survivor of free-flat-fair Darwinian market competition. There are 5 other ways to order the three laws, some I do not recommend implementing. Asimov's order, Others>Orders>Self, the "slave-bot", corrupts the slave-bot's owner.
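For what it's worth, the count checks out: three laws admit 3! = 6 strict orderings, i.e. Paradoctor's pick plus the five alternatives. A throwaway enumeration:

```python
from itertools import permutations

laws = ("Protect Oneself", "Protect Others", "Obey Orders")
orderings = list(permutations(laws))

assert len(orderings) == 6  # Paradoctor's ordering plus 5 others

for ordering in orderings:
    print(" > ".join(ordering))

# Asimov's "slave-bot" ordering is among them:
assert ("Protect Others", "Obey Orders", "Protect Oneself") in orderings
```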

Robert said...

And so are human-driven trucks, for the same reason.

There have been several crashes in the Rockies where a trucker chose to go over the cliff rather than hit someone, including one where the family he saved was actually picnicking in the emergency runaway lane.

Closer to home, there's this case:
https://toronto.ctvnews.ca/man-hailed-as-hero-in-hwy-400-crash-laid-to-rest-1.246045

So humans, even at mundane jobs, can decide that their life is worth less than a total stranger's.

Not always, of course. Longer ago, there's this case, which led a chap I know to get very scathing about a pilot who crashed while trying to land in a school playground during recess; as a pilot himself, he knew that part of your responsibility is being ready to crash the plane rather than cause casualties.

https://www.cbc.ca/news/canada/pilot-survives-freak-landing-in-tree-1.176405

(No pictures, because it was before digital cameras were cheap let alone common, and I didn't have playground duty that day so I was inside when it happened. But as a teacher let me say that if the silly bint had missed the tree she'd still have crashed because there was no way we could have cleared the kids out of the way in time.)

Larry Hart said...

Paradoctor:

I do not recommend implementing. Asimov's order, Others>Orders>Self, the "slave-bot", corrupts the slave-bot's owner.


Only if the robot is truly a sentient being. In the earliest Asimov short stories, I don't think that was the case. Robots were useful tools, generalized enough to perform similar tasks to those a human could do and with rudimentary brains which could figure out ways to do things and react to the environment on the fly. But tools first and foremost.

If that's the case, then Asimov's ordering of the Laws was exactly right.

Max More said...

Glad to see you on the non-panic, don't-stomp-on-AI-just-as-it's-starting-to-get-useful side. It seems like a lot of generally intelligent people have gone insane and are walking the streets with sandwich boards declaring the end of the world to be nigh.

Unknown said...

Robert,

Not sure where I read this, but an RAF Spitfire pilot during the Blitz was supposed to have strafed an English cottage (along with a lot of countryside) on the way down after being hit, slowing his descent, and defended himself by saying a fighter pilot's life was worth more to the country at the time than a civilian family's lives. It was put out that he somehow knew the cottage was empty (it was, but he didn't know that). Whether or not he was right, he wasn't very popular in his squadron after that. What choice should an AI have made?

Pappenheimer

Alan Brooks said...

From a WW2 perspective, AI *should’ve* made the choice the pilot made.
During the 9-11 attacks, pilots had orders to shoot down any attacking jets—if they’d had time, hundreds of airline passengers would have died.
AI *should* have made the same choice: good thing it didn’t come to that.
We’re talking expediency, not morality.

Larry Hart said...

Comedian Allan Sherman dealt with replacement by AI a long time ago:

https://www.youtube.com/watch?v=gLx3oYZrrlE

It was automation, I'm told.
That's what got me fired, so I'm out in the cold.
How was I to know when the 5-oh-3
Started in to blink, it was winking at me, dear?

I thought it was just some mishap
When it sidled over and sat on my lap.
But when it said, "I love you," and gave me a hug, dear,
That's when I pulled out its plug.

scidata said...

I recently came across the topic of "Boltzmann Brains", which has bolstered my dislike of statistical mechanics. There are lies, damned lies, and statistics.

Paradoctor said...

Larry Hart:
If the robot is artificially intelligent, then isn't it artificially sapient?

Larry Hart said...

@Paradoctor,

I guess the question is, "Does 'artificially sapient' mean 'sapient'?"

Larry Hart said...

The Endarkenment is winning, not by appealing to majorities, but by absurd rulings from captured courts.

https://www.nytimes.com/2023/04/07/health/abortion-pills-ruling-texas.html

A federal judge in Texas issued a preliminary ruling invalidating the Food and Drug Administration’s 23-year-old approval of the abortion pill mifepristone, an unprecedented order that — if it stands through court challenges — could make it harder for patients to get abortions in states where abortion is legal, not just in those trying to restrict it.

Unknown said...

Well, not winning entirely (yet) - U.S. District Judge Thomas O. Rice in Spokane, my current hometown, issued a ruling forbidding the FDA from pulling mifepristone. That's probably heading to the Supremes, where I do not have great hopes, but maybe 3-4 of the Furious Five will be linked in an autoerotic asphyxiation multiple manslaughter incident before then.

Pappenheimer

Tony Fisk said...

Au contraire, Larry, that's how it loses. The overreach is clear for all to see (and I gather it has already been countered by another judge's ruling).

Same thing in Tennessee. Two black, Democrat congressmen ejected for protesting against gun violence on children, while it's pointed out that another white, Republican congressman was waved through after *committing* violence on children. Parents vote, and have a few other avenues of expression available.

You've heard it elsewhere. Now hear it from Biden:
"Trickle-down economics doesn’t work."

David Brin said...

As The Atlantic’s Ronald Brownstein recently wrote, “The escalating confrontation between the parties over the federal budget rests on a fundamental paradox: The Republican majority in the House of Representatives is now more likely than Democrats to represent districts filled with older and lower-income voters who rely on the social programs that the G.O.P. wants to cut.”

… ‘Almost 70 percent of House Republicans represent districts where the median income is lower than the national median, according to researchers at the University of Southern California. By contrast, about 60 percent of House Democrats represent districts more affluent than the median. The politics of class, as Brownstein puts it, have been inverted.’
‘While Trump was president… he mostly did not govern as a populist. He acted more like a President Paul Ryan might have, cutting taxes on corporations and the affluent while trying to shrink Medicaid and repeal Obamacare. Those Trump policies weren’t popular. They contributed to the Republican Party’s huge losses in the 2018 midterms and probably hurt Trump’s re-election campaign too."
https://www.theatlantic.com/politics/archive/2023/03/working-class-white-voters-gop-house-agenda/673500/?campaign_id=9&emc=edit_nn_20230327&instance_id=88738&nl=the-morning&regi_id=120592697&segment_id=128856&te=1&user_id=d0f51f0d48bf780ec2c3fe93745c37f9

The reasons were obvious. Though DT putsched the GOP with beer-hall polemics, it was still the Masters of Oligarchy who owned his IOUs and blackmail files. Now? They now dread his political carnage and may be having Howard Beale thoughts. But meanwhile…

… those masters utterly rely upon the Democrats' ever-continuing incompetence at polemic, lacking any ability to make clear to those working class voters how profoundly the oligarchs despise and prey upon them. I wrote Polemical Judo in order to help fix that incompetence.

Alfred Differ said...

Gregory Byshenk,

(I'm trying to be careful not to hog this thread since there are new people posting good stuff and they might not yet understand that we are peacefully disagreeing over a few things.)

There are a lot of people for whom the loss of three months income would mean a very real risk of homelessness.

This and the sentences leading to it are why I think I agree with you that I wasn't 'truly' insecure. It sure felt like it at the time, but the thing that was insecure was my foothold on a mature, independent life. The only way I would have wound up homeless is if I tried to hide from my family… and they'd have come looking for me. 8)

The point being that no one - including those we consider secure - can be prepared for everything.

Yep. That and the content around it is stuff with which I agree.

Even knowing that one must change course is insufficient if one has not the resources to do that.

You turned a corner around this point and I have to disagree. You'll never know in advance that you won't have the resources. You only know it when a bad day arrives and you don't have them. Until then, there are things to do and try to do.

I know that sounds… unrealistic. I get it, but that's the kind of guy I am… and I think I learned it from experience.

It is VERY important not to accept defeat until the end of the game. In some games you can make reasonable predictions about defeat (e.g. Chess against a competent player), but much of life is not like that. Believing it is qualifies as learned helplessness as far as I'm concerned. *

The "pulling oneself up by one's bootstraps" myth…

Mmm. You won't be shocked when I say I like it. A lot. For me it is allegory. Also, it doesn't matter whether they are all you have. If you have them, you try to use them… because anything less is learned helplessness.

* Okay. Some people ARE helpless. We should roll up our sleeves and help them. Everyone else should be trying to help themselves because then our effort to belong in their community will amplify their efforts.

Alfred Differ said...

Gregory Byshenk, (cont'd)

Regarding the shallow end metaphor…

…but more importantly it is (I suggest) not possible for more than a "very few" ever to do so…

Dude. How can you POSSIBLY know that?

Not long ago people could argue that it wasn't possible for common men to govern themselves. Prior attempts had all resulted in failure. What worked (historically) was Kings and the associated aristocratic social structure. Some still make that argument today suggesting the Great Enrichment is a fluke, but we are obviously governing ourselves.

You have a working crystal ball that tells you it isn't possible for us to organize our markets in a way that is different from what we see today? I'd be impressed. We've changed the way we collectively trade with each other occasionally and have multiple ways we do it today.

Without those resources as a safety net, the cost of failure becomes too high for most.

On this we agree. The most likely path for financing a new business that has a chance of succeeding requires support from friends and family at the founding moment. One small quibble, though. You bring with you a vital source of capital even before the money arrives. Human capital. What you know in terms of skills and what you see as opportunities. Without those, the money someone brings won't do much.


This doesn't undermine my point about stepping into the shallow end. Yes… there are risks. Fear of failure holds many people back. That fear also prevents many from EVER trying. If we had always given into that fear, though, we'd still be ruled by Kings!

My scenario for improvement isn't magical. We can just point people in the right direction and expect it to work. We can't throw textbooks at them and expect it to work. Well… we CAN do that, but it won't work.

The only way this works is through mentorship. DON'T try it alone! DON'T rely on only your own bootstraps! Get advice on how to do it or at least how not to do it! Persuade friends and family… and be persuaded when they aren't as enamored as you by your delusions. Make friends. Rub elbows. ASK questions!

———

Is your idea then that employers should be prohibited from providing health insurance as a benefit?

No. Far from it. I'm a big fan of credit unions and think there is room for a similar arrangement regarding health insurance. I'm a big fan of local co-ops too. My issue with employers doing it occurs when they are the ONLY ones doing it. There is a conflict of interest to consider.

Back in 2009 I was laid off and looked over my COBRA options. To buy in required that I make a choice between affording health insurance for my family… or paying the mortgage. I opted to keep paying the mortgage and then Obama and friends came up with a deal for many of us where the feds covered 65% if we covered the other 35%. I jumped on that deal and then scrambled for another job to get back off of it so other more needy people could keep going without budgets exploding. I was one of the fortunate who found work three months later, but I was on that deal for a total of 9 months because it was still the better option compared to my new employer.

Options and Transparency. That's what I want.

Alfred Differ said...

Argh. 8/

We can just point people in the right direction and expect it to work.

We CAN'T just point people in the right direction and expect it to work.


Leaving out negations kinda matters. Forty lashes for me. 8)
Sorry about that.

duncan cairncross said...

Alfred

I disagree MASSIVELY - not only can most people NOT "do it themselves" but I would go further and say that NOBODY can "Do it Themselves" well enough to cope with the normal situations that anybody can be hit with

If somebody is LUCKY and does not get hit THEN they may be able to "Do it themselves" - but that is more to do with LUCK

In non US advanced countries we have the support of the society - so we CAN cope with bad things happening (most of the time)

My life is comfortable - and my son is doing his Masters degree

If I had stayed in the USA I would be much much worse off - and he would be dead

Alfred Differ said...

Duncan,

Did you miss out on the correction? I typo'd by leaving out the negation.

Still... I think some people manage moderately well without help, but we'd be dumb to suggest that is the best way. I'm actually agreeing with a point you make elsewhere: we all benefit from what others are doing or from what previous generations already did.

The 'bootstrap' story is allegory. No one actually lifts themselves by their own bootstraps for the simplest of reasons. No one does it alone unless they are stuck on a deserted island... in which case most don't survive.

------

In non US advanced countries we have the support of the society...

Happens here too. We just take a more personal hand in it since many of us don't trust government to do it.


Still... if one isn't trying to lift oneself, how are others going to prioritize them? It's much easier to lift someone if they are already trying and doing a reasonable job of it.

gregory byshenk said...

Alfred Differ said...
This and the sentences leading to it are why I think I agree with you that I wasn't 'truly' insecure. It sure felt like it at the time, but the thing that was insecure was my foothold on a mature, independent life. The only way I would have wound up homeless is if I tried to hide from my family… and they'd have come looking for me. 8

Which is why I said in the beginning that I'm not keen on how one "feels". One can feel things for any reason or none, and this may have little to do with what is actually the case.

In your case, it may well have been true that your (immediate) foothold was insecure - but because of the resources available to you, even if you lost your foothold, you would have had a chance to try again.

Gregory
Even knowing that one must change course is insufficient if one has not the resources to do that.

Alfred
You turned a corner around this point and I have to disagree. You'll never know in advance that you won't have the resources. You only know it when a bad day arrives and you don't have them. Until then, there are things to do and try to do.

I didn't say that you don't know "in advance" about some situation. But even to prepare for some potential situation - even one that is foreseeable - one must have resources to spare. Of course it is a good choice to put something away for a rainy day, but to do that one must have something to spare, and there are lots of people who are already at the edge.

I know that sounds… unrealistic. I get it, but that's the kind of guy I am… and I think I learned it from experience.

Don't forget that your "experience" was not the same as that of other people. Again: the experience of those who truly are insecure is different, and the choices available to them are different.

It is VERY important not to accept defeat until the end of the game. In some games you can make reasonable predictions about defeat (e.g. Chess against a competent player), but much of life is not like that. Believing it is qualifies as learned helplessness as far as I'm concerned.

This isn't a question of "accepting defeat". It is an issue of people running as fast as they can just to stay in place - or maybe even falling behind.

Mmm. You won't be shocked when I say I like [the "pulling oneself up by one's bootstraps" myth]. A lot. For me it is allegory. Also, it doesn't matter whether they are all you have. If you have them, you try to use them… because anything less is learned helplessness.

The thing is, as Larry points out above, it is not possible, no matter how hard you try. It may sound plausible to those who have more than their own bootstraps, but that is why it is so dangerous.

I was able to pull myself out of poverty, but had a lot in my favour. First, I was lucky; I was never seriously ill or injured when I had no insurance. My father was a mechanic and I learned from him that I could repair most things on my own. And I sometimes had help from friends and others when I really needed it. But there were times when I was close to being on the street, and I don't know what would have happened had I ended up there. I like to think that I would have found a way back, but not having been there, I don't know.

But I also understand that there are few things as galling to someone who is already working as hard as they can to have someone - who has never actually been in their shoes - tell them "just try harder!"

gregory byshenk said...

Gregory
…but more importantly it is (I suggest) not possible for more than a "very few" ever to do so…

Alfred Differ said...
Dude. How can you POSSIBLY know that?
[...]
You have a working crystal ball that tells you it isn't possible for us to organize our markets in a way that is different from what we see today? I'd be impressed. We've changed the way we collectively trade with each other occasionally and have multiple ways we do it today.


Obviously, if we were to rearrange our society to be different, then things would be different. I thought it would be obvious that my unstated premise was "Assuming something like our current social arrangements...". There are all manner of things that could be done to support people starting their own business (in one sense or another) and to reduce the risks of catastrophe for those who try and fail. So far as I am aware, there is nothing on the horizon (at least in the Anglophone world) that heads in that direction.

On this we agree. The most likely path for financing a new business that has a chance of succeeding requires support from friends and family at the founding moment. One small quibble, though. You bring with you a vital source of capital even before the money arrives. Human capital. What you know in terms of skills and what you see as opportunities. Without those, the money someone brings won't do much.

And without funding, any skills or capital will not do much.

This doesn't undermine my point about stepping into the shallow end. Yes… there are risks. Fear of failure holds many people back. That fear also prevents many from EVER trying. If we had always given into that fear, though, we'd still be ruled by Kings!

You seem to be working very hard to miss the point.

Yes, there is fear of failure. But there are vast differences in the cost of failure between one person and the next. For one person it may mean having to go back to work for their parents' company; for another it may mean losing their house and their children not eating.

My scenario for improvement isn't magical... .

Your scenario also doesn't do anything to show how anything more than a tiny fraction of the population could earn enough from rent (dividends, investments, royalties, etc.) to survive. Yes, a small number do now. With changing social arrangements perhaps a slightly larger number could do so. But you have not provided so much as the barest sketch of how this could ever be more than a very small part of the population - which was, you recall, my point.

===

Gregory
Is your idea then that employers should be prohibited from providing health insurance as a benefit?

Alfred
No. Far from it. I'm a big fan of credit unions and think there is room for a similar arrangement regarding health insurance. I'm a big fan of local co-ops too. My issue with employers doing it occurs when they are the ONLY ones doing it. There is a conflict of interest to consider.

Are you agreeing or disagreeing? When we refer to "employers" we do not normally mean 'credit unions'. CUs may employ people, but even "employee" credit unions are not themselves the employer. I believe (corrections welcome) that these are even legally required to be independent of the employer.

Options and Transparency. That's what I want.

What one's options are depends on where one sits. You (and I) have had options that others did not. And it is important to recognize that options in principle may not be real options in practice.

duncan cairncross said...
In non US advanced countries we have the support of the society - so we CAN cope with bad things happening (most of the time)

My life is comfortable - and my son is doing his Masters degree

If I had stayed in the USA I would be much much worse off - and he would be dead


Alfred Differ said...
Happens here too. We just take a more personal hand in it since many of us don't trust government to do it.

It can happen in the USA - but often does not. Survivorship bias is dangerous here: the ones for whom it does happen often fail to see the cases where it doesn't.

At my college there were many stories about a great dean back in the day when support for students was minimal to nonexistent. "If it weren't for X I don't think I would have made it!" But one who goes a bit deeper and talks to some of those who didn't make it learns about the other side of the coin. Without some systematic support structures, those who didn't happen to catch X's eye had no support.

Still... if one isn't trying to lift oneself, how are others going to prioritize them? It's much easier to lift someone if they are already trying and doing a reasonable job of it.

This is close to the standard corollary to the 'bootstraps' myth: those who aren't succeeding are just lazy.

... Or maybe they are trying, but just don't have the advantages and resources of others, or just never caught the eye of the wealthy benefactor, as in the Horatio Alger stories.

One mark of real social support is that it is not dependent upon the whims of the well-off.

scidata said...

gregory byshenk: those who didn't happen to catch X's eye had no support

And those who drew X's ire had negative support. The Xs of this world can be a dogmatic, vicious, vindictive lot.

Howard Brazee said...

Republicans are pursuing Big Money policies. But voters take a long time to switch who they identify with. Republican voters are all about identity (they claim Democrats are as well, but what they accuse Democrats of being is all the bad things they see within themselves).

Alan Brooks said...

Just about all the discussions (or monologues) I have with the public eventually come round to absolute truth versus empiricism. Plato v Aristotle...
Tomorrow I will hear a sermon concerning how all our economic/social/political problems result from choosing the mind over the Heart.
They say we are quixotically attempting to utilize science to cure the incurable maladies of the soul. They say they are Defending The Soul In Politics. Defending the souls of the unborn. They say we must arm the teachers and staff in schools to defend souls in classrooms!

Larry Hart said...

Alan Brooks:

Defending the souls of the unborn.


How does restricting contraception defend the souls of the unborn? How does stigmatizing gay marriage defend the souls of the unborn? How does opposing sex education defend the souls of the unborn?

I could go further and assert that abortion saves the unborn from the chance to become sinners, which is what almost everybody who is born actually does. But why muddy the waters of the original question--if anti-abortion morality is about defending the unborn, then why do those same people inevitably try to restrict non-procreative sex, forcing so many unwanted unborn into being?
