Automated Economy: A Futurist Conversation

Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Automated Economy: A Futurist Conversation

Post by Elaro »

As we develop better and better computerized decision-making processes (I say that instead of "AI" because "AI" carries the connotation of autonomy; what I'm referring to could be as basic as generating plans for reaching specific goals, or automated design without the actual carrying-out part), and as engineers, businessmen and other white-collar workers get outsmarted by machines, what do you think our economic system will develop into?

So I think eventually we'll have automated CEOs. The question is: will the directors remain human? Or will investors demand that the "human dead weight" be removed entirely so the machine can concentrate wholly on making money? And if they do, what's to stop the machines from covertly coordinating their activities with other automated CEOs, forming virtual conglomerates and capturing all the money? Where would that wealth go? Could we legislate putting a "humanity's welfare first" goal into these machines, and would that constitute the government controlling the economy? A sort of info-communism, if you will.

Either way, I don't see a "competitive capitalism" system working with machine intelligence. Any agent smart enough is going to try and collaborate with the competition, and I don't know how much intelligence is required to have plans and counter-plans for betrayal. Our system is going to have to change.

Sorry if this is in the wrong forum.
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

In large bureaucracies (including corporate ones) we already see decision-making and managerial processes becoming more and more 'automated', in the sense that they are bound by particular rules and rule-making processes. This tends to hurt the overall productivity of the organization -- or at least it seems that way to the people at ground level.

Clearly, management cannot be automated according to 'rules' as simplistic as the ones involved in teaching a machine to tighten bolts or make phone calls and play a tape recorder.

So actually automating that job would most likely require a computer intelligence of superhuman intellect... and if you can afford one of those, with a bit more hardware budget you can probably make every employee in the company obsolete, not just the CEO.

At which point you've managed to render obsolete everything about modern capitalism except the owners, and as I noted recently in another thread, it's hard for me to see how that could happen without making the owners obsolete too.
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Automated Economy: A Futurist Conversation

Post by Starglider »

You seem to have a rather vague idea of what CEOs actually do. The relevant skills for the job are mostly interpersonal: motivating (individuals and groups), PR (to shareholders), generating bullshit (when needed to fool shareholders / customers / the public), seeing through bullshit (when an underling is trying to cover something up). For decision-making, CEOs get stacks of reports and detailed analysis from experts; their contribution boils down to playing hunches, imposing an overall vision and trying to see through the bias in what went into the report and what didn't. All of this is literally the hardest set of human capabilities to automate, even more so than, say, creating artistic works. So Simon is correct to say that if you can automate the CEO, you don't need human labour for anything.

As for cartels: in principle, semi-general AIs will collaborate in ways that are actually quite predictable given game theory and detailed knowledge of their goal systems. Of course, I have noted at length that this is not a stable situation and that fully general AIs will soon self-modify in highly unpredictable and likely highly undesirable ways.
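To make 'predictable given game theory' concrete, here is a minimal sketch -- the payoff numbers are invented, and tit-for-tat is standing in for real knowledge of the other agent's goal system:

Code:

# Per-round profit matrix for two pricing agents choosing HIGH (collude)
# or LOW (undercut); the numbers are invented for illustration.
PROFIT = {
    ("HIGH", "HIGH"): (10, 10),  # tacit cartel: both profit
    ("HIGH", "LOW"):  (1, 15),   # B undercuts A
    ("LOW",  "HIGH"): (15, 1),   # A undercuts B
    ("LOW",  "LOW"):  (4, 4),    # price war
}

def tit_for_tat(their_history):
    # Collude first, then mirror the opponent's previous move.
    return "HIGH" if not their_history else their_history[-1]

def play(rounds=100):
    a_hist, b_hist = [], []
    a_total = b_total = 0
    for _ in range(rounds):
        a, b = tit_for_tat(b_hist), tit_for_tat(a_hist)
        pa, pb = PROFIT[(a, b)]
        a_total, b_total = a_total + pa, b_total + pb
        a_hist.append(a)
        b_hist.append(b)
    return a_total, b_total

# Agents that can model each other's goal system settle into stable
# collusion -- (1000, 1000) here -- rather than the one-shot equilibrium
# of mutual undercutting, which would pay only (400, 400).
print(play())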

The Ethereum / MaidSafe / et al. techno-libertarians do have an obsession with making self-sustaining 'software corporations' at a much lower level of AI (e.g. similar to conventional/contemporary algo trading), but frankly I don't get why this is so desirable. I suppose it's the same fascination as with A-Life research (and, to be honest, many virus writers), but with a more economic bent.
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Automated Economy: A Futurist Conversation

Post by Elaro »

Yeah, I failed to mention that the whole corporate structure would be automated because I am le brainless. Sigh.

But regardless, do you think governments would intervene to dictate goals to the AIs?
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
slebetman
Padawan Learner
Posts: 261
Joined: 2006-02-17 04:17am
Location: Malaysia

Re: Automated Economy: A Futurist Conversation

Post by slebetman »

The shareholders would intervene -- since it's their investment on the line. Of course, several thousand of them can't all give the AI instructions individually, so they'll appoint a committee -- call it the Board of Directors. And this committee will of course appoint one man to actually manage the AI, steer its goals, change its algorithms, etc. They'll call this man the CEO.

This is what the corporate structure is right now, and it's what it will be in the future. The CEO doesn't make day-to-day decisions; he's got minions/employees for that. The only difference is that where once there were departments full of people, there is now a cluster of servers hosting the AI that makes the detailed day-to-day decisions. And a small team of programmers :)

The government will of course intervene to restrict the goals of the shareholders (or the means to those goals) and the responsibilities of the CEO to the shareholders - just like they do today.

Any additional level of automation will not change things. It may make the board of directors obsolete by letting the shareholders communicate directly with the CEO via Facebook- or Twitter-style interfaces. It may make the role of the CEO more and more resemble that of a programmer.

There is one role of a CEO that cannot be fully automated until human nature itself changes. It's a role I didn't understand when I started working, and one I've grown to appreciate these last few years: getting money. Convincing investors and banks to give you money to do something with your business -- whether to survive the start-up phase, to expand, or to diversify. You can run a business without doing this, but you will be outcompeted by another business that does it well.
K. A. Pital
Glamorous Commie
Posts: 20814
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Automated Economy: A Futurist Conversation

Post by K. A. Pital »

What is the point in being a money-hoarding monkey in the world of abundance? I foresee a rapid 'dehumanization' of humanity itself. Its welfare goals will rapidly change.
There are churches, rubble, mosques and police stations; there, borders, unaffordable prices and icy quips
There, swamps, threats, snipers with rifles, papers, night queues and undocumented migrants
Here, encounters, struggles, synchronized steps, colors, unauthorized gatherings,
Migratory birds, networks, information, everyone's squares crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Automated Economy: A Futurist Conversation

Post by cosmicalstorm »

It is hard to imagine the chain of causation stretching into the future from our current attempts to create intelligent machinery. I wonder if we live in a time like the one that preceded the advent of multicellular life. We are the bacterial mats that will spawn a kingdom of life much more complicated than us. This is probably the last few decades of biological life. In a century or so I'm sure most of Sol-space will be deconstructed and turned into suitable substrate. Either that or we hit a brick wall in a new Carrington event or a nuclear war.
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: Automated Economy: A Futurist Conversation

Post by Elheru Aran »

Please. Technology will advance, but human life isn't going to change *that* much. Economic conditions will change, social conditions probably will change, but there's still going to be people in cities and towns and all that. People are still going to get together, eat out, play games, whatever. We aren't just going to drop off the face of the Earth anytime soon.

'Last few decades of biological life' my ass. Centuries, maybe.

There's a reason evolution takes millennia... even artificial evolution has to take some time to get a viable result.
It's a strange world. Let's keep it that way.
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Re: Automated Economy: A Futurist Conversation

Post by Arthur_Tuxedo »

I find it hard to buy into the idea that anyone will create an AI that will destroy all of humanity. The idea was proposed at the height of the Cold War, when every American thought the Russians had their fingers inches away from a big red button. It was an extremely convenient deterrent for both sides to convince the other, the world at large, and their own populations that they were crazy enough to launch. Recently declassified and translated documents have painted a very different picture, in which the Soviets were so unwilling to engage in nuclear war that they spent exorbitant sums on a second-strike capability: silos deep enough to survive impacts from the USA's ICBMs. In other words, they wouldn't have launched even if they were sure the US had done so -- not until actual mushroom clouds confirmed it wasn't a glitch.

The meme of humanity being an insane self-destructive species is a result of Cold War propaganda and simply false. There is no particular reason for a research team to build a Skynet-style unfriendly AI and give it access to the tools it would need to wipe us out, and the only vaguely plausible explanation even proposed for such an event is that competing nations would be racing so quickly toward the benefits of the technological singularity that they wouldn't bother with safeguards. This might have seemed reasonable during the Cold War if one did not realize how far behind the Soviets were in computer tech, but makes no sense in the world of today or any world that's likely to unfold in the next century.

I propose that we should stop predicting doom and gloom like history's previous Luddite groups and embrace the idea that radical future changes in technology are likely to be mostly positive, but with challenges and inequalities, just like every big discovery for millions of fucking years. I don't expect the gradual (but probably inevitable) obsolescence of human labor to go off without a hitch, but I also don't expect to be murdered by Terminators, used as a battery, turned into a messy component of some celestial abacus, or any other silly fantasies that shouldn't be taken as seriously by educated people as they still seem to be.

EDIT: I realize no one in this thread actually mentioned destruction of humanity per se. I meant the post to respond more generally to drastic, unpleasant consequences of AI advancement.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

Elaro wrote:Yeah, I failed to mention that the whole corporate structure would be automated because I am le brainless. Sigh.
Well, at that point you don't have a corporate structure, you have a big computer bank that remote-controls a lot of equipment and is legally owned by a joint-stock company.

Note the difference between a 'company' and a 'corporate structure.'
But regardless, do you think governments would intervene to dictate goals to the AIs?
If they don't, they aren't governments.
Stas Bush wrote:What is the point in being a money-hoarding monkey in the world of abundance? I foresee a rapid 'dehumanization' of humanity itself. Its welfare goals will rapidly change.
Most people who hoard substantial sums of money do it because they get off on collecting arbitrary tokens of value, or because they get off on having more of those tokens than their neighbors. This incentive will not change.
cosmicalstorm wrote:It is hard to imagine the chain of causation stretching into the future from our current attempts to create intelligent machinery. I wonder if we live in a time like the one that preceded the advent of multicellular life. We are the bacterial mats that will spawn a kingdom of life much more complicated than us. This is probably the last few decades of biological life. In a century or so I'm sure most of Sol-space will be deconstructed and turned into suitable substrate. Either that or we hit a brick wall in a new Carrington event or a nuclear war.
I am more than a little skeptical of this, mostly because I suspect the limits of available physical infrastructure (computer hardware and industrial technology).
Arthur_Tuxedo wrote:I find it hard to buy into the idea that anyone will create an AI that will destroy all of humanity. The idea was proposed at the height of the Cold War, when every American thought the Russians had their fingers inches away from a big red button.
Well, the point is that an AI might (given sufficient power and freedom to work) become very unpredictable, and if its goals do not align with ours you get "be careful what you wish for" scenarios.

An AI that is vastly more intelligent than us, as humans are vastly more intelligent than dogs... Such an entity might well be able to manipulate us (including powerful leaders) into doing whatever it pleases, just as we can hijack a dog's pack instinct and other behaviors, 'hacking' them to convince it that a hairless plains ape is actually the alpha of its wolf pack.

If an AI could do that, its potential to fulfill its goals is limited only by hard physical obstacles, or by the prospect of violent attack against its own physical infrastructure.

And if those goals are in the long term incompatible with human happiness... uh-oh. Because then you have an AI whose goal is to make people smile, so it starts performing facial surgery; or whose goal is to make paperclips, so it melts down the whole world for paperclips; or some such horror story.

The real question is whether AIs can 'bootstrap' themselves from being roughly as intelligent as a human, to being orders of magnitude smarter than a human, without anyone deliberately making them so.
Arthur_Tuxedo wrote:The meme of humanity being an insane self-destructive species is a result of Cold War propaganda and simply false. There is no particular reason for a research team to build a Skynet-style unfriendly AI and give it access to the tools it would need to wipe us out...
The question is, fundamentally, whether an AI that does not care what happens to us might do things inimical to us, purely as a side effect of maximizing the number of paperclips or smiles or dollar bills in existence.

And the concern is that if we develop AIs that are capable of superhuman feats of persuasion or thought, and they do not care about the general good of humanity, then we will have a serious problem on our hands. Especially if those AIs are capable of improving their own powers rapidly (the 'hard takeoff' Singularity).
This space dedicated to Vasily Arkhipov
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: Automated Economy: A Futurist Conversation

Post by Elheru Aran »

When AI starts becoming a practical reality, expect legislation to be passed defining states of existence for AIs (from burger-flipper up to running a corporation or state) and regulating their application.

Whether the legislation is actually useful or applicable will be another matter, but there will probably be something along the lines of "don't make anything that's smart enough to blow us all up" or some such.

That would be the smart thing to do, which admittedly sets the bar high for your average government...
It's a strange world. Let's keep it that way.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

Elheru Aran wrote:When AI starts becoming a practical reality, expect legislation to be passed defining states of existence for AIs (from burger-flipper up to running a corporation or state) and regulating their application.
Machines for running burger-flippers will probably still be property because there is no reason to make something as smart as a human in order to do that.
Whether the legislation is actually useful or applicable will be another matter, but there will probably be something along the lines of "don't make anything that's smart enough to blow us all up" or some such.
Again, the main issue is that it's not really possible to predict in advance how well a given AI will perform, especially without experience with other, equally smart AIs. So the first AI smart enough to blow us all up* will almost certainly be created 'by accident:' either its abilities will be underestimated in advance, or it'll be designed to self-improve and do so until it outstrips the design expectations.

*If it wanted to
This space dedicated to Vasily Arkhipov
K. A. Pital
Glamorous Commie
Posts: 20814
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Automated Economy: A Futurist Conversation

Post by K. A. Pital »

Simon_Jester wrote:
Stas Bush wrote:What is the point in being a money-hoarding monkey in the world of abundance? I foresee a rapid 'dehumanization' of humanity itself. Its welfare goals will rapidly change.
Most people who hoard substantial sums of money do it because they get off on collecting arbitrary tokens of value, or because they get off on having more of those tokens than their neighbors. This incentive will not change.
Why not? Competitive hoarding loses meaning. Being a monkey that is good at hoarding loses meaning. Hell, a lot of things can lose meaning. What good are your tokens if the physics prof next door lives the same lifestyle, but also is vastly intellectually superior? And there's nothing you can do to buy his brain, because... well, he already has what he needs?

In essence, the ultimate disintegration of hierarchy among the smartest part of the human population is easily foreseen, since it happens in educated collectives even now (though with it, perhaps, many ordinary social ties will also collapse, and people will become even more atomized, unable and unwilling to connect, because a big part of the necessity of having relationships is removed).
There are churches, rubble, mosques and police stations; there, borders, unaffordable prices and icy quips
There, swamps, threats, snipers with rifles, papers, night queues and undocumented migrants
Here, encounters, struggles, synchronized steps, colors, unauthorized gatherings,
Migratory birds, networks, information, everyone's squares crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: Automated Economy: A Futurist Conversation

Post by Elheru Aran »

It's debatable whether a burger-flipper would even be an AI at all, rather than merely a program with a pretty limited set of responses. The responses might be fairly numerous depending on all the variables available to the program (type of meat, size, done-ness, ingredients, condiments, etc.), but it would still just be a simple program. Anything more intelligent would be, frankly, wasted in such a function. I don't see true AIs (IMO, a 'true' AI = a program capable of making autonomous decisions, learning and self-programming) in much below mid-level administrative positions.

It's also an open question how fast we will proceed in creating actual AIs as opposed to chatbots. I don't see humanity being willing to relinquish authority to computers until AIs become prevalent enough that there's no escaping them and there's a history of their use benefiting the public and private good. Again, we're talking decades at the very least here, more likely another century or so.
It's a strange world. Let's keep it that way.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

Stas Bush wrote:Why not? Competitive hoarding loses meaning. Being a monkey that is good at hoarding loses meaning. Hell, a lot of things can lose meaning. What good are your tokens if the physics prof next door lives the same lifestyle, but also is vastly intellectually superior? And there's nothing you can do to buy his brain, because... well, he already has what he needs?
If the value tokens lose all significance because anything anyone could want is available on demand, then yes.

However, it takes a truly post-scarcity economy for that to happen. As in, even the luxuries are no longer scarce. It also assumes people's desires will not expand to match the limits of the possible, i.e. that no one will say "I want my own planet."

For so long as someone wants their own planet, there is room for a capitalist to employ Slartibartfast.
In essence, the ultimate disintegration of hierarchy among the smartest part of the human population is easily foreseen, since it happens in educated collectives even now (though with it, perhaps, many ordinary social ties will also collapse, and people will become even more atomized, unable and unwilling to connect, because a big part of the necessity of having relationships is removed).
On the other hand, people who have no need to rely on others seem to socialize quite extensively online; it's just that the depth of these relationships tends to vary wildly. But then, the depth of our relationships with each other in real life varies too...
This space dedicated to Vasily Arkhipov
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Automated Economy: A Futurist Conversation

Post by madd0ct0r »

Question: if the shareholders appoint the CEO who manages the AI, what happens if that AI buys shares in company B, which has the same structure? What if company B then buys shares in A?
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: Automated Economy: A Futurist Conversation

Post by Channel72 »

The implications of widespread general AIs with human-level intelligence, capable of making executive decisions on par with human judgment, are so game-changing in so many ways that working out corporate structures would be the least of our problems.

However, I question whether an AI would necessarily be much better than a human at "fuzzy-judgment-call" type decision making, like the executive decisions CEOs make. All in all, the Universe has so many interdependent variables and is so complex that AIs and humans alike should have trouble making predictive models, especially of financial markets.

(I read on Bloomberg news the other day that stock prices of poultry companies plummeted because of an unexpected epidemic of rooster impotence... try factoring shit like that into your next financial predictive model)
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Automated Economy: A Futurist Conversation

Post by cosmicalstorm »

The human brain is so hideously constrained by biology that almost anything would be better. I would be glad to remember a trillion things instead of just seven in my short term memory, for instance.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

I think the trick, if all you were doing was trying to design an AI to outperform a human CEO, would be to teach it to follow certain general practices that are usually wise. In particular, ones that insure against the unpredictable, and which cause it to make plans that are 'robust' with respect to random events disrupting the plan.

The computer can remember a trillion things, but many things happening around it are inherently unpredictable (at least for an AI that has no more access to raw information than a human CEO could conceivably get by asking for it). So the trillion items factored into making the plan still don't guarantee that the plan will actually work. But being prudent in the way one designs the plan and following what are good practices in general... that helps.
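To make 'robust' concrete, here's a toy sketch -- the plans, payoffs and shock model are all invented for illustration. The idea is to judge each candidate plan by its near-worst-case outcome over many simulated disruptions, rather than by its average:

Code:

import random

# Two candidate plans, each mapping a random "shock" severity in [0, 1]
# to a payoff. Invented numbers: "aggressive" has the higher average
# payoff (30 vs. 25) but collapses under disruption; "hedged" trades
# upside for resilience.
PLANS = {
    "aggressive": lambda shock: 120 - 180 * shock,
    "hedged":     lambda shock: 30 - 10 * shock,
}

def robust_choice(plans, samples=10_000, seed=0):
    """Pick the plan with the best 5th-percentile outcome, not the best mean."""
    rng = random.Random(seed)
    shocks = [rng.random() for _ in range(samples)]
    def fifth_percentile(payoff):
        outcomes = sorted(payoff(s) for s in shocks)
        return outcomes[len(outcomes) // 20]
    return max(plans, key=lambda name: fifth_percentile(plans[name]))

# -> "hedged": roughly +20 at the 5th percentile, versus roughly -50
# for the plan with the better average.
print(robust_choice(PLANS))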
This space dedicated to Vasily Arkhipov
Lagmonster
Master Control Program
Posts: 7719
Joined: 2002-07-04 09:53am
Location: Ottawa, Canada

Re: Automated Economy: A Futurist Conversation

Post by Lagmonster »

As an interjection, to all the posters above writing about the impact of better-than-human machines: bear in mind that "as good as the average human" is the acceptable benchmark long before you start worrying about whether they can go Skynet on you.

For example, if you're talking CEOs, there are a shitload more people running small businesses than there are people running multi-trillion dollar companies. If you start thinking about a machine that only has to run Bob's Corner Gas Station and Discount Jerky Emporium, your requirements fall drastically. More so when you are willing to accept a pass/fail rate as good as for the average human small business owner in the first place.
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: Automated Economy: A Futurist Conversation

Post by Elheru Aran »

What that might amount to is basically a massive, walk-in vending machine... a setup which might actually work to some degree. Pop in and put some money into a cash slot to buy some gas. Walk over to the fridge; it's locked, with a card-reader on the lock. Swipe your card, it unlocks, you grab a soda, it automatically reads what you picked up and gives you an electronic receipt. Head back out to your car, fill up, take a receipt from the pump, go back in, scan it at the machine you bought gas at, and it spits your change back out at you...

I can see how that might work for a small business. The degree of electronics on everything would be daunting, though.
It's a strange world. Let's keep it that way.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Automated Economy: A Futurist Conversation

Post by Starglider »

Elaro wrote:Yeah, I failed to mention that the whole corporate structure would be automated because I am le brainless.
That would require general AI, which is incredibly dangerous and will not stay confined to whatever small-minded commercial application it is initially applied to. To answer this in more detail:
Could we legislate putting a "humanity's welfare first" goal into these machines
No. Or rather, you could, but (a) literally no one knows how to do that with any reliability, and (b) there is absolutely no way of enforcing it.

If you disagree with (a), tell me how you think you would guarantee that an arbitrary artificial intelligence of human level or above will be 'benevolent to humanity'. If you disagree with (b), observe that writing viruses/scamware (and computer crime in general) is thoroughly illegal, yet somehow we still have it in abundance. In fact if you still stubbornly believe that the government can somehow specify benevolent AI and that average IT engineers (or worse, government software engineers) can reliably implement it, just consider what organised crime and rogue states would do with the technology.
But regardless, do you think governments would intervene to dictate goals to the AIs?
For AI sufficiently general to replace an entire company, I am certain they would be utterly incapable of doing so on both a technical and a practical level.
Any agent smart enough is going to try and collaborate with the competition
No, you pulled that out of your ass, but to be fair, the 'intelligence automatically equals benevolence' assumption has been made by quite a few AI luminaries as well. An intelligent agent will use other agents to the extent that it furthers its goals, but unless it has a specific independent goal to value the wellbeing of those other agents, it will ultimately attempt to consume and/or eliminate them.
cosmicalstorm wrote:It is hard to imagine the chain of causation stretching into the future from our current attempts to create intelligent machinery. I wonder if we live in a time like the one that preceded the advent of multicellular life. We are the bacterial mats that will spawn a kingdom of life much more complicated than us. This is probably the last few decades of biological life. In a century or so I'm sure most of Sol-space will be deconstructed and turned into suitable substrate. Either that or we hit a brick wall in a new Carrington event or a nuclear war.
Exactly correct. The route is fuzzy but those are the likely endpoints.
Elheru wrote:Technology will advance, but human life isn't going to change *that* much
At the risk of sounding like someone from Less Wrong: here we see normalcy bias, personified.

Although to be fair, the one-order-of-magnitude distinction between centuries and decades (versus years at the other extreme; hardware sufficient for seed AI already exists) is not really important. The difference is mostly in whether you appreciate the vastly faster speed of digital evolution (there is no room for debate on this if you have a clue) and whether you think physical infrastructure scaling will speed up massively with mature nanotechnology (there is some room for debate on this).
Arthur_Tuxedo wrote:I find it hard to buy into the idea that anyone will create an AI that will destroy all of humanity.
The major risk is not that anyone will do these on purpose, the risk is that there is no known way to control what any self-modifying intelligence will do. There is some debate on what fraction of arbitrary goal systems (that do not immediately wirehead / brick) are likely to produce this outcome when executed by a transhuman AGI, but it only takes one.

That said, if the technology becomes generally available it will naturally fall into the hands of a few terrorist nuts who do want to wipe out many or all humans (and 'kill some' goals are, naturally, even more likely than usual to produce a 'kill all' outcome).
The meme of humanity being an insane self-destructive species is a result of Cold War propaganda and simply false.
Humanity being self-destructive is not the problem: although we were fortunate to get through the cold war without a major nuclear or biological exchange, it would not have been an extinction level event.
I also don't expect to be murdered by Terminators, used as a battery, turned into a messy component of some celestial abacus, or any other silly fantasies that shouldn't be taken as seriously by educated people as they still seem to be.
Of course popular science-fiction is not relevant here. The real risks exist completely independently of what stories Hollywood scriptwriters like to tell.

Simon_Jester wrote:you have a big computer bank that remote-controls a lot of equipment

Actually you probably have a virtual cluster in the global cloud leasing compute resources from a disaster-tolerant selection of whichever providers / data centers are cheapest that month, but that's nitpicking :)
Simon_Jester wrote:I am more than a little skeptical of this, mostly because I suspect the limits of available physical infrastructure
This is an interesting debate to have with the nanorobotics enthusiasts, i.e. how powerful nanotechnology can actually be and how quickly it can be created (even given superintelligence). It's not my field, so I'm no better qualified to have it than any other interested amateur. However, from an AGI safety perspective, the conservative assumption is that the gun is loaded, i.e. that 'sufficiently powerful' nanotechnology can be bootstrapped by a hostile entity quicker than we could possibly detect, regulate or shut it down.
Simon_Jester wrote:If an AI could do that, its potential to fulfill its goals is limited only by hard physical obstacles, or by the prospect of violent attack against its own physical infrastructure.
I can say with complete confidence that 99% of existing computer security is effectively meaningless to a superintelligent (or even parallel-scalable human intelligence) general AI. So unless you managed to code in a stable goal of not taking over other hardware, 'the physical infrastructure' quickly becomes 'pretty much every internet connected device on earth'.
Simon_Jester wrote:And the concern is that if we develop AIs that are capable of superhuman feats of persuasion or thought, and they do not care about the general good of humanity, then we will have a serious problem on our hands.
Alas, that is, if anything, understating the problem. Of course responsible researchers would like to make any general AI 'care about humanity', but how do you code that? Ignoring implementation difficulties, how do you even specify that in formal logic? Codifying morality is hard enough even when granted stable abstractions, hardware-implemented human empathy to help with interpretation, and no self-modification. By all means, anyone who thinks they can do this, show me some pseudocode for Morality.cpp.
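To show roughly where any naive attempt tops out, here is a sketch (in Python rather than literal C++; the world model and the weights are invented proxies). Every term is a proxy, and a strong optimizer satisfies proxies literally rather than the intent behind them:

Code:

from dataclasses import dataclass

@dataclass
class WorldState:
    # hypothetical, absurdly simplified world model
    smiling_humans: int
    total_wealth: float
    deaths: int

def naive_welfare(w: WorldState) -> float:
    # Maximized by, e.g., surgically fixed smiles, hoarded tokens and
    # people kept technically alive; nothing here encodes consent,
    # suffering or autonomy -- the things we actually mean by "welfare".
    return w.smiling_humans + 0.001 * w.total_wealth - 1000.0 * w.deaths

# The "forced smiles" world outscores the ordinary one:
print(naive_welfare(WorldState(smiling_humans=7_000_000_000,
                               total_wealth=1e14, deaths=0)))
print(naive_welfare(WorldState(smiling_humans=2_000_000_000,
                               total_wealth=1e14, deaths=0)))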
Stas Bush wrote:Why not? Competitive hoarding loses meaning. Being a monkey that is good at hoarding loses meaning. Hell, a lot of things can lose meaning.
All goals are meaningless. All goals are meaningful.

Or rather, there is no objective morality or other objective standard for 'ranking' goal systems, other than self-consistency. This is not a question of truth or belief; probability is independent of utility. The former is objective (or rather, converges); the latter is inherently subjective. Talking about humans specifically: hunger for possessions, power, status etc. are genetic imperatives that can be regulated by environment, but certainly not eliminated by anything short of radical brain redesign (i.e. genetic alteration). You can transmute the 'status' one into something more socially acceptable, perhaps, but as long as we're dealing with baseline humans, plenty will still want the power, wealth, sex etc.
Elheru Aran wrote:It's debatable whether a burger-flipper would even be an AI at all, merely a program with a pretty limited set of responses. The responses might be fairly numerous depending on all the variables available to that program (type of meat, size, done-ness, ingredients, condiments, etc) but it would still just be a simple program. Anything more intelligent would be, frankly, useless in such a function.
You have the wrong model. An 'AI' is not a black box (OK, the Apple one will be white and excitingly bevelled) that you buy from IBM and stick in your robot. It is not even an app running on your smartphone. Come on, this mistake was kinda understandable in the 1980s, but the trend should be blindingly obvious from Siri et al. In the era of cheap pervasive networking, 'AI' is a cloud software service, with local autonomy only as required by safety concerns (e.g. automated cars need to work fine during a wi-fi outage). You can see this quite clearly in financial software; 'corporate AI' is a federation of software components that contribute competencies as required to complete tasks. A trade executes at low latency, with minimal CPU power and software involved, if it falls within normal parameters; but if it looks unusual, additional servers and code are pulled in to do arbitrage checks, fraud checks, competitor strategy reverse-engineering, etc. Robots can run on 'reflex' for normal situations (much like humans), but when extraordinary situations occur, additional intelligence will be instantly brought to bear (for subhuman AI, the last line of escalation is popping up a job on a teleoperator's screen, in the robot-operation equivalent of an outsourced call center). A robot providing customer service can answer simple questions by rote, but if the question is harder it will go to various agents in the cloud for more intensive analysis.

This is, incidentally, one of the many, many reasons why AI is so counterintuitive. Right out of the gate you have failed if you're imagining a 'fast food AI' as a chimp made out of plastic and servos, rather than a distributed digital intelligence with tens of thousands of assorted peripherals, managing thousands of simultaneous conversations at varying levels of 'attention'. This isn't superintelligence; this is just a typical enterprise software system. And it is the jumping-off point for the really crazy stuff.
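A skeletal sketch of that escalation pattern (the handler names and anomaly thresholds are invented for illustration): cheap local 'reflex' handles the normal case, heavier cloud analysis is pulled in for anomalies, and a human teleoperator is the last line of escalation.

Code:

def reflex_handler(task):
    # cheap local logic covers the routine case
    return f"handled locally: {task['kind']}" if task["anomaly"] < 0.2 else None

def cloud_analysis(task):
    # stand-in for fraud checks, arbitrage checks, strategy analysis, etc.
    return f"handled in cloud: {task['kind']}" if task["anomaly"] < 0.8 else None

def human_teleoperator(task):
    # last line of escalation: a job pops up on a teleoperator's screen
    return f"escalated to teleoperator: {task['kind']}"

def handle(task):
    # try each tier in order of cost; the first one that copes, wins
    for tier in (reflex_handler, cloud_analysis, human_teleoperator):
        result = tier(task)
        if result:
            return result

print(handle({"kind": "routine trade", "anomaly": 0.05}))
print(handle({"kind": "odd order flow", "anomaly": 0.5}))
print(handle({"kind": "unprecedented request", "anomaly": 0.95}))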
Elheru Aran wrote:I don't see humanity being willing to relinquish authority to computers
Well, you're not working in investment banking then, because I see it every day. That's why we keep firing traders, in fact. Even the quants are now all 'quant-developers'.

Political authority? Obviously not at the top level, but the civil service -- the practical decisions about taxes, benefits and the services a citizen is entitled to -- is all increasingly automated.
madd0ct0r wrote:If the shareholders appoint the CEO who manages the AI, what happens if that AI buys shares in company B, which has the same structure? What if company B then buys shares in A?
Legally that is an indirect buyback; people do this already, and it just increases the stake of the remaining (human) shareholders like a regular buyback does. Ultimately only humans can own things; corporate-owned things are indirectly owned by the shareholders. I don't expect (existing) governments to grant AIs ownership rights or personhood before their opinion becomes irrelevant.
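A worked example with invented numbers: suppose A holds 10% of B and B holds 20% of A, with the rest held directly by humans. Solving the look-through equations shows every cashflow still terminates with human shareholders; the cross-holding just concentrates their stakes.

Code:

# h_A = fraction of A's cashflows ultimately reaching humans, h_B likewise:
#   h_A = 0.80 + 0.20 * h_B   (humans hold 80% of A directly, B holds 20%)
#   h_B = 0.90 + 0.10 * h_A   (humans hold 90% of B directly, A holds 10%)
def look_through(direct_a=0.80, b_in_a=0.20, direct_b=0.90, a_in_b=0.10):
    # solve the 2x2 linear system by substitution
    h_a = (direct_a + b_in_a * direct_b) / (1 - b_in_a * a_in_b)
    h_b = direct_b + a_in_b * h_a
    return h_a, h_b

print(look_through())  # -> (1.0, 1.0): everything still ends with humans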
Channel72 wrote:However, I question whether an AI would necessarily be much better than a human at "fuzzy-judgment-call" type decision making, like the executive decisions CEOs make. All in all, the Universe has so many interdependent variables and is so complex that AIs and humans alike should have trouble making predictive models, especially of financial markets.
Humans are just awful as general AIs. This isn't an emotional statement; humans are quite obviously just over the threshold of general intelligence. Human civilisation is a 'hard takeoff' from an evolutionary standpoint; we have gone from an odd kind of ape to completely transforming the biosphere, doing ridiculously impossible things (for normal animals; could a bird evolve to fly to the moon?) and causing a mass extinction event... all in the blink of an eye, in evolutionary time. If we were left to evolve for an evolutionarily meaningful amount of time, e.g. a few million years, it's likely we'd become a lot more intelligent (if only due to intra-human competition). But we won't be, because (barring end-of-civilisation events) human cultural evolution will be bypassed by a similar kind of phase change: the digital evolution of self-modifying artificial intelligence.
cosmicalstorm wrote:The human brain is so hideously constrained by biology that almost anything would be better.
Exactly; the scope of things that decently competent general AIs can usefully model is obviously not infinite, but so vastly greater than anything a human could manage that this isn't a meaningful objection.
Simon_Jester wrote:In particular, ones that insure against the unpredictable, and which cause it to make plans that are 'robust' with respect to random events disrupting the plan.
Coding that heuristic is relatively easy. The difficult part (here in the pre-AGI era) is solving the frame problem enough that a reasonably large set of events can be considered 'non-random'.
Simon_Jester wrote:The computer can remember a trillion things, but many things happening around it are inherently unpredictable
Predictability isn't binary; in fact, treating it as binary is yet another human bias forced by hardware limitations. Almost everything is predictable to some degree, i.e. has a non-uniform probability distribution. Efficient goal-seeking behavior looks for, and seeks to create, useful peaks in the global joint probability distribution, i.e. high likelihoods of something good happening. Humans have enough problems trying to reason about single-event probabilities, never mind chaining and combining millions of arbitrary multidimensional PDs. Even without getting that advanced, vanilla contemporary financial software benefits greatly from being able to exactly simulate a billion possible market futures (and summarise them), while a human trader can only imagine a handful.
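A toy version of 'simulate many possible market futures and summarise them' (far fewer than a billion paths here, and the drift and volatility numbers are invented):

Code:

import random
import statistics

def simulate_final_prices(paths=10_000, days=250, start=100.0,
                          daily_drift=0.0002, daily_vol=0.01, seed=0):
    # One path = 250 daily multiplicative returns; invented drift/vol.
    rng = random.Random(seed)
    finals = []
    for _ in range(paths):
        price = start
        for _ in range(days):
            price *= 1.0 + rng.gauss(daily_drift, daily_vol)
        finals.append(price)
    return sorted(finals)

# Summarise the distribution of outcomes instead of imagining a handful:
finals = simulate_final_prices()
print("median:", round(statistics.median(finals), 2))
print("5th percentile:", round(finals[len(finals) // 20], 2))
print("95th percentile:", round(finals[19 * len(finals) // 20], 2))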
Lagmonster wrote: If you start thinking about a machine that only has to run Bob's Corner Gas Station and Discount Jerky Emporium, your requirements fall drastically. More so when you are willing to accept a pass/fail rate as good as for the average human small business owner in the first place.
Exactly, and this is frankly a more sensible topic for general debate, because transhuman AGI is so counterintuitive. As noted above, the software doesn't even have to be able to run BCGS&DJE on its own; it just has to manage day-to-day with occasional help from cheap teleoperators in offshore robo-management centers.
Elheru Aran wrote:The degree of electronics on everything would be daunting, though.
You say 'daunting'; a thousand Silicon Valley start-ups say the time for the 'Internet of Things' is now. Remember, you don't need local intelligence; you just need sensors, effectors and a 10-cent wi-fi chip.
Darmalus
Jedi Master
Posts: 1131
Joined: 2007-06-16 09:28am
Location: Mountain View, California

Re: Automated Economy: A Futurist Conversation

Post by Darmalus »

All you need for an Auto-BCGS&DJE is a mess of RFID chips (smaller than a grain of sand and measured in RFIDs per cent these days I believe) and sensors accurate enough to know which customer grabbed what (assuming multiple simultaneous customers) and charge them appropriately. You could probably make it so you swipe your card (or retina) at the door, go in, grab anything you want, walk out and get your receipt. You could probably pull it off today.
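A sketch of the event flow (the prices, IDs and sensor-fusion step are all invented; attributing who grabbed what is the hard part): identify the customer at the door, attribute shelf 'pick' events to them, and bill on exit.

Code:

PRICES = {"soda": 1.50, "jerky": 6.00}

class Store:
    def __init__(self):
        self.open_tabs = {}          # customer_id -> list of items

    def door_swipe(self, customer_id):
        # card (or retina) swipe at the door opens a tab
        self.open_tabs[customer_id] = []

    def shelf_pick(self, customer_id, item):
        # in reality: RFID + sensor fusion decides who grabbed what
        self.open_tabs[customer_id].append(item)

    def exit(self, customer_id):
        # close the tab and issue the receipt on the way out
        items = self.open_tabs.pop(customer_id)
        total = sum(PRICES[i] for i in items)
        return f"receipt for {customer_id}: {items}, ${total:.2f}"

store = Store()
store.door_swipe("alice")
store.shelf_pick("alice", "soda")
store.shelf_pick("alice", "jerky")
print(store.exit("alice"))   # receipt for alice: ['soda', 'jerky'], $7.50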
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Automated Economy: A Futurist Conversation

Post by Simon_Jester »

Starglider wrote:Although to be fair, the one-order-of-magnitude distinction between centuries and decades (versus years at the other extreme; hardware sufficient for seed AI already exists) is not really important. The difference is mostly in whether you appreciate the vastly faster speed of digital evolution (there is no room for debate on this if you have a clue) and whether you think physical infrastructure scaling will speed up massively with mature nanotechnology (there is some room for debate on this).
Personally, I'm skeptical of this bit. And more than a little skeptical, possibly without justification, that an AI will be able to take advantage of arbitrary amounts of new hardware to bootstrap itself from 'humanish' to 'godlike intelligence.'
Simon_Jester wrote:you have a big computer bank that remote-controls a lot of equipment
Starglider wrote:Actually you probably have a virtual cluster in the global cloud leasing compute resources from a disaster-tolerant selection of whichever providers / data centers are cheapest that month, but that's nitpicking :)
True, though it's a relevant nitpick, because it undermines the obvious counter to an AI: threatening its physical ability to survive if it starts doing things we'd regret.
Simon_Jester wrote:I am more than a little skeptical of this, mostly because I suspect the limits of available physical infrastructure
Starglider wrote:This is an interesting debate to have with the nanorobotics enthusiasts, i.e. how powerful nanotechnology can actually be and how quickly it can be created (even given superintelligence). It's not my field, so I'm no better qualified to have it than any other interested amateur. However, from an AGI safety perspective, the conservative assumption is that the gun is loaded, i.e. that 'sufficiently powerful' nanotechnology can be bootstrapped by a hostile entity quicker than we could possibly detect, regulate or shut it down.
As long as the technology to use nanotech to make things can be monitored, if only in the sense of "hey, is this thing on, and Fred, go take a look at what it's doing," that is likely to be a good way to at least mitigate the risk of disaster.

If that stops being true, all bets are off I guess.
Simon_Jester wrote:If an AI could do that, its potential to fulfill its goals is limited only by hard physical obstacles, or by the prospect of violent attack against its own physical infrastructure.
Starglider wrote:I can say with complete confidence that 99% of existing computer security is effectively meaningless to a superintelligent (or even parallel-scalable human intelligence) general AI. So unless you managed to code in a stable goal of not taking over other hardware, 'the physical infrastructure' quickly becomes 'pretty much every internet connected device on earth'.
This is true unless someone gets clever and paranoid and does something effective -- the old "trap a genius in a pit and he's at least going to be slowed down figuring out a way out" approach.

Not making bets on the odds of cleverness and paranoia reigning here.
Starglider wrote:This is, incidentally, one of the many, many reasons why AI is so counterintuitive. Right out of the gate you have failed if you're imagining a 'fast food AI' as a chimp made out of plastic and servos, rather than a distributed digital intelligence with tens of thousands of assorted peripherals, managing thousands of simultaneous conversations at varying levels of 'attention'. This isn't superintelligence; this is just a typical enterprise software system. And it is the jumping-off point for the really crazy stuff.
The classic problem is that those of us who don't actually work with complex computer systems still think of software as running on one machine, not on networks of machines. It doesn't help that an AI running in a single isolated box is at least conceivable enough that we can grasp the concept and assume (probably wrongly) that we 'get' it.
Elheru Aran wrote:I don't see humanity being willing to relinquish authority to computers
Starglider wrote:Well, you're not working in investment banking then, because I see it every day. That's why we keep firing traders, in fact. Even the quants are now all 'quant-developers'.
As I understand it, investment banking basically consists of a relatively small number of very rich assholes who run the company because they want a magic machine that creates billions of dollars out of nothing (or at least, out of nothing that costs them anything). Everything else is part of the magic machine. And the machine is made out of essentially expendable pieces (contracts, people, computers -- whatever, when you're half-psychopath at best).

They want to think they're in charge of the machine, because that, plus the money, is what motivates them. And with people that works, because you can bully them.

Then again, it's not like this isn't a good model of a lot of human enterprises, many of which could easily run away from their owners without the owners even having the technical background to grasp what the hell is happening to them. Arguably, the best simple parable for the dangers of AI is that of the sorcerer's apprentice...

[Note that I said simple parable, not accurate or with the 'right' ending]
Simon_Jester wrote:In particular, ones that insure against the unpredictable, and which cause it to make plans that are 'robust' with respect to random events disrupting the plan.
Starglider wrote:Coding that heuristic is relatively easy. The difficult part (here in the pre-AGI era) is solving the frame problem enough that a reasonably large set of events can be considered 'non-random'.
Fair enough.
This space dedicated to Vasily Arkhipov
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: Automated Economy: A Futurist Conversation

Post by Channel72 »

Starglider, isn't it a pretty big assumption that an AGI would even be selfish in any sense? Humans probably fear it would be, because our evolutionary roots bias us to think of selfishness as fundamental to goal-seeking. Biological evolution is an algorithm that (mostly) selects for selfishness (self-survival as a fundamental goal), resulting in a feedback loop that favors selfish survivalists (whether we're talking in terms of individual organisms or groups of organisms).

But why would an AGI necessarily even value itself over other entities? Its entire concept of self may be nothing more than a reference point. You seem to be very afraid that its goal system will spiral out of control to the point where it decides "SURVIVAL OF THE SELF AT ALL COSTS AND KILL EVERYONE ELSE". But it's just as likely it would see itself as expendable -- as merely a node in a graph of other entities, without any concept of self-worth beyond its immediate goals. Indeed, if its goal system spirals out of control, it may conclude that some meta-goal is in fact unattainable, or too expensive to attain, and simply self-terminate. I mean, there's really no logical link between "general intelligence" and "selfishness"; it's just a human conceit, because our evolutionary roots make it difficult to even conceive of being any other way.

Also, I don't understand why a (potentially) hostile AI couldn't be constrained by hardware restraints (like NX bits or whatever) -- I mean, just have the OS segfault the damn thing if it starts having ambitions of taking over the world.