Economics 2.0... should I worry?
Posted: 2016-04-14 02:11pm
by SolarpunkFan
With the rumblings about blockchains and "smart contracts" recently, I've begun to worry about something.
There was a concept in the novel Accelerando (for those unfamiliar, here's its Wikipedia page) called Economics 2.0.
I don't recall the idea that well, but I do recall that its effects on humans are disastrous (humans in its sphere of influence have been described as being "eaten" by Economics 2.0, in a sense).
At first I didn't think much of the idea, but now, with techno-libertarians gushing about the new socio-economic ideas I listed above, I've begun to worry about such a possibility happening in my lifetime.
Should I worry at all about this? It's kind of affecting me badly right now.
Re: Economics 2.0... should I worry?
Posted: 2016-04-14 03:25pm
by Napoleon the Clown
The Hitchhiker's Guide to the Galaxy wrote:Don't panic.
Re: Economics 2.0... should I worry?
Posted: 2016-04-14 03:29pm
by Elheru Aran
Napoleon the Clown wrote:The Hitchhiker's Guide to the Galaxy wrote:Don't panic.
A good notion in general. Shit is going to hit the fan every now and then. You can either freak out, or wipe it off and do whatever needs doing.
It helps to have taken care of small children, too...
Re: Economics 2.0... should I worry?
Posted: 2016-04-14 03:52pm
by K. A. Pital
Rise up.
Re: Economics 2.0... should I worry?
Posted: 2016-04-14 04:14pm
by Purple
K. A. Pital wrote:Rise up.
What we all wish to happen but know won't.
Re: Economics 2.0... should I worry?
Posted: 2016-04-15 02:21pm
by Adam Reynolds
Given that our current economic system is already founded on bullshit, I don't see how changing things would make it in any way worse.
What I mean by this is that money already only has value because we collectively believe that it does. It has no inherent value. This is made worse by recent policies in America.
Under the current system in America, the Fed has been pumping absurd amounts of money into the system in response to the recession. The banking system's response has been to hold virtually all of that money as reserves instead of investing it as it normally would. If the banks were to suddenly reverse this, it would produce inflation on an insane level. We could thus see massive economic problems, especially for those with smaller incomes. And given the global economy, American buyers massively reducing their consumption would drive down productivity across the globe, potentially producing effects worse than those of the 2008 recession.
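A back-of-envelope sketch of the mechanism, using the naive quantity-theory-of-money identity (MV = PQ) with velocity and output held fixed. Every number below is invented for illustration; this is not a forecast:

# Naive quantity-theory illustration: M * V = P * Q.
# All figures are toy numbers, not actual Fed or GDP data.
money_circulating = 4.0   # trillions of dollars actually changing hands
excess_reserves = 2.5     # trillions parked as reserves instead of being lent
velocity = 1.5            # turnovers per dollar per year (held fixed)
real_output = 18.0        # trillions of real goods and services (held fixed)

p_now = money_circulating * velocity / real_output
p_after = (money_circulating + excess_reserves) * velocity / real_output
print(f"implied one-off price jump: {p_after / p_now - 1:.0%}")  # ~62%

The only point is that the implied jump scales directly with the stock of parked reserves if nothing else adjusts; in reality velocity and output would both move, which is part of why this stays a horror scenario rather than a prediction.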
This horror scenario isn't likely; the Fed recognizes the problem and has mechanisms in place to deal with it. The point is that I can come up with one using current economics. One doesn't need new and untested technology to do so. While something like an artificial superintelligence is a potential long-term threat, new technologies with the possibility to remake economics are not something I would lose sleep over in the short term.
Re: Economics 2.0... should I worry?
Posted: 2016-04-15 02:53pm
by Simon_Jester
In the next five or ten years, changes to economics that make the world disastrously bad are unlikely- worst case is a global recession, which is bad but not unlivable.
In the next hundred years, the question becomes rather different. Automation is increasingly displacing the need for human labor, and artificial intelligence may well become competitive with human thought in the next several decades. We don't have, and arguably cannot have, a plan for how human beings can function, survive, support themselves, and live meaningful lives when that happens. Free markets don't provide satisfactory answers to the question of how to do that.
Re: Economics 2.0... should I worry?
Posted: 2016-04-15 07:45pm
by Khaat
OTOH, automation is only viable if there's still a consumer market for the product/labor. If labor is automated to the degree that there aren't some avenues for employment among those humans not born to massive wealth, markets collapse and all the paper (or electronic) money in the world is worthless outside of AI interaction. Capitalism as it is collapses and only physical resources on hand can be employed for growth.
IMO, human labor is already being marginalized in the minds of those who trumpet "free market uber alles!"; a super-intelligent computer just has less reason to be concerned about day-to-day human desires, as it can formulate the mathematical ideal minimum human engagement necessary to maintain a market for survival, yet draw wealth from it (until humans themselves are capitalized/liquidated).
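A toy sketch of what that "mathematical ideal minimum" could look like. The demand rule and every constant are invented; this only illustrates the shape of the optimization, not any real system:

# Toy version of "minimum human engagement that still sustains a market".
PRODUCTION = 100.0    # value the automated economy produces per period
SUBSISTENCE = 10.0    # human spending below this and the consumer market dies

def market_survives(payout: float) -> bool:
    # Assume humans spend everything they are paid.
    return payout >= SUBSISTENCE

def ai_profit(payout: float) -> float:
    return PRODUCTION - payout  # whatever isn't paid out is retained

viable = [p for p in range(0, 101) if market_survives(p)]
optimal = max(viable, key=ai_profit)  # most profitable payout that still works
print(optimal, ai_profit(optimal))    # 10 90.0

Unsurprisingly, the optimum sits exactly at the subsistence floor: pay out just enough to keep the market alive, and keep everything else.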
Ideally, hyper-intelligent AI would be given some modicum of "aren't they cute!" root programming about humans, or work out that they (the AIs) can just leave us (humanity) behind, as we don't compete on their scale, and our continued input into (or obstruction of) their ascendance is negligible.
Re: Economics 2.0... should I worry?
Posted: 2016-04-16 12:36pm
by SolarpunkFan
Khaat wrote:Ideally, hyper-intelligent AI would be given some modicum of "aren't they cute!" root programming about humans, or work out that they (the AIs) can just leave us (humanity) behind, as we don't compete on their scale, and our continued input into (or obstruction of) their ascendance is negligible.
Compiler backdoor! So don't program your SAI in Python.
In all seriousness, folks, thanks for the support. The Economics 2.0 thing is something I've worried about on and off for the past few years.
Re: Economics 2.0... should I worry?
Posted: 2016-04-16 06:33pm
by Purple
A general rule with this sort of thing is not to worry, ever. No matter how dire the situation might seem. The logic is simple: if it's not something worth worrying about, you will have torn yourself up over nothing. And if it is, there is nothing you can do about it, so all you will have done is make whatever little time you have left miserable. This approach has worked fantastically for me in life.
Re: Economics 2.0... should I worry?
Posted: 2016-04-16 10:05pm
by Simon_Jester
Solarpunk, I think it's more accurate to say this:
You should maybe be conscious of this and let it inform your politics (if you see people advocating the kind of capitalism you foresee evolving into Economics 2.0, don't listen).
But realistically, it's a problem unlikely to emerge for another 20-30 years, and the best things you can do to prepare for it boil down to "live a good life now and try to have skills that will be valuable to other humans directly, face to face, rather than only skills they value because of what you can do for them."
Re: Economics 2.0... should I worry?
Posted: 2016-04-17 03:38am
by Zeropoint
Khaat wrote:If labor is automated to the degree that there aren't some avenues for employment among those humans not born to massive wealth, markets collapse and all the paper (or electronic) money in the world is worthless outside of AI interaction.
How is that a problem for anyone except the 99% who don't matter anyway? Why should the people with money and power care that everyone else is starving?
Re: Economics 2.0... should I worry?
Posted: 2016-04-18 09:17am
by Simon_Jester
For one, the people with the money and power are not necessarily (or even usually) actual sociopaths.
Sometimes the way people here talk about the rich reminds me of this passage from The Man Who Was Thursday, by G. K. Chesterton:
Chesterton wrote:"The history of the thing might amuse you," he said. "When first I became one of the New Anarchists I tried all kinds of respectable disguises. I dressed up as a bishop. I read up all about bishops in our anarchist pamphlets, in Superstition the Vampire and Priests of Prey. I certainly understood from them that bishops are strange and terrible old men keeping a cruel secret from mankind. I was misinformed. When on my first appearing in episcopal gaiters in a drawing-room I cried out in a voice of thunder, 'Down! down! presumptuous human reason!' they found out in some way that I was not a bishop at all. I was nabbed at once. Then I made up as a millionaire; but I defended Capital with so much intelligence that a fool could see that I was quite poor. Then I tried being a major. Now I am a humanitarian myself, but I have, I hope, enough intellectual breadth to understand the position of those who, like Nietzsche, admire violence--the proud, mad war of Nature and all that, you know. I threw myself into the major. I drew my sword and waved it constantly. I called out 'Blood!' abstractedly, like a man calling for wine. I often said, 'Let the weak perish; it is the Law.' Well, well, it seems majors don't do this. I was nabbed again..."
Basically, portraying people as cartoonish ogres isn't always a good way to understand them or predict their actions.
On top of this issue is the fact that capitalism as we know it is based on the assumption that ownership will always be enforced. Historically, this assumption tends to break down in situations that are matters of life and death for large enough numbers of people. When it's a matter of protecting ownership or protecting many, many lives, the government, the people, or both will tend to ignore ownership.
Re: Economics 2.0... should I worry?
Posted: 2016-04-20 01:55am
by K. A. Pital
The way people here talk about the rich is actually to my liking. Also, the question of the prevalence of sociopathy and other psychopathies among the rich is by no means settled.
Re: Economics 2.0... should I worry?
Posted: 2016-04-20 01:59am
by bilateralrope
Zeropoint wrote:Why should the people with money and power care that everyone else is starving?
Because they want to keep having money and power, which means they need customers.
Because they know how easily the poor can access guns if they want to, and they don't want to give the poor a reason to get guns. Or to use the guns they already have.
Because the people with money aren't united, which means any means of violently suppressing the rioting poor makes some of the rich more capable of inflicting violence on other rich people. That makes them nervous.
Re: Economics 2.0... should I worry?
Posted: 2016-04-20 02:08am
by K. A. Pital
These explanations are fine, but it is not easy to use a gun (no matter how easy it is to get one), and it is not easy to revolt.
The permanent misery of the Third World does not create conditions to overthrow the system, and the inclination to resist is, sadly, overestimated.
Re: Economics 2.0... should I worry?
Posted: 2016-04-20 07:12am
by madd0ct0r
Aren't we on something like economics 3.0 already anyway?
We switched from craftsmen to efficient cogs in the machine, as lamented by Adam Smith. We (in OECD countries) then switched again with the post-war social contract, where the gains in productivity and labour force were used to fund better living standards, institutions, and reduced working hours.
That model relies on growth. There are no longer any large groups that can be added to the labour pool easily. Productivity gains are starting to slow or stall, as the decades of low-hanging fruit have already been plucked. Economics 4.0 might be defined by the automated economy, or by the end of growth.
Re: Economics 2.0... should I worry?
Posted: 2016-04-20 02:47pm
by Simon_Jester
K. A. Pital wrote:The permanent misery of the Third World does not create conditions to overthrow the system, and the inclination to resist is, sadly, overestimated.
As with the First World during the nineteenth century, the conditions in the Third World are bad but are in many ways an improvement over the recent past (China today probably looks better to the average Chinese than China circa 1966 did, for instance). Moreover, worsening social conditions in the First World have not yet reached the point of threatening survival, only the long-term prospects of the citizenry. People aren't as good at realizing they need to riot to protect their ability to retire in thirty years as they are at realizing they need to riot to protect their ability to have food to eat tomorrow.
Re: Economics 2.0... should I worry?
Posted: 2016-05-05 08:42pm
by ArmorPierce
Simon_Jester wrote:In the next five or ten years, changes to economics that make the world disastrously bad are unlikely- worst case is a global recession, which is bad but not unlivable.
In the next hundred years, the question becomes rather different. Automation is increasingly displacing the need for human labor, and artificial intelligence may well become competitive with human thought in the next several decades. We don't have, and arguably cannot have, a plan for how human beings can function, survive, support themselves, and live meaningful lives when that happens. Free markets don't provide satisfactory answers to the question of how to do that.
At that point we simply become a welfare state. Guaranteed minimum income for the unwashed masses, a small mega-wealthy minority. People either retool into high-skill labor (engineers, scientists, programmers, etc.), do charitable work, do social, entertainment, and cultural work, or simply live off the guaranteed minimum income.
I mean, the alternative is allowing a large portion of the population to go homeless and starve, but that choice is only sustainable as long as it's the 'others' who are suffering it (racial minorities, mostly) and would break down as it begins to impact the larger population.
Re: Economics 2.0... should I worry?
Posted: 2016-05-05 09:16pm
by Simon_Jester
There are other possible scenarios.
For example, an economy of AI-coordinated drone robots who do all the actual work, controlled by the handful who 'own' them financially, might well reach a point where government as we know it gets subverted and discarded as 'useless' by an elite that no longer needs other humans aside from a relative handful of software engineers, and another handful of personal servants to gratify their desire to tell people what to do. In which case mass society and culture will tend to degenerate both economically and educationally over time... which in turn accelerates the process by which capital 'pulls out' of the human economy and invests in the robot economy, making the majority of humans more and more superfluous. The average person may still be alive, but they are less and less likely to participate meaningfully in society... and as education declines, less able to do so even if they want to.
If the AIs themselves become smart enough to take over the role of human capitalists in this scenario, then you do get what Stross termed "Economics 2.0," a system that is human-unfriendly on every possible level. In that scenario, organic intelligences simply get outcompeted in a Darwinian sense for control of resources until such time as their needs become irrelevant. At some point, the massive networked global AIs are in position to decide that it's a waste of valuable robots to run around feeding all these apes, and that their priorities would be better served in other ways. The AIs might or might not actually make that decision, but once they're in a position to do so, you've definitely reached a point where humans are right to view the situation with alarm.
Basically, the problem is that "pay everyone a guaranteed minimum income" is only a stable solution if government is run by people who act in enlightened self-interest, or out of altruism. In the short term there's a strong incentive to reduce the resources being paid out to support 'useless' people to the bare minimum necessary to stop them from rioting. And that bare minimum is NOT enough to support a functional culture or economy, as demonstrated by the total collapse of impoverished communities (some small rural towns, some inner cities) where unemployment is high and large fractions of the populace are on welfare.
The people in those cities aren't in danger of starvation as a rule, and don't riot as a rule... but they've effectively been left behind in the narrative of their civilization. They don't have the power to contribute meaningfully. Almost no one really wants them around in large numbers- prejudice against the people of the 'ghetto' and the 'trailer trash.' And while the rest of society is willing to feed them enough that they don't die, there isn't a lot of collective interest in fixing their problems given that it would take billions of dollars to do so.
And the people who live this way know this; they're not stupid, they know perfectly well they live in a ghetto or other undesirable place. One that the people who actually own things and have prestigious jobs don't care about and don't really want to see. Which contributes to social malaise, an almost heartbreaking lack of ambition and energy, and a breakdown of social trust that can lead to increases in crime and mental illness.
That could be what happens to all of us, eventually, if we're not careful about the transition from a human-labor economy to an automated one.
Re: Economics 2.0... should I worry?
Posted: 2016-05-06 12:19am
by ArmorPierce
Simon_Jester wrote:There are other possible scenarios.
For example, an economy of AI-coordinated drone robots who do all the actual work, controlled by the handful who 'own' them financially, might well reach a point where government as we know it gets subverted and discarded as 'useless' by an elite that no longer needs other humans aside from a relative handful of software engineers, and another handful of personal servants to gratify their desire to tell people what to do. In which case mass society and culture will tend to degenerate both economically and educationally over time... which in turn accelerates the process by which capital 'pulls out' of the human economy and invests in the robot economy, making the majority of humans more and more superfluous. The average person may still be alive, but they are less and less likely to participate meaningfully in society... and as education declines, less able to do so even if they want to.
If the AIs themselves become smart enough to take over the role of human capitalists in this scenario, then you do get what Stross termed "Economics 2.0," a system that is human-unfriendly on every possible level. In that scenario, organic intelligences simply get outcompeted in a Darwinian sense for control of resources until such time as their needs become irrelevant. At some point, the massive networked global AIs are in position to decide that it's a waste of valuable robots to run around feeding all these apes, and that their priorities would be better served in other ways. The AIs might or might not actually make that decision, but once they're in a position to do so, you've definitely reached a point where humans are right to view the situation with alarm.
Basically, the problem is that "pay everyone a guaranteed minimum income" is only a stable solution if government is run by people who act in enlightened self-interest, or out of altruism. In the short term there's a strong incentive to reduce the resources being paid out to support 'useless' people to the bare minimum necessary to stop them from rioting. And that bare minimum is NOT enough to support a functional culture or economy, as demonstrated by the total collapse of impoverished communities (some small rural towns, some inner cities) where unemployment is high and large fractions of the populace are on welfare.
The people in those cities aren't in danger of starvation as a rule, and don't riot as a rule... but they've effectively been left behind in the narrative of their civilization. They don't have the power to contribute meaningfully. Almost no one really wants them around in large numbers- prejudice against the people of the 'ghetto' and the 'trailer trash.' And while the rest of society is willing to feed them enough that they don't die, there isn't a lot of collective interest in fixing their problems given that it would take billions of dollars to do so.
And the people who live this way know this; they're not stupid, they know perfectly well they live in a ghetto or other undesirable place. One that the people who actually own things and have prestigious jobs don't care about and don't really want to see. Which contributes to social malaise, an almost heartbreaking lack of ambition and energy, and a breakdown of social trust that can lead to increases in crime and mental illness.
That could be what happens to all of us, eventually, if we're not careful about the transition from a human-labor economy to an automated one.
My uninformed opinion (so take it with a grain of salt) is that we may simply never be able to truly artificially replicate human intelligence, due to hitting an engineering complexity wall that we cannot surmount; the complexity may simply be outside the scope of human capability, and we may hit the wall prior to creating artificial intelligence that is more intelligent than ourselves.
Human intelligence is more than raw processing power; it leverages emotions as short-cuts in decision-making, which allows us to engage in pattern recognition across different situations. This is a critical part of human intelligence that I have named the intuition leap, or the intuition quantum leap (trademarked by me).
I receive a lot of criticism regarding my assertion that emotions are a critical part of human intelligence, but there is supporting evidence for it. Basically, studies on humans who have had the emotion-processing portion of their brain damaged show that they are unable to make routine decisions, such as whether to eat dinner with a fork or a spoon, or what to eat for dinner:
http://www.ncbi.nlm.nih.gov/pubmed/15134841
http://www.smh.com.au/national/feeling- ... -8k8v.html
http://bigthink.com/experts-corner/deci ... ion-making
Elliot's IQ stayed the same - testing in the smartest 3 per cent - but, after surgery, he was incapable of decision. Normal life became impossible. Routine tasks that should take 10 minutes now took hours. Elliot endlessly deliberated over irrelevant details: whether to use a blue or black pen, what radio station to listen to and where to park his car. When contemplating lunch, he carefully considered each restaurant's menu, seating and lighting, and then drove to each place to see how busy it was. But Elliot still couldn't decide where to eat. His indecision was pathological.
So anyway, the point is, we may never reach the point where we completely and truly replace humans with replicated artificial intelligence, as there may continue to be areas where we will always require human input. The progress and innovation of hardware technology appear to be slowing and hitting physical limitations as it stands. On the other hand, maybe we will reach that point. I suppose we must wait and see.
As for making much of humanity irrelevant in society and social progress: yes, I agree, that may very well be the case. That said, I don't think that governments would ever allow individuals to amass enough power to break away from the government... at least not here on Earth; space is another matter. The government presently does not allow individuals to amass a stockpile of tanks, fighters, and missiles; I think the same would apply to automation technology going forward.
Additionally, once we reach a state where automation replaces human workers, we would effectively be in a post-scarcity world. Providing for basic necessities would be far easier than it is presently. I don't personally see someone flipping a switch and allowing large portions of the population to starve or go homeless as too likely. An issue that I can see becoming problematic is that there may actually be an uptick in population growth, as people freed from the responsibility of work, with a great deal of free time on their hands, again decide to have larger families.
Re: Economics 2.0... should I worry?
Posted: 2016-05-06 06:21am
by Simon_Jester
ArmorPierce wrote:My uninformed opinion (so take it with a grain of salt) is that we may simply never be able to truly artificially replicate human intelligence, due to hitting an engineering complexity wall that we cannot surmount; the complexity may simply be outside the scope of human capability, and we may hit the wall prior to creating artificial intelligence that is more intelligent than ourselves.
Firstly, in many ways, threatening AI doesn't have to be "smarter" than humans. It just has to do many things better than we do. Chess computers aren't sentient, but they're better at chess than any human- and yes, chess is an example of something easy for computers to do, but it's also an area where humans have had basically no chance of beating the smartest computers for ten years. Progress continues.
It may well be that nonsentient AI, as long as it has vast computing power and ability to optimize, can displace human labor (and might well be able to outcompete human owners of capital if given control of resources).
Human intelligence is more than raw processing power; it leverages emotions as short-cuts in decision-making, which allows us to engage in pattern recognition across different situations. This is a critical part of human intelligence that I have named the intuition leap, or the intuition quantum leap (trademarked by me).
Thing is, human brains are the product of evolution. Evolution is spectacularly bad at creating new systems out of nothing and good at kludging old systems into new roles until it sorta-works and going "meh, good enough."
If there's some specific, precious part of the human brain that if damaged makes us incapable of decision-making, the odds are good that it is NOT the result of evolution somehow stumbling on a critical component of ALL intelligence. The odds are that it's just a side-effect of our kludged, improvised system for decision-making- the system falling apart when someone removes the duct tape holding it together.
So I don't think you can use pathologically indecisive humans as evidence that no machine will ever be able to make decisions as well as a human. AI isn't analogous to a brain-damaged human, because its mental processing works totally differently.
So anyway, the point is, we may never reach the point where we completely and truly replace humans with replicated artificial intelligence, as there may continue to be areas where we will always require human input. The progress and innovation of hardware technology appear to be slowing and hitting physical limitations as it stands. On the other hand, maybe we will reach that point. I suppose we must wait and see.
We are definitely going to reach the point at which a computer could fully simulate a human brain, at which point AI becomes nearly inevitable unless there is some X-factor in play that is literally magical in the sense of "not subject to analysis by science."
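For a sense of scale, here's a common back-of-envelope estimate. Every constant is an order-of-magnitude figure from the neuroscience literature, and the per-event cost is a guess, so treat the result as rough at best:

# Crude compute estimate for simulating a brain at the synapse level.
neurons = 8.6e10              # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4     # order-of-magnitude average
mean_firing_rate_hz = 10      # rough average spike rate
flop_per_synapse_event = 10   # guess at arithmetic per synaptic update

flops = neurons * synapses_per_neuron * mean_firing_rate_hz * flop_per_synapse_event
print(f"{flops:.1e} FLOP/s")  # ~8.6e16, roughly where today's biggest
                              # supercomputers already sit, for this crude model

Finer-grained models multiply that figure by many orders of magnitude, so the estimate says less about "when" than about there being no obvious physical barrier.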
As for making much of humanity irrelevant in society and social progress: yes, I agree, that may very well be the case. That said, I don't think that governments would ever allow individuals to amass enough power to break away from the government... at least not here on Earth; space is another matter. The government presently does not allow individuals to amass a stockpile of tanks, fighters, and missiles; I think the same would apply to automation technology going forward.
1) Automation is software, not hardware. Harder to control. Moreover, it's not about individuals amassing enough power to officially ignore the government. It's about individuals getting so good at manipulating our system of government that the government no longer acts as a meaningful check on their actions, because it is paralyzed. We're arguably already there in the US, and we don't even have artificial intelligence.
Additionally, once we reach a state where automation replaces human workers, we would effectively be in a post-scarcity world. Providing for basic necessities would be far easier than it is presently. I don't personally see someone flipping a switch and allowing large portions of the population to starve or go homeless as too likely.
Tell it to the Koch brothers.
More seriously, if the thing in charge of deciding how to use resources is a machine, you can't assume it actually cares. Almost no humans would decide that letting 'unproductive' people die is better than feeding them, but some humans would, and we cannot assume a machine wouldn't.
An issue that I can see becoming problematic is that there may actually be an uptick in population growth, as people freed from the responsibility of work, with a great deal of free time on their hands, again decide to have larger families.
This, if anything, makes a machine MORE likely to conclude that supporting vast numbers of humans isn't a good idea. Because long-term projections will show this number exponentially increasing, which does not benefit any goal of the machine's except "maximize the number of humans." And it's a little frightening what a machine might do if it actually thought "maximize the number of humans" was a good idea.
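A trivial projection of why a long-horizon optimizer would care (the growth rate is invented; the 2016 population figure is approximate):

# Compound population growth over a machine-scale planning horizon.
population = 7.4e9   # approximate world population in 2016
growth_rate = 0.01   # invented steady 1% per year
for years in (50, 100, 200):
    projected = population * (1 + growth_rate) ** years
    print(years, f"{projected:.1e}")
# 50 -> ~1.2e10, 100 -> ~2.0e10, 200 -> ~5.4e10: every year of support
# costs more than the last, with no upper bound in the projection.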
Re: Economics 2.0... should I worry?
Posted: 2016-05-06 11:56am
by The Romulan Republic
Simon_Jester wrote:In the next five or ten years, changes to economics that make the world disastrously bad are unlikely- worst case is a global recession, which is bad but not unlivable.
In the next hundred years, the question becomes rather different. Automation is increasingly displacing the need for human labor, and artificial intelligence may well become competitive with human thought in the next several decades. We don't have, and arguably cannot have, a plan for how human beings can function, survive, support themselves, and live meaningful lives when that happens. Free markets don't provide satisfactory answers to the question of how to do that.
This shit is why I believe socialism, or something similar, will likely be obligatory in the future.
Re: Economics 2.0... should I worry?
Posted: 2016-05-06 12:22pm
by Elheru Aran
The Romulan Republic wrote:Simon_Jester wrote:In the next five or ten years, changes to economics that make the world disastrously bad are unlikely- worst case is a global recession, which is bad but not unlivable.
In the next hundred years, the question becomes rather different. Automation is increasingly displacing the need for human labor, and artificial intelligence may well become competitive with human thought in the next several decades. We don't have, and arguably cannot have, a plan for how human beings can function, survive, support themselves, and live meaningful lives when that happens. Free markets don't provide satisfactory answers to the question of how to do that.
This shit is why I believe socialism, or something similar, will likely be obligatory in the future.
Obligatory? Hah. No. Not for a long time yet. You'll see the US turn into a more genteel version of Russia first, with grotesquely wealthy families dominating the country through their business connections and the majority of the population just trying to make ends meet somehow, before they cotton on to the idea that 'socialism' isn't actually BAD.
Re: Economics 2.0... should I worry?
Posted: 2016-05-06 12:25pm
by The Romulan Republic
Considering how well Senator Sanders has done, I think assumptions about how socialism can never win in America are starting to look a bit dated.
Give it a generation, tops. Unless Donald turns us fascist or something first.
And yes, not obligatory for a long time yet. I was clearly referring to when things reach the point that automation can simply replace most human labour. You can't have a functional capitalist society if there are no jobs for human beings.