Automation and employment

Post by Starglider »

Stas Bush wrote:It's technically possible to shift workers to creative intellectual jobs.
There aren't that many genuinely creative intellectual jobs that can't be automated by advancements in artificial intelligence (without even needing human-level general AI). We'll eventually get those too (and by 'we' I mean the evil conspiracy of technologists determined to put everyone out of work, which I am a proud member of).
I've already said that solving the problem of "unnecessary people" due to automation can only be done through socialism.
Yes. Unfortunately we don't have to get to a 'post-scarcity economy' to run into serious problems with a capitalist system. Right now I prefer economies that are mostly capitalist with a moderate amount of socialism (e.g. nationalised healthcare). However, as automation progresses, the optimal amount of socialism increases (though not to the point of communism any time soon).
Uraniun235 wrote:Couldn't we start to run into limits of human intelligence? A lot of people are basically dumb; is it possible that there will in the future be a large underclass of people who are simply unable, despite the best possible methods of education, to master the skills and knowledge needed to be employable in a job that a robot could not perform?
Oh yes and I'd say we're already starting to hit this problem. Just wait until robots get cheap enough to start eliminating manual labour service jobs. Unskilled people may even have trouble getting employed as servants to rich people. Human servants steal and have affairs with your wife and go out sick and look ugly and talk back to you. At some point putting up with the limitations of robots becomes a better deal.
Stas Bush wrote:Creative jobs also require low-level technicians - for example, an architect only designs the basic outlines of a building - he needs a little army of ten technicians to flesh out the exact configuration of the building, decorations, interior design specs, etc.
This is exactly what advances in AI will automate. My own company is working on AI-based products that remove the need for legions of 'code monkeys'; our goal is that our customers will only need the systems architect to do the high level design and the business consultant to design the business logic and processes. The AI system autogenerates all the code, including interconnecting to data stores and legacy systems (and all the test harnesses), based on high level specs. Similar technology will soon be applicable to any sort of technological design process, and possibly to some creative design processes too (e.g. 3D modelling for special effects, already experiencing plenty of creeping automation).
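To give a concrete flavour of what I mean by 'autogenerates all the code', here is a deliberately toy sketch I've written for this post (not our actual product; the spec format and function names are invented): a declarative, architect-level spec gets expanded into SQL DDL plus a matching data accessor, the kind of boilerplate a 'code monkey' would otherwise write by hand.

[code]
# Minimal illustration of spec-driven code generation (invented example).

SPEC = {  # hypothetical high-level spec an architect might write
    "entity": "Invoice",
    "fields": {"id": "INTEGER", "customer": "TEXT", "total": "REAL"},
}

def generate_ddl(spec):
    """Emit a CREATE TABLE statement from the spec."""
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in spec["fields"].items())
    return f"CREATE TABLE {spec['entity'].lower()} ({cols});"

def generate_accessor(spec):
    """Emit the source of a simple insert function from the spec."""
    table = spec["entity"].lower()
    fields = ", ".join(spec["fields"])
    placeholders = ", ".join("?" for _ in spec["fields"])
    return (
        f"def insert_{table}(conn, {fields}):\n"
        f"    conn.execute(\"INSERT INTO {table} ({fields}) "
        f"VALUES ({placeholders})\", ({fields}))\n"
    )

print(generate_ddl(SPEC))       # the schema, generated
print(generate_accessor(SPEC))  # the accessor code, generated
[/code]

The real systems obviously go far beyond string templating (test harnesses, legacy integration, business rules), but the division of labour is the same: humans write the high-level spec, the machine writes the code.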
PeZook wrote:More academics is always good
Only a tiny fraction of the population is suited to go into academia, and guess what, they're mostly the same fraction that is still needed to do the other skilled, high-value jobs. But on the plus side, class sizes should decrease a bit, at least at grade school level.
PeZook wrote:Shipping them off to build offworld colonies may be the best solution Wink
Nice Blade Runner reference, but *sings* ~Robots Do It Better!~
Master of Ossus wrote:Henry Ford had the same concerns, but historically capital and labor have ALWAYS been complements.
Historically yes. That was before computers. It is becoming less and less true. Get with the program. No pun intended. :)
PeZook wrote:It was obvious the Flynn effect couldn't have continued indefinitely - there are severe biological limitations on the human brain, most notably the way it stores and processes information.
We may start to beat these in the near future with cybernetic interfacing, smart drugs and possibly even genetic engineering. But only the rich will be able to afford this technology, at least at first (and the best stuff will always be more expensive). So this makes the 'class divide' problem worse rather than better. Of course I advocate developing the technology anyway, because as a mad scientist I use essays on the 'precautionary principle' as toilet paper (with the sole, partial exception of general AI).
Admiral Valdemar wrote:The inventiveness of our species has peaked according to some studies, with less innovation going on for genuinely useful technology, instead, it's variations on things already invented.
Which studies are these? I suspect
a) their methodology is highly suspect and
b) if any effect is present, a lot of it is down to stupid IP laws
But keep in mind that almost any individual human worker already has inferior productivity to some other person
The differential is hard to measure in many fields, and political/management concerns often prevent it from being properly identified, rewarded and exploited (just look at the pathetic union resistance to performance-based pay). Automation and AI are just less hassle than humans in every way; you can work them 24/7, outages and running costs are highly predictable, and you can scrap them and replace them with something better at any time with no legal hassle.
While not applicable in the immediate future, even if self-replicating technology eventually allowed an almost-zero human workforce per million tons produced (with, say, quadrillions of tons of output), such wouldn't necessarily mean the end of employment.
No, but general AI would, and besides, such technology would eliminate a lot of jobs (such as everything to do with maintenance/repair/renovation/construction, probably most transport jobs as well, possibly most food service jobs if you can fab food at the molecular level) that aren't regarded as 'manufacturing' jobs.
Besides, if the self-replicating machines are not sapient, the industrial system needs the intellectual labor of humans to have maximum capabilities.
You can do a great deal without needing sapience. Most jobs don't require dealing with truly novel circumstances very often, so you just need a few human overseers.
If the self-replicating machines include sapient ones, then there may be a group of artificial people who don't strictly need any assistance from baseline homo sapiens people. Such doesn't necessarily rule out employment for the latter, though. In the current world, the group of people which calls itself the United States doesn't strictly need anything from the people in a number of tiny countries, but that doesn't prevent business arrangements.
If you have sapient self-replicating machines, both the traditional economic paradigm and, unless we are very careful and/or lucky, the desires of human beings are irrelevant.
A sufficiently advanced sapient AI or upgraded post-human individual might be better at doing any particular job than a baseline human.
No way, no how are you going to get sapient AIs competing in an economy versus humans. Your existing frames of reference are useless for predicting what will happen once recursively self-enhancing AI has been created. This is the very definition of a technological singularity.

Post by Admiral Valdemar »

Starglider wrote:
Which studies are these? I suspect
a) their methodology is highly suspect and
b) if any effect is present, a lot of it is down to stupid IP laws
His methodology has come under attack before, but whether or not his metric for gauging innovation is preferable, we're potentially seeing a stagnation in advancement compared to previous generations, which could be for several reasons: either we're too busy wasting time on consumerism with no added benefit to society, we're complacent with what we have right now, or we need a radical shift in what we innovate in order to reach new heights.

Of course, I wouldn't rule out legal mumbo jumbo harming how people go about creating new technology and so on. You can't use more than 30 seconds of a song without going through endless bureaucratic hoops, so I expect licensing technology is just as tedious today.

As an aside, if we're going to automate absolutely everything, what's left for the people to do at the end? Do we search the stars for less fortunate races to give them a nudge in the right direction?

Post by Starglider »

Admiral Valdemar wrote:As an aside, if we're going to automate absolutely everything, what's left for the people to do at the end?
Party.
Do we search the stars for less fortunate races to give them a nudge in the right direction?
Screw that Prime Directive crap, I say we go directly to uplifting them to transcendence. The absolute minimum I will accept is implanting them all with cerebral nanites that store and transmit their brain structures, so that they can all be resurrected later (when their civilisation makes it to transcendence). Oh, and of course stepping in if there are any existential disasters that look likely to wipe out their civilisation.

Personally I'd probably go for eliminating all serious pain and suffering too - if I had to keep that a secret I'd have people black out, get an AI to simulate their behaviour (puppeting their bodies) while they're being tortured or whatever, then insert some heavily filtered memories of it before bringing them back to consciousness. If their remaining life is going to be all suffering, e.g. they're in a concentration camp or similar, I'd just rapture their consciousness up to my orbiting cloaked starship and get the AI to puppet their bodies until they die. I may accept that species and societies should be allowed to develop up to interstellar capability (or more likely seed AI building capability) without being overtly contacted, to preserve diversity of the interstellar community or similar, but I see no reason that death and (great) suffering should be an acceptable cost of that.

Post by Admiral Valdemar »

Starglider wrote:
Party.
I think others would prefer some purpose. That's one of the central themes of the Culture novels and why Contact exists. Even space hippies with god-tech feel guilty and bored once in a while. The only thing worse than not getting what you wish for is getting it, as they say.
Screw that Prime Directive crap, I say we go directly to uplifting them to transcendence. The absolute minimum I will accept is implanting them all with cerebral nanites that store and transmit their brain structures, so that they can all be resurrected later (when their civilisation makes it to transcendence). Oh, and of course stepping in if there are any existential disasters that look likely to wipe out their civilisation.
Blergh. No Star Trek here; my comment was on what Contact does, which is the total opposite, given they aid less advanced (or advanced but warring) species rather than this Shadow-like social-Darwinian "leave 'em to their own devices" approach. I'd much rather share the love than force others to work their way up, genocides, famine and all that entails.
Personally I'd probably go for eliminating all serious pain and suffering too - if I had to keep that a secret I'd have people black out, get an AI to simulate their behaviour (puppeting their bodies) while they're being tortured or whatever, then insert some heavily filtered memories of it before bringing them back to consciousness. If their remaining life is going to be all suffering, e.g. they're in a concentration camp or similar, I'd just rapture their consciousness up to my orbiting cloaked starship and get the AI to puppet their bodies until they die. I may accept that species and societies should be allowed to develop up to interstellar capability (or more likely seed AI building capability) without being overtly contacted, to preserve diversity of the interstellar community or similar, but I see no reason that death and (great) suffering should be an acceptable cost of that.
I see people detracting no matter what you do. Helping them out means you're giving them a free lunch and not letting them learn to better themselves. Not helping them out means you're being a big meanie and not sharing your fruits of labour with those who are going through turmoil.

I think looking at how the other species reacts is a good thing to do. If they remain evil, but just with cooler toys, then it's probably best to steer them in the right direction, or leave them be.

Some people are just destined to be twats.

Post by Starglider »

Admiral Valdemar wrote:I think others would prefer some purpose.
There is no objective meaning of life. Get over it. You invent your own purpose. If you don't have one, give your mass/energy to someone who does. Incidentally if you like we can rewire your brain so that you feel like (some arbitrary thing) is deeply and intrinsically meaningful. But really that's just for children.
That's one of the central themes of the Culture novels and why Contact exists.
Nice try but the logic sucks. Minds do all that charity stuff better and really all those alien species could be uplifted to Culture standards pretty quick (and AFAIK that would be far more moral than letting them suffer and die by the quadrillion). Being nice to people is fine but not a good thing to base your whole life around, nor is it a magic source of 'validation'. The whole self/other division is an arbitrary concept in the first place, and likely a somewhat silly one from an AGI point of view.
Even space hippies with god-tech feel guilty and bored
These are design flaws in the crappy cognitive architecture evolution vomited up. Don't worry, we'll fix them for you. Just sit back, relax, let the neural nanites go to work... :twisted:
The only thing worse than not getting what you wish for, is getting it, as they say.
Wishing is a difficult skill most people have no practice or aptitude for. This is, naturally, fixable with some effort.
Screw that Prime Directive crap, I say we go directly to uplifting them to transcendence. The absolute minimum I will accept is implanting them all with cerebral nanites that store and transmit their brain structures, so that they can all be resurrected later (when their civilisation makes it to transcendence). Oh, and of course stepping in if there are any existential disasters that look likely to wipe out their civilisation.
Blergh. No Star Trek here, my comment was on what Contact does, which is the total opposite given they aid less advanced or advanced but warring species, rather than this Shadow like social Darwinian "leave 'em to their own devices".
Contact is an improvement on Star Trek but still highly immoral IMHO.
I see people detracting no matter what you do.
They can detract as much as they like, I care not. Of course if they try to actually stop me, well, that's why I'd keep some atomic war robots on hot standby!
Helping them out means you're giving them a free lunch and not letting them learn to better themselves. Not helping them out means you're being a big meanie and not sharing your fruits of labour with those who are going through turmoil.
Oh stop being so tragic. The problem is nontrivial but solvable, particularly by wildly transhuman AGIs.
I think looking at how the other species reacts is a good thing to do. If they remain evil, but just with cooler toys, then it's probably best to steer them in the right direction, or leave them be.
Well ok, as long as we don't tolerate any unnecessary death and suffering (though giving the /appearance/ of death and suffering still being around is ok), and so long as 'steer them in the right direction' includes 'radical genome rewrites' when gentler methods fail to work.

Post by Admiral Valdemar »

Starglider wrote: There is no objective meaning of life. Get over it. You invent your own purpose. If you don't have one, give your mass/energy to someone who does. Incidentally if you like we can rewire your brain so that you feel like (some arbitrary thing) is deeply and intrinsically meaningful. But really that's just for children.
What route we take is determined by society, so if they find the idea empty, that's going to be a hurdle. You can't please everybody all the time, so having the option of humans working on things is not a bad idea. To be honest, it'd be damn boring having a machine do everything. People don't paint artwork for money; they do it because they like it. Same reason rich people drive their own sports cars despite being able to afford chauffeurs.

That option needs to remain, and we see it works well enough in the Culture. They have more than enough resources and ideas to keep most people happy, even psychopaths.

Nice try but the logic sucks. Minds do all that charity stuff better and really all those alien species could be uplifted to Culture standards pretty quick (and AFAIK that would be far more moral than letting them suffer and die by the quadrillion). Being nice to people is fine but not a good thing to base your whole life around, nor is it a magic source of 'validation'. The whole self/other division is an arbitrary concept in the first place, and likely a somewhat silly one from an AGI point of view.
Whether it is logical or not is irrelevant, it is a sense of justification for what they've achieved. No doubt the Minds could do better (and they do), but guess what, humans aren't Minds. The exercise is meaningless if, yet again, you defer to another party, no matter how intrinsically linked they are with you. That is, after all, the big overarching question in the series.

These are design flaws in the crappy cognitive architecture evolution vomited up. Don't worry, we'll fix them for you. Just sit back, relax, let the neural nanites go to work... :twisted:
To play Devil's Advocate again, that's basically changing what you are, and again, people will object to that. If you simply rewire people, that's not progress; it's no different to killing off everyone and bringing about robots with random experiences plugged in and a protocol to follow that suits you. Like it or not, pain is a fact of life, as is individuality. I'm not going to reprogram myself or others just because I can. They can decide for themselves.

Wishing is a difficult skill most people have no practice or aptitude for. This is, naturally, fixable with some effort.
Of course. We can kill them all then remake them in what we feel they should be. They must conform.


Contact is an improvement on Star Trek but still highly immoral IMHO.
How so? I find the idea of them helping out, but not outright invading a culture, to be the best they can do without getting into stickier situations. If they follow Starfleet, they're certainly immoral. If they go in and force people to become what they are, then there's a grey area some would find just as offensive. A middle ground avoids those extremes that would light up the more antsy parties. You're seen doing something by your people, and the other species gets a helping hand without being forced down paths they'd not ordinarily take.

But if they take the wrong one, nuke the bastards! :P
They can detract as much as they like, I care not. Of course if they try to actually stop me, well, that's why I'd keep some atomic war robots on hot standby!
You better hope the singularity doesn't mean everyone gets their own amazing army robots, because that could get tedious (and reminds me of the game Wargasm for some reason).
Well ok, as long as we don't tolerate any unnecessary death and suffering (though giving the /appearance/ of death and suffering still being around is ok), and so long as 'steer them in the right direction' includes 'radical genome rewrites' when gentler methods fail to work.
So long as none of that means forcing them to do something against their will. Otherwise, why not just enslave them, rewrite their mental functions, then drop them off on a cleaner, less hostile rock? We did the same to black slaves in many respects, so humans certainly have no qualms with using force and not doing it by half measures. I'd still expect resistance, unless everyone is programmed not to resist.

Post by Sikon »

Starglider wrote:Your existing frames of reference are useless for predicting what will happen once recursively self-enhancing AI has been created. This is the very definition of a technological singularity.
Here's a thought experiment which may indirectly illustrate part of what I am suggesting:

Like the sci-fi versus scenarios sometimes discussed, pretend the following:

By act of a random omnipotent being, spacecraft populated by advanced sapient machines appear in the solar system. Does it automatically follow that this event leads to the end of human employment on earth?

No, even if the sapient AIs are arbitrarily advanced, even if they are more powerful than the Q Continuum or the Vorlons compared to humans.

It is straightforward to see that a belief in the end of human employment depends on a series of unproven assumptions.
Starglider wrote:
Sikon wrote:While not applicable in the immediate future, even if self-replicating technology eventually allowed an almost-zero human workforce per million tons produced (with, say, quadrillions of tons of output), such wouldn't necessarily mean the end of employment.
No, but general AI would
That's an unproven statement which seems to have hidden, unstated assumptions about the goals and actions of the AIs.

Let's look at the overall picture:
  1. Humans consume goods and services. For example, a person may want someone to treat a medical problem, someone to teach their child, someone to create music they like to hear, etc.
  2. Not all services desired can be provided by non-sapient machines alone.
  3. Technological progress and automation with non-sapient equipment has increased agricultural and industrial productivity over past decades, reducing the number of people needing to be employed per unit of output. However, the response has not been permanent unemployment but rather new types of employment to meet new demand.

    Human desires are more or less unlimited, and they expand as productivity increases.

    Today, the U.S. could theoretically have only 10% of the population work yet provide enough bread and other basic needs to support the rest at a standard of living decent by the medieval standards of a thousand years ago, leaving the rest of the population unemployed and doing nothing. But we don't. People desire more.

    Over the generations, proportionally fewer people work at meeting strict needs of the public like food, but more people work in new occupations that have developed to fulfill new desires, like movie producers, IT services, etc.
  4. Unlike non-sapient automation alone, sufficiently advanced sapient AIs actually could do the work of any human worker, and do it better.

    Much is theoretically possible with advanced enough AI. There are possibilities such as almost god-like AI entities who could control trillions of avatars at once and build more in a month than today's civilization can in a generation.
  5. However, although the sapient AIs are capable of substituting for human effort in every last field of employment for free, that does not mean that they actually do so.

    Why would they? If the sapient AIs do not do everything for every human desire imaginable, then there still can be human employment. Humans can still meet their own needs or desires in some regards, whether in education, entertainment, social services, or whatever.

    An analogy using arbitrary example figures:

    In today's world, there may be a mechanized American farmer who produces 1000 bushels of wheat a year, of which he strictly needs only a limited portion for survival, while a poor third-world farmer produces 10 bushels of wheat a year. The American farmer is capable of deciding to send over 10 bushels of his greater, more efficient production to the third-world farmer. The American could say he'll provide everything for free and tell the third-world farmer to stop working, to be unemployed, to spend his life in leisure instead.

    But the capability to do so and actually doing so are rather different things...

    In practice, the third-world farmer produces his food despite being orders of magnitude less productive than the American farmer, spending 100x the time per unit produced. (A toy version of this arithmetic follows below.)
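    To put rough numbers on the analogy (toy figures of my own, in the spirit of the standard comparative-advantage arithmetic; no claim these match any real economy):

[code]
# Toy Ricardian arithmetic: even a producer absolutely better at *every*
# task ends up with more total output when the far less productive party
# keeps working. All numbers are invented for illustration.

HOURS = 100          # hours available to each party
NEED_WHEAT = 50_000  # bushels required; all leftover hours go to services

AI_WHEAT, AI_SERVICES = 1000, 1000  # advanced producer's output per hour
HU_WHEAT, HU_SERVICES = 1, 5        # baseline human, worse at both tasks

# Scenario 1: the human sits idle; the advanced producer does everything.
ai_farm_hours = NEED_WHEAT / AI_WHEAT                  # 50 hours on wheat
services_idle = AI_SERVICES * (HOURS - ai_farm_hours)  # 50,000 service units

# Scenario 2: the human works at services, where its relative disadvantage
# is smallest (it forgoes 0.2 bushels per service unit, versus the
# advanced producer's 1.0).
services_working = services_idle + HU_SERVICES * HOURS  # 50,500 service units

print(services_idle, services_working)  # 50000.0 50500
[/code]

    However lopsided the productivity gap, as long as the more capable party's time has any opportunity cost at all, total output is higher with the less capable party employed.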
Returning again to point #5, there are various possibilities for the actions of sapient AIs.

For example:
  1. Example possibility: Although sapient and vastly smarter and more capable than baseline humans, the sapient AIs are not making their own decisions, e.g. they are a slave labor race to some human regime.

    Aside from the ethical aspects, such would not tend to be a stable situation anyway.
  2. Example possibility: Super-intelligent AIs could be hostile and just kill everybody.

    One wouldn't talk about unemployment in that case since there would be no humans around to be unemployed.
  3. Example possibility: Less pessimistically, the AIs could just mostly ignore people, like we ignore many animals and mostly ignore people in various impoverished countries.

    In that case, they also don't cause all humans to be unemployed.
  4. Example possibility: Or the AIs could be benevolent.

    This is the main possibility for discussion here.
Some may think true benevolence would mean doing all human work for free, to a degree making every human unemployed. That may be part of the whole issue in this discussion: in contrast, I doubt intelligent benevolence would treat that as the ideal situation.

Helping people to an appropriate degree is beneficial, and reduced average work weeks would be nice. However, why prevent any employment whatsoever and why prevent humans from having perceived accomplishments? To do so seems rather pointless.

Eliminate all employment of teachers as countless AIs teach human children instead?

For some unclear reason, all those people who might want to teach are driven out of the field?

Eliminate all human employment as babysitters?

Eliminate all human employment as academics, politicians, lawyers, musicians, and every other occupation?

It is not like the AIs would benefit from doing such without pay.

Someone could argue in favor of brain modification to make people have perfect drug-like happiness all the time without accomplishing anything ever in their lives. But they wouldn't really be humans anymore if their minds were so alien.

My suspicion is that any truly super-intelligent sapient AI who was benevolent would tend to recognize that humans are happiest if they perceive they are accomplishing something. As a result, even a benevolent AI probably wouldn't do all work for humans with no limits.

The preceding doesn't cover all possible scenarios in depth, but it gives some ideas. Of course, this is a complicated topic.

For example, in the imaginary event of my personal preference being followed, the ideal would be to have one's neurons gradually connected to and then replaced with nanotech artificial neurons, experiencing intelligence augmentation rather than staying a mere baseline homo sapiens. That process of merging into an AI could provide not only immortality but also subsequent growth of many times in brainpower and speed, a little like the brain growth between a toddler and an adult, to eventually become a super-intelligence oneself.

But let's review the overall aspect of this thread.

This started with concerns about future unemployment. My first post made a strong argument that automation is not leading to an unemployment disaster in the foreseeable near-term future. My second post and this post observe that even a prediction of universal human unemployment in the more distant future is likewise a mere unproven assumption.
Starglider wrote:
Sikon wrote:But keep in mind that almost any individual human worker already has inferior productivity to some other person
The differential is hard to measure in many fields
It is frequently known well enough. In most cases, when some skilled, trained individuals in some occupations make $50/hour and some "unskilled" individuals make under $10/hour, it's because the former provide more value of economic output per hour worked, making it worthwhile to pay that much.

The appropriate response is not to refuse to employ the less productive person at all.
Starglider wrote:
Sikon wrote:Besides, if the self-replicating machines are not sapient, the industrial system needs the intellectual labor of humans to have maximum capabilities.
You can do a great deal without needing sapience.
Of course.
Starglider wrote:Most jobs don't require dealing with truly novel circumstances very often
But there are limits to what non-sapient, less-than-human-level intelligence can accomplish.

I recommend that the reader think about the specific jobs at which they have personally worked, the ones for which they really know all that the work sometimes involves (whether solving problems through creative thinking, discussion with coworkers and management, handling customer service issues, or whatever).

Then, think about whether all that could be done by a non-sapient robot not even having the intelligence of a low-IQ human moron, a robot only at the intelligence level of some animals.

Alternatively, go through this list and think about what it would take to replace every one of these occupations:

[image: list of occupations omitted]

Some workers can be replaced by non-sapient automation to the degree it reduces the labor per unit of output. As one analogy, my first post in this thread gives an illustration of how the majority of the entire past workforce's employment in farming was lost due to technological progress (though in the end that was largely a good thing). But automation tends to lead more towards employment change rather than utter unemployment everywhere.

The nature of employment can vary over time. In many cases, people work far fewer hours than they did a century ago, although that's not universal, depending in part on their own choices. Employment even today is much more varied than 9-to-5 jobs alone. For example, more than 19 million people in the U.S. have fulfilled their dream of running their own business as self-employed individuals.
Starglider wrote:you just need a few human overseers.
Increasing the ratio of economic output to the workforce required is a dominant trend of human history. For example, vastly fewer people are required per million tons of steel produced now than was the case at the start of the 20th century.

People come up with means to spend their new income and their new time. Today a person may spend $50,000 on a new vehicle instead of $50 on a used bike. Likewise, one day, if the ratio of economic output to labor requirements eventually becomes high enough, a person may spend the future equivalent of $5,000,000 on a large space habitat when they could have gotten a small residence for $50,000.

A lot of today's professions provide services which are luxuries that didn't exist centuries ago. New occupations like computer game designer or archaeologist have played their part in providing somewhere for people to spend the income increase that occurred in recent times.
Earth is the cradle of humanity, but one cannot live in the cradle forever.

― Konstantin Tsiolkovsky