Artificial Intelligence: Why Would We Make One?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

Samuel wrote:Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
IIRC, Hoth subscribes to an extreme form of Humanism that wouldn't look out of place in the Imperium of Man-- humanity literally is the measure of all things to him, so replacing it with machines would be heretical to that morality. But then it's been a while since the last "first contact situation, LETS BLOW UP DEM ALIENS!" or animal rights thread, so I may be misremembering things.

Personally, I think transhumanism is just a silly fantasy. As long as the Transhumanists aren't in your face about it, I'm cool with people who would like to become immortal furry machine gods. It's no more shameful than my fantasies of *censored due to sexually explicit content* . :wink: :D

It's when they start making bold claims and/or getting smug about how they are a Transhumanist and you aren't that I take offence. It's like when a new toy comes out and not everyone is interested, but you aren't "cool" unless you buy it. That seems to be the mindset with some of them, especially where mind uploading and promises of immortality are concerned. I'd be content with fixing the problems with my brain (attention span and whatever seizure disorder I've been saddled with), and I have no more existential fear of death than I have existential fear of sleep, so the emotional appeal isn't that strong with me. So you can imagine that having to deal with people who think you are a Luddite if you don't share their fantasies is a natural recipe for annoyance.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Seggybop
Jedi Council Member
Posts: 1954
Joined: 2002-07-20 07:09pm
Location: USA

Re: Artificial Intelligence: Why Would We Make One?

Post by Seggybop »

Darth Hoth wrote:I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt.
It's kind of cool that you assert that a group of people generally interested in permanently curing all humans of death, disease, and all forms of suffering and oppression are evil and sociopathic.

Looks like you have a promising career ahead of you as one of those generic sci-fi luddite antagonists who tries to repress all genetic/cybernetic/nano/whatever augmentation for the sake of ETERNAL HUMAN PURITY. I mean, wouldn't want to contemplate the possibility of anyone ever being better than we are now, you know?
my heart is a shell of depleted uranium
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

Seggybop wrote:
Darth Hoth wrote:I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt.
It's kind of cool that you assert that a group of people generally interested in permanently curing all humans of death, disease, and all forms of suffering and oppression are evil and sociopathic.

Looks like you have a promising career ahead of you as one of those generic sci-fi luddite antagonists who tries to repress all genetic/cybernetic/nano/whatever augmentation for the sake of ETERNAL HUMAN PURITY. I mean, wouldn't want to contemplate the possibility of anyone ever being better than we are now, you know?
I think the problem here, and if you take a step back this becomes utterly obvious, is that Hoth's definition of "transhumanist" doesn't match yours.

The basic problem with transhumanism is the question: what happens to the people who don't want to be uploaded into the machine paradise, because they distrust the promises made by it? What happens to the people who don't want to tweak their children's DNA beyond recognition- or who, due to poverty or safety concerns, become the evolutionary late-adopters by a generation or two? And so on.

Given the way real economies and people work, that's liable to consist of the bulk of humanity... and yet there are far, far too many waving the 'transhumanist' banner whose only answer is "well, they're going to wind up marginalized on their own planet, much like chimpanzees are today." And they think that's good enough; they still work towards this end.

At that point, yes, transhumanism does become actively evil- or at least a case of massive, genocidal negligence.

The vision of making yourself and your chosen elect into gods can, believe it or not, become something twisted and evil. It doesn't have to be, but it can.

And I'm pretty sure the twisted branch of that is what Hoth has come to identify as "transhumanism." Not "all attempts to solve problems or cure disease by the application of advanced technology," as you accuse. That's not what he's attacking; he's attacking the notion that it is somehow okay to marginalize and destroy everyone whose feet of clay won't stand the march to godhood.

Our own recently, happily ejected LionElJonson is the extreme example of this: all problems will be solved by uploading to the machine paradise, which will probably then depart the world in an Orion drive starship that conveniently burns up everything the Rapture leaves behind it. And yes he is an extreme example- the sociopathic little shit to beat all sociopathic little shits. But I can't blame Hoth if he's run into enough people who are like this to varying extents that it's soured him on the whole movement.
This space dedicated to Vasily Arkhipov
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Artificial Intelligence: Why Would We Make One?

Post by Broomstick »

Samuel wrote:Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
Perhaps some people are most comfortable being human, even if they don't think humanity is perfect in its current form.

It puzzles me that people around here, by and large, would react with horror at some future plan that would convert all homosexuals into heterosexuals (or vice versa) but don't understand why someone might want to remain an imperfect human being. If being H. sapiens is a vital part of your personal identity then transformation into an arbitrarily "better" form means death - sure, something related to you continues onward, but it's not the "real" you. It's like saying it's OK to kill one of a pair of identical twins because, hey, they're identical, right? Well, no they're not. Likewise, for some people to become something other than human is tantamount to personal destruction, no matter how similar the "copy" that remains.

So... for those people I'd say it's wrong because it's forcing them to undergo something they find as distasteful as death, if not actually viewing it as death, or perhaps even worse than death. Under what ethical system would such a thing be acceptable?
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

To put it another way, remember Starglider's take on all this:
Starglider wrote:
Destructionator XIII wrote:With a little luck though, the poverty ridden proleteriat will rise up and slaughter the AI owners, correcting this injustice. (or elect congresspeople who support *gasp* redistribution of wealth, but why vote when you can use murder?) But it's possible that they won't...
Ignoring the take-off problem again, this is where your mass-produced hordes of soulless robotic riot-suppression troops come in. Imagine, thousands of black stormtrooper-like figures marching in perfect formation, clubbing down those dirty poor people with their electro-shock batons. The molotov cocktails are useless, the steel plated robots march right through the flames. No police unions to worry about, no pay-offs, loyalty or morale issues. Sure, a few get taken down by ramming with hijacked vehicles and improvised explosives, but that's acceptable losses, the downed units are quickly repaired or recycled. The terrorist druggie anti-freedom anti-American mob is soon rounded up, put in the automated trucks and sent to the automated underground internment centers.

Taking on the mentality of a sociopath ultra-capitalist for a moment, this vision brings a little tear of joy to my eye...
Now, Starglider himself isn't even slightly one of the offenders here. But this illustrates the nature of the problem, and the reason why people can understandably rebel against transhumanism entirely.

The Friendly AI problem is explicitly advertised as being so transcendentally hard compared to the General AI problem that for most of us, it's difficult to imagine the former being solved first. And yet we have people telling us that yes, the first big AI on the planet will set the tone of all future existence.

Stop and think that through. Look at the trends of world geopolitics over the past two decades. Does anyone really believe there's a way to stop the first big AI on the planet from belonging to a corporation, or a bunch of politicoes in the pay of one, at this rate? Or to stop them from simply 'leaving behind,' in miserable conditions, all the people they don't need to maintain some arbitrary and increasingly meaningless dollar bottom line?

If we take claims about AI singularities seriously, the future we're looking at is, at best, a rather dark shade of cyberpunk. And there really is not a way to stop this short of mass populist Luddism while we still have the means to do so.

And yet there are people who seem, explicitly or implicitly, to announce that this is our future... while actively working to make it happen. What are we to think of people like that?

The same goes for nanotech, cybernetics, and the like. Whatever the technology is, you can bet the rich will get it first. If it turns them into gods, and gods have no real need of mortals, then the outcome is going to be horrific, given the collective sociopathy of the modern upper class, the way their preferred ideologies spit on any notion of a collective social contract or economic rights for the groundlings.

People who promise that this is in our future, are smart enough to figure this out, and still actively want to make it happen... what do you say about them?
This space dedicated to Vasily Arkhipov
PeZook
Emperor's Hand
Posts: 13237
Joined: 2002-07-18 06:08pm
Location: Poland

Re: Artificial Intelligence: Why Would We Make One?

Post by PeZook »

That's only true if you upload to a computer body, Broomie. It's likely that many, many other options will be available due to high technology for those who'd like to keep their current bodies thankyouverymuch.

Already we're extending people's lives and improving their quality of life a lot with cybernetic implants (it has now become possible to replace much or all heart function with an artificial implant). Genetic engineering is starting to trickle into the mainstream. It's not really death if you keep your body, just replace the aging bits with brand new fresh parts and make sure you're not going to waste away in a hospital bed due to Alzheimer's or cancer.

There are many, many possibilities for how the future will turn out, and the number of variables involved is too high to predict with any certainty the demographics and economic systems that will operate in, say, 300 years. But the change will most likely be gradual, not revolutionary, due to simple logistics. Just like manned and automated checkouts operate side by side, some people ride bicycles to work, and I have a microwave oven and a stove at home. Or, hell - a cell phone, a laptop and lots of paper and pencils.

Sure, yeah, the elite will dominate because they will have easy access to all those new technologies and improvements and whatnot. But let's be real here: the elite dominates anyway, and always has.
Simon_Jester wrote:The same goes for nanotech, cybernetics, and the like. Whatever the technology is, you can bet the rich will get it first. If it turns them into gods, and gods have no real need of mortals, then the outcome is going to be horrific, given the collective sociopathy of the modern upper class, the way their preferred ideologies spit on any notion of a collective social contract or economic rights for the groundlings.
You know, the problem is that even if the first ever AI belongs to a corporation, it's not going to *POOF* alter the established infrastructure to make the sort of mass oppression possible.

The darkest possible scenario means the elite wall themselves off in ivory towers, surrounded by armies of perfectly obedient soulless robots that cater to their every whim and sustain the infrastructure necessary to keep their masters living in comfort and luxury - and then proceed to genocide everybody else, or stick them in ghettoes, denied the basic things necessary for survival.

But...how does this come about? The robots won't come out of nowhere; neither will the necessary factories and automated infrastructure to support them. Further complicating things is the fact that this sort of tech does not appear in the hands of only a few people; it will quickly disseminate around the world and may be implemented in an entirely different way two countries over.

Look at it this way: industrialization allowed unprecedented power to the nations that did it first, but spread around fast enough to make sure no one nation could conquer the entire planet. Even poorly industrialized nations were abused and kicked around for a while, but never conquered outright due to logistics and manpower issues.

And manpower applies to robotpower too, as resources are not unlimited. The disgruntled masses can still do lots of damage, especially if, say, Russia gives them loads of heavy weapons and explosives and their own combat robotoids and hackers... The elites would need to reach a "critical mass" of robotoid soldiers so that the entire production and power infrastructure is secure from organics attacking it, and that's not guaranteed to happen before the backlash makes it untenable.

Further complications include the flunkies and subordinates of these sociopathic elites, who might sabotage their plans of oppression.
JULY 20TH 1969 - The day the entire world was looking up

It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11

Signature dedicated to the greatest achievement of mankind.

MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

Formless wrote:
Kurzweil,
You need to read PZ Myers. This guy is a textbook loon. Knowing why he is a loon reveals much about why the transhumanist (especially singularity type) ideology is a load of crap-- the people are (in my experience) almost uniformly computer science nerds who think they know everything, even stuff far out of their area of expertise like biology and engineering.
To be relevant to this thread, will it be hard to map a human brain? Yeah, of course. Was it hard to build the first nuclear bomb? Yeah it was, so what. My idea is that if the technological development is not halted permanently within the next century or less, this stuff* is bound to come about.
The problem is, people only remember the improbable technologies that did happen like nuclear energy and forget all those improbable technologies that didn't like cold fusion and jetpacks. You cannot assume that these technologies are inevitable by looking backwards-- in psychology we call this the hindsight bias, and in logic we call it a fallacy.
I don't believe in all of the singularity stuff, but machine intelligence and an intelligence explosion do seem very reasonable to me provided technology continues to develop. I do not see a similarity to flying cars, jetpacks and so on in regards to that subject, since we already know intelligence exists inside human skulls. Every other aspect of our body can be readily outdone by machinery, so why not the brain?

To me that violates the principle of mediocrity and it would seem a stunning coincidence if the current cognitive ability of humans just happened to be the absolute best kind of cognitive ability that can be produced in the universe.

You are right about hindsight bias though, I've considered it and in the light of it I'm more skeptical about things like nanotechnology.

I also don't think that any advances in AI will necessarily produce a world of rainbows and ponies for us humans; I wouldn't be shocked if it simply kills us all and pursues something that would seem ridiculous from a human POV.

I'm also completely aware of the possibility that I might be completely wrong about everything, the future is a strange beast it seems.
someone_else
Jedi Knight
Posts: 854
Joined: 2010-02-24 05:32am

Re: Artificial Intelligence: Why Would We Make One?

Post by someone_else »

Formless wrote:Of course that is a nitpick, you can do higher cognitive functions without getting human-like cognitive functions at all.
It is also easier ('cause you don't have to figure out how the fucking human brain works first but only make a thing that reacts the way you want it to, cutting development time in half), and has a good chance of giving you a very powerful but totally controllable machine instead of the average science fiction AI that goes apeshit and enslaves humanity "for its own good" every fucking time they turn it on.
The thing I was talking of is a machine meant to understand the human brain, that becomes sapient since it has to mimic perfectly a brain to be useful. Becoming an AI is incidental for it.
So you're right, it is a cross purposes nitpick. :mrgreen:
the transhumanist (especially singularity type) ideology is a load of crap--
I tend to agree on this point. It may be fun as an RPG universe (like Eclipse Phase), but most such settings don't solve the main problems of humanity and just add Superpowers and Weird Stuff on top. Or at best sidestep the issue (like living the whole life in a virtual reality haven).
Simon_Jester wrote:Does anyone really believe there's a way to stop the first big AI on the planet from belonging to a corporation, or a bunch of politicoes in the pay of one, at this rate? Or to stop them from simply 'leaving behind,' in miserable conditions, all the people they don't need to maintain some arbitrary and increasingly meaningless dollar bottom line?
Obvious rhetorical questions to which we all know the likely answer :roll:.
I think that you can add all the tech you want, but to solve actual human problems (i.e. racism, homophobia, people disregarding other people's lives for personal gain... and so on) you must change human minds (that is, where the problem originates). Uploading minds, Dyson Spheres, and most AI/Cyber/genetic technowank don't solve the issue, and usually only make it worse (because now the ruling class is entrenched behind Godlike Superpowers and rooting it out becomes uhhhh.... complex).

Theoretically, if the ones that undergo "the Transhuman change" (choose your favorite, I tend to prefer genetic modifications and brain enhancements) also become Lawful Good (borrowing from D&D) they shouldn't exploit normal humans but will be instead ready to sacrifice themselves for the good of other people (normal or altered). Even if you "transhumanize" only the ones willing to undergo (and pay for) the process, their actions will be a beacon for humanity as a whole. Superpowers and techno-genetic-wank are an added benefit. :mrgreen:
So, I hope neurology arrives in time to allow such a change before computer-science born AIs become powerful tools of rich evil fat-asses.

If you do that on everyone you end the human race as we know it. It is unlikely you'll manage to do it (without forcing people, at least), but it won't be that bad, imho.
PeZook wrote:But...how does this come about? The robots won't come out of nowhere ; Neither will the necessary factories and automated infrastructure to support them.
They are unnecessary, given how the world is nowadays. Corporations can force governments to do what they please (to a certain extent) simply because they (plus their subsidiaries and enterprises working for them) give work to large parts of population or can outright bribe the politicians.
That's one of the main benefits of corporations. They leave all the annoyance of political power (like crowd control and bad PR) to governments.
To DOMINATE a nation you just have to own most of the important industries and have lots of money. Military conquest is soooo 19th century.
Even poorly industrialized nations were abused and kicked around for a while, but never conquered outright due to logistics and manpower issues.
Most of such poorly industrialized nations are still totally DOMINATED by us, like Côte d'Ivoire, whose economy is totally reliant on us eating chocolate, drinking coffee and using crappy palm oil in junk foods (not even necessary things, btw).
Any drop in demand would mean an economic disaster for them, thus the corporations buying stuff from them do have quite a bit of power over them.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo

--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Artificial Intelligence: Why Would We Make One?

Post by Broomstick »

PeZook wrote:That's only true if you upload to a computer body, Broomie. It's likely that many, many other options will be available due to high technology for those who'd like to keep their current bodies thankyouverymuch.
I should hope so!

Just to be clear - that wasn't MY viewpoint I related, just one that I have heard from those who are less than enthused about this coming "singularity" (which, really, sounds all too much like a black hole, which is also a singularity, and which probably doesn't help things for the less informed).
Already we're extending people's lives and improving their quality of life a lot with cybernetic implants (it has now become possible to replace much or all heart function with an artificial implant). Genetic engineering is starting to trickle into the mainstream. It's not really death if you keep your body, just replace the aging bits with brand new fresh parts and make sure you're not going to waste away in a hospital bed due to Alzheimer's or cancer.
And that I'm OK with (mostly) - especially as we've been moving in that direction for some time now.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

I'm perfectly happy with better medicine, but "better medicine" isn't the kind of transformative technology the prophets of the Singularity like to talk about.
PeZook wrote:
Simon_Jester wrote:The same goes for nanotech, cybernetics, and the like. Whatever the technology is, you can bet the rich will get it first. If it turns them into gods, and gods have no real need of mortals, then the outcome is going to be horrific, given the collective sociopathy of the modern upper class, the way their preferred ideologies spit on any notion of a collective social contract or economic rights for the groundlings.
You know, the problem is that even if the first ever AI belongs to a corporation, it's not going to *POOF* alter the established infrastructure to make the sort of mass oppression possible.

The darkest possible scenario means the elite wall themselves off in ivory towers, surrounded by armies of perfectly obedient soulless robots that cater to their every whim and sustain the infrastructure necessary to keep their masters living in comfort and luxury - and then proceed to genocide everybody else, or stick them in ghettoes, denied the basic things necessary for survival.

But...how does this come about? The robots won't come out of nowhere; neither will the necessary factories and automated infrastructure to support them. Further complicating things is the fact that this sort of tech does not appear in the hands of only a few people; it will quickly disseminate around the world and may be implemented in an entirely different way two countries over.
This is all strictly true, but I think you kind of missed my point.

My point is that this is a totally plausible outcome for at least large parts of humanity: cybernetic* minions enforcing the whims of human tyrants who are clearly recognizable as the descendants of today's self-entitled, exclude-the-groundlings-from-consideration, globalization-at-all-costs upper class. And if you believe strongly in the transformative power of the Singularity to change all the rules of human existence in a short timespan (the "hard takeoff"), this becomes more likely, not less.

If, as someone like Yudkowsky might claim, the first AI capable of human or noticeably better-than-human thought will predictably bootstrap itself into realms of transhuman intellect, and then dominate the course of all future human society... then the probability that this first AI will be controlled by someone we don't want controlling it approaches unity. At which point you would have to be out of your damn mind to want to make this happen.

If the transformative power of AI isn't that great, if there is time and flexibility and the invention of the new ways doesn't replace the old in a matter of weeks or months... then yes, suddenly the problem becomes less urgent. But that mindset stands in direct contradiction to the Singularitarian model, to the belief that these technologies will change the world, and that all that matters is whether you're a first adopter (who gets to set the tone of the future forever) or a second adopter (who gets to pray to the first adopters that they will be merciful).

*In the general sense of 'machines that think,' not specifically in the sense of guys with mechanical parts in their bodies as 'cyborgs.'
someone_else wrote:
Simon_Jester wrote:Does anyone really believe there's a way to stop the first big AI on the planet from belonging to a corporation, or a bunch of politicoes in the pay of one, at this rate? Or to stop them from simply 'leaving behind,' in miserable conditions, all the people they don't need to maintain some arbitrary and increasingly meaningless dollar bottom line?
Obvious rhetorical questions we all know the likely answer :roll:
And yet this question seems to go unasked by so very many people in the field, including the ones who preach that having the first big AI in the world will grant the owner godlike power.

Of the ones who have thought through the implications far enough to recognize a potential problem here, and who believe that owning the first AI implies godlike power... well, they say the right words to indicate that they care, but their policy proposal boils down to "throw money at me and I'll work on building the first AI god before they do." Which isn't a very convincing appeal to me, not when I'm in no position to gauge how efficiently, or even if, their efforts to solve the problem would work.
This space dedicated to Vasily Arkhipov
HeadCreeps
Padawan Learner
Posts: 222
Joined: 2011-01-10 10:47pm

Re: Artificial Intelligence: Why Would We Make One?

Post by HeadCreeps »

Broomstick wrote:
Samuel wrote:Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
Perhaps some people are most comfortable being human, even if they don't think humanity is perfect in its current form.

It puzzles me that people around here, by and large, would react with horror at some future plan that would convert all homosexuals into heterosexuals (or vice versa) but don't understand why someone might want to remain an imperfect human being. If being H. sapiens is a vital part of your personal identity then transformation into an arbitrarily "better" form means death - sure, something related to you continues onward, but it's not the "real" you. It's like saying it's OK to kill one of a pair of identical twins because, hey, they're identical, right? Well, no they're not. Likewise, for some people to become something other than human is tantamount to personal destruction, no matter how similar the "copy" that remains.

So... for those people I'd say it's wrong because it's forcing them to undergo something they find as distasteful as death, if not actually viewing it as death, or perhaps even worse than death. Under what ethical system would such a thing be acceptable?
Is there an immediate objection to these types of discussions in the same sense that society more or less looks down upon liposuction or breast implants? I'm curious about other people's responses to this sort of thing; is it the physical modification that brings the instinctual dislike of transhumanism first, or does it require the realization that a Kurzweil-style AI explosion will inevitably lead to a fundamental change in what comprises the human condition for the objections to begin? Can things like hardware-based human memory enhancement be seen in the same light as currently existing types of body modifications for "normal people"? Or does it take the whole package - boom, you're a robot - for there to be a strong objection toward the concept? You said you're fine with replacing the heart, but what about parts of the brain for people who don't have obvious issues like Alzheimer's? Is this where it becomes objectionable?
Hindsight is 24/7.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

What's objectionable is the idea that everyone has to agree to this stuff, or effectively has to, if they don't want to become one with homo habilis. Which is a fairly obvious undercurrent in genuinely transhumanist rhetoric.

It's important to understand that this is not purely an 'instinctive' objection. It's partly philosophical: the act of altering enormous numbers of people, and effectively forcing the rest to accept alteration or become 'obsolete,' is a disturbing thing to do. It's very hard to believe that you can change the meaning of "human" without changing the humans in question to the point where they aren't the people they were before.
This space dedicated to Vasily Arkhipov
User avatar
Guardsman Bass
Cowardly Codfish
Posts: 9281
Joined: 2002-07-07 12:01am
Location: Beneath the Deepest Sea

Re: Artificial Intelligence: Why Would We Make One?

Post by Guardsman Bass »

PeZook wrote:But...how does this come about? The robots won't come out of nowhere; neither will the necessary factories and automated infrastructure to support them. Further complicating things is the fact that this sort of tech does not appear in the hands of only a few people; it will quickly disseminate around the world and may be implemented in an entirely different way two countries to the side.

Look at it this way: industrialization allowed unprecedented power to the nations that did it first, but spread around fast enough to make sure no one nation could conquer the entire planet. Even poorly industrialized nations were abused and kicked around for a while, but never conquered outright due to logistics and manpower issues.
Good point. That's only going to be more potent in the future, because of the ease with which you can transmit and duplicate information.

As for the robots, I'm honestly not convinced that we'll have the whole "robots do everything" situation in the near future. It can often be cheaper to simply use humans for a task than to rely on capital-intensive automation. Just look at the agricultural sector in the US.
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard


"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
User avatar
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003
Contact:

Re: Artificial Intelligence: Why Would We Make One?

Post by Shroom Man 777 »

Well, in a way it's going to be pretty ironic. The invisible hand of the free market, for once, will actually help get rid of the standard homo sapiens. If they can't afford consumer organics and iBrains, they'll be obsolete and will be pretty much a bunch of cavemen when the transhumans run around with microchips in their brains, downloading the latest updates in brainwares, with General Electric cardiovascular systems, Speedo dimpled skin to allow them to swim as fast as mako sharks, Ford Motors-designed feet to allow them to run faster, and Nikon optical systems to allow them to take pictures with their eyeballs.

Think about it. You're too poor to afford the latest Norton brainware security updates? Whoops, malicious brain worms and parasites infest your neural operating system. Bye bye! Maybe your post-kidneys have a factory defect. Oh no, mass recall of everyone's spleens! We'll repay you, lifetime warranty, lol!

This is an awesome future.

The problem about sexuality will be solved when we get convertible genitals. Imagine your uterus turning inside-out and envaginating outside of the clitoris, hanging out and then turning rigid to become a makeshift penis.

Or imagine your penis, testicles and scrotum sinking inside your groin and transforming into a vaginal/uteral cavity. Your testicles can relocate themselves to become the ovaries. :mrgreen:

Think about it. Sweet Jesus. That's going to be an awesome future. Every tiny cell in your body is going to be branded. That's if you are rich. If you're a bloody poor person, well, shit, you'll just be stuck with generic meat organs.

You may have to borrow money to be able to buy the same kind of consumer organ that's popular at the moment!

And, imagine, like consumer electronics there'll be planned obsolescence! Your visual cortex will have to be replaced when nVidia or ATI Radeon manufactures a new medulla oblongata that can process more pixels per square nanometer! Combine that with the latest Dickon DSLR eyeball and, man!

People will be so expensive that jealous poor people will end up murdering posthuman rich people and stealing their fancy expensive organs and either using 'em for themselves, or selling them in chop shops!

God I love the future.
Image "DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
User avatar
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Artificial Intelligence: Why Would We Make One?

Post by Broomstick »

HeadCreeps wrote:
Broomstick wrote:Perhaps some people are most comfortable being human, even if they don't think humanity is perfect in its current form.

It puzzles me that people around here, by and large, would react with horror at some future plan that would convert all homosexuals into heterosexuals (or vice versa) but don't understand why someone might want to remain an imperfect human being. If being H. sapiens is a vital part of your personal identity then transformation into an arbitrarily "better" form means death - sure, something related to you continues onward, but it's not the "real" you. It's like saying it's OK to kill one of a pair of identical twins because, hey, they're identical, right? Well, no they're not. Likewise, for some people to become something other than human is tantamount to personal destruction, no matter how similar the "copy" that remains.

So... for those people I'd say it's wrong because it's forcing them to undergo something they find as distasteful as death, if not actually viewing it as death, or perhaps even worse than death. Under what ethical system would such a thing be acceptable?
Is there an immediate objection to these types of discussions in the same sense that society more or less looks down upon liposuction or breast implants? I'm curious about other people's responses to this sort of thing; is it the physical modification that brings the instinctual dislike of transhumanism first, or does it require the realization that a Kurzweil-style AI explosion will inevitably lead to a fundamental change in what comprises the human condition for the objections to begin? Can things like hardware-based human memory enhancement be seen in the same light as currently existing types of body modifications for "normal people"? Or does it take the whole package - boom, you're a robot - for there to be a strong objection toward the concept? You said you're fine with replacing the heart, but what about parts of the brain for people who don't have obvious issues like Alzheimer's? Is this where it becomes objectionable?
First of all, approval for things like liposuction and breast implants is FAR from universal, despite what the media would have you think. Even after complete mastectomies there's a consistent percentage of women who forego reconstructive surgery, even when it costs them no money. The right of adults of sound mind to refuse medical treatment, even when the consequence of refusal is death, is well established, and it exists because not everyone wants to live at any cost or to accept the tradeoffs required by certain technologies.

We already have technological enhancement - we communicate over distances routinely, there are a variety of memory aids from writing to voice recording to computerized alarms on watches/phones/laptops/etc., everyone seems to use GPS these days, which makes navigation so much easier, and so on. The thing is, none of that requires modification of the human body to utilize. As soon as you start changing bodies, people become a little less eager. Getting a cochlear implant for deafness is not a trivial thing, and not pursued by everyone who can benefit from it. All operations, even minor ones, carry a risk of adverse events, and for that reason people tend to resist the concept of things like brain implants because you can't help but worry what the hell happens if something goes wrong. Cardiac pacemakers and defibrillators - unquestionably lifesaving devices in most cases - are much harder to remove when there's a problem than they are to implant. That's part of it, really - there is no surgery without risk. For most people, there has to be a perception that the benefits outweigh the risks. People aren't going to be replacing their eyeballs when glasses or contact lenses can fix the problem and, better yet, can easily be returned if there's a manufacturing defect.

I think the market for devices that can enhance human capability that don't require a surgical hookup is higher than for implants. Sure, there are some out there eager for an implant, but there are people walking around with bad tattoos, too. Some folks don't consider what the future will be like after a "mod". The rest of us tend to think twice before making a permanent change. That doesn't mean we won't make such a change, just that we're not going to fling ourselves into it headlong.

Really, when someone starts gibbering about uploading into robot bodies or computers without considering there might be a downside to all this, I can't help but think them an immature and impulsive idiot. It's not like robots or computers are immune to damage or malfunction. I think that might be considered by the severely handicapped or those who know they face imminent death, but what benefit is gained by the average healthy human doing this? Yes, yes, technology will improve, but we're a long way from the replacement parts being so superior that they perform high enough above organics to make swapping them out worth the time, bother, and expense.

I would not stand in the way of someone wanting to upload to a computer or get some sort of implant - their body, their life, their choice. What's most objectionable, as Simon points out, is the lack of choice presented in most of the common scenarios. You must change or die or wind up in an H. sapiens zoo exhibit. Having seen an enormous amount of change in my lifetime I expect the most extreme scenarios will never come to pass anyway. If we do get "general AI" it will probably not be quite what was expected by anyone.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
User avatar
HeadCreeps
Padawan Learner
Posts: 222
Joined: 2011-01-10 10:47pm

Re: Artificial Intelligence: Why Would We Make One?

Post by HeadCreeps »

Simon_Jester wrote:What's objectionable is the idea that everyone has to agree to this stuff, or effectively has to, if they don't want to become one with homo habilis. Which is a fairly obvious undercurrent in genuinely transhumanist rhetoric.

It's important to understand that this is not purely an 'instinctive' objection. It's partly philosophical: the act of altering enormous numbers of people, and effectively forcing the rest to accept alteration or become 'obsolete,' is a disturbing thing to do. It's very hard to believe that you can change the meaning of "human" without changing the humans in question to the point where they aren't the people they were before.
Surely there are other reasons than "I don't like the idea of being forced into this". There are no points at which the idea of cybernetic implants which could benefit a person don't reach the point where they're objectionable? I don't believe that for a minute, but if so, then I'm a lot more against cybernetics than you are. For me, there is a point where my stupidity or limitations are the reasons why I enjoy this or that. I enjoy some of the most godawfully arranged music ever invented, but if I understand music well enough that I know exactly how godawful it is, I'm suddenly not so sure I want the enhancement. This is why I do not choose to learn how to compose music in the first place! Without question, I'm afraid of surgery and wouldn't do it unless I felt a very strong need to have the surgery, which goes hand-in-hand with my objection toward cybernetics.

@Broomstick: Thank you for giving your reasoning. I just wanted to point out that when I brought up implants and liposuction, I was actually referring to something that I thought most people should be looking down upon.
Hindsight is 24/7.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

HeadCreeps wrote:It's important to understand that this is not purely an 'instinctive' objection. It's partly philosophical: the act of altering enormous numbers of people, and effectively forcing the rest to accept alteration or become 'obsolete,' is a disturbing thing to do. It's very hard to believe that you can change the meaning of "human" without changing the humans in question to the point where they aren't the people they were before.
Surely there are other reasons than "I don't like the idea of being forced into this". There are no points at which the idea of cybernetic implants which could benefit a person don't reach the point where they're objectionable? I don't believe that for a minute, but if so, then I'm a lot more against cybernetics than you are. For me, there is a point where my stupidity or limitations are the reasons why I enjoy this or that. I enjoy some of the most godawfully arranged music ever invented, but if I understand music well enough that I know exactly how godawful it is, I'm suddenly not so sure I want the enhancement. This is why I do not choose to learn how to compose music in the first place! Without question, I'm afraid of surgery and wouldn't do it unless I felt a very strong need to have the surgery, which goes hand-in-hand with my objection toward cybernetics.
I don't feel comfortable drawing those lines for people - saying "this is something it is to your advantage to have, this is not." I'm not sure where my lines would be drawn; if I could improve myself in certain ways I'd beg for the chance, while there are other improvements I'm indifferent to, and I'm sure I could think of hypothetical improvements I would be actively averse to.

But, again, I don't feel comfortable trying to tell other people where to draw the lines. What disturbs me is the sum total of every person on Earth's experiencing this process- of having to choose between being marginalized or being 'improved' in ways that (like you and music composition) they are not comfortable with.
@Broomstick: Thank you for giving your reasoning. I just wanted to point out that when I brought up implants and liposuction, I was actually referring to something that I thought most people should be looking down upon.
"Should?" Why "should?" It's a choice, and not an unreasonable one- if a woman wants large breasts I don't feel I should try to stop her; if someone of either sex wants to get rid of twenty pounds of fat without the grinding months-long tedium of sweating the weight off by sheer force of will, I can't blame them. Like quitting an addiction, losing weight is difficult, difficult enough that some people can't make themselves do it at all.

I feel the same way about implanted cybernetics, but I'm not comfortable around people who go on about how implanted cybernetics will make me obsolete.
This space dedicated to Vasily Arkhipov
User avatar
HeadCreeps
Padawan Learner
Posts: 222
Joined: 2011-01-10 10:47pm

Re: Artificial Intelligence: Why Would We Make One?

Post by HeadCreeps »

@Broomstick: Thank you for giving your reasoning. I just wanted to point out that when I brought up implants and liposuction, I was actually referring to something that I thought most people should be looking down upon.
"Should?" Why "should?"
Argh!! I was trying to point out a form of body enhancement that I expected most of society to look down upon. Where I live, in the rural USA, it's unquestionably looked down upon by the church-going locals.
Hindsight is 24/7.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

So was interracial dating, not so long ago, in that neck of the woods - even if it isn't now.

You can always find cultural reasons why a given group of people will resist or oppose specific things; it's important to know the difference between things that are taboo because of the customs of the tribe one lives among and things that are universally looked down on or opposed.
This space dedicated to Vasily Arkhipov
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

cosmicalstorm wrote:I don't believe in all of the singularity stuff, but machine intelligence and an intelligence explosion does seem very reasonable to me provided technology continues to develop. I do not see a similarity to flying cars, jetpacks and so on in regards to that subject, since we already know intelligence exists inside human skulls. Every other aspect of our body can be readily outdone by machinery, so why not the brain?
... I don't see where you are going with this. You realize that what makes the human brain amazing is its programming, NOT its raw processing power, right? It very well may be that there isn't much to improve on that can't be more easily fixed by giving humans the right tools (software tools, better non-sentient computers, better training to overcome cognitive biases, etc.). Furthermore, to paraphrase a point Destructionator XIII once made, frequently the real limiting factor on problem solving isn't intelligence, it's the time it takes to implement the solution. Processing power doesn't change things much if the implementation gives you enough time to fix wrinkles in the plan as you go. That means that AI may not be any more godlike than existing bureaucracies (which, to be fair, can already seem inhuman and Kafkaesque thanks to the sheer scope of the problems they work on). In which case, the revolutionary nature of AI would be... well, not revolutionary. If that is true, you could say that we've already hit a singularity of sorts, and only now is it catching up to us that it was even possible.
Simon_Jester wrote:I think the problem here, and if you take a step back this becomes utterly obvious, is that Hoth's definition of "transhumanist" doesn't match yours.
That's actually a big problem with "Transhumanism" as a philosophy. How is it different from technological utopianism? Or communism with AI rulers rather than bureaucracy? How is it different from what I have sometimes heard called "Post-Humanism"? Singularitarianism can at least be called an ideology because it keeps central the general prediction that human life will be radically transformed in one relatively short event. But other forms that talk about enhancements using genetic engineering, cybernetics, and so forth share only the idea that what it means to be human will be radically transformed at the basic level of the body. That's why I classify all "trans"-humanisms as a set of fantasies. The main theme seems to be "look at all the cool technologies!"
HeadCreeps wrote:Is there an immediate objection to these types of discussions in the same sense that society more or less looks down upon liposuction or breast implants? I'm curious about other people's responses to this sort of thing; is it the physical modification that brings the instinctual dislike of transhumanism first, or does it require the realization that a Kurzweil-style AI explosion will inevitably lead to a fundamental change in what comprises the human condition for the objections to begin? Can things like hardware-based human memory enhancement be seen in the same light as currently existing types of body modifications for "normal people"? Or does it take the whole package - boom, you're a robot - for there to be a strong objection toward the concept? You said you're fine with replacing the heart, but what about parts of the brain for people who don't have obvious issues like Alzheimer's? Is this where it becomes objectionable?
It's the fact that Kurzweil-style predictions of a fundamental change in the human condition are often made in ignorance of how real human societies work, as well as other areas of knowledge, and that the people spreading the fad are often guilty of spreading ignorance themselves. Ignorance is bad, no?

As I explained, body modification is no issue to me, as long as people don't insist that you are a Luddite for declining.

Also, Broomstick mentioned this already, but it bears repeating that you can already supplement your memory using existing technology. In fact, you don't even need electricity to do so. It's called pen and paper, and I use it all the time. Or you can even skip technology completely and use mnemonic devices. Why risk brain surgery to solve something that's easy to work around with methods that have existed for centuries?
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Artificial Intelligence: Why Would We Make One?

Post by Starglider »

ThomasP wrote:In saying that, there are facets to that culture which deserve mention precisely because they reject the WASP computer-nerd tenets of mainstream Singularity worship:
What a bizarre concept, 'mainstream Singularity-worship'. That's a total non sequitur. The 'singularity' people arrived late to the transhumanist movement, and the idiot hangers-on arrived the best part of a decade after the initial pioneers fleshed out the theory. Maybe LessWrong is to blame for an increase in preachy idiots appropriating the terminology? I don't know; to be honest I haven't really been following the non-technical community associated with this for several years. It's pretty divorced from the people doing real work.

Anyway, I don't know where you're getting your data from; it sounds like you saw that Lion guy here and assumed he was representative of hundreds of thousands of people. The majority of transhumanists do not subscribe to the 'singularity' concept at all, in the now-common sense of very fast radical transformation of reality, transhuman AIs popping out of seemingly nowhere, etc. Of course almost all transhumanists explicitly or implicitly subscribe to Vernor Vinge's original definition of the singularity, which is simply a breakdown in the ability of futurists to predict what society will look like, because non-human intelligences in play change the dynamics. Genetically engineering wolves into sapient bipeds would be sufficient for that despite not involving any sort of transhumanism at all. More realistically, relatively modest brain-computer interfacing is enough. A few people argue that pervasive social networking qualifies, but I certainly don't buy that; it doesn't actually change human psychology in a prediction-breaking way, it just expands the village-gossip idiom to arbitrary sets of globally distributed humans.
Simon_Jester wrote:The Friendly AI problem is explicitly advertised as being so transcendentally hard compared to the General AI problem that for most of us, it's difficult to imagine the former being solved first. And yet we have people telling us that yes, the first big AI on the planet will set the tone of all future existence.
This is of course deeply ironic. The vast majority of transhumanists have no problem with people choosing to remain unmodified humans; the majority of transhumanists are at least mild libertarians, so this is hardly surprising. Probably a majority of transhumanists themselves want to remain human, just without aging and maybe some modest cybernetics; the eager-upload crowd is a sizable vocal minority. The hysterical 'they want to exterminate us' stuff is nonsense. The 'oh no we will be marginalised' is (a) bullshit even in principle - why should humanity as it currently exists be the most important thing in the galaxy forever - and (b) probably irrelevant - squishy human interstellar travel is so utterly impractical compared to AI interstellar travel that you could reasonably give the classic-humans everything they can reach and still have several thousand times more resources available for transhuman intelligences.

The irony is that while the vast majority of transhumanists want to preserve and expand the rights and happiness of all humans, the policies they advocate would just accelerate the creation of an unfriendly superintelligent AI. Technologies like brain-computer interfacing, quantum computing and nanorobotics also make it even easier for such an entity to achieve complete domination - plus they are of course significant existential risks in their own right. From copious experience, I can verify that most transhumanists don't accept hard take-off or the fact that most AI designs result in very negative outcomes for humanity (despite the intentions of the designers).
Does anyone really believe there's a way to stop the first big AI on the planet from belonging to a corporation, or a bunch of politicoes in the pay of one, at this rate?
Honestly, the exact intentions of whoever builds the AI that crosses the recursive self-improvement threshold don't matter that much... if they lack the appreciation of the goal system stability problem and the (extreme) technical competence to deal with it. Yes, in theory a megalomaniacal nut could build a stable wildly transhuman AI and dominate humanity with it, but the chance of just getting it wrong and wiping everyone out with an arbitrary-seeming goal (with ridiculous optimisation power applied to it) is much higher.

Obviously Hoth is utter filth for wanting to murder the only people who could possibly prevent this awful outcome (and turn it into an awesome one). His strawmanning of the concept (with the gratuitous invention of the human-purging policies) and total ignorance of the technical arguments are minor irritants by comparison.
User avatar
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

Formless wrote:
cosmicalstorm wrote:I don't believe in all of the singularity stuff, but machine intelligence and an intelligence explosion does seem very reasonable to me provided technology continues to develop. I do not see a similarity to flying cars, jetpacks and so on in regards to that subject, since we already know intelligence exists inside human skulls. Every other aspect of our body can be readily outdone by machinery, so why not the brain?
... I don't see where you are going with this. You realize that what makes the human brain amazing is its programming, NOT its raw processing power, right? It very well may be that there isn't much to improve on that can't be more easily fixed by giving humans the right tools (software tools, better non-sentient computers, better training to overcome cognitive biases, etc.). Furthermore, to paraphrase a point Destructionator XIII once made, frequently the real limiting factor on problem solving isn't intelligence, it's the time it takes to implement the solution. Processing power doesn't change things much if the implementation gives you enough time to fix wrinkles in the plan as you go. That means that AI may not be any more godlike than existing bureaucracies (which, to be fair, can already seem inhuman and Kafkaesque thanks to the sheer scope of the problems they work on). In which case, the revolutionary nature of AI would be... well, not revolutionary. If that is true, you could say that we've already hit a singularity of sorts, and only now is it catching up to us that it was even possible.
If it turns out that there isn't much to improve in human cognition then I will accept that. Right now it seems unlikely to me.
Existing bureaucracies are run by humans, at the subjective time experienced by humans; the communications inside the bureaucracies run at human communication rates (bits per second in speech and writing, and perhaps a few kb/s in pictures and movies), and the manual work is carried out at the speed limits imposed by human manipulators. I feel like I'm not being a gratuitous transhumansingularitarded machine-wanker when I say that I suspect this organization could be improved in efficiency and speed rather significantly.
Again I want to stress that I'm far from an expert in this area; I may be completely wrong. If 50 years from now, when I'm 75 years old, none of this has come to pass despite technology continuing to improve in the absence of an existential disaster, I will not be shocked.
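The bandwidth gap cosmicalstorm describes can be put in rough numbers. A minimal back-of-envelope sketch, where the bit rates are loose assumptions for illustration (the ~39 bits/s figure is one commonly cited estimate of the information rate of human speech, not a measurement):

```python
# Back-of-envelope comparison of human vs. machine communication rates.
# All figures below are rough, assumed estimates, not measured values.

SPEECH_BITS_PER_SEC = 39          # assumed information rate of human speech
WRITING_BITS_PER_SEC = 10         # assumed rate for handwritten text
GIGABIT_LINK_BITS_PER_SEC = 1e9   # commodity gigabit network link

def speedup(machine_bps: float, human_bps: float) -> float:
    """How many times faster the machine channel is than the human one."""
    return machine_bps / human_bps

print(f"link vs. speech:  ~{speedup(GIGABIT_LINK_BITS_PER_SEC, SPEECH_BITS_PER_SEC):,.0f}x")
print(f"link vs. writing: ~{speedup(GIGABIT_LINK_BITS_PER_SEC, WRITING_BITS_PER_SEC):,.0f}x")
```

Even with generous assumptions for the human side, the machine channel comes out tens of millions of times faster, which is the intuition behind the "bureaucracies run at human speed" point.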
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

cosmicalstorm wrote:If it turns out that there isn't much to improve in human cognition then I will accept that. Right now it seems unlikely to me.
Existing bureaucracies are run by humans, at the subjective time experienced by humans, the communications inside the bureaucracies are run at human communication rates (bits per second in speech and writing and perhaps a few kb/s in pictures and movies) and the manual work is carried out at the speed limits imposed by humans manipulators. I feel like I'm not being a gratuitous transhumansingularitarded machine-wanker when I say that I suspect this organization could be improved in efficiency and speed rather significantly.
If an AI were to replace the functions of a bureaucracy, these communication inefficiencies aren't likely to go away. Unless the work is being done by robots (with articulated bodies and everything, as opposed to the HAL9000 style of AI), humans are going to be doing the manual labor. That means they need to be organized, managed, motivated, and set to work, and at every step human language and processing speeds will have to be taken into account. And guess what? That accounts for the vast majority of information transactions in a bureaucracy, because there are only so many people at the top making big decisions, and tons more at the bottom carrying them out.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

I suspect that the role humans play, this bottleneck you are referring to, will be reduced gradually over time. And I fail to see why it shouldn't be possible to enhance the human part of the equation with more efficient human-computer interfaces (like the story posted earlier in this thread about the lawyers being replaced).
I'm not putting any specific time limit on this, though. And again, this is provided tech continues to develop and we do not enter a period where, for example, dwindling energy resources and rapid climate change make our civilization begin to move backwards.
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

cosmicalstorm wrote:I suspect that the role humans play, this bottleneck you are referring to, will be reduced gradually over time. And I fail to see why it shouldn't be possible to enhance the human part of the equation with more efficient human-computer interfaces (like the story posted earlier in this thread about the lawyers being replaced).
What kind of computer interfaces are you thinking of, specifically? Making a better keyboard? Voice operation like in Star Trek? Better software UI? Or cyberpunk-style direct brain-computer interfacing? The first three, sure, no problem. The last one, on the other hand, could run into legal/ethical problems. The courts might rule, for example, that a company cannot discriminate against people who don't want to have the necessary surgery, for obvious reasons: it's a medical risk, the poor would be unable to afford it, etc. Granted, this is making a few assumptions about how such a direct interface would operate, but hopefully you understand the principle.
Post Reply