Artificial Intelligence: Why Would We Make One?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Artificial Intelligence: Why Would We Make One?

Post by ThomasP »

Starglider wrote:
ThomasP wrote:In saying that, there are facets to that culture which deserve mention precisely because they reject the WASP computer-nerd tenets of mainstream Singularity worship:
What a bizarre concept, 'mainstream Singularity-worship'. That's a total non sequitur. The 'singularity' people arrived late to the transhumanist movement, and the idiot hangers-on arrived the best part of a decade after the initial pioneers fleshed out the theory. Maybe LessWrong is to blame for an increase in preachy idiots appropriating the terminology? I don't know; to be honest I haven't really been following the non-technical community associated with this for several years. It's pretty divorced from the people doing real work.
I agree with you, Starglider. I've been flitting around the edges of transhumanist culture since the mid-90s, and I even remember the days before Drexler and before Extropianism. The comment you quoted was a bit tongue-in-cheek and I should have made that clear (my fault there), but 'mainstream Singularity worship' -- meaning the easy target that often catches flak in the press due to some Kurzweil prediction, or in the knee-jerk responses in this thread* -- is how a lot of people see the entire movement of transhumanism.

The reason I posted the links I did was to point out that there is in fact more to the movement than that, and that not everyone working for (or just attracted to) such causes is a sociopath out to play with cool toys. I am skeptical of wildly fantastic views of AI takeoff and 'lol Singularity' (sorry), but I agree with you that the majority of non-fanboy types don't really feel that way. I think the ethical component is vastly understated in most of these discussions, and thus the 'toy obsession' takes precedence in the minds of the uninitiated.

Sorry for the poorly-worded statement.

* Understandable due to the LionEl Johnson shitposting
All those moments will be lost in time... like tears in rain...
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Artificial Intelligence: Why Would We Make One?

Post by Sarevok »

I don't think the problem is ethical, yet. Nobody is uploading their mind tomorrow or living in fear of the grey goo apocalypse next year. Ethics only enters the fray when, you know, at least one tenth of the wildly distorted predictions actually come into existence.

The problem is elsewhere, and it is a simple one. Wild, unfounded optimism, as opposed to slow progress rooted in rational thought, is what is wrong with the singularity movement. You don't see astronauts boldly proclaiming they will be on Mars by 2020, or nuclear fusion researchers sermonizing with religious zeal that the tokamak will bring utopia by 2050. But Singularitarians (is that a word?) preach their hopes for the singularity the way fundamentalist Christians preach the Rapture. The claim that the singularity is actually a codeword for the nerd rapture is not that far off. This, I think, is what is most responsible for damaging the credibility of real transhumanist philosophy in mainstream public discourse.
I have to tell you something: everything I wrote above is a lie.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Artificial Intelligence: Why Would We Make One?

Post by Simon_Jester »

Starglider wrote:
Simon_Jester wrote:The Friendly AI problem is explicitly advertised as being so transcendentally hard compared to the General AI problem that for most of us, it's difficult to imagine the former being solved first. And yet we have people telling us that yes, the first big AI on the planet will set the tone of all future existence.
This is of course deeply ironic. The vast majority of transhumanists have no problem with people choosing to remain unmodified humans; the majority of transhumanists are at least mild libertarians, so this is hardly surprising. A probable majority of transhumanists themselves want to remain human, just without aging and maybe with some modest cybernetics; the eager-upload crowd is a sizable vocal minority.
I know- it's the implications that bug me, and I think we're pretty much in agreement on that.
The hysterical 'they want to exterminate us' stuff is nonsense. The 'oh no, we will be marginalised' stuff is (a) bullshit even in principle - why should humanity as it currently exists be the most important thing in the galaxy forever? - and (b) probably irrelevant: squishy human interstellar travel is so utterly impractical compared to AI interstellar travel that you could reasonably give the classic-humans everything they can reach and still have several thousand times more resources available for transhuman intelligences.
The fear of marginalization is reasonable from a parochial standpoint: I don't want to find myself or my descendants in a situation where they must choose between accepting radical alteration of their own nature (whether they like it or not) or becoming irrelevant zoo exhibits.

I'm not sure there's a way to avoid it, but I still don't like it.
The irony is that while the vast majority of transhumanists want to preserve and expand the rights and happiness of all humans, the policies they advocate would just accelerate the creation of an unfriendly superintelligent AI. Technologies like brain-computer interfacing, quantum computing and nanorobotics also make it even easier for such an entity to achieve complete domination - plus they are of course significant existential risks in their own right. From copious experience, I can verify that most transhumanists don't accept hard take-off or the fact that most AI designs result in very negative outcomes for humanity (despite the intentions of the designers).
EXACTLY. Or in my case, I'm sold on the idea of, if not hard take-off, at least an increase in capability rapid enough to give first adopters a colossal advantage- exactly how rapid "rapid" is depends on the technology and the time it takes to put it into use. Genetics is long-lead; engineer your children to be supermen and they grow up to dominate the world in thirty years' time... but in turn have thirty years' lead over anyone else who tries to emulate them, assuming nothing goes awry with their modifications.

AI as presented by the hard takeoff advocates has much, much shorter lead times, but the basic point remains: almost regardless of what form the Singularity is supposed to take, the first adopters have a very large advantage- or wind up creating something else that does.
Obviously Hoth is utter filth for wanting to murder the only people who could possibly prevent this awful outcome (and turn it into an awesome one). His strawmanning of the concept (with the gratuitous invention of the human-purging policies) and total ignorance of the technical arguments are minor irritants by comparison.
I think your perception of Hoth suffers from a severe mutual misunderstanding in need of being worked out.

What Hoth's frustrated with, enough to think it vile and in need of demolition, is the 'ideal' of a posthuman future as held by some of the loonier individuals who take "the Singularity" as something to pin all their "rapture of the nerds" fantasies on. I don't know where he got the notion that this represents a majority viewpoint among futurists who think technology is going to revolutionize everything (the strict definition of 'Singularitarian'). But there's a huge gap between the way he's using these words and the way you are.

Please try to take that into account.
This space dedicated to Vasily Arkhipov
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Artificial Intelligence: Why Would We Make One?

Post by ThomasP »

Sarevok wrote:I don't think the problem is ethical, yet. Nobody is uploading their mind tomorrow or living in fear of the grey goo apocalypse next year. Ethics only enters the fray when you know - at least one tenth of the wildly distorted predictions actually enter existence.
I disagree completely. Virtually every complaint about transhumanism in the aggregate comes down to ethical issues -- usually the broad disregard for the norms and values of existing societies, or the degrees of freedom available to any given person (real or perceived). The criticisms of power-fantasy wish-fulfillment and technology masturbation are only leveled after the fact, and within the confines of that argument (that is, most people don't really care what others do, but don't want to be coerced into becoming a robot or uploaded mind, directly or indirectly).

Ethics is central to the issue of technology policy, whether you're talking nuclear power and the internet or mind uploading and nanotech. How we use technology is arguably more important than the capability of said technology, and there is certainly no harm in establishing frameworks if there's even the remote possibility of fabricator or AGI technologies being developed.

Even if those technologies prove to be unworkable, I wouldn't savor the robot stormtroopers scenario, nor would I particularly care to see advanced biotechnology or robust 3D printing technologies concentrated in the hands of a new oligarchy. Neither of those scenarios, nor a whole variety of dysfunctional outcomes, require 'magic Singularity' in order to happen.

Having a structure in place to favor, or at least be non-inimical to, existing human societies and values is essential.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

Formless wrote:
cosmicalstorm wrote:I suspect that the role that humans play, this bottle-neck you are referring to, will be reduced gradually over time. And I fail to see why it shouldn't be possible to enhance the human part of the equation with more efficient human-computer interfaces (like the story about the lawyers being replaced posted earlier in this thread).
What kind of computer interfaces are you thinking of, specifically? Making a better keyboard? Voice operation like in Star Trek? Better software UI? Or cyberpunk-style direct brain-computer interfacing? The first three, sure, no problem. The last one, on the other hand, could run into legal/ethical problems. The courts might rule, for example, that a company cannot discriminate against people who don't want to have the necessary surgery, for obvious reasons: it's a medical risk, the poor would be unable to afford it, etc. Granted, this is making a few assumptions about how such a direct interface would operate, but hopefully you understand the principle.
I certainly don't doubt there will be a jungle of legal issues along the way, no doubt about that. What kind of interface? Take optogenetics for instance.
Technological developments now allow neurotransmitters to be uncaged with exquisite spatial specificity (down to a single spine, with two-photon uncaging) and in rapid, flexible spatial–temporal patterns [10–14]. Nevertheless, current technology generally requires damaging doses of UV or violet illumination and the continuous re-introduction of the caged compound, which, despite interest, makes for a difficult transition beyond in vitro preparations. Thus, the tremendous progress in the in vivo application of photo-stimulation tools over the past five years has been largely facilitated by two 'exciting' new photo-stimulation technologies: photo-biological stimulation of a rapidly increasing arsenal of light-sensitive ion channels and pumps ('optogenetic' probes[15–18]) and direct photo-thermal stimulation of neural tissue with an IR laser [19–21].
http://iopscience.iop.org/1741-2552/7/4/040201
The principle of using optogenetics to trigger action potentials in neurons was discovered by Miesenböck in 2002 (Zemelman 2002), who was also the first to use optogenetics to control behavior in an animal (Lima 2005). The term “optogenetics” was initially coined in 2006 (Deisseroth 2006) to refer to a rapidly adapted approach of using new high-speed optical methods for probing and controlling genetically targeted neurons within intact neural circuits. Over the next year, the term used to describe this new technique was featured in the pages of Science and Nature, in a series of general-interest (Miller 2006, Miesenböck 2009) and scientific/technical (Zhang 2007a, Adamantidis 2007) reports, and is now widely used. Optogenetics was selected as the Method of the Year 2010 by Nature Methods (Deisseroth 2011).

The hallmark of optogenetics is introduction of light-activated channels and enzymes that allow manipulation of neural activity with millisecond precision while maintaining cell-type resolution through the use of specific targeting mechanisms.
http://en.wikipedia.org/wiki/Optogenetics
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Artificial Intelligence: Why Would We Make One?

Post by Starglider »

Simon_Jester wrote:I don't want to find myself or my descendants in a situation where they must choose between accepting radical alteration of their own nature (whether they like it or not) or becoming irrelevant zoo exhibits.
Irrelevant to what? Seriously, what is your standard for comparison here? If you are an anti-transhumanist who doesn't want to join those upstart uploads etc, then why do you care what they think of you? Yes, they may be doing scientific and cultural stuff way in advance of you, but why does this matter? If you don't want spoilers then you don't have to pay attention.

In reality this tends to boil down to actual physical threat terms, 'relevance' is just a code-word for 'supremacy'. I wonder if this hang-up is more common for westerners in general and Americans in particular, who are used to being the 'most relevant things on the planet'?
I'm not sure there's a way to avoid it
The only way to avoid it is to destroy civilisation, such that transhumans are never possible. Of course, that probably cedes the galaxy to aliens...
Or in my case, I'm sold on the idea of, if not hard take-off, at least an increase in capability rapid enough to give first adopters a colossal advantage- exactly how rapid "rapid" is depends on the technology and the time it takes to put it into use. Genetics is long-lead; engineer your children to be supermen and they grow up to dominate the world in thirty years' time... but in turn have thirty years' lead over anyone else who tries to emulate them, assuming nothing goes awry with their modifications.

AI as presented by the hard takeoff advocates has much, much shorter lead times, but the basic point remains: almost regardless of what form the Singularity is supposed to take, the first adopters have a very large advantage- or wind up creating something else that does.
Yes, this is correct. Most transhumanists are pretty naive here; they do tend to be libertarians, after all. The movement matured in the 1990s period of relative stability, western supremacy, the illusion of free markets working perfectly, and so on. People pushing to develop the tech are naturally going to be optimists about its applications, plus there was just the push-back against the relentless negative characterisation that transhumanist themes get from Hollywood etc. In reality these are all incredibly dangerous technologies that could well be used by small groups for material gain and to oppress/control larger groups. Generally, the more powerful the technology, the bigger an imbalance and the more radical a takeover it can support.
What Hoth's frustrated with, enough to think it vile and in need of demolition, is the 'ideal' of a posthuman future as held by some of the loonier individuals who take "the Singularity" as something to pin all their "rapture of the nerds" fantasies on.
On its own, that's actually fairly harmless. There's nothing wrong with hoping for a positive future. The thing is, for the vast majority of people there is nothing they can do to help except maybe fund some research projects. If you're not a researcher in a relevant technology, then it doesn't have much impact on real decision-making. Chatting about it is ok, I guess, but without understanding the tech you're just going to get everything horribly wrong, and certainly actively preaching at people is stupid. The tendency to dismiss the whole thing by classifying it as a religion is annoying, but to be honest understandable, given that the fringe nuts are in fact using religious thinking to interpret the ideas.

I don't mind this as much as I did, because at present I think more publicity for seed/recursive/Unfriendly AI is almost certainly a bad thing (outside of the AI field).
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

@ cosmicalstorm: See, that example right there would be the kind of ethically problematic proposition I'm talking about. Both those articles present the concept as a tool for researching the function of neurons, not as a potential technology for interfacing with computers. The first quote even explains why using it that way would be bad: the necessary chemical compounds and UV radiation damage tissue, meaning its use is limited to in vitro experiments (that is, for laypeople, cell samples outside of the body in a petri dish). Even if new developments make it possible to use it as an interface technology like a keyboard, it's almost certainly going to require surgical implants and (according to wikipedia) virus-delivered chemical markers.

Plus, it seems to me to be overkill. It's prized for its precision in picking up on signals, but you don't necessarily need that much precision. One of those lessons they teach you in chemistry class: why ask for a machined part that's precise down to three significant figures when it's cheaper and no less effective to specify only one? Similarly, things like voice-activated software or (if you really want to get fancy) EEG interfacing would probably transfer signals from human to computer just as fast, and they're far cheaper and easier, with no medical procedures.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

The first set of tools developed to research radiation could probably not have been used to build nuclear weapons. The worst-case scenario is that they get to use that piece of technology (which I only used as an example; I could have mentioned others) to "only" see the way neurons act on a millisecond basis in test animals or in a petri dish, instead of using much clumsier technologies that follow neurons over periods of time that are orders of magnitude longer. And there are going to be ethical issues? Heaven forbid there be ethical issues! I cannot think of any activity that any larger group of people partakes in that is not surrounded by a debate about ethics that could be carried on forever. I'm curious, Formless: where do you think the research into human-computer interfaces ought to stop? Where do you think it will stop?
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Artificial Intelligence: Why Would We Make One?

Post by Broomstick »

Arguably, a cochlear implant is a crude computer/brain interface, as are the artificial retinas being developed.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

cosmicalstorm wrote:The first set of tools developed to research radiation could probably not have been used to build nuclear weapons. The worst-case scenario is that they get to use that piece of technology (which I only used as an example; I could have mentioned others) to "only" see the way neurons act on a millisecond basis in test animals or in a petri dish, instead of using much clumsier technologies that follow neurons over periods of time that are orders of magnitude longer.
How is this technology at all analogous to nuclear bombs? :wtf: And here you go again with that hindsight bias...
And there are going to be ethical issues? Heaven forbid there be ethical issues! I cannot think of any activity that any larger group of people partakes in that is not surrounded by a debate about ethics that could be carried on forever. I'm curious, Formless: where do you think the research into human-computer interfaces ought to stop? Where do you think it will stop?
Where do you think it begins? You seem to just throw out technologies as they come to mind without considering their moral impact at all. I already explained why technologies that require surgical implants would face issues if you tried to make them mandatory, in the context of using AI to replace the higher functions of a bureaucracy. Hell, read Shroom's post in this thread (there is only one thus far). Or any post by Shroom in a thread about transhumanism. Medical procedures are not something you just do on a whim.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

There is an analogy to nuclear weapons because intelligence is a very powerful force; when tools are made available that make it easier to scrutinize intelligence, that is one more step on the way to creating it in machines. (With regards to hindsight bias: unless we find out that it's not possible to create intelligence in anything but a standard human biological brain.)
So we should end all research along these lines because of the moral issues?
I don't think that's going to happen.
And I've made it clear five times or something that I don't think tech development will necessarily lead to a world of ponies and rainbows. Of course I hope the benefits will outweigh the cons, but why are you still acting like I'm saying that?
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

What the fuck is with all the herring covered in lead-based red paint? I'm talking about specific moral issues with implants in a bureaucratic setting, not AI, you moron. Can't you keep track of context?
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

I don't think this is going anywhere. I've made myself as clear as possible. Have a nice day.
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

So you're not going to own up to the stupidity of your proposal, and only want to talk in generalities rather than specific cases? Good to know you are so honest. [/sarcasm]
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

You were asking about computer-human interfaces and morals.

I agreed with you that there would be many moral issues.

You asked what kind of interfaces and I gave you one example. What else is there?
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

EEG? Eye tracking? Voice operation? There are tons of ways to operate a computer, and plenty of them are fairly high-tech without being invasive. Of course, the real question is whether any of them can surpass the utility of the keyboard and mouse.

Edit: Oh, and by the way, if you are going to concede something, it helps if you don't follow it up with something flippant like "Heaven forbid there be ethical issues!" Mixed signals tend to hinder communication.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Artificial Intelligence: Why Would We Make One?

Post by Singular Intellect »

What the hell do computer brain interfaces have to do with morality? Whether it's a keyboard, mouse, eye tracker or brain implant, what the fuck has morality got to do with anything?

Do people worry about the morality of keyboards, mice, touch screens, eye trackers or voice recognition software?

Where's the moral issue? 'Forcing people to use them'? Who the fuck forces anyone to use any of the aforementioned interfaces, and why would we expect any change to the status quo if interfaces include brain implants/readers?
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

Singular Intellect wrote:What the hell do computer brain interfaces have to do with morality? Whether it's a keyboard, mouse, eye tracker or brain implant, what the fuck has morality got to do with anything?
brain implant,
Goddammit, do you know how to read, Bubble Boy? We covered this. Any implant is going to need surgery to, you know, implant it, which is going to bring medical risks to the hypothetical employee. Now, I know you are a complete moron who thinks technological advancement is the only measure of social improvement, but I would hope you could at least understand the basic risk-benefit analysis of FUCKING BRAIN SURGERY and how it's quite a bit different from using a mouse and keyboard.
Where's the moral issue? 'Forcing people to use them'? Who the fuck forces anyone to use any of the aforementioned interfaces, and why would we expect any change to the status quo if interfaces include brain implants/readers?
I dunno, an artificial intelligence that's not Friendly and only understands efficiency maybe? :roll:
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Artificial Intelligence: Why Would We Make One?

Post by Singular Intellect »

Formless wrote:Goddammit, do you know how to read, Bubble Boy? We covered this. Any implant is going to need surgery to, you know, implant it, which is going to bring medical risks to the hypothetical employee. Now, I know you are a complete moron who thinks technological advancement is the only measure of social improvement, but I would hope you could at least understand the basic risk-benefit analysis of FUCKING BRAIN SURGERY and how it's quite a bit different from using a mouse and keyboard.
This does not address my question: where's the moral issue? Who's being forced to volunteer for brain implants or brain scanners?
Where's the moral issue? 'Forcing people to use them'? Who the fuck forces anyone to use any of the aforementioned interfaces, and why would we expect any change to the status quo if interfaces include brain implants/readers?
I dunno, an artificial intelligence that's not Friendly and only understands efficiency maybe? :roll:
What's your point/argument here? That an AI could be dangerous? Fucking duh; so can biological intelligence. You don't see anyone arguing against popping out new kids.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

My god, you are denser than lead. It's a hypothetical. This technology isn't here yet, so we can only surmise how it might be used down the road. Employees of any company or bureaucratic organization (governments, non-profit groups, think tanks) are forced to use the tools and equipment provided by their employer to get their job done. If implants become commonplace, it is not unreasonable to consider the possibility that someone out there will want to require their employees to have them, because of the perceived or actual increase in worker efficiency they provide or in the interest of staying competitive with other companies that are using them, and will disregard other considerations like safety, economics, discrimination, employee preferences, alternative technologies and so forth. By asking these what-ifs now, we can figure out how these situations would best be resolved legally and morally if/when the technology gets here. Your questions are as puerile as asking "who is enslaving robots? Where is the moral problem with robots? What's your point?"
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Artificial Intelligence: Why Would We Make One?

Post by Singular Intellect »

Formless wrote:My god, you are denser than lead. It's a hypothetical. This technology isn't here yet, so we can only surmise how it might be used down the road. Employees of any company or bureaucratic organization (governments, non-profit groups, think tanks) are forced to use tools and equipment provided by their employer to get their job done.
Nobody is 'forced' to use tools at any given job. You can use the tools provided and needed at a job, or go find another one if you don't want to use them.

The only 'forced' usage of tools would be safety equipment and safety standards. On a job site, I can use a hammer if I want to. If I don't, my attitude towards the usage of tools dictates whether I'm qualified to do the job or not. A safety hard hat, on the other hand, will be strictly enforced. To the point of being kicked off site if I don't wear one and even potentially being fined for neglecting to do so. That is the only 'forced' technology use; that which is designed to protect and save people.
If implants become commonplace, it is not unreasonable to consider the possibility that someone out there will want to require that their employees have them, because of the perceived or actual increase in worker efficiency they provide or in the interest of staying competitive with other companies who are using them, and will disregard other considerations like safety, economics, discrimination, employee preferences, alternative technologies and so forth.
You're confusing job qualifications with worker safety. If you don't have the training and skills to do a job, then you won't get hired. Skills and training are irrelevant with regards to safety, because those rules apply to anyone, not just employees.
By asking these what-ifs now, we can figure out how these situations would be best resolved legally and morally if/when the technology gets here. Your questions are as puerile as asking "who is enslaving robots? Where is the moral problem with robots? What's your point?"
If someone doesn't get hired for a job because they lack the education and skills to do it, do you cry foul as well? No one is going to be forced to use tools, regardless of whether those tools are education or brain implants that enhance intelligence in some manner.

Theoretically speaking, will a 'non-enhanced' human have fewer options than an 'enhanced' one regarding employment? Seems quite likely. Is that a serious problem? No more so than for those who don't get an education to qualify for more desirable, higher paying jobs.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

Singular Intellect wrote:Nobody is 'forced' to use tools at any given job. You can use the tools provided and needed at a job, or go find another one if you don't want to use them.
Don't be a pedantic little asshole. If you want to be a doctor, you had damn well better be able to hold a scalpel. If you want to be a heavy machinery operator, you had better be able to operate heavy machinery. If you want to be an accountant, you had better be able to use a computer. If you don't or can't do these things, you don't get the job. If you refuse to use them, they fire you.

Your entire post is nothing but a series of straw men and attempts to redefine the meaning of "job requirements" so you can dodge what should be an obvious point. There is a difference between job requirements and qualifications that you apparently need spelled out for you: requirements can be almost anything that the employer will fire you for if you disagree with them. This includes being qualified for the job, obviously. But it can also include things that are not only arbitrary but immoral or illegal, like "No Irish Brown People Need Apply".

In the scenario I'm talking about, a company or other organization is asking its workers to have unnecessary and potentially dangerous medical procedures done to themselves so that they keep up with the employer's high standards, or alternatively to stay competitive. But we don't have to look into the future to understand this point. Imagine if your employer asked you to take amphetamines to make you work harder, and implied that you would be on the chopping block if you refused. Do you think that would be okay? Would you accept their requirement? If yes, do yourself a favor: seek help. The side effects of most stimulants can seriously fuck up your health. Similarly, implants require surgery, which always carries a risk to your long-term health. Moreover, you were proposing BRAIN implants, which means those risks rise to the level of "you had better have epilepsy or something similarly bad, because this could leave you dead or worse if something goes wrong". Something going wrong can include something as simple as an infection around the point of incision or implant.

And don't try and say there isn't precedent for companies asking their employees to do things detrimental to their health or welfare. Corporations can and will do whatever they think they can get away with in the name of the Bottom Line. It's their raison d'être. The reason we have a guaranteed 40-hour work week and other rights is not because of the generosity of corporations, that's for sure.

Will there be jobs that only Cybernetic Übermensch will be qualified for? Maybe. Or maybe all the thinking jobs can be done just fine with technologies that don't require invasive surgery (and besides, isn't that what people want AI for? To take over or at least supplement the thinking jobs?). I can't think of any menial task that isn't already or cannot in the future be done just fine with heavy machinery and/or robots.

And the real kicker is, you say you don't see the problem with a future where enhanced people have all the job opportunities by comparing that to education. I mean, it's already hard enough for poor kids to get into a decent school or get the money for a college-level education, and that's one of the things that tends to keep the poor poor. On top of that, in the future you propose, they may have to get surgical implants to compete for jobs. Where does the money for that come from? The hopes and dreams of children? Our society isn't likely to embrace Communism any time soon. It's quite ironic: either you are an idiot for not understanding the basic socioeconomics of what you propose, or you lack basic compassion for not seeing it as a problem. I'm betting both, all things considered.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Artificial Intelligence: Why Would We Make One?

Post by Singular Intellect »

Formless wrote: Don't be a pedantic little asshole. If you want to be a doctor, you had damn well better be able to hold a scalpel. If you want to be a heavy machinery operator, you had better be able to operate heavy machinery. If you want to be an accountant, you had better be able to use a computer. If you don't or can't do these things, you don't get the job. If you refuse to use them, they fire you.

Your entire post is nothing but a series of straw men and attempts to redefine the meaning of "job requirements" so you can dodge what should be an obvious point. There is a difference between job requirements and qualifications that you apparently need spelled out for you: requirements can be almost anything that the employer will fire you for if you disagree with them. This includes being qualified for the job, obviously. But it can also include things that are not only arbitrary but immoral or illegal, like "No Irish Brown People Need Apply".
And we have anti-discrimination laws to combat precisely that.

What? You think you're the first to realize this shit or do more than just whine about it?
In the scenario I'm talking about, a company or other organization is asking its workers to have unnecessary and potentially dangerous medical procedures done to themselves so that they keep up with the employer's high standards, or alternatively to stay competitive.
That's why we have laws in place to allow employees to refuse dangerous work hazards.

Medical operations for things like brain implants would be a personal choice.
But we don't have to look into the future to understand this point. Imagine if your employer asked you to take amphetamines to make you work harder, and implied that you would be on the chopping block if you refused. Do you think that would be okay? Would you accept their requirement? If yes, do yourself a favor: seek help. The side effects of most stimulants can seriously fuck up your health. Similarly, implants require surgery, which always carries a risk to your long-term health. Moreover, you were proposing BRAIN implants, which means those risks rise to the level of "you had better have epilepsy or something similarly bad, because this could leave you dead or worse if something goes wrong". Something going wrong can include something as simple as an infection around the point of incision or implant.
See above. The law is on my side, not my employer's. He/she/it would be in deep shit for attempting to threaten me or anyone else that way.
And don't try and say there isn't precedent for companies asking their employees to do things detrimental to their health or welfare. Corporations can and will do whatever they think they can get away with in the name of the Bottom Line. It's their raison d'être. The reason we have a guaranteed 40-hour work week and other rights is not because of the generosity of corporations, that's for sure.
Which brings us back to the law I cited earlier.
Will there be jobs that only Cybernetic Übermensch will be qualified for? Maybe. Or maybe all the thinking jobs can be done just fine with technologies that don't require invasive surgery (and besides, isn't that what people want AI for? To take over or at least supplement the thinking jobs?). I can't think of any menial task that isn't already or cannot in the future be done just fine with heavy machinery and/or robots.

And the real kicker is, you say you don't see the problem with a future where enhanced people have all the job opportunities by comparing that to education. I mean, it's already hard enough for poor kids to get into a decent school or get the money for a college-level education, and that's one of the things that tends to keep the poor poor. On top of that, in the future you propose, they may have to get surgical implants to compete for jobs. Where does the money for that come from? The hopes and dreams of children? Our society isn't likely to embrace Communism any time soon. It's quite ironic: either you are an idiot for not understanding the basic socioeconomics of what you propose, or you lack basic compassion for not seeing it as a problem. I'm betting both, all things considered.
You're arguing from historical ignorance. Technology becomes cheaper and more available the longer it is around. The rich and financially well off are the ones, ironically, who end up paying through the nose for the expensive, unreliable and undeveloped technologies first. Once they're cheap, reliable and well developed, practically everyone has access to them. Like cell phones, vehicles, computers, TVs, internet, clean water, food, etc, etc.
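For what it's worth, this pattern has a standard quantitative form: the experience curve (a.k.a. Wright's law), under which each doubling of cumulative production cuts unit cost by a roughly constant fraction. A minimal sketch in Python; the $10,000 first-unit cost and 20% learning rate are illustrative assumptions, not real data:

```python
import math

# Wright's law / experience curve: each doubling of cumulative
# production cuts unit cost by a constant fraction (the learning rate).
# All numbers below are illustrative assumptions, not real data.

def wright_cost(first_unit_cost: float, units_produced: float,
                learning_rate: float) -> float:
    """Unit cost after `units_produced` cumulative units."""
    b = -math.log2(1.0 - learning_rate)  # progress exponent
    return first_unit_cost * units_produced ** (-b)

# A hypothetical $10,000 gadget with a 20% learning rate:
early = wright_cost(10_000, 1_000, 0.20)     # cost after 1,000 cumulative units
late = wright_cost(10_000, 1_000_000, 0.20)  # cost after 1,000,000 cumulative units
```

Of course, this only models unit cost as production scales; it says nothing about who can afford the early units or who captures the gains. But the direction of the curve is the point.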
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Artificial Intelligence: Why Would We Make One?

Post by Formless »

Singular Intellect wrote:And we have anti discrimination laws to combat precisely that.

What? You think you're the first to realize this shit or do more than just whine about it?
Are you trying to out-smug my cat? Of course I know about those laws, in fact if you read my post more carefully:
I wrote:But it can also include things that are not only arbitrary but immoral or illegal like "No Irish Brown People Need Apply".
Next time, do more than just scan the post before replying to it. Actually READ it.
That's why we have laws in place to allow employees to refuse dangerous work hazards.

Medical operations for things like brain implants would be a personal choice.
Concession accepted.
You're arguing from historical ignorance. Technology becomes cheaper and more available the longer it is around. The rich and financially well off are the ones, ironically, who end up paying through the nose for the expensive, unreliable and undeveloped technologies first. Once they're cheap, reliable and well developed, practically everyone has access to them. Like cell phones, vehicles, computers, TVs, internet, clean water, food, etc, etc.
I'm having déjà vu...

Oh, that's right. It's because we went over this earlier in the thread, and you got soundly trounced on this point:
Formless wrote:If you have ever cracked open a history book in your life you would see dozens of examples of robber barons, imperialists, and other greedy scumbags who reaped all the profits of high technology at everyone else's expense-- including that of their descendants. You put far too much naive faith in technology, when the real problems of this world stem from systematic human greed.
Technology isn't magic, fucktard. Why, if these technologies keep getting cheaper, do we still have endemic poverty in the vast majority of the world?

Could it be because the price of technology has about as much to do with poverty as does the price of tea in China? :roll:

Think about it. I just gave Bubble Boy here one of several socio-economic mechanisms that keep poor people poor. Rather than explaining how we can go about solving the problem or at least keeping it from getting worse, he simply appeals to technological progress making technologies cheaper. But that's not the cause of poverty at all-- it's the concentration of wealth that causes poverty, and the systemic mechanisms set in place to keep that wealth concentrated. In fact, technology getting cheaper arguably benefits the rich as much as or more than the poor, because they can buy more and better technology and invest in capital (as in, things that generate wealth, like banking institutions and the Means of Production Marx harped on about so much).

Finally, this claim doesn't even make a lick of sense, and smells like it's been pulled from someone's rectum and given a shiny gold gilding.

The rich are precisely the people who can afford to "pay through the nose" for relatively unreliable technologies-- they do it for fun, and as soon as their iPod Nano-whatever is even out of fashion (fashion!) they just go buy a new one. Ever heard of "planned obsolescence?" Why does it make sense to assume that the technology the rich get is always unreliable and buggy compared to the stuff that the poor get? Isn't it true that the technological gadgets that get sold to the masses are often designed as cheaply and lazily as possible so that the maker turns a bigger profit? Among the rich: do you include the people who make the technology? And I don't mean the people who invent it, I mean the people who hire those people and own factories. Like Steve Jobs. And lastly, where are you getting your numbers from, and how for that matter are you measuring the cost of technology? On a per-unit basis? On a per-factory basis? Materials cost? Processing costs? Maintenance costs? Are you basing your observations on economic activity in the First World, or in developing countries?

"Show, don't tell" is not just a good storytelling method, it's the only way to prove you aren't a bullshit artist.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Artificial Intelligence: Why Would We Make One?

Post by cosmicalstorm »

@Formless
EEG? Eye tracking? Voice operation? There are tons of ways to operate a computer, and plenty of them are fairly high-tech without being invasive. Of course, the real question is whether any of them can surpass the utility of the keyboard and mouse.

Edit: Oh, and by the way, if you are going to concede something, it helps if you don't follow it up with something flippant like "Heaven forbid there be ethical issues!" Mixed signals tend to hinder communication.
Let's keep in mind that you started this entire debate from the position that the very notion of technology increasing the speed of technological development, and of using technology to improve human cognition (transhumanism), was nonsense that needed to be ridiculed. Then you gradually shifted it into wanting to debate the morals of a hypothetical scenario where a hypothetical corporation wants to force employees into having brain implant surgery. I don't dispute that there might be those situations; it would not shock me. On the other hand, I would not be shocked if people just got those kinds of augmentations completely voluntarily, should they ever become available, because of the way they could improve life.
I hope that the positive parts of cognitive enhancements will outweigh the negative parts.
Nobody will ever need more than 640kb of RAM; blind people who get their sight back thanks to neural implants would be better off with a mouse and a keyboard.