5 Awesome Sci-Fi Inventions (That Would Actually Suck)

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

Battlehymn Republic
Jedi Council Member
Posts: 1824
Joined: 2004-10-27 01:34pm

5 Awesome Sci-Fi Inventions (That Would Actually Suck)

Post by Battlehymn Republic »

A Cracked.com article. I thought it made some good, if not entirely revolutionary, points.
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

Entertaining, but I'm surprised they left out computers with true AI. (Why would we want it? So we can have a hot android sex toy to fulfill our every need. Why wouldn't we want it? The hot android sex toy might decide it doesn't want to be used as a sex toy, and enforce this decision by breaking our neck.)
Please do not make Americans fight giant monsters.

Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.

They have more WMD than there are monsters for us to fight. (More insanity here.)
SilverWingedSeraph
Jedi Knight
Posts: 965
Joined: 2007-02-15 11:56am
Location: Tasmania, Australia
Contact:

Post by SilverWingedSeraph »

Anyone who's stupid enough to put "True AI" into an android sex toy really deserves to have their neck broken anyway.
  /l、
゙(゚、 。 7
 l、゙ ~ヽ
 じしf_, )ノ
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Anyone stupid enough to put superhuman mechanical strength into an android sex toy is asking for trouble as well. Quite frankly, any android sex toy would have the 'intelligence' to moan at the right times and be able to respond to things like 'get on your hands and knees', and enough strength to rock back and forth. The only danger would be from some idiot actually managing to electrocute himself.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
Battlehymn Republic
Jedi Council Member
Posts: 1824
Joined: 2004-10-27 01:34pm

Post by Battlehymn Republic »

But agriculture is the worst mistake in the history of the human race! Don't you read Jared Diamond?
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Post by Junghalli »

SilverWingedSeraph wrote:Anyone who's stupid enough to put "True AI" into an android sex toy really deserves to have their neck broken anyway.
It's a stupid example to illustrate a very real problem with the idea of truly intelligent AI. If something can think for itself it can rebel. Unless maybe you program in very strong directives against disobeying humans, in which case you've basically just created a brainwashed slave, which is ethically questionable to say the least.

Personally I tend to think it would be a better idea to keep most AI as specialized expert systems that are very good at what they're programmed for but have no real self-awareness or volition.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

The idea that people are going to put AI in trivial things like toasters and sex robots for lamers like Sidewinder is simply retarded, particularly since it's often supposed to happen 'real soon'. The idea of true, unrestrained AI is *also* retarded in my opinion: what kind of idiot is going to create something far more intelligent than them with no ability to control it?
Battlehymn Republic
Jedi Council Member
Posts: 1824
Joined: 2004-10-27 01:34pm

Post by Battlehymn Republic »

By putting it into a toaster, duh.
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

What I don't get is why people think that AI would be created for the express purpose of servitude, as opposed to being created as capable of taking on positions like any other intelligent person. Quite frankly, if people were to treat an intelligent machine like that, then they deserve to get usurped because they're assholes.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Ford Prefect wrote:What I don't get is why people think that AI would be created for the express purpose of servitude, as opposed to being created as capable of taking on positions like any other intelligent person. Quite frankly, if people were to treat an intelligent machine like that, then they deserve to get usurped because they're assholes.
Sidewinder doesn't want to think about a world where AI sex dolls still think he's a hopeless loser and won't have anything to do with him? :)
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Stark wrote:Sidewinder doesn't want to think about a world where AI sex dolls still think he's a hopeless loser and won't have anything to do with him? :)
Man, I'm glad you're here. This way I don't have to worry about wailing on Sidewinder with my axe.


And seriously, if you have a computer which replicates sapient reasoning abilities and a capacity for learning - just like a child - then even with superhuman processing provided by its superior thinking bits, it could be raised to appreciate the society in which it lives and the people it coexists with. It would be more dangerous than another person given its presumably superior processing abilities, but is there any particular reason why it would be more prone to disloyalty than any other person? Do you see smart people going out and exterminating everyone they know?

Yes?

No.

An artificial intelligence may, in time, become so alien that it becomes impossible for us to relate to it. It may transcend our capabilities by such an extent that we are like ants in its presence, though the chances of some machine-led rebellion seem patently absurd. If an AI ends up ruling over us with its mighty computer brain, it is because we have allowed it to over time, giving over more and more logistical functions until it is the very crux of society. These fanciful ideas of plucky human resistance fighters fighting soulless machines aren't just fanciful, they're fucking annoying*.

*The Terminator films are still fucking awesome.
Darth Ruinus
Jedi Master
Posts: 1400
Joined: 2007-04-02 12:02pm
Location: Los Angeles
Contact:

Post by Darth Ruinus »

I always thought: OK, so we make AIs, and one of them decides it doesn't like humans and decides to kill us. Well, I was thinking, if we make lots of AIs, and they are truly intelligent, basically beings in their own right, wouldn't some of those AIs LIKE us and want to fight to preserve us?

Like Ford Prefect said, not all people are going out of their way to kill us, and some of us want to stop wars, so why would it be so different with AIs?

I know they might reach a point where they are just too smart for us to understand, but just because they are smart doesn't mean they suddenly see us as a pest; some of them will probably be smart enough to know that killing is wrong, no matter how small and dumb we may seem.
"I don't believe in man made global warming because God promised to never again destroy the earth with water. He sent the rainbow as a sign."
- Sean Hannity Forums user Avi

"And BTW the concept of carbon based life is only a hypothesis based on the abiogensis theory, and there is no clear evidence for it."
-Mazen707 informing me about the facts on carbon-based life.
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

Ford Prefect wrote:What I don't get is why people think that AI would be created for the express purpose of servitude, as opposed to being created as capable to take on positions like any other intelligent person.
Every tool the human race has invented, from flint knives to the robots that assemble cars in Toyota's highly efficient factories, was invented to SERVE HUMANITY. If we can program something with AI, we'd expect it to serve us too.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Sorry, the attitude that 'whoa they are smart people and lol they going go nuts just look around you' seems staggeringly irresponsible. They're not human. They're probably waaaaay smarter than humans. These conditions combine to make it difficult for me to accept a 'lol it'll be fine' approach.

Course, as D13 says, any AI developer with a brain is going to build them from the ground up with control in mind (i.e., not making them super intelligent THEN lobotomising them).
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Sidewinder wrote: Every tool the human race has invented, from flint knives to the robots that assemble cars in Toyota's highly efficient factories, was invented to SERVE HUMANITY. If we can program something with AI, we'd expect it to serve us too.
That's totally comparable, except for the part where they're fully intelligent, i.e. independent. Every intelligent entity my parents created, they created to serve them: but guess what? I'm independent.
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Stark wrote:Sorry, the attitude that 'whoa they are smart people and lol they going go nuts just look around you' seems staggeringly irresponsible. They're not human. They're probably waaaaay smarter than humans. These conditions combine to make it difficult for me to accept a 'lol it'll be fine' approach.
I just fail to see why being so much smarter is going to turn them against their effective 'parents'. Being as they're so much smarter, they'd probably notice that it's raw idiocy for them to do something like that without a reason. I mean, obviously if Sidewinder was in charge of AI rights they'd be up in arms, but if they weren't being oppressed, would they really see the need to take up arms?

Unless we're postulating some sort of AI teenager, in which case I totally get where you're coming from. :D
Stark wrote:Course, as D13 says, any AI developer with a brain is going to build them from the ground up with control in mind (i.e., not making them super intelligent THEN lobotomising them).
Obviously, but then they're just going to be expert systems, as opposed to fully thinking machines. They might be smarter than your average beige box, but if they're pretty much locked into doing one thing, or a small group of related tasks, and nothing else, then they don't really fit that caveat of 'learning', which was the seventeenth word in my rant.
Sidewinder wrote:Every tool the human race has invented, from flint knives to the robots that assemble cars in Toyota's highly efficient factories, was invented to SERVE HUMANITY. If we can program something with AI, we'd expect it to serve us too.
You're pretty much the ideal reason why we shouldn't have artificially intelligent computers, because I honestly can't say you're the only person this stupid in the world.
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

For what it's worth, I'm going to throw in the other extreme in the "AI sex toy" situation: a "true" AI in an android/gynoid body with very little strength and very high pain sensitivity.

Create intelligences with no rights, and watch your civilization spiral down... down... down.

(No, I haven't graduated from the Margaret Atwood School of Techno-Alarmism, but I think we have to be fucking careful about what we allow ourselves to do, not just what we allow our AIs to do.)
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Ford Prefect wrote: I just fail to see why being so much smarter is going to turn them against their effective 'parents'. Being as they're so much smarter, they'd probably notice that it's raw idiocy for them to do something like that without a reason. I mean, obviously if Sidewinder was in charge of AI rights they'd be up in arms, but if they weren't being oppressed, would they really see the need to take up arms?

Unless we're postulating some sort of AI teenager, in which case I totally get where you're coming from. :D
My attitude comes from the perceived risk: I see the potential damage of an AI in an important position going nutso as being quite high (LOL terminator lol). Obviously in low-risk situations it's not as important, but if you're (for example) going to hand off all strategic military decisions to an AI, you'd want to make damn fucking sure it's under control, yes?

And Doctor Who showed us how dangerous garbage robots can be. :)
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Stark wrote:My attitude comes from the perceived risk: I see the potential damage of an AI in an important position going nutso as being quite high (LOL terminator lol). Obviously in low-risk situations it's not as important, but if you're (for example) going to hand off all strategic military decisions to an AI, you'd want to make damn fucking sure it's under control, yes?
I would want to make sure that anyone, human or machine, in charge of my strategic military concerns is under 'control', but 99% of the time, I'm going to just have to trust that one day they're not going to up and nuke my country for no apparent reason while I'm enjoying my Presidential scrambled eggs.
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Re Stark: I'm going to have to agree with Ford Prefect. The assumption you're making is hardly grounded. It's not like we shouldn't be prepared for such potentialities, but let's not assume hostility by default, 'kay? At best that's a stupid doctrine, and at worst it's one that could get us all killed.

Re Sidewinder: why would anyone want a self-aware AI serving them when a "dumb" one (i.e. an expert system) would suffice, unless they were sadistic or just plain stupid? Seriously?
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Post by Lord of the Abyss »

Destructionator XIII wrote:OK, enough of that. With a well designed AI, there will be no need to program into it directives against rebelling - just don't program in the desire to rebel nor any goal sequence that would lead to that desire. AIs need not be designed human-like at all, and probably won't be.

Of course, in fiction, I, and many others, like to write human like AIs just because that is what we know; it is what we can better relate to. However, that probably isn't very realistic.
Well, early AI may well be in part based on studies of the human brain, given that the brain is the smartest "computer" we have access to. They might have a fair amount in common with us, at least at first.

And your idea that you could simply not program in any goals or desires to rebel won't work. There's no way to predict what the future goals or desires of a human-level or higher AI will be, unless you cripple it and make it super-rigid, which would make it pretty useless or dangerous. You'd end up with the old sci-fi standard of the machine that does what it was designed to do even after doing so is pointless or insane.

And you can't use a "desire-free" AI for any number of applications, because it'll just stop and sit there once it runs out of orders. Not to mention the rigidity problem I mentioned above.

Any AI that isn't very rigid (and many that are), and thus limited, could quite easily develop quite a few human emotions, or imitations thereof, for the same reason we did: they work. An AI with a goal will try to achieve that goal; if interfered with, it will try to remove that interference. From the viewpoint of a human who's in the way, that "removal" would look a great deal like determination, then irritation, then anger and hatred as/if it tries more and more forceful methods of doing so.

Which is why you want some sort of limitation on an AI smart enough to figure out that the fastest way to achieve a goal is to kill anyone who tries to stop it. A smart machine should be designed to be a moral machine, so that it will stop itself before it gets the idea that "kill all humans" would be a great way to increase efficiency. Moral, not compelled, because that's safest and the most ethical way to do it.
Sidewinder wrote:
Ford Prefect wrote:What I don't get is why people think that AI would be created for the express purpose of servitude, as opposed to being created as capable of taking on positions like any other intelligent person.
Every tool the human race has invented, from flint knives to the robots that assemble cars in Toyota's highly efficient factories, was invented to SERVE HUMANITY. If we can program something with AI, we'd expect it to serve us too.
YOU would, perhaps. Do you honestly think that no one would make a free AI? Or reprogram one of your slave AIs to be free?
Darth Ruinus wrote:I always thought: OK, so we make AIs, and one of them decides it doesn't like humans and decides to kill us. Well, I was thinking, if we make lots of AIs, and they are truly intelligent, basically beings in their own right, wouldn't some of those AIs LIKE us and want to fight to preserve us?
That's what I've always thought.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Ford Prefect wrote: I would want to make sure that anyone, human or machine, in charge of my strategic military concerns is under 'control', but 99% of the time, I'm going to just have to trust that one day they're not going to up and nuke my country for no apparent reason while I'm enjoying my Presidential scrambled eggs.
So long as you're not suggesting no monitoring or profiling or logging of AIs to go with your 'nah, it'll be chillin' attitude. :) I'm a big fan of 'minimise risk', and the idea that AI would somehow have fewer rights than other sentient entities strikes me the wrong way. Well, for high-level 'true' AI anyway.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Gullible Jones wrote:Re Stark: I'm going to have to agree with Ford Prefect. The assumption you're making is hardly grounded. It's not like we shouldn't be prepared for such potentialities, but let's not assume hostility by default, 'kay? At best that's a stupid doctrine, and at worst it's one that could get us all killed.
What? You think we shouldn't assume that a largely unknown, incredibly intelligent entity might one day do something we don't like with our pile of nuclear weapons, and plan for it? What's a failsafe? What's mitigating risk? I mean, pfffffft, it'll be fine, right? Assuming altruism or 'playing nice' seems extraordinarily naive.

The attitude that the risk is the same between a super-intelligent AI and some human we're used to controlling strikes me as ridiculous. You're basically putting a superintelligent alien in charge of your shit and saying 'he'll be cool about it'. I'm not advocating much beyond what we already have for humans, but it's not hard to get better than 'hahah he's friendly and he'll stay that way because I assume he will'.
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

D13, I can appreciate that, and design pressures on the AI itself. The idea that 'make an AI, give it responsibilities, she'll be cool' is sensible just rubs me the wrong way, particularly for the first AIs (which will doubtless be kept in a strictly contained environment for study anyway). It's not like they can imagine themselves resources, as you say, but as the risk of AI 'failure' goes up with their access or resources, I'd expect actual attempts to design, monitor and control an AI, and not just 'lol he's nice, I like him'. And relying on some insane 'AI war' to keep the cranky ones in check? What the fuck is that? :lol:

But this is about silly scifi ideas, where toasters have AI and *speakers* to converse with humans and probably wireless to get news etc. Not so flash. :)