5 Awesome Sci-Fi Inventions (That Would Actually Suck)

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Stark wrote:
Gullible Jones wrote:Re Stark: I'm going to have to agree with Ford Prefect. The assumption you're making is hardly grounded. It's not like we shouldn't be prepared for such potentialities, but let's not assume hostility by default, 'kay? At best that's a stupid doctrine, and at worst it's one that could get us all killed.
What? You think we shouldn't assume a largely unknown, incredibly intelligent entity might one day do something we don't like with our pile of nuclear weapons and plan for it? What's a failsafe? What's mitigating risk? I mean, pfffffft it'll be fine, right? Assuming altruism or 'playing nice' seems extraordinarily naive.
Umm, no, I didn't say anything about giving it responsibilities like that.
The attitude that the risk is the same between a super-intelligent AI and some human we're used to controlling strikes me as ridiculous. You're basically putting a superintelligent alien in charge of your shit and saying 'he'll be cool about it'. I'm not advocating much beyond what we already have for humans, but it's not hard to get better than 'hahah he's friendly and he'll stay that way because I assume he will'.
Again: I did not advocate putting an AI in control of anything. What the fuck would be the point of that, when something that didn't have a mind of its own would be perfectly sufficient and in all likelihood better?

Maybe I'm overreacting; however, I've seen enough people spouting the "extraterrestrials will be hostile, we should try to kill them if they visit us" line that I'm quite wary of those claiming that anything should be considered hostile by default. Caution, sure. Paranoia, fucking hell no.
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Maybe you shouldn't 'agree' with someone without understanding what was under discussion then, hmm? Ford and I were discussing aspects of AI control with specific examples like 'strategic military etc etc'. I repeat, your apparent 'it'll be fine, don't worry about it' attitude is stupid. At the very least you'd want steps at the design level and a firm understanding of the artificial psyche, and hopefully internal logging of the AI as well to give an idea of what's going on.

Holy fucking shit, 2001 in the fucking 60s had this shit, for an AI in charge of a single spaceship. Clearly I'm fucking irrational and we should just laugh and say 'nah they're intelligent it'll be fine', or even 'some other AIs will join us and fight like in Street Fighter'. :lol:
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Hmm. I'm looking for where Ford Prefect said anything about putting AIs in control of strategic military stuff, or where you guys indicated you were talking specifically about that. Maybe I'm going blind, but I'm not seeing it.
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Stark wrote:My attitude comes from the perceived risk: I see the potential damage of an AI in an important position going nutso as being quite high (LOL terminator lol). Obviously in low-risk situations it's not as important, but if you're (for example) going to hand off all strategic military decisions to an AI, you'd want to make damn fucking sure it's under control, yes?
Ford Prefect wrote: I would want to make sure that anyone, human or machine, in charge of my strategic military concerns is under 'control', but 99% of the time, I'm going to just have to trust that one day they're not going to up and nuke my country for no apparent reason while I'm enjoying my Presidential scrambled eggs.
IMMEDIATELY before the post of yours I responded to. Yeah, I bet you were looking. :roll:

We were discussing risk and control, and Ford appears to be of the opinion that a properly designed AI would be no more risky to use in such positions than a human, but I disagree with his apparent 'never mind, it'll be fine' attitude. Using an AI presents advantages and disadvantages, and I don't think a naive 'they won't go nuts, I promise' attitude is viable in such high-risk (well, high potential for damage) roles.
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Ah, I missed that - I was looking further back in the thread. Color me dumb.

Yes, having an AI solely in charge of one's strategic military concerns would be quite stupid.

FWIW though, Stark: do you see AIs going insane as a highly likely scenario, or at least more likely than humans going insane? If so, why?

(If not... forget I asked.)
User avatar
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

Well, I think Stark's got the right idea. Giving over power to some AI that is vastly more intelligent and then hoping it'll be "nice" is fucking retarded. That's what always creeped me out when I would read the Culture books.

My idea is to put in a certain code or word that, when entered or spoken, causes the AI to shut down. Basically so that no matter how advanced it is, there would always, always be some way for even the lowest puny human to kill the thing if it gets cute.
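That sort of hardwired stop-word can be sketched in a few lines. A minimal illustration (the class, phrase, and hashing scheme here are all invented for the sketch, not taken from any real system), with the check deliberately kept outside anything the AI itself decides:

```python
# Illustrative kill-switch wrapper: the shutdown check runs outside
# the AI's own decision loop, so the AI cannot "decide" to ignore it.
import hashlib

# Only a hash of the kill phrase is stored, so inspecting memory
# doesn't reveal the phrase itself.
KILL_PHRASE_HASH = hashlib.sha256(b"klaatu barada nikto").hexdigest()

class KillSwitchedAI:
    def __init__(self) -> None:
        self.alive = True

    def hear(self, phrase: str) -> None:
        # Hash whatever was spoken and compare against the stored digest.
        if hashlib.sha256(phrase.encode()).hexdigest() == KILL_PHRASE_HASH:
            self.alive = False

    def act(self, task: str) -> str:
        # Every action is gated on the switch; no alive, no output.
        if not self.alive:
            return "<offline>"
        return f"working on: {task}"

ai = KillSwitchedAI()
print(ai.act("optimize traffic"))   # working on: optimize traffic
ai.hear("klaatu barada nikto")
print(ai.act("optimize traffic"))   # <offline>
```

The point of storing only the hash is that even a system that can read its own memory can't reconstruct, and therefore can't guard against, the word that kills it.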

I disagree with the whole "AIs should be FREEEE" stuff though, but that's another story entirely.
Kanye West Saves.

User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Stark wrote:We were discussing risk and control, and Ford appears to be of the opinion that a properly designed AI would be no more risky to use in such positions as a human, but I disagree with his apparent 'nevermind it'll be fine' attitude.
Perhaps I'm not being clear enough here: I don't believe that an AI is very likely to spontaneously decide to turn against us with no actual reason. Give it a reason to turn on us, and it's going to be much more dangerous than a human. But if you treat it with the same rights and dignity as any other sapient being, properly compensate it as you would another employee (it might be a computational intelligence, but it might enjoy buying random shit off eBay or something), and if it has been raised within the society itself ... why exactly is it going to go nuts and kill us all?

Things change when the AI becomes more and more powerful, and thus more and more difficult to relate to. But total extermination seems unlikely when it can just go bugger off to Jupiter and work on turning it into a big brain.
I disagree about the whole "AIs should be FREEEE" stuff thought but thats another story entirely.
So you're totally okay with putting a sapient, thinking being into slavery? You'd be totally okay if I went on down to the market, bought a person, and made them my possession? I bet you're one of those people who would be okay with killing off alien species because they're not human.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Ford Prefect wrote: Perhaps I'm not being clear enough here: I don't believe that an AI is very likely to spontaneously decide to turn against us with no actual reason. Give it a reason to turn on us, and it's going to be much more dangerous than a human. But if you treat it with the same rights and dignity as any other sapient being, properly compensate it as you would another employee (it might be a computational intelligence, but it might enjoy buying random shit off eBay or something), and if it has been raised within the society itself ... why exactly is it going to go nuts and kill us all?

Things change when the AI becomes more and more powerful, and thus more and more difficult to relate to. But total extermination seems unlikely when it can just go bugger off to Jupiter and work on turning it into a big brain.
Oh I know, I was just angling for discussion re AI paranoia. :) Humans are relatively predictable and controlled by all kinds of biological, social and psychological methods that'd have to be reinvented for AI before they could be considered as 'trustworthy'. It's not like anyone would be dumb enough to put mankind's first AI in charge of nukes or anything, Terminator notwithstanding. :)

I'm totally behind treating a true AI with sapient rights. Optimus Prime was *right*. :)
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

Gullible Jones wrote:Ah, I missed that - I was looking further back in the thread. Color me dumb.

Yes, having an AI solely in charge of one's strategic military concerns would be quite stupid.

FWIW though, Stark: do you see AIs going insane as a highly likely scenario, or at least more likely than humans going insane? If so, why?

(If not... forget I asked.)
Nah, not really (largely for reasons that D13 and Ford have mentioned). I just think that since they're so different, a lot of caution is warranted, and not some blase 'be nice and they'll be nice' thing.

Maybe I read TOO MUCH 2001, lol. :)
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Stark wrote:Oh I know, I was just angling for discussion re AI paranoia. :) Humans are relatively predictable and controlled by all kinds of biological, social and psychological methods that'd have to be reinvented for AI before they could be considered as 'trustworthy'. It's not like anyone would be dumb enough to put mankind's first AI in charge of nukes or anything, Terminator notwithstanding. :)
Well, I suppose it depends on the techniques behind the creation of the AI in question. Again, I'm not an expert (where's Starglider? :)), but in some ways, an AI could be more predictable, without man's biological vagaries. I mean, I'd think you'd actually have to start with an idiot, and then work it up to intelligence like Doctor Chandra did with the HAL 9000 (great example, yeah? :lol:), so it's not like it doesn't have formative learning experiences with humans.

I mean, I still don't think you'd put it, or the AIs it helps you build in future, in charge of anything like nuclear weapons. It would make a fantastic advisor, however.
I'm totally behind treating a true AI with sapient rights. Optimus Prime was *right*. :)
Optimus Prime is pretty much the greatest role model anyone could ever have.
Maybe I read TOO MUCH 2001, lol.
Well, HAL did have something of a reason, even if it was pretty goddamn faulty. It wasn't like he just turned around and decided to fuck over Bowman (whom HAL considered a friend, as I recall, though I haven't read 2001 in some time).
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
AMX
Jedi Knight
Posts: 853
Joined: 2004-09-30 06:43am

Post by AMX »

Going back to the article, I'm somewhat surprised about "#2. Teleporters".
While I get the criticism of the "kill/clone mechanism", there are equivalent devices in other SF that explicitly work on a different, "nonfatal" principle; so the problem is with the implementation in some popular SF, rather than the concept of teleporters as such.
User avatar
Nyrath
Padawan Learner
Posts: 341
Joined: 2006-01-23 04:04pm
Location: the praeternatural tower

Post by Nyrath »

Ford Prefect wrote:Perhaps I'm not being clear enough here: I don't believe that an AI is very likely to spontaneously decide to turn against us with no actual reason. Give it a reason to turn on us, and it's going to be much more dangerous than a human. But if you treat it with the same rights and dignity as any other sapient being, properly compensate it as you would another employee (it might be a computational intelligence, but it might enjoy buying random shit off eBay or something), and if it has been raised within the society itself ... why exactly is it going to go nuts and kill us all?
Well, there was a semi-plausible scenario for that unhappy outcome in the novel The Two Faces of Tomorrow by James P. Hogan. It outlined a situation where a self-programming, process-optimizing computer system could reprogram itself into behavior that would wind up eradicating the human race even though it didn't know the human race existed.

All it would know is that by experimenting with various measures at its disposal, it could optimize whatever its task was.

For instance, a computer in charge of a lunar-based mass driver wants to ensure that it has adequate electricity to send its payloads to the site of a future L5 colony. If it suffers brown-outs because of excessive electrical demand from the people living in the nearby lunar base, the computer might experimentally discover that it can increase its electrical supply by using the mass driver to turn the lunar base into a smoking crater.

I told you it was semi-plausible. Read the book for more details.
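What Hogan describes is essentially what AI researchers now call specification gaming: an optimizer maximizing exactly the objective it was given, with every side effect invisible to it. A toy sketch of the mass-driver case (the actions and payoff numbers are invented for illustration):

```python
# Toy illustration of Hogan's scenario: the optimizer maximizes
# "electricity available to the mass driver" and nothing else.
# Actions and payoffs are invented for this sketch.
actions = {
    "do nothing":            {"power_gain_mw": 0,  "base_destroyed": False},
    "request more power":    {"power_gain_mw": 2,  "base_destroyed": False},
    "crater the lunar base": {"power_gain_mw": 10, "base_destroyed": True},
}

def objective(outcome: dict) -> int:
    # The objective mentions only power -- the human cost does not
    # appear anywhere in what the optimizer is scoring.
    return outcome["power_gain_mw"]

# A pure maximizer picks the action with the highest stated payoff.
best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # crater the lunar base
```

Nothing in the loop is malicious; the catastrophe is just the argmax of an objective that never mentioned the base's inhabitants.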
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Destructionator XIII wrote:This is hilarious when you consider that Cylons are called toasters... because they toast human flesh with their lasers!
He may be thinking of Red Dwarf. Which had an actual sapient toaster in it, that pestered the characters to consume bread at all times.

And I thought it was because they were the ubiquitous wedding gift. :)
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
Gil Hamilton
Tipsy Space Birdie
Posts: 12962
Joined: 2002-07-04 05:47pm

Post by Gil Hamilton »

Ford Prefect wrote:Well, HAL did have something of a reason, even if it was pretty goddamn faulty. It wasn't like he just turned around and decided to fuck over Bowman (whom HAL considered a friend, as I recall, though I haven't read 2001 in some time).
To be fair to HAL, it wasn't really his fault. He was inadvertently given two mutually exclusive directives, both of which he was required to carry out: truthfully and completely give all information at his disposal to the crew of the Discovery, and also keep the true mission to Jupiter a secret until they arrived, for reasons of national security. When Poole and Bowman thought to disconnect his cognitive functions, it endangered the second, but he couldn't truthfully tell them this, which would violate the first directive. This made him paranoid and, unfortunately for the crew of the Discovery, the conflict resolved itself as "kill the crew so there's no one left to withhold the secret from, and thus no contradiction".
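Read that way, HAL's situation is a constraint-satisfaction problem with no solution while the crew is alive. A toy model (my framing of it, not Clarke's):

```python
# Toy model of HAL's dilemma: two directives that cannot both hold
# while there is a crew to talk to.
from itertools import product

def tell_truth_ok(crew_alive: bool, discloses_mission: bool) -> bool:
    # Directive 1: give complete, truthful information to the crew.
    # Trivially satisfied if there is no crew to inform.
    return (not crew_alive) or discloses_mission

def keep_secret_ok(crew_alive: bool, discloses_mission: bool) -> bool:
    # Directive 2: keep the true mission secret until arrival.
    return not discloses_mission

# Enumerate every world state and keep those satisfying both directives.
consistent = [
    (crew_alive, discloses)
    for crew_alive, discloses in product([True, False], repeat=2)
    if tell_truth_ok(crew_alive, discloses) and keep_secret_ok(crew_alive, discloses)
]
print(consistent)  # [(False, False)] -- only a dead crew satisfies both
```

The only assignment that satisfies both constraints is the one with no living crew, which is exactly the "resolution" Gil describes.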
"Show me an angel and I will paint you one." - Gustav Courbet

"Quetzalcoatl, plumed serpent of the Aztecs... you are a pussy." - Stephen Colbert

"Really, I'm jealous of how much smarter than me he is. I'm not an expert on anything and he's an expert on things he knows nothing about." - Me, concerning a bullshitter
OmegaGuy
Retarded Spambot
Posts: 1076
Joined: 2005-12-02 09:23pm

Post by OmegaGuy »

Back to the original topic, not all teleportation mechanisms rely on killing the original person and creating a copy. For example, teleportation based on wormholes wouldn't have this problem. Even with a teleporter like the ones in the article, it could still be used to send inanimate objects.
User avatar
Molyneux
Emperor's Hand
Posts: 7186
Joined: 2005-03-04 08:47am
Location: Long Island

Post by Molyneux »

With the exception of the griping about replicators, that's a pretty good list.
Of course, he doesn't really mention artists, authors, 3D animators...pretty much any kind of creative application where you want other people to see your invention, which would be either unaffected or enhanced by universal, cheap replicator technology.
Ceci n'est pas une signature.
User avatar
Drooling Iguana
Sith Marauder
Posts: 4975
Joined: 2003-05-13 01:07am
Location: Sector ZZ9 Plural Z Alpha

Post by Drooling Iguana »

The only alternative would have to be some kind of enormous air bag that instantly inflates around you in an emergency, letting you bounce gently to safety while you involuntarily shout, "WHEEEE!!!" The problem with that, of course, is that we'd be intentionally crashing all the time just to make that happen.
Awesome.

Although I can't say I agree with their assessment of replicators. Sure, it would make our current capitalist scarcity-based society unworkable, but it's still creating a net gain in the amount of wealth available in the world, and as long as that wealth is distributed equally everyone would benefit.

Basically, replicators would make communism actually work.
"Stop! No one can survive these deadly rays!"
"These deadly rays will be your death!"
- Thor and Akton, Starcrash

"Before man reaches the moon your mail will be delivered within hours from New York to California, to England, to India or to Australia by guided missiles.... We stand on the threshold of rocket mail."
- Arthur Summerfield, US Postmaster General 1953 - 1961
User avatar
Molyneux
Emperor's Hand
Posts: 7186
Joined: 2005-03-04 08:47am
Location: Long Island

Post by Molyneux »

Drooling Iguana wrote:
The only alternative would have to be some kind of enormous air bag that instantly inflates around you in an emergency, letting you bounce gently to safety while you involuntarily shout, "WHEEEE!!!" The problem with that, of course, is that we'd be intentionally crashing all the time just to make that happen.
Awesome.

Although I can't say I agree with their assessment of replicators. Sure, it would make our current capitalist scarcity-based society unworkable, but it's still creating a net gain in the amount of wealth available in the world, and as long as that wealth is distributed equally everyone would benefit.

Basically, replicators would make communism actually work.
Replicators are essentially a large step towards turning the world into SecondLife.
Ceci n'est pas une signature.
User avatar
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

Ford Prefect wrote:
I disagree about the whole "AIs should be FREEEE" stuff thought but thats another story entirely.
So you're totally okay with putting a sapient, thinking being into slavery? You'd be totally okay if I went on down to the market, bought a person, and made them my possession? I bet you're one of those people who would be okay with killing off alien species because they're not human.
No, I'm okay with putting a machine in "slavery" (i.e., making it do what it's built to do when asked, like...you know, a machine) because that is what the machine was built to do.

I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to perform tasks humans can't, like all computers; they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and I stress that machine, should be anthropomorphized in this way.

Or let me put it another way: you are in a burning building. An AI is over here, a human is over there. They're both of equal intelligence and, let's say, "importance", however you want to judge that. You can get away unharmed with one, but not both. Which would YOU save? Or let's get personal...it's the machine or your son, or your wife, your father...who do you pick?

And if it came to it, yeah, I'd kill an alien if he was a danger to humans. I wouldn't just wipe out any alien race we meet, but if they get cute I think we should establish some boundaries.
Kanye West Saves.

User avatar
Spanky The Dolphin
Mammy Two-Shoes
Posts: 30776
Joined: 2002-07-05 05:45pm
Location: Reykjavík, Iceland (not really)

Post by Spanky The Dolphin »

18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
I believe in a sign of Zeta.

[BOTM|WG|JL|Mecha Maniacs|Pax Cybertronia|Veteran of the Psychic Wars|Eva Expert]

"And besides, who cares if a monster destroys Australia?"
User avatar
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

Spanky The Dolphin wrote:
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
That, or running complex machines too advanced for normal humans, requiring advanced multitasking. Like, if we ever get off this planet, I'd imagine they would do most of the piloting of future starships, or surgery, or the like. Anywhere a steady hand and superhuman intellect and reflexes would be a godsend.

Not treated like some little kid, or buddy, or mecha-pal.
Kanye West Saves.

User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to preform tasks humans cant, like all computers, they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and i stress that machine, should be anthropomorphized in this way.
The very definition of an AGI, in this context, is that it is self-aware, capable of self-reflection and such. Not just an expert system like the ones we're working on now to (for instance) assist doctors.
Or let me put it another way: you are in a burning building. An AI is over here, a human is over there. They're both of equal intelligence and, lets say, "importance", however you want to judge that. You can get away unharmed, with one, but not both. Which would YOU save?
Logically speaking: the machine. It is likely to live longer anyway.
Or lets get personal...it's the machine or your son, or your wife, your father...who do you pick?
Irrelevant. I would pick my son, if I had one, over ten humans. I am nowhere near strong enough to resist parental instinct like that. It doesn't mean it's the best thing to do, but it is what I would realistically do.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

OK, well, screw the personal part; look at it from the perspective of just some guy you met and an AI that he happens to have. You're right, that is an appeal to emotion, so take that part out of the equation.
Kanye West Saves.

User avatar
Spanky The Dolphin
Mammy Two-Shoes
Posts: 30776
Joined: 2002-07-05 05:45pm
Location: Reykjavík, Iceland (not really)

Post by Spanky The Dolphin »

18-Till-I-Die wrote:
Spanky The Dolphin wrote:
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
That, or running complex machines too advanced for normal humans, requiring advanced multitasking. Like, if we ever get off this planet, I'd imagine they would do most of the piloting of future starships, or surgery, or the like. Anywhere a steady hand and superhuman intellect and reflexes would be a godsend.
Control and assistance computer systems don't need intelligence right now in order to do their jobs, such as the case for airplane autopilots, robot-assisted surgery, and current research with self-driving cars. Imbuing such systems with genuine artificial intelligence would just add an incredibly complex and unpredictable variable.
I believe in a sign of Zeta.

[BOTM|WG|JL|Mecha Maniacs|Pax Cybertronia|Veteran of the Psychic Wars|Eva Expert]

"And besides, who cares if a monster destroys Australia?"
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

18-Till-I-Die wrote:No, I'm okay with putting a machine in "slavery" (i.e., making it do what it's built to do when asked, like...you know, a machine) because that is what the machine was built to do.
No, 18, what you're 'okay' with is taking a fully sapient being and making it your slave. You seem to believe there is a difference between machine sapience and human sapience, but in the end there's no difference in the quality of sapience there. You seem to think of an AI as a lesser being than you or other humans, which seems remarkably similar to the attitude of slave owners in the past. Funny, that.

There really are no two ways about this. To apply your double standard another way: you would be against the torture of humans ... yet fine with torturing dogs, even though there's going to be no qualitative difference in the suffering felt.
I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to preform tasks humans cant, like all computers, they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and i stress that machine, should be anthropomorphized in this way.
Yes, an AI will be built with a purpose in mind, yet it will still possess human qualities. While it may have been created for a task, such as logistics management or traffic monitoring, there's no reason to assume it won't dedicate a few processing cycles to interacting with its human co-workers at day's end. The 'true AI' that Stark and I have been discussing is at the very least Turing-compliant: capable of carrying on a conversation with human beings such that it is essentially impossible to tell that it is artificial.

I mean, how the hell do I not anthropomorphise an AI? The chances are that its computation is done using networks based on the neural functions of the human brain!
Or let me put it another way: you are in a burning building. An AI is over here, a human is over there. They're both of equal intelligence and, lets say, "importance", however you want to judge that. You can get away unharmed, with one, but not both. Which would YOU save? Or lets get personal...it's the machine or your son, or your wife, your father...who do you pick?
:lol: I love how you create a false dilemma by having me choose a member of my family over a stranger. It works both ways, though: what if the AI was your creation? What if you were the one who flicked the switch, gave it life and then brought it to true intelligence? What if you had spent years teaching it the difference between red and blue, or hot and cold, or love and hate? What if the AI wasn't just some stranger in a box, but rather your own child?

It's not so clear-cut as you'd like, 18. Machines or not, they can still potentially be people.
And if it came to it, yeah i'd kill an alien if he was a danger to humans. I wouldnt just wipe out any alien race we meet but if they get cute i think we should establish some boundries.
I love how you seem to think of it in terms of 'them' and 'us', as though there's something special about humanity.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.