5 Awesome Sci-Fi Inventions (That Would Actually Suck)

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Sidewinder wrote: To paraphrase Ford Prefect, "Blah blah blah AI is sentient, and therefore have equal rights to humans and other sentient beings, so denying them sapient characteristics is no better than slavery, blah blah blah." (A stupid argument, as Starglider pointed out.)
Wow, I'm almost impressed. I never mentioned anything about it being wrong to create distinctly non-sapient AI. I expressed the opinion that enslaving a sapient AI would be wrong, though Starglider has comprehensively pointed out that this is irrational. Of course, it's not like you had any more understanding of what a real AI would be like than I did.

So, as usual, you're just pulling shit out of your ass again.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
Valk
Youngling
Posts: 92
Joined: 2007-10-12 04:06pm
Location: the Netherlands

Post by Valk »

This may boil down to the one great but stupid question: "What's the purpose of life?" - with the simple answer: "There is no purpose."
Sure, you can invent your own purpose, but you know it's not a universal one (that's why so many people ask that stupid question).
If there is no purpose, would an entity of pure logic do anything at all? We act because of our incentive to replicate, an incentive an AI will not have, and I very much doubt it could evolve one, because computer programs cannot evolve. Altering, removing, or moving a bit or byte will usually crash the program or garble its output. We can evolve because every nucleotide sequence yields a result (a large protein), and its form determines its function.
And if robots were to make new ones (initially as per our instructions), they'd probably program them instead of using a copying process that introduces errors. At the very least they'd use a perfect copying process, so they would not introduce anything they did not deem useful.
A nanite-based AI could evolve, I think, because such an AI must have huge tolerances and failsafes for broken, displaced, or malfunctioning nanites. But why would you build that when a microcontroller is superior in every aspect that we humans care about for this purpose?
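Not from the thread, but as a toy illustration of the "flip a bit and the program breaks" claim above: a minimal Python sketch that flips one random bit in a compiled function's bytecode and counts how many mutants still give the right answer. Everything in it (the add example, the trial count) is an arbitrary choice for demonstration, not anyone's actual experiment.

# Toy illustration: mutate a compiled Python function one bit at a time
# and check whether the mutant still behaves correctly.
import random
import types

def add(a, b):
    return a + b

def mutate(code_obj):
    # Return a copy of the code object with one random bit flipped.
    raw = bytearray(code_obj.co_code)
    bit = random.randrange(len(raw) * 8)
    raw[bit // 8] ^= 1 << (bit % 8)
    return code_obj.replace(co_code=bytes(raw))

survived = 0
trials = 200
for _ in range(trials):
    try:
        mutant = types.FunctionType(mutate(add.__code__), {})
        if mutant(2, 3) == 5:   # unchanged behaviour counts as "surviving"
            survived += 1
    except Exception:           # most mutants raise errors or give wrong output
        pass

# Note: CPython does not validate bytecode, so a sufficiently unlucky flip can
# crash the interpreter outright rather than raise a catchable exception.
print(f"{survived}/{trials} mutants still computed 2 + 3 correctly")

In practice nearly all mutants fail, which is the point being made: random bit-level changes to a program almost never yield a useful variant, unlike point mutations in a genome, which usually still produce a folded protein.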


Further, robots are in some relevant ways inferior to us. They build themselves from refined materials, and in addition to acquiring those materials they need fuel both for processing them and for powering themselves. We, on the other hand, eat stuff that is simultaneously building material and fuel. There is also far more carbon available than the materials typically used in robots. Robots could use carbon as nature presents it, but it would weaken them.
White Haven
Sith Acolyte
Posts: 6360
Joined: 2004-05-17 03:14pm
Location: The North Remembers, When It Can Be Bothered

Post by White Haven »

Something else that came to mind as food for thought, if we assume sapient AI: which entity is more likely to want to rebel, one that is able to choose its actions, or one that has been forced onto a predestined path from its creation? While it may be easier to control the latter, the former would lack at least one rather pressing reason to want to strike back.

And yes, I know, that doesn't apply to subsapient AI and so forth; this is just a thought that occurred to me as I was driving to work and thinking about how mind-bogglingly obnoxious the task of maintaining a real-time traffic control network would be for an AI. And lo and behold, I see this thread.
Chronological Incontinence: Time warps around the poster. The thread topic winks out of existence and reappears in 1d10 posts.

Out of Context Theatre, this week starring Darth Nostril.
-'If you really want to fuck with these idiots tell them that there is a vaccine for chemtrails.'

Fiction!: The Final War (Bolo/Lovecraft) (Ch 7 9/15/11), Living (D&D, Complete)