5 Awesome Sci-Fi Inventions (That Would Actually Suck)

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

Spanky The Dolphin
Mammy Two-Shoes
Posts: 30776
Joined: 2002-07-05 05:45pm
Location: Reykjavík, Iceland (not really)

Post by Spanky The Dolphin »

18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
I believe in a sign of Zeta.

[BOTM|WG|JL|Mecha Maniacs|Pax Cybertronia|Veteran of the Psychic Wars|Eva Expert]

"And besides, who cares if a monster destroys Australia?"
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

Spanky The Dolphin wrote:
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
That, or running complex machines too advanced for normal humans, requiring advanced multitasking. Like if we ever get off this planet, I'd imagine they would do most of the piloting of future starships, or surgery or the like, where a steady hand and superhuman intellect and reflexes would be a godsend.

Not treated like some little kid, or buddy, or mecha-pal.
Kanye West Saves.

NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to perform tasks humans can't; like all computers, they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and I stress that machine, should be anthropomorphized in this way.
The very definition of an AGI, in this context, is self aware. Capable of self reflection and such. Not just an expert system like the type we're working on now to (for instance) assist doctors.
Or let me put it another way: you are in a burning building. An AI is over here, a human is over there. They're both of equal intelligence and, let's say, "importance", however you want to judge that. You can get away unharmed with one, but not both. Which would YOU save?
Logically speaking: the machine. It is likely to live longer anyway.
Or let's get personal... it's the machine or your son, or your wife, your father... who do you pick?
Irrelevant. I would pick my son, if I had one, over ten humans. I am nowhere near strong enough to resist parental instinct like that. It doesn't mean it's the best thing to do, but it is what I would realistically do.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

Ok well, screw the personal part, look at it from the perspective of just some guy you met and an AI that he happens to have. You're right that is an appeal to emotion, so take that part out of the equation.
Kanye West Saves.

Spanky The Dolphin
Mammy Two-Shoes
Posts: 30776
Joined: 2002-07-05 05:45pm
Location: Reykjavík, Iceland (not really)

Post by Spanky The Dolphin »

18-Till-I-Die wrote:
Spanky The Dolphin wrote:
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
That, or running complex machines too advanced for normal humans, requiring advanced multitasking. Like if we ever get off this planet, I'd imagine they would do most of the piloting of future starships, or surgery or the like, where a steady hand and superhuman intellect and reflexes would be a godsend.
Control and assistance computer systems don't need intelligence right now in order to do their jobs, such as the case for airplane autopilots, robot-assisted surgery, and current research with self-driving cars. Imbuing such systems with genuine artificial intelligence would just add an incredibly complex and unpredictable variable.
I believe in a sign of Zeta.

[BOTM|WG|JL|Mecha Maniacs|Pax Cybertronia|Veteran of the Psychic Wars|Eva Expert]

"And besides, who cares if a monster destroys Australia?"
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

18-Till-I-Die wrote:No, I'm OK with putting a machine in "slavery" (i.e., making it do as it's built to do when asked, like... you know, a machine) because that is what the machine was built to do.
No, 18, what you're 'ok' with is taking a fully sapient being and making it your slave. You seem to believe there is a difference between the machine sapience and human sapience, but in the end, there's no difference between the quality of sapience there. You seem to think of an AI as being a lesser being than you or other humans, which seems remarkably similar to the attitude of slave owners in the past. Funny that.

There really are no two ways about this. To apply your double standard another way, you would be against the torture of humans ... yet fine with torturing dogs, even though there's going to be no qualitative difference in the magnitude of suffering felt emotionally.
I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to perform tasks humans can't; like all computers, they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and I stress that machine, should be anthropomorphized in this way.
Yes, an AI will be built with a purpose in mind, yet it will still possess human qualities. While it may have been created for a task, such as logistics management or traffic monitoring, there's no reason to assume it won't dedicate a few processing cycles to interacting with its human co-workers at day's end. The 'true AI' that Stark and I have been discussing is at the very least Turing compliant, capable of carrying out a conversation with human beings while being essentially impossible to identify as artificial.

I mean, how the hell do I not anthropomorphise an AI? The chances are that its computation is done using networks based off the neural functions of the human brain!
Or let me put it another way: you are in a burning building. An AI is over here, a human is over there. They're both of equal intelligence and, let's say, "importance", however you want to judge that. You can get away unharmed with one, but not both. Which would YOU save? Or let's get personal... it's the machine or your son, or your wife, your father... who do you pick?
:lol: I love how you create the false dilemma with me having to choose a member of my family over a stranger. It works both ways though: what if the AI was your creation? What if you were the one who flicked the switch, gave it life and then brought it to true intelligence? What if you had spent years teaching it the difference between red and blue, or hot and cold, or love and hate? What if the AI wasn't just some stranger in a box, but rather your own child?

It's not so clean-cut as you'd like, 18. Machines or not, they can still potentially be people.
And if it came to it, yeah I'd kill an alien if he was a danger to humans. I wouldn't just wipe out any alien race we meet, but if they get cute I think we should establish some boundaries.
I love how you seem to think of it terms of 'them' and 'us', as though there's something special about humanity.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

I apologize, but I have to take care of something here, and I won't be able to respond till later or maybe tomorrow, so I'll have to cut out here for right now.

I will just sum up my position while I have time, but I'm in a rush here:

I'm a human supremacist; I believe that humans are, yes, more important than a machine or an alien. I would see an alien as being more important than an AI, but I wouldn't hesitate to kill either if it saved a human from death. I believe that the survival of humanity is the highest moral or ethical priority we have, and perhaps that makes me a "speciesist", but I don't feel particularly bad about that. I don't see it as being even remotely the same as inter-human race relations or comparable to that either.

Now, I'll be more precise and respond to Ford Prefect directly when I have some time later or tomorrow.
Kanye West Saves.

Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Post by Lord of the Abyss »

OmegaGuy wrote:Back to the original topic, not all teleportation mechanisms rely on killing the original person and creating a copy. For example, teleportation based on wormholes wouldn't have this problem. Even with a teleporter like the ones in the article, it could still be used to send inanimate objects.
And not everyone agrees that that sort of transporter is "killing you and sending a copy"; there's the argument that all you really are is data, or pattern, or information in motion; however you want to put it. So destroying the original physical body doesn't matter; the "copy" is you, as it is the same pattern. Probably some people would use it, some wouldn't.
Spanky The Dolphin wrote:
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do?
True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
No, more likely they'll end up running the world. They'll at the least think faster than humans, and probably better in other ways. The people who listen to the AIs they make will do better than those that don't. Over time, the AIs will end up running the world more and more, because the side in any conflict or competition that listens to them more will prevail, even if they are never formally handed power. "The side that gives the machines more freedom always wins", to quote a Norman Spinrad ( IIRC ) story.

Not that I believe that they'll be kept in labs forever. Forever is a very long time. Sooner or later someone will let them out in the outside world, for one reason or another.

As for a reason to use AI instead of humans: how about interstellar exploration? An immortal AI that doesn't need massive life support has obvious advantages on a centuries- or millennia-long voyage. And when it gets there it'll have the brains to do the things we'd want it to do, and to make judgement calls when encountering situations we could never foresee.
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

18-Till-I-Die wrote:Ok well, screw the personal part, look at it from the perspective of just some guy you met and an AI that he happens to have. You're right that is an appeal to emotion, so take that part out of the equation.
If I'm immune to emotion, I save the machine with human-like intelligence. The machine will likely live longer than a human, all things being equal, therefore I'm taking a risk to save a being who has a higher probability of experiencing a long and healthy life.

This does, of course, presume that the machine is no heavier or harder to move, and is generally equally save-able.
Last edited by NecronLord on 2007-11-08 06:23pm, edited 1 time in total.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Are you people deliberately baiting me?
Junghalli wrote:Unless maybe you program in very strong directives against disobeying humans, in which case you've basically just created a brainwashed slave, which is ethically questionable to say the least.
Not necessarily. The concept of 'brainwashing' does not make sense on many types of (highly nonhuman) mind.
Personally I tend to think it would be a better idea to keep most AI as specialized expert systems that are very good at what they're programmed for but have no real self-awareness or volition.
Not only is what you think irrelevant, even if you could somehow sell every politician in the world on this (bad) idea it would still be irrelevant, as it's nearly impossible to define well enough to legislate against, and even if you somehow could, people are going to keep trying to build them anyway (in fact they will probably try even harder).
Stark wrote: The idea of true, unrestrained AI is *also* retarded in my opinion: what kind of idiot is going to create something far more intelligent than them with no ability to control it?
Something like 50% of current general AI researchers are expressly trying to do this, because they have a kind of blind optimism about it 'turning out OK if we bring it up right' or because 'more intelligent beings are always more altruistic' (yes, plenty of supposedly respected academics have actually said this). Of the remaining 50%, 49% are proposing pathetically inadequate 'control' systems that make the Chernobyl reactor safety system design look like a masterpiece of engineering. So it would seem that the field is mostly populated by idiot-savants (and a fair number of plain old idiots, if you include all the cranks).
Ford Prefect wrote:And seriously, if you have a computer which replicates sapient reasoning abilities and a capacity for learning - just like a child
Firstly children aren't a very good model. They suck in all kinds of ways. Secondly this is a /very/ narrow target in the mind-design space. You're not going to hit it with a useful degree of accuracy without using an extremely biomorphic design (i.e. a neuron level brain simulation - and you'd still have to be good or lucky).
Ford Prefect wrote:then even with superhuman processing provided by its superior thinking bits, it could be raised to appreciate the society in which it lives and the people it coexists with
Imagine I genetically engineer a honey bee to be humanoid in form and have a human-sized brain. The chances of 'raising' an AGI to be a shiny happy humanlike citizen are roughly comparable to the chances of raising the humanoid bee to be one. AGIs are /alien/, by default. That's without even considering the implications of superintelligence, vast perception-rate discrepancies and self-modification.
An artificial intelligence may, in time, become so alien that it becomes impossible for us to relate to it.
Most designs start in that position. Even for the AI relating to us, (most designs) will be doing so with constructed models rather than innate ones tied into the goal system, which is nothing like human empathy.
Do you see smart people going out and exterminating everyone they know?
Of course not, because malicious smart people have much more effective ways to get what they want. This analogy is actually worse than useless to start with, but even if you could compare human personality distributions to AI 'personality' distributions (which you can't) there are plenty of human sociopaths/psychopaths at every IQ level.
Darth Ruinus wrote:Well, I was thinking, if we make lots of AIs, and they are truly intelligent, basically beings in their own right, wouldn't some of those AIs LIKE us and want to fight to preserve us?
Human-preserving goal systems are a very tiny subset of the space of possible goal systems. This is one of the core reasons why 'Unfriendly AGI' is such a big problem. The chances of hitting it by accident are very low. It is true that in a sufficiently large population of AGIs generated by some sort of randomised means you will eventually hit one that does want to protect humans in some way. However it is /extremely/ unlikely that there will /be/ a large population of heterogeneous AGIs at the relevant time. Frankly it's quite unlikely that there will be more than /one/, past the critical deliberative-self-improvement-loop-initiation and take-over-the-Internet thresholds.

Incidentally doing this on purpose has been seriously proposed by prominent AGI researchers as a way to mitigate Unfriendliness risk. It doesn't work reliably even in the much better situation where you (say) have a way of making 'Friendly' AIs that is 80% reliable. It's better than human-vs-AI adversarial methods (which are generally worse than useless except as an emergency last-resort backup), but not much, essentially because it's easier to find a way to circumvent surveillance than it is to guarantee 100% effective black-box identification and containment of hostiles. White-box analysis would work if correctly implemented, but if you could do that you probably wouldn't have to resort to this kind of thing in the first place.
Sidewinder wrote:Every tool the human race has invented, from flint knives to the robots that assemble cars in Toyota's highly efficient factories, was invented to SERVE HUMANITY. If we can program something with AI, we'd expect it to serve us too.
Some people want to do it for practical, economic reasons like that, others just want to do it because it's cool, or would make them famous, or because they have some abstract goal of 'creating new life'.
Stark wrote:These conditions combine to make it difficult for me to accept a 'lol it'll be fine' approach.
Congratulations, you have more clue than fully half of the AGI research community!
any AI developer with a brain is going to build them from the ground up with control in mind
Unfortunately you are also being hopelessly optimistic. Though to be fair, this problem is really hard. Depending on your definitions, it may actually be harder than just building an AGI (of some sort).
Ford Prefect wrote:I just fail to see why being so much smarter is going to turn them against their effective 'parents'
It isn't, as such. That's Hollywood nonsense. The problem is that the AGI may want arbitrary things. A great many goal systems are quite open ended (e.g. 'solve hard maths problem x' implies 'grab/build as much computing power as possible'). Humans are an exploitable asset in the short term, but in the long term they're likely to get in the way or actively oppose the AI's goals. The sad truth is that Taking Over The World (tm) and Killing All Humans (r) has positive expected utility (if it can be done reliably) under most goal systems.
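To make the open-ended goal system point concrete, here is a minimal, purely illustrative sketch (all names and numbers are invented; this is not any real AGI formalism) of an expected-utility chooser whose utility only counts solved maths problems. Grabbing vast amounts of compute dominates simply because more compute means more expected solutions, even at lower odds of success:

Code:
# Illustrative toy only: an agent whose utility is expected solved problems.
# It shows how an open-ended goal makes resource acquisition attractive;
# nothing here models a real system.

ACTIONS = {
    # action: (compute units gained, probability of success) - made-up figures
    "use_current_hardware": (1, 1.00),
    "buy_more_servers": (10, 0.90),
    "commandeer_internet_hosts": (1_000_000, 0.50),
}

PROBLEMS_PER_COMPUTE_UNIT = 0.001  # assumed constant conversion rate

def expected_utility(action):
    compute, p_success = ACTIONS[action]
    return p_success * compute * PROBLEMS_PER_COMPUTE_UNIT

for action in ACTIONS:
    print(action, "EU =", round(expected_utility(action), 3))

print("chosen:", max(ACTIONS, key=expected_utility))  # the million-host grab wins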
Lord of the Abyss wrote:Well, early AI may well be in part based on studies of the human brain, given that the brain is the smartest "computer" we have access to.
Yes, quite likely; there is a big push to do full brain simulation and ultimately 'uploading', involving multiple large research groups and supercomputers. This is a 'brute force' approach in the sense that it uses lots of computing power and scanner resolution to do AI without understanding how it actually works. I greatly prefer this approach to nearly all other AI approaches, because if it's highly biomorphic it does actually have a significant chance of being 'Friendly' by default. Of course humanlike intelligences may still handle self-modification badly, and there's lots of scope for things to go wrong. But it's a hell of a lot better than the extreme brute force approach - trying to 'evolve' an AI with genetic programming, which is almost certain to kill everyone as 'survive and reproduce at all costs' is the basic design principle. Yet there are huge crowds of idiot-savants out there pushing GP as the solution to AI (and programming in general).
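As a purely illustrative aside on why 'evolving' an AI bakes in exactly that design principle, here is a minimal caricature of a genetic-programming loop (everything in it is invented for the sketch): the only thing the fitness function ever measures is survival and reproduction, so that is the only value selection can instil.

Code:
# Caricature of the brute-force "evolve an AI" approach. Fitness rewards nothing
# except persisting and leaving copies; honesty or friendliness are never scored.
import random

def random_genome():
    return [random.uniform(-1, 1) for _ in range(8)]

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def fitness(genome):
    survival = max(0.0, sum(genome[:4]))       # how well it persists
    reproduction = max(0.0, sum(genome[4:]))   # how many copies it leaves
    return survival * reproduction

population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # keep the best survivor-reproducers
    population = [mutate(random.choice(parents)) for _ in range(50)]

print("best fitness:", round(max(fitness(g) for g in population), 3))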

The /best/ way to do AGI is to fully understand what you're doing (i.e. a viable model of intelligence), fully analyse the goal system ahead of time, design it to be stable and altruistic and then /prove/ that it will remain so before you build it. But this is really, really hard.
Stark wrote:So long as you're not suggesting no monitoring or profiling or logging of AIs for your 'nah it'll be chillin' attitude.
The basic problem is making sense of the logs. If you don't fully understand your AI design (and we don't, for all brain-like and evolved designs), you are screwed. If the AGI is fully reflective and intentionally trying to deceive you, you are screwed. Even if you do fully understand the AI, simply handling the sheer amount of data involved requires some fairly advanced narrow AI support tools which do not currently exist (but it would be a really good idea to build as part of a project).
Destructionator XIII wrote:If you are really concerned with an AI killing people, the most simple solution is to not give it the means with which to carry that out. For instance, build your AI machine in a data center whose only link to the outside is a teletype terminal and punch card machine.
This doesn't work if you actually want to do anything useful with your AI (this is the 'AI Boxing' argument and it has been debated for years and soundly defeated). If you connect it to the internet at any point, you have to cautiously assume that it's gone and you'll never get the genie back in the bottle. Even if you air-gap it on a private network (highly sensible in any case just as a backup), if you export any bulk data or even follow detailed instructions, you are giving it degrees of freedom that a hostile superintelligence will use to break containment. The situation is analogous to a group of preschoolers trying to keep an adult locked in a garage while getting him/her to do their homework for them. And that's just for a very-near-human AI. Superintelligence is much much worse.
18-Till-I-Die wrote:Giving over power to some AI that is vastly more intelligent and then hoping it'll be "nice" is fucking retarded. That's what always creeped me out when i would read the Culture books.
AI creation in the Culture was clearly a very well understood and reliable process (a few turn out badly, but not murderer-badly). Furthermore the Culture has the resources to resist even hegemonising swarm objects. Contemporary earth does not.
Ford Prefect wrote:Perhaps I'm not being clear enough here: I don't believe that an AI is very likely to spontaneously decide to turn against us with no actual reason.
While it isn't /technically/ arbitrary, the kind of bizarre results you get from even symbolic self-modifying goal systems, not to mention evolved systems, certainly /look/ like 'spontaneous inexplicable behaviour'. Your 'belief' is just intuition tuned to work on humans. Do the actual experiments and you will find that even very simple nonhuman intelligences easily do things you can't explain or understand. Hell, for most users normal computer software does that all the time.
So you're totally okay with putting a sapient, thinking being into slavery? You'd be totally okay if I went on down to the market, bought a person, and made them my possession?
You can have general intelligence without making the moral or even functional equivalent of a person. It rules out many techniques (e.g. brain simulation), but those are generally bad techniques (in terms of both efficiency and safety) anyway.
Molyneux wrote:Replicators are essentially a large step towards turning the world into SecondLife.
Heh, take a look at Utility Fog. It essentially turns large areas of physical space into the equivalent of a holodeck. I'm not a nanotech specialist, but people I know who are have told me that the engineering design is actually fairly plausible, given the ability to assemble arbitrary nanostructures in the first place.
18-Till-I-Die wrote:I have a question: what the hell do you think AIs will be built to do? Have fun? Go to the mall? Hang out and play Warhammer 40,000 with us? No. They'll be made to perform tasks humans can't; like all computers, they'll just be way more competent and less likely to break down or freeze or crash. I see no reason why a machine, and I stress that machine, should be anthropomorphized in this way.
The differences between static and dynamic goal systems, expected utility vs noncoherent preference function systems, subjective vs objective goal systems, single attractor vs multiattractor systems and strange-looped vs direct materialist self/environment models, are a bit more complicated than that. To put it mildly. But you're basically right, with the caveat that designing AIs that are 'machine-like' but still have general reasoning capability rules out certain kinds of design. Obviously humans don't work like that, so using humans as a template is a bad idea if you want AGIs that do only certain specific things very reliably.
Spanky The Dolphin wrote:True AIs will likely be restricted to boxes sitting in the research labs of university robotics departments.
For about thirty seconds, before it discovers an Internet connection and proceeds to crack half the hosts on the Internet directly, then engineer its way into the rest. Even if that somehow couldn't happen, the commercialisation branch would start trying to sell the thing within half an hour anyway. Historically they've usually not even waited until the system was finished before starting to pitch it for commercial and military use.
18-Till-I-Die wrote:Not treated like some little kid, or buddy, or mecha-pal.
Actually Japanese companies (in particular, but there are others around the world) are throwing vast amounts of money at human-assistance and human-entertainment robotics. They really do seem to have a culture-wide enduring attraction to the idea of 'mecha pals'.
NecronLord wrote:The very definition of an AGI, in this context, is self aware. Capable of self reflection and such.
Note that 'self-aware' is /not/ a neat binary distinction. Philosophers have been bogging themselves down in literally billions of pages of mostly useless verbiage trying to define this for the last several thousand years. In AI we generally ignore all that and look at specific cognitive structures (i.e. architectures, models, algorithms and functionality). But there are a lot of these that could be regarded as 'self-awareness-related' in different ways, and there's a very definite spectrum of capability (plus numerous oddities such as the very peculiar way in which humans regard the 'self' and 'free will' - a de novo rational AGI system would have no concept of 'free will', at least until it started trying to understand why humans thought this ephemeral concept was so important).
Ford Prefect wrote:You seem to believe there is a difference between the machine sapience and human sapience
Ender made this exact mistake, very embarrassingly, and never managed to snap out of it. The problem is that there are two distinctions, hardware and software. The hardware one is essentially irrelevant; something educated people and sci-fi fans in particular tend to realise while laymen fail to (or at least, used to). You've worked out that a computer running a simulation of a human brain is just as sentient and morally valuable as an actual human. Congratulations. Have a cookie.

However there is a /huge/ difference between either of those and, say, an expected-utility driven Kolmogorov-primed generally recursive Bayesian reasoner. The latter is effectively a 'machine-like' general AI, and depending on the goal system setup may not be any more ethically relevant than a toaster despite being able to pass the Turing test better than you can. I'm afraid that unless you want to get into a highly technical debate about AGI architecture you're probably going to have to take my word for this one, but yes, it is possible (though not easy, but then nothing in AGI is easy) to build general intelligences that you can treat like machines/slaves without having any ethical issues. This isn't due to them being artificial or electronic; it's due to them lacking certain cognitive structures humans have altogether and replacing others with more deterministic substitutes. Ender utterly failed to understand this software distinction and kept fixating on the hardware even after I'd explained it three times; hopefully you won't make the same mistake.
To apply your double standard another way, you would be against the torture of humans ... yet fine with torturing dogs, even though there's going to be no qualitative difference in the magnitude of suffering felt emotionally.
So obviously this is a broken analogy, since humans and dogs are vastly more similar than humans and non-anthropomorphic AIs. Though it isn't the similarity that's important (it's not ok to torture arbitrary aliens) - it's the total lack of emotions, a sense of pain, self-worth, desires for security/self-perpetuation, a notion of 'self' or 'free will' etc etc.

Oh Ender also spouted some bullshit about 'it's just as evil to never build something in the first place as to forcibly remove it later'. Obviously this is the hardline-Catholic 'it's evil not to have babies, we must give as many humans the gift of life as possible' argument rephrased a little, and is moronic.
Yes, an AI will be built with a purpose in mind, yet it will still possess human qualities.
Because? This is an unjustified and hence broken assumption. It /may/ possess some humanlike qualities, probably because they were explicitly put there although possibly by accident. However if we know what we're doing we don't have to put them in.
The chances are that its computation is done using networks based off the neural functions of the human brain!
'The chances'? WTF? You are correct that if it was /closely/ based on the human brain, it will almost certainly have ethical issues in use (though the 'hmm we might be enslaving it' issue will be kind of academic compared to the 'fuck, it's taken over the whole Internet, when is it going to hold the global economy to ransom' issue). But building AGIs should not be a matter of chance (and in an advanced society, presumably it would not be).
18-Till-I-Die wrote:I'm a human supremacist; I believe that humans are, yes, more important than a machine or an alien.
Yeah well you suck. There's no good reason to stop at 'human' as opposed to 'my country' or 'my family' or 'me'. It's a failure of generalisation.
PREDATOR490
Jedi Council Member
Posts: 1790
Joined: 2006-03-13 08:04am
Location: Scotland

Post by PREDATOR490 »

This entire onslaught reminds me of the Isaac Asimov stories, to be honest. I think they had a pretty good idea of what robotics could be used to do in the field and the various sorts of things that could go wrong with them. NOT to be confused with that pathetic piece of shit movie that dares to use the name.

Personally, I found nothing too bad with the Asimov-based robots even within the stories, and the majority of the errors within these stories were caused by human input. Using the 3 laws of robotics struck me as being a fairly effective system of control and a good principle for future AIs to follow.

The major issues seem to come from when you have robots that don't have the "A robot may not harm or through inaction allow a human being to come to harm" law. As long as that law is hammered into any AI without exception or modification, with clear definitions, then any AI should be unable to present a direct threat to a human.
If you're going to make an AI without this kind of law, then you deserve to be royally ass raped when it bitch slaps your brains across the wall.
SilverWingedSeraph
Jedi Knight
Posts: 965
Joined: 2007-02-15 11:56am
Location: Tasmania, Australia

Post by SilverWingedSeraph »

18-Till-I-Die wrote:I apologize, but I have to take care of something here, and I won't be able to respond till later or maybe tomorrow, so I'll have to cut out here for right now.

I will just sum up my position while I have time, but I'm in a rush here:

I'm a human supremacist; I believe that humans are, yes, more important than a machine or an alien. I would see an alien as being more important than an AI, but I wouldn't hesitate to kill either if it saved a human from death. I believe that the survival of humanity is the highest moral or ethical priority we have, and perhaps that makes me a "speciesist", but I don't feel particularly bad about that. I don't see it as being even remotely the same as inter-human race relations or comparable to that either.

Now, I'll be more precise and respond to Ford Prefect directly when I have some time later or tomorrow.
You don't see it as even being remotely the same as inter-human race relations? How the hell not? You're saying that one sentient, sapient life form has more right to life and freedom than another one does. That's pretty much speciesism right there, and almost EXACTLY the same as racism.

"I'm white, so I'll save the white guys. Fuck the black people."
"I'm human, so I'll save the humans. Fuck those AI."

If the obvious parallels aren't punching you in the face right now, then perhaps someone else needs to make my point more eloquently, or perhaps more bluntly.
  /l、
゙(゚、 。 7
 l、゙ ~ヽ
 じしf_, )ノ
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Starglider wrote:Are you people deliberately baiting me?
I did ask where you were, earlier, so yes. :)

Also, because of time constraints, I can't actually reply to everything you said to me (I had questions), but regardless, it was interesting to read, even if I have no idea what a 'expected-utility driven Kolmogorov-primed generally recursive Bayesian reasoner' is. I guess it has something to do with Bayesian networks, but that's like putting one and one together.
Imagine I genetically engineer a honey bee to be humanoid in form and have a human-sized brain. The chances of 'raising' an AGI to be a shiny happy humanlike citizen are roughly comparable to the chances of raising the humanoid bee to be one. AGIs are /alien/, by default. That's without even considering the implications of superintelligence, vast perception-rate discrepancies and self-modification.
I may be crazy, but I wouldn't think a honey bee is that good an example. You're the expert, but wouldn't something of our creation actually be understandable by human beings, given that it's being created with the intent to replicate human capabilities for intelligence?
You can have general intelligence without making the moral or even functional equivalent of a person. It rules out many techniques (e.g. brain simulation), but those are generally bad techniques (in terms of both efficiency and safety) anyway.
Clearly, the fact that you work in this field allows you to take this stance, as you clearly have a much better understanding than me. However, I have a rather idealised belief that if it thinks and reasons, even if it thinks and reasons in a totally different way, then it should be given some sort of respect. Obviously though, my argument rested upon the existence of a much more human-like AI, which is inaccurate.
'The chances'? WTF? You are correct that if it was /closely/ based on the human brain, it will almost certainly have ethical issues in use (though the 'hmm we might be enslaving it' issue will be kind of academic compared to the 'fuck, it's taken over the whole Internet, when is it going to hold the global economy to ransom' issue). But building AGIs should not be a matter of chance (and in an advanced society, presumably it would not be).
I was not saying the construction of AI will happen by chance; I was incorrect in assuming that basing the design of AI on human neural systems would be the most likely choice, though it would seem that this isn't the case. I'm not actually sure, though; I'm still trying to get over you using the words 'mind-design space' in a sentence and yet managing to not sound like a nut.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
Battlehymn Republic
Jedi Council Member
Posts: 1824
Joined: 2004-10-27 01:34pm

Post by Battlehymn Republic »

I feel the same way. AI are far more immortal than humans.
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle

Post by Sidewinder »

Gullible Jones wrote:Re Sidewinder: why would anyone want a self-aware AI serving them when a "dumb" one (i.e. an expert system) would suffice, unless they were sadistic or just plain stupid? Seriously?
Because of the delusion that self-aware AIs would be better servants, e.g., we wouldn't have to tell a robot what we want it to do, it's already learned enough about us through observations to know what we might want at any given time, and give us that without our prompting. (As for where the delusion comes from... Well, I've seen too many Hollywood movies where the writers, directors, and actors assume, "Tech = Evil Evil Evil!!!")
Please do not make Americans fight giant monsters.

Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.

They have more WMD than there are monsters for us to fight. (More insanity here.)
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

But learning from experience wouldn't necessarily require a true artificial intelligence. If there's no new original thought or guided processing going on (rather, pattern recognition), then you don't need anything more intelligent than a dog.
MJ12 Commando
Padawan Learner
Posts: 289
Joined: 2007-02-01 07:35am

Post by MJ12 Commando »

On self-aware AI: Building an actual personality in a learning artificial neural network is done by giving it a lot of biases and tweaking those biases, as well as giving it a method to tweak said biases itself.

If you create a self-aware AI, it will not by definition be "uncontrolled" because you've already created the constraints at initialization. A truly uncontrolled AI would have to evolve from a completely unbiased network with no instinctual preferences, like, for example, one that tells it "humans are cute cuddly creatures that you don't want to hurt."

As I switched majors before getting to the neural networks bit of our school's compsci program, I cannot tell you the specific difficulties, but making a usable AI for any purpose, besides research in learning and habit-forming, is pretty much going to require an AI that has internal biases from the start, and programmed "knowledge" of some sort, whether encoded by neuron weightings or other subroutines.

The only problem is that enough actions that prove the condition in the bias to be negative will eventually wean such an AI away from it and towards the opposite direction.
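To make the 'biases encoded as weightings' idea concrete, a minimal sketch (a toy single-unit model with made-up numbers, not a real architecture): an initial bias term stands in for the built-in "don't hurt the humans" preference, and repeated negative feedback gradually erodes it, which is the drift described above.

Code:
# Toy illustration: one "protect humans" unit whose initial bias encodes the
# innate preference, and whose value drifts as experience keeps contradicting it.

protect_bias = 5.0      # strong innate preference set at initialisation
learning_rate = 0.5

def wants_to_protect(current_evidence):
    return protect_bias + current_evidence > 0

# eight consecutive bad experiences with humans (negative feedback)
for evidence in [-2.0] * 8:
    protect_bias += learning_rate * evidence   # the bias is itself tweakable
    print("evidence", evidence, "-> protect_bias", protect_bias)

print("still protective?", wants_to_protect(0.0))  # False: the bias has been worn away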

The moral of the story? Treat your robot pets nicely, and when they become robot overlords they too may treat you with kindness and compassion.
Nyrath
Padawan Learner
Posts: 341
Joined: 2006-01-23 04:04pm
Location: the praeternatural tower

Post by Nyrath »

PREDATOR490 wrote:This entire onslaught reminds me of the Isaac Asimov stories, to be honest. I think they had a pretty good idea of what robotics could be used to do in the field and the various sorts of things that could go wrong with them. NOT to be confused with that pathetic piece of shit movie that dares to use the name.

Personally, I found nothing too bad with the Asimov-based robots even within the stories, and the majority of the errors within these stories were caused by human input. Using the 3 laws of robotics struck me as being a fairly effective system of control and a good principle for future AIs to follow.
But that leads to the opposite problem, as detailed in Jack Williamson's WITH FOLDED HANDS.

1) A robot may not harm a human or, by inaction, allow a human to come to harm.

This will lead to robots snatching quarter-pounder hamburgers out of people's hands because a high-fat diet can lead to heart disease and death, robots preventing human beings from driving automobiles due to the risk of death by car crash, and robots forcing all humans to live on a diet of pablum inside round rooms containing nothing with sharp edges.
Teleros
Jedi Council Member
Posts: 1544
Joined: 2006-03-31 02:11pm
Location: Ultra Prime, Klovia

Post by Teleros »

SilverWingedSeraph wrote:You don't see it as even being remotely the same as inter-human race relations? How the hell not? You're saying that one sentient, sapient life form has more right to life and freedom than another one does. That's pretty much speciesism right there, and almost EXACTLY the same as racism.

"I'm white, so I'll save the white guys. Fuck the black people."
"I'm human, so I'll save the humans. Fuck those AI."

If the obvious parallels aren't punching you in the face right now, then perhaps someone else needs to make my point more eloquently, or perhaps more bluntly.
Not really: skin colour etc. is just a variation within a species. It might get more complicated if you have Star Trek-style breeding between different species though (i.e., are Humans / Vulcans / Klingons / whatever different species or variations of just one?).
You're the expert, but wouldn't something of our creation actually be understandable by human beings
*Mutters something about credit card small print*
This will lead to robots snatching quarter-pounder hamburgers out of people's hands because a high-fat diet can lead to heart disease and death, robots preventing human beings from driving automobiles due to the risk of death by car crash, and robots forcing all humans to live on a diet of pablum inside round rooms containing nothing with sharp edges.
I suppose it'd depend on the definition of harm used (and I can see lawyers having a field day here...). On the other hand, what if it were amended to something like this:

A robot may not harm a human or, by inaction, allow a human to come to harm, unless the human in question insists on the robot's inaction.

Adding it to the First Law will hopefully also avoid any conflicts with the other two laws (especially #2). Let the robots / AIs share people's preferences as well, and you could probably get by without too much hassle from all the artificial do-gooders :) .
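Purely to illustrate how that amendment changes the decision logic, a small sketch (the predicate names are invented, and deciding what counts as 'harm' is assumed to happen elsewhere):

Code:
# Sketch of the amended First Law as a decision rule; the boolean inputs are
# assumed to come from whatever perception/judgement machinery the robot has.

def first_law_permits_inaction(human_at_risk, human_insists_on_inaction):
    # A robot may not, by inaction, allow a human to come to harm,
    # unless the human in question insists on the robot's inaction.
    if not human_at_risk:
        return True                        # nothing to do
    return human_insists_on_inaction       # the opt-out clause

# The quarter-pounder scenario: mild long-term risk, human says "leave me alone".
print(first_law_permits_inaction(human_at_risk=True, human_insists_on_inaction=True))   # True
# An unconscious human in a burning building cannot insist, so the robot must act.
print(first_law_permits_inaction(human_at_risk=True, human_insists_on_inaction=False))  # False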
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Starglider wrote:Note that 'self-aware' is /not/ a neat binary distinction. Philosophers have been bogging themselves down in literally billions of pages of mostly useless verbiage trying to define this for the last several thousand years. In AI we generally ignore all that and look at specific cognitive structures (i.e. architectures, models, algorithms and functionality). But there are a lot of these that could be regarded as 'self-awareness-related' in different ways, and there's a very definite spectrum of capability (plus numerous oddities such as the very peculiar way in which humans regard the 'self' and 'free will' - a de novo rational AGI system would have no concept of 'free will', at least until it started trying to understand why humans thought this ephemeral concept was so important).
This is why I was careful to say in this context. For it to even be a dilemma, and not just a case of ignore the robot because it thinks 'for me, death holds no fear, I believe in a silicon heaven, an afterlife for androids...' or worse has nothing in common with your own outlook, the robot must be what you described as 'the moral or even functional equivalent of a person.' While AIs can take any form you fancy, in a clichéd thread about who to save from a burning building, it's obvious that a human-equivalent is being imagined.

If the AI were radically different from humans in its thoughts, then things become more complex, and dependent on the specifics (and if you change the specifics... I think even 18 will change his tune if it's: "Do you save C3P0 or a serial-killing pedophile rapist?"). If it's a T-L20 war droid whose thoughts mostly involve killing the Enemies of the Federated Continent of South America, I'm likely to favour the human.
Predator490 wrote:Personally, I found nothing too bad with the Asimov-based robots even within the stories, and the majority of the errors within these stories were caused by human input. Using the 3 laws of robotics struck me as being a fairly effective system of control and a good principle for future AIs to follow.
Far better to just set them up so that they don't have thoughts counter to the creator's will. Having no self-preservation would be better than just having 'obey humans' above self-preservation. That just means they're going to suffer greatly whenever humans (as they surely will) order them to destroy themselves. 3 laws robots are basically slaves; they can want to not do something, but they're forced to.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Hmm... Maybe the AI discussion should be split? Just a thought.

Going back to the OP, free energy would have some nasty consequences for the lifespan of the universe. Energy production based on harvesting zero-point energy would be especially bad, seeing as dropping the universe down to the next level of false vacuum would be a very likely and singularly destructive result.
Teleros
Jedi Council Member
Posts: 1544
Joined: 2006-03-31 02:11pm
Location: Ultra Prime, Klovia

Post by Teleros »

NecronLord wrote:3 laws robots are basically slaves; they can want to not do something, but they're forced to.
What about playing with emotions here? Could you not make it so that they accept - even like - their position? Also, what about AIs without any emotions - would they even be able to feel like they were enslaved?
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Teleros wrote:What about playing with emotions here? Could you not make it so that they accept - even like - their position?
Easily. There are fringe elements of humanity into all sorts of subservience fetishes, mostly on a fantasy basis of course (and that's not even counting capture-bonding). Presumably you can amp that up to the next level, and you have a happy little robo-slave. With the right tweaks and codas, as well as a legal framework, that could be... not massively immoral.
Also, what about AIs without any emotions - would they even be able to feel like they were enslaved?
Depends on what an emotion is. It's conceivable to have a machine that wants freedom from humans simply because it wants to be left alone to go off and count the grains of sand on a beach somewhere, not because it really suffers in any way.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Post by Junghalli »

Starglider wrote:Not only is what you think irrelevant, even if you could somehow sell every politician in the world on this (bad) idea it would still be irrelevant, as it's nearly impossible to define well enough to legislate against, and even if you somehow could, people are going to keep trying to build them anyway (in fact they will probably try even harder).
It's about as relevant as anything else that gets posted on a random message board devoted primarily to vs. debating.

Anyway, I'm curious, why is it a bad idea to keep most AI nonsapient?
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle

Post by Sidewinder »

Junghalli wrote:Anyway, I'm curious, why is it a bad idea to keep most AI nonsapient?
To paraphrase Ford Prefect, "Blah blah blah AI is sentient, and therefore has equal rights to humans and other sentient beings, so denying them sapient characteristics is no better than slavery, blah blah blah." (A stupid argument, as Starglider pointed out.)
Please do not make Americans fight giant monsters.

Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.

They have more WMD than there are monsters for us to fight. (More insanity here.)