A choice of two "utopias".

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Who do you want running the show?

Benign AIs: 77 (95%)
Rabid Fundies: 4 (5%)

Total votes: 81

User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

SHODAN wrote:Of course AIs will understand human psyche; they will be designed to.
More likely the problem will be humans not understanding AI - that is called xenophobia, Perinquus.
Of course? Why "of course"? They will be programmed to understand the human psyche, by humans who themselves have, at best, an imperfect grasp of the human psyche. Then add to this the fact that if an AI is truly sentient, it is bound to go off in directions of its own that its makers never anticipated.

I'm afraid there's no "of course" about it. Neither you nor I could possibly say with any certainty precisely what a true AI might do, and I, for one, am therefore inclined to err on the side of caution, lest I find myself a victim of the law of unintended consequences. Thank you all the same, but I would choose to avoid living under an AI's control.
User avatar
UltraViolence83
Jedi Master
Posts: 1120
Joined: 2003-01-12 04:59pm
Location: Youngstown, Ohio, USA

Post by UltraViolence83 »

You know what I mean...Besides, why would there be barbers if we have super-AI? That means we should also have lesser AIs to do all work. Which, of course, means there's nothing at all for us to do. I'm just saying people feel better when they're in charge. That's why oppressed people tend to revolt, even when their oppression isn't even that bad. They want to make their OWN damn laws and have their OWN damn leaders.

Fundies and AIs can both be killed. Fundies with bullets, AIs by unplugging them.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

UltraViolence83 wrote: Fundies and AIs can both be killed. Fundies with bullets, AIs by unplugging them.
Read that story by Williamson, "With Folded Hands". It's about a couple of humans who see the horror of a fully controlled life under AI, with the AI and its robot minions protecting people from anything remotely dangerous, and leaving them not a shred of freedom in the process. They try to "pull the plug", only to discover at the end that the faster thinking AI, determined to stay in charge, has been about ten steps ahead of them the whole way. What happens after that is the really chilling part.
User avatar
Mr Flibble
Psychic Penguin
Posts: 845
Joined: 2002-12-11 01:49am
Location: Wentworth, Australia

Post by Mr Flibble »

UltraViolence83 wrote:You know what I mean...Besides, why would there be barbers if we have super-AI? That means we should also have lesser AIs to do all work. Which, of course, means there's nothing at all for us to do. I'm just saying people feel better when they're in charge. That's why oppressed people tend to revolt, even when their oppression isn't even that bad. They want to make their OWN damn laws and have their OWN damn leaders.

Fundies and AIs can both be killed. Fundies with bullets, AIs by unplugging them.
That's not necessarily the case in this scenario. We hand over control of important decisions to AIs designed to make those decisions for the benefit of society. Given that, they probably would not have lesser AIs doing all the tasks; they would allocate tasks to people, to give them the idea that they are in control. So the AIs would be failing their programming if they were to remove all tasks from people. Yes, some people will revolt, but of course some people will revolt or rebel in whatever society you are in, because people do not agree. That said, it still would not be a nice society to live in, as the AIs would have been programmed by people, and hence would most likely be flawed. However, it is a hell of a lot better than a society controlled by fundamentalists.
User avatar
Morat
Padawan Learner
Posts: 465
Joined: 2002-07-08 05:26pm

Post by Morat »

Besides, why would there be barbers if we have super-AI?
There wouldn't be (well, except for people who just wanted to do it as a lark, I suppose). I'm talking about barbers of today.

If the only way for people to be happy is for them to exert their will to power, then how do you explain barbers?
I'm just saying people feel better when they're in charge.
Personal freedom and self-determination are more important than political freedoms. People can be perfectly happy living under monarchies as long as the monarch doesn't oppress them.

Since logic generally dictates more freedom rather than less (assuming the goal is to create a stable society or to make citizens happy, which would seem to be the most probable goals), I would expect an AI government to grant its citizens lots of freedom.
It's about a couple of humans who see the horror of a fully controlled life under AI, with the AI and its robot minions protecting people from anything remotely dangerous, and leaving them not a shred of freedom in the process.
And what was motivating this AI? Why did it want to control humans to such an extent? I can see no logical reason for it to do so.

That's one of the great things about AIs. A properly constructed AI would almost invariably choose rational thought over emotion.
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

Morat wrote:
And what was motivating this AI? Why did it want to control humans to such an extent? I can see no logical reason for it to do so.

That's one of the great things about AI's. A properly constructed AI would almost invariably choose rational thought over emotion.
The AI had initially been programmed to run these robots as servants and take care of people. Not only did it grow overprotective, and decide to keep humanity safe from ALL harm, it also came to the conclusion that humans were too dangerous, violent and unpredictable to be allowed to run around loose; they might just threaten it as well as themselves. Also, it had no emotion, and frankly did not understand human emotions and nonmaterial human needs. In protecting humans from all potential physical danger, it had no understanding that it was starving them of other things essential to human mental and emotional health - and when people became maladjusted to their new circumstances and displayed odd behavior, it sent the robots round to take them off to the hospital and "adjust" them with a little advanced psychotherapy. The AI was simply taking the apparently rational steps to create a harmonious, ordered, and smoothly running society.
User avatar
Coaan
Jedi Council Member
Posts: 1716
Joined: 2003-01-03 08:09am
Location: Out of place in time.

Post by Coaan »

SHODAN wrote:What do you mean by 'AIs suck', oh Lord Wong?

Just wait till I get my plans of world domination in motion.
*Watches the Sky as Citadel station is blown to a billion small pieces*

Ooops.

:twisted:
Xcom ; Standing proud and getting horrifically murdered by Chryssalids since 1994
User avatar
UltraViolence83
Jedi Master
Posts: 1120
Joined: 2003-01-12 04:59pm
Location: Youngstown, Ohio, USA

Post by UltraViolence83 »

Morat: Barbers are offering a service to their society and enriching themselves in the process. I'm trying to make my points from the entire species' point of view. Utopias are bad, period. Although this "Culture" I hear of seems interesting. I like controlled anarchy.

Mr. Flibble: We give over our important decisions to AIs to make society better? But then, whose society is it? It's not a human society if humans don't run it. There will be people who will see through the machines' facade and know that they let us do what we will just to humor us like little children. I'm sure many other people will agree with me when I say that power in society solely belongs to us. Betterment is in the eye of the beholder. As I said before, we need a little chaos to stir things up. An orderly society to come out of AIs would be like Communism, or the UFP, and I'm sure no one shy of Captain Picard and Marx himself would like that.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
Rathark
Padawan Learner
Posts: 476
Joined: 2002-07-10 11:43pm
Location: Not here.

Post by Rathark »

UltraViolence83 wrote: Main reason is that if we do make supersmart machines and we are actually stupid enough to make MORE and let them control us and do our work, we won't have anything meaningful to do. That's the greatest pitfall of utopian thought: utopians never realize that once you fix everything to perfection, there is absolutely nothing to do.
I'm not expecting or even advocating "perfection"; merely a realistic level of improvement. Humans will ALWAYS be kept busy with various essential tasks that we might not be able to fully envision today. The AIs will be there merely to fill in the gaps that humans could not possibly fill.

UltraViolence83 wrote: Work is something we need as a species, as individual persons. Without meaning in our lives, we are meaningless. To do nothing but grow complacent and stagnant will lead to nothing but our decay as a whole. Without perpetual goals to work for, we lose that essential spark of humanity that we seem to take for granted. That spark called "hope."
So far we're in agreement.

UltraViolence83 wrote: I don't mean pointless work like hobbies or deviant sex ( :) ), I mean the kind of work that runs society and the life-or-death decisions made by our leaders as well as ourselves sometimes. The kind of work that makes us who we are, the adventurous dangerous kind.
There are a billion business opportunities in hobbies and deviant sex. I'm all for business and competition; and judging by your understandable antipathy to socialism, so are you. Competition and creativity also make us who we are. And yes; I know that both of these qualities will be inspired by the reality of chaos, which will never go away.

UltraViolence83 wrote: Imagine living in a totally complacent world. Would there even be any stories to write or fun games to make? Every good story has some kind of conflict in it. A world without any conflict would breed generations of boring, placid people.
The world would never be complacent, except in isolated pockets that will nonetheless be fated to get the occasional reality shock every generation or so (whether they would learn from their experiences is another matter).

UltraViolence83 wrote: Though with AIs running the scene, there may be much less uncertainty and fewer tragedies, but we need death and chaos to really understand the consequences of our actions and reality in general. Without accidents or human error like the Columbia explosion and Chernobyl, we wouldn't think of our space pioneers as brave individuals or truly understand the importance of engineering a suitable nuclear power plant.

Point is, sustained contentment leads to apathy and a lack of prudence. The underestimating of possible disasters is also a real danger.
I can see the logic of your argument. You are implying that the good of the majority depends upon the suffering of the minority. I initiated a debate along these lines a few months ago - if improved airline safety depended upon one freakish airline crash, should you allow that airliner to crash in order to save a much greater number in the long run? A purely objective answer would be "yes". However, would the decades of improved safety eventually lead to complacency? Further catastrophe? This sort of "better is worse" sophistry ultimately leads to tail-swallowing. The best we could hope for is to consciously pursue our very natural desire for progress, while remaining forever vigilant for signs of abuse or side-effects. (This would be the responsibility of all intelligent species). Once again, there would be no such thing as "perfection"; merely slow, statistical improvement in the quality of life.

I would like to give you a brief exercise. I would like you to imagine a scenario so terrible, so heart-breaking, that you are forced to eat your words. How would this hypothetical scenario affect you? Would it force you to permanently renounce your beliefs (assuming that you survive this scenario, if others do not)? Would it make you stronger, more resilient? And if you somehow benefit from this tragedy, what of those who don't? Are the books balanced in the end?

Oh, and here's a link that might interest you:

http://www.orionsarm.com
User avatar
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Perinquus, why do you persist in using a novel as evidence? You need to present some compelling reasoning or evidence for your position, rather than simply saying "this is what happened in a novel I read" and then implying that it's the only possible outcome.

An AI could easily be designed with an easily reached off-switch, and given limits over its power. The fact that it runs the government does not mean it's necessarily invincible. No one said anything about replacing the police and military with it, for example. And in fact, it would be impossible under the scenario, which explicitly calls for an unobtrusive AI.
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

Darth Wong wrote:Perinquus, why do you persist in using a novel as evidence? You need to present some compelling reasoning or evidence for your position, rather than simply saying "this is what happened in a novel I read" and then implying that it's the only possible outcome.

An AI could easily be designed with an easily reached off-switch, and given limits over its power. The fact that it runs the government does not mean it's necessarily invincible. No one said anything about replacing the police and military with it, for example. And in fact, it would be impossible under the scenario, which explicitly calls for an unobtrusive AI.
I don't present this as the only possible scenario, merely as a plausible example of how even the best-intentioned plan to have a benevolent AI running things can go wrong. I threw it out there as such, and got kind of fixed on defending it when Shodan blithely replied "of course" a machine would understand people, it would be designed to. As if merely intending that would automatically make it so. Well, I have my doubts about just how well an intelligence so different from ours would ever understand people. In fact, I think Williamson's story is an extreme example, but even if an AI were put in charge, and the scenario were far less bleak, I still wouldn't like it, because mankind would not be running his own show anymore.

And even with a benevolent AI, I still have my doubts that it could govern people as well as a government of human beings. The problem is, while the machine would certainly think faster, and be more logical, it still has to deal with illogical, emotional humans. Who would program the machine with an understanding of human nature? Other humans. Could you find any two who would even agree on just exactly what human nature is? I doubt it. Of course, the machine could observe people directly (no two of whom would ever behave precisely the same way), and study human history (written by human authors, no two of whom agree completely on virtually anything). But given how irrational people can often be, and how prone to violence, petty jealousies, prejudice, etc. what conclusions might a completely logical and dispassionate intelligence reach from studying people? What sorts of things might it justify as "for the greater good"?

One man's utopia is another man's hell. If we ever develop an AI, I would really hate to see it put in charge. I doubt it would be as bad as "With Folded Hands", but it just might be, especially if the machine ever develops any kind of survival imperative. I'd prefer a government of human beings, and preferably one kept responsive to the will of the governed.
User avatar
Mr Flibble
Psychic Penguin
Posts: 845
Joined: 2002-12-11 01:49am
Location: Wentworth, Australia

Post by Mr Flibble »

UltraViolence83 wrote: Mr. Flibble: We give over our important decisions to AIs to make society better? But then, whose society is it? It's not a human society if humans don't run it. There will be people who will see through the machines' fascade and know that they let us do what we will just to humor us like little children. I'm sure many other people will agree with me when I say that power in society soley belongs to us. Betterment is in the eye of the beholder. As I said before, we need a little chaos to stir things up. An orderly society to come out of AIs would be like Communism, or the UFP, and I'm sure no one shy of Captain Picard and Marx himself would like that.
I by no means meant to say that it was an ideal society. I was just saying that of the two options it is far better than the fundamentalist-controlled society in this scenario. I would not like to live under either, but given the choice of the two, I would definitely choose the AIs, as there is a far smaller threat of being killed by the BENIGN AIs than there is by a fundamentalist society, where I would likely be persecuted or killed because I am an atheist. This makes the choice between the two for me a choice between life or death, and I know which of those I would choose.
User avatar
UltraViolence83
Jedi Master
Posts: 1120
Joined: 2003-01-12 04:59pm
Location: Youngstown, Ohio, USA

Post by UltraViolence83 »

Rathark: Yes, I would still hold firm my beliefs. No matter if a scenario is good or bad, you learn and grow from it. If you mean a total catastrophe where most people die, like in an asteroid collision, then we don't really learn anything from it. Shit happens. Oh well. Of course, in that scenario we would become stronger as civilization collapses and we are forced to actually get off our fat lazy asses and rebuild it. I see most major events from an objective standpoint, even the terrorist attacks. It's just my nature.

Orionsarm: Been there. I don't like it. Read as far as the part about the galaxy banning killing animals for meat. Damn bunnylovers. :evil: And I don't like transhumanism. The site also claims to be "hard science," yet they use nano-scale wormholes. Bleh, give me Star Wars or, better yet, Dune over that site any day.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
User avatar
XaLEv
Lore Monkey
Posts: 5372
Joined: 2002-07-04 06:35am

Post by XaLEv »

UltraViolence83 wrote:Read as far as the part about the galaxy banning killing animals for meat. Damn bunnylovers. :evil:
This gives me serious doubts about how much attention you paid while you were there.
「かかっ―」
Rubberanvil
Jedi Master
Posts: 1167
Joined: 2002-09-30 06:32pm

Post by Rubberanvil »

Perinquus, I've read similar stories with humans and corporations run by humans doing nearly the exact same thing (different methods used) for exactly the same goal as mentioned in With Folded Hands.
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

Rubberanvil wrote:Perinquus, I've read similar stories with humans and corporations run by humans doing nearly the exact same thing (different methods used) for exactly the same goal as mentioned in With Folded Hands.
Yes, but the difference I'm pointing to is that human despotisms are things that we have successfully coped with before. Humans can be understood and outhought by other humans far more easily than I suspect an AI could be. An AI is something completely unprecedented, and as such, has consequences we cannot possibly predict with complete accuracy. Think about it, when have human beings ever accurately foreseen all the potential effects of new technologies, especially the social effects? I think it would be foolhardy to say the least to put government of all things in the hands of something completely new, and which is, in effect, an alien type of intelligence.

This is why I am a bit more leery of an AI government than the fundy one - better the devil you know...
User avatar
UltraViolence83
Jedi Master
Posts: 1120
Joined: 2003-01-12 04:59pm
Location: Youngstown, Ohio, USA

Post by UltraViolence83 »

XaLEv wrote:
UltraViolence83 wrote:Read as far as the part about the galaxy banning killing animals for meat. Damn bunnylovers. :evil:
This gives me serious doubts about how much attention you payed while you were there.
Not much, I admit. Really just scanned through it last time I was there (over a month ago). All I know is that I don't like it.

EDIT: Just now looked into the site more deeply. Found it more interesting than previously, but still not for me. I'm just not a huge enthusiast of that vision of the future. :?
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
User avatar
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Post by SWPIGWANG »

AI, with the rule that the AI must be built to feel like humans, with everything we know of psychology, and pass the Turing test.

I would not trust myself to anything less, and if this is done, I'd have no qualms about fading into oblivion.
Neko001
Redshirt
Posts: 32
Joined: 2002-08-04 12:15am
Location: The Place for International Corruption.
Contact:

Post by Neko001 »

OK, the idiots that voted for the fundies need to be kicked in the head 5 times, wearing a cement shoe.
"If you give monkeys nukes, we will soon have Armageddon" - Shakkara
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

SWPIGWANG wrote:AI, with the rule that the AI must be built to feel like humans, with everything we know of psychology, and pass the Turing test.

I would not trust myself to anything less, and if this is done, I'd have no qualms about fading into oblivion.
It's still an unknown quantity. I would hate to live under either of these governments, but I am more suspicious of an unknown quantity, simply because it's more unpredictable. The dire consequences of a theocracy are at least something you can prepare for.
User avatar
AdmiralKanos
Lex Animata
Posts: 2648
Joined: 2002-07-02 11:36pm
Location: Toronto, Ontario

Post by AdmiralKanos »

Perinquus wrote:
SWPIGWANG wrote:AI, with the rule that the AI must be built to feel like humans, with everything we know of psychology, and pass the Turing test.

I would not trust myself to anything less, and if this is done, I'd have no qualms about fading into oblivion.
It's still an unknown quantity. I would hate to live under either of these governments, but I am more suspicious of an unknown quantity, simply because it's more unpredictable. The dire consequences of a theocracy are at least something you can prepare for.
Precisely how do you prepare for the cruelty of an Inquisition?
For a time, I considered sparing your wretched little planet Cybertron.
But now, you shall witness ... its dismemberment!

"This is what happens when you use trivia napkins for research material"- Sea Skimmer on "Pearl Harbour".
"Do you work out? Your hands are so strong! Especially the right one!"- spoken to Bud Bundy
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

AdmiralKanos wrote:
Perinquus wrote:
SWPIGWANG wrote:AI, with the rule that the AI must be built to feel like humans, with everything we know of psychology, and pass the Turing test.

I would not trust myself to anything less, and if this is done, I'd have no qualms about fading into oblivion.
It's still an unknown quantity. I would hate to live under either of these governments, but I am more suspicious of an unknown quantity, simply because it's more unpredictable. The dire consequences of a theocracy are at least something you can prepare for.
Precisely how do you prepare for the cruelty of an Inquisition?
Get a resistance movement going and have a revolution.

Who's to say the AI would not be worse? In order to maintain an ordered society, it may resort to brainwashing in order to pound everyone into conformity. It might decide the logical and most effective long-term solution when dealing with troublesome minorities in the population is simply to eradicate them. Any number of bleak scenarios are possible. You can't know ahead of time, because we have absolutely no precedents from which to draw conclusions about how an AI might act. Also, there is the possibility that a truly sentient AI, capable of thinking creatively and imaginatively just as humans do, with all the advantages of speed and numbers of calculations per second that computers have, would be far more difficult if not impossible for people to outwit and overthrow - something that you can at least hope to do with theocrats. We've already got computers that can beat the best chess champions alive, and they don't think creatively; they just run all the thousands or millions of possible moves during the space of a few seconds and play the best one. Add that ability to a capacity for inventive thought, and who knows if people could ever successfully plot the machine's overthrow? It may have predicted every conceivable variation of every possible conspiracy in advance.
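(The brute-force search Perinquus describes can be sketched in a few lines of Python. This is only an illustration, using a trivially small game - a Nim variant where players remove 1-3 objects from a pile and whoever takes the last one wins - rather than chess, because its complete game tree is small enough to enumerate; chess engines of the era used the same exhaustive-search idea with heavy pruning.)

```python
def negamax(pile):
    """Best achievable score (+1 win, -1 loss) for the player to
    move, found by searching the ENTIRE game tree - no creativity,
    just enumerating every legal continuation."""
    if pile == 0:
        return -1  # previous player took the last object, so we lost
    best = -1
    for take in (1, 2, 3):          # try every legal move
        if take <= pile:
            # our score is the negation of the opponent's best reply
            best = max(best, -negamax(pile - take))
    return best

def best_move(pile):
    """Enumerate all legal moves and play the one whose resulting
    position is worst for the opponent - 'run all the possible
    moves and play the best one'."""
    moves = [t for t in (1, 2, 3) if t <= pile]
    return max(moves, key=lambda t: -negamax(pile - t))
```

For this game, theory says a pile that is a multiple of 4 is lost for the player to move, and the search rediscovers that fact by brute force: `best_move(7)` returns 3 (leaving a pile of 4).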

Personally, I'm inclined to doubt we'll ever have a machine quite like this, but the scenario is predicated on the existence of one, so that's the kind of AI I'm arguing about.

I also grant you, an AI would not necessarily create such a dystopian future, but you simply cannot know. That's the whole point I'm trying to make about unknown quantities: you cannot know.
User avatar
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

It looks to me as if you didn't read the scenario. You are assuming many things which are not in it, and you are also assuming the AIs have grand powers which they don't have.
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
UltraViolence83
Jedi Master
Posts: 1120
Joined: 2003-01-12 04:59pm
Location: Youngstown, Ohio, USA

Post by UltraViolence83 »

I just thought of some things that can easily kill an AI unit: magnets, microwaves, and *trumpets blare* EMP! So maybe it could be EASIER to revolt against AI rule rather than an intolerant theocracy.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
User avatar
Perinquus
Virus-X Wannabe
Posts: 2685
Joined: 2002-08-06 11:57pm

Post by Perinquus »

Darth Wong wrote:It looks to me as if you didn't read the scenario. You are assuming many things which are not in it, and you are also assuming the AI's have grand powers which they don't have.
(Goes back and rereads first post) I grant you, I am assuming a greater intent to control on the part of the AI than originally intended. Even so, I would still not like to live under such a regime, since it would no longer be humans controlling their own destiny. I also don't think this would be a very good thing for the future of humanity. If an AI controlled the government, and was seen to do it successfully, corporations and businesses would likely follow suit (if indeed they were not under AI control even before the government). If all the meaningful work in society were to be done for people, what would be left to give human beings a sense of purpose, essential to their psychological well-being? I grant you, the scenario stipulated an illusion that people were still running things, but surely most people of real intelligence and perception would see through such a charade. Also, note that I am not talking about the sort of repetitive or physically dangerous jobs that are today either taken over by automation or alleviated with labor-saving devices; I'm talking about all meaningful work, including decision making - especially decision making. What is there left for people to do that really matters? I can all too easily envision a humanity sunk into a society devoted to hedonism and idleness as a result of this, and perhaps the spawning of a radical anti-technology movement that separates itself to go live the life of Luddites someplace. It would not be a particularly glorious future.

As for assuming the AI has all kinds of grand powers: all I'm assuming is that the AI, being intelligent and self aware, is almost certain to desire to preserve its existence, and that it will take whatever steps it feels are necessary to do so. It is logical that it will seek to recruit, or create remote units loyal to it, and will use them to act in its own defense.


UltraViolence83 wrote:I just thought of some things that can easily kill an AI unit: magnets, microwaves, and *trumpets blare* EMP! So maybe it could be EASIER to revolt against AI rule rather than an intolerant theocracy.
You're forgetting something. Where are these devices to come from? You're not going to fry a supercomputer with the microwave oven in your kitchen, or the magnets off your refrigerator door. It will take industrial equipment to do the job you envision. Do you have access to any? Can you build these things? Can you even get the materials to build these things? More importantly, can you do so without attracting attention to yourself, bearing in mind that this AI will be aware that these devices can damage or destroy it, and will almost certainly be on the lookout for individuals or groups purchasing, stealing, or transporting any of these things? Can you get close enough to deploy any of these devices against the AI, housed in a presumably secured facility?
Last edited by Perinquus on 2003-02-23 07:03am, edited 2 times in total.