Artificial Intelligence: Why Would We Make One?
Moderator: Alyrium Denryle
- Broomstick
- Emperor's Hand
- Posts: 28822
- Joined: 2004-01-02 07:04pm
- Location: Industrial armpit of the US Midwest
Re: Artificial Intelligence: Why Would We Make One?
It's quite possible that there are significant differences between Wal-Mart and Meijer's customers. My Meijer's has had self-serve check out for several years now and they're expanding it.
This sort of thing may sort out based on store location - some localities are more prone to dysfunctional people, people trying to cheat the system, and so forth.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.
Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.
If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy
Sam Vimes Theory of Economic Injustice
- someone_else
- Jedi Knight
- Posts: 854
- Joined: 2010-02-24 05:32am
Re: Artificial Intelligence: Why Would We Make One?
I'm getting sick of this. I'll try to explain my first post, putting aside any more sophistry to win this debate, since you're too smart and I'm doomed to fail at fooling your unblinking eye.
My neuroanatomy professor told me that to understand what did what in the human brain, they studied cases where people had suffered damage to specific areas (of the brain, of the spinal cord, or whatever).
My original post in this thread wrote: You know, to learn about how something works you must try to fuck some specific part up and see what happens, but doing it on actual human brains is uhhhhh... immoral. Waiting for bad things to casually happen to people and studying them is very inefficient, and chimps are just related to us, but not the same. [editor's note: this is probably the single most damning statement, not only because it was your first in this thread but for its sheer irreconcilability with your current statements]
He also said that various experiments were done in which living primates, rats, and dogs were deliberately brain-damaged, to understand what is connected to what and how the signals are modulated.
Then I extrapolated that since models are used to understand some brain areas, models can be used to understand other areas as well, so, given enough time, you can have an AI with the specs the OP asked for.
Then you misinterpreted my (probably badly written) post, based on the assumption that brain research studies only the brain's higher functions (for some unknown reason), and donned the Moral Police hat to start bombing my ass.
And I defended myself by acknowledging that if you want to study the higher functions of the brain, you have the limitations you raise, and added some sophistry to it for good measure 'cause I wanted to win.
Formless wrote: Is this really the level of thought posters are allowed to get away with in SLAM now?
Well, your posts aren't particularly better than mine either (apart from my awesome writing style). We both post without giving any particular source, so this qualifies as a nitpicking contest at best.
Formless wrote: Making fusion reactors is one of the most balls-hard engineering problems ever conceived, and that is what you compare this to?
Realize the difficulty of the feat asked for in the OP (recreating a human mind in a computer). If it can be done at all, I think this is a way to do it, since we're already modelling parts of the human brain.
Formless wrote: Even in medicine we have epidemiology for a reason. You just plain don't know what you are talking about.
I plain don't understand this answer either. Care to elaborate a little?
Formless wrote: They will also be incapable of higher thought. Or any thought. Really, this is so ignorant of basic neurobiology it could have been written by avianmosquito.
There is still a lot to learn about even what the spinal cord does. We know more or less where the cabling runs and what is responsible for what general function, but not how it actually works in decent detail.
Pull your head out of the psychology books (and possibly out of your ass). Neuroscience still does research on motor control and sensor control areas too.
Formless wrote: Not everything that in theory can be solved in practice can.
Of course, but unless you're aware of specific limitations there is no reason to assume it is impossible without trying. Your claim that it is "too complex" is too vague to be a credible limitation. Computers get better every day and can catch up in 50 or so years while we build the simpler models.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo
--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
- someone_else
- Jedi Knight
- Posts: 854
- Joined: 2010-02-24 05:32am
Re: Artificial Intelligence: Why Would We Make One?
Broomstick wrote: This sort of thing may sort out based on store location - some localities are more prone to dysfunctional people, people trying to cheat the system, and so forth.
While gas stations here have had self-service for decades (to make some money at night), and stealing gas or diesel was (and still is) not practical (cameras all around and everything bolted shut), most supermarkets doing the same tend to have very exploitable payment systems.
I mean, there is no one checking whether you are actually paying for all the products you buy (for example, you can pay for only some of the items in your shopping bag).
I don't think it will be a good idea in the long run.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo
--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
- Broomstick
- Emperor's Hand
- Posts: 28822
- Joined: 2004-01-02 07:04pm
- Location: Industrial armpit of the US Midwest
Re: Artificial Intelligence: Why Would We Make One?
That's why you still need a person to monitor the checkout systems. There are some safeguards (from the store's viewpoint) built in, but yes, if you're really, really determined you can defeat them. You can also sneak shit past a harried, overworked human cashier and/or play short-change games with them.
As I said - different stores have different sorts of people walking in the door, that's why some stores have extensive theft-deterrent systems and some don't. Clearly, self-serve checkout will work best where most customers are honest and not trying to game the system. Such places do exist, as not everyone is a cheat.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.
Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.
If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy
Sam Vimes Theory of Economic Injustice
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Artificial Intelligence: Why Would We Make One?
someone_else wrote: My neuroanatomy professor told me that to understand what did what in the human brain, they studied cases where people had suffered damage to specific areas (of the brain, of the spinal cord, or whatever). He also said that various experiments were done in which living primates, rats, and dogs were deliberately brain-damaged, to understand what is connected to what and how the signals are modulated. Then I extrapolated that since models are used to understand some brain areas, models can be used to understand other areas as well, so, given enough time, you can have an AI with the specs the OP asked for. Then you misinterpreted my (probably badly written) post, based on the assumption that brain research studies only the brain's higher functions (for some unknown reason), and donned the Moral Police hat to start bombing my ass. And I defended myself by acknowledging that if you want to study the higher functions of the brain, you have the limitations you raise, and added some sophistry to it for good measure 'cause I wanted to win.
Well, I'm glad you came out to clarify. But I have to tell you, your neuroanatomy professor is coming at this from a limited perspective. Cases like Phineas Gage (who survived having a railroad tamping iron driven through his skull, with the dramatic and unexpected side effect of a changed personality) were indeed important for kicking off brain science, but higher functioning is both the least well understood part of the brain and what much of AI research is interested in. We already have a good general idea of what the brain's constituent parts do (there aren't that many of them, as I'm sure you know); what is interesting now is the more specific functions they carry out, especially in the cortex. Indeed, if you are going to use AI for modeling purposes, the higher functions seem to me to be what you would want to study: the general idea can in fact be gained from studies on brain-damaged dogs, because their common ancestor with humans is near enough in the past that the basic functioning isn't all that different. The higher cognitive functions, not so much.
someone_else wrote: I plain don't understand this answer either. Care to elaborate a little?
Sure. Epidemiology tells us which parts of the population are more at risk for certain diseases than others. Obviously, when you are talking about injuries or infectious diseases, the average person's immune response is going to be roughly the same most of the time. But other times it reveals differences among humans that cannot be ignored. Research on any disease with a genetic or age component, for instance, is inapplicable to the rest of the population. Racial differences (and as much as people don't like to talk about them, they are there) can mean the difference between being at risk for diabetes or for lactose intolerance. And in psychiatry, as I already noted, cultural differences can change the presenting symptoms of even disorders that are observable across cultures. You cannot so easily generalize about the human body or mind: it is what it is.
someone_else wrote: There is still a lot to learn about even what the spinal cord does. We know more or less where the cabling runs and what is responsible for what general function, but not how it actually works in decent detail.
But in the context of artificial intelligence, simulating the spinal cord and/or cerebellum is not enough to qualify. It's part of a larger organ, not intelligent on its own. I'm sure there is useful data there that can be modeled, like the functioning of the thalamus, but it would not qualify as an artificial intelligence, because the thalamus' primary function is to relay information between different parts of the brain rather than to process it.
someone_else wrote: Pull your head out of the psychology books (and possibly out of your ass). Neuroscience still does research on motor control and sensor control areas too.
(If you haven't noticed, my psychology books do contain at least some information on neuroanatomy, so it's not like I can't follow along with what you are saying.)
BTW, if other people don't want us to continue this argument I would be willing to discontinue this conversation.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- Guardsman Bass
- Cowardly Codfish
- Posts: 9281
- Joined: 2002-07-07 12:01am
- Location: Beneath the Deepest Sea
Re: Artificial Intelligence: Why Would We Make One?
Destructionator XIII wrote: Yeah, that makes sense.
Wal-Mart seems to be the big hold-out on self-check-out in my neighborhood. All of the grocery stores around here installed self-check-out several years ago.
Formless wrote: BTW, if other people don't want us to continue this argument I would be willing to discontinue this conversation.
I'm cool with it. Neuroscience is likely going to have a major effect on attempts to simulate a human brain.
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
- someone_else
- Jedi Knight
- Posts: 854
- Joined: 2010-02-24 05:32am
Re: Artificial Intelligence: Why Would We Make One?
Formless wrote: But I have to tell you, your neuroanatomy professor is coming at this from a limited perspective.
Well, nowadays the vast majority of experiments that in the old days needed such barbaric animal sacrifices can be done on humans with brain imaging and other similar non-invasive techniques.
But it is also true that most of those new methods have an obvious limit (basically the same one as the barbaric techniques above): they tell you "what parts are active" at any given moment, but not what the hell those parts are actually doing at the tissue level, which is what you need to get beyond our current understanding. So to get that data you need to go back to fucking stuff up (or, given the moral implications, looking at stuff that got fucked up naturally).
To make a (half-assed) example: the visual cortex has primary and secondary areas. They all show up as "active" in brain imaging when the subject is looking at stuff, but there is a bigger part (primary) and a few smaller parts (secondary). So researchers looked at people who had suffered brain damage to those secondary areas and discovered that such people had problems remembering whether they had seen an object before. And that's how we know those secondary areas handle visual memory.
Or, more recently, looking (with brain imaging) at the visual areas of people blind from a very young age, researchers saw that parts of those areas are still active when the subject is moving around with a cane or listening to a sound. One of the theories tossed around is that such a part may be something like a "3D rendering" area that uses all the sensory feeds to build a virtual 3D model of the subject's surroundings, which other parts of the brain then use for navigation. This is still work in progress, though.
Formless wrote: We already have a good general idea of what the brain's constituent parts do (there aren't that many of them, as I'm sure you know); what is interesting now is the more specific functions they carry out, especially in the cortex.
Nope. We have a good idea of what the brain's constituent parts are. What they do is still the subject of an intensive research effort (that is, we know some of it, but not everything).
The point is that this research has stepped one level deeper and is looking at the interactions between neurons (that is, simply put, "how the brain works"). And as I said, brain imaging cannot help much here, since it isn't sensitive enough to single out the location and activity of every single cell.
So you're stuck with doing experiments (usually in vitro) with a very limited goal, like this.
And after a while, you end up with huge numbers of papers that each say only a tiny bit (we have already reached this stage, more or less).
So you need to find a way to link all these results up and test them working together to get an idea of what the hell is actually going on (and also see where more research is needed).
Here is where computer modeling becomes useful.
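To make the "computer modeling" step above concrete, here is a minimal sketch of one of the simplest single-neuron models used in this kind of work, a leaky integrate-and-fire neuron. The function name and all the constants are illustrative, textbook-style placeholder values, not figures from any study discussed in this thread:

```python
# A minimal leaky integrate-and-fire neuron: one illustrative building block
# for the "link the results up" models described above. All constants are
# placeholder values in the usual textbook ranges (mV, ms, nA, megohm).
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return the membrane voltage trace and spike times for an input current trace."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest while the input drives it up.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: record a spike, reset
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return voltages, spikes

# A constant 2 nA input makes this neuron fire regularly; zero input never does.
trace, spike_times = simulate_lif([2.0] * 1000)
```

Wiring thousands of units like this together, with connectivity constrained by the in vitro results, is the general shape of the modeling effort described above.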
(A little clarification: what we call the "brain" is basically the cortex plus the cabling that connects cortex areas. The "higher functions" occupy around 1/3 or so of the total cortex; the rest of the cortex processes sensory feeds and controls muscle movement. This is the main reason why "if we used 100% of our brain" claims are stupid.)
Formless wrote: Indeed, if you are going to use AI for modeling purposes, the higher functions seem to me to be what you would want to study.
Of course. But the problem is that the "higher functions" don't live in a vacuum: they are constantly bombarded by signals coming from the sensory cortex, and more or less constantly communicate with the motor cortex. So, to understand them you need a very good understanding of the signals that enter and exit them (duh!). That means you have to know exactly how the sensory cortex elaborates physical stimuli and how the motor cortex elaborates the signals it receives, in humans. And I say in humans because any other animal has pretty different "higher functions", so the signals reaching and generated by them are pretty different as well. The difference is mostly in how the cells are arranged and linked, but since here form IS function...
Trying to model the higher brain functions without first understanding the interface parts better and emulating the brain tissue would complicate an already mind-bogglingly complex problem even further, imho. Sociological and most neurological experiments don't tell you which neuron fires where; at best they give a general idea of the area activating. So while you can build a model of "human reactions" from such data, it won't resemble the human brain (it would be a machine programmed to simulate only its reactions, not how it decides to react that way), and it is also a pain in the ass to gather enough information to make it half-decent, given the complexity of human reactions. Also, I have no clue how something like this could prove useful for understanding how the brain actually works.
The inherent stupidity of this approach possibly explains why you were so outraged in the posts above.
Formless wrote: And in psychiatry, as I already noted, cultural differences can change the presenting symptoms of even disorders that are observable across cultures. You cannot so easily generalize about the human body or mind: it is what it is.
Generalizing allows you to establish a bridgehead on the issue; then you can add race-specific, population-specific, and culture-specific corrections to your approximation. You cannot study each human being as a unique artifact: you'd never live long enough to discover anything useful.
Even something that looks much easier to study, like anatomy (it isn't, not even a bit), is packed with generalizations about shape, length, exact position, the presence or absence of certain details, and so on.
In some areas (the digestive system? the kidneys?) there is so much variability in the shape and position of critical parts that before doing surgery on them you need to run exams to see what kind of "special case" the subject is.
Formless wrote: But in the context of artificial intelligence, simulating the spinal cord and/or cerebellum is not enough to qualify. It's part of a larger organ, not intelligent on its own.
Well, if I wanted to nitpick, I could have said that even the "higher brain functions" are part of a larger organ and aren't intelligent on their own.
They need an interface to sense and act on the external world; without one they are worse than useless.
Of course, models of the interface parts alone won't qualify as AI all by themselves, but they must be modeled and (later) integrated into this kind of brain-research AI for it to work correctly (the purpose of all this is understanding how the brain works; making an AI is only a means to an end).
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo
--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Artificial Intelligence: Why Would We Make One?
someone_else wrote: The inherent stupidity of this approach possibly explains why you were so outraged in the posts above.
Pretty much, yeah. There are only so many times you can read ignorant transhumanist shit from guys like LionElJhonson and Ray Kazurwili before you've heard enough. Yours, on the other hand, are the kind of posts I can actually learn something from, if late into the game.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- someone_else
- Jedi Knight
- Posts: 854
- Joined: 2010-02-24 05:32am
Re: Artificial Intelligence: Why Would We Make One?
A couple links that I didn't manage to insert due to stupid connection problems.
myself wrote: The point is that this research has stepped one level deeper and is looking at the interactions between neurons (that is, simply put, "how the brain works").
The Human Connectome Project, for example, is a major effort in this direction, using the usual cutting-edge custom-made tech and supercomputers to map the brain patterns of a thousand or so healthy human individuals.
myself wrote: (A little clarification: what we call the "brain" is basically the cortex plus the cabling that connects cortex areas. The "higher functions" occupy around 1/3 or so of the total cortex; the rest of the cortex processes sensory feeds and controls muscle movement. This is the main reason why "if we used 100% of our brain" claims are stupid.)
An image to explain this better. OK, it looks simplistic, but it's the best I found.
As you can see, the "higher brain functions" (decision, emotion, imagination) are a smallish part of a brain mostly dedicated to interfacing them with the outside world.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo
--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Artificial Intelligence: Why Would We Make One?
someone_else wrote: Well, if I wanted to nitpick, I could have said that even the "higher brain functions" are part of a larger organ and aren't intelligent on their own. They need an interface to sense and act on the external world; without one they are worse than useless.
Actually, now that I think about it, if I wanted to be nitpicky I could say that "higher cognitive brain function" is the definition of intelligence. Of course that is a nitpick: you can have higher cognitive functions without getting human-like cognitive functions at all. So in a sense we would be talking at cross purposes.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- cosmicalstorm
- Jedi Council Member
- Posts: 1642
- Joined: 2008-02-14 09:35am
Re: Artificial Intelligence: Why Would We Make One?
I started out thinking that transhumanism, AI, and the singularity were nonsense. Having read through most of the stuff produced by people like Vinge, Kurzweil, Yudkowsky, Bostrom, Sandberg (more locally, Starglider), etc., I have a very hard time dismissing the whole idea as nonsense. My impression is that a lot of older-generation sci-fi people who spent a lot of time figuring out how humans would colonize the galaxy are annoyed by the idea that human cognition might soon be about as relevant to cognition in general as the first replicator organisms are to modern DNA. To be relevant to this thread: will it be hard to map a human brain? Yeah, of course. Was it hard to build the first nuclear bomb? Yeah it was, so what. My idea is that if technological development is not halted permanently within the next century or less, this stuff* is bound to come about.
*E.g minds operating millions of times faster than current human minds, and likely much more efficiently.
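For what it's worth, the "millions of times faster" figure usually comes from a crude back-of-envelope switching-speed comparison along these lines. Both numbers are rough orders of magnitude, not measurements from any source in this thread, and raw switching speed is of course not the same thing as speed of thought:

```python
# Rough, commonly cited orders of magnitude -- illustrative assumptions only.
neuron_max_firing_rate_hz = 1e3      # fast biological neurons: ~1 kHz at best
transistor_switching_rate_hz = 1e9   # commodity silicon: ~GHz clock rates

speedup = transistor_switching_rate_hz / neuron_max_firing_rate_hz
print(f"Raw switching-speed ratio: {speedup:.0e}x")  # prints "Raw switching-speed ratio: 1e+06x"
```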
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Artificial Intelligence: Why Would We Make One?
cosmicalstorm wrote: Kurzweil,
You need to read PZ Myers on this guy; he is a textbook loon. Knowing why he is a loon reveals much about why the transhumanist (especially singularity-type) ideology is a load of crap -- the people behind it are (in my experience) almost uniformly computer science nerds who think they know everything, even stuff far outside their area of expertise like biology and engineering.
cosmicalstorm wrote: To be relevant to this thread, will it be hard to map a human brain? Yeah, of course. Was it hard to build the first nuclear bomb? Yeah it was, so what. My idea is that if technological development is not halted permanently within the next century or less, this stuff is bound to come about.
The problem is, people only remember the improbable technologies that did happen, like nuclear energy, and forget all the improbable technologies that didn't, like cold fusion and jetpacks. You cannot assume these technologies are inevitable by looking backwards -- in psychology we call this hindsight bias, and in logic we call it a fallacy.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Artificial Intelligence: Why Would We Make One?
I think a big part of it is that whatever else about our societies has changed, the basic human needs of approximately normal people haven't changed since the Stone Age: we want security, some reasonable degree of physical comfort, a family, and probably some varying degree of intellectual stimulation.
At a certain point, it gets tiresome hearing people go on at length about how all this is about to become irrelevant. "The Singularity" boils down to a future that I can't picture myself living in, which is kind of the point. But for people not involved in AI research and who don't feel a deep passion for talking about the idea of a post-human future that, by definition, doesn't really include them... it's tiresome, depressing, and likely to encourage fatalism.
I mean really, if in thirty or forty years we're all going to be uplifted/killed to make room for more paperclips/whatever by the AI Gods... the problem is so much larger than I am, and so utterly lacking in a clearly defined technical solution, that I might as well just try to have an approximately normal and happy existence for the next few decades and hope my particular bit part in the End of Human Civilization As We Know It isn't too horrible.
This isn't like "a giant comet is headed for Earth;" I know what we can do about that, at least in theory. Or like "global warming will have us up to our ass in seawater;" again, I know what we can do about that, at least in theory. This is a problem that arises more or less no matter what I do, unless our entire civilization self-destructs in a Luddite frenzy to avoid it. So beyond "keep my head down and hope the end of the world isn't too painful," what else is there for me to do? I mean, aside from writing checks to people who ask me for money, in exchange for which they promise to do work I can't check or evaluate for its usefulness?
And I don't want to have conversations which lead me to that conclusion over and over and over. It's depressing, and if I want to be depressed I have plenty of much more immediate things in my life to be depressed about. That doesn't mean it isn't true, it's just that it becomes a bloody nuisance when all casual conversations I partake in related to certain topics wind up getting hijacked in that direction.
I mean... in general, The Singularity looks, to the average human being, one hell of a lot like the prospect of everyone being eaten/turned into jello/whatever by the Elder Gods. It's not something I, or most people I know, want to build a life and a mindset around. Even if there's some opportunity to make it pleasant if we all pull together as a team... what kind of life would that be? No one seems to be able to answer that.
And, again, I'm not saying it isn't happening. I'm just utterly burned out on the idea that by the time I turn fifty the world won't have room for me.
This space dedicated to Vasily Arkhipov
- Darth Hoth
- Jedi Council Member
- Posts: 2319
- Joined: 2008-02-15 09:36am
Re: Artificial Intelligence: Why Would We Make One?
To sum up my impression of the "Singularity" phenomenon as promoted by Starglider et al (which is substantially akin to Simon's), it is essentially similar to Bible prophecy, only with monsters out of science fiction destroying humanity instead of angels and demons out of mythology. A vaguely defined, non-human but physically and mentally superior power promises to deliver vaguely defined bliss to its adherents . . . and a fate much worse than death to all who oppose its apostles. And, if we are to believe the doom prophets, there is absolutely nothing we poor sinful humans can do about it.
To an unbeliever, it might raise an eyebrow. To a believer, it is the Glorious Golden Age of the Future. But if you are an ordinary human being who is just easily enough intimidated to buy into the package without accepting the morality and theology behind it wholesale, it looks pretty damn terrifying.
I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt. I mean, they believe that their own research will render humanity obsolete, and possibly (hopefully, I would say, in such a scenario) extinct. In their imaginations, they are playing a part in the greatest crime that will ever even theoretically be committed in the Universe - and more than a few seem smugly proud of it.
In all the history of mankind, there has never been, nor will ever be, people any worse than the transhumanists. They rank right up there with the very worst excesses of the mad religious cults. And all I can do is hope that their evil gods will prove as imaginary as those that have come before them.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."
-George "Evil" Lucas
Re: Artificial Intelligence: Why Would We Make One?
I agree with the backlash against much of the transhumanist culture. The Singularity is little more than the 'nerd rapture' it's widely reputed to be.
In saying that, there are facets to that culture which deserve mention precisely because they reject the WASP computer-nerd tenets of mainstream Singularity worship:
Techno-progressivism
Institute for Ethics & Emerging Technologies:
Techno-progressives argue that technological developments can be profoundly empowering and emancipatory when they are regulated by legitimate democratic and accountable authorities to ensure that their costs, risks and benefits are all fairly shared by the actual stakeholders to those developments.
Techno-progressivism maintains that accounts of "progress" should focus on scientific and technical dimensions, as well as ethical and social ones. For most techno-progressive perspectives, then, the growth of scientific knowledge or the accumulation of technological powers will not represent the achievement of proper progress unless and until it is accompanied by a just distribution of the costs, risks, and benefits of these new knowledges and capacities.
The IEET's mission is to be a center for voices arguing for a responsible, constructive, ethical approach to the most powerful emerging technologies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a "technoprogressive" orientation.
We aim to showcase technoprogressive ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives—especially "human enhancement technologies"—the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use.
David Pearce's Hedonistic Imperative is ambitious to the point of Singularity magic in most regards, but at least the goal of eliminating suffering is more admirable than playing real-life MMORPGs in a solar mass of CPUs.
Despite the (unfortunate) mass adoption of 'upload into cyber-heaven' views, there are those out there who take a more balanced and ethics-focused position rather than focusing on techno-masturbation and uploading themselves into the iPad Matrioshka Brain Edition.
Disclaimer: I have no investment in any of these groups nor any defense of any mumbo-jumbo you may find on further exploration; I just wanted to add a new dimension to the discussion.
All those moments will be lost in time... like tears in rain...
Re: Artificial Intelligence: Why Would We Make One?
EDIT: double post.
All those moments will be lost in time... like tears in rain...
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Artificial Intelligence: Why Would We Make One?
Darth Hoth wrote:To sum up my impression of the "Singularity" phenomenon as promoted by Starglider et al (which is substantially akin to Simon's), it is essentially similar to Bible prophecy, only with monsters out of science fiction destroying humanity instead of angels and demons out of mythology. A vaguely defined, non-human but physically and mentally superior power promises to deliver vaguely defined bliss to its adherents . . . and a fate much worse than death to all who oppose its apostles. And, if we are to believe the doom prophets, there is absolutely nothing we poor sinful humans can do about it.
It's not that bad- mostly, the prophet-types sincerely believe that everyone will share the same fate; the believers and the unbelievers alike get eaten to make room for paperclips.
I mean, Singularitarianism really is a plausible, logical projection of the way the future might look; it is not pure fantasy, it is not on the same level as believing in fairies. My objections are more along the lines of "I'm tired of talking about it because I can't do anything meaningful about it with sufficient reliability to inspire confidence." This might be like people in the 1950s felt about nuclear war, except that in their case the predicted destruction was potentially imminent and would occur more or less randomly, whereas now the predicted destruction is definitely rather far off... but also asserted to occur with total inevitability.
Either way, though, at a certain point you just wish people would shut up about it; if I only have three decades to live before the end of civilization as we know it happens whether I like it or not, let me live those three decades.
Darth Hoth wrote:I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt. I mean, they believe that their own research will render humanity obsolete, and possibly (hopefully, I would say, in such a scenario) extinct. In their imaginations, they are playing a part in the greatest crime that will ever even theoretically be committed in the Universe - and more than a few seem smugly proud of it.
That depends on your definition of the agenda. It's honestly not that bad when you talk about the people who are sincerely trying to help people, or who really are legitimately concerned about the prospect of a future in which AI technology is used first for oppression and then, potentially, for massive destruction of civilization as we know it.
But then there are the sociopathic fucks, yes.
This space dedicated to Vasily Arkhipov
Re: Artificial Intelligence: Why Would We Make One?
Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Artificial Intelligence: Why Would We Make One?
Samuel wrote:Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
IIRC, Hoth subscribes to an extreme form of Humanism that wouldn't look out of place in the Imperium of Man-- humanity literally is the measure of all things to him, so replacing it with machines would be heretical to that morality. But then it's been a while since the last "first contact situation, LETS BLOW UP DEM ALIENS!" or animal rights thread, so I may be misremembering things.
Personally, Transhumanism to me is just a silly fantasy. As long as the Transhumanists aren't in your face about it, I'm cool with people who would like to become immortal furry machine gods. It's no more shameful than my fantasies of *censored due to sexually explicit content*.
It's when they start making bold claims and/or getting smug about how they are a Transhumanist and you aren't that I take offence. It's like when a new toy comes out and not everyone is interested, but you aren't "cool" unless you buy it. That seems to be the mindset with some of them, especially where mind uploading and promises of immortality are concerned. I'd be content with fixing the problems with my brain (attention span and whatever seizure disorder I've been saddled with), and I have no more existential fear of death than I have existential fear of sleep, so the emotional appeal isn't that strong with me. So you can imagine that having to deal with people who think you are a Luddite if you don't share their fantasies is a natural recipe for annoyance.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Artificial Intelligence: Why Would We Make One?
Darth Hoth wrote:I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt.
It's kind of cool that you assert that a group of people generally interested in permanently curing all humans of death, disease, and all forms of suffering and oppression are evil and sociopathic.
Looks like you have a promising career ahead of you as one of those generic sci-fi luddite antagonists who tries to repress all genetic/cybernetic/nano/whatever augmentation for the sake of ETERNAL HUMAN PURITY. I mean, wouldn't want to contemplate the possibility of anyone ever being better than we are now, you know?
my heart is a shell of depleted uranium
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Artificial Intelligence: Why Would We Make One?
Seggybop wrote:
Darth Hoth wrote:I cannot but consider the people who actually believe in the "transhumanist" agenda, and yet advocate it, utterly evil, sociopathic, and morally bankrupt.
It's kind of cool that you assert that a group of people generally interested in permanently curing all humans of death, disease, and all forms of suffering and oppression are evil and sociopathic.
Looks like you have a promising career ahead of you as one of those generic sci-fi luddite antagonists who tries to repress all genetic/cybernetic/nano/whatever augmentation for the sake of ETERNAL HUMAN PURITY. I mean, wouldn't want to contemplate the possibility of anyone ever being better than we are now, you know?
I think the problem here, and if you take a step back this becomes utterly obvious, is that Hoth's definition of "transhumanist" doesn't match yours.
The basic problem with transhumanism is the question: what happens to the people who don't want to be uploaded into the machine paradise, because they distrust the promises made by it? What happens to the people who don't want to tweak their children's DNA beyond recognition- or who, due to poverty or safety concerns, become the evolutionary late-adopters by a generation or two? And so on.
Given the way real economies and people work, that's liable to consist of the bulk of humanity... and yet there are far, far too many waving the 'transhumanist' banner whose only answer is "well, they're going to wind up marginalized on their own planet, much like chimpanzees are today." And they think that's good enough; they still work towards this end.
At that point, yes transhumanism does become actively evil- or at least a case of massive, genocidal negligence.
The vision of making yourself and your chosen elect into gods can, believe it or not, become something twisted and evil. It doesn't have to be, but it can.
And I'm pretty sure the twisted branch of that is what Hoth has come to identify as "transhumanism." Not "all attempts to solve problems or cure disease by the application of advanced technology," as you accuse. That's not what he's attacking; he's attacking the notion that it is somehow okay to marginalize and destroy everyone whose feet of clay won't stand the march to godhood.
Our own recently, happily ejected LionElJonson is the extreme example of this: all problems will be solved by uploading to the machine paradise, which will probably then depart the world in an Orion drive starship that conveniently burns up everything the Rapture leaves behind it. And yes he is an extreme example- the sociopathic little shit to beat all sociopathic little shits. But I can't blame Hoth if he's run into enough people who are like this to varying extents that it's soured him on the whole movement.
This space dedicated to Vasily Arkhipov
- Broomstick
- Emperor's Hand
- Posts: 28822
- Joined: 2004-01-02 07:04pm
- Location: Industrial armpit of the US Midwest
Re: Artificial Intelligence: Why Would We Make One?
Samuel wrote:Uh Hoth, it sounds as if you are against transhumanism because it will render humanity extinct. Transhumanism renders humanity extinct by having people improve themselves to the point they are no longer human. What is wrong with that?
Perhaps some people are most comfortable being human, even if they don't think humanity is perfect in its current form.
It puzzles me that people around here, by and large, would react with horror at some future plan that would convert all homosexuals into heterosexuals (or vice versa) but don't understand why someone might want to remain an imperfect human being. If being H. sapiens is a vital part of your personal identity then transformation into an arbitrarily "better" form means death - sure, something related to you continues onward, but it's not the "real" you. It's like saying it's OK to kill one of a pair of identical twins because, hey, they're identical, right? Well, no they're not. Likewise, for some people to become something other than human is tantamount to personal destruction, no matter how similar the "copy" that remains.
So... for those people I'd say it's wrong because it's forcing them to undergo something they find as distasteful as death, if not actually viewing it as death, or perhaps even worse than death. Under what ethical system would such a thing be acceptable?
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.
Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.
If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy
Sam Vimes Theory of Economic Injustice
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Artificial Intelligence: Why Would We Make One?
To put it another way, remember Starglider's take on all this:
The Friendly AI problem is explicitly advertised as being so transcendentally hard compared to the General AI problem that for most of us, it's difficult to imagine the former being solved first. And yet we have people telling us that yes, the first big AI on the planet will set the tone of all future existence.
Stop and think that through. Look at the trends of world geopolitics over the past two decades. Does anyone really believe there's a way to stop the first big AI on the planet from belonging to a corporation, or a bunch of politicoes in the pay of one, at this rate? Or to stop them from simply 'leaving behind,' in miserable conditions, all the people they don't need to maintain some arbitrary and increasingly meaningless dollar bottom line?
If we take claims about AI singularities seriously, the future we're looking at is, at best, a rather dark shade of cyberpunk. And there really is not a way to stop this short of mass populist Luddism while we still have the means to do so.
And yet there are people who seem, explicitly or implicitly, to announce that this is our future... while actively working to make it happen. What are we to think of people like that?
The same goes for nanotech, cybernetics, and the like. Whatever the technology is, you can bet the rich will get it first. If it turns them into gods, and gods have no real need of mortals, then the outcome is going to be horrific, given the collective sociopathy of the modern upper class, the way their preferred ideologies spit on any notion of a collective social contract or economic rights for the groundlings.
People who promise that this is in our future, are smart enough to figure this out, and still actively want to make it happen... what do you say about them?
Starglider wrote:
Destructionator XIII wrote:With a little luck though, the poverty ridden proleteriat will rise up and slaughter the AI owners, correcting this injustice. (or elect congresspeople who support *gasp* redistribution of wealth, but why vote when you can use murder?) But it's possible that they won't...
Ignoring the take-off problem again, this is where your mass-produced hordes of soulless robotic riot-suppression troops come in. Imagine, thousands of black storm-trooper like figures marching in perfect formation, clubbing down those dirty poor people with their electro-shock batons. The molotov cocktails are useless, the steel plated robots march right through the flames. No police unions to worry about, no pay-offs, loyalty or morale issues. Sure, a few get taken down by ramming with hijacked vehicles and improvised explosives, but that's acceptable losses, the downed units are quickly repaired or recycled. The terrorist druggie anti-freedom anti-American mob is soon rounded up, put in the automated trucks and sent to the automated underground internment centers.
Taking on the mentality of a sociopath ultra-capitalist for a moment, this vision brings a little tear of joy to my eye...
Now, Starglider himself isn't even slightly one of the offenders here. But this illustrates the nature of the problem, and the reason why people can understandably rebel against transhumanism entirely.
This space dedicated to Vasily Arkhipov
Re: Artificial Intelligence: Why Would We Make One?
That's only true if you upload into a computer body, Broomie. It's likely that many, many other options will be available due to high technology for those who'd like to keep their current bodies, thankyouverymuch.
Already we're extending people's lives and improving their quality of life a lot with cybernetic implants (it has now become possible to replace much or all heart function with an artificial implant). Genetic engineering is starting to trickle into the mainstream. It's not really death if you keep your body, just replace the aging bits with brand new fresh parts and make sure you're not going to waste away in a hospital bed due to Alzheimer's or cancer.
There are many, many possibilities for how the future will turn out, and the number of variables involved is too high to predict with any certainty the demographics and economic systems that will operate in, say, 300 years. But the change will most likely be gradual, not revolutionary, due to simple logistics. Just like manned and automated checkouts operate side by side, some people ride bicycles to work, and I have a microwave oven and a stove at home. Or, hell - a cell phone, a laptop and lots of paper and pencils.
Sure, yeah, the elite will dominate because they will have easy access to all those new technologies and improvements and whatnot. But let's be real here; the elite dominates anyway, and always has.
The darkest possible scenario has the elite walling themselves off in ivory towers, surrounded by armies of perfectly obedient soulless robots that cater to their every whim and sustain the infrastructure necessary to keep their masters living in comfort and luxury - and then proceeding to genocide everybody else, or stick them in ghettoes denied the basic things necessary for survival.
But how does this come about? The robots won't come out of nowhere; neither will the necessary factories and automated infrastructure to support them. Further complicating things is the fact that this sort of tech does not appear in the hands of only a few people; it will quickly disseminate around the world and may be implemented in an entirely different way two countries to the side.
Look at it this way: industrialization allowed unprecedented power to the nations that did it first, but spread around fast enough to make sure no one nation could conquer the entire planet. Even poorly industrialized nations were abused and kicked around for a while, but never conquered outright due to logistics and manpower issues.
And manpower applies to robotpower too, as resources are not unlimited. The disgruntled masses can still do lots of damage, especially if, say, Russia gives them loads of heavy weapons and explosives and their own combat robotoids and hackers...The elites would need to reach a "critical mass" of robotoid soldiers so that the entire prouction and power infrastructure is secure from organics attacking them, and that's not guaranteed to happen before the backlash makes it untenable.
Further complications include flunkies and subordinates of these sociopathic elites, who might sabotage their plans of opression.
Already we're extending people's lives and substantially improving their quality of life with cybernetic implants (it is now possible to replace much or all heart function with an artificial implant). Genetic engineering is starting to trickle into the mainstream. It's not really death if you keep your body: just replace the aging bits with brand-new parts and make sure you're not going to waste away in a hospital bed due to Alzheimer's or cancer.
There are many, many possibilities for how the future will turn out, and the number of variables involved is too high to predict with any certainty the demographics and economic systems that will operate in, say, 300 years. But the change will most likely be gradual, not revolutionary, due to simple logistics. Just as manned and automated checkouts operate side by side, some people ride bicycles to work, and I have both a microwave oven and a stove at home. Or, hell - a cell phone, a laptop, and lots of paper and pencils.
Sure, yeah, the elite will dominate because they will have easy access to all those new technologies and improvements and whatnot. But let's be real here: the elite dominates anyway, and always has.
Simon_Jester wrote:The same goes for nanotech, cybernetics, and the like. Whatever the technology is, you can bet the rich will get it first. If it turns them into gods, and gods have no real need of mortals, then the outcome is going to be horrific, given the collective sociopathy of the modern upper class, the way their preferred ideologies spit on any notion of a collective social contract or economic rights for the groundlings.
You know, the problem is that even if the first-ever AI belongs to a corporation, it's not going to *POOF* alter the established infrastructure to make that sort of mass oppression possible.
The darkest possible scenario has the elite walling themselves off in ivory towers, surrounded by armies of perfectly obedient, soulless robots that cater to their every whim and sustain the infrastructure needed to keep their masters living in comfort and luxury - and then proceeding to genocide everybody else, or stick them in ghettoes denied the basic things necessary for survival.
But... how does this come about? The robots won't come out of nowhere; neither will the factories and automated infrastructure needed to support them. Further complicating things, this sort of tech does not stay in the hands of only a few people: it will quickly disseminate around the world, and may be implemented in an entirely different way two countries over.
Look at it this way: industrialization gave unprecedented power to the nations that did it first, but it spread fast enough that no one nation could conquer the entire planet. Even poorly industrialized nations were abused and kicked around for a while, but never conquered outright, due to logistics and manpower issues.
And manpower applies to robot-power too, since resources are not unlimited. The disgruntled masses can still do lots of damage, especially if, say, Russia gives them loads of heavy weapons and explosives and their own combat robotoids and hackers. The elites would need to reach a "critical mass" of robotoid soldiers before their entire production and power infrastructure is secure from organics attacking it, and that's not guaranteed to happen before the backlash makes the whole scheme untenable.
Further complications include the flunkies and subordinates of these sociopathic elites, who might sabotage their plans of oppression.
JULY 20TH 1969 - The day the entire world was looking up
It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11
Signature dedicated to the greatest achievement of mankind.
MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
- cosmicalstorm
- Jedi Council Member
- Posts: 1642
- Joined: 2008-02-14 09:35am
Re: Artificial Intelligence: Why Would We Make One?
Formless wrote (in reply to a mention of Kurzweil):You need to read PZ Myers. This guy is a textbook loon. Knowing why he is a loon reveals much about how the transhumanist (especially singularity-type) ideology is a load of crap -- the people are (in my experience) almost uniformly computer science nerds who think they know everything, even stuff far out of their area of expertise like biology and engineering.
I don't believe in all of the singularity stuff, but machine intelligence and an intelligence explosion do seem very reasonable to me, provided technology continues to develop. I do not see a similarity to flying cars, jetpacks, and so on with regard to that subject, since we already know intelligence exists inside human skulls. Every other aspect of our body can be readily outdone by machinery, so why not the brain?
Formless wrote:The problem is, people only remember the improbable technologies that did happen, like nuclear energy, and forget all those improbable technologies that didn't, like cold fusion and jetpacks. You cannot assume that these technologies are inevitable by looking backwards -- in psychology we call this the hindsight bias, and in logic we call it a fallacy.
To be relevant to this thread, will it be hard to map a human brain? Yeah, of course. Was it hard to build the first nuclear bomb? Yeah it was, so what. My idea is that if technological development is not halted permanently within the next century or less, this stuff* is bound to come about.
To me that violates the principle of mediocrity, and it would seem a stunning coincidence if the current cognitive ability of humans just happened to be the absolute best kind of cognitive ability that can be produced in the universe.
You are right about hindsight bias, though; I've considered it, and in light of it I'm more skeptical about things like nanotechnology.
I also don't think that any advances in AI will necessarily produce a world of rainbows and ponies for us humans; I wouldn't be shocked if it simply kills us all and pursues something that would seem ridiculous from a human POV.
I'm also fully aware of the possibility that I might be completely wrong about everything; the future is a strange beast, it seems.