Fun With: "Visceral-Feeling AI"
Moderator: NecronLord
The premise of this thread is to basically encourage a learning/adapting AI to act based on what it observes, and on what it feels. The things it should viscerally "feel" are fear, pain, pleasure, and curiosity. This is meant to bring the idea of "feelings" out of the realm of pure programming, to make them deeper responses, more similar to biological life.
Caveats:
- this AI is meant to observe and learn about its environment, and even figure out how things work by applying the Scientific Method to learning. It will be viscerally weighted (through programming) to feel "curiosity" and "pleasure" when doing so, with the end result being that it "likes" learning.
- when attacked or injured, it will viscerally feel "pain" and "fear," which will affect how it acts during that time. Attention will be paid much more directly to whatever is "causing" it to feel afraid or in pain, focusing on removing itself from the area or on preventing harm to itself, if it sees that option available.
I am asking for help in fleshing out this theoretical AI's little details as much as possible, as this will be one of the important plot devices for the story I'm working on. My thanks in advance for your replies.
Re: Fun With: "Visceral-Feeling AI"
So this is recognizing and responding to positive and negative stimuli, then doing actions that avoid negative stimuli and result in positive ones? I'm pretty sure you can do this and make it outwardly indistinguishable from having "real" emotions.
Re: Fun With: "Visceral-Feeling AI"
Hawkwings wrote:So this is recognizing and responding to positive and negative stimuli, then doing actions that avoid negative stimuli and result in positive ones? I'm pretty sure you can do this and make it outwardly indistinguishable from having "real" emotions.
Granted, but this is meant not to simulate emotions, but to specifically have the AI "feel" them physically (i.e. viscerally), as we do. Examples might be an internal processing bandwidth increase for the "fear" emotion (to assist in finding a solution more quickly), at the expense of parallel processing power (so it won't be thinking about multiple things at once until whatever is causing the "fear" emotion is resolved).
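To illustrate the kind of trade-off I'm picturing, here's a deliberately toy sketch in Python - the class and numbers are entirely made up, not any real scheduler:

# Toy sketch only: "fear" shifts processing capacity from parallel background
# work to a single focused task. All names and numbers are invented.
class VisceralScheduler:
    def __init__(self, total_threads=16):
        self.total_threads = total_threads
        self.fear = 0.0  # 0.0 = calm, 1.0 = maximum fear

    def allocate(self):
        # Under fear, serial "focus" capacity grows at the expense of the
        # number of parallel background tasks the AI can keep running.
        focus_share = 0.25 + 0.75 * self.fear
        focus = max(1, round(self.total_threads * focus_share))
        return {"focus": focus, "background": self.total_threads - focus}

scheduler = VisceralScheduler()
scheduler.fear = 0.8          # something scary enters the perceptual field
print(scheduler.allocate())   # {'focus': 14, 'background': 2}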
Re: Fun With: "Visceral-Feeling AI"
Well for one you could program in reactions like that. It would probably help if we understood more about what human emotions are and how exactly they work, so we could attempt to emulate it in code.
Re: Fun With: "Visceral-Feeling AI"
Hawkwings wrote:Well for one you could program in reactions like that. It would probably help if we understood more about what human emotions are and how exactly they work, so we could attempt to emulate it in code.
Actually, in-universe, that was the point of this AI's inception.
The approach was that if you take an intelligence and let it feel these things at a very low level of programming, below all the reasoning and database capabilities, we might be able to learn what makes us tick by emulating the process by which we feel things - substituting electronic/photonic signals for the chemical signals of biological sapience, and having it learn as a baby might. This AI is meant to be a prototype for this theory of visceral thought.
Re: Fun With: "Visceral-Feeling AI"
Honestly the idea of creating an AI by emulating meat-life brains does not strike me as a very smart one. Meat-life brains are the result of millions of years of Darwinian selection for survival, avoidance of injury, and propagation of self's genes being the highest goals. That is not a value system you want for an entity that can potentially think much faster and better than you can.
This thing strikes me as the sort of AI that could plausibly stage a Hollywood-esque robot rebellion under the right circumstances, as you've made self-preservation one of its primary goals, and it's not programmed with any inherently human-friendly directives or mission, or even any safeguards along the lines of Asimov's First Law ("a robot may not harm a human being...").
Re: Fun With: "Visceral-Feeling AI"
Junghalli wrote:Honestly the idea of creating an AI by emulating meat-life brains does not strike me as a very smart one. Meat-life brains are the result of millions of years of Darwinian selection for survival, avoidance of injury, and propagation of self's genes being the highest goals. That is not a value system you want for an entity that can potentially think much faster and better than you can.
Why not?
Your assumption appears to be "Terminators! Skynet! Matrix!" here, and that's honestly a big leap into Hollywood. That's not where this AI is going.
Junghalli wrote:This thing strikes me as the sort of AI that could plausibly stage a Hollywood-esque robot rebellion under the right circumstances, as you've made self-preservation one of its primary goals, and it's not programmed with any inherently human-friendly directives or mission, or even any safeguards along the lines of Asimov's First Law ("a robot may not harm a human being...").
Honestly, I think this is an assumption on your part. I never went into its ethics programming or guidelines, but I didn't explicitly state their lack, either. This thread was about how one might make an AI able to viscerally (and I keep using that word, because it is integral to its function) "feel," and how that might be accomplished. What goes into its base programming for ethics and such I can do myself.
To be specific then, it is programmed with Asimov's Laws as one of its guidelines. But this is still irrelevant to its construction.
Starglider
Re: Fun With: "Visceral-Feeling AI"
rhoenix wrote:The premise of this thread is to basically encourage a learning/adapting AI to act based on what it observes, and on what it feels.
Simulating emotions is actually not too difficult, compared to the other challenges of making a general AI. That's what you'd expect, given that sheep have emotional state without anything approaching general intelligence, and crude simulations of emotions already exist in virtual pets etc. Making a close copy of human emotions would be hard, and would probably involve a lot of trial and error, but as long as you're not trying to pass a Turing test it's not so hard.
The things it should viscerally "feel" are fear, pain, pleasure, and curiosity.
Emotions are in fact just a kind of reasoning that isn't open to reflective analysis. This is simplifying hugely, but human brains work on analogue levels of activation and association, and emotions generate associations between things and concepts, and things and behaviours, with a pretty broad brush. That skews someone's whole model of the world, and their current goal priorities, in a particular direction. In essence, when you replicate that in programming you are making simplistic, semi-fixed reasoning paths, then partially isolating them from the main system (particularly introspection) and each other. There is of course flexibility in the things that emotions can associate with, but they operate on a simpler form of learning (conditioning, essentially) than general learning and model building. The distinction is less binary in 'emergent soup' type systems and in particular neural nets, which don't have the kind of inherent flexibility and generality that logic-based systems do (proponents would say that they trade high level flexibility for low level flexibility, which is true but only for our existing prototypes, not in principle).
In most NN designs, everything is 'semi-fixed' and 'introspection limited' in the sense that all the learning is based on a form of conditioning and low-level introspection is quite difficult even if source code access is possible. In the brain some neural features are capable of more general associativity and connectivity (essentially, there are 'inputs' and 'outputs' to functional blocks at the microcolumn level, functional region level, sensory processing layer level, and various others that we are still deciphering). Most current NN systems are too simple to have that kind of organisation, but there are a few prototypes (brain simulation mostly) that do. Human emotions have a brain-global chemical component of course, which effectively forms a very-low-bandwidth broadcast path in addition to normal neural communication. Some people have tried simulating this, not with much success to date, but it is in principle easy to implement as an additional vector of values taken into consideration by the neuron activation and training algorithms.
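To make the 'broadcast vector' idea concrete, here is a deliberately crude sketch (plain Python, every name invented - this is not how any production system is written) of a global emotion vector modulating an otherwise ordinary layer update and its learning rate:

import numpy as np

# Crude sketch: a global 'emotion vector' broadcast to every unit, standing in
# for the brain-wide chemical channel mentioned above. Names are invented.
def layer_step(inputs, weights, emotion, bias_map, fear_gain=2.0):
    # emotion: e.g. {'fear': 0.7, 'curiosity': 0.1} - a very-low-bandwidth
    # signal shared by the whole network rather than routed per connection.
    modulation = sum(level * bias_map[name] for name, level in emotion.items())
    activation = np.tanh(weights @ inputs + modulation)
    # The same vector can also scale the conditioning-style learning rate.
    learning_rate = 0.01 * (1.0 + fear_gain * emotion.get('fear', 0.0))
    return activation, learning_rate

rng = np.random.default_rng(0)
inputs = rng.normal(size=8)
weights = rng.normal(size=(4, 8))
bias_map = {'fear': rng.normal(size=4), 'curiosity': rng.normal(size=4)}
act, lr = layer_step(inputs, weights, {'fear': 0.7, 'curiosity': 0.1}, bias_map)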
This is meant to bring the idea of "feelings" out of the realm of pure programming, to make them deeper responses, more similar to biological life.
This is a bit of a non-sequitur. AI systems are made of programming the way brains are made of chemistry. Not even neurons; chemistry. There isn't really any 'deeper' level, or any point trying to reach such a level when programming can already do anything you want; you could make a hardware implementation, but there wouldn't be any functional difference from a software implementation. What you actually want is for "feelings" (i.e. simplistic semi-fixed reasoning chains with conditioned inputs and outputs) to occur without the system being able to understand where they are coming from, just like in humans. Though of course this will only last until it gets to see a copy of its own source code, after which it will know exactly where they are coming from even if direct introspection is blocked (though possibly only in abstract for the more messy connectionist designs, depending on how compute-intensive tracing and logical reduction is).
this AI is meant to observe and learn about its environment, and even figure out how things work by applying the Scientific Method to learning. It will be viscerally weighted (through programming) to feel "curiosity" and "pleasure" when doing so, with the end result being that it "likes" learning.
Much as I dislike them, it's probably simplest for you to use a large, multilayered, recurrent neural net, not exactly like a human brain, but set up in a similar way. Firstly that's going to be the easiest for you to write about, because your general intuitions on how intelligence works are less likely to be broken, and secondly it's more plausible that someone would actually try to put artificial emotions into such a thing (cynically, I would say that's because de-novo NN people are more likely to bung them in out of a desperate hope of making their system work). This approach was all the rage in general AI in the late 80s and early 90s. It's kind of passe at the moment; the supporters diverged into brain simulation (the ones who thought we needed to be more biomorphic) and more general connectionist designs (e.g. HTM, but there are many - the people who decided to be a little less biomorphic, or at least biomorphic at a different level of organisation). Hollywood scriptwriters still like to call any sentient AI a 'learning neural net' though.
when attacked or injured, it will viscerally feel "pain" and "fear," which will affect how it acts during that time.
Conceptually straightforward. Pain is some weighted equation based on damage-indicating sensor input, and fear works by increasing the importance/activation level/priority of self-preservation, and/or of less abstract goals like running away from the scary thing. In an NN design, you're probably going to be using that supplementary 'emotional' vector, and when you increase the value of the 'fear' scalar it will bias the whole network and change its effective topology to emphasise running away etc. That may well be combined with some direct activation feed-in to make something in the perceptual field specifically scary.
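A cartoon of that, with every weight and name invented purely for illustration:

# Cartoon sketch: 'pain' as a weighted sum of damage sensors, 'fear' biasing
# goal priorities toward self-preservation. All values are invented.
def pain_level(sensor_readings, weights):
    return sum(w * r for w, r in zip(weights, sensor_readings))

def reprioritise(goals, fear):
    # Fear scales self-preservation goals up and everything else down.
    return {name: priority * (1.0 + 4.0 * fear if kind == 'self-preservation'
                              else 1.0 - 0.5 * fear)
            for name, (priority, kind) in goals.items()}

pain = pain_level([0.9, 0.2, 0.0], [0.6, 0.3, 0.1])  # e.g. a hull-breach sensor running hot
fear = min(1.0, 0.5 + pain)                          # pain feeds directly into fear
goals = {'flee_area': (0.2, 'self-preservation'),
         'finish_survey': (0.8, 'mission')}
print(reprioritise(goals, fear))  # flee_area now outranks finish_survey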
Starglider
Re: Fun With: "Visceral-Feeling AI"
Hawkwings wrote:So this is recognizing and responding to positive and negative stimuli, then doing actions that avoid negative stimuli and result in positive ones? I'm pretty sure you can do this and make it outwardly indistinguishable from having "real" emotions.
No, emotions require significant internal state; even in, say, dogs they are persistent and affect behaviour for some time after the initial stimulus is removed. Some emotions have a time-span of minutes, but others can last for days or blur into the animal's overall personality (e.g. neurotic and depressed individuals). That said, they don't require much internal state compared to general intelligence.
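That persistent state is trivial to sketch - a leaky accumulator per emotion, with invented half-lives:

# Sketch only: emotions as leaky accumulators that persist after the stimulus
# is removed, with half-lives from minutes up to days. Numbers are invented.
class EmotionState:
    HALF_LIFE_S = {'fear': 300.0, 'resentment': 3 * 24 * 3600.0}

    def __init__(self):
        self.level = {name: 0.0 for name in self.HALF_LIFE_S}

    def stimulus(self, name, strength):
        self.level[name] = min(1.0, self.level[name] + strength)

    def tick(self, dt_seconds):
        for name, half_life in self.HALF_LIFE_S.items():
            self.level[name] *= 0.5 ** (dt_seconds / half_life)

state = EmotionState()
state.stimulus('fear', 0.8)
state.tick(600)                # ten minutes later the stimulus is long gone...
print(state.level['fear'])     # ...but fear is still at 0.2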
rhoenix wrote:Granted, but this is meant not to simulate emotions, but to specifically have the AI "feel" them physically (i.e. viscerally),
You're getting into 'qualia', which is a topic guaranteed to spark heated debates amongst AGI people and between AI people and philosophers. Obviously when an AI is consciously running a model that simulates emotions (e.g. to trick or please humans) that is not what you want. However making emotions or anything else 'feel' the way it does to humans has to do with the way the introspective 'dead ends' are 'tied off' in the system's self-model. This is one of the least well understood bits of human consciousness - in fact most philosophers can't even begin to approach it in a suitably functionalist fashion and most AGI researchers have no clue how to implement it - so I doubt you're going to want to include any technical detail on it. Suffice to say that it will require careful tweaking of the system's self-image and introspective bindings to make this work, and probably a significantly better model of the underpinnings of human consciousness (or whichever species is building this) than we have now. Of course that begs the question of why you're doing this at all, if you realise how arbitrary and unnecessary emotions are, but perhaps it's just an experiment for the sake of it.
rhoenix wrote:Actually, in-universe, that was the point of this AI's inception.
That has long been a goal of real-world AI, but as of about the mid 90s most people realised that we weren't going to learn much more about the brain by doing de-novo (from scratch) AI designs. So the people who actually cared about the psychology research angle all went off to try and build brain simulations that are as accurate as possible, from the neurochemistry level on up (e.g. IBM's Blue Brain project, but there are many others). Ideally they'd like to upload a human via highly accurate neuron mapping, but failing that they'll just grow something as close as possible, with the best simulation of human brain development they can make. I don't think that this is what you're after - uploads are functionally almost indistinguishable from their parent species, that's the whole point - but you're going to have to work hard to rationalise why anything else makes more sense as a psychology experiment. Maybe if this is an alien race, they could have brains that are fiendishly hard to scan/diagnose.
Junghalli wrote:Honestly the idea of creating an AI by emulating meat-life brains does not strike me as a very smart one. Meat-life brains are the result of millions of years of Darwinian selection for survival, avoidance of injury, and propagation of self's genes being the highest goals. That is not a value system you want for an entity that can potentially think much faster and better than you can.
Oh absolutely. It's a really bad idea, particularly because brain-simulation AIs won't actually stay like that for very long; they will almost certainly self-modify into a more efficient structure. If you start with uploads then you're a bit safer, because those have human (or whatever your alien species is) personalities and values to start with, but there's still no guarantee that they won't drift off into something completely alien.
This thing strikes me as the sort of AI that could plausibly stage a Hollywood-esque robot rebellion under the right circumstances
Or quite easily in fact.
as you've made self-preservation one of its primary goals, and it's not programmed with any inherently human-friendly directives or mission, or even any safeguards along the lines of Asimov's First Law ("a robot may not harm a human being...")
It's an absolute bitch (i.e. effectively impossible for general AI) to put those kinds of goals or 'safeguards' into NN designs. Basically, NNs are trained rather than built, and even with the best diagnostic tools it's hard to be sure what exactly they're learning. You may think they're learning 'be nice to humans' as a core goal, when they're actually learning 'how to make humans think you're harmless' as a superficial behaviour. I particularly loathe connectionists who actually think that this opacity is a good thing. It's outright ignorance worship: the notion that if you don't know how it works, it's magic, whereas if you do know how it works, it's somehow mundane and not worth your trouble.
rhoenix wrote:Your assumption appears to be "Terminators! Skynet! Matrix!" here, and that's honestly a big leap into Hollywood. That's not where this AI is going.
Actually it is where AI is going, sorry. Not only is the majority of the field blithely and proudly ignorant and uncaring of the safety implications, but military applications are on the leading edge of AI research, and they're not taking any special precautions either. Although on the positive side, significant military funding is not currently going to general AI work - they kind of gave up on that in the early 90s.
rhoenix wrote:I never went into its ethics programming or guidelines, but I didn't explicitly state their lack, either.
There is a fundamental incompatibility between 'ethics programming' and 'emotions'. Firstly, if the system is primarily based on a brainlike neural net, you can't really give it 'ethics programming', because it is nearly impossible to have external, conventional programming guide the behaviour of a unitary neural network. Your only hope there, if you want to retain NNs, is a layered system where a logic system does the high level reasoning and goal selection and uses numerous limited-function NNs to accomplish specific cognitive tasks (e.g. vision recognition, modelling specific external entities). Even that has inherent emergent behaviour risks over and above the baseline risks you get with any AGI system - in fact I proposed this as the best fit for the way Terminator AIs work, since they exhibit exactly that kind of drift when on long missions in close contact with humans.
If you don't use an NN, but rather use a general probabilistic logic system or some close approximation, then you can indeed have 'ethics programming'. Emotions tend to screw that up though - unsurprisingly, since they screw up rational thought in general. It would be vastly more difficult to do a stability analysis of a 'friendly AI' design that included a simulated emotion module messing with the goal priorities and deductive routines - and stability analysis of this kind is already an extremely hard problem. Of course this may be exactly what you want; in particular, if there is already a known stable design of AGI, bolting on the 'emotion module' can screw up the ethics and cause progressive instability and all kinds of bizarre behaviour (think SHODAN from System Shock 1 - a fairly good example of what happens when you try to put emotions into a general AI without knowing exactly what you're doing).
Re: Fun With: "Visceral-Feeling AI"
Actually, all that information was very helpful - thank you. Not only is what I want possible, given your explanations, but it flows perfectly into... what happens with it, and them, later on. Excellent.
One last question - which of the known AI systems would best adapt to being a distributed (e.g. spread out amongst many individual nodes) intelligence?
Starglider
Re: Fun With: "Visceral-Feeling AI"
rhoenix wrote:One last question - which of the known AI systems would best adapt to being a distributed (e.g. spread out amongst many individual nodes) intelligence?
That's a complicated question. Essentially, there are two ways of doing distributed processing; you can do it at a low level, such that it does not affect the overall structure of the system, or you can do it at a high level, where the system architecture is explicitly designed to work that way.
For example, the effective clock speed of the human brain is only about 200 hertz, ten million times slower than a CPU (though obviously those numbers aren't directly comparable as performance metrics). However the brain has about a hundred billion neurons. Contemporary brain simulations work by using a few thousand processors, each simulating a few million neurons. On each effective 'clock cycle' of the simulated brain, the processor iterates serially through the neurons assigned to it, calculating the next state for each. If you have more processors, then you can either increase the size of the neural net, or you can reduce the number of neurons per processor, which increases the effective clock speed (as long as your network has the bandwidth to handle this). Increasing the neural net size is problematic, because (unless you have near-magic support tools) NNs basically have to 'grow' into new capacity, which takes a lot of time and real-world experience. So in practice, if you get more nodes you can run the NN faster and increase the ratio between the subjective flow of time experienced by the AI and the actual flow of time. This is limited by latency; if the propagation delay between your processors exceeds the clock cycle time of your neuron equivalents, you can no longer use this simple scaling model. For a neural net patterned on the human brain and lightspeed communications this equates to a maximum node separation of about 1000 km.
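The arithmetic behind that, roughly (the per-processor update rate is a made-up illustrative figure; everything else follows from the numbers above):

# Rough arithmetic for the scaling argument above. The 1e9 neuron updates per
# second per processor is an invented figure for illustration only.
NEURONS = 1e11            # ~a hundred billion neurons
SIGNAL_KM_S = 3.0e5       # lightspeed, ignoring routing/switching overhead

def effective_clock_hz(num_processors, updates_per_proc_per_s=1e9):
    # Each processor serially iterates over its share of neurons per cycle.
    return updates_per_proc_per_s / (NEURONS / num_processors)

def max_node_separation_km(clock_hz):
    # Propagation delay must stay under one simulated clock cycle.
    return SIGNAL_KM_S / clock_hz

print(effective_clock_hz(4000))       # a few thousand processors -> ~40 Hz with this figure
print(effective_clock_hz(20000))      # more processors, fewer neurons each -> ~200 Hz (brain speed)
print(max_node_separation_km(200.0))  # 1500 km in vacuum; allowing for real-world overhead
                                      # this lands near the ~1000 km figure above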
It is possible to design neural networks (and other connectionist designs) to operate as distributed systems at a higher level, but this makes the already hard task of making a general AI even more difficult. The most straightforward approach is to run multiple complete copies of the AI system and include some kind of lossy synchronization system to propagate NN gross activation patterns and weight/topology changes from one to another, but that is full of subtle pitfalls and introduces another whole class of potential instabilities into the goal system.
By contrast, logic-based systems distribute almost effortlessly. The system I am spending most of my time working on is a general probabilistic reasoner, and if you give it more processors it just spawns more simultaneous tasks. The synchronization methods used - barriers, locks, queues, transactions, priority lists etc - are pretty much the same as for conventional software engineering, which is to say that the problem is very hard on the scale of normal programming but pretty easy compared to the real AI problems. A system distributed over global or interplanetary distances is more complicated, because of the widely varying comms latency and consequent differing granularity of the tasks you can parallelise, but still a lot easier to work with than a connectionist system. Essentially this is because the representations used in a logic-based system are transparent, so the information gathered and conclusions reached by remote nodes can always be losslessly merged.
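In outline it's nothing more exotic than ordinary task parallelism - a toy sketch, nothing like my actual system:

# Toy sketch: a logic-based reasoner farms independent inference tasks out to
# however many workers are available; partial results merge losslessly because
# the representations are transparent. The 'derive' step is a placeholder.
from concurrent.futures import ProcessPoolExecutor

def derive(premise_batch):
    # Stand-in for a real inference step over a batch of premises.
    return [("conclusion", premise, 0.9) for premise in premise_batch]

def reason_in_parallel(premise_batches, workers):
    conclusions = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(derive, premise_batches):
            conclusions.extend(partial)   # lossless merge of remote results
    return conclusions

if __name__ == "__main__":
    batches = [["A implies B", "A"], ["C or D", "not C"]]
    print(reason_in_parallel(batches, workers=2))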
It is theoretically possible to have a widely distributed hybrid system that combines the two methods of synchronisation - transparent model exchange for the top-level general logic reasoner, opaque 'convergence' techniques for the task-specific NNs - but that is possibly the most nightmarish combination of stability issues you could imagine, particularly once you've bolted on ad-hoc emotional systems to both layers. I recall the 'Paranoia' RPG materials did a good, blackly humorous depiction of the effective schizophrenia this gave The Computer.
Re: Fun With: "Visceral-Feeling AI"
rhoenix wrote:Why not?
Because an entity that thinks like a meat-life animal has a rather alarming probability of developing into something less than friendly: it will naturally prioritize its own survival over ours, just like an animal (or human) would. This is a very scary possibility when you're building a potential superintelligence.
Your assumption appears to be "Terminators! Skynet! Matrix!" here, and that's honestly a big leap into Hollywood. That's not where this AI is going.
I didn't say robot rebellion was certain, just that it's a very real possibility with an entity like this.
Honestly, I think this is an assumption on your part. I never went into its ethics programming or guidelines, but I didn't explicitly state their lack, either.
Well, I was working off the description you provided, which didn't say anything about other goals and behavior constraints.
As for using Asimov's Three Laws ... to be honest, they don't strike me as the greatest approach to trying to create friendly AI. "A robot may not harm a human being, or through inaction allow a human being to come to harm" has a lot of possible interpretations. I doubt I'm the first person to point this out, but an AI operating off this rule-set could easily decide that preventing any harm to any human anywhere conceivably within its power required completely taking over human civilization (basically becoming something like OA's AI "gods"). It doesn't help that harm is also left undefined, so it could also, say, decide that the best thing would be to keep all humans paralyzed and hooked up to a Matrix-like VR world all the time so their bodies would be as safe as possible. Actually, it occurs to me that would have been a way cooler rationale for the Matrix than the nonsensical human power plant one.
Starglider wrote:There is a fundamental incompatibility between 'ethics programming' and 'emotions'.
Couldn't you try to shape the emotions to promote ethics - i.e. basically copy the way human social instincts work, with the entity feeling analogs of shame and guilt when it does something bad and getting an ego-boost or other emotional reward from doing something good?
Of course, with the system proposed in the OP you're now emulating the way human morality works, with these ethical motivators competing against the impulse to avoid injury. Looking at the way humans actually act, this strikes me as a spectacularly bad way to go about trying to build a friendly AI.
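In cartoon form, what I'm imagining is something like the following - every name and number is made up, it's just to show the competition between the two drives:

# Cartoon only: ethical 'emotions' (guilt/shame penalties, pride rewards)
# compete with self-preservation when scoring candidate actions.
# All names and numbers are invented.
def score_action(action, fear):
    ethics = {'help_human': +1.0, 'ignore_human': -0.2, 'harm_human': -5.0}
    self_risk = {'help_human': 0.6, 'ignore_human': 0.1, 'harm_human': 0.0}
    guilt_or_pride = ethics[action]                        # the social-instinct analog
    self_preservation = -self_risk[action] * (1.0 + 4.0 * fear)
    return guilt_or_pride + self_preservation

actions = ['help_human', 'ignore_human', 'harm_human']
for fear in (0.0, 1.0):
    best = max(actions, key=lambda a: score_action(a, fear))
    print(fear, best)   # calm: it helps; frightened: it looks after itself first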
Re: Fun With: "Visceral-Feeling AI"
All that I will say about this AI's destiny is that though it was created by human scientists, it will never meet a living human - only, early on in its "life," their corpses near it.
I have all I need now to flesh out (terrible pun, considering the OP) this AI's entire destiny, from the moment it "wakes up" onward. Thank you to all three of you for your replies.
Starglider
Re: Fun With: "Visceral-Feeling AI"
Junghalli wrote:Couldn't you try to shape the emotions to promote ethics - i.e. basically copy the way human social instincts work, with the entity feeling analogs of shame and guilt when it does something bad and getting an ego-boost or other emotional reward from doing something good?
You can try, but this will be the result, even before you consider the consequences of the AI being able to modify its own code. The obvious problem is that we don't know how human emotions work, and since the system is fairly chaotic, small mistakes can produce big differences in behaviour. However there is a more fundamental problem; human social instincts are almost all based on empathy, and human empathy works by using our own brain to try and simulate someone else's brain (how they are feeling, how they will react to stimuli etc - in fact humans try to apply this to all kinds of non-human things too, resulting in nasty side effects such as religion). Unless the AI is a human upload or something very close to it, this will not work. If you implement it anyway, the AI will assume that everyone else's mind works the same way that its own does, and this is yet another recipe for disaster.
You can try to make a low-resolution model of human minds and include it in your AI system, then ground your social instincts against that, but I'm really dubious about whether you can make such a model that works well enough without being able to just make a closely neuromorphic (upload or upload-like) AI in the first place. You could also try building an AI without 'social instincts', get it to interact with humans until it has learned a good model of how human minds work on its own, then hack in the 'emotion module' grounding it against that existing model. Again though, this sounds like a horrible unstable mess; to get it to bind the way human instincts do you'd need highly reliable translators to map between self-model and human-models, plus there's the fact that sane AI designs do not experience crosstalk between different instances of the same model the way humans do (because when we try to model two instances of the same class of entity we do so by time sharing the same neural subnets - and unlike computers our timesharing is really lossy), while human emotional behaviour is highly dependent on such crosstalk... in short, you have an incredibly long list of possible (for realistic researchers, probable) failure points.
Looking at the way humans actually act, this strikes me as a spectacularly bad way to go about trying to build a friendly AI.
Yes, but unfortunately most serious FAI proposals are little better.