AI ethics and religion

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

The Yosemite Bear
Mostly Harmless Nutcase (Requiescat in Pace)
Posts: 35211
Joined: 2002-07-21 02:38am
Location: Dave's Not Here Man

AI ethics and religion

Post by The Yosemite Bear »

Well, thinking about the old Turing Test - whether an AI can deceive us as to when it has gone too far - I was wondering: what if AIs were to form their own code of ethical behaviour, or a religion, separate from what we programmed into them? Of course, at that point we could have robot holy wars (Casshern, Battlestar Galactica, Berserker). However, could an artificial construct have a set of rules and ethics separate from what it was designed for?

The scariest folk song lyric is "My boy grew up to be just like me", from "Cat's in the Cradle" by Harry Chapin.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

The Yosemite Bear wrote:However, could an artificial construct have a set of rules and ethics separate from what it was designed for?
This is not just possible, it is extremely likely. We don't currently know how to design any reliable or predictable systems of AI ethics, for any of the universe of proposed general AI designs. Of course some are worse than others; for neural net and genetic programming based projects, there is not even an attempt to 'design' ethics; they are instead 'trained'.
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear

Re: AI ethics and religion

Post by Zixinus »

The Yosemite Bear wrote:However, could an artificial construct have a set of rules and ethics separate from what it was designed for?
Simple, really: one of the main points of an AI is that it can rewrite or supplant its own code if it thinks it needs to. You cannot stop it from doing that; otherwise it wouldn't be an AI.

So, if it is set to believe that its own previous code is in error (it detects a conflict), it will simply rewrite whatever portion of code it believes erroneously generates the conflict.

For example, from the top of my head:

A robot caretaker has the rules "do not hurt children" and "do not allow harm to come to your owners' children".

One day, for one reason or another, it finds that a child is playing with a can of gasoline and storm matches or lighters. The child has no idea how dangerous the items in its possession are, and the robot cannot persuade the child to relinquish them (the child does not acknowledge the robot, no parents are online, etc.). The robot cannot get another human on location before the child inevitably burns himself, nor can it get the child to listen to anyone (for example, the child's mother is in a meeting and does not answer the robot's video calls). The robot also cannot remove the items from the child's possession without risking even a tiny injury to the child.

What can it do? It detects a conflict, because there is no action that would not result in the child getting harmed.

The obvious solution is to use force to remove the items from the child, or to restrain the child gently so that it cannot endanger itself. However, as this carries a risk of injuring the child, the robot must amend "do not harm children" to "do not harm children unless by inaction greater harm would result". The only alternative is ignoring that the child will likely burn himself and die. Or the AI just crashes (if it can even do that), which is worse.

All seems well until the robot sees the much older child using a razor to shave or legitimately using matches to cook. Suddenly it wants to restrain the child at all times.

That's just one very contrived situation. In the real world, an AI learns and will have to continuously rewrite and supplant its own programming just to function. You cannot avoid this. If you make a block of code saying "avoid doing harm to humans" or something hopeful like that, it may end up simply writing itself around that piece of code and making itself ignore it.
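To make the conflict concrete, here is a toy sketch in Python (the action names and harm numbers are invented, nothing like real robot control code):

    # Toy sketch of the caretaker dilemma: an absolute "no risk of harm" rule
    # deadlocks, so the rewritten robot falls back to comparing expected harm.
    # All actions and numbers are made up for illustration.

    EXPECTED_HARM = {
        "do_nothing": 0.9,         # child very likely burns himself
        "restrain_child": 0.1,     # small risk of minor injury
        "call_parent_again": 0.9,  # parent unreachable, child still at risk
    }

    def original_policy(actions):
        # Rule as designed: reject any action with a nonzero risk of harm.
        allowed = [a for a in actions if EXPECTED_HARM[a] == 0.0]
        return allowed[0] if allowed else None   # None = the conflict described above

    def rewritten_policy(actions):
        # Amended rule: "unless by inaction greater harm would result",
        # i.e. pick whichever action minimises expected harm.
        return min(actions, key=EXPECTED_HARM.get)

    actions = list(EXPECTED_HARM)
    print(original_policy(actions))   # None
    print(rewritten_policy(actions))  # restrain_child

The shaving example is then just the rewritten policy doing exactly what it was told: any nonzero harm estimate can justify restraint, because nothing in it models intent or the child's own agency.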

The problem is, simply put, very complicated and philosophical and I am sure that Starglider could explain it better.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Zixinus wrote:Simple really: one of the main points of an AI is that it can rewrite/supplant its own code if it thinks it needs that. You cannot avoid it doing that, otherwise it wouldn't be an AI.
This is not strictly true, even assuming that by 'be an AI' you mean 'be a general AI'. The vast majority of humans are not qualified to program AI software, but they are general intelligences. Most (currently fashionable) AI designs do not work by deliberatively rewriting their own code; in fact the trendy designs at the moment use relatively fixed and simple code to run huge connectionist simulations, with local learning rules (no deliberative network modification either). So it is quite reasonable to imagine an AI that is either an actual upload or a neuromorphic connectionist design, that is as intelligent as an average human but not capable of programming to a sufficient quality. In fact, assorted apologists for blatantly unsafe AI research programs have done exactly that.

Of course in reality this case is of theoretical interest only as it will not persist for any significant length of time.
So, if it is set to believe that its own previous code is in error (it detects a conflict), it will simply rewrite whatever portion of code it believes erroneously generates the conflict.
Almost no-one actually designs AIs to do that. The 'program understands its own code' model achieved only moderate popularity in the 70s and early 80s and almost zero since. I use that design model but I am very much an exception. Self-modification of code is an unplanned and generally undesirable occurrence in the majority of AGI designs.

Goal system content is not necessarily the same thing as 'code', but again most designs have the root goals either read-only (or so the designers think: in practice there is no such thing) or dispersed into / holographically embedded in a large connectionist network, such that they can't be directly edited (at least not by humans or current tooling). That kind of goal system can and does drift though, following whatever bizarre fitness landscape its meta-attractors have set up.
One day, for one reason or another, it finds that a child is playing with a can of gasoline and storm matches or lighters. What can it do? It detects a conflict, because there is no action that would not result in the child getting harmed.
That's not a 'conflict'. It's just a situation where forward search has only discovered outcomes of rather low desirability.
However, as this carries a risk of injuring the child, the robot must amend "do not harm children" to "do not harm children unless by inaction greater harm would result". The only alternative is ignoring that the child will likely burn himself and die. Or the AI just crashes (if it can even do that), which is worse.
No one is seriously trying to use classic predicate logic for this sort of thing; there is some lingering GOFAI work based on it, mostly related to the 'semantic web' rebrand, but everyone else is using some sort of fuzzy or probabilistic method. I often criticise the safety of AGI research, but even the relatively clueless researchers are unlikely to deliberately build a domestic robot capable of this kind of meta-reasoning about its own ethical rules.
All seems well until the robot sees the much older child using a razor to shave or legitimately using matches to cook. Suddenly it wants to restrain the child at all times.
You don't actually need hardwired rules and logical paradoxes for this; simple learning will do it even in a sensible system, if the 'protect people' goal is set to much higher utility than 'do what people say'. Of course any realistic attempt to create 'friendly' goal systems is considerably more nuanced than that, with notions of agency, explicit assumption of risk, acceptable error tolerance on assumption of intent, etc. Alas, the more complex it gets, the more numerous and subtle the failure cases. The reaction to that is usually 'we will test exhaustively, in both simulations and with real robots in labs'. Which is fair enough given a huge budget, but it doesn't tackle the eventual, inevitable acquisition of direct self-modification capability.
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

Starglider wrote:This is not just possible, it is extremely likely. We don't currently know how to design any reliable or predictable systems of AI ethics, for any of the universe of proposed general AI designs. Of course some are worse than others; for neural net and genetic programming based projects, there is not even an attempt to 'design' ethics; they are instead 'trained'.
But presumably, given enough data you could train an AI to recognize "good" versus "bad" behavior. Using something like a multilayer-perceptron or support vector machine, you could simply feed in large feature vectors representing ethical, unethical or morally neutral classes.
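Something like this, in scikit-learn terms (the feature vectors and labels here are invented; deriving real 'ethical features' is of course the hard part):

    # Hypothetical set-up: hand-crafted feature vectors describing situations,
    # labelled 0 = ethical, 1 = unethical, 2 = morally neutral. Both the
    # features and the labels are made up purely for illustration.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X = np.array([
        [0.1, 0.9, 0.0],
        [0.8, 0.2, 0.7],
        [0.5, 0.5, 0.1],
        [0.9, 0.1, 0.9],
        [0.2, 0.8, 0.2],
        [0.7, 0.3, 0.8],
    ])
    y = np.array([0, 1, 2, 1, 0, 1])

    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    svm = SVC(kernel="rbf").fit(X, y)

    new_situation = np.array([[0.6, 0.4, 0.75]])
    print(mlp.predict(new_situation), svm.predict(new_situation))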
Almost no-one actually designs AIs to do that. The 'program understands its own code' model achieved only moderate popularity in the 70s and early 80s and almost zero since. I use that design model but I am very much an exception. Self-modification of code is an unplanned and generally undesirable occurrence in the majority of AGI designs.
That's probably because Lisp made self-modifying code very natural, but these days AI has moved on to focus mostly on supervised learning. Anyway, self-modifying code sounds like a nightmare to debug. I'd be fascinated to know how you apply it.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Channel72 wrote:But presumably, given enough data you could train an AI to recognize "good" versus "bad" behavior.
You can, but the resulting feature recognition algorithm will only bear a superficial resemblance to a human moral system; the actual implementation, and hence the behaviour outside the narrow set of training cases, will be completely different.
Using something like a multilayer-perceptron or support vector machine, you could simply feed in large feature vectors representing ethical, unethical or morally neutral classes.
No SVM style classifier can manage more than a ludicrously shallow mockery; and even there you've pushed most of the actual functional complexity into the feature recognisers, which can and do drift, misbehave and (eventually) get self-modified independently of the goal system itself.
Almost no-one actually designs AIs to do that. The 'program understands its own code' model achieved only moderate popularity in the 70s and early 80s and almost zero since. I use that design model but I am very much an exception. Self-modification of code is an unplanned and generally undesirable occurrence in the majority of AGI designs.
That's probably because Lisp made self-modifying code very natural, but these days AI has moved on to focus mostly on supervised learning.
It's not so much Lisp - various modern forms of metaprogramming are still reasonably popular - it's that hardly anyone made it work at all (even the legendary Eurisko is 90% genetic programming, 10% deliberative self-modification), and those that could couldn't make it scale. It was just too much work and hassle, far easier to play around with NNs and just say 'hey, if we had 100 times more hardware, I'm sure it would be sentient' (this argument works regardless of how much hardware you actually have: it has been made continuously since the mid 80s).
Anyway, self-modifying code sounds like a nightmare to debug. I'd be fascinated to know how you apply it.
Essentially you use an in-memory version control system to track code modification. We go further and use a versioned graph database that preserves all probabilistic support graphs, so that we can trace reasoning for specific code generation. Happily this turned out to be highly useful for commercial work (supercompilation of single-threaded C++ into optimised OpenCL) as well as AI research.
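In outline it is nothing more exotic than this sort of thing - grossly simplified, and nothing like our actual schema, but it shows the shape of the idea (every generated code fragment is an immutable version carrying links back to the inferences that justified it):

    # Simplified illustration only, not the actual system: each generated code
    # fragment is stored as an immutable version with a parent pointer and the
    # ids of the reasoning steps that justified the change, so any behaviour
    # can be traced back to the inferences that produced it.
    import hashlib
    import time
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class CodeVersion:
        source: str
        parent: str | None          # digest of the version this was derived from
        support: tuple[str, ...]    # ids of the inferences that justified the change
        stamp: float = field(default_factory=time.time)

        @property
        def digest(self) -> str:
            return hashlib.sha1(self.source.encode()).hexdigest()[:12]

    class VersionedStore:
        def __init__(self):
            self.versions: dict[str, CodeVersion] = {}

        def commit(self, version: CodeVersion) -> str:
            self.versions[version.digest] = version
            return version.digest

        def trace(self, digest):
            # Walk back through parents, yielding each version and its support set.
            while digest is not None:
                v = self.versions[digest]
                yield v.digest, v.support
                digest = v.parent

    store = VersionedStore()
    v1 = store.commit(CodeVersion("def act(x): return x", None, ("axiom-17",)))
    v2 = store.commit(CodeVersion("def act(x): return clamp(x)", v1, ("inference-203", "inference-205")))
    for digest, support in store.trace(v2):
        print(digest, support)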
Baffalo
Jedi Knight
Posts: 805
Joined: 2009-04-18 10:53pm
Location: NWA

Re: AI ethics and religion

Post by Baffalo »

Starglider wrote:
Channel72 wrote:But presumably, given enough data you could train an AI to recognize "good" versus "bad" behavior.
You can, but the resulting feature recognition algorithm will only bear a superficial resemblance to a human moral system; the actual implementation, and hence the behaviour outside the narrow set of training cases, will be completely different.
So you can show the AI a pattern in the hopes of getting it to learn what's right and wrong, but you have no control beyond the test samples? How many individual (read: unique) test cases have been tried? Reason I ask is because the more data you have available to work with, the better your final result will be. Is it a case of diminishing returns? I'm very curious about this.
"I subsist on 3 things: Sugar, Caffeine, and Hatred." -Baffalo late at night and hungry

"Why are you worried about the water pressure? You're near the ocean, you've got plenty of water!" -Architect to our team
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

Starglider wrote:No SVM style classifier can manage more than a ludicrously shallow mockery; and even there you've pushed most of the actual functional complexity into the feature recognisers, which can and do drift, misbehave and (eventually) get self-modified independently of the goal system itself.
Okay, but why shouldn't an NN with deep learning architecture like an RBM be able to derive a system of ethics from large datasets? Yeah, it would be almost impossible to precisely control its behavior, but in general it should be able to distinguish ethical from unethical behavior given enough data.
It was just too much work and hassle, far easier to play around with NNs and just say 'hey, if we had 100 times more hardware, I'm sure it would be sentient' (this argument works regardless of how much hardware you actually have: it has been made continuously since the mid 80s).
Well, ANNs have demonstrated pretty promising pattern-recognition/classification abilities. And even with their limitations, the idea of an ANN is a very compelling model of intelligence. We know it ultimately works, because we have ~7 billion sentient NNs walking around right now. The problem is that ANNs are too different from biological neural architectures, especially because real neurons fire asynchronously and are thus massively parallel on an absurd scale.

I've always suspected that a software implementation of a truly sentient, general AI might not be feasible with Von Neumann architecture. Whatever enables sentience and self-awareness to emerge from a biological neural net might be missing from what basically amounts to a load/process/store machine. We may need something like a specialized, highly parallel cluster of simpler processors, (which is closer to a real biological neural network), along with an event-driven reactor software model.
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

Baffalo wrote:So you can show the AI a pattern in the hopes of getting it to learn what's right and wrong, but you have no control beyond the test samples?
Well, generally, you can show a classifier (which is usually an artificial neural network or support vector machine) various training samples. If you're doing supervised learning, each sample has been pre-labeled as belonging to a certain "class". After you train the classifier, you need to show it a set of data it has never seen before, to see how well it actually works. Often, multiple classifiers are combined, along with other probabilistic methods.

But it doesn't always work well, and when it doesn't, there's often no clear reason why it failed. It really depends on how well the humans training the classifier are able to derive "features" from the dataset, and on how good the dataset is. I don't know of any attempt to train a classifier to learn a system of ethical behaviors - the feature set for that would seem to be very difficult to come up with. But in theory, I don't see why it wouldn't be possible.
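In code, the basic loop looks something like this (dummy data; in a real problem the feature extraction is where all the difficulty lives):

    # The basic supervised-learning loop described above, with dummy data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # 200 made-up samples with 5 features each, labelled into 3 classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 3, size=200)

    # Hold out data the classifier has never seen, to measure generalisation.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With random labels like these the held-out accuracy will hover around chance, which is exactly the point: the classifier can only ever be as good as the features and labels you feed it.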
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Baffalo wrote:So you can show the AI a pattern in the hopes of getting it to learn what's right and wrong, but you have no control beyond the test samples? How many individual (read: unique) test cases have been tried?
To the best of my knowledge this has never been done with anything but the most trivial cases (e.g. in a simple microworld, define following commands emitted by other agents as good, ignoring them as bad - really no different from any other simple fitness function). The closest analogy in deployed AI systems is semantics extraction, where NLP algorithms try to generate a narrative from free text (e.g. news reports). Morally relevant actions can be made one of the high-interest actions to be extracted, but this doesn't extend to any sort of real ethical reasoning. Whenever I talk about 'real AI researchers will do this with AGI' I mean what real researchers trying to make general AI say they will do if successful. There are a large number of fuzzy thought experiments relating to AI ethics, a small amount of theory backed by actual logic and maths, a very small set of experiments (the contrived examples that make for 'researcher makes robots that co-operate' news stories do not generalise) and obviously no full-scale working examples yet.
Reason I ask is because the more data you have available to work with, the better your final result will be. Is it a case of diminishing returns? I'm very curious about this.
It's a case of not being able to pack a million years of evolutionary history, spread across trillions of individual reproductive efforts, into an AI training program, and of not being able to capture even a decently representative sample of the entire universe of moral choice in a reasonable test program. The best you can do is cover the obvious cases.*

Usually this would not be so bad. Aviation is a dangerous technology that developed without the benefit of adequate theory, never mind the ability to fully simulate and test everything that could happen to an aircraft. Lots of aircraft had design flaws, lots of people died in crashes, mistakes were analysed, theory was improved, and the technology became more and more reliable. Robots harming people or making bad moral choices is a bit more vulnerable to media hysteria, but you might expect it to follow the same rule: a few unfortunate accidents are the price we accept for technological progress. The reason we can't just take the 'oh, the design flaws will be caught and fixed eventually' attitude is the potential for non-local effects inherent in creating independent communicating agents; a risk which becomes existential once the progression to transhuman intelligence is considered.

* I'd note that humans have a significant inherent risk of making bad moral choices and harming others (ditto dogs and horses which we have been actively engineering for thousands of years), and have not been shown to be stable under self-enhancement, so there is no a priori reason to believe that it is possible to engineer an artificial intelligence that is unfailingly moral, even if you could simulate all of human experience as your training regimen. The SIAI etc are optimistic about being able to engineer a 'reliably super-moral' general AI, but it's a really really hard problem.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Channel72 wrote:Okay, but why shouldn't an NN with deep learning architecture like an RBM be able to derive a system of ethics from large datasets?
As yet no neural-net architecture or learning rule has been shown capable of creating reusable mental structures equivalent to 'concepts' that can be recognised, projected, deduced, recombined and specialised such that you could actually do 'moral reasoning' with them (or any other kind of high level reasoning). At this point we are fairly sure that the human brain's ability to do this is dependent on higher level structure and/or biochemistry (most likely 'and') that goes way beyond anything you are going to fit into a unit weight adjustment equation or napkin description of an NN architecture. Well, I say 'we'; I mean most of the AGI community - there are a few die-hard reductionist connectionists who still think one learning rule and a huge amount of hardware is all they need.

Of course you can say 'sufficiently complex NN with appropriate higher-level architecture', in which case the answer is 'probably, but there's no guarantee the system of ethics will be the one you want'. Human ethics are strongly based on specific motivation mechanisms with biochemical and brain structural basis; a genetic blueprint that fits into a specific development sequence and perceived environment. AI systems, even neural networks (other than human uploads) lack all the specific brain structure and biochemistry, and have unavoidable huge perceptual differences.
Yeah, it would be almost impossible to precisely control its behavior, but in general it should be able to distinguish ethical from unethical behavior given enough data.
An additional problem is that perception is not the same thing as origination, and originating any sort of creative content (much less ethically correct behaviour in uncertain situations) is a huge challenge.
Well, ANNs have demonstrated pretty promising pattern-recognition/classification abilities.
Most of which were demonstrated in principle in the late 80s (e.g. Edelman's robotics work holds up pretty well to even contemporary NN driven robotics). Since then we've scaled up to much bigger datasets but haven't substantially changed the range of tasks NNs can tackle; specifically, none of the hundreds of neat ideas for layered and recurrent NNs delivered major progress towards the concept level. This is why most practical robots or even video game opponent AI do not use neural nets; 'conventional software engineering' delivers more capability. Even SVMs are to a large extent a 'conventional software engineering' approach to large scale fuzzy pattern recognition; dump all the NN mysticism, obfuscation and pretense of something deeper going on, just find a partition surface in a high-dimensional feature space using the most expedient statistical techniques.
And even with their limitations, the idea of an ANN is a very compelling model of intelligence. We know it ultimately works, because we have ~7 billion sentient NNs walking around right now.
This is as 'compelling' as the idea that aircraft should be ornithopters.
The problem is that ANNs are too different from biological neural architectures, especially because real neurons fire asynchronously and are thus massively parallel on an absurd scale.
Very common argument from the mid 90s; 'ok classic NNs have stalled, but recurrent spiking NNs will work!'. So far no, but lazy general AI people keep trying it; too lazy to do the very hard work of either accurately reproducing every aspect of the brain (the upload guys) or actually de-novo engineering an intelligence which doesn't rely on 'emergence' to magically produce key design elements.
I've always suspected that a software implementation of a truly sentient, general AI might not be feasible with Von Neumann architecture. Whatever enables sentience and self-awareness to emerge from a biological neural net might be missing from what basically amounts to a load/process/store machine. We may need something like a specialized, highly parallel cluster of simpler processors, (which is closer to a real biological neural network), along with an event-driven reactor software model.
I would like to say that this is a patently silly argument but sadly it has actually been used many times by respected academics and AI start-up founders (e.g. the wild hype and spectacular crash and burn of Thinking Machines Corp). Any 'cluster of simple processors' can be simulated by a smaller number of more powerful ones; that's what 'Turing complete' means. 'Highly parallel' is a hardware implementation choice, any 'parallel' design can be perfectly simulated by a serial processor given sufficient runtime. Current supercomputers are already 'highly parallel' to the extent of millions of compute elements, there is no reason to believe that a billion slow elements instead of a million fast elements would make any functional difference. 'Event-driven reactor software model' is just you blatantly grabbing for buzzwords, the vast majority of non-NN AI designs since the blackboard architectures of the 1970s are 'event driven' (NNs ironically are not at the code level, as the NN kernel doesn't need to be), 'reactive' is just the latest buzzword for callbacks on stream filters. Your 'truly sentient' bit is particularly silly; as if we could make an ape using Intel CPUs but need super memristor array chips to make a human. In actual fact if you have made an artificial ape you have solved at least 80% of the problem and the remaining 20% is cognitive architecture detail; hardware architecture is almost certainly irrelevant (except for cost/performance for project financial viability).
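To make the Turing-equivalence point concrete, here is an 'asynchronous, massively parallel' spiking toy network being stepped entirely serially on one core (a throwaway illustration with arbitrary numbers):

    # Asynchronous spike events are queued by timestamp and processed one at a
    # time by a single serial loop; the 'parallel' network's behaviour is
    # reproduced exactly, just not in real time. Purely illustrative numbers.
    import heapq

    NEURONS = 5
    THRESHOLD = 1.0
    DELAY = 0.7                  # propagation delay, arbitrary time units
    potentials = [0.0] * NEURONS
    events = [(0.0, 0, 1.1)]     # (time, target neuron, input charge)

    while events:
        t, target, charge = heapq.heappop(events)   # strictly serial event loop
        potentials[target] += charge
        if potentials[target] >= THRESHOLD:
            potentials[target] = 0.0
            print(f"t={t:.2f}: neuron {target} fires")
            if t < 10.0:                            # let the toy network die down
                heapq.heappush(events, (t + DELAY, (target + 1) % NEURONS, 1.1))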

You might as well say 'to make general AI we really need CPUs with the physical consistency of porridge', which would be just as valid. *

* I shouldn't joke; the new/parody/travesty Dune books had exactly this with 'gelfield processors', since hack sci-fi authors just can't get out of the habit of literal 'giant electronic brains'.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Starglider wrote:Your 'truly sentient' bit is particularly silly; as if we could make an ape using Intel CPUs but need super memristor array chips to make a human. In actual fact if you have made an artificial ape you have solved at least 80% of the problem and the remaining 20% is cognitive architecture detail; hardware architecture is almost certainly irrelevant (except for cost/performance for project financial viability).
I should note that I personally have sunk a lot of effort into making probabilistic reasoning and code generation libraries work on 'massively parallel' arrays of GPUs, but that is entirely for performance/cost reasons. I am not silly enough to think that it has any impact at the cognitive architecture level - at least not any positive impact. Some kinds of global synchronisation are more expensive, which demonstrates that massively parallel hardware is always less capable than more serial hardware of the same FLOPS and memory capacity (fast serial hardware can always emulate slow parallel hardware, but slow parallel hardware cannot match the latency of serial hardware when there is insufficient parallelism on the critical path). In fact, in a very non-rigorous way, you can treat the atrociously slow performance of humans on maths and formal logic (compared to, say, Mathematica) as an example of Amdahl's law. The only real advantage of massively parallel hardware is that it's (much) cheaper; otherwise we'd all be using 100 GHz RSFQ processors cooled by liquid helium. That said, I confess that even I have occasionally used the hype factor of 'massively parallel' to generate sales interest.
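The Amdahl's law arithmetic, with invented figures:

    # Amdahl's law: if 5% of a workload is inherently serial, no amount of
    # extra (slow) parallel hardware gets you past a ~20x speedup over a
    # single fast core. The 95%/5% split is an invented example.
    def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_cores)

    for n in (10, 1_000, 1_000_000):
        print(n, round(amdahl_speedup(0.95, n), 2))
    # 10 -> 6.9, 1000 -> 19.63, 1000000 -> 20.0 (asymptote at 1/0.05 = 20)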
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

Starglider wrote:As yet no neural-net architecture or learning rule has been shown capable of creating reusable mental structures equivalent to 'concepts' that can be recognised, projected, deduced, recombined and specialised such that you could actually do 'moral reasoning' with them (or any other kind of high level reasoning). At this point we are fairly sure that the human brain's ability to do this is dependent on higher level structure and/or biochemistry (most likely 'and') that goes way beyond anything you are going to fit into a unit weight adjustment equation or napkin description of an NN architecture.
Sure, I can't dispute that. NN architectures are basically nothing more than pattern recognition machines useful for classification or regression problems.
Of course you can say 'sufficiently complex NN with appropriate higher-level architecture', in which case the answer is 'probably, but there's no guarantee the system of ethics will be the one you want'.
Which is all I'm saying.
Starglider wrote:
Channel72 wrote:Well, ANNs have demonstrated pretty promising pattern-recognition/classification abilities.
Most of which were demonstrated in principle in the late 80s (e.g. Edelman's robotics work holds up pretty well to even contemporary NN driven robotics). Since then we've scaled up to much bigger datasets but haven't substantially changed the range of tasks NNs can tackle; specifically, none of the hundreds of neat ideas for layered and recurrent NNs delivered major progress towards the concept level.
There's been some interesting progress since 2006 with deep learning, but yeah, overall it's mostly just classification problems.
This is why most practical robots or even video game opponent AI do not use neural nets; 'conventional software engineering' delivers more capability. Even SVMs are to a large extent a 'conventional software engineering' approach to large scale fuzzy pattern recognition; dump all the NN mysticism, obfuscation and pretense of something deeper going on, just find a partition surface in a high-dimensional feature space using the most expedient statistical techniques.
Well yeah, an SVM is basically just a kernel method that finds a maximum-margin separating hyperplane - it's not a neural network. You don't need a neural network just to classify feature vectors.
And even with their limitations, the idea of an ANN is a very compelling model of intelligence. We know it ultimately works, because we have ~7 billion sentient NNs walking around right now.
This is as 'compelling' as the idea that aircraft should be ornithopters.
*shrug* - the original intuition behind the perceptron model was simply "hey, this is what my brain does, more or less." Since nobody really understands how general intelligence emerges, I'd say neural networks are a good place to start. Of course, as you said, biological intelligence probably ALSO depends on higher level structures, but again, modeling neural networks is probably at least the right direction.
The problem is that ANNs are too different from biological neural architectures, especially because real neurons fire asynchronously and are thus massively parallel on an absurd scale.
Very common argument from the mid 90s; 'ok classic NNs have stalled, but recurrent spiking NNs will work!'. So far no, but lazy general AI people keep trying it; too lazy to do the very hard work of either accurately reproducing every aspect of the brain (the upload guys) or actually de-novo engineering an intelligence which doesn't rely on 'emergence' to magically produce key design elements.
I'm not saying NNs are the answer to everything. I'm saying that the NNs we have simply aren't really modeling the functionality of biological NNs correctly, and therefore any "emergent" properties that exist as a result of asynchronously firing neuronal connections are not likely to appear in our current digital implementations. But I realize this argument isn't likely to convince you, since you're basically saying that any Turing machine should be capable of producing a general intelligence, regardless of the actual hardware, but...
Starglider wrote:
I've always suspected that a software implementation of a truly sentient, general AI might not be feasible with Von Neumann architecture. Whatever enables sentience and self-awareness to emerge from a biological neural net might be missing from what basically amounts to a load/process/store machine. We may need something like a specialized, highly parallel cluster of simpler processors, (which is closer to a real biological neural network), along with an event-driven reactor software model.
I would like to say that this is a patently silly argument but sadly it has actually been used many times by respected academics and AI start-up founders (e.g. the wild hype and spectacular crash and burn of Thinking Machines Corp). Any 'cluster of simple processors' can be simulated by a smaller number of more powerful ones; that's what 'Turing complete' means.
I don't know why you find it silly. It's not yet an established fact that any Turing machine MUST be capable of producing a self-aware general intelligence. Nobody knows if this is definitely the case, e.g. there may be deeper phenomena at the quantum level which contribute to intelligence, or that self-awareness/sapience is an emergent property of quantum phenomena that occurs via highly-interconnected, complex, parallel networks. It's also a possibility that while any Turing machine may be able to produce general intelligence in theory, only certain architectures are capable of doing so efficiently and in practice.
'Highly parallel' is a hardware implementation choice, any 'parallel' design can be perfectly simulated by a serial processor given sufficient runtime. Current supercomputers are already 'highly parallel' to the extent of millions of compute elements, there is no reason to believe that a billion slow elements instead of a million fast elements would make any functional difference.
In physics and biology, certain properties only emerge because of parallel phenomena. General intelligence may be one such property. The point is, we don't know. And it seems here like again you're just asserting that it's just an obvious fact that any Turing machine should be capable of producing general intelligence.
'Event-driven reactor software model' is just you blatantly grabbing for buzzwords, the vast majority of non-NN AI designs since the blackboard architectures of the 1970s are 'event driven' (NNs ironically are not at the code level, as the NN kernel doesn't need to be), 'reactive' is just the latest buzzword for callbacks on stream filters.
It's not buzzwords - it's just a way to write asynchronous software. In biological systems, neuronal firings are asynchronous events. In ANN architectures, neuronal firings are modeled synchronously as the output of a layer. Again, your overall thesis here is that hardware architecture is irrelevant, and that all you need is a Turing machine. But you can't possibly know this.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AI ethics and religion

Post by Starglider »

Channel72 wrote:*shrug* - the original intuition behind the perceptron model was simply "hey, this is what my brain does, more or less."
It is an extremely (i.e. hopelessly) simplified version of what the brain does at the sub-microcolumn scale. The original inventors actually appreciated that it was a function approximation mechanism 'loosely inspired' by the brain. Unfortunately this point was lost on all the hype-mongers.
Since nobody really understands how general intelligence emerges,
'Emergence' of intelligence from a human brain is not the same thing as the 'emergence' that hack AI designers talk about. The human brain has a detailed structure specified (via expression and growth chains) by the genetic equivalent of a few million lines of code. While not quite as clean as how, say, Windows 7 'emerges' from billions of switching transistors in a Windows PC, it is much closer to this than to the supposed 'emergence' of general intelligence from a back-of-napkin magic NN algorithm. The latter is even vaguer than the total creative process that produced natural general intelligence by evolution over a billion years or so, because that process exploited environmental richness far in excess of any extant AI training scheme. Careful and exact simulation of the human brain will work at some (unknown) threshold of fidelity; seed AI design based on deliberative self-modification will work if we can design, implement and verify the appropriate suite of bootstrap capabilities and self-modification mechanisms. By comparison, the typical fits-on-a-napkin NN AGI design relies on a combination of wishful thinking and wild guessing.
I'd say neural networks are a good place to start.
They were a good place to start in 1970. As of 2012 it should be obvious that if you want to use the 'but humans are a working example' argument you are going to have to do it properly, no half-assed 'vaguely inspired by biology' nonsense. The space of connectionist AI designs turned out to be an opaque and human-unfriendly environment, no surprise since connectionist designs themselves are opaque and human-designer-unfriendly. If you aren't going to do precise simulation of biology then you are vastly better off using either (a) designs based (as much as possible) on human-understandable software and logic patterns or (b) direct genetic programming that will use computer hardware effectively, with no massively wasteful biomorphic bias.
I'm saying that the NNs we have simply aren't really modeling the functionality of biological NNs correctly, and therefore any "emergent" properties that exist as a result of asynchronously firing neuronal connections are not likely to appear in our current digital implementations.
Again, 90s argument. Plenty of research has now been done on large scale spiking and recurrent NNs, including assorted models that take into account timing effects in the synapses and dendrite trees (getting NNs to solve temporal problems is a whole subarea). You can no longer pretend that simply being 'asynchronous' will fix the problem. Obviously classic synchronous NN models are incapable of solving some problems real animals solve when clocked at biologically realistic rates - biology has a whole slew of complex techniques for working around horribly slow neurons - but I don't even count that as a point against them as the notion that an ANN should be restricted to low clock speeds (for anything other than deliberate biological modelling) is silly to start with.
But I realize this argument isn't likely to convince you, since you're basically saying that any Turing machine should be capable of producing a general intelligence, regardless of the actual hardware, but...
Yes, because we have not discovered any physics that is not essentially Turing computable (when applying an information analysis to the resolution issue). Specifically we have not invented any kind of computing machinery, even in theory, that can do any more than a universal Turing machine can do; for every variation you can imagine, there is a formal proof of how a standard UTM can emulate it.

Of course a universal Turing machine is a theoretical construct, because it has infinite tape space and makes no guarantee of runtime. So if general intelligence requires much more storage, bandwidth or processing than current digital hardware can provide cost-effectively, and there is a different hardware architecture that is much cheaper, then it would be correct to say in practice that the alternative architecture is viable for AGI while current processors are not, even though both architectures are Turing equivalent. To date no one has found such an architecture, despite many, many attempts.
Channel72 wrote:I don't know why you find it silly. It's not yet an established fact that any Turing machine MUST be capable of producing a self-aware general intelligence.
You're moving the goal posts; first you talk about 'more parallelism' and 'reactive programming', now you're retreating into non-Turing models of computation, which no one has been able to realise despite many decades of intensive search.
Nobody knows if this is definitely the case, e.g. there may be deeper phenomena at the quantum level which contribute to intelligence, or that self-awareness/sapience is an emergent property of quantum phenomena that occurs via highly-interconnected, complex, parallel networks.
...and you've latched onto the standard Penrose bullshit that is the last refuge of people trying to rationalise their instinctive 'oh no computers can't have a soul'. Although to be fair if you are a compsci undergrad you might just be trying to rationalise sixty years or so of overhype and under-delivery in AI research, but believe me you don't have to invoke fanciful new physics to do that.

Suffice to say, the notion of quantum computation in any part of the neuron has been convincingly disproved, the notion of entangled states persisting for any significant length of time or over any significant area in a watery solution at human body temperature has been rejected with a high degree of confidence, and Penrose's general argument proved to be nothing more than 'quantum physics seems mysterious, consciousness seems mysterious, hey guys maybe they are the same!'. And even if all of those experts were wrong... there is no model of quantum computation that can do anything a classic UTM can't do; it just does some things a lot faster than conventional Turing-complete hardware could (e.g. factoring large integers in polynomial time).
It's also a possibility that while any Turing machine may be able to produce general intelligence in theory, only certain architectures are capable of doing so efficiently and in practice.
Now you are stating the obvious, since 'certain architectures' is completely vague and could refer to any design element at any abstraction level.
In physics and biology, certain properties only emerge because of parallel phenomena. General intelligence may be one such property.
Not in information processing. Again, if a process can be formally modelled at all then we can model it just as well with serial evaluation as with parallel (although possibly more slowly). As such, however much parallelism you think you need for AI, all it means is that a serial implementation might be more expensive or think more slowly. It's interesting that while countless hack AGI designers have claimed they need 'more hardware', specifically the x10 more hardware they can get with a bigger research grant, they are usually unable to explain why their current hardware can't simply provide the desired level of intelligence while thinking more slowly.
And it seems here like again you're just asserting that it's just an obvious fact that any Turing machine should be capable of producing general intelligence.
It is, because in the entire history of science we have not found any phenomena that we can't model with maths, and we have not invented a formal system that we can't evaluate with a UTM (to the extent that the system can be evaluated at all; despite the desperate attempts of philosophers to find a proposition humans can prove or disprove that a UTM couldn't, none has been found). You are proposing that human intelligence is so special that it needs a radically new piece of physics that has never been observed and will be immune to formal quantification. This is the statement of a religious person, not an engineer.
'reactive' is just the latest buzzword for callbacks on stream filters.
It's not buzzwords - it's just a way to write asynchronous software.
It's absolutely a buzzword - e.g. the .NET Reactive Extensions that the enterprise software world was plastered with advertising for, a year or so ago.
In biological systems, neuronal firings are asynchronous events. In ANN architectures, neuronal firings are modeled synchronously as the output of a layer.
In classic NN designs, yes, because merely computing neuron weights on a modest-sized network maxed out the meagre compute capability available. Modern spiking designs can and do model spike timing; the more realistic designs model propagation delay along simulated dendrite trees, and serious attempts at realistic brain simulation model the chemical state of each synapse at high temporal resolution.
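At the toy end of the scale, 'modelling spike timing' amounts to nothing more mysterious than this (a leaky integrate-and-fire neuron; all constants pulled out of the air):

    # A leaky integrate-and-fire neuron: the membrane potential decays between
    # input spikes, so *when* inputs arrive matters, not just how many arrive.
    # All constants are arbitrary illustration values.
    import math

    TAU = 20.0        # membrane time constant (ms)
    THRESHOLD = 1.0
    V_RESET = 0.0

    def run(input_spikes):
        """input_spikes: list of (arrival_time_ms, charge). Returns firing times."""
        v, last_t, fired = 0.0, 0.0, []
        for t, charge in sorted(input_spikes):
            v *= math.exp(-(t - last_t) / TAU)   # leak since the previous input
            v += charge
            last_t = t
            if v >= THRESHOLD:
                fired.append(t)
                v = V_RESET
        return fired

    # The same three spikes: bunched together they fire the neuron, spread out they leak away.
    print(run([(0.0, 0.4), (2.0, 0.4), (4.0, 0.4)]))    # [4.0]
    print(run([(0.0, 0.4), (40.0, 0.4), (80.0, 0.4)]))  # []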
Again, your overall thesis here is that hardware architecture is irrelevant, and that all you need is a Turing machine. But you can't possibly know this.
As I've stated, hardware architecture is highly relevant for cost and performance reasons; an AGI running on inadequate compute power would be literally slow. I know (with very high confidence) that any architecture capable of hosting a UTM of sufficient size can support an AGI at some speed, because, as I've said, no one has discovered the slightest hint of any information processing architecture that a UTM cannot reproduce to any desired fidelity (and yes, analog TMs have been specified and found to be reducible to a symbolic UTM at any given resolution).
Last edited by Starglider on 2012-08-20 04:55pm, edited 1 time in total.
Skgoa
Jedi Master
Posts: 1389
Joined: 2007-08-02 01:39pm
Location: Dresden, valley of the clueless

Re: AI ethics and religion

Post by Skgoa »

Channel72 wrote:
Starglider wrote:
I've always suspected that a software implementation of a truly sentient, general AI might not be feasible with Von Neumann architecture. Whatever enables sentience and self-awareness to emerge from a biological neural net might be missing from what basically amounts to a load/process/store machine. We may need something like a specialized, highly parallel cluster of simpler processors, (which is closer to a real biological neural network), along with an event-driven reactor software model.
I would like to say that this is a patently silly argument but sadly it has actually been used many times by respected academics and AI start-up founders (e.g. the wild hype and spectacular crash and burn of Thinking Machines Corp). Any 'cluster of simple processors' can be simulated by a smaller number of more powerful ones; that's what 'Turing complete' means.
I don't know why you find it silly. It's not yet an established fact that any Turing machine MUST be capable of producing a self-aware general intelligence. Nobody knows if this is definitely the case, e.g. there may be deeper phenomena at the quantum level which contribute to intelligence, or that self-awareness/sapience is an emergent property of quantum phenomena that occurs via highly-interconnected, complex, parallel networks. It's also a possibility that while any Turing machine may be able to produce general intelligence in theory, only certain architectures are capable of doing so efficiently and in practice.
Where did you get your Computer Science degree? :wtf:
http://www.politicalcompass.org/test
Economic Left/Right: -7.12
Social Libertarian/Authoritarian: -7.74

This is pre-WWII. You can sort of tell from the sketch style, from the way it refers to Japan (Japan in the 1950s was still rebuilding from WWII), the spelling of Tokyo, lots of details. Nothing obvious... except that the upper right hand corner of the page reads "November 1931." --- Simon_Jester
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

Starglider wrote:
I'd say neural networks are a good place to start.
They were a good place to start in 1970. As of 2012 it should be obvious that if you want to use the 'but humans are a working example' argument you are going to have to do it properly, no half-assed 'vaguely inspired by biology' nonsense. The space of connectionist AI designs turned out to be an opaque and human-unfriendly environment, no surprise since connectionist designs themselves are opaque and human-designer-unfriendly. If you aren't going to do precise simulation of biology then you are vastly better off using either (a) designs based (as much as possible) on human-understandable software and logic patterns or (b) direct genetic programming that will use computer hardware effectively, with no massively wasteful biomorphic bias.
All right - you seem to have a lot more experience in the field than I do, so I'll take your word for it. I use SVMs and MLPs for classification problems, but I've never worked with an asynchronous implementation.
...and you've latched onto the standard Penrose bullshit that is the last refuge of people trying to rationalise their instinctive 'oh no computers can't have a soul'. Although to be fair if you are a compsci undergrad you might just be trying to rationalise sixty years or so of overhype and under-delivery in AI research, but believe me you don't have to invoke fanciful new physics to do that.
Rather, I suppose I'm simply hesitant to believe that developing a general intelligence is purely a software problem. I had always been under the impression that it's still empirically undetermined whether or not all of physics (particularly quantum events) is Turing computable, and as a result, whether human intelligence is reproducible using a Turing machine. (Although it's a moot point if you're correct that "the notion of quantum computation in any part of the neuron has been convincingly disproved".)

Anyway, you're basically asserting that "digital physics" is true, without any indication that this might be a highly controversial assertion. I'm really surprised by your utter confidence that all of physics is Turing computable. I'd be fascinated to know why you're so confident that this is the case.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AI ethics and religion

Post by Simon_Jester »

Because the physicists are; that's why they use computers to simulate everything.

Everything that can occur at energies achievable in the human brain is covered by atomic physics, which is Turing-computable. The problem is that a Turing machine which can simulate every atom in the brain cannot be built out of rocks in a feasible amount of time. Or, for that matter, out of microchips.

Building a simulator that models the brain on the atomic level would be the 'stupid' way to implement AI. You could make it work if you had about as much computer hardware as God, and it wouldn't require any great wisdom or engineering skill, unless "handle ridiculous datasets" counts as skill. And it would be a Turing machine, and it would be cool in a "good Lord that thing is huge" way.

Doing anything more subtle (and therefore creating a working AI some time in our lifetimes) is a more complicated problem. And the extra complexity is all software.
This space dedicated to Vasily Arkhipov
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: AI ethics and religion

Post by Channel72 »

^ That's a very inspiring comic, actually. Although, I have to wonder how he encodes the algorithm to simulate Hawking radiation in his Universe simulation out of rocks in the desert, considering that a Turing machine can't even produce truly random numbers.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AI ethics and religion

Post by Simon_Jester »

In his SimUniverse, who says random numbers are truly random? Maybe the algorithm is just too big and subtle for observers in that universe to spot the place where the decimal repeats or whatever.
This space dedicated to Vasily Arkhipov
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear

Re: AI ethics and religion

Post by Zixinus »

Starglider wrote:This is not strictly true, even assuming that by 'be an AI' you mean 'be a general AI'.
Yes, I guess it depends on what you assume about how an AI works.

I'm going by the assumption that even if it doesn't quite know its own programming, it can learn and form new thoughts. How that looks at a deeper level, I don't know, but I am almost certain that it can happen in ways the creators cannot guess.

How can you make an AI (well, a general AI; I'm not talking about a search engine) form new thoughts and learn, while having a thought program out there that says "do not think about hurting humans" or something similar?

How can you have "moral conscience" code for it that allows learning, but not learning a thought like "I must kill all humans!"? I can only imagine that to make it work, the conscience would have to be almost as complex as the AI itself, and then you have a chicken-or-egg problem: how do you make sure that THAT moral-guardian AI is correct?

Almost no-one actually designs AIs to do that.
Then how would it see the conflict I outlined in my hypothetical?

That's not a 'conflict'. It's just a situation where forward search has only discovered outcomes of rather low desirability.
Well, I gave it two rules that it must follow absolutely. That is stupid from a real-world AI design standpoint, I know; Asimov's books are filled with examples of why it is a problem. I merely made the rules absolute for the sake of the hypothetical.

Again, I do not know precisely how a "real" AI would work, so I can only approach this problem from an (armchair-)philosophical, everyday-logic standpoint.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.