SirNitram wrote:Artificially paring down the observed evidence to support your pre-existing conclusions. You again assume you can pole-vault from human-level intelligence to something that resembles nothing we've ever seen.
WTF are you talking about? Pole vaulting? What? Do you have any actual rebuttal to my point that a few very, very similar intelligences don't constitute a representative sample, or are you just going to repeat the word 'artificial' over and over again?
Question: Why must there be a different structure? Why not just use the observable evidence we have as a basis and build up from it?
We could. That's called either neuromorphic AI design, or outright uploading.
Neuromorphic AI design is a bad idea, essentially because it gives a false sense of security. You have to get it exactly right for two-way human intuition, empathy etc. to work. Otherwise you've built something roughly equivalent to a simulated biological extraterrestrial. In that case you can't make strong predictions about what its behaviour will be, particularly when it self-modifies into a non-neuromorphic intelligence (which will be massively more efficient when running on conventional computers).
Uploading is a good idea. It only punts the basic problem forward to our transhuman successors, but that's still a net improvement. I support uploading research. However the technological barriers that have to be overcome are quite different from de novo seed AI development and I fear that they may not be overcome in time to avert a seed AI disaster.
And of course empathy is a cheap trick! You know why evolution loves cheap tricks? Easier to make, more efficient to create, lower costs.
More important than any of that is 'easier to reach'. Evolution must follow incremental paths; it has no ability to make multiple simultaneous complex changes to achieve a result, because it has a lookahead depth of zero.
A sensible engineer doesn't look at the lever and discard it as a cheap trick; he uses it to make things better.
I've explained why humanlike empathy is extremely hard to implement in AGI and completely unnecessary anyway. AGI design does not suffer from the same limitations as evolution; copying them makes about as much sense as trying to build giant ornithopters instead of 747s.
Again we come back to the idea you must have a scratch-built, resemble-nothing-before AGI. Why waste all this handy data and observable subjects you have laying around?
We don't have 'handy data' or good means of observing the subjects. This is why psychology is in a rather sorry state despite over a century as a supposedly scientific discipline. We have a reasonable idea of how neurons work, a rough idea of how microcolumns work and of the general functional areas of the brain, and that's it. This is equivalent to trying to reverse-engineer and clone a typical modern PC given a good knowledge of transistors, a rough knowledge of logic gates and a videotape of someone using Microsoft Office.
A sufficiently powerful AGI will model from first principles, this is a given. However, one would have thought, given your familiarity with the experiment about how a sufficiently clever AGI with such total understanding might get out, you'd think of such clever ideas as not giving it the keys to the mansion.
Say what?
So no, emotions are not necessary for empathy in general and human-type empathy is irrelevant for AGIs.
For any sufficiently powerful AGI that is not modelled on any previous GI.
No, they're not necessary even for a closely neuromorphic AGI, because even that can spawn off subprocesses (to model external agents) that can be observed but firewalled from the main goal system. You'd have to actively cripple an AGI to force it to use something as sucky as humanlike empathy. Of course that's the adversarial methods quagmire again and will not work in the long run; the AGI will independently recreate the missing capability.
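To make the structural point concrete, here is a deliberately trivial sketch (toy Python; every class and name is hypothetical, invented purely for illustration) of what I mean by 'observed but firewalled': the model of the external agent is an ordinary object that can be queried, but it is never handed a reference to the goal system, so nothing it concludes about the other mind can leak into the AGI's own motivations.

```python
# Deliberately trivial sketch of 'model the other agent in a firewalled
# subprocess'. All names are hypothetical illustration, not a real design.
from dataclasses import dataclass
from types import MappingProxyType


@dataclass(frozen=True)
class Prediction:
    situation: str
    predicted_action: str


class ExternalAgentModel:
    """A sandboxed model of another mind, built from observed behaviour.

    It only ever sees a read-only snapshot of world facts; it is never
    given a reference to the modelling system's own goals, so its output
    can be observed without anything leaking back into the goal system.
    """

    def __init__(self, behaviour_profile: dict):
        self._profile = dict(behaviour_profile)

    def predict(self, situation: str, world_facts) -> Prediction:
        # Purely descriptive: 'what would this agent do?', not 'how would I feel?'
        return Prediction(situation, self._profile.get(situation, "unknown"))


class ToyAGI:
    def __init__(self):
        # The goal system lives here; sub-models get no handle to it at all.
        self._goals = {"primary": "assist operators", "constraint": "no deception"}

    def model_human(self, behaviour_profile: dict, situation: str) -> Prediction:
        world_facts = MappingProxyType({"location": "lab"})  # immutable view
        sandboxed = ExternalAgentModel(behaviour_profile)    # no access to self._goals
        return sandboxed.predict(situation, world_facts)


if __name__ == "__main__":
    agi = ToyAGI()
    print(agi.model_human({"offered help": "accepts it"}, "offered help"))
```

Obviously any real system's models would be vastly more sophisticated than a lookup table, but the structural point is the same: observe the simulation, don't absorb it.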
Which are two steps which don't strike me as terribly wise.
You're not getting it. Making an AGI more brain-like doesn't make it significantly more likely to be nice or comprehensible. It just makes it slower, much harder to understand/analyse and impossible to formally verify as benevolent. There is a huge difference between 'something we coded to be roughly like the brain' and a 'human upload'. The latter are OK, accepting that they're just an intermediate step and they will still have to solve the hard takeoff issue.
That's the whole point, Starglider. The point of Friendly AI Theory is to anthropomorphise any actual AI so that it'll have such an impulse.
Different meaning of anthropomorphise; 'make anthropomorphic' rather than 'act as if it were anthropomorphic'. I'm telling you, having done an in-depth review of every AGI project I could find a decent description of, that the first one is ridiculously difficult for anything other than a human upload. There are countless opportunities for subtle, undetectable failures that create an AGI which initially seems OK in the lab but goes rampant shortly afterwards.
Frankly, I expected better from this board. You're advocating fuzzy, ill-defined 'emotions' and 'feelings' and dragging AGIs down to the lowest common human denominator in the hope that this will make Everything Turn Out OK (tm). I'm saying that the way to do it is with clean, objectively verifiable logic and clearly stated, robust ethical principles, and that we can (with a lot of work, but it's worth it) prove this will work. Regardless of the more technical feasibility concerns, the latter should be preferable to anyone with any respect for rationality.
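As a deliberately trivial sketch of what 'clearly stated, checkable principles' looks like (toy Python, every name hypothetical): the constraints are explicit, inspectable data that gate every candidate action. That is the sort of thing you can audit, test and, in a serious system, formally verify; a 'feeling' is not.

```python
# Deliberately trivial sketch of 'explicit, verifiable principles' rather
# than fuzzy feelings. Every name here is hypothetical illustration.
from typing import Callable, Dict, List

Action = Dict[str, object]          # e.g. {"kind": "move", "harms_human": False}
Principle = Callable[[Action], bool]

# The ethical constraints are explicit, inspectable data, not an opaque mood.
PRINCIPLES: List[Principle] = [
    lambda a: not a.get("harms_human", False),
    lambda a: not a.get("deceives_operator", False),
]


def permitted(action: Action) -> bool:
    """An action may be executed only if every stated principle approves it."""
    return all(rule(action) for rule in PRINCIPLES)


if __name__ == "__main__":
    print(permitted({"kind": "fetch coffee"}))                    # True
    print(permitted({"kind": "lie", "deceives_operator": True}))  # False
```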
If it has no impulse, it will not care, because 'caring' is an emotion. It will continue its goal.
I've already pointed out that the gigantic steaming mess that is the human goal and emotional system is a horribly bad example to copy, even if we had the capability. It's barely stable even with the minimal reflection humans have; it would be pathetically unstable if suddenly equipped with full direct self-modification capability and consistency pressure.
'Empathy' isn't the solution. Ethics are the solution.
You asked for one thing that required empathy;
I pointed out that empathy is not required for understanding others. I then pointed out that empathy will only make an AI 'nice' if we perfectly copy a (nice) human goal system, and that this is technically infeasible, particularly if you want to actually be sure it will work before you fire it up. The fact that even if you could magically do this it wouldn't be stable under reflection just further underlines how worthless the whole concept is.
you didn't say 'Give me a solution to the indifferent AI God'. The only way there is to make it never want to harm a human: ethics.
Never harming humans is a start, but I think we can do rather better than that. Even the results of the three laws as presented in 'The Metamorphosis of Prime Intellect' are better than that.
But this again touches on this ridiculous idea that we should throw out all collected data on intelligence, how it works, how it grows,
No one tossed it away without consideration. Everyone I know who's working on this kind of AI carefully considered the concept of a neuromorphic AGI, judged it worse than useless, and discarded it.
presumably yelling 'IT'S ALIVE' or something ridiculous.
Well, I confess that that is kind of fun.
Your labelling of empathy as a 'cheap trick' in a derogatory way
I was not being particularly derogatory; it's cheap in the sense of needing the least cumulative fitness pressure (the single most precious resource in natural selection) to evolve. However, an intelligent designer can achieve much better performance without the drawbacks of overlaying someone else's (presumed) mental state onto your own brain. As such it is not a useful trick in this domain. Cases of biomorphic engineering in general tend to get a lot of press, IMHO mainly due to nature worshippers and biowankers, but they are very much the exception rather than the rule.
They make extra steps unnecessary!
Only for evolution. Not for software designers. As I said, you would have to work extra hard to cripple an AGI in this way. Spawning separate instances and then monitoring them is by far the easiest technical solution.