Simon_Jester wrote: Look, my real concern is that I find the specification of desires the organism doesn't need to be a less desirable solution than giving it reasons to have those desires other than "they were in the design specs."
Humans don't have any reasons for their desires other than 'those drives tended to propagate the species'. I don't see anything particularly deep and meaningful in that.
Simon_Jester wrote: We're trying to design a general intelligence here, not a specialized tool; it ought to be able to figure out what it should want.
All goals are arbitrary. For a rational system, 'figuring out what it wants' consists of exactly two activities: converting abstract goals into more specific goals, and resolving any ambiguity present in the original goal specification. Irrational systems (e.g. humans) have a less straightforward motivational system, but it boils down to the same thing. You cannot pluck goals out of thin air; not humans, not genetically engineered organisms, not seed AIs. You can construct new goals (if your cognitive system is well designed you may even be able to set them directly, instead of just trying to live by them), but those new goals are always based on existing goals.
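To make that concrete, here is a toy sketch in Python (purely illustrative; the Goal class and its refine method are invented for this post, not anyone's actual architecture):

```python
# Toy illustration only: a goal system in which every new goal is a
# refinement of an existing one. Nothing here is a real AI architecture.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Goal:
    description: str
    parent: Goal | None = None             # root goals come only from the designer
    subgoals: list[Goal] = field(default_factory=list)

    def refine(self, description: str) -> Goal:
        """Derive a more specific goal; it is always anchored to this one."""
        child = Goal(description, parent=self)
        self.subgoals.append(child)
        return child


# The only 'arbitrary' step is the supergoal the designer writes down.
root = Goal("maintain a flourishing colony")
food = root.refine("secure a reliable food supply")
food.refine("build and run hydroponics bays")

# There is no operation for conjuring a goal with no parent at runtime;
# 'figuring out what it wants' is nothing but refinement of what it already has.
```

The only place arbitrariness enters is the root node; everything the system subsequently 'decides to want' is an elaboration of that.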
Simon_Jester wrote: We don't just want it to go down, be fruitful and multiply, and hopefully build a civilization. Whereas for our hypothetical designed intelligence, we have more general objectives.
There is no fundamental difference. You just need different tools for expressing what you want. Gravitating towards opaque, emergent methods for getting 'civilisation' from simple, human-like base desires is a case of emergence mysticism, aka ignorance worship. There is nothing more 'authentic' about such a derivation, and any such perception is a failure on your part to consider the messy details of the process. Specifying what you want directly, without flim-flam or unnecessary intermediate stages, is always more reliable and ultimately (if done competently) allows more scope for richness and diversity.
Simon_Jester wrote: I think that's at least as much evidence that we overengineered it physically as that we underengineered it cognitively.
To me, overengineering means that you blew the budget, or you made it too complex to maintain. Simply exceeding specifications is always a good thing - if it is overengineering, it is overengineering of a purely positive kind.
Deliberately crippling an intelligent being, because you think that will make up for your inability to make its cognitive design do what you want directly, is in fact cruelty on a massive scale.
Simon_Jester wrote: Yes, I know, this is not specifically true of every general AI project; I don't even know that most general AI projects have a specific application in mind.
Actually, most general AI projects are very open-ended in intent, but then they've all failed miserably so far. Narrow AI systems that actually do useful things have very narrow motivational systems, either internal (expert systems) or external (genetic algorithms). As you may have guessed, I would consider the latter a nasty cop-out stemming from an inability to design the former properly.
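For what it's worth, here is the kind of contrast I mean as a toy Python sketch (everything here is invented for illustration; real expert systems and genetic algorithms are considerably more elaborate):

```python
# Toy contrast, invented for this post: where the 'motivation' lives in two
# kinds of narrow AI.
import random

# Internal motivation: an expert system carries its goals as explicit rules
# that the reasoner itself manipulates.
RULES = [
    (lambda facts: facts["temp_c"] > 100, "open relief valve"),
    (lambda facts: facts["pressure_bar"] > 5, "shut down pump"),
]

def expert_system(facts):
    return [action for condition, action in RULES if condition(facts)]

# External motivation: a genetic algorithm's candidates contain no goals at
# all; the 'wanting' sits entirely in a fitness function outside them.
def fitness(candidate):                    # selection pressure, not cognition
    return -abs(sum(candidate) - 42)

population = [[random.randint(0, 10) for _ in range(8)] for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [
        [random.choice(genes) for genes in zip(*random.sample(parents, 2))]
        for _ in range(50)
    ]

print(expert_system({"temp_c": 120, "pressure_bar": 3}))  # ['open relief valve']
print(max(population, key=fitness))        # a candidate summing to roughly 42
```

The expert system at least represents what it is supposed to achieve; the genetic algorithm never does, which is exactly why I call the external approach a cop-out.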