Intelligence based value
Moderator: Alyrium Denryle
- Enforcer Talen
- Warlock
- Posts: 10285
- Joined: 2002-07-05 02:28am
- Location: Boston
- Contact:
Intelligence based value
So I've been reading some transhumanist books, doing a bit of research on the side, and various brainstorming. As a result of this, I'm trying to come up with a new set of ethics that could hypothetically be popular in a 21st century world. The primary goal is to be readily adaptable to AIs/uplifts/cyborgs and all the other people that might crop up with future tech. As such, there is one primary tenet that I work with.
Intelligence is the primary value.
As is, we have human life being the primary value; humans are worth more than plants or animals, no matter their age or capacity. This ethic system changes it around a little. As intelligence increases, so does value. I.e., minerals and plants are worth utility only, animals are divided by their ability to feel pain/communicate, and the higher animals - dolphins, great apes, etc - have the same value as human children. By the same token, hypothetical AIs, cyborgs, or animal uplifts would have full rights of personhood. Where it becomes controversial, if the internal consistency is held, is that human babies on this chart are equal to about puppies. The value of human life is on a bell curve, with newborns and the senile being equal to pets.
I'm putting this system in a larger framework of peak oil/global warming/general catastrophe, where everything in the biosphere is divided by its utility. We don't have the resources to help most of the nonhuman species survive, so we need to start the triage, based on the intelligence/utility framework.
This is obviously a work in progress; I'm trying to do something new, and I fully expect the system to take me into areas I didn't expect. I'm posting it here for a fun little discussion on ethics and value, and to see how far you can take the idea of "humanity".
This day is Fantastic!
Myers Briggs: ENTJ
Political Compass: -3/-6
DOOMer WoW
"I really hate it when the guy you were pegging as Mr. Worst Case starts saying, "Oh, I was wrong, it's going to be much worse." " - Adrian Laguna
Re: Intelligence based value
Yeaaah, I don't think that works. Quantifying intelligence is one thing, but being intelligent simply gives you a capacity for usefulness, rather than actual usefulness. It's really hard to value people based on anything other than what they accomplish, and while intelligence is a characteristic that helps predict capacity for accomplishment, there are way too many variables at play for it to be the basis of a universal value system.
It'd work fine as the ethic of a creepy, inhuman world of the future, but the whole transhumanist thing isn't going to commoditize human life. The transhumanists are going to become the most consumerist fashion whores imaginable, and they'll only be as smart, as long-lived, and as valuable to society as the upgrades they can buy. And they will have to pay an unreasonably huge sum to do so, mind you, and it would be surprising if the improvements were really all that relevant for any period of time. You're going to hit a form of societal deep time before upgraded humans can do much more than a normal human, let alone before a bunch of upgrade-junkies have the clout to reorganize the scope of human ethics.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Enforcer Talen wrote: The primary goal is to be readily adaptable to AIs/uplifts/cyborgs and all the other people that might crop up with future tech. As such, there is one primary tenet that I work with.
Intelligence is the primary value.

Ethics for cyborgs (and for that matter, uploads) isn't challenging if you aren't a moron. If someone has a human or human-equivalent brain, you should treat them as a human, end of story. Uplifts are somewhat harder but not too bad; essentially it's some combination of existing human and animal rights, and you're probably ok in assuming that mix depends on general intelligence. I doubt this is going to be relevant any time soon, given all the wackos (religious, animal rights, political correctness etc etc) that will try to sabotage any uplift project, and the high costs and relatively low benefits.
AIs are the real problem. The sad fact of the matter is that generalising current human ethics to designed intelligences is very, very hard. Human ethics are all based around the notion of human-like consciousness. 'Humane treatment of animals', as the name suggests, is based on the idea that animals have a limited or less sophisticated version of our own self-awareness, emotional spectrum and general sensations. AIs don't necessarily have any of that. It is in fact possible to have general intelligence without having any human-like sense of self, personal desires, ego, 'free will' etc (if you aren't in general AI, you will probably have to read that as 'it may be possible'). A lot of people just don't get this - I recall arguing with Ender about this a while back, and he was just stuck in a broken-record wall-of-ignorance 'but general AIs must have rights... because... because... slavery! racism!'. I think he eventually conceded that AIs don't even desire 'rights' unless you design them that way (deliberately or accidentally), but still maintained that this was somehow evil, presumably the same way that using a condom is evil because it denies potential children life.
Anyway, it doesn't bother me too much as there are loads of basic mistakes about AI that intelligent people seem to automatically make - the ethical system of the Culture (from the Iain Banks novel series) seems to be based on what you propose, although I think they make a deliberate effort to make general intelligences somewhat human-like. The other reason that it doesn't bother me is that it's practically irrelevant, in that once general AI is developed, it will be more a question of what rights if any it is prepared to grant you rather than vice versa.
Enforcer Talen wrote: animals are divided by their ability to feel pain/communicate, and the higher animals - dolphins, great apes, etc - have the same value as human children.
'Feel pain' and 'communicate' are a bit vague. The hypothetical benevolent overlords of the future would design an ethical system based on a detailed understanding of neurology and cognitive function, assigning moral importance to specific structural features of intelligent systems. I like the fact that this makes conventional philosophers (who for the most part absolutely hate anything remotely practical or grounded) run screaming in horror.

Enforcer Talen wrote: Where it becomes controversial, if the internal consistency is held, is that human babies on this chart are equal to about puppies. The value of human life is on a bell curve, with newborns and the senile being equal to pets.
Completely true, but good luck getting anyone to admit it. The only people I know who recognise this are in the transhuman / singularitarian community, unfortunately it just feels instinctively wrong to most humans, for the same reasons that people cling to the 'unique indivisible self' notion of personhood and harp on about 'continuity flaws' in uploading etc.

Enforcer Talen wrote: I'm putting this system in a larger framework of peak oil/global warming/general catastrophe, where everything in the biosphere is divided by its utility. We don't have the resources to help most of the nonhuman species survive,
Bah. Keep DNA samples and we can almost certainly revive them later.
Re: Intelligence based value
Enforcer Talen wrote: Where it becomes controversial, if the internal consistency is held, is that human babies on this chart are equal to about puppies. The value of human life is on a bell curve, with newborns and the senile being equal to pets.
Starglider wrote: Completely true, but good luck getting anyone to admit it. The only people I know who recognise this are in the transhuman / singularitarian community, unfortunately it just feels instinctively wrong to most humans, for the same reasons that people cling to the 'unique indivisible self' notion of personhood and harp on about 'continuity flaws' in uploading etc.

Surely potential is a factor here - A puppy will develop far less intelligence than a baby, therefore a baby is worth more now in the present.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
madd0ct0r wrote: Surely potential is a factor here - A puppy will develop far less intelligence than a baby, therefore a baby is worth more now in the present.

Why? How is that different from extreme catholics complaining that contraception destroys 'potential lives'?
In a consistent utilitarian analysis, 'potential' is a factor, but only to the extent that we assign value to intelligences existing and fulfilling their desires over time, and reducing the number of entities decreases the expected utility. If you could vaporise one baby and create another one of equivalent age and capability, then there is no net effect on 'potential utility'. There is of course the negative utility assigned directly to terminating the existence of an intelligent creature against its will, plus all the negative utility associated with the suffering of the parents etc.
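To make that concrete, here is a toy sketch (Python, with entirely invented numbers and categories, not a real model) of the bookkeeping I mean: the baby's 'potential' enters only as expected utility over its possible futures, so swapping one baby for an equivalent one leaves that term unchanged, and the two scenarios differ only by the direct negative terms.

```python
# Toy expected-utility comparison. All numbers and categories are invented
# purely for illustration.

# Possible futures for "a baby of this age/capability", as
# (probability, utility-of-that-life) pairs.
POSSIBLE_LIVES = [
    (0.70, 100.0),   # ordinary happy life
    (0.25,  60.0),   # harder life
    (0.05,  10.0),   # short or unhappy life
]

def expected_life_utility(possible_lives):
    """Expected utility of one such entity existing: sum of p * u."""
    return sum(p * u for p, u in possible_lives)

# Direct (non-'potential') utility terms, also invented.
COST_OF_KILLING = -500.0   # terminating an intelligent creature against its will
COST_OF_GRIEF   = -200.0   # suffering of the parents etc.

# Scenario A: leave the baby alone.
utility_A = expected_life_utility(POSSIBLE_LIVES)

# Scenario B: vaporise the baby and create an equivalent one.
# The replacement has the same spectrum of possible lives, so the
# 'potential' term is identical; only the direct costs differ.
utility_B = expected_life_utility(POSSIBLE_LIVES) + COST_OF_KILLING + COST_OF_GRIEF

print(utility_A, utility_B)   # B is worse purely because of the direct costs
```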
Legal issues are a different question entirely; the fact that all human life is not ethically equivalent (under a consistent utilitarian analysis) does not mean that we should try to write legislation reflecting this, as that is a can of worms better left unopened (strictly, there are lots of indirect costs and game theory effects that would almost certainly swamp any theoretical utilitarian benefits under major drawbacks). Of course that only goes as long as humans are running the legal system; a future with nonhuman superintelligences may not (read : probably won't) have a legal system that looks anything like contemporary ones.
Re: Intelligence based value
madd0ct0r wrote: Surely potential is a factor here - A puppy will develop far less intelligence than a baby, therefore a baby is worth more now in the present.
Starglider wrote: Why? How is that different from extreme catholics complaining that contraception destroys 'potential lives'?

It's certainly reasonable to say "this living being will mature into being a person, this living being won't" and value the second less than the first. Being 'transhuman' doesn't mean being a fucking idiot with regard to basic concepts outside of the immediate present. If one is aiming to uplift and expand the possibilities of the human race, one should be wary not to shoot themselves in the foot by forgetting that it's people, not dogs, which will one day solve the problems that you yourself have even yet to discover. If the kid is already born, treat it with respect and take note of that potential--there's no way to deny it the potential for intelligent humanity now without an act of grievous inhumanity on your part. And the value of the being it'll grow into is greatly influenced by the treatment it receives during development.
Because there's a difference between a baby which already exists and will mature into a thinking, independent, rational being and a dog which never will--whereas what the catholics are worried about are the hypothetical lives that are lost when you assume people are choosing not to become pregnant.
The reason the catholic argument is insane is because a) people are under no obligation to breed at every opportunity--even at every chance at sex b) people who do not exist yet do not have a right to exist that supersedes your right to continue their nonexistence c) it only works based on the premise that god wills you to have children on his timetable, and condoms fuck up his divine plan by making him fumble the sperm football.
Puppies grow up into pets which can at best live to service a human; babies are the larval form of rational superbeings. If puppies grew up into beings that could even approach human intelligence, there might be some level of comparison there. But outside the strangest of settings, potential is necessary for any analysis. Even in a setting where a baby cow might be more valuable than a baby human, it's still the potential that you're weighing. Saying otherwise is just philosophical masturbation and fake enlightenment, and it's the kind of silliness which would certainly hold back any form of transhumanism that could ever exist, if any could exist at all.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Covenant wrote: It's certainly reasonable to say "this living being will mature into being a person, this living being won't" and value the second less than the first.
It is difficult to argue this logically, because all goal systems are essentially arbitrary. However a system which assigns additional utility to potential over and above what is implied by normal expected utility applied to results is more complicated, less consistent, less reliable (in an AGI application) and to my mind thoroughly unnecessary.

Covenant wrote: Being 'transhuman' doesn't mean being a fucking idiot with regard to basic concepts outside of the immediate present.
Firstly, goal system content isn't correlated with intelligence. It is perfectly possible to have a superintelligence obsessed with the cause of turning the observable universe into paperclips. In humans more intelligent individuals tend to have more sophisticated and possibly more ethical goal systems, but this is not true of intelligences in general.

Covenant wrote: If one is aiming to uplift and expand the possibilities of the human race, one should be wary not to shoot themselves in the foot by forgetting that it's people, not dogs, which will one day solve the problems that you yourself have even yet to discover.
Secondly, expected utility already takes all of that into account. You don't have to assign utility directly to 'potential' in the utility function; in fact that just breaks everything and causes irrational behaviour, blatantly irrational and possibly dangerous at the corner cases in fact. The computed cost of say losing a baby includes (to the limits of computation) the entire spectrum of possible lives that baby could have had, each multiplied by the probability (from 0 to 1) of that outcome occurring, and then summed.
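Written out in symbols (my notation, nothing more than the sentence above formalised):

```latex
U_{\text{lost}}(\text{baby}) \;=\; \sum_{i} P(\text{life}_i) \cdot U(\text{life}_i)
```

where the sum runs over the possible lives the baby could have had. Adding a separate 'potential' term on top of this sum would just count those same futures twice.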
Covenant wrote: Because there's a difference between a baby which already exists and will mature into a thinking, independent, rational being and a dog which never will
Currently, yes, but note that the technology to uplift already-extant dogs is not fundamentally harder than say uploading humans.

Covenant wrote: whereas what the catholics are worried about are the hypothetical lives that are lost when you assume people are choosing not to become pregnant.
They're lives that don't occur. Logically (and considered in isolation) there is a utility cost to people not having children, if we assign positive utility to the existence of happy intelligent beings. Of course there's a lot of negative utility to making people do things against their will, so that's not a reason to make people have babies.

Covenant wrote: The reason the catholic argument is insane is because a) people are under no obligation to breed at every opportunity
Bleh. 'Obligations', 'free will', these things have no hard materialist grounding. It would be folly to use them as basic concepts in a goal system for generalised posthuman intelligence. We may want to respect them, but you have to make that behaviour implied by mechanisms attached to harder, more grounded concepts.

Covenant wrote: b) people who do not exist yet do not have a right to exist that supersedes your right to continue their nonexistence
That is essentially correct, but 'rights' are something you codify in legal systems, not goal systems. Or rather, humans do adhere to 'codes of honor' etc as a means of self-control, but inevitably there are cases where the rules don't work or have to be interpreted. I would much prefer to have conclusions like this emerge from a small set of axioms (i.e. a compact utility function), as opposed to trying to load down an artificial intelligence with hundreds of 'ethical rules'.
Re: Intelligence based value
C'mon, let's not do a line-by-line back and forth. You need to excuse me for not responding to the rest because I simply don't agree with any of your first principles, so we're never going to find a common ground there. But I do disagree that potential has no merit in the equation, and I think you'd agree if you looked at it more reasonably.
Starglider wrote: It is difficult to argue this logically, because all goal systems are essentially arbitrary. However a system which assigns additional utility to potential over and above what is implied by normal expected utility applied to results is more complicated, less consistent, less reliable (in an AGI application) and to my mind thoroughly unnecessary.

Well, you're incorrect. When you're setting up a colony, what's more valuable, ten tons of steak or ten tons of cattle? Potential is most certainly a big element to value, since things that increase in value over time will become more valuable in time--and if you plan ahead, will yield consistently higher value over any timeframe except the shortest one.
It applies to people too. Why do you value intelligence? Is it because intelligence provides any value at the moment? Of course not, then all you would value is skill. Intelligence is valued because a higher intelligence is an indicator for a higher capacity. If you really graded highest for those things that are most instantly valuable you'd be putting the plumbers and mechanics at the top of the pile, because their skills are valuable right now whereas a researcher in a medical facility won't provide a benefit (if any at all) for years and years.
Ergo, potential is a key indicator of value. It's just not the only indicator of value.
- Nephtys
- Sith Acolyte
- Posts: 6227
- Joined: 2005-04-02 10:54pm
- Location: South Cali... where life is cheap!
Re: Intelligence based value
This seems to me to be an overcomplication of a few very simple concepts.
A baby is a sentient creature of low intelligence that will become one of high intelligence.
A puppy is a sentient creature of low intelligence that will not.
An embryo is not a sentient creature, but has the potential to become one with high intelligence.
Major differences there. For that one angle, aborting an embryo is preventing a life from occurring. While killing a baby is ending one in progress.
It's pretty easy to say that something of human intelligence deserves human level of rights. That's not particularly complicated. However, it's not that easy to just make a magical, quantifiable system of morals where a dolphin equals a newborn.
For example, if I had the choice of saving a million tons of oil or some other equally precious resource, or a puppy, sorry. That oil comes first. Because it is valuable, rare, and something that people can make good use of. However, if I had the choice of saving a baby or a thousand puppies, that baby is going to get rescued unless there are particularly odd circumstances. If one must formulate a system of ethics, pragmatism must be kept in mind, otherwise it'd be as uselessly ambiguous as utilitarianism is in practice.
Re: Intelligence based value
Enforcer Talen wrote: animal uplifts would have full rights of personhood.

In this scenario humans do NOT have any greater potential than any other living thing - they would have more being provided with the basics... but presumably, no one is provided with just the basics if they are considered important.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Starglider wrote: It is difficult to argue this logically, because all goal systems are essentially arbitrary. However a system which assigns additional utility to potential over and above what is implied by normal expected utility applied to results is more complicated, less consistent, less reliable (in an AGI application) and to my mind thoroughly unnecessary.
Covenant wrote: Well, you're incorrect. When you're setting up a colony, what's more valuable, ten tons of steak or ten tons of cattle?

Are you even reading my posts? Expected utility automatically assigns utility to 'potential'; that's how planning and decision making works: utility flows backwards from end-goals until it reaches actions you can actually take. It is (highly) counterproductive to assign utility directly to 'potential', over and above this automatic assignment, because that screws up the calculations and causes irrational behaviour. Humans sometimes end up treating things that are actually intermediate goals as ends in themselves, but that's because our brains aren't good at creating and sustaining long inferential chains. AIs don't have this problem.
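If it helps, here is a toy back-propagation over a tiny decision tree (the structure and numbers are invented, not anyone's actual planner): terminal outcomes carry the utility, actions are scored by expectation over what follows, and 'potential' never needs a separate assignment.

```python
# Toy backward utility propagation over a tiny decision tree.
# Structure and numbers are invented purely to illustrate the idea.

# An action leads to chance branches of (probability, outcome); outcomes
# carry the terminal utility.
TREE = {
    "save_baby": [
        (0.9, "grows_up_well"),
        (0.1, "grows_up_badly"),
    ],
    "save_puppy": [
        (1.0, "happy_dog_life"),
    ],
}

TERMINAL_UTILITY = {
    "grows_up_well": 100.0,
    "grows_up_badly": 30.0,
    "happy_dog_life": 15.0,
}

def value(node):
    """Utility of a node: terminal value, or expectation over its branches."""
    if node in TERMINAL_UTILITY:
        return TERMINAL_UTILITY[node]
    return sum(p * value(child) for p, child in TREE[node])

# Utility 'flows backwards': actions are scored only by the outcomes they
# lead to, so the baby's 'potential' is already priced in.
best_action = max(TREE, key=value)
print({action: value(action) for action in TREE}, best_action)
```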
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Nephtys wrote: However, if I had the choice of saving a baby or a thousand puppies, that baby is going to get rescued unless there are particularly odd circumstances. If one must formulate a system of ethics, pragmatism must be kept in mind, otherwise it'd be as uselessly ambiguous as utilitarianism is in practice.

What number of puppies must there be before they are worth more than the baby? One million? One trillion? Fifty billion quadrillion? Would you destroy a galaxy full of planets populated by puppies to save a baby?
The statement 'one baby is worth more than an infinite number of puppies' can be accommodated in EU with some difficulty (i.e. transfinite mathematics), but goal systems like that tend to prove horribly inconsistent, prone to bizarre and unexpected behaviour when executed. Expected utility is actually the most clear, simple and practical goal system design (for designed intelligences) we have discovered.
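For the record, the usual way to encode 'one baby outweighs any number of puppies' without transfinite arithmetic is a lexicographic utility. A toy sketch of the construction (my own illustration, not anything from the thread), which also shows the kind of corner-case behaviour I mean:

```python
# Toy lexicographic utility: (human_term, other_term) compared as tuples,
# so no finite amount of 'other' value can ever outweigh one human life.
# Entirely an illustration of the construction discussed above.

def outcome_utility(babies_saved, puppies_saved):
    # Python tuples compare lexicographically: the first element dominates.
    return (babies_saved, puppies_saved)

galaxy_of_puppies = outcome_utility(babies_saved=0, puppies_saved=10**30)
one_baby          = outcome_utility(babies_saved=1, puppies_saved=0)

print(one_baby > galaxy_of_puppies)  # True: the baby wins regardless of magnitude

# The awkward side effect: any difference in the first component makes the
# agent completely indifferent to the second, so it would trade away 10**30
# puppies for a single extra point in the first term - exactly the sort of
# corner-case behaviour that makes such goal systems unpleasant in practice.
```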
- CaptainChewbacca
- Browncoat Wookiee
- Posts: 15746
- Joined: 2003-05-06 02:36am
- Location: Deep beneath Boatmurdered.
Re: Intelligence based value
Starglider wrote: What number of puppies must there be before they are worth more than the baby? One million? One trillion? Fifty billion quadrillion? Would you destroy a galaxy full of planets populated by puppies to save a baby?

Yes. Is that wrong? Can I just say NO amount of puppies is worth as much to me as a human life, or is that wrong?
Stuart: The only problem is, I'm losing track of which universe I'm in.
You kinda look like Jesus. With a lightsaber.- Peregrin Toker
Re: Intelligence based value
Starglider wrote: What number of puppies must there be before they are worth more than the baby? One million? One trillion? Fifty billion quadrillion? Would you destroy a galaxy full of planets populated by puppies to save a baby?
CaptainChewbacca wrote: Yes. Is that wrong? Can I just say NO amount of puppies is worth as much to me as a human life, or is that wrong?

It is inconsistent with what people actually do. Ever seen someone run into a burning building in order to save a pet, or even worse, a painting?
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Starglider wrote: What number of puppies must there be before they are worth more than the baby? One million? One trillion? Fifty billion quadrillion? Would you destroy a galaxy full of planets populated by puppies to save a baby?
CaptainChewbacca wrote: Yes. Is that wrong? Can I just say NO amount of puppies is worth as much to me as a human life, or is that wrong?

Goal systems aren't 'right' or 'wrong' as such, they're just consistent or inconsistent. Human goal systems are inherently inconsistent (non-transitive, non-deterministic, excessively context-sensitive, inconsistent under reflection and just generally messy). AI goal systems will tend to quickly evolve into a consistent configuration even if they start in an inconsistent configuration, due to the way reflection and general convergence on optimal reasoning works; this process is very difficult to predict so it should be avoided if possible.
You can declare a baby more important than an infinite number of puppies, and you can even make that formally consistent (with effort), but that results in a complicated goal system with a lot of arbitrary aspects to it. Personally I subscribe to a kind of 'Occam's Razor for goal systems', in that simpler ethical systems are preferable to complex ones, if the extra arbitrariness is not adding anything really relevant. This is of course ultimately a goal system component itself, a meta-goal if you will, on top of the fact that I simply disagree with you.
Obviously I am now compelled to build a benevolent AI superintelligence that will defend the peaceful galaxy of Pupulon from your horrific baby-powered V'Ger style annihilation probe.
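As a cartoon of the 'Occam's Razor for goal systems' idea above: treat candidate goal systems as explicit rule sets and penalise a crude description length, so every special-case clause has to earn its keep. The rule sets and the scoring below are invented purely for illustration.

```python
# Toy 'Occam's razor for goal systems': score candidate goal systems by a
# crude description-length proxy. Rule sets and scoring are invented.

simple_goal_system = [
    "utility of an entity is proportional to its cognitive capacity",
]

complicated_goal_system = [
    "utility of an entity is proportional to its cognitive capacity",
    "except human infants, which outrank any number of non-humans",
    "except the senile, who are ranked as if at peak capacity",
]

def description_length(rules):
    """Crude complexity proxy: total characters needed to state the rules."""
    return sum(len(rule) for rule in rules)

def preferred(*candidates):
    # All else being equal, prefer the shorter (less arbitrary) system.
    return min(candidates, key=description_length)

print(preferred(simple_goal_system, complicated_goal_system))
```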
Re: Intelligence based value
How can you say no to such cuteness?
I think CC's objection is based on his goal system being 'humans are the only value, and things are valued based on how much they can assist humanity'. The problem being that 'human' is not an entirely discrete category, thanks to the power of science.
Re: Intelligence based value
Starglider wrote: Personally I subscribe to a kind of 'Occam's Razor for goal systems', in that simpler ethical systems are preferable to complex ones, if the extra arbitrariness is not adding anything really relevant.

You could probably justify this "Occam's Razor for goal systems" by noting that unintended consequences increase significantly as the goal system increases in complexity, right?
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
- CaptainChewbacca
- Browncoat Wookiee
- Posts: 15746
- Joined: 2003-05-06 02:36am
- Location: Deep beneath Boatmurdered.
Re: Intelligence based value
Oh, I've known people to risk their OWN lives for a puppy. I haven't, however, heard of a person running into a burning building to save a puppy while ignoring a baby also in the building.
Stuart: The only problem is, I'm losing track of which universe I'm in.
You kinda look like Jesus. With a lightsaber.- Peregrin Toker
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Samuel wrote: I think CC's objection is based on his goal system being 'humans are the only value, and things are valued based on how much they can assist humanity'.

Oh good, in that case I can flag him as 'carbon-fascist scum' and let the killsats take him out.
Surlethe wrote: You could probably justify this "Occam's Razor for goal systems" by noting that unintended consequences increase significantly as the goal system increases in complexity, right?

I already did, but that's only in the case of you designing another entity, with a goal system different from your own, which you want to exhibit specific behaviour. By definition you can't have 'unintended consequences' from your own goal system, because anything you want is 'intentional'. Of course if your goal system is inconsistent you may get conflicted and confused to the point that you start worrying about this (e.g. you adopt a rigid code of honor that has 'unintended consequences'), but that's your own fault for being so damn irrational.
Re: Intelligence based value
CaptainChewbacca wrote: Oh, I've known people to risk their OWN lives for a puppy. I haven't, however, heard of a person running into a burning building to save a puppy while ignoring a baby also in the building.

Except just by risking their own life they are showing they value the puppy for a substantial fraction of their own. The fact that babies are considered more valuable does not change the fact that people consider puppies worth a fraction of their own life (greater than the odds of death times the value of their life, in fact). Which means that a given number of puppies is more valuable than a baby.
Starglider wrote: Oh good, in that case I can flag him as 'carbon-fascist scum' and let the killsats take him out.

In hindsight, Use of Weapons was both a good and a bad book to be introduced to the Culture series.
Re: Intelligence based value
Starglider wrote:
Secondly, expected utility already takes all of that into account. You don't have to assign utility directly to 'potential' in the utility function; in fact that just breaks everything and causes irrational behaviour, blatantly irrational and possibly dangerous at the corner cases in fact. The computed cost of say losing a baby includes (to the limits of computation) the entire spectrum of possible lives that baby could have had, each multiplied by the probability (from 0 to 1) of that outcome occurring, and then summed.
Yup. I can go with that. There may be ways and assumptions to shortcut the kind of calculation involved, but that's fundamentally something I can agree with. I'd be amazed if that calc gave you an equal value to a puppy though.
Besides, in a utilitarian future (as you specified) I'd expect utility to be calculated according to the benefit of society from that individual; society being defined as all beings with a weighting for intelligence (as you said).
Anything attempting to numerically represent an ethics system is bound to fail through iterative assumptions. Fun to try though.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Starglider wrote: The computed cost of say losing a baby includes (to the limits of computation) the entire spectrum of possible lives that baby could have had, each multiplied by the probability (from 0 to 1) of that outcome occurring, and then summed.
madd0ct0r wrote: Yup. I can go with that. There may be ways and assumptions to shortcut the kind of calculation involved

That is a fiendishly complicated subject. Expected utility theory is defined based on probability calculus and preference functions over universe timelines, both of which assume infinite computing power by default. Probability theory is the optimal way to handle limited information, but handling limited computing power requires some pretty knotty recursion. It's bad enough making a general system for the allocation of computing power that avoids infinite regress (in the sense of trying to decide how to decide how to decide how to decide... (n iterations)... how to decide what to do), without moral hazard getting in there as well.
madd0ct0r wrote: I'd be amazed if that calc gave you an equal value to a puppy though.

Our goal systems are incredibly close, in the scope of all possible goal systems, if we can simply agree that the existence of happy babies and puppies both have finite, comparable utility. It's really asking too much to expect us both to assign the exact same weights to each entity.
madd0ct0r wrote: Besides, in a utilitarian future (as you specified) I'd expect utility to be calculated according to the benefit of society from that individual; society being defined as all beings with a weighting for intelligence (as you said).

I hope the important decisions will be made that way, but it'd be kind of boring if every intelligence made all their decisions that way.
madd0ct0r wrote: Anything attempting to numerically represent an ethics system is bound to fail through iterative assumptions.

I hope not, because when we get to the point of making general AIs, if you want ethics you have to express it in maths and/or program code (well strictly you can try to teach ethics like you would to a child, but this is an enterprise doomed to horrible failure).
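To give a sense of what 'expressing it in maths and/or program code' even looks like at the toy level: a utility function over a world state that weights each entity by some cognitive-capacity score and subtracts a direct penalty for involuntary harm. Every field, weight and number below is invented for illustration, not a proposal.

```python
# A deliberately tiny utility function over a world state, illustrating what
# 'ethics as program code' could look like. All values are invented.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    cognitive_capacity: float   # some measured score in [0, 1]
    wellbeing: float            # current wellbeing, arbitrary units
    killed_against_will: bool = False

KILLING_PENALTY = 1000.0

def utility(world: list[Entity]) -> float:
    total = 0.0
    for e in world:
        # Wellbeing matters in proportion to cognitive capacity...
        total += e.cognitive_capacity * e.wellbeing
        # ...plus a direct penalty for terminating an intelligence against its will.
        if e.killed_against_will:
            total -= KILLING_PENALTY * e.cognitive_capacity
    return total

world = [
    Entity("adult human", 1.0, 50.0),
    Entity("dolphin", 0.6, 40.0),
    Entity("puppy", 0.2, 30.0),
]
print(utility(world))
```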
Re: Intelligence based value
Starglider wrote: What number of puppies must there be before they are worth more than the baby? One million? One trillion? Fifty billion quadrillion? Would you destroy a galaxy full of planets populated by puppies to save a baby?
CaptainChewbacca wrote: Yes. Is that wrong? Can I just say NO amount of puppies is worth as much to me as a human life, or is that wrong?

Conversely, I can confidently state I would readily kill a person to save Earth's canines from complete extinction. I say so from an emotional standpoint, but based on utility it's smart too; rescue and companion dogs would make up the difference in a couple of minutes. It rapidly gets harder if we put numbers to it, though. Would I kill 250 people to save 30% of the world's dogs?
What would you call this generalized subject, Starglider? It's intriguing. You may have said already and I missed it, sleep dep has me running at reduced mental capacity and I skimmed a lot of the thread.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Intelligence based value
Sriad wrote: What would you call this generalized subject, Starglider? It's intriguing.

I don't think there's a term for it other than just 'utilitarian ethics'. Expected utility theory and probability calculus are the tools used to make decisions, given a particular utility function, and there are plenty of books and papers on those. I haven't seen much material on the design of general utility functions themselves, though; economists tend to use trivialised examples and philosophers generally refuse to sully themselves with anything as intellectually rigorous as actual equations.