Ethics based on achieving goals

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Ethics based on achieving goals

Post by Elaro »

I've been kicking around this idea for a while now, and I'd like some feedback.

I was thinking of a common ethical system that we could share with beings that don't feel like us, specifically an embodied AI person. This is a fusion of the ethical theories of Utilitarianism and Existentialism, as understood by someone who read Utilitarianism by John Stuart Mill and L'existentialisme est un humanisme by Jean Paul Sartre. So... not very deep, I'm sorry.

The main gist of it is that the best course of action is the one that will achieve the most goals for the most people for the greatest amount of time.

Let me explain. A goal is a description of the world that someone will seek to make true in reality. This implies a choice by a person, and a willingness to expend resources to achieve it. A will is a set of non-contradictory goals that one person has. The will can change, as new goals are added and some goals are abandoned once they're realized to be impossible. I'm not sure whether the will is ordered or not; the question "which do you want more?" does not always return just one answer.

"For the most people" means exactly that, regardless of where they are or whether they're real or potential. So, if someone had to choose between making two generations miserable for a high probability of the next ten generations achieving their goals or having two early generations achieve their goals with a high probability of the next ten generations being frustrated, assuming each generation has the same number of people in it, under my system, the good choice would be the first. Of course, whether the first two generations will cooperate is another good question.

I use the phrase "for the greatest amount of time" because the phrase "the longest amount of time" implies a continuous section of time. This isn't what I mean. For example, if I want to play a game, or eat good food, or feel good, I could do any of those things non-stop, but I'd be dead within weeks (or from a burst stomach). So, I should choose to interrupt my happy time for the sake of more overall happy time, even though the happy time sessions don't last as long.

I think this principle can be used to judge goals and determine which ones are better, solely by virtue of which one will more likely allow more other goals to be achieved. For example, say I want to build a house to live in, and another person wants to build a merry-go-round. My hypothetical house is only useful to me (until I sell it), but I certainly need a house to achieve other goals, while the merry-go-round is not as certainly helpful. That's not to say we should never build merry-go-rounds; it just means we should make sure everyone is housed first.

Alright, alright, I'll come clean. I want a computable model of ethics. If a goal is described mathematically, and we can calculate what is necessary for it (whether it has sub-goals, and whether it is itself a sub-goal of some other goal), then we can put all those goals and sub-goals in a graph, and the best goals are the ones on the shortest path... I think.
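(For concreteness, here is a minimal sketch of the kind of goal graph I'm imagining, in Python. Everything in it is invented for illustration: the goals, the "enables" edges, and the scoring rule, which reads "best" as "transitively unlocks the most other goals" rather than literally computing shortest paths.)

# Toy goal graph: an edge A -> B means "achieving A helps achieve B".
# All goals, edges, and the scoring rule are hypothetical illustrations.
enables = {
    "build house": {"raise family", "hold steady job", "host friends"},
    "hold steady job": {"save money"},
    "save money": {"build merry-go-round"},
    "build merry-go-round": {"have fun"},
    "raise family": set(),
    "host friends": {"have fun"},
    "have fun": set(),
}

def reachable(goal, graph, seen=None):
    """Every goal transitively enabled by `goal`, excluding itself."""
    if seen is None:
        seen = set()
    for nxt in graph.get(goal, ()):
        if nxt not in seen:
            seen.add(nxt)
            reachable(nxt, graph, seen)
    return seen

# Rank goals by how many other goals they unlock; "build house" comes out on top here.
for g in sorted(enables, key=lambda g: len(reachable(g, enables)), reverse=True):
    print(g, len(reachable(g, enables)))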
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Ethics based on achieving goals

Post by Simon_Jester »

Two things are problematic with your basic approach.

One is that not all entities share compatible goals. Some entities have goals that are by your standards or mine 'insane.' And applying computable ethics to 'insane' goals leads to 'insane' results.

The second, related to the first, is that a useful system of ethics is prescriptive: you can use it to answer the question "what should I do?" One of the key aspects of that is being able to answer the question "what should my goals be?"

Useful ethics tell us things not only about what it is prudent or imprudent to do, but what it is right or wrong to want.
This space dedicated to Vasily Arkhipov
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Ethics based on achieving goals

Post by Purple »

All of this is secondary, though, to the real issue with ethics: they only work so long as everyone agrees to play by the same rules. The moment you have more than one competing system, both become practically useless, because you can no longer draw a straight line between your actions and their results as reflected in the reactions of others.

That, by the way, is the primary reason why I hold ethics as a discipline in low regard. I think that we should instead focus our efforts on understanding what makes people tick on a mechanical level, so that the question we can answer is not "what should I do?" but "what is he going to do?", and we can then use that knowledge to answer "how should I go about achieving my goals?"
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Ethics based on achieving goals

Post by Starglider »

Obvious practical issue: determining what people actually want. Most people are not sure what they really want, and a literal implementation of what they say they want would actually produce highly undesirable results.

Obvious ethical issue: you are allowing the oppression of minorities if a majority of humanity wants, say, gay sex to be illegal. Fulfilling the majority's goal of 'living in a morally correct society' outweighs the minority's goal of 'being themselves', and thus stomps the liberties of the minority.

Implementing this would also need so many arbitrary weightings and fudge factors that you would still be to a large extent authoring original ethical content, just slightly indirectly, defeating your goal of making it purely democratic. Of course you could try to crowdsource all the meta-level content as well, but the results of that would be... chaotic. As a mad scientist I confess to a certain curiosity as to what exactly would happen, but I'm certain that I wouldn't want to be standing on the same planet at the time.

There is a concept roughly similar to this called 'coherent extrapolated volition' in serious AI ethics debate, but it is much better thought out (although still problematic).
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Ethics based on achieving goals

Post by Feil »

Goal-seeking ethics are more sensible than hedonistic ethics, but you're going to need to create a non-arithmetic hierarchy of goals to make it work: people can have functionally infinite numbers of goals, so you can't weight them according to a static numerical value and add them linearly; and some goals are just plain bad.

For instance, if fifty people want to torture kittens all day every day for the rest of their lives, your system awards high ethics-points to helping them achieve their goal, and if I want to rescue the kittens just once, your system awards high ethics-points to stopping me.

One of the better* goal-based ethics systems I've seen is one that can fall out of Confucianism if you shake it hard enough. I'm no ethics expert, but it goes something like this:

"Take those actions which have the highest probability to maximize human flourishing1 on the micro2 and macro3 scale, while minimizing suffering (human and otherwise) in general. When micro and macro conflict, seek non-negative contributions to flourishing for both. If that's not possible, seek the smallest number of lives completely deprived of flourishing, subject to the above. Rate preserving your own capacity to flourish fairly highly relative to improving the flourishing of others."

1 - human flourishing being a catch-all that creates a need for another hierarchy of goals - our objective here is just to get started on the right track and find a way to set up our goal-seeking system in such a way as to ignore what people want and focus on what they ought to have.
2 - micro scale implies the outcome when we zoom in on any particular part of the people that are affected by our actions, most importantly the ones directly and immediately affected. The idea is to weight outcomes in which everyone benefits over outcomes in which some people benefit a lot and some people get pushed under the bus. You can probably express this mathematically as attempting to minimize the magnitude of local minima (with indifference to the number of local minima!) on some sort of n-dimensional moral curve, where n is the number of differently affected groups.
3 - macro scale implies large groups and long-term outcomes, the big-picture view as opposed to the zoomed-in view of 2. Express this mathematically as the time integral of the n-dimensional moral curve.

*Better implying how well it approximates gut-feeling ethical decision-making.
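(A rough numerical reading of footnotes 2 and 3, purely illustrative: give each affected group a flourishing score per time step, treat "minimize the magnitude of local minima" as "protect the worst-off group", and treat the time integral as a plain sum over time. The groups, scores and weights below are invented.)

def evaluate(trajectories, micro_weight=1.0, macro_weight=1.0):
    """trajectories: group name -> list of flourishing scores over time (higher is better)."""
    micro = min(min(scores) for scores in trajectories.values())  # worst local minimum across groups
    macro = sum(sum(scores) for scores in trajectories.values())  # crude time integral of total flourishing
    return micro_weight * micro + macro_weight * macro

# Option A pushes one group under the bus; option B lifts everyone a little.
option_a = {"majority": [10, 10, 10], "minority": [-4, -4, -4]}
option_b = {"majority": [3, 3, 3], "minority": [2, 2, 2]}
print(evaluate(option_a), evaluate(option_b))  # 14.0 vs 17.0: B wins despite the smaller total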
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Ethics based on achieving goals

Post by K. A. Pital »

I want a computable model of ethics.
You are bound to use some form of utilitarianism then, and this leads to the reduction of the minority opinion at best and actual destruction of minorities at worst.

In the words of Obi-Wan, I have a bad feeling about this.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of "likes", crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
biostem
Jedi Master
Posts: 1488
Joined: 2012-11-15 01:48pm

Re: Ethics based on achieving goals

Post by biostem »

This seems like it'd turn into an "end justifies the means" scenario, since the goal is the central focus of this system. IMO, a system that starts with some core principles like "minimize harm" or "life is preferable to death" would be better.
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Ethics based on achieving goals

Post by Feil »

K. A. Pital wrote:
I want a computable model of ethics.
You are bound to use some form of utilitarianism then, and this leads to the reduction of the minority opinion at best and actual destruction of minorities at worst.

In the words of Obi-Wan, I have a bad feeling about this.
The fact that you can articulate a specific problem means that you can assign a crunchable function to it that favors outcomes where the problem doesn't occur. One may not be able to make a computable ethics that won't give Obi-Wan a bad feeling, but one can make a computable ethics that upholds minority rights. Or follows Shari'a, assigning various ethics-points values to different tenets. Or just asks K.A.Pital what it should do, and does that. Or, as is the topic of this thread, tries to promote accomplishment of goals, including negutilitarian goals. Utility is far from the only concept that can be abstracted as a function.
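(To make "crunchable function" concrete, one illustrative sketch: keep whatever base score you like and subtract a heavy penalty whenever a stated constraint is violated. The rights-floor constraint, the groups and the numbers are all invented for the example.)

PENALTY = 1_000_000  # large enough to dominate any realistic base score

def constrained_score(outcome, base_score, rights_floor=0):
    """outcome: group -> well-being score. Any group below the floor sinks the whole option."""
    violations = sum(1 for score in outcome.values() if score < rights_floor)
    return base_score(outcome) - PENALTY * violations

naive_sum = lambda outcome: sum(outcome.values())  # plain-vanilla utilitarian base score

oppress = {"majority": 120, "minority": -30}   # bigger raw sum (90), but the floor is violated
tolerate = {"majority": 70, "minority": 10}    # smaller raw sum (80), nobody below the floor
print(constrained_score(oppress, naive_sum), constrained_score(tolerate, naive_sum))  # -999910 vs 80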
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Ethics based on achieving goals

Post by Starglider »

K. A. Pital wrote:You are bound to use some form of utilitarianism then, and this leads to the reduction of the minority opinion at best and actual destruction of minorities at worst.
No it doesn't. That is just the more naive and popular forms of philosophical utilitarianism, which is to say, the simple and obvious suggestions for utility functions. You can write utility functions that are as ridiculously SJW as you like. It's not a good idea (because it quickly becomes completely undemocratic) but it can be expressed. 'Computable' in this context just means 'well-specified and consistent'. Of course these are highly novel concepts for most members of the far left.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Ethics based on achieving goals

Post by K. A. Pital »

"Because it quickly becomes completely undemocratic" - utilitarianism is at its best expressed by the idea of the acceptability of horribly torturing or killing one person if it really saves 10 people. Kill one to save two.

That's it. There's nothing more to it: you can make your utility function as sophisticated as you like, but if this approach is abandoned it is no longer utilitarianism, maybe not even consequentialism. You can write a ridiculous function that imposes suffering on 10 people for the benefit of one, but since it defies the minimization of suffering and the maximization of well-being, a core imperative of classic utilitarian philosophy, your function won't be utilitarian.

That is, you can say that maximum utility is achieved when you torture 10 for the pleasure of one, but that's a worthless statement that has value neither for ethics nor for harder science. It's just worthless talk.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here, meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, squares full of "likes", crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Ethics based on achieving goals

Post by madd0ct0r »

Elaro wrote:
Alright, alright, I'll come clean. I want a computable model of ethics. If a goal is described mathematically, and we can calculate what is necessary for it (whether it has sub-goals, and whether it is itself a sub-goal of some other goal), then we can put all those goals and sub-goals in a graph, and the best goals are the ones on the shortest path... I think.
The graph approach would work, if you include probabilities of failure/uncertainty (including forecasting difficulties, discounting the future) and weight them appropriately. You are setting goals for a chaotic system, after all.
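(A minimal sketch of that amendment, with invented numbers: every edge carries the probability that achieving the parent actually helps the child, and payoffs further out are discounted, so a goal's score becomes the expected discounted count of the goals downstream of it.)

# Each edge: (child goal, probability that achieving the parent actually enables it).
# The structure, the probabilities and the 0.9 discount are all made up for illustration.
edges = {
    "build house": [("hold steady job", 0.9), ("raise family", 0.7)],
    "hold steady job": [("save money", 0.8)],
    "save money": [("build merry-go-round", 0.5)],
    "raise family": [],
    "build merry-go-round": [],
}

def expected_goals(goal, discount=0.9, depth=1):
    """Expected discounted number of downstream goals unlocked by achieving `goal`."""
    total = 0.0
    for child, p in edges.get(goal, []):
        # the child counts once, discounted by how far out it is, plus whatever it unlocks in turn
        total += p * (discount ** depth + expected_goals(child, discount, depth + 1))
    return total

for g in edges:
    print(g, round(expected_goals(g), 2))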
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Ethics based on achieving goals

Post by Elaro »

Simon_Jester wrote:Two things are problematic with your basic approach.

One is that not all entities share compatible goals. Some entities have goals that are by your standards or mine 'insane.' And applying computable ethics to 'insane' goals leads to 'insane' results.

The second, related to the first, is that a useful system of ethics is prescriptive: you can use it to answer the question "what should I do?" One of the key aspects of that is being able to answer the question "what should my goals be?"

Useful ethics tell us things not only about what it is prudent or imprudent to do, but what it is right or wrong to want.
Point 1: Why say some goals are sane and some are insane? What's the difference? Existentialism "says" that all life projects inherently exist/are acceptable and must be accounted for. Like, say you come to someone and you say, "oh, that goal is insane", they're not going to change their goal just because you reject it. You have to give reasons, and the only reason I see is that it would hurt your chances or other people's chances of achieving other goals. And then there are the impossible goals; but by their very nature they are impossible to achieve, so all possible actions are equally likely to accomplish them (the likelihood being 0), and thus, if your life project is to accomplish only impossible goals, the best thing you can do is help someone else achieve their goals, if those are possible.

Point 2: I used to refer to this system as a will-based system, where we do what will achieve what we want, but I didn't want to be associated with Nazis, so... I consider a goal that someone has to be an intrinsic property of their selves, a "fact of life", a "law of nature". One of my earliest definitions of a goal was "the properties that tend to express themselves when no constraints are put on the subject", but I'm not sure about the proper phrasing. My point is, you can't enforce a certain goal on people, and I don't expect a system of ethics to do that. Now, a system of ethics can compare goals, rank goals, but it can't create goals out of whole cloth, I guess is what I'm saying.
Starglider wrote:Obvious practical issue: determining what people actually want. Most people are not sure what they really want, and a literal implementation of what they say they want would actually produce highly undesirable results.

Obvious ethical issue: you are allowing the oppression of minorities if a majority of humanity wants, say, gay sex to be illegal. Fulfilling the majority's goal of 'living in a morally correct society' outweighs the minority's goal of 'being themselves', and thus stomps the liberties of the minority.

Implementing this would also need so many arbitrary weightings and fudge factors that you would still be to a large extent authoring original ethical content, just slightly indirectly, defeating your goal of making it purely democratic. Of course you could try to crowdsource all the meta-level content as well, but the results of that would be... chaotic. As a mad scientist I confess to a certain curiosity as to what exactly would happen, but I'm certain that I wouldn't want to be standing on the same planet at the time.

There is a concept roughly similar to this called 'coherent extrapolated volition' in serious AI ethics debate, but it is much better thought out (although still problematic).
On the practical issue: I agree completely. That's one of the problems I am trying to solve by talking about this. Obviously, the prescriptor will have to do some predictive work to see what effects pursuing a particular goal will have, and whether they are coherent, firstly, with the other goals that the answer-seeker has, and secondly with the goals of other people, existing or potential.

On the ethical issue: The prescriptor would ask itself: What would be accomplished by punishing gay sex? If it finds that punishing gay sex reduces the amount of goals achieved by the entirety of society, it will not prescribe the punishment. For example, it could find that the cost of punishment is hindering other goals, or it could find that allowing the goal of gay sex to be accomplished does not reduce the amount of other goals that people who do not engage in gay sex can achieve.

The system should only stop people from doing something that they want when doing that thing would result in less people doing something that they want.

Also, I don't care about making it democratic. It's not about empowering the people to make laws, it's about empowering the people to achieve their goals. Not all their goals, but an optimum amount of goals.
Feil wrote:Goal-seeking ethics are more sensible than hedonistic ethics, but you're going to need to create a non-arithmetic hierarchy of goals to make it work: people can have functionally infinite numbers of goals, so you can't weight them according to a static numerical value and add them linearly; and some goals are just plain bad.

For instance, if fifty people want to torture kittens all day every day for the rest of their lives, your system awards high ethics-points to helping them achieve their goal, and if I want to rescue the kittens just once, your system awards high ethics-points to stopping me.

One of the better* goal-based ethics systems I've seen is one that can fall out of Confucianism if you shake it hard enough. I'm no ethics expert, but it goes something like this:

"Take those actions which have the highest probability to maximize human flourishing1 on the micro2 and macro3 scale, while minimizing suffering (human and otherwise) in general. When micro and macro conflict, seek non-negative contributions to flourishing for both. If that's not possible, seek the smallest number of lives completely deprived of flourishing, subject to the above. Rate preserving your own capacity to flourish fairly highly relative to improving the flourishing of others."

1 - human flourishing being a catch-all that creates a need for another hierarchy of goals - our objective here is just to get started on the right track and find a way to set up our goal-seeking system in such a way as to ignore what people want and focus on what they ought to have.
2 - micro scale implies the outcome when we zoom in on any particular part of the people that are affected by our actions, most importantly the ones directly and immediately affected. The idea is to weight outcomes in which everyone benefits over outcomes in which some people benefit a lot and some people get pushed under the bus. You can probably express this mathematically as attempting to minimize the magnitude of local minima (with indifference to the number of local minima!) on some sort of n-dimensional moral curve, where n is the number of differently affected groups.
3 - macro scale implies large groups and long-term outcomes, the big-picture view as opposed to the zoomed-in view of 2. Express this mathematically as the time integral of the n-dimensional moral curve.

*Better implying how well it approximates gut-feeling ethical decision-making.
re: the kittens: Not at all. The system doesn't care how many people want a thing, it cares about whether accomplishing that thing helps other goals be accomplished or not. In this case, the goal of "torturing kittens" doesn't help anyone, and kittens would be traumatized by the experience, so the chances that they achieve their goals would be diminished, and so the prescriptor should evaluate the goal of "torturing kittens" as bad. Yes, I'm considering cats as very stupid people. Now, if they were eating them, we'd have to weigh the value of the goals being achieved by the humans (as measured by "how many other goals are helped by achieving this goal") thanks to the energy gained from eating the cats against the value of the goals being achieved by the cats during their lifetime.

I suppose that's not exactly what I said in the OP, but I think that's what I meant. Thank you for helping me elucidate this.
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Re: Ethics based on achieving goals

Post by His Divine Shadow »

Simon_Jester wrote:Two things are problematic with your basic approach.

One is that not all entities share compatible goals. Some entities have goals that are by your standards or mine 'insane.' And applying computable ethics to 'insane' goals leads to 'insane' results.

The second, related to the first, is that a useful system of ethics is prescriptive: you can use it to answer the question "what should I do?" One of the key aspects of that is being able to answer the question "what should my goals be?"

Useful ethics tell us things not only about what it is prudent or imprudent to do, but what it is right or wrong to want.
Hmmm, perhaps a goal could be to realize as many (human) entities' goals as possible, i.e. compromising and moderating other goals in relation to this. Might take the edge off some extreme behavior an AI might come up with if part of its goal values is to help as many humans as possible. While this seems hedonistic, it seems to me that in order to realize these goals for as many as possible, it'll have to give us what we need rather than what we want.

I can't turn the surface into grey goo for maximum computing power, because that'd interfere with the goals of others.

Though this way it would ignore non-human needs; but humans need nature and a working ecosystem, so that should be protected against being removed from the universe. But what about aliens? Might such an AI want to commit genocide against other life if it is found?

Hmmm, what if there are alien AIs thinking just like that out there somewhere.... We're a potential threat to their biological creators; best remove us if discovered....
Those who beat their swords into plowshares will plow for those who did not.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Ethics based on achieving goals

Post by Simon_Jester »

His Divine Shadow wrote:Hmmm, perhaps a goal could be to realize as many (human) entities' goals as possible, i.e. compromising and moderating other goals in relation to this. Might take the edge off some extreme behavior an AI might come up with if part of its goal values is to help as many humans as possible.
Result: AI forcibly breeds large numbers of humans as test-tube babies and sticks them in virtual reality simulators to keep them happy.

Or: AI grows large numbers of human brains, with whatever minimum of organs and support structure are required for a brain to count as a complete 'human,' and hardwires electrodes into their pleasure centers to keep them 'happy' all the time.
Though this way it would ignore non-human needs; but humans need nature and a working ecosystem, so that should be protected against being removed from the universe.
Not necessarily. An AI that was sufficiently ruthless and organized could probably render down the whole ecosystem of the Earth for building materials, used at maximum physically possible efficiency, and support much higher populations on Earth than the natural ecosystem could.
Elaro wrote:Point 1: Why say some goals are sane and some are insane? What's the difference? Existentialism "says" that all life projects inherently exist/are acceptable and must be accounted for.
Existentialism says that all people exist, it does not say that all things are equally good or desirable.

If you have no means of differentiating good/sensible/sane goals from bad/nonsensical/insane goals, you have no ethical system at all, other than "do what thou wilt." That is no more a system of ethics than anarchy is a form of government.
Like, say you come to someone and you say, "oh, that goal is insane", they're not going to change their goal just because you reject it. You have to give reasons, and the only reason I see is that it would hurt your chances or other people's chances of achieving other goals.
One could also prove that a goal was self-contradictory- that in an attempt to achieve it, one would defeat the purpose of achieving it in the first place. Or one could prove that achieving the goal comes with undesirable consequences (aesthetic, practical, or otherwise).
My point is, you can't enforce a certain goal on people, and I don't expect a system of ethics to do that. Now, a system of ethics can compare goals, rank goals, but it can't create goals out of whole cloth, I guess is what I'm saying.
Ethics cannot enforce goals but, as you say, it can rank them - and indeed this is one of the main reasons to have ethics in the first place.

If my goal is "build a bunch of houses," I don't really need ethics to figure out how to build a house. What I need ethics for is to tell me what to do if my goal "build a house" comes into conflict with someone else's goal "save the spotted owls living in the forest you're planning to cut down to make room for the houses."

An ethical system that doesn't rank goals is largely useless for prescribing a course of action. Ethics that don't prescribe courses of action would be useless. Therefore, ethical systems have to rank goals... And when you rank goals, you must implicitly be willing to rank some of them so lowly that they can only be called "unworthy" or "insane" or "foolish" or some such.
Starglider wrote:Obvious practical issue: determining what people actually want. Most people are not sure what they really want, and a literal implementation of what they say they want would actually produce highly undesirable results.
On the practical issue: I agree completely. That's one of the problems I am trying to solve by talking about this. Obviously, the prescriptor will have to do some predictive work to see what effects pursuing a particular goal will have, and whether they are coherent, firstly, with the other goals that the answer-seeker has, and secondly with the goals of other people, existing or potential.
Some? Try "looots."
Obvious ethical issue: you are allowing the oppression of minorities if a majority of humanity wants, say, gay sex to be illegal. Fulfilling the majority's goal of 'living in a morally correct society' outweighs the minority's goal of 'being themselves', and thus stomps the liberties of the minority.
On the ethical issue: The prescriptor would ask itself: What would be accomplished by punishing gay sex? If it finds that punishing gay sex reduces the amount of goals achieved by the entirety of society, it will not prescribe the punishment. For example, it could find that the cost of punishment is hindering other goals, or it could find that allowing the goal of gay sex to be accomplished does not reduce the amount of other goals that people who do not engage in gay sex can achieve.
Thing is, people really quite sincerely believe that one of their fundamental and deeply important goals (prevent all humans from being tortured for eternity in hell) is being frustrated by gay sex.

If your prescriptor takes them at their word that this goal (save all humans from eternal torture) is important, then minority rights go out the window.

On the other hand, if your prescriptor starts automatically discounting 'religious' goals due to the lack of evidence, then you have probably wound up outlawing the practice of religion altogether. And other aspects of human culture that don't pass "logical" muster in the mind of a nonhuman machine are going to go out the window in short order.
The system should only stop people from doing something that they want when doing that thing would result in less people doing something that they want.

Also, I don't care about making it democratic. It's not about empowering the people to make laws, it's about empowering the people to achieve their goals. Not all their goals, but an optimum amount of goals.
Do we rank goals here, or merely count them?
re: the kittens: Not at all. The system doesn't care how many people want a thing, it cares about whether accomplishing that thing helps other goals be accomplished or not.
Well then which goals are being treated as desirable ends in and of themselves?

I mean, if I paint, that won't help anyone else do anything (probably). If I torture kittens, that won't help anyone else do anything either. It hurts the kittens, if we're counting them, but there are some other side effects if we do that.

Is my goal to do what I want counted, or is it only counted insofar as it helps other people do something they want? If the latter, isn't that recursive? What goals are treated as desirable goods in and of themselves, as opposed to being instrumental goods for the sake of some other goal?
Now, if they were eating [the kittens], we'd have to weigh the value of the goals being achieved by the humans (as measured by "how many other goals are helped by achieving this goal") thanks to the energy gained from eating the cats against the value of the goals being achieved by the cats during their lifetime.
This results in all humans being forced to practice vegetarianism and the cultivation of animals for meat (as opposed to milk, eggs, et cetera) coming to an end. If any number of goals a cow can have outweighs my goal of "have a steak dinner," then it's pretty much guaranteed I shouldn't be eating steak dinners.
This space dedicated to Vasily Arkhipov
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Ethics based on achieving goals

Post by Elaro »

Simon_Jester wrote:
His Divine Shadow wrote:Hmmm, perhaps a goal could be to realize as many (human) entities' goals as possible, i.e. compromising and moderating other goals in relation to this. Might take the edge off some extreme behavior an AI might come up with if part of its goal values is to help as many humans as possible.
Result: AI forcibly breeds large numbers of humans as test-tube babies and sticks them in virtual reality simulators to keep them happy.

Or: AI grows large numbers of human brains, with whatever minimum of organs and support structure are required for a brain to count as a complete 'human,' and hardwires electrodes into their pleasure centers to keep them 'happy' all the time.
No, it's not about making humans "happy". It's about helping achieve the goals of beings with goals. And how is giving me the illusion that my goal is achieved the same as actually achieving that goal?
Elaro wrote:Point 1: Why say some goals are sane and some are insane? What's the difference? Existentialism "says" that all life projects inherently exist/are acceptable and must be accounted for.
Existentialism says that all people exist, it does not say that all things are equally good or desirable.

If you have no means of differentiating good/sensible/sane goals from bad/nonsensical/insane goals, you have no ethical system at all, other than "do what thou wilt." That is no more a system of ethics than anarchy is a form of government.
I'm saying all goals have, before examination, the same value. Except that, during examination, we find out that some goals allow more other goals to be achieved, and therefore they are to be prioritized.
Like, say you come to someone and you say, "oh, that goal is insane", they're not going to change their goal just because you reject it. You have to give reasons, and the only reason I see is that it would hurt your chances or other people's chances of achieving other goals.
One could also prove that a goal was self-contradictory- that in an attempt to achieve it, one would defeat the purpose of achieving it in the first place. Or one could prove that achieving the goal comes with undesirable consequences (aesthetic, practical, or otherwise).
So, impossible goals or goals that conflict with other goals (that's what "undesirable consequences" means, once you get right down to it.)
My point is, you can't enforce a certain goal on people, and I don't expect a system of ethics to do that. Now, a system of ethics can compare goals, rank goals, but it can't create goals out of whole cloth, I guess is what I'm saying.
Ethics cannot enforce goals but, as you say, it can rank them - and indeed this is one of the main reasons to have ethics in the first place.

If my goal is "build a bunch of houses," I don't really need ethics to figure out how to build a house. What I need ethics for is to tell me what to do if my goal "build a house" comes into conflict with someone else's goal "save the spotted owls living in the forest you're planning to cut down to make room for the houses."

An ethical system that doesn't rank goals is largely useless for prescribing a course of action. Ethics that don't prescribe courses of action would be useless. Therefore, ethical systems have to rank goals... And when you rank goals, you must implicitly be willing to rank some of them so lowly that they can only be called "unworthy" or "insane" or "foolish" or some such.
My system says that after being examined, some goals may be found to be counter-productive, not part of the maximum set of goals that can be accomplished, etc. Is that "unworthy" enough for you?
Starglider wrote:Obvious practical issue: determining what people actually want. Most people are not sure what they really want, and a literal implementation of what they say they want would actually produce highly undesirable results.
On the practical issue: I agree completely. That's one of the problems I am trying to solve by talking about this. Obviously, the prescriptor will have to do some predictive work to see what effects pursuing a particular goal will have, and whether they are coherent, firstly, with the other goals that the answer-seeker has, and secondly with the goals of other people, existing or potential.
Some? Try "looots."
On the ethical issue: The prescriptor would ask itself: What would be accomplished by punishing gay sex? If it finds that punishing gay sex reduces the amount of goals achieved by the entirety of society, it will not prescribe the punishment. For example, it could find that the cost of punishment is hindering other goals, or it could find that allowing the goal of gay sex to be accomplished does not reduce the amount of other goals that people who do not engage in gay sex can achieve.
Thing is, people really quite sincerely believe that one of their fundamental and deeply important goals (prevent all humans from being tortured for eternity in hell) is being frustrated by gay sex.

If your prescriptor takes them at their word that this goal (save all humans from eternal torture) is important, then minority rights go out the window.

On the other hand, if your prescriptor starts automatically discounting 'religious' goals due to the lack of evidence, then you have probably wound up outlawing the practice of religion altogether. And other aspects of human culture that don't pass "logical" muster in the mind of a nonhuman machine are going to go out the window in short order.
The prescriptor is obviously not going to believe everything that is said to it. It's going to believe you if you say you want something (and even then, it's going to check to see if you really want the consequences of that thing), but it's not going to believe a certain segment of the population over the Universe. It's going to ask for proof before criminalizing a materially harmless activity.
The system should only stop people from doing something that they want when doing that thing would result in less people doing something that they want.

Also, I don't care about making it democratic. It's not about empowering the people to make laws, it's about empowering the people to achieve their goals. Not all their goals, but an optimum amount of goals.
Do we rank goals here, or merely count them?
We rank them by counting them.
re: the kittens: Not at all. The system doesn't care how many people want a thing, it cares about whether accomplishing that thing helps other goals be accomplished or not.
Well then which goals are being treated as desirable ends in and of themselves?

I mean, if I paint, that won't help anyone else do anything (probably). If I torture kittens, that won't help anyone else do anything either. It hurts the kittens, if we're counting them, but there are some other side effects if we do that.

Is my goal to do what I want counted, or is it only counted insofar as it helps other people do something they want? If the latter, isn't that recursive? What goals are treated as desirable goods in and of themselves, as opposed to being instrumental goods for the sake of some other goal?
All real goals have some value of desirability; it's just that some aren't worth doing because they're too difficult to do and not useful enough, like torturing cats, and others are worth doing more than others. Think of it this way: how much help from sane actors would I get for trying to do X? Would I get more or less help than for trying to do Y? How many sane actors would try to stop me?

Of course, there comes a point where leaf-node goals should be prioritized over other ones, but that happens when the opportunity for them might expire.

I suppose you could value any goal as "how many goals are expedited or accomplished by accomplishing this goal", if you wanted to get technical.
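(Taken literally, that valuation is recursive, and the "helps" relation can easily contain cycles, which is part of the objection above. One way to keep it finite, sketched below with an invented graph, is to count distinct goals reachable through "helps", cutting cycles off with a visited set; note that a purely terminal goal like painting then scores zero, which is exactly the painting problem raised earlier.)

helps = {
    "earn wage": {"pay rent"},
    "pay rent": {"stay healthy"},
    "stay healthy": {"earn wage", "paint landscapes"},  # cycle back to "earn wage"
    "paint landscapes": set(),
}

def value(goal):
    """Number of distinct other goals reachable from `goal` through the 'helps' relation."""
    seen, stack = set(), [goal]
    while stack:
        for nxt in helps.get(stack.pop(), ()):
            if nxt not in seen and nxt != goal:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

for g in helps:
    print(g, value(g))  # "paint landscapes" scores 0 even though it matters a lot to the painter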
Now, if they were eating [the kittens], we'd have to weigh the value of the goals being achieved by the humans (as measured by "how many other goals are helped by achieving this goal") thanks to the energy gained from eating the cats against the value of the goals being achieved by the cats during their lifetime.
This results in all humans being forced to practice vegetarianism and the cultivation of animals for meat (as opposed to milk, eggs, et cetera) coming to an end. If any number of goals a cow can have outweighs my goal of "have a steak dinner," then it's pretty much guaranteed I shouldn't be eating steak dinners.
Well, you could have your steak dinner but only if the animal died at the end of a fulfilling life?

ETA: Sorry if I'm a little incoherent. It's been a long night.
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Re: Ethics based on achieving goals

Post by His Divine Shadow »

So, based on what's been said: the AI needs a respect for life in all its forms, without going to extremes like removing our brains from our bodies and plugging us into virtual realities. I mean, it needs to recognize that humans want to live in the real world; maybe some individuals don't, but hey, let them go into virtual worlds then. It needs to recognize minority rights against the tyranny of the majority; perhaps what it needs to do is let the biologicals decide how to run their society and not take control from them, and only offer aid where it can.

Perhaps the status quo itself is something that needs to be given a measure of respect; the AI should not be rocking the boat too much and drastically reforming the planet or human society to achieve its goals.

Maybe an AI just has to be subservient to biologicals' needs over its own. It seems a bit of a kludge, but it's so easy to go wrong otherwise... It needs to be honest about what it wants to do and tell the biologicals about its planned actions and their consequences... and if they say no, it cannot do it; it shouldn't even want to do it anymore, as it's against its goal values to disobey biologicals...

It seems hypothetical AIs easily suffer from a kind of monomania or autism-like behaviour, and what is needed are moderating factors. They are smart, but oh so stupid in ways... We need to teach AIs to relax and chill out.

Rambling a bit....
Those who beat their swords into plowshares will plow for those who did not.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Ethics based on achieving goals

Post by Simon_Jester »

Elaro wrote:
Simon_Jester wrote:
His Divine Shadow wrote:Hmmm, perhaps a goal could be to realize as many (human) entities' goals as possible, i.e. compromising and moderating other goals in relation to this. Might take the edge off some extreme behavior an AI might come up with if part of its goal values is to help as many humans as possible.
Result: AI forcibly breeds large numbers of humans as test-tube babies and sticks them in virtual reality simulators to keep them happy.

Or: AI grows large numbers of human brains, with whatever minimum of organs and support structure are required for a brain to count as a complete 'human,' and hardwires electrodes into their pleasure centers to keep them 'happy' all the time.
No, it's not about making humans "happy". It's about helping achieve the goals of beings with goals. And how is giving me the illusion that my goal is achieved the same as actually achieving that goal?
If you can't tell the difference, I wouldn't bet on a machine deciding there is a difference.
Elaro wrote:Point 1: Why say some goals are sane and some are insane? What's the difference? Existentialism "says" that all life projects inherently exist/are acceptable and must be accounted for.
Existentialism says that all people exist, it does not say that all things are equally good or desirable.

If you have no means of differentiating good/sensible/sane goals from bad/nonsensical/insane goals, you have no ethical system at all, other than "do what thou wilt." That is no more a system of ethics than anarchy is a form of government.
I'm saying all goals have, before examination, the same value. Except that, during examination, we find out that some goals allow more other goals to be achieved, and therefore they are to be prioritized.
There may well be other reasons why a given goal is to be prioritized over others. And why some goals that are in themselves useless for achieving other goals are better than other, similarly 'useless' goals.
My system says that after being examined, some goals may be found to be counter-productive, not part of the maximum set of goals that can be accomplished, etc. Is that "unworthy" enough for you?
I don't think that's sufficient.

You can't just naively use "number of goals accomplished" as a proxy for "is this a workable society" or other similar questions. It's not that simplistic, and there are too many failure modes where numerous humans have many goals accomplished, but not in a desirable way.
The prescriptor is obviously not going to believe everything that is said to it. It's going to believe you if you say you want something (and even then, it's going to check to see if you really want the consequences of that thing), but it's not going to believe a certain segment of the population over the Universe. It's going to ask for proof before criminalizing a materially harmless activity.
So... What, it devalues goals that I can't prove have material value? Or doesn't it? How does it even handle intangible concepts? Human society is all about the intangibles, and we routinely sacrifice material goods for intangible goals.
The system should only stop people from doing something that they want when doing that thing would result in less people doing something that they want.

Also, I don't care about making it democratic. It's not about empowering the people to make laws, it's about empowering the people to achieve their goals. Not all their goals, but an optimum amount of goals.
Do we rank goals here, or merely count them?
We rank them by counting them.
That, then, is mere counting, and breaks down in numerous ways and under numerous conditions. You can't say "this permits six people to do what they want, therefore it is as good as every other action that permits six people to do what they want."
This space dedicated to Vasily Arkhipov
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Ethics based on achieving goals

Post by Purple »

Goal based ethics:
- You have the goal of getting to work on time.
- I have a goal to break your legs.

Maximum utility outcome:
- I break your legs and then drop you off.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Ethics based on achieving goals

Post by Elaro »

Simon_Jester wrote:
Elaro wrote:No, it's not about making humans "happy". It's about helping achieve the goals of beings with goals. And how is giving me the illusion that my goal is achieved the same as actually achieving that goal?
If you can't tell the difference, I wouldn't bet on a machine deciding there is a difference.
I can tell the difference: from the intelligence's point of view, one requires changing data in people's heads, and the other actually requires doing the thing.
I'm saying all goals have, before examination, the same value. Except that, during examination, we find out that some goals allow more other goals to be achieved, and therefore they are to be prioritized.
There may well be other reasons why a given goal is to be prioritized over others. And why some goals that are in themselves useless for achieving other goals are better than other, similarly 'useless' goals.
What are these other reasons?
My system says that after being examined, some goals may be found to be counter-productive, not part of the maximum set of goals that can be accomplished, etc. Is that "unworthy" enough for you?
I don't think that's sufficient.

You can't just naively use "number of goals accomplished" as a proxy for "is this a workable society" or other similar questions. It's not that simplistic, and there are too many failure modes where numerous humans have many goals accomplished, but not in a desirable way.
What is, in your opinion, a "workable society"?

Also, I would say that if the goal is accomplished undesirably, then it was not the actual goal in the first place, was it? There's a difference between, say, wanting to personally build your house and just wanting a house to live in.
The prescriptor is obviously not going to believe everything that is said to it. It's going to believe you if you say you want something (and even then, it's going to check to see if you really want the consequences of that thing), but it's not going to believe a certain segment of the population over the Universe. It's going to ask for proof before criminalizing a materially harmless activity.
So... What, it devalues goals that I can't prove have material value? Or doesn't it? How does it even handle intangible concepts? Human society is all about the intangibles, and we routinely sacrifice material goods for intangible goals.
First question: yes! Well, it doesn't prioritize them. For example, it would prioritize the well-being of the very real and tangible homosexual minority above the goal of getting the existentially dubious souls of people into the utterly hypothetical after-death place called "heaven".

Second question: what do you mean by "intangible concepts"?
Do we rank goals here, or merely count them?
We rank them by counting them.
That, then, is mere counting, and breaks down in numerous ways and under numerous conditions. You can't say "this permits six people to do what they want, therefore it is as good as every other action that permits six people to do what they want."
Why not? Think about it pragmatically. If I have to choose between 6 people being useless in the manner of their choosing and 6 other people being equally useless, why aren't they equal?
Purple wrote:Goal based ethics:
- You have the goal of getting to work on time.
- I have a goal to break your legs.

Maximum utility outcome:
- I break your legs and then drop you off.
Okay, except that's bullshit, because whatever goals I may have, now or in the future, that require my legs to be unbroken would be impossible to achieve with my legs broken, and there are more of them than your unique goal of breaking my legs.
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Ethics based on achieving goals

Post by Purple »

Elaro wrote:Okay, except that's bullshit, because whatever goals I may have, now or in the future, that require my legs to be unbroken would be impossible to achieve with my legs broken, and there are more of them than your unique goal of breaking my legs.
Except that realistically we can't know any of that for sure.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Ethics based on achieving goals

Post by Simon_Jester »

Elaro wrote:I can tell the difference: from the intelligence's point of view, one requires changing data in people's heads, and the other actually requires doing the thing.
Clarification: how will you convince a machine that if a person believes their goal is achieved, the goal is not inherently achieved? Even people have trouble with this and routinely rationalize situations as "everybody's happy" even when they know that some of the people in 'everybody' are living in a fool's paradise.
What are these other reasons [that a goal may be intrinsically unworthy]?
As a preface, obviously, if you define 'goal' broadly enough you can call ANYTHING a goal. The problem is that doing so suppresses important complexity and facts.

My "goal" of sketching landscapes while sitting quietly in the park may not seem important to you, and may not do anything for anyone else, while still being important to me. There are lots of other things I could do with that time that would do more to increase the total number of accomplished goals... but that doesn't mean you can accurately measure the ethical value of creating art in terms of how many goals it helps accomplish.

Meanwhile there are ethical rules that are NOT simply "goals," that are of considerably more profound significance, and that are in themselves reasons to reject entire categories of other goals. For example, the "goal" of getting revenge is bad for reasons that have a lot to do with rule utilitarianism. ME retaliating disproportionately to an attack may not actually cause a major problem. But if everyone were to retaliate disproportionately to attacks against them, the result would be escalation.

How do you measure the badness of a goal breaking a categorical rule, which has its foundation not in the way it conflicts with any specific goal, but in the need for the rule as a general principle?

The only workaround for this is to weight tiny, un-measurable damage to the overall state of the public morals as somehow slightly decreasing the ability of everyone to achieve their goals... and then how exactly do you measure the public morals, or the amount of damage inflicted by any one lawbreaker?
I don't think that's sufficient.

You can't just naively use "number of goals accomplished" as a proxy for "is this a workable society" or other similar questions. It's not that simplistic, and there are too many failure modes where numerous humans have many goals accomplished, but not in a desirable way.
What is, in your opinion, a "workable society"?

Also, I would say that if the goal is accomplished undesirably, then it was not the actual goal in the first place, was it? There's a difference between, say, wanting to personally build your house and just wanting a house to live in.
Well, for one, there could be a situation where people achieve ten times X relatively trivial goals, but fail to achieve X important goals. You can try to avoid that problem by weighting the goals, but you can't just uncritically accept the weighting vector that actual people assign to their own goals. Because lots of real people assign high weight to "save all humans from eternal torture in Hell," and your hypothetical system is supposed to ignore goals like that because they're not real.
The prescriptor is obviously not going to believe everything that is said to it. It's going to believe you if you say you want something (and even then, it's going to check to see if you really want the consequences of that thing), but it's not going to believe a certain segment of the population over the Universe. It's going to ask for proof before criminalizing a materially harmless activity.
So... What, it devalues goals that I can't prove have material value? Or doesn't it? How does it even handle intangible concepts? Human society is all about the intangibles, and we routinely sacrifice material goods for intangible goals.
First question: yes! Well, it doesn't prioritize them. For example, it would prioritize the well-being of the very real and tangible homosexual minority above the goal of the existentially dubious souls of people going to the utterly hypothetical after-death place called "heaven".
What if I can't prove "discipline" has material value? What if I can't prove "moral integrity" has material value? What if I try to teach my children these things, sometimes punishing them for failure to show virtues? What if this ends up conflicting with their (immature) goals of experiencing pleasure and not being punished? Am I supposed to stop?

Heck, generalize that question: should a super-powerful AI act to grant the immediate desires of a toddler? The child has goals. They may not make much sense and are probably not aligned with the child's long-term growth, but the child has them all the same.
Second question: what do you mean by "intangible concepts"?
Things like positive or negative character traits. Things like the collective well-being generated by a happy community. Things like having a sense of security in the stability of one's life. Many of these things are not consciously avowed goals, and even if they were consciously avowed they could not be stopped by machinery.
We rank them by counting them.
That, then, is mere counting, and breaks down in numerous ways and under numerous conditions. You can't say "this permits six people to do what they want, therefore it is as good as every other action that permits six people to do what they want."
Why not? Think about it pragmatically. If I have to choose between 6 people being useless in the manner of their choosing and 6 other people being equally useless, why aren't they equal?
Because any attempt to judge whether human activities are "useful" will be full of problems. And you'd end up having to introduce so many arbitrary fudge factors you wouldn't really be 'rationally' determining anything.
This space dedicated to Vasily Arkhipov
User avatar
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Ethics based on achieving goals

Post by Elaro »

Oh. OHHHHH.

Yes, there is a weighting function for goals; it goes something like this:

Weight of actual goal G =
1
+ sum_for_all( weight of each actual goal of non-negative weight whose likelihood of achievement would be increased by achieving goal G )
+ sum_for_all( likelihood of choosing the goal * weight of each potential goal of non-negative weight whose likelihood of achievement would be increased by achieving goal G )
- sum_for_all( weight of each actual goal of non-negative weight whose likelihood of achievement would be decreased by achieving goal G )
- sum_for_all( likelihood of choosing the goal * weight of each potential goal of non-negative weight whose likelihood of achievement would be decreased by achieving goal G )

Definition of terms:

actual goal: goal that somebody has already chosen
potential goal: goal that somebody might choose
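
In rough Python, the rule might look something like the sketch below. (The dictionary layout, the p_chosen numbers, and the depth cut-off are illustrative assumptions on my part, not part of the rule itself; without some cut-off the definition is circular whenever goals reference each other.)

# Sketch only: "helps"/"hinders" map each goal to the other goals whose likelihood
# of achievement goes up/down if this goal is achieved; "p_chosen" is the likelihood
# of a goal being chosen (1.0 for actual goals, lower for merely potential ones).
# The depth cut-off is an assumption to keep the recursion finite.
def weight(goal, helps, hinders, p_chosen, depth=3):
    if depth == 0:
        return 1.0
    w = 1.0
    for other in helps[goal]:
        ow = weight(other, helps, hinders, p_chosen, depth - 1)
        if ow >= 0:                      # only goals of non-negative weight count
            w += p_chosen[other] * ow
    for other in hinders[goal]:
        ow = weight(other, helps, hinders, p_chosen, depth - 1)
        if ow >= 0:
            w -= p_chosen[other] * ow
    return w

# Toy usage: goal_A helps one other actual goal, so its weight is 1 + 1 = 2.
helps = {"goal_A": ["goal_B"], "goal_B": []}
hinders = {"goal_A": [], "goal_B": []}
p_chosen = {"goal_A": 1.0, "goal_B": 1.0}
print(weight("goal_A", helps, hinders, p_chosen))   # 2.0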

For example, let's examine the goal of 50 people torturing kittens. (For the sake of anthropocentrism, we'll avoid the issue of the kittens' goals.) Torturing them would decrease the likelihood of those kittens being available to be enjoyed by someone, by making them afraid of people; so the negative term is the number of people who would have enjoyed the company of those cats had they not been traumatized, multiplied by the duration of that enjoyment. I can't think of any goal that would be helped along by torturing kittens, except the acquisition of knowledge about how those kittens respond to torture, which strikes me as useful only in some corner cases, or maybe the goal of inspiring a fear of humankind in those kittens in the hope that they avoid humans in the future. How likely is torturing them to help that goal? And how likely is that goal to be chosen in the first place?
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Ethics based on achieving goals

Post by Feil »

Now you're just talking in circles. Torturing kittens because you enjoy it doesn't count as a goal, but petting kittens because you enjoy it does?

Anyway, your function doesn't do anything, because it calls itself for every input.

If it did anything (un-weight the inputs for the first round and then run recursively for a few loops), it would just compute that all goal-seeking had an ethics value of negative infinity, or 0, depending on how you handled the specific syntax. Try it for "Jane eats a cookie|All possible uses of cookies|All possible eaters of cookies."

Jane, you wicked, wicked girl, dooming an infinity of infinities of possible outcomes to non-goal-seekable-ness by eating a cookie. How could you.
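
To make that concrete: start every goal un-weighted (weight 1), then re-apply the formula for a few rounds. A throwaway sketch, where the toy goal graph, the 0.5 probabilities, and the fixed number of rounds are all made-up assumptions for illustration:

# Start every goal un-weighted (weight 1), then re-apply the update a few times.
def iterate_weights(helps, hinders, p_chosen, rounds=5):
    w = {g: 1.0 for g in p_chosen}
    for _ in range(rounds):
        new_w = {}
        for g in p_chosen:
            total = 1.0
            total += sum(p_chosen[h] * w[h] for h in helps[g] if w[h] >= 0)
            total -= sum(p_chosen[h] * w[h] for h in hinders[g] if w[h] >= 0)
            new_w[g] = total
        w = new_w
    return w

# "Jane eats a cookie" hinders every other possible use of that cookie.
goals = ["jane_eats_cookie"] + ["other_use_%d" % i for i in range(1000)]
helps = {g: [] for g in goals}
hinders = {g: [] for g in goals}
hinders["jane_eats_cookie"] = [g for g in goals if g != "jane_eats_cookie"]
p_chosen = {g: 0.5 for g in goals}
p_chosen["jane_eats_cookie"] = 1.0

print(iterate_weights(helps, hinders, p_chosen)["jane_eats_cookie"])   # -499.0

The more "possible uses of cookies" you enumerate as potential goals, the bigger that negative number gets; include all of them and it runs off toward negative infinity.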
User avatar
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Ethics based on achieving goals

Post by Elaro »

Anthropocentrism was a mistake.

Torturing a kitten = 1 + (ancillary goals achieved by torturing a kitten) - (ancillary goals hindered by torturing a kitten)
= 1 + (?, for whatever other goals are achieved by torturing a kitten) - (1 + ?, because it is absolutely true, by the definition of torture, that the kitten's goal of "avoid pain" is hindered)
= 0 + ? - ?
So, known value of the goal after one iteration of the examination = 0

Taking care of a kitten = 1 + (ancillary goals achieved by taking care of a kitten) - (ancillary goals hindered by taking care of a kitten)
= 1 + (1, for the kitten's goal of "survive" being achieved, + ?, for whatever other goals are achieved by having a kitten around) - (?, for whatever other goals are hindered by having a kitten around)
= 2 + ? - ?
So, known value of the goal after one iteration of the examination = 2

Leaving a kitten alone = 1 + (ancillary goals achieved by leaving a kitten alone) - (ancillary goals hindered by leaving a kitten alone)
= 1 + ? - ?
So, known value of the goal after one iteration of the examination = 1

Yes? No? Of course, it's a gross oversimplification (there's no accounting for costs, for one), but it gets the idea across, yeah?
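
Or, the same bookkeeping as a throwaway Python helper (the function is mine, purely for illustration; it just drops the unknown "?" terms):

# Known value of a goal after one iteration: 1 for the goal itself, plus the
# ancillary goals we already know it helps, minus the ones we already know it
# hinders; the unknown "?" terms are simply dropped.
def known_value(known_helped, known_hindered):
    return 1 + known_helped - known_hindered

print(known_value(0, 1))   # torturing a kitten: 1 + ? - (1 + ?) -> 0
print(known_value(1, 0))   # taking care of a kitten: 1 + (1 + ?) - ? -> 2
print(known_value(0, 0))   # leaving a kitten alone: 1 + ? - ? -> 1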
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Ethics based on achieving goals

Post by Purple »

Elaro wrote:Anthropocentrism was a mistake.
Why? Surely any ethical system designed by humans for humans should at some level have a heavy bias toward humans? After all, we don't want to end up in a situation where throwing yourself to a starving lion is the morally right thing to do because the poor thing is really hungry.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.