SDN Ethics Scratchpad
Starglider
SDN Ethics Scratchpad
Following a recent conversation with Simon Jester, I have decided to try to make a web app that can work out (very) basic formal ethics problems. By that I mean it can take a simple specification of a situation and a simple specification of some ethical rules, and rank the possible actions according to those rules. This will only work for problems of the classic philosopher's thought experiment type, where a single agent has to make a single choice between a limited, defined set of possible actions. I'd like to know if anyone has any design suggestions, or would be interested in testing such a program.

Personally, there have been quite a few times when debating online that I have started to talk about the formal ethics of a situation, but stopped short of actually working out and specifying the details, due to the time it takes and the general difficulty of getting other participants to respond in the same frame of reference. I think something like this could be quite helpful and interesting for objective examination of different ethical rules and their consequences in a debate. With that in mind, here are my initial thoughts on sensible requirements:
Mandatory : input
* Situation should ideally be specified in simple, restricted natural language using a parser equivalent to early text adventures, e.g. phrase structure must be Subject Verb Object (Conjunction/Adverb Subject Verb Object). If this isn't usable enough, implement some form of sequential multi-choice drop-downs for specifying objects, actors, actions and consequences.
* Probabilities can be specified as S V O : p0.x or S V O : x%, probability distributions as (SVO1 : p0.x, SVO2 : p0.y...) or embedded as in S V (O1 : p0.x, O2 : p0.y...). The system will check that the total probability doesn't exceed 1.0 for any mutually exclusive set (a validation sketch follows this list).
* Knowledge base sufficient to model a workable set of basic physical and social situations, e.g. enough to model the trolley problem (all common variants), lifeboat problem, prisoner's dilemma etc. The idea is not to include all the settings people might want, but to include enough that, given a list, users can specify most ethical dilemmas they can think of in terms that the system can model.
* Ethical rules also in simple/restricted natural language, with multi-choice selection if that doesn't work. Rules can have attached utilities; if not, they are assumed to be in priority order. Can specify a preference over actions, outcomes or both. Support special handling of the null action.
* Flag up attempts to put world description into the ethical rules as an error. This is important for situations e.g. not discriminating in job interviews, where it is logically incorrect to say 'personal attribute X has no correlation with competence' as part of an ethical system, but it is correct to say 'personal attribute X should not be considered when estimating competence'.
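To make the probability check concrete, here is a minimal validation sketch (Python chosen arbitrarily; the parsing details, tolerance and function names are my own assumptions based on the syntax above, not a committed design):

```python
def parse_prob(spec: str) -> tuple[str, float]:
    """Split 'S V O : p0.x' or 'S V O : x%' into (proposition, probability)."""
    prop, sep, tag = spec.rpartition(":")
    if not sep:
        raise ValueError(f"no probability tag in {spec!r}")
    tag = tag.strip()
    if tag.endswith("%"):
        p = float(tag[:-1]) / 100.0
    elif tag.startswith("p"):
        p = float(tag[1:])
    else:
        raise ValueError(f"unrecognised probability tag: {tag!r}")
    if not 0.0 <= p <= 1.0:
        raise ValueError(f"probability out of range: {p}")
    return prop.strip(), p

def check_exclusive_set(specs: list[str]) -> list[tuple[str, float]]:
    """Reject any mutually exclusive set whose probabilities sum above 1.0."""
    parsed = [parse_prob(s) for s in specs]
    total = sum(p for _, p in parsed)
    if total > 1.0 + 1e-9:  # tiny tolerance for float rounding
        raise ValueError(f"probabilities sum to {total:.3f} > 1.0")
    return parsed

# e.g. check_exclusive_set(["trolley hits workers : p0.9", "trolley stops : 10%"])
```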
Mandatory : interface
* Minimal web 1.0 text interface; fancy GUI / Javascript is not needed.
* Interface shall be: Select ethical dilemma: (Author drop-down menu) (Dilemma name drop-down menu) (Edit Current button) (Define New button); Select ethical system: (Author drop-down menu) (System name drop-down menu) (Edit Current button) (Define New button); (Evaluate button).
* Evaluate button will show reasoning and output preference over available actions, or some reasonably useful error message if evaluation failed.
* When editing situations and ethics rules, 'Save Changes' / 'Save As New' buttons to persist the results (in some DB on the server).
* This will need a username/password login so that the author can be identified: you can edit other people's dilemmas and moralities, but then have to save them under your own name (allow email addresses as usernames).
Mandatory : engine
* Basic probabilistic propositional logic evaluation, with naive Bayes (a minimal sketch follows this list).
* Do simple symbolic physical modelling (discrete system state transitions only; don't need actual continuous physical modelling).
* Basic concept of agents having limited information, and transfer of information between agents as an action, but don't need to model actual perception.
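As a rough illustration of the naive Bayes evaluation item above (all proposition names and numbers are invented; this is a sketch of the technique, not the engine design):

```python
from math import prod

def posterior(prior: dict[str, float],
              likelihood: dict[str, dict[str, float]],
              evidence: list[str]) -> dict[str, float]:
    """P(outcome | evidence), treating evidence items as conditionally
    independent given the outcome (the naive Bayes assumption)."""
    unnorm = {
        outcome: p * prod(likelihood[outcome].get(e, 1e-6) for e in evidence)
        for outcome, p in prior.items()
    }
    z = sum(unnorm.values())
    return {o: v / z for o, v in unnorm.items()}

# Toy example: did the trolley's brake fail, given what the agent observed?
prior = {"brake_ok": 0.5, "brake_failed": 0.5}
likelihood = {
    "brake_ok":     {"trolley_accelerating": 0.05, "driver_waving": 0.2},
    "brake_failed": {"trolley_accelerating": 0.9,  "driver_waving": 0.7},
}
print(posterior(prior, likelihood, ["trolley_accelerating", "driver_waving"]))
# -> 'brake_failed' comes out overwhelmingly more probable
```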
Optional
* Ability to show a summary screen where there is a matrix of ethical dilemma name (columns), ethical rules name (rows), and the cells show the preferred action that each ruleset generates for each dilemma. This would permit quick comparison of ethical systems (practical consequences of each). Might need a syntax to tag each possible action in the dilemma with a compact name.
* Ability to store a user's personal preferences for each dilemma. This could be included on the above matrix to show how close each formal ethical ruleset comes to your own decisions.
* Knowledge base content sufficient to allow countries as well as individuals as the agents in the problem specification, e.g. dilemmas involving (simple) political situations.
* Knowledge base content sufficient to allow nontrivial reasoning about legal systems, e.g. the ethical dilemmas involved in making laws which are then applied to determine outcomes in many individual decision situations.
* Colloquial specification of probabilities, e.g. S probably V O, S usually V O, where informal likelihoods are mapped to explicit probabilities using some sensible fixed table (normalised across mutually exclusive sets).
* Implicit support for transfinite utilities, for when people say things like "you should save a puppy from drowning, but if you have a choice between saving one baby and saving one million billion trillion (i.e. an infinite number of) drowning puppies, you should save the baby". This is usually specified as a mix of utility and absolute preference rules, but from previous experience I think it's easier to evaluate as purely utilitarian but with transfinite utilities (see the sketch after this list). Might have to convert it back to something less esoteric for output though.
* Relevance-tag the output and prune it to a maximum length, e.g. such that you never see more than a page of reasoning regardless of situation complexity, unless you press a (show verbose) button.
* Ability to reason about agent actions other than the primary decision of the ethical dilemma, e.g. do X because it will enable the agent to then do Y, leading to outcome Z. This will require planning (at minimum, STRIPS-equivalent) and additional world knowledge (about typical actions agents are capable of).
* Some minimal model of empathy, e.g. agent A can reason about agent B, knows that B can reason about A. This is in general extremely hard, but it would be practical to incorporate a very limited mechanism sufficient to produce the commonly accepted solutions to some simple dilemmas, e.g. the prisoner's dilemma.
* Multi-step or repeated dilemmas e.g. iterated prisoner´s dilemma. Will presumably need some way to prune and summarise the resulting combinatorially exploding state tree.
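For the transfinite utilities item, one simple way to realise it is lexicographic comparison over utility tiers, so any amount of value at a higher tier dominates any amount at a lower tier. A minimal sketch, with all tier names and numbers invented for illustration:

```python
PUPPY, HUMAN = 0, 1   # higher number = strictly dominant value tier

def total(outcome: list[tuple[int, float]],
          tiers: tuple[int, ...] = (HUMAN, PUPPY)) -> tuple[float, ...]:
    """Sum utility per tier; Python compares tuples lexicographically,
    so the highest tier is decisive and lower tiers only break ties."""
    return tuple(sum(a for t, a in outcome if t == tier) for tier in tiers)

save_baby    = [(HUMAN, 1.0)]
save_puppies = [(PUPPY, 1e30)]   # 'one million billion trillion' puppies
assert total(save_baby) > total(save_puppies)   # the baby still wins
```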
I think I have all the necessary skills to make this app, at least in basic proof-of-concept form, and I have plenty of spare servers, a static IP and bandwidth. I also have an assortment of existing code that could be repurposed to assist with the project. Regrettably, however, I am short on time at the moment due to commercial programming work, building work on our house, and my desire to get my turbo DeLorean project completed this year. So the best I can say for delivery is "I will work on it when I can and hope to put something online at some point next year".
P.S. For Windhaven, I regret that due to time constraints and the fact that Microsoft have discontinued the XNA framework and the Xbox Live Indie Games service, I am going to have to declare victory with the 2012 Dream Build Play prize and leave it at that. I have decided to put the whole thing online as open source; it has all modes playable except the story campaign, for which I completed 3 of 16 missions. I am having some issues with my DNS at the moment (of the British Telecom bureaucratic kind, not technical), but once that is fixed I will put zips of the executable, source and content files on a server.
Re: SDN Ethics Scratchpad
This is interesting to me, since I'm meant to be giving a workshop next month for Engineers Without Borders on evaluating projects for effectiveness.
How would it handle a problem of asymmetric information? I.e. a crooked salesman selling a broken car is immoral, but an ignorant salesman is just ignorant. Is it immoral for you* not to check the salesman before buying a car?
*Is the immorality of the system raised?
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Starglider
Re: SDN Ethics Scratchpad
madd0ct0r wrote: How would it handle a problem of asymmetric information? I.e. a crooked salesman selling a broken car is immoral, but an ignorant salesman is just ignorant. Is it immoral for you not to check the salesman before buying a car?

Modelling the information (beliefs) that different agents have is essential; otherwise you can't even meaningfully define a communication action (MTRANS in Schank-speak). The mechanism I intend to use for a proof of concept is very roughly equivalent to the agent knowledge model used in the classic symbolic AI system TALE-SPIN, but probabilistic and somewhat generalised (I am thinking of adapting a mechanism from some narrative analysis code I have been playing with recently). So in short, yes, that case should model and resolve OK; the symbolic knowledge base I have for this includes the concepts 'car', 'purchase', 'vendor' and 'broken machine'.
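For flavour, a very rough sketch of what a per-agent belief store plus an MTRANS action might look like (the agent names, facts and trust discount are all invented; the real mechanism would handle probability in a more principled way):

```python
# Each agent holds its own probabilistic beliefs about the world.
beliefs: dict[str, dict[str, float]] = {
    "buyer":    {},                       # knows nothing about the car yet
    "salesman": {"car is broken": 0.95},  # the crooked salesman knows
}

def mtrans(sender: str, receiver: str, fact: str, trust: float = 0.8) -> None:
    """Communication action: transfer a belief, discounted by the
    receiver's trust in the sender."""
    if fact in beliefs[sender]:
        p = beliefs[sender][fact] * trust
        beliefs[receiver][fact] = max(beliefs[receiver].get(fact, 0.0), p)

mtrans("salesman", "buyer", "car is broken")
# The crooked salesman *could* have performed this MTRANS; whether
# withholding it is immoral is then a question for the ethical ruleset.
```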
madd0ct0r wrote: *Is the immorality of the system raised?

Ah, now this is a metaethical judgement. My wife suggested something along these lines earlier: that a bare minimum requirement to get started on metaethical problems is to distinguish between:
a) actions that it is ok for me to do and ok for others to do
b) actions that are bad for me to do and bad for others to do
c) actions that are bad for me to do but ok for others to do (i.e. I have a personal code of ethics but I don't insist that everyone should follow it)
d) actions that are ok for me to do but bad for others to do (not in the sense that others may have authority or skills I lack; in the ethical sense that I am entitled to things that other agents are not).
This is distinct from a simple subjective criterion such as 'Given the chance to save one person from a burning building, I would prefer to save my mother over any arbitrary mother' (an actual Chinese job interview question, BTW). That is still in category (a) as long as you think it is OK for other people to preferentially save their mothers over yours. This can be an optional requirement; generalisation to categories of agent (e.g. my tribe should have special privileges over other tribes) is straightforward. Social class is trickier, because that may require economic reasoning; I might be able to include some basic microeconomic material, but economics in general is of course a huge and intractable topic.
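As a minimal encoding, categories (a)-(d) reduce to a pair of booleans, roughly (OK for me, OK for others); the names here are illustrative rather than a fixed design:

```python
from enum import Enum

class Scope(Enum):            # (ok for self, ok for others)
    OK_ALL     = (True,  True)    # (a) fine for me and for others
    BAD_ALL    = (False, False)   # (b) wrong for anyone
    PERSONAL   = (False, True)    # (c) my own code, not imposed on others
    PRIVILEGED = (True,  False)   # (d) I claim an entitlement others lack

def permitted(scope: Scope, actor_is_self: bool) -> bool:
    ok_self, ok_others = scope.value
    return ok_self if actor_is_self else ok_others

# e.g. a vegetarian's personal rule against eating meat:
assert not permitted(Scope.PERSONAL, actor_is_self=True)
assert permitted(Scope.PERSONAL, actor_is_self=False)
```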
Generalised metaethical judgements are a much harder problem; I have spent quite a bit of time looking at self-consistency of complex goal systems in particular. That is out of scope for a proof-of-concept web app. That said, if someone enters a specific interesting problem, I'd certainly look at adding specific reasoning elements to allow evaluation on a case-by-case basis.
Re: SDN Ethics Scratchpad
Hmm. Privilege is easy to visualize: a doctor may advise medicine for a child; a random bloke in the pub probably shouldn't. Privilege could also be temporary: after 8 pints of beer it'd be immoral for me to drive home, and immoral for someone who knows to let me.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Simon_Jester
Re: SDN Ethics Scratchpad
For some reason I find the idea of a couple of corporate executives talking over lunch and saying "Oh, yeah, morals? There's an app for that now!" hilarious...
This space dedicated to Vasily Arkhipov
Starglider
Re: SDN Ethics Scratchpad
madd0ct0r wrote: Hmm. Privilege is easy to visualize: a doctor may advise medicine for a child; a random bloke in the pub probably shouldn't. Privilege could also be temporary: after 8 pints of beer it'd be immoral for me to drive home, and immoral for someone who knows to let me.

As absolute statements, (the sensible) 'qualification x required for action y to be ethical' and (the bigoted) 'must be of tribe z for action w to be ethical' are logically almost identical. The philosophical distinction is generally that 'tribe z' is usually bound to 'the tribe that I am in', i.e. has a subjective element, although not always, e.g. when people support caste systems despite being in a lower caste. I guess it is not a big deal for representation purposes.
Simon_Jester wrote: For some reason I find the idea of a couple of corporate executives talking over lunch and saying "Oh, yeah, morals? There's an app for that now!" hilarious...

Five posts in and Simon Jester has already come up with the commercialisation plan. Do you know how much money banks have dumped into consultants and marketing guys coming up with ethical slogans, moral key principles and five-point good-behaviour plans (for internal and external consumption)? Verifying that a supposedly religious-law-compatible banking product actually is religious-law compatible is, on its own, a lucrative use case. Although for that I might need to implement the reverse-chaining retroactive rationalisation module, possibly with logical fallacies flipped to positive patterns rather than antipatterns.
Re: SDN Ethics Scratchpad
Starglider wrote: Five posts in and Simon Jester has already come up with the commercialisation plan.

No, no, no... you're not doing this right:

Starglider wrote: Mandatory : interface
* Minimal web 1.0 text interface; fancy GUI / Javascript is not needed.

You need to use more buzz technologies, like Ruby on Rails, Node.js, etc. Be sure to use a NoSQL storage layer for extra buzziness. Also, will this app let me share photos with my friends and connect with the Twitter API?
Starglider
Re: SDN Ethics Scratchpad
Channel72 wrote: No, no, no... you're not doing this right: You need to use more buzz technologies, like Ruby on Rails, Node.js, etc. Be sure to use a NoSQL storage layer for extra buzziness. Also, will this app let me share photos with my friends and connect with the Twitter API?

I believe you have confused raising VC capital with actually selling a product to enterprises. Web 1.0 interfaces are fine for this task as long as you call them 'REST service endpoints'. The necessary buzzword sets may have a similar cadence to the uninitiated, but they are in fact largely disjoint. For example, I would have to replace 'NoSQL' with something like 'Big Data capability enabled by the Apache Spark/Cassandra/Ignite/Tachyon/HDFS stack', even if the entire database would actually fit comfortably into a single Postgres database running from a USB stick. And that's just the technical buzzwords...
Re: SDN Ethics Scratchpad
I think many big enterprises have been infected with the startup-culture mentality, and would therefore much rather hear about how your app runs on an Amazon EC2 instance and uses node.js to "scale".
Re: SDN Ethics Scratchpad
Design-wise:
"Ability to show a summary screen where there is a matrix of ethical dilemma name (columns), ethical rules name (rows), and the cells show the preferred action that each ruleset generates for each dilemma."
If this makes it into the final app, I would request that the ethical dilemma could be edited at this screen, or an option to go back to change parts of it linked from here - to see the effects of changing parts of the ethical dilemma.
Also, I'm assuming that the app would be able to churn through a decision-making process and reply "more information is needed: X, Y and Z" for an incomplete scenario under some systems, while correctly spitting out the answer under other systems (e.g. Kantian systems should be relatively simple to code). Would it be possible for the app to produce an answer for a moral dilemma, but also clarify what other information would be relevant?
Speaking seriously for a moment about the potential monetization aspect, what sort of horrible, horrible legal minefield can this land you in?
If, for example, a worst-case scenario occurs: the app is sold, someone uses it, the results come out as murder being the most ethical solution, and they use the app as justification for murdering a whole bunch of people; what legal ramifications are there? While of course "I was only following orders" isn't going to hold up one iota in court, would Starglider get in trouble for inciting criminal activity by releasing something that people might act upon? Either through sales, or by releasing such a thing freely.
The closest precedent I can think of is writing in books, where it's potentially possible to encourage crime by describing methods of committing it. It should be pretty easy to prove that such an app is intended to provide more ethical reasoning rather than less, and so is benign, but it's still worth a thought, and a hefty legal disclaimer. Especially if it comes with frameworks such as "self-importance" or something similar.
"Ability to show a summary screen where there is a matrix of ethical dilemma name (columns), ethical rules name (rows), and the cells show the preferred action that each ruleset generates for each dilemma."
If this makes it in to the final app, I would request that the ethical dilemma could be edited at this screen, or an option to go back to change parts of it linked from here - to see the effects of changing parts of the ethical dilemma.
Also, I'm assuming that the app would be able to churn through a decision making process and reply "more information is needed: X, Y and Z" for an incomplete scenario for some systems, while spitting out the answer correctly for other systems (e.g. kantian systems should be relatively simple to code). Would it be possible for the app to produce an answer for a moral dilemma, but also clarify what other information that would be relevant?
Speaking seriously for a moment about the potential monetization aspect, what sort of horrible, horrible legal minefield can this land you in?
If, for example, a very bad case scenario occurs: the app's sold, someone uses it, the results come out as murder being the most ethical solution and they use the app as justification for murdering a whole bunch of people, what legal ramifications are there? While of course "I was only following orders" isn't going to hold up one iota in court, would Starglider get in trouble for inciting criminal activity for releasing something that people might act upon? Either through sales, or by releasing such a thing freely.
The closest precedent I can think of writing in books, where it's potentially possible to encourage crime by putting in methods of doing such a thing. It should be pretty easy to prove that such an app is intended to provide more ethical reasoning rather than less and so be benign, but it's still worth a thought, and a hefty legal disclaimer. Especially if it comes with frameworks such as "self-importance" or something similar.
Simon_Jester
Re: SDN Ethics Scratchpad
I wouldn't think Starglider would be liable because he's not telling you what to do. He's programming a computer to take known inputs, weight them in a known fashion, and compute the value of various results, presumably according to known, editable weighting vectors.
You're not doing anything for anyone they couldn't do for themselves. And anyone with a gram of sense selling this product would include a EULA to the effect that this product's recommendations should at least be examined casually by a lawyer to make sure doing the 'right' thing won't get you sued.
This space dedicated to Vasily Arkhipov
Re: SDN Ethics Scratchpad
A lot of the more extremist religious frameworks could certainly be deemed to incite hate speech: "you have a moral imperative to murder those not of your religion because of reasons X, Y and Z", or "you have a moral imperative to blow up this abortion clinic due to X, Y and Z".
I know, I know, I'm repeating the point, and it's definitely a stretch to imagine a tool for calculating ethical actions being used to justify atrocities and so on, but it's definitely worth checking with a lawyer before selling it, or even publishing it online for free.
Starglider
Re: SDN Ethics Scratchpad
Reaver225 wrote: "Ability to show a summary screen where there is a matrix of ethical dilemma name (columns), ethical rules name (rows), and the cells show the preferred action that each ruleset generates for each dilemma." If this makes it into the final app, I would request that the ethical dilemma could be edited at this screen, or an option to go back to change parts of it linked from here - to see the effects of changing parts of the ethical dilemma.

The link back to edit mode is trivial. If you're planning to play around in this way, I should probably track the edit history of scenarios and rulesets for easy reverts. Actual multi-editor concurrent version control is a bit excessive for a proof of concept, as merging would get difficult.
Reaver225 wrote: Also, I'm assuming that the app would be able to churn through a decision-making process and reply "more information is needed: X, Y and Z" for an incomplete scenario under some systems, while correctly spitting out the answer under other systems (e.g. Kantian systems should be relatively simple to code). Would it be possible for the app to produce an answer for a moral dilemma, but also clarify what other information would be relevant?

There are two separate issues here. For modelling the scenario, the system is assumed to have total knowledge (even if some events have uncertain outcomes drawn from known frequency distributions). So yes, underspecifying the situation such that you can't even complete the causal chains for the specified possible agent actions would be a fail. Although I may be able to make this a bit less tedious via an experimental implementation of a Script Applier Mechanism that I have written recently (it deduces things like "IF Twilight Sparkle is eating a Daffodil Sandwich in a Restaurant, THEN *somepony* in the Kitchen +probably made *the sandwich* AND Twilight Sparkle will +probably pay Bits to The Restaurant before *she* leaves The Restaurant").
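A toy version of that kind of script-applier rule, purely for flavour (the real SAM-style mechanism is far richer; the script contents and probabilities here are invented):

```python
RESTAURANT_SCRIPT = [
    # (inferred-event template, default probability)
    ("somepony in the Kitchen made {item}", 0.9),
    ("{agent} will pay Bits to the Restaurant before leaving", 0.85),
]

def apply_script(agent: str, item: str) -> list[tuple[str, float]]:
    """Given '<agent> is eating <item> in a Restaurant', fill in the
    unstated events that the restaurant script makes likely."""
    return [(template.format(agent=agent, item=item), p)
            for template, p in RESTAURANT_SCRIPT]

for fact, p in apply_script("Twilight Sparkle", "the Daffodil Sandwich"):
    print(f"+probably ({p:.0%}): {fact}")
```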
For agents required to make decisions, working with limited information is a fundamental part of nearly all agent decision making (limited time/computing power being the other major limitation). So I would not expect any particular error messages there. If you entered a rule that 'you must assure condition X before action Y can be taken', then I suppose the default of 'no action' would result if X is not known (to high certainty) by the agent. I think I'll have to get the thing working before we can explore precisely what kind of insufficient-information situations you are interested in; I am sure there are some interesting paradoxes to do with information-gathering actions that themselves have ethical constraints, and with generally contradictory rules.
Reaver225 wrote: If, for example, a worst-case scenario occurs: the app is sold, someone uses it, the results come out as murder being the most ethical solution, and they use the app as justification for murdering a whole bunch of people; what legal ramifications are there?

I'm pretty sure that's in the same ballpark as suicidal pilots using Microsoft Flight Simulator to train, or a would-be bomber using Excel to work out their ingredient amounts and Word to write their manifesto; i.e. no one would reasonably hold someone who makes general-purpose tools responsible for some nutcase using said tool in relation to a crime.
Reaver225 wrote: ...but it's still worth a thought, and a hefty legal disclaimer.
In the unlikely event that anyone actually makes such an app for commercial sale, then yes, I am sure it would be loaded down with disclaimers on top of warnings embedded in EULAs. But ultimately, as Simon points out, it would not be doing anything you can't do with paper and pencil (as utilitarian philosophers do all the time, for example); it would just be a lot more convenient and less prone to trivial logic/arithmetic errors.
I will try to put some time into writing a first draft over the Christmas break.
Simon_Jester
Re: SDN Ethics Scratchpad
Starglider wrote: I think I'll have to get the thing working before we can explore precisely what kind of insufficient-information situations you are interested in; I am sure there are some interesting paradoxes to do with information-gathering actions that themselves have ethical constraints, and with generally contradictory rules.

There surely are. Half of police work, for one...
Starglider wrote: I'm pretty sure that's in the same ballpark as suicidal pilots using Microsoft Flight Simulator to train, or a would-be bomber using Excel to work out their ingredient amounts and Word to write their manifesto; i.e. no one would reasonably hold someone who makes general-purpose tools responsible for some nutcase using said tool in relation to a crime.

A lawyer might argue that it's different because your software makes prescriptive statements, or at least strongly implies prescriptive statements.
But, well, given who you normally work for, I'm sure you know lawyers qualified to consult on such matters in the context of British law. Or know people who know lawyers.
This space dedicated to Vasily Arkhipov