Abstract
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
Notes
Related to me in conversation with Isaac Asimov.
A full-length novel based on the short story was co-authored by Asimov with Robert Silverberg; it was called The Positronic Man (Asimov and Silverberg 1992). A 1999 movie directed by Christopher Columbus, entitled Bicentennial Man, was based on the novel, with a screenplay by Nicholas Kazan (Columbus 1999). While the novel and film have broadly similar plot developments, many additional elements are introduced in both works. For brevity, the present discussion is limited to issues raised by the original short story treatment.
One of the characters in “The Bicentennial Man” remarks “There have been times in history when segments of the human population fought for full human rights.”
Also, only in this second case can we say that the machine is autonomous.
I am indebted to Michael Anderson for making this point clear to me.
Bruce McLaren has also created a program that enables a machine to act as an ethical advisor to human beings, but in his program the machine does not make ethical decisions itself. His advisor system simply informs the human user of the ethical dimensions of the dilemma, without reaching a decision (McLaren 2003).
This is the reason why Anderson et al. have started with “MedEthEx,” which advises health care workers and, initially, does so in just one particular circumstance.
I am assuming that one will adopt the action-based approach to ethics. For the virtue-based approach to be made precise, virtues must be spelled out in terms of actions.
A prima facie duty is something that one ought to do unless it conflicts with a stronger duty, so there can be exceptions, unlike an absolute duty, for which there are no exceptions.
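To make the contrast concrete, here is a minimal sketch of one way a prima facie duty framework might be encoded; the duty names, weights, and weighted-sum decision rule are all illustrative assumptions, not the method actually used in MedEthEx. The point is that “absolute versus prima facie” becomes a difference in arithmetic: an absolute duty would be a hard constraint that can never be violated, whereas a prima facie duty merely contributes weight that a stronger duty can outweigh.

```python
# A minimal sketch of making prima facie duties computable. Duty names,
# weights, and the weighted-sum rule are illustrative assumptions, not
# the method used by MedEthEx (Anderson et al. 2005).

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Score in [-2, 2] per duty: how strongly the action satisfies (+)
    # or violates (-) that duty.
    duty_scores: dict[str, int]

# Hypothetical duties with relative weights; a stronger duty can
# override a weaker one, so no duty here is treated as absolute.
DUTY_WEIGHTS = {"nonmaleficence": 3, "beneficence": 2, "autonomy": 2}

def advise(actions: list[Action]) -> Action:
    """Recommend the action with the greatest weighted duty satisfaction."""
    def total(a: Action) -> int:
        return sum(DUTY_WEIGHTS[d] * s for d, s in a.duty_scores.items())
    return max(actions, key=total)

# Example dilemma: accept a competent patient's refusal of treatment,
# or try once more to change the patient's mind.
options = [
    Action("accept refusal", {"nonmaleficence": -1, "beneficence": -1, "autonomy": 2}),
    Action("try again", {"nonmaleficence": 1, "beneficence": 1, "autonomy": -1}),
]
print(advise(options).name)  # -> "try again"
```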
Some, who are more pessimistic than I am, would say that there might always be some dilemmas about which even experts will disagree as to what is the correct answer. Even if this turns out to be the case, the agreement that surely exists on many dilemmas will allow us to reject a completely relativistic position.
The pessimists would, perhaps, say: “there are correct answers to many (or most) ethical dilemmas.”
If ethical egoism is accepted as a plausible ethical theory, then the agent only needs to take him/her/itself into account, whereas all other ethical theories consider others as well as the agent, assuming that the agent has moral status.
In a well-known video titled “Monkey in the Mirror,” a monkey soon realizes that the monkey it sees in a mirror is itself and it begins to enjoy making faces, etc., watching its own reflection.
Christopher Grau has pointed out that Kant probably had a more robust notion of self-consciousness in mind that includes autonomy and “allows one to discern the moral law through the Categorical Imperative.” Still, even if this rules out monkeys and great apes, it also rules out very young human beings.
In fact, however, it is problematic. Some would argue that Machan has set the bar too high. Two reasons could be given: (1) a number of humans (most notably very young children) would, according to his criterion, not have rights, since they cannot be expected to behave morally; (2) Machan has confused “having rights” with “having duties.” It is reasonable to say that in order to have duties to others, you must be capable of behaving morally, that is, of respecting the rights of others, but having rights requires something less than this. That is why young children can have rights, but not duties. In any case, Machan’s criterion would not justify our being speciesists, because recent evidence concerning the great apes shows that they are capable of behaving morally. I have in mind Koko, the gorilla who has been raised by humans (at the Gorilla Foundation in Woodside, CA, USA) and absorbed their ethical principles, as well as being taught sign language.
I say “in some sense, could have done otherwise” because philosophers have analyzed “could have done otherwise” in different ways, some compatible with Determinism and some not; but it is generally accepted that freedom in some sense is required for moral responsibility.
I see no reason, however, why a robot/machine cannot be trained to take into account the suffering of others in calculating how it will act in an ethical dilemma, without its having to be emotional itself.
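A minimal sketch of what such a calculation might look like, in the spirit of Bentham’s hedonistic calculus: the machine sums estimated effects on every affected party, so others’ suffering enters as input data rather than as felt emotion. The party names and the integer scale here are illustrative assumptions only.

```python
# A minimal sketch of a Bentham-style calculation over others' suffering.
# Party names and the integer scale are illustrative assumptions; the
# machine needs estimates of others' suffering as inputs, not emotions.

def net_effect(effects: dict[str, int]) -> int:
    """Sum the estimated pleasure (+) or suffering (-) of all affected parties."""
    return sum(effects.values())

def choose(alternatives: dict[str, dict[str, int]]) -> str:
    """Pick the alternative whose net effect on everyone affected is best."""
    return max(alternatives, key=lambda name: net_effect(alternatives[name]))

print(choose({
    "act A": {"patient": 2, "family": -1},  # net +1
    "act B": {"patient": 1, "family": 1},   # net +2
}))  # -> "act B"
```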
It is important to emphasize here that I am not necessarily agreeing with Kant that robots like Andrew, and animals, should not have moral standing/rights. I am just making the hypothetical claim that if we determine that they should not, there is still a good reason, because of indirect duties to human beings, to treat them respectfully.
Strictly speaking, the three laws do not entail any permissions or obligations on humans. Nevertheless, in the absence of any additional moral principles concerning robots’ dealings with humans or vice versa, it is natural to take the laws as licensing a permissive attitude towards human treatment of robots.
References
Anderson S (1995) Being morally responsible for an action versus acting responsibly or irresponsibly. J Philos Res XX:451–462
Anderson M, Anderson S, Armen C (2005) MedEthEx: towards a medical ethics advisor. In: Proceedings of the AAAI fall symposium on caring machines: AI and eldercare. AAAI Press, Menlo Park, CA
Asimov I (1976) The bicentennial man. In: The bicentennial man and other stories. Doubleday, New York, 1984
Asimov I, Silverberg R (1992) The positronic man. Doubleday, New York
Bentham J (1789) An introduction to the principles of morals and legislation, chapter 17. Burns J, Hart H (eds). Clarendon Press, Oxford, 1969
Columbus C (Director) (1999) Bicentennial Man [movie based on Asimov and Silverberg (1992), The positronic man]. Columbia Tristar Pictures Distributors International
Kant I (1780) Our duties to animals. In: Lectures on ethics, Infield L (trans). Harper & Row, New York, pp 239–241
Kant I (1785) The groundwork of the metaphysic of morals, Paton HJ (trans.). Barnes and Noble, New York, 1948
Machan T (1991) Do animals have rights? Public Affairs Q 5(2):163–173
McLaren BM (2003) Extensionally defining principles and cases in ethics: an AI model. Artif Intell 150:145–181
Mill JS (1863) Utilitarianism. Parker, Son and Bourn, London
Ross WD (1930) The right and the good. Oxford University Press, Oxford
Singer P (1975) All animals are equal. In: Animal liberation: a new ethics for our treatment of animals. New York Review/Random House, New York, pp 1–22
Tooley M (1972) Abortion and infanticide. Philos Public Affairs 2:47–66
Warren MA (1997) On the moral and legal status of abortion. In: LaFollette H (ed) Ethics in practice. Blackwell, Oxford
Acknowledgments
This material is based upon work supported in part by the National Science Foundation under grant number IIS-0500133.
Cite this article
Anderson, S.L. Asimov’s “three laws of robotics” and machine metaethics. AI & Soc 22, 477–493 (2008). https://doi.org/10.1007/s00146-007-0094-5