Chapter 2.  What Is an Ethical System?

Ethical Reasoning Performed by Humans Concerning Computers

Picard anticipated "ethical and moral dilemmas" posed by technology specifically designed to sense emotions [picard1997]. Picard and Klein also described several theoretically unethical uses of affective systems [picard2002], but those uses were not investigated in depth. This work was followed by preliminary forays into the privacy consequences of such technology [reynolds2004CHI]. Privacy, however, is only one dimension of ethical import.

Value-Sensitive Design [friedman2002] articulates many dimensions that are relevant to systems that mediate the communication of affect. Value-Sensitive Design (VSD) is "an approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process." It considers Human Welfare, Ownership and Property, Privacy, Freedom From Bias, Universal Usability, Trust, Autonomy, Informed Consent, Accountability, Identity, Calmness, and Environmental Sustainability as values that may be of ethical consequence. Friedman and Nissenbaum applied VSD to the evaluation of bias in computer systems [friedman1997]. Others have applied VSD to problems such as online privacy [agre1997], universal usability [thomas1997], urban planning [noth2000], and informed consent in web browsers [friedman2002HICSS]. The Tangible Media Group has considered various ambient displays that support something akin to the VSD values in its research on computer-supported cooperative work and architectural space [wisneski1998]. VSD does not directly address variables relating to the use context of a system (e.g., what is at stake for users), nor does it address how the same technology can be perceived differently when motivators or context vary; instead, it focuses on important values that should be accounted for during the design process.

In "It's the computer's fault: reasoning about computers as moral agents," Friedman also considered how people evaluate the ethical and moral consequences of computer programs [friedman1995]. In interviews with computer-science students, Friedman found that 75% attributed "decision-making" to computers. But only 21% held the computer "morally responsible" for errors. These results indicate that the majority of the interviewees thought a computer could make decisions but a minority blamed the computer for the consequences of bad actions. One participant was quoted as saying "the decisions that the computer makes are decisions that somebody else made before and programmed into the computer..." Friedman concludes by noting that "designers should communicate through a system that a (human) who and not a computer (what) - is responsible for the consequences of computer use." This work suggests deeper questions about the possibility of a computer having ethical behavior.

But what does it mean for a computer to be ethical? Does the rule-following of an artificially intelligent chess program count as moral behavior? Perhaps one of the first individuals to explore these questions was Asimov. His fictional work on robot ethics has made the topic interesting and accessible to a wide audience [asimov1956]. In "I, Robot" Asimov described "Three Laws of Robotics" that sought to constrain harmful behavior, but then proceeded to show some of the limitations of such rules.
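
To make the flavor of such rules concrete, the sketch below is a hypothetical illustration of my own, not drawn from Asimov or from any cited system: each law is a veto function applied in priority order to a proposed action. Its brittleness is the point, since all of the interesting work is hidden in the flags that say whether an action counts as harmful.

    # Hypothetical illustration (mine, not Asimov's or any cited system):
    # each "law" is a veto function applied in priority order to a proposed
    # action. The hard part is hidden in the boolean flags: deciding whether
    # an action really harms a human is the judgment these rules cannot settle.
    from typing import Callable, Dict, List, Optional

    Rule = Callable[[Dict[str, bool]], Optional[str]]

    def first_law(action: Dict[str, bool]) -> Optional[str]:
        return "would injure a human being" if action.get("harms_human") else None

    def second_law(action: Dict[str, bool]) -> Optional[str]:
        return "would disobey a human order" if action.get("disobeys_order") else None

    def third_law(action: Dict[str, bool]) -> Optional[str]:
        return "would let the robot come to harm" if action.get("harms_self") else None

    LAWS: List[Rule] = [first_law, second_law, third_law]  # priority order

    def permitted(action: Dict[str, bool]) -> bool:
        """Allow the action only if no law, checked in priority order, vetoes it."""
        return all(law(action) is None for law in LAWS)

    print(permitted({"disobeys_order": False}))  # True: nothing objects
    print(permitted({"harms_human": True}))      # False: vetoed by the first law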

Lacking free will, Turing Machines do not make moral choices between "good" and "bad." Instead, they largely carry out their designer's choices. This means that if a designer makes "bad" choices from the user's perspective, the resulting interaction could be viewed as unethical. As such, computers without free will do not have the capability to perform their own ethical deliberation.

Moor, in the classic article "What is Computer Ethics?" [moor1985], conceptualizes computer ethics as dealing with the policy vacuums and conceptual muddles raised by information technology [bynum2001]. This definition does not address an important area of debate: the foundation of computer ethics. Floridi and Sanders categorize the different types of foundations that have been used as a basis for ethical arguments about computers [floridi2004]. Many topics have been analyzed from the standpoint of computer ethics: privacy, crime, justice, and intellectual property [brey2000]. Of these, privacy is the value most directly linked with communication systems.

Palen and Dourish, extending Altman's theory, define privacy as a "dynamic boundary regulation process" [palen2003]. They view privacy as a dialectic process between "our own expectations and experiences" and those of others with whom we interact. Privacy has also been considered in value-sensitive design methodologies [friedman2002], and Friedman et al. worked on the impact of informed consent in the domain of web browsers [friedman2002HICSS]. Bellotti and Sellen examined privacy in the context of ubiquitous computing; their studies took place amid pervasive sensors (microphones and video cameras), and they found that "feedback and control" were two principles important to the design of acceptable sensing environments [bellotti1993]. Mann found that the notion of symmetry in surveillance can help balance inequities that cause privacy problems [mann1996]. Lederer, Mankoff, and Dey studied location-sensing technology and determined that "who" is asking for information is an important factor for people setting preferences or policies for access to private information [lederer2003]. Hong also discussed a "context fabric" architecture that provides support for privacy in ubiquitous computing systems [hong2004].
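
As a purely illustrative sketch of the Lederer, Mankoff, and Dey finding, one might imagine a disclosure policy keyed on the requester's identity. The requester categories, precision levels, and default-deny behavior below are my assumptions, not details of their system.

    # Hypothetical sketch (mine, not from the cited study) of a disclosure
    # policy keyed on who is asking. Requester categories, precision levels,
    # and the default-deny rule are assumptions made for illustration.
    from typing import Dict

    user_policy: Dict[str, str] = {
        "family": "exact",       # precise location
        "coworker": "city",      # coarse location only
        "advertiser": "none",    # nothing at all
    }

    def disclose(requester: str, exact_location: str, city: str) -> str:
        """Return the location at whatever precision the user allows this requester."""
        level = user_policy.get(requester, "none")  # unknown requesters are denied
        if level == "exact":
            return exact_location
        if level == "city":
            return city
        return "location withheld"

    print(disclose("coworker", "77 Massachusetts Ave", "Cambridge, MA"))  # coarse
    print(disclose("stranger", "77 Massachusetts Ave", "Cambridge, MA"))  # withheld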

Outside of a narrow focus on privacy, there have also been some unusual approaches to considering computers and ethics. Weld and Etzioni worked to incorporate a notion of "harm" into a planner to create ethical "softbots" [weld1994]. Eichmann proposed an ethic for Internet agents and spiders to limit bandwidth abuse [eichmann1994]. Allen et al. suggest a "moral Turing test" as a method to evaluate the ethical agency of an artificial intelligence [allen2000]. Wallach also proposes research and development of "robot morals" [wallach2002]. Brey proposes "disclosive computer ethics" as a methodology for maintaining human values [brey2000].
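
Of these, Eichmann's agent ethic is the easiest to make concrete. The sketch below shows one plausible way a spider could limit bandwidth abuse by honoring a site's robots.txt rules and pausing between requests; the user-agent name, target URL, and default delay are my assumptions rather than parts of his proposal.

    # Minimal sketch of a bandwidth-limiting spider: honor robots.txt and
    # pause between requests. The user-agent name, target site, and default
    # one-second delay are assumptions, not details of Eichmann's proposal.
    import time
    import urllib.request
    from typing import Optional
    from urllib import robotparser

    USER_AGENT = "example-research-spider"  # hypothetical agent name

    robots = robotparser.RobotFileParser()
    robots.set_url("https://example.org/robots.txt")
    robots.read()

    def polite_fetch(url: str) -> Optional[bytes]:
        """Fetch url only if robots.txt permits it, honoring any crawl delay."""
        if not robots.can_fetch(USER_AGENT, url):
            return None  # the site has asked agents not to fetch this URL
        time.sleep(robots.crawl_delay(USER_AGENT) or 1.0)  # rate-limit ourselves
        request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(request) as response:
            return response.read()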

After all these different uses of the term "ethics," readers might ponder exactly "what is ethics?" It is beyond the scope of this thesis to answer that question, but those seeking more information might consult MacIntyre's A Short History of Ethics [macintyre1967]. A gentler introduction for non-specialists is provided by Introducing Ethics [robinson2001]. Additionally, Sher has collected an anthology of readings on ethics and moral philosophy [sher1989].

A question that is within the scope of this thesis is: "what do you, the author, mean by ethical?" Ethics [fieser1999] is often divided into:

  • applied ethics (such as Medical Ethics or Environmental Ethics)

  • normative ethics ("moral standards that regulate right and wrong conduct")

  • metaethics (argumentation about basic issues that often serve as a foundation for ethical theory).

What I mean by ethical is the application of ethical theory stemming from commitments to a metaethical position. In plain English, something is ethical if we can explain how we arrived at judging it "good" by relying on a framework. An example may help elucidate these somewhat cryptic explanations.

Consider the contractualist metaethical position. Contractualism founds ethical evaluations on a hypothetical or real contract formed between groups or individuals. An enormous amount of metaethical philosophy can be termed contractualist, including the work of Hobbes, Rousseau, Rawls, and Gauthier.

Cudd describes the contractualist metaethical position in the following manner: "Contractualism, which stems from the Kantian line of social contract thought, holds rationality requires that we respect persons, which in turn requires that moral principles be such that they can be justified to each person" [cudd2000]. Thus, we should offer our moral decisions in public and seek to justify them to each user.

In "Affective Sensors, Privacy, and Ethical Contracts," Reynolds and Picard discuss the application of contractualism to problems involving hypothetical systems that sense and communicate affect [reynolds2004CHI]. Specifically, they find that participants who did not have an ethical contract viewed hypothetical systems as significantly more invasive of privacy when compared with those who did receive an ethical contract. This finding was used to shape some of the experimental designs that appear in chapters 6, 7, and 8. Following this, Reynolds and Picard discuss the relationship between contractualism and the value-sensitive design development of informed consent [reynolds2004AD]. Both of these papers make metaethical commitments and then proceed by applying relevant ethical philosophy to problems related to the design of affective computing systems.