
The Ethical Treatment of Robots: The Implications of Advanced Artificial Intelligence

A longtime dream of science fiction has been that of the android, the cognizant robot. Modern technology creeps ever closer to this ideal with the research being done in the field of artificial intelligence. There are two schools of thought on how to achieve the ideal autonomous machine: strong and weak artificial intelligence (AI). Strong AI is modeled on human intelligence, taking human cognitive function as its goal (Is it Better to be Strong or Weak?). Weak AI seeks to attain such a machine through other methods, making a machine only as intelligent as its usefulness requires (Ibid.). In this essay I will examine the applicability of ethics first to a robot with a mind designed to function like a human's, and then to a robot created through weak AI technology.

For strong AI theorists, the ideal would be a machine that could start with little or no information about its environment and systematically gather information: a machine analogous to the human child, but with a much quicker learning curve. If such a machine could communicate with other machines and humans on a meaningful level, its utility would be maximized. A strong AI robot would have a brain neurologically modeled after a human's (Eliasmith et al.). In essence, what would have been created is a mechanical thing which has neural plasticity, learns from its environment, draws inferences, creates memories, and has the ability to communicate, just like a human being. The question of what allows humans to be self-conscious is still unanswered. For strong AI, the model is the human mind, and the objective is to copy every neuron and cognitive function.

If consciousness comes from any social or mental functionality, then it would follow that such a robot might very well be self-conscious, since it has a mind with the functionality of a human's. With such a possibility looming overhead, would it be ethical to create such a thing and deny it the exercise of free will? There are those who believe that all robots should have a mechanism for immediate termination, to defend against the possibility that they develop consciousness and then evil intent towards mankind (Rothman), or that military robots might kill innocents rampantly (Goose). Is it ethical to create a potentially conscious thing, only to deprive it of self-awareness if it shows any self-realization? This would seem to be the equivalent of enslavement: keeping a thing for the sole purpose of its disposability, and cutting it off and destroying it if it showed any higher thinking or awareness. Considering current attempts at brain research on a mechanical model, would it be ethical to destroy or alter various areas of the brain of a machine that may very well be self-aware (Eliasmith et al.)?

Looking at this from the consequentialist school of ethical thought, it seems that there is no problem. For consequentialists, something is good if its results are good (Sinnott-Armstrong). The benefit of the intended result is justification for the act in question. From this point of view, since robots are being created for use by humans, the consequence of improved living conditions for society justifies the act of creation and subsequent use as their creators see fit. The consciousness or unconsciousness of a robot would be irrelevant to the justification of creating a robot with the capability to be conscious. If a robot were found to be conscious, this would not affect the decision to injure or alter its neurological functionalities, if benefits were to be had. In fact, discovering that a robot had become self-aware might be valuable for human brain research, as this could lead to the discovery of the exact mechanism for consciousness in humans.

From the deontological viewpoint, an act is right if it is performed with good intentions and conforms with the moral norm (Alexander and Moore). The rightness or wrongness of something is determined by the intent of the doer. The intent to create a robot with maximum utility would be considered right by deontological ethics. The consideration that said robots might obtain human-like consciousness need not be factored into the ethical question, if the intent is not to make robots with consciousness, or to treat a self-aware robot unethically. In the case of brain research through robotic brain models, the creation of a fully conscious being might be beneficial and therefore intentional. The ethicality of this situation would be debatable for a deontologist, and would likely be determined by the prevailing norm regarding whether or not robots should have rights like humans.

Presently there are robotic machines being used as disposable replacements for humans. Currently in development are semi-autonomous robots which are manipulated by human counterparts (Anthony). Such a robot would not have a human-like brain, or be likely to have consciousness, since it is only semi-autonomous and has no use for such functionalities.

There are those who fear that semi-autonomous robotic soldiers will soon become fully autonomous and kill humans without discretion (Goose). The petition to halt military robot production, written by a proponent of this belief, includes several ethically interesting arguments. One argument against robotic military agents is the lack of accountability for their actions, and their lack of empathy: "They would lack the ability to relate to humans and to apply human judgment" (Ibid.). These objections seem to imply that strong AI would make for better robotic soldiers than weak AI, since strong AI would allow robots to have empathy for the individuals they come in contact with.

The concept that there should be a way to terminate a robot immediately (Rothman) would be justified under both schools of thought: such a measure's intention and goal is the avoidance of some immediate danger to humans. Interestingly, the individuals advocating such a measure have clearly taken into consideration that robots could achieve consciousness by some mechanism; indeed, remote termination is only necessary if robots can obtain consciousness. The reason that these robots would not be treated ethically, then, seems to stem from the fact that they are not organically human.

The first robot to pass an IQ test has a brain with millions of neurons (Eliasmith et al.). The ethically questionable aspect of this great achievement is the researchers' destruction of select neurons to observe the effects on the robot. This resulted in impaired brain function, just as it would in humans, showing that robotic models of human brains would be of immense use in the study of brain damage and development. This is ethically justifiable due to the intent and goals involved.
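To make the lesion-study idea concrete, consider a minimal sketch (this is not the Eliasmith et al. architecture; the network, task, and every parameter are illustrative assumptions): silencing a growing fraction of units in a toy neural network degrades its task performance, analogous to destroying select neurons in a robotic brain model.

```python
import numpy as np

# Toy lesion study: ablate random hidden units and watch accuracy fall.
# Purely illustrative; not the Eliasmith et al. (Spaun) model.

rng = np.random.default_rng(0)

# Task: classify 2-D points by which side of the line x + y = 0 they lie on.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fixed random hidden layer with a least-squares linear readout.
n_hidden = 200
W = rng.normal(size=(2, n_hidden))
H = np.tanh(X @ W)                               # hidden activations
readout, *_ = np.linalg.lstsq(H, y, rcond=None)  # shape (n_hidden,)

def accuracy_after_ablation(fraction: float) -> float:
    """Silence a random fraction of hidden units, return task accuracy."""
    mask = rng.random(n_hidden) >= fraction      # surviving units
    preds = (H * mask) @ readout > 0.5
    return float((preds == y).mean())

for frac in (0.0, 0.25, 0.5, 0.75):
    print(f"ablated {frac:.0%} of units -> "
          f"accuracy {accuracy_after_ablation(frac):.2f}")
```

Accuracy drops as more units are silenced, mirroring the impaired brain function observed when neurons were destroyed in the robotic model.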

Examining the problem in the light of these ethical theories, it is difficult to imagine that there is anything wrong at all with creating a thing with potentially human-like consciousness and disposing of it arbitrarily, without allowing it to exercise free will. This is because ethical thought is largely humanocentric at present, with benefit, good, and right defined and considered in relation to humans. Thus, although a thing may reach the status of a human mind, it would still not qualify to be treated ethically, unless it were organically human.

If a robot were created with all the cognitive capabilities of a human, theoretically it would show all the behavioral and physical traits of a functioning human being. It could never be known whether such a robot were conscious, in the same way that it is impossible to conclusively prove that any given human is conscious (Hyslop). In the same way that one assumes another is conscious, i.e. by observing behavior and speech and seeing that it resembles one's own, the robot could be assumed to be conscious. In Alan Turing's test, if an individual cannot discriminate between answers given by a human and those given by a machine, then the machine is indistinguishable from a human in regard to its capability to understand (Turing). The belief that a self-aware robot, with a mind functionally indistinguishable from a human's, is otherwise distinguishable from a human would rest on the fact that robots are not organic beings.
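A toy simulation makes the indistinguishability point concrete (a minimal sketch; the canned answers and the interrogator's strategy are invented for illustration, not a serious evaluation): when the machine's answers are indistinguishable from the human's, the interrogator can identify the machine only at chance level.

```python
import random

# Toy imitation game. The identical canned answers are an assumption
# standing in for "functionally indistinguishable" responses.

def machine_answer(question: str) -> str:
    return "I would need to think about that."

def human_answer(question: str) -> str:
    return "I would need to think about that."

def interrogate(question: str) -> bool:
    """One round of the game; True if the machine is correctly identified."""
    answers = [("machine", machine_answer(question)),
               ("human", human_answer(question))]
    random.shuffle(answers)
    # The answers are identical, so the interrogator can only guess.
    guess = random.choice([0, 1])
    return answers[guess][0] == "machine"

trials = 10_000
caught = sum(interrogate("Can you write a sonnet?") for _ in range(trials))
print(f"machine identified in {caught / trials:.1%} of rounds")  # ~50%
```

Chance-level identification is precisely what "passing" the test means for this interrogator.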

It seems therefore that ethics should be expanded to include not merely organic humans, who are superficially assumed to have human mental capability due to their physical attributes, but anything that shows the mental capabilities of a human: a mind-centered ethics. The main implication of this would be that a robot capable of self-consciousness would need to be able to exercise its free will.

One objection to the ethical treatment of potentially self-aware robots is that the utility of the robot is the reason it exists at all, and ethical treatment would only hinder this. The robots could be performing those acts which they were created to do, without compromise to functionality due to considerations of free will. This argument must be considered using humans as a model, as a robot whose brain is modeled after the human brain can be assumed to experience human mental states. It is very possible that the robot's utility would decrease if it were given no free choice, as this is what occurs with humans (Iyengar and Lepper). Another potential cause for decreased utility is the robotic understanding of justice. If a robot has all the mental capabilities of a human, then it also has the capability to understand ethical concepts. If such a robot were aware of being treated unethically, this would ethically justify the robot in revolting against its oppressors. This argument is also relevant to those who believe humans should have the ability to terminate a robot at any time: the realization that they are being treated unethically could lead to the very revolt which was to be avoided.

The ethical problem of what to do with a self-conscious robot could initially be solved by definitively preventing a cognizant robot from ever existing, or by eliminating the mechanism of self-awareness from a robot's mind, and with it the need for any ethical consideration. The first solution, never bringing the capacity for robotic self-awareness into existence, would be neither ethical nor unethical with respect to the robots. However, once a robot has achieved consciousness, it would be unethical to take that consciousness away, on the grounds that a being with self-awareness would be subject to ethical considerations. This leads to the conclusion that brain research on robots is unethical: if human-like brains are considered to require ethical treatment like any other human mind, such research would be equivalent to research on unwilling or unknowing human subjects.

One consequence of robots being allowed to exercise free will would be their choice in what tasks they would do. This is problematic simply because the whole purpose of creating a robot would be negated if a robot were merely to become another member of society. Any given robot is most likely intended for a specific purpose, and given qualifications to fulfill that purpose in a superior manner. If a robot chose to do something which it was not crafted for, efficiency would be lost on a drastic scale. It must be noted that humans perform tasks better when they choose them, as opposed to being forced to do them (Iyengar and Lepper). Assuming the robot had functional strong AI, it would also show improved performance.

This correlation between choice and performance is only applicable in individualist societies. Individuals with collectivist values have increased motivation when choices are made for them by their superiors (Ibid.). The issue of choice of task could be solved by simply educating, socializing, or programming the robot so that it has a collectivist mindset. This might prove to be a non-solution in individualist cultures, since exposure might cause the robot to imitate individualists.

A secondary consideration in the discussion of a robot's exercise of free will is a psychological one. Self-awareness combined with lack of free will might result in a feeling of purposelessness, a feeling often experienced by suicidal individuals (McSwain, Lester, and Gunn III). This effect could be compounded by the robot being actually created for a singular purpose, resulting in a sense of purposelessness that outstrips the norm. If a robot were to experience depression or attempt to self-terminate, its usefulness would decrease. To prevent this, both for the sake of human benefit from robots and the ethical treatment of them, the self-conscious robot requires the full practice of free will: the ability to create a sense of purpose on its own.

One potential solution would be to predetermine a robot's desires, such that it would necessarily choose to do what it was made to do. This would take away some of the robot's free will, which would be unethical given its self-awareness and human-like mind. Another alternative would be to allow the robot to choose its functionality and then equip it to perform this functionality in a superior manner. This would give the advantages of free choice in work, as well as fulfill the original intent for creating robots.
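The difference between the two solutions can be sketched in code (a purely illustrative toy; the task names, weights, and functions are invented assumptions): predetermined desires fix the distribution the robot "chooses" from, whereas free choice followed by optimization leaves that distribution open.

```python
import random

TASKS = ["welding", "painting", "poetry"]

# Solution 1: predetermined desires. The robot "chooses", but its
# designer-fixed preference weights make the built-for task nearly certain.
DESIRE_WEIGHTS = {"welding": 0.95, "painting": 0.04, "poetry": 0.01}

def choose_with_predetermined_desires() -> str:
    tasks, weights = zip(*DESIRE_WEIGHTS.items())
    return random.choices(tasks, weights=weights, k=1)[0]

# Solution 2: free choice first, optimization second. The robot picks a
# task by its own (here: uniform) preference, then is equipped for it.
def choose_freely_then_equip() -> str:
    chosen = random.choice(TASKS)
    # ...here the robot would be fitted with hardware and skills
    # specialized for `chosen`.
    return chosen

print("predetermined:", choose_with_predetermined_desires())
print("free choice:  ", choose_freely_then_equip())
```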

The correlation between conceptual self and physical self might complicate this somewhat simple solution to the free will problem. Changes in physical self have been shown to cause changes in self-concept (Charmaz). If an individual's physical appearance affects their self-conception, altering a robot's physical being could alter its self-conception. This could result in a robot continuously changing its self-identity, and consequently what it desires to function as, not to mention the potential psychological difficulties involved in changes of self-conception (Ibid.). Such a situation would be highly unstable and unviable, considering that robots are created for the purpose of making human lives easier.

The basis for ethical behavior in humans is thought to be social agreement on a thing being right or wrong, with subsequent approval or disapproval (Harman) causing social stigma for those who act outside the acceptable range of behaviors. If robots were able to have existential knowledge of social bonding and the consequent social pressures, they too would subscribe to the ethics of the society. To treat self-aware robots ethically, then, and to ensure that they treat humans ethically in return, self-conscious robots should be given the opportunity to experience social bonds. If self-conscious robots had the ability to form the emotional bonds which unite individuals to one another, bonding might occur between humans and robots. This could result in robots subscribing to human ethics, and humans treating robots ethically. This would essentially put self-aware robots in the position of being extra-functional members of society, as well as quell any fears of a bloody revolution (Goose, Rothman).
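Harman's picture of morality as social agreement enforced by approval and disapproval can be caricatured in a small simulation (the agents, threshold, and update rule are all invented assumptions, not a model from the source): agents whose behavior deviates too far from the group norm are pushed back toward it, and the population converges on a shared standard.

```python
import random

# Toy sketch of norm formation by social pressure. Every parameter here
# is an illustrative assumption.

N_AGENTS = 50
STIGMA_THRESHOLD = 0.2   # deviation beyond this attracts disapproval
CONFORMITY_RATE = 0.5    # how strongly stigma pulls a deviant back

agents = [random.random() for _ in range(N_AGENTS)]

for step in range(100):
    norm = sum(agents) / len(agents)          # current social consensus
    for i, behavior in enumerate(agents):
        if abs(behavior - norm) > STIGMA_THRESHOLD:
            # Disapproval pushes the deviant toward the accepted norm.
            agents[i] += CONFORMITY_RATE * (norm - behavior)

spread = max(agents) - min(agents)
print(f"consensus {sum(agents) / len(agents):.2f}, spread {spread:.2f}")
```

A robot embedded in such a feedback loop would, on this picture, come to share the society's ethics, whether or not it was programmed with them.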

In conclusion, ethical treatment of a potentially self-conscious robot is necessary if one accepts an ethics that applies to everything with the potential for human mental capability, and not merely to human morphology. To allow the robot to exercise free will, it would have to be optimized for a task after it chose the task. This would further require the ability to create personal purpose, whether on a subjective or objective level. Social bonding between human and robot would help ensure ethical treatment of each by the other. If it were possible to create a robot with a human-like mind, it seems that these steps would be within the realm of possibility. Allowing the operation of free will in a conscious robot would also avoid the potential detrimental effects of the negative mental states experienced by humans when feeling purposeless.

Even after treating a robot with strong AI in an ethical manner, its utility is put into question by all the possible problems associated with a programmable human mind trapped in a mechanical body. Attempting to provide such a robot with the ability to self-motivate, in order that it might fully realize its free will, is confounded by the very purpose of the robot's creation. It seems that the most ethical solution is to ensure that no robot could have self-awareness, both to ensure that a self-aware robot is not in a state of constant mental torture, and to avoid the temptation to terminate such robots in the event that such a mental state impedes functionality. Considering the robotic soldier issue, strong AI would result in empathetic but emotionally vulnerable soldiers, whereas a robot with no emotion could not commit crimes of passion or suffer from trauma. Given the benefit of an emotionless, objective military, and the questionable ethicality of subjecting human-like robots with the aforementioned potential problems to a situation with lives on the line, weak AI seems the most ethical solution to the question of robotic consciousness. Weak AI would have robots created with minimal mental capabilities, with only enough functionality for whatever task they are being created for. As such robots would not have human brain architecture, there is no reason to assume they are self-aware, from a human-oriented point of view. If there is no reason to assume that they are self-aware, then there would be no real reason to consider ethics applicable.

The possibility that self-consciousness might arise without duplicating the human brain in an exact fashion should nevertheless be considered, to avoid treating a potentially conscious thing unethically. Even if consciousness is not displayed by weak AI robots, it is only with the discovery of the mechanism for consciousness that their true mental abilities can be understood, and with this the determination of the appropriate ethical treatment.

References

"Is it Better to be Strong or Weak?" A-i. Web. 2 Dec 2012.

Alexander, Larry, and Michael Moore. "Deontological Ethics." Stanford Encyclopedia of Philosophy. 21 Nov 2007. Web. 2 Dec 2012.

Anthony, Sebastian. "DARPA reveals Avatar program, robot soldiers incoming." ExtremeTech. 17 Feb 2012. Web. 3 Dec 2012.

Charmaz, Kathy. (1995). "The Body, Identity, and Self: Adapting to Impairment." The Sociological Quarterly, 36(4), 657-680. DOI: 10.1111/j.1533-8525.1995.tb00459.x

Eliasmith, Chris, Terrence C. Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang, and Daniel Rasmussen. (2012). "A Large-Scale Model of the Functioning Brain." Science, 338(6111), 1202-1205. DOI: 10.1126/science.1225266

Goose, Steve. "The Future of Global Warfare: Killer Robots." Human Rights Watch. 20 Nov 2012. Web. 2 Dec 2012.

Harman, Gilbert. (1975). "Moral Relativism Defended." The Philosophical Review, 84, 3-22.

Hyslop, Alec. "Other Minds." Stanford Encyclopedia of Philosophy. Web. 2 Dec 2012.

Iyengar, Sheena S., and Mark R. Lepper. (1999). "Rethinking the value of choice: A cultural perspective on intrinsic motivation." Journal of Personality and Social Psychology, 76(3), 349-366.

McSwain, Stephanie, David Lester, and John F. Gunn III. (2012). "Warning Signs for Suicide in Internet Forums." Psychological Reports, 111, 186-188. DOI: 10.2466/12.13.PR0.111.4.186-188

Rothman, Wilson. "Why the Terminator Uprising (Probably) Won't Ever Happen." Gizmodo. 21 May 2009. Web. 2 Dec 2012.

Sinnott-Armstrong, Walter. "Consequentialism." Stanford Encyclopedia of Philosophy. 20 May 2003. Web. 2 Dec 2012.

Turing, A. M. (1950). "Computing machinery and intelligence." Mind, 59, 433-460.
