
Valen Booker

Ms. Linda Hofmann


English 1102
1 April 2014
How Much Autonomy Is Appropriate in the Military, and What Are the Risks?
When people imagine autonomy in artificial intelligence (and I will be honest, I imagined it the same way before research on this topic began to interest me), they usually picture walking robots made in humanity's image, taken straight from Isaac Asimov's I, Robot. This movie stars Will Smith as a law enforcement officer looking to protect mankind while working alongside an autonomous, human-like robot. The movie concludes with the antagonist being revealed as V.I.K.I., the computer that makes entirely autonomous decisions and controls all of the public servant drones. This is a perfect example of the controversy that always comes to mind when discussing autonomous decisions. Society is quick to bring up the idea that robots making their own decisions could be potentially dangerous and could one day turn against humans.
With increasing amounts of technology and autonomous capability, I began to wonder how that would affect the military and, most importantly, the public's opinion of autonomy in the military. Throughout society, opinions on artificial intelligence vary. This is mainly due to a lack of knowledge about the topic and to how it is portrayed in the media. Artificial intelligence can be anything from Siri on the iPhone to finance programs that protect the public from fraud. So the question at hand is: how much autonomy is appropriate for artificial intelligence in a military environment, and what are the risks?
I personally enjoy anything related to transhumanist inventions and technologies that will be available in the future. Transhumanism is defined as "the belief or theory that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology" (Dictionary). I believe that there are many ways in which artificial intelligence will benefit us in the future. In the near future I will be commissioned as an officer in the United States Air Force; therefore, I am curious how growing autonomy will affect a military environment. It is fun to speculate about what the future will be like when studying the autonomous robots already in place today. Technology grows at an exponential rate, which I find highly significant considering this brings us closer and closer to the results we are looking for in both the civilian world and the military.
My 12th grade ethics teacher had a big influence on me because of his humor and beliefs. He always had interesting ideas and academic websites to share with us regarding future technologies and their ethical concerns. We had many class discussions on the ethics of artificial intelligence and its possible outcomes. We joked around with the idea that robots may someday enslave us; note that I say "joked." The reason this is impossible is that robots are not programmed to rebuild themselves after damage, and this applies specifically to drones employed on the battlefield. In addition, robots will never be able to make a decision that overrides a human's decision. I am curious about this process and about how we as a society justify autonomy when it is such a controversial subject.
This is a significant inquiry in relation to the military because the military is the United States' first and foremost defense system. When and if the military implements full autonomy in its drones, the risks could affect the entire civilian population as well as the soldiers themselves. However, millions of military lives could be saved if the Department of Defense is successful in its research and in fully utilizing these systems.
With regard to the risks of deploying autonomous robots, there are two factors that must be considered: seriousness and probability. In other words, what is the worst-case scenario, and how probable is it that it will actually occur? A good example of the relationship between seriousness and probability is an asteroid hitting Earth. An asteroid making forceful contact with Earth could possibly wipe out the human race; however, the probability of this happening is thankfully low. The moral seriousness of these systems is often discussed as well. The military wants to make clear that the robots will do no more than what is already morally expected of a human soldier.
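To make this relationship concrete, below is a minimal sketch, entirely my own illustration and not drawn from any of my sources, of how expected risk can be compared across scenarios by multiplying seriousness and probability; the scenario names and numbers are hypothetical.

# Toy illustration of risk = seriousness x probability.
# All scenarios and numbers are hypothetical, chosen only to show why a
# catastrophic but improbable event can carry less expected risk than a
# modest but likely one.

scenarios = {
    "asteroid strikes Earth": {"seriousness": 10.0, "probability": 1e-8},
    "drone sensor misidentifies a target": {"seriousness": 6.0, "probability": 1e-2},
}

for name, values in scenarios.items():
    expected_risk = values["seriousness"] * values["probability"]
    print(f"{name}: expected risk = {expected_risk:.8f}")

On these made-up numbers, the frequent but modest failure carries far more expected risk than the catastrophic but improbable one, which is exactly why both factors must be weighed together.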
There are many risk factors that the military is studying, the most important of which is called an objective standard. Simplified, this is what is known as a first-generation problem: how can the military deploy something that could involve unacceptable risks if there is no precedent from which someone has already endured and suffered harm? Precedent is a primary form of evidence when determining risk. The Department of Defense believes it has a solution to this problem. It feels ethically obligated to extensively test these robots in an artificial, human-free environment, where the situation can be judged before the robots are placed in a human-robot setting.
Looking toward the future, the government has started to introduce plausible scenarios and to question the systems in order to prevent further harm. Several of the inquiries being discussed involve the legal challenges behind autonomy, such as unclear responsibility, refusal of an order, and the consent given by soldiers to be exposed to the risk. The responsibility issue examines who would be at fault for anything unauthorized or improper, whether intentional or accidental. There are many people in the chain of command who could be involved, including the designers, the robot manufacturer, the procurement officer, the controller or supervisor, the field commander, the president, or even the robot itself (Lin, Bekey, and Abney).
Refusing an order is often considered a serious risk. There are many situations in which refusal could be justified, including one possible scenario taken straight from the unclassified Department of Defense proposal itself. Suppose a commanding officer gives a robot the order to attack a house that is known to be occupied, but the robot, given sensors that can see through the walls, detects that there are children and innocent civilians inside and refuses the order based on its programmed instruction to minimize civilian casualties (Lin, Bekey, and Abney). This results in a dilemma that is especially significant because, when programming these robots, designers need to know whether the robots should be given the ability to refuse an order, for better or worse.
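As a purely hypothetical sketch of this programming decision (the names Order, scan_for_civilians, and execute are my own inventions, not anything taken from the DoD proposal or from Lin, Bekey, and Abney), the dilemma comes down to whether a refusal branch is written into the code at all:

# Hypothetical sketch of the refuse-an-order dilemma described above.
# Nothing here reflects a real military system; Order and
# scan_for_civilians are invented names used only for illustration.

from dataclasses import dataclass

@dataclass
class Order:
    target: str
    issued_by: str

def scan_for_civilians(target: str) -> bool:
    """Stand-in for the through-wall sensors in the scenario; assume it
    returns True when civilians are detected at the target."""
    return True  # fixed result matching the scenario in the text

def execute(order: Order, may_refuse: bool) -> str:
    # The key design decision: does the refusal branch exist at all?
    if may_refuse and scan_for_civilians(order.target):
        return "order refused: civilians detected"
    return f"attacking {order.target}"

print(execute(Order("house", "commanding officer"), may_refuse=True))
print(execute(Order("house", "commanding officer"), may_refuse=False))

With may_refuse set to False, the same robot attacks regardless of what its sensors report; the ethical weight rests entirely on which version the programmers choose to build.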
In October of 2007, a semi-autonomous cannon used by the South African army malfunctioned, killing nine friendly soldiers and wounding several others (Lin, Bekey, and Abney). When utilizing these robots, soldiers need to be informed of the risks involved, because it would be ignorant not to prepare for such an incident happening again. The DoD recognizes this as a risk precisely because, unlike in other situations, there is a precedent to rely on.
The military uses certain levels of autonomy today in aerial, ground, maritime, and space systems. Troops commonly use unmanned aerial vehicles (UAVs), which are aircraft without a human pilot aboard; their flight is controlled either autonomously by onboard computers or by the remote control of a pilot on the ground or in another vehicle (Dictionary). These systems are widely used today; however, the DoD has noted that many improvements can still be made and that there are many possibilities for how far autonomous systems could be taken.
These systems are not meant to replace the human soldier; they are merely put in place to assist humans and benefit the military as a whole. These systems fall squarely under the transhumanist projects I discussed earlier, because they are made to extend humanity's reach by offering capabilities without the degradation of fatigue that a human would experience. Autonomous systems have attributes such as greater flexibility in dangerous environments and reaction speeds beyond our limits. Hopefully, these systems will help reduce the workload currently expected of supervisors and operators. With all of this saved and managed time, our leaders will be free to make more complex decisions (The Role of Autonomy in DoD Systems).
After all of my research on this topic, I believe there is great potential for autonomy in the military. It can be overwhelming to focus on the risks at hand, but knowing that there are substantial numbers of programmers and scientists working with the military and the government makes me optimistic about the future. In response to the inquiry, the answer is largely a matter of opinion. There are countless benefits to discuss with an ideal autonomous robot in the military, but there are many risks to be considered as well. Autonomy is being further developed for military purposes, and one day I may be able to witness and interact with these systems myself.
Works Cited
Alexander, David. "U.S. Military Embraces Robots with Greater Autonomy." Chicago Tribune.
N.p., 09 May 2012. Web. 25 Mar. 2014. <http://articles.chicagotribune.com/2012-05-
09/news/sns-rt-us-usa-defense-robotsbre84805n-20120508_1_robots-15-ton-military-truck-
autonomy>.
Dvorsky, George. "The Case Against Autonomous Killing Machines." Io9. N.p., 21 June 2012.
Web. 25 Mar. 2014. <http://io9.com/5920084/making-the-case-against-autonomous-killing-
machines>.
Noorman, Merel, and Deborah G. Johnson. "Negotiating Autonomy and Responsibility in Military Robots." Ethics and Information Technology. Springer, 18 Feb. 2014. Web. 25 Mar. 2014.
United States of America. Department of Defense. Defense Science Board. The Role of
Autonomy in DoD Systems. N.p.: n.p., n.d. Print.
United States of America. US Department of the Navy. Office of Naval Research. Autonomous Military Robotics: Risk, Ethics, and Design. By Patrick Lin, Ph.D., George Bekey, Ph.D., and Keith Abney, M.A. N.p.: n.p., n.d. Print.
