Devon Bowers
Professor Jaime McBeth-Smith
English 1010-037
April 28, 2016
Should We Fear Artificial Intelligence?
Computers have always been interesting to me. Like watching flames in a fire, the processes of a computer fascinate me, and I'm mesmerized by how it all works. I sometimes wonder how much farther we can go before we reach our technological limits. With today's seemingly endless technological advancements, it seems we have only scratched the surface of our potential. It begs the question: what is possible? For example, is there a way to create a computer that functions and behaves like a human, and if so, how would this technology impact the world? Would life be better or worse? In popular movies and books, we often imagine doomsday scenarios in which artificial intelligence, such as advanced and independent robots, takes over the world. We always fear the worst in what we do not understand and in its potential to destroy or harm our lives. Should we fear technology if it becomes too intelligent, or should we leap forward with advancements as quickly as possible, trusting that what we create will always remain under our control and do no harm?

As I have researched and thought about this topic, many questions have arisen. I wonder if the technological threats we see today, including viruses and cyber terrorism, would only be made worse by more advanced computers and artificial intelligence. Can we trust weapons and drones that are designed to kill, even though they are not used without human intervention? What might happen if these weapons were allowed to make decisions or operate outside the immediate control of a human? The more I have explored this issue, the more I believe it comes down to one basic question: should we fear artificial intelligence? In the scientific community this is a heavily debated topic. There are scientists who believe artificial intelligence does not pose a significant threat, and that we therefore should not fear advancing further into this field. On the other hand, there are scientists who argue that artificial intelligence could pose a threat, and that technological advancement should be approached carefully. In this paper I will examine both viewpoints. I will begin by addressing and synthesizing some of the arguments in favor of artificial intelligence, and I will then discuss the arguments for taking caution in artificial intelligence research.
Renowned physicist Stephen Hawking has emerged as one of the leading voices warning against artificial intelligence. He argues that the development of full artificial intelligence "could spell the end of the human race" (Luckerson). Perhaps when we improve artificial intelligence to its full potential, we will reach the point where we should begin to worry about robots taking over. Stephen Hawking is a highly respected physicist in the scientific world, and his is a voice that should be listened to. What he said is a bold statement: could he be right, or do we simply have too much anxiety? Other scientists, including Elon Musk and Nick Bostrom, to name a few, share similar concerns.
There are many scientists who believe that robots will become hostile to humanity, but all of this is educated guessing. When I think about it, nobody really knows for sure, and there's no evidence to suggest that robots will take over. We all just assume that once robots obtain intelligence they will have the potential to become evil. But the opposite could also be true: we don't know whether robots might instead become peaceful and kind. A lot of this descends into an endless "what if" battle between those who believe in a robotic apocalypse and those who don't, but I won't go there.
Scientists will say either that it will happen, that it won't, or that they are undecided. I'm sure we would all be happy if it just didn't happen, but that's the thing: we don't know. Bianca Bosker argues that Google has been taking steps to ensure that its products don't rise up in their wrath against humanity (Bosker). Because there are concerns over the ethical judgment artificial intelligence might exercise once it is created, Google acquired a tech company called DeepMind that has been working on this very problem, and Google even created an AI safety and ethics review board to ensure the technology is developed safely (Bosker). Google wants to make sure its products behave properly. It is possible artificial intelligence will face moments when it must make ethical decisions in a split second. Driverless cars are a prime example. Opponents of A.I. want peace of mind, and to know that these computers are going to be able to make decisions that will help save lives. Google seems to be taking a step in that direction and addressing that concern.
Boston Dynamics, a tech firm that Google acquired in 2013 and that worked closely with the U.S. military, has raised some questions about Google's intentions. Bosker writes that Google could "build the most sophisticated robot soldiers on the market, paving the way for man and machine to fight shoulder-to-shoulder in battle" (Bosker). Bosker then asks whether it would be more unethical to let human soldiers die when robots could take their place. I would say that if we had the technology and it was effective, then we should probably implement it on the battlefield. Since we don't have that technology, or have only limited technology, it is not going to happen for a long time. Still, the very thought that we should bring this kind of technology into war may only get us closer to a robot apocalypse.
Likewise, a nonprofit organization called the Foundation for Responsible Robotics, or FRR, plans to "promote responsibility for the robots embedded in our society" (Welsh). Its members plan to work with the public and engage policymakers; create interdisciplinary teams of robotic, legal, ethical, and societal scholars; explore what it means to be responsible as robotics researchers and designers; and run workshops that engage the public (Welsh).
On the other side of the debate, there are scientists who believe artificial intelligence will bring good results to the lives of many in the near future, even if that future is unknown. Sean Welsh, a doctoral candidate in robot ethics at the University of Canterbury, points out that rule-driven robots "play a mean game of chess, but feel nothing about winning or losing" (Welsh). This is true: robots are driven by mathematics, while humans are driven by desires such as love or hate. I understand the merit of this argument. I'm certainly not driven to do something simply because two plus two equals four. I do things because I enjoy doing them, or I don't do things because I don't like them. I was born with the potential to learn math, but I wasn't born with a preprogrammed knowledge of math. For robots and AI it's the exact opposite: they are programmed to do certain things and are driven by statistical data, making educated guesses the whole time.
Artificial intelligence has been designed to help everyone, yet because of the film industry it has been portrayed as potentially dangerous, or even as something evil. There has been much good in the technology being developed, not only in the field of artificial intelligence but in many other areas as well. Tim Oates, chief scientist of a software company called CircleBack, argues that what we need is a clearer understanding of the issues: what AI can do and, more pressingly, what it can't (Oates). Oates says that it is improbable for A.I. to have an actual consciousness; therefore, many of the fears regarding AI are unfounded. We may never truly know whether an AI can have its own consciousness. It has been proven that AI can learn as it goes, but usually it can only learn about the things it has been programmed to do.
Oates goes on to explain the four things that would have to occur in order for AI to overthrow humanity. He says that AI must develop a sense of self distinct from others and have the intellect to step outside its intended programmed boundaries. Oates's three other points say that AI must develop some kind of hatred of or incompatibility with humanity, create a plan that involves death and destruction, and then enact that plan to overthrow humanity (Oates). When I read this, it does seem logical. After all, it is what a human would have to do if they wanted to take over the earth, and we have rarely seen anything like it happen in the history of the world, with a few exceptions such as the rise of Hitler in the 1930s and 1940s.
Tim Oates goes further, explaining that robots would have billions of choices to make, and that choosing to overthrow humanity is unlikely. Faced with all these choices, very few humans would decide to try to take over the earth, so there is just no reason to think that robots will automatically make that choice. Even if a robot did make that choice, very few would actually try. Another reason not to fear the rise of artificial intelligence is that most robots and A.I. are programmed for specific tasks: Deep Blue beat the world chess champion, but it would lose a game of checkers to a toddler (Oates).
In the article "Understanding Artificial Intelligence and Why We Fear It," Rean argues that big tech companies are not putting in place the simple rules that could help prevent humanity's overthrow, and he also says that people aren't paying much attention to applying these rules. These three rules, or laws, were written by Isaac Asimov. The first says that a robot may not injure a human being or, through inaction, allow a human being to come to harm; the second, that a robot must obey human orders unless doing so would conflict with the first law; and the third, that a robot must protect its own existence as long as that does not interfere with the first and second laws. These laws can help us move in the right direction.
A lot of the time, artificial intelligence is developed without ever taking into account that these machines could potentially become a risk to humanity's survival. Noel Sharkey, head of the Foundation for Responsible Robotics, is concerned that we are "rushing headlong into the robotics revolution" without giving enough policy thought to the social problems that might arise (Welsh). These social problems could cause robots in the future to rise against us, but many people and companies don't seem too worried about the future of robotics and A.I. So far there haven't been any problems, but we also haven't gotten to the point where we can develop a computer with full consciousness.
Artificial intelligence can be amazing, and it will continue to improve through the years. For myself, I disagree with the doomsday view; at the moment there is no evidence of robots or A.I. capable of taking over. As I've weighed the evidence, it seems to suggest that a robot uprising is extremely unlikely. There are many robots and A.I. systems working in numerous factories today, and in the near future there will be driverless cars on the road, but I don't expect to see a car simply decide to drive off a cliff. So I think we are all taking this a little too far; we've been influenced heavily by Hollywood. I think it's more of a thinking problem than anything; we really need to think more positively about humanity's future.

Works Cited
Bosker, Bianca. "Google's New A.I. Ethics Board Might Save Humanity From Extinction." The Huffington Post, January 29, 2014. Web. http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html
Luckerson, Victor. "5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse." Time, December 2, 2014. Web. March 31, 2016. http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
Oates, Tim. "Why We Shouldn't Fear Artificial Intelligence." Entrepreneur, April 17, 2015. Web. March 31, 2016. https://www.entrepreneur.com/article/245212
Rean. "Understanding Artificial Intelligence and Why We Fear It." Hongkiat. Web. April 6, 2016. http://www.hongkiat.com/blog/understanding-artificial-intelligence/
Welsh, Sean. "The Drive Towards Ethical AI and Responsible Robots Has Begun." The Conversation, December 15, 2015. Web. http://theconversation.com/the-drive-towards-ethical-ai-and-responsible-robots-has-begun-52300

Reflection
1. When researching this topic, I started to think about the arguments involved: either you believed that a robot apocalypse could happen, or you believed it could not. I had to find credible sources that had the material I needed. Once I found the sources I wanted, I had to summarize the main points I needed to include in the synthesis, including the rhetorical analysis and a reflection on each source's author. I tried my best to carefully analyze the material, looking for similar arguments between the sources. Once I found those arguments, I thought about which ones to include and the order they should be placed in the paper.
2. In my Tech 1010 class, we had a paper where we had to write about the future and how the human race will have to adapt to future circumstances. There were many concerns over the future of energy, transportation, homes, and cities. When researching that paper I relied heavily on what I learned in my English class, looking for the arguments needed to answer questions like how we are going to meet future energy demands, how we are going to help people travel short and long distances, and so on. Applying what I learned in English class made the project so much easier.
