Devon Bowers
Professor Jaime McBeth-Smith
English 1010-037
April 28, 2016
Should We Fear Artificial Intelligence?
Computers have always been interesting to me. Like watching flames
in a fire, the processes of a computer fascinate me, and I'm mesmerized by
how it all works. I sometimes wonder how much farther we can go before we
reach our technological limits. With today's seemingly endless technological
advancements, it seems we have only scratched the surface of our
potential. This raises the question: what is possible? For
example, is there a way to create a computer that functions and behaves like
a human, and if so, how would this technology impact the world? Would life be
better or worse? In popular movies and books, we often imagine
doomsday scenarios in which artificial intelligence, such as advanced and
independent robots, takes over the world. We always fear the worst in what
we do not understand and in its potential to destroy or harm our
lives. Should we fear technology if it becomes too intelligent, or should we
leap forward with advancements as quickly as possible, trusting that what
we create will always remain under our control and do no harm?
that choice. Even if a robot did make that choice, very few of
them would actually try. Another reason not to fear the rise of artificial
intelligence is that most robots and A.I. are programmed for specific tasks.
Deep Blue, for example, beat the world chess champion but would lose a
game of checkers to a toddler (Oates).
In the article "Understanding Artificial Intelligence and Why We Fear It,"
Rean argues that big tech companies are not putting in place simple rules that
could help prevent humanity's overthrow, and that people aren't
paying much attention to applying such rules. The three rules, or laws,
written by Isaac Asimov, say that a robot must not injure a human, or allow a
human to come to harm through inaction; that a robot must obey orders given
by humans, except where those orders would conflict with the first law; and
that a robot must protect its own existence as long as doing so doesn't
interfere with the first and second laws. These laws can help us move in the
right direction.
A lot of the time, artificial intelligence is developed without ever
taking into account that these machines could potentially become a risk to
humanity's survival. Noel Sharkey, head of the Foundation for Responsible
Robotics, is concerned that we are rushing headlong into the robotics
revolution without giving enough policy thought to the social problems that
might arise (Welsh). These social problems could cause robots in the future
to rise against us, but many people and companies don't seem too worried
about the future of robotics and A.I. So far there haven't been any problems
with it, but we also haven't gotten to the point where we can develop a
computer with full consciousness.
Artificial intelligence can be amazing, and it will continue to improve
through the years. For myself, I disagree with the doomsday predictions; at
the moment there is no evidence of robots or A.I. capable of taking over. As
I've weighed the evidence, it seems to suggest that a robot uprising is
extremely unlikely. There are many robots and A.I. systems out there today
working in numerous factories. In the near future, there will be driverless
cars on the road, but I don't expect to see a car simply decide to drive off
a cliff. So I think that we're all taking this a little too far; we've been
influenced heavily by Hollywood. I think it's more of a thinking problem than
anything; we really need to think more positively about humanity's future.
Works Cited
Bosker, Bianca. "Google's New A.I. Ethics Board Might Save Humanity
From Extinction." The Huffington Post. January 29, 2014. Web.
http://www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html
Luckerson, Victor. "5 Very Smart People Who Think Artificial Intelligence
Could Bring the Apocalypse." Time. December 2, 2014. Web. March 31, 2016.
http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/
Oates, Tim. "Why We Shouldn't Fear Artificial Intelligence."
Entrepreneur. April 17, 2015. Web. March 31, 2016.
https://www.entrepreneur.com/article/245212
Rean. "Understanding Artificial Intelligence and Why We Fear It."
Hongkiat. Web. April 6, 2016. http://www.hongkiat.com/blog/understanding-artificial-intelligence/
Welsh, Sean. "The drive towards ethical AI and responsible robots has
begun." The Conversation. December 15, 2015. Web.
http://theconversation.com/the-drive-towards-ethical-ai-and-responsible-robots-has-begun-52300
Reflection
1. When researching this topic, I started to think about the arguments
involved: either you believed that a robot apocalypse could happen, or you
believed it could not. I had to find credible sources that had the
material I needed. Once I found the sources I wanted, I had to
summarize the main points I needed to include in the synthesis,
including the rhetorical analysis and a reflection on each source's author. I
tried my best to carefully analyze the material, looking for similar
arguments between the sources. Once I found those arguments, I thought about
which ones to include and the order they should be placed in within the paper.
2. In my Tech 1010 class, we had a paper where we had to write about the
future and how the human race will have to adapt to future circumstances.
There were many concerns over the future of energy, transportation,
homes, and cities. When researching that paper, I relied heavily upon what
I learned in my English class, looking for the arguments needed to
answer questions like how we are going to meet future energy demands, how we
are going to help people travel short and long distances, and so on. Applying
what I learned in English class made the project much easier.