
Army of None: Autonomous Weapons and the Future of War
Audiobook · 13 hours


Written by Paul Scharre

Narrated by Roger Wayne


About this audiobook

Paul Scharre, a Pentagon defense expert and former U.S. Army Ranger, explores what it would mean to give machines authority over the ultimate decision of life or death.

Scharre's far-ranging investigation examines the emergence of autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. He spotlights artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II (antecedents of today's homing missiles) to autonomous cyber weapons, submarine-hunting robot ships, and robot tank armies.

Through interviews with defense experts, ethicists, psychologists, and activists, Scharre surveys the challenges that might face "centaur warfighters," who will combine human and machine cognition, on future battlefields. We've made tremendous technological progress in the past few decades, but we have also glimpsed the terrifying mishaps that can result from complex automated systems, such as when advanced F-22 fighter jets experienced a computer meltdown the first time they flew over the International Date Line.
Language: English
Release date: May 8, 2018
ISBN: 9781541489684


Reviews for Army of None

Rating: 3.96 out of 5 stars

38 ratings · 4 reviews


  • Rating: 4 out of 5 stars
    Well written and performed. Very focused on “ethics”; I was looking for a technical primer and future tech. But well written and well supported.
  • Rating: 4 out of 5 stars
    This book takes a pretty comprehensive look at autonomous weapons; it's both a primer and a review of the technical, tactical, strategic, moral, and ethical issues concerning the use of automated weapons, from human-guided systems to self-guided weapons. You could say "from stones to Skynet," because Scharre refers to the Terminator series regularly. Chunks of this book are so filled with techno-babble, military jargon, and acronyms that they are difficult to read, which is most unfortunate given the nature of the weapons Scharre discusses and the issues those weapons raise. I made the effort to read them carefully, and I am glad I did.
  • Rating: 4 out of 5 stars
    This book, written by a non-technologist with extensive military experience, describes the intersection of artificial intelligence with United States military affairs. It uses terms like “autonomy” and “semi-autonomy” extensively. Autonomous weapons are weapons that can identify their own targets. Semi-autonomous weapons can track pre-identified targets (that is, targets previously identified by humans). Semi-autonomous weapons are currently in use; no autonomous weapons are known to be in use.

    The line between these two is currently blurring. This is not due to Department of Defense research (e.g., DARPA), but to research in artificial intelligence (AI) in the commercial sphere. Computers are becoming “intelligent.” This book explores what that means and whether computers can be considered “alive.” It does not take this excursion as an academic exercise, but rather as an exploration into the future of warfare.

    As a technologist, I found myself desiring more optimism in the author. My attitude towards AI is very positive, and I see its progress as inevitable. The author keeps admonishing the reader that humans must remain “in the loop” in military applications so that they can make the ultimate decision whether or not to go for a kill. Again, as a technologist, I see human involvement as more or less inevitable. We humans will find a way to make increasingly better use of artificial intelligence, because that’s what we’ve done with other technologies throughout thousands of years of human history.

    We must – must – continue to work. I’m not scared of what’s ahead; it’s an opportunity for people like me to continue to work and to impact the future. I’m much more scared of our prospects if countries like the United States stop research on military applications while countries like Russia continue. The field of AI will continue to progress because of its promise in other applications. The only real question is to what extent the military will be “in the loop.” I’d rather we focus our energies than follow a policy of appeasement towards those with a harsher track record on human rights.

    Overall, this book achieves its purpose and communicates its message clearly. Those interested in military affairs or technology should pay attention.
  • Rating: 5 out of 5 stars
    We are witnessing the evolution of autonomous technologies in our world. As in much of technological evolution, military needs drive much of this development. Paul Scharre has done a remarkable job of explaining autonomous technologies and how the military establishment embraces autonomy: past, present, and future. A critical question: “Would a robot know when it is lawful to kill, but wrong?”

    Let me jump to Scharre’s conclusion first: “Machines can do many things, but they cannot create meaning. They cannot answer these questions for us. Machines cannot tell us what we value, what choices we should make. The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” The author has done a remarkable job of explaining what an autonomous world might look like.

    Scharre spends considerable time defining and explaining autonomy; here’s a cogent summary: “Autonomy encompasses three distinct concepts: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine’s decision-making when performing the task. This means there are three different dimensions of autonomy. These dimensions are independent, and a machine can be ‘more autonomous’ by increasing the amount of autonomy along any of these spectrums.”

    These two quotes summarize some concerns about making autonomous systems fail-safe. (Spoiler alert: it can’t be done…)

    “Failures may be unlikely, but over a long enough timeline they are inevitable. Engineers refer to these incidents as ‘normal accidents’ because their occurrence is inevitable, even normal, in complex systems. ‘Why would autonomous systems be any different?’ Borrie asked. The textbook example of a normal accident is the Three Mile Island nuclear power plant meltdown in 1979.”

    “In 2017, a group of scientific experts called JASON tasked with studying the implications of AI for the Defense Department came to a similar conclusion. After an exhaustive analysis of the current state of the art in AI, they concluded: [T]he sheer magnitude, millions or billions of parameters (i.e. weights/biases/etc.), which are learned as part of the training of the net . . . makes it impossible to really understand exactly how the network does what it does. Thus the response of the network to all possible inputs is unknowable.”

    Here are several passages capturing the future of autonomy. I’m trying to summarize a lot of the author’s work into just a few quotes:

    “Artificial general intelligence (AGI) is a hypothetical future AI that would exhibit human-level intelligence across the full range of cognitive tasks. AGI could be applied to solving humanity’s toughest problems, including those that involve nuance, ambiguity, and uncertainty.”

    “‘Intelligence explosion.’ The concept was first outlined by I. J. Good in 1964: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (This is also known as the Technological Singularity.)

    “Hybrid human-machine cognitive systems, often called ‘centaur warfighters’ after the classic Greek myth of the half-human, half-horse creature, can leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence.”

    In summary, “Army of None” is well worth reading to gain an understanding of how autonomous technologies impact our world, now and in the future.