Why Robot Brains Need Symbols

Nowadays, the words “artificial intelligence” seem to be on practically everyone’s lips, from Elon Musk to Henry Kissinger. At least a dozen countries have mounted major AI initiatives, and companies like Google and Facebook are locked in a massive battle for talent. Since 2012, virtually all the attention has been on one technique in particular, known as deep learning, a statistical technique that uses sets of simplified “neurons” to approximate the dynamics inherent in large, complex collections of data. Deep learning has powered advances in everything from speech recognition and computer chess to automatically tagging your photos. To some people, it probably seems as though “superintelligence,” machines vastly more intelligent than people, is just around the corner.
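To make that one-sentence description concrete, here is a minimal sketch, in Python, of what those “simplified neurons” amount to. The names, sizes, and random weights below are illustrative assumptions of mine, not anything from the debate itself: each layer computes a weighted sum of its inputs and passes it through a simple nonlinearity, and “deep” just means several such layers stacked.

    # A minimal sketch of the "simplified neurons" behind deep learning.
    # All names and dimensions are illustrative, not from the article.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, weights, bias):
        """One layer of simplified 'neurons': a weighted sum of the
        inputs passed through a nonlinearity (here, ReLU)."""
        return np.maximum(0.0, x @ weights + bias)

    # A tiny "deep" network: three layers mapping 4 inputs to 1 output.
    x = rng.normal(size=(1, 4))             # one example with 4 features
    w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
    w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

    h = layer(layer(x, w1, b1), w2, b2)     # hidden representations
    y = h @ w3 + b3                         # final output, no nonlinearity
    print(y)

What this sketch leaves out is learning: in a real deep learning system, the weights are adjusted, typically by gradient descent, so that the outputs gradually come to fit large collections of data.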

The truth is, they are not. Getting a machine to recognize the syllables in your sentences is not the same as getting it to understand what those sentences mean. A system like Alexa can understand a simple request like “turn on the lights,” but it’s a long way from holding a meaningful conversation. Similarly, robots can vacuum your floor, but the AI that powers them remains weak, and they are a long way from being clever enough (and reliable enough) to watch your kids. There are lots of things that people can do that machines still can’t.

And there is lots of controversy about what we should do next. I should know: For the last three decades, since I started graduate school at the Massachusetts Institute of Technology, studying with the inspiring cognitive scientist Steven Pinker, I have been embroiled in an on-again, off-again debate about the nature of the human mind and the best way to build AI. I have taken the sometimes unpopular position that techniques like deep learning (and the predecessors that were around back then) aren’t enough to capture the richness of the human mind.

That on-again, off-again debate flared up in an unexpectedly big way last week, leading to a huge tweetstorm that brought in a host of luminaries, ranging from Yann LeCun, a founder of deep learning and current Chief AI Scientist at Facebook, to (briefly) Jeff Dean, who runs AI at Google, and Judea Pearl, a Turing Award winner at the University of California, Los Angeles.

When 140 characters no longer seemed like enough, I tried to take a step back, to explain why deep learning might not be enough, and where we perhaps ought to look for another idea that might combine with deep learning to take AI to the next level. The following is a slight adaptation of my personal perspective on what the debate is all about.
