
The Symbiotic Nature of AI and Neuroscience
The Cognitive Intersect of Human and Artificial Intelligence
Posted Mar 09, 2019

Neuroscience and artificial intelligence (AI) are two very different scientific disciplines.
Neuroscience traces back to ancient civilizations, and AI is a decidedly modern
phenomenon. Neuroscience branches from biology, whereas AI branches from
computer science. At a cursory glance, it would seem that a science of living systems would have little in common with one that springs from inanimate machines wholly created by humans. Yet discoveries in one field may result in breakthroughs in the other: the two fields share a significant problem, and significant future opportunities.

The origins of modern neuroscience are rooted in ancient human civilizations. One of the first descriptions of the brain’s structure and of neurosurgery can be traced back to 3000-2500 B.C., largely due to the efforts of the American Egyptologist Edwin Smith. In 1862 Smith purchased an ancient scroll in Luxor, Egypt. In 1930 James H. Breasted translated the scroll at the request of the New York Historical Society, which had received it from Smith’s daughter in 1906. The Edwin Smith Surgical Papyrus is an Egyptian neuroscience handbook circa 1700 B.C. that summarizes an ancient Egyptian treatise of 3000-2500 B.C. describing the brain’s external surfaces, cerebrospinal fluid, intracranial pulsations, the meninges, the cranial sutures, surgical stitching, brain injuries, and more.

In contrast, the roots of artificial intelligence sit squarely in the middle of the twentieth
century. American computer scientist John McCarthy is credited with coining the term “artificial intelligence” in a 1955 written proposal for a summer research project that he
co-authored with Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. The
field of artificial intelligence was subsequently launched at a 1956 conference held at
Dartmouth College.

The history of artificial intelligence is a modern one. In 1969 Marvin Minsky and Seymour Papert published “Perceptrons: An Introduction to Computational Geometry,” a book that analyzed the mathematical limits of single-layer perceptrons and raised doubts about whether networks with more layers could be trained effectively. During the 1970s and 1980s, AI machine learning was in relative dormancy. In 1986 David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams published “Learning representations by back-propagating errors,” which illustrated how deep neural networks consisting of more than two layers could be trained via backpropagation.
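To make the technique concrete, here is a minimal sketch of backpropagation in Python with NumPy; it is an illustration under assumed choices (network size, squared-error loss, learning rate), not the 1986 paper’s exact formulation. A two-layer network learns XOR, a task a single layer cannot solve:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets: not learnable by a single-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)  # hidden -> output
lr = 0.5  # learning rate (arbitrary)

for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient layer by
    # layer using the chain rule (sigmoid derivative is s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates for each layer's weights and biases.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```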

From the 1980s to the early 2000s, the graphics processing unit (GPU) evolved from a gaming-oriented device into a general-purpose computing platform, enabling massively parallel processing. In the 1990s, the internet spawned entirely new industries, such as cloud-based Software-as-a-Service (SaaS). These trends enabled faster, cheaper, and more powerful computing.

In the 2000s, big data sets emerged along with the rise and proliferation of internet-based social media sites. Training deep learning models requires large data sets, and the emergence of big data accelerated machine learning. In 2012, a major milestone in AI deep learning was achieved when Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever trained a deep convolutional neural network with 60 million parameters, 650,000 neurons, and five convolutional layers to classify 1.2 million high-resolution images into 1,000 different classes. The team made AI history by demonstrating backpropagation in a GPU implementation at such an impressive scale of complexity. Since then, there has been a worldwide gold rush to deploy state-of-the-art deep learning techniques across nearly all industries and sectors.
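As a rough sketch of what such a network looks like in code, the PyTorch model below stacks five convolutional layers in front of a 1,000-class output layer and runs one backpropagation step, on a GPU when one is available. It is vastly smaller than the actual 60-million-parameter 2012 network, and every channel size here is an arbitrary assumption:

```python
import torch
from torch import nn

# Toy deep convolutional classifier: five convolutional layers feeding
# a 1,000-class output, far smaller than the 2012 milestone model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 1000),  # scores for 1,000 image classes
)

# One training step on a random stand-in batch of images.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
images = torch.randn(8, 3, 64, 64, device=device)
labels = torch.randint(0, 1000, (8,), device=device)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()  # gradients computed via backpropagation
```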


The future opportunities that neuroscience and AI offer are significant. Global spending on cognitive and AI systems is expected to reach $57.6 billion by 2021, according to IDC estimates. The current AI renaissance, largely due to deep learning, is
a global movement with worldwide investment from corporations, universities, and
governments. The global neuroscience market is projected to reach $30.8 billion by
2020, according to figures from Grand View Research. Venture capitalists, angel
investors, and pharmaceutical companies are making significant investments in
neuroscience startups.

Today’s wellspring of global commercial, financial, and geopolitical investment in artificial intelligence is due, in some part, to the human brain. Deep learning, a subset of AI machine learning, pays homage to the structure of the biological brain. Deep neural networks (DNNs) consist of two or more “neural” processing layers of artificial neurons (nodes). A DNN has an input layer, an output layer, and many layers in between; the more artificial neural layers, the deeper the network.
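To make the layered structure concrete, here is a minimal sketch of a DNN forward pass in Python with NumPy; the layer sizes and the ReLU activation are illustrative assumptions, not details from the article:

```python
import numpy as np

def relu(x):
    # Rectified linear unit, a common hidden-layer activation.
    return np.maximum(0, x)

# Arbitrary sizes: 4 inputs -> two hidden layers of 16 nodes -> 3 outputs.
layer_sizes = [4, 16, 16, 3]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through each layer in turn; appending entries to
    # layer_sizes makes the network "deeper." A real classifier would
    # usually end with a softmax rather than ReLU.
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return x

print(forward(rng.standard_normal(4)))
```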

The human brain and its associated functions are complex. Neuroscientists do not know
many of the exact mechanisms of how the human brain works. For example, scientists
do not know the neurological mechanisms of exactly how general anesthesia works on
the brain, or why we sleep or dream.

Similarly, due to this complexity, computer scientists do not know exactly how deep learning arrives at its conclusions. An artificial neural network may have billions of parameters, or more, arising from the intricate connections between its nodes; the exact decision path is a black box.


This black-box problem can be disconcerting considering AI’s growing impact on the future of the global economy. By 2030, artificial intelligence is projected to generate $13 trillion in additional economic activity globally, according to a 2018 report by the McKinsey Global Institute.

So how can scientists better understand the inner workings of deep learning? One approach is to deploy concepts from human psychology, the scientific study of the mind and behavior.

Gary Marcus is a professor of psychology and neural science at New York University and the founder and former CEO of Geometric Intelligence, which was acquired by Uber. In a 2018 article on Medium, Marcus suggests that the AI community should consider “incorporating more innate structure into AI system[s].” He calls for hybrid models that
would “incorporate not just supervised forms of deep learning, but also other techniques
as well, such as symbol-manipulation, and unsupervised learning.”

Google’s DeepMind is using principles from cognitive psychology to understand deep neural networks in order to address the black-box problem. Cognitive psychology is the scientific study of mental processes such as thinking, problem solving, perception, memory, language, attention, and creativity.

As Demis Hassabis, co-founder and CEO of DeepMind, explained in an article he wrote for the Financial Times in April 2017, “As we discover more about the learning process itself and compare it to the human brain, we could one day attain a better understanding of what makes us unique, including shedding light on such enduring mysteries of the mind as dreaming, creativity and perhaps one day even consciousness.”


Fortunately, innovation in neuroscience may serve as inspiration for future advancements in artificial intelligence, and vice versa. For example, deep learning’s architecture is a hierarchical computing system. But what if the human brain’s neurological process for learning is non-hierarchical?

In 2018 scientist and technologist Jeff Hawkins of Numenta introduced a new framework that goes against decades of commonly held views in neuroscience on how the human neocortex operates: the “Thousand Brains Theory of Intelligence.” Hawkins hypothesizes that every part of the human neocortex learns complete models of objects and concepts by combining sensory input with a grid cell-derived location and then integrating over movements. Because the connections are non-hierarchical, inference can occur as the sensors move.

It would be interesting to apply the Thousand Brains Theory of Intelligence to the development of new types of artificial intelligence. Could a novel genre of machine learning be developed with non-hierarchical connections between artificial processing systems, across modalities and levels?

The underlying mechanisms of AI deep learning and of human cognition are complex systems. Ironically, humans have created an artificial intelligence whose inherent opacity mirrors that of the biological brain. Together, the two fields of science are producing breakthroughs that may significantly shape the future of humanity.

Copyright © 2019 Cami Rosso All rights reserved.

References
Wilkins, Robert H. “Neurosurgical Classic-XVII: Edwin Smith Surgical Papyrus.” Journal of Neurosurgery. March 1964.

Myers, Andrew. “Stanford's John McCarthy, seminal figure of artificial intelligence, dies at 84.” Stanford Report. October 25, 2011.

ACM. “John McCarthy.” A.M. Turing Award Winners. Retrieved 3-9-2019 from
https://amturing.acm.org/award_winners/mccarthy_1118322.cfm

AAAI. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” AI Magazine. December 15, 2006.

van Rijmenam, Mark. “A Short History Of Big Data.” DataFloq. January 2006.

Williams, Hannah. “The history of cloud computing: A timeline of key moments from
the 1960s to now.” Computer World UK. March 13, 2018.

McFadden, Christopher. “A Chronological History of Social Media.” Interesting Engineering. October 16, 2018.

DeepMind. “Interpreting Deep Neural Networks using Cognitive Psychology.” Retrieved 3-9-2019 from https://deepmind.com/blog/cognitive-psychology/

Marcus, Gary. “In defense of skepticism about deep learning.” Medium. January 14,
2018.
