In building the world’s first airplane at the dawn of the 20th century, the Wright Brothers took inspiration from the flight of birds. They observed and reverse-engineered the workings of wings in nature, which in turn helped them make important discoveries about aerodynamics and propulsion.
Similarly, to build machines that think, why not seek inspiration from the three pounds of matter between our ears? Geoffrey Hinton, a pioneer of artificial intelligence and winner of the Turing Award, seemed to agree: “I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain.”
So what’s next for artificial intelligence (AI)? Could the next wave of AI be inspired by rapid advances in biology? Can the tools for understanding brain circuits at the molecular level lead us to a higher, systems-level understanding of how the human mind works?
The answer is likely yes, and the flow of ideas between learning about biological systems and developing artificial ones has actually been going on for decades.
The origins of machine learning: human brain science
First of all, what does biology have to do with machine learning? It may surprise you to learn that much of the progress in machine learning stems from insights from psychology and neuroscience. Reinforcement learning (RL), one of the three paradigms of machine learning (the other two being supervised and unsupervised learning), originates from animal and cognitive neuroscience studies going back to the 1940s. RL is central to some of today’s most advanced AI systems, such as AlphaGo, the widely publicized AI agent developed by leading AI company Google DeepMind. AlphaGo defeated the world’s top-ranked players at Go, a Chinese board game with more possible board configurations than there are atoms in the universe.
Despite AlphaGo’s superhuman performance at Go, its human opponents still possess far more general intelligence. They can drive a car, speak languages, play soccer, and perform a myriad of other tasks in any kind of environment. Current AI systems are largely incapable of taking knowledge learned in one game, like poker, and transferring it to another, like Cluedo. These systems are focused on a single, narrow environment and require vast amounts of data and training time. And still, they make simple errors like mistaking a chihuahua for a muffin!
What children and AI systems have in common
Like a child, a reinforcement learning system learns by interacting with its environment, performing actions that seek to maximize rewards and avoid punishments. Driven by curiosity, children are active learners who simultaneously explore their surroundings and predict the outcomes of their actions, allowing them to build mental models and think causally. If, for example, they decide to push the red car, spill the flower vase, or crawl in the other direction, they adjust their behavior based on the outcomes of those actions.
Children navigate and interact with many different environments, contexts, and objects, often in unusual ways. The inspiration runs in both directions: just as child brain development could inspire the design of AI systems, an RL agent’s learning mechanism parallels the brain’s learning mechanism driven by the release of dopamine. This neurotransmitter, key to the central nervous system, trains the prefrontal cortex in response to experiences and thus shapes stimulus-response associations as well as outcome predictions.
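The reward-and-punishment loop described above can be sketched in a few lines of Python. What follows is a toy Q-learning agent, a standard RL algorithm chosen here for illustration only (it is not the specific mechanism of AlphaGo or of the brain): the agent wanders a made-up five-state corridor, receives a reward only at the rightmost state, and gradually learns that moving right pays off.

```python
import random

def train_q(n_states=5, episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.5):
    """Toy Q-learning in a 1-D corridor; the reward sits at the rightmost state."""
    q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action]; 0 = left, 1 = right
    random.seed(0)
    for _ in range(episodes):
        s = random.randrange(n_states - 1)       # start anywhere except the goal
        for _ in range(200):                     # cap episode length
            # epsilon-greedy: explore sometimes, otherwise exploit current estimates
            a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
            # nudge the value estimate toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return q

q = train_q()
# greedy policy over the non-goal states: 1 means "move right, toward the reward"
policy = [q[s].index(max(q[s])) for s in range(4)]
print(policy)  # → [1, 1, 1, 1]
```

After training, the agent’s greedy policy moves right from every state: the reward signal has shaped its action preferences, loosely analogous to how dopamine shapes stimulus-response associations.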
Synthetic biology and AI
Biology is one of the most promising beneficiaries of artificial intelligence. From investigating mind-boggling combinations of genetic mutations that contribute to obesity to examining the byzantine pathways that lead some cells to go haywire and produce cancer, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.
In the field of synthetic biology, where engineers seek to “rewire” living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. I recently highlighted five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.
Artificial general intelligence: The holy grail of AI
Artificial general intelligence (AGI) describes a system capable of mimicking human abilities such as planning, reasoning, and emotion. Billions of dollars have been invested in this exciting and potentially lucrative area, leading some to make claims like “data is the new oil.”
Among the many companies working on artificial general intelligence are Google’s DeepMind, the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta. Organizations such as the Machine Intelligence Research Institute and OpenAI also state AGI as their main goal. One of the goals of the international Human Brain Project is to simulate the human brain.
Despite a growing body of talent, tools, and high-quality data, we still have a long way to go before achieving AGI.
AI in our daily lives
Today, AI techniques such as machine learning (ML) are ubiquitous in our society, ranging from healthcare and manufacturing to transportation and warfare, but they qualify as “narrow AI.” They process and learn from large amounts of data to identify insightful, informative patterns, but only for a single task, such as predicting airline ticket prices, distinguishing dogs from cats in images, or generating your movie recommendations on Netflix.
In biology, AI is also changing your health care. It is generating more and better drug candidates (Insitro), sequencing your genome (Veritas Genetics), and detecting your cancer earlier and earlier (Freenome).
Where do we go from here?
As humans, we are able to quickly acquire knowledge in one context and generalize it to novel situations and tasks. Giving machines that ability would, for example, let us develop more capable self-driving systems, which must perform many tasks on the road concurrently. In AI research, this concept is known as transfer learning. It helps an AI system learn from just a few examples, instead of the millions that traditional systems usually need, with the aim of building systems that learn from first principles, abstract the acquired knowledge, and generalize it to new tasks and contexts.
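To make the idea concrete, here is a deliberately tiny sketch. The tasks and numbers are invented for illustration, and real transfer learning reuses the learned features of deep networks rather than two scalar weights, but the principle is the same: a perceptron trained to answer “is x greater than 5?” is reused, weights and all, on the related task “is x greater than 6?”, where it is already mostly correct before any fine-tuning.

```python
def perceptron_fit(data, w=0.0, b=0.0, max_epochs=50000):
    """Train a 1-D perceptron until it makes no mistakes on `data`."""
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in data:                  # labels y are +1 or -1
            if y * (w * x + b) <= 0:       # misclassified: nudge the boundary
                w += y * x
                b += y
                mistakes += 1
        if mistakes == 0:
            break
    return w, b

def accuracy(data, w, b):
    return sum(y * (w * x + b) > 0 for x, y in data) / len(data)

task_a = [(x, 1 if x > 5 else -1) for x in range(11)]   # "is x above 5?"
task_b = [(x, 1 if x > 6 else -1) for x in range(11)]   # a related, new task

w, b = perceptron_fit(task_a)               # learn task A from scratch
zero_shot = accuracy(task_b, w, b)          # reuse A's knowledge on B, untouched
w2, b2 = perceptron_fit(task_b, w, b)       # fine-tune: start from A's weights
print(zero_shot, accuracy(task_b, w2, b2))  # → 0.909..., 1.0
```

The model transferred from task A gets 10 of the 11 task-B examples right with no new training (only x = 6 changes label), and a short warm-started fine-tune fixes the rest, rather than having to relearn the boundary from nothing.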
To produce more advanced AI, we need a better understanding of the brain’s inner workings that allow us to represent the world around us. Understanding biological intelligence and creating artificial intelligence are synergistic missions, and seeking inspiration from our own brains might help us bridge the gap between them.
Acknowledgment: Thank you to Louis N. Andre for additional research and reporting in this article. I’m the founder of SynBioBeta, and some of the companies that I write about — including some named in this article — are sponsors of the SynBioBeta conference (click here for a full list of sponsors).