Theodoros Evgeniou, a professor of decision science and technology management at INSEAD, thinks that the AI revolution might be 20 years late.
According to research on technological adoption, most significant technological changes take about 50 years to go from the lab to widespread use in society. Yet, as we wrote in the last post, the famous Dartmouth workshop that kickstarted the study of artificial intelligence happened in 1956, 63 years ago. Statistical learning theory, the mathematical theory that underpins modern machine learning and deep learning, was mostly developed in the 1980s, more than 30 years ago. Still, the general public only started to see consumer products using machine learning in the last few years.
More damning, those popular products are not even “real AI.” We explained last time why the term AI is problematic. Proponents talk about narrow AI versus general AI. Narrow AI is what we have today: limited systems that are very good at a specific task. General AI is human-like intelligence that would be good at a wide range of tasks, and it is still science fiction. The distinction is not wholly convincing, but let’s agree to use the term AI to designate the narrow AI systems available today.
Why, then, is the AI revolution so late? I would argue that it was on time, but the hype came late.
The AIs that are making the news today are based on a technology called machine learning, and more specifically on a subtype called statistical learning. Statistical learning needs two things to work well: data and computing resources. Deep neural networks, aka deep learning, need an unusually large amount of both. Other algorithms, like gradient boosted trees, require less of each. Some, like Bayesian networks, can work with very little data but need comparatively large amounts of computing.
In the 1980s, when statistical learning theory was developed and became popular, both data and computing power were scarce. Exploiting the algorithms at scale was challenging. However, some pioneers did apply them. Jim Simons was a mathematician who had done work on statistical pattern recognition and a few related fields. After realizing that his mathematical training could give him an edge in financial markets, he founded Renaissance Technologies in 1982. He was joined in 1993 by Robert Mercer, a computer scientist who was part of IBM’s artificial intelligence research team. It’s an understatement to say that they were successful. Both are now billionaires. The Medallion fund, started in 1988 to manage Renaissance Technologies’ employees’ savings, is one of the most profitable funds of all time. It averaged over 35% annual returns for more than two decades. Renaissance Technologies and similar Wall Street quant firms could do what they did because the financial industry had a lot of data, and the money to buy powerful computers.
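To get a feel for what a 35% average annual return means, here is a quick back-of-the-envelope calculation. The flat rate and the round 20-year horizon are simplifying assumptions for illustration, not the fund’s actual year-by-year record:

```python
# Back-of-the-envelope: compound $1 at a flat 35% per year for 20 years.
# Simplifying assumption: Medallion's real returns varied year to year.
initial = 1.0
rate = 0.35
years = 20

final = initial * (1 + rate) ** years
print(f"$1 grows to about ${final:,.0f} after {years} years")
# -> $1 grows to about $404 after 20 years
```

Compounding at that rate turns every dollar into roughly four hundred, which is why Medallion’s track record is considered extraordinary.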
Another example is Google. Larry Page and Sergey Brin started the company in 1998, capitalizing on the PageRank search algorithm they had developed during their Ph.D. research at Stanford. The basic PageRank algorithm can be considered a form of unsupervised machine learning. Google’s teams have since improved upon it using more explicit supervised learning methods. Like the methods used by Simons and Mercer, Google’s search engine technology would nowadays be considered AI.
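For the curious, here is a minimal sketch of the basic PageRank idea, computed by power iteration over a toy four-page link graph. The graph, the damping factor, and the iteration count are illustrative choices, not Google’s production settings:

```python
import numpy as np

# Toy web graph: links[i][j] = 1 means page i links to page j.
# This four-page graph is purely illustrative.
links = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
])

# Column-stochastic transition matrix: each page splits its "vote"
# evenly among the pages it links to.
out_degree = links.sum(axis=1, keepdims=True)
transition = (links / out_degree).T

# Power iteration with the classic 0.85 damping factor: a random
# surfer follows links 85% of the time and teleports 15% of the time.
n = len(links)
damping = 0.85
rank = np.full(n, 1 / n)
for _ in range(50):
    rank = (1 - damping) / n + damping * transition @ rank

print(np.round(rank, 3))  # importance score per page, sums to 1
```

No labels are needed anywhere, which is why the basic algorithm counts as unsupervised: the ranking emerges purely from the link structure.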
If the financial industry could use AI in the 1980s and technology start-ups could in the 1990s, why are most other industries so behind the curve? The answer is simple: for 20 years, the cost-benefit analysis didn’t add up for most businesses. The firms that invested in AI tech in the 1980s and 1990s did so because their core business model depended on it, which made the investments in technology and human capital not just palatable but necessary. Before cloud computing became widespread, few companies had easy access to enough data and computing power. For most firms in most industries, the barriers to entry were too high. It took teams of Ph.D.s and programmers months, even years, to develop a system with decent performance. Most of the software, and some of the hardware, had to be custom-made for the task. Understandably, very few companies had the stomach to invest years and millions of dollars in unproven systems with no guarantee of positive business outcomes.
Things have changed, however. Nowadays, kids build working AI programs in a few weekends for their school projects. All you need are a few open-source software packages and a decent laptop. A few dollars per hour can buy you insanely fast Tensor Processing Units (TPUs) in the cloud. The cost and difficulty of the technology have plummeted. Now that the barriers to entry have all but disappeared, the cost-benefit calculation becomes very attractive, and entrepreneurs suddenly see opportunities everywhere. AI tech is becoming ubiquitous across industries.
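As a concrete illustration, here is roughly what such a weekend project can look like today: a handful of lines using scikit-learn, a popular open-source package, training a classifier on a toy dataset bundled with the library. The choice of dataset and model is arbitrary; any similar combination would make the point:

```python
# A complete, working "AI program" in a dozen lines, using only free
# open-source packages; trains in well under a minute on a laptop.
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy dataset of 8x8 handwritten digit images, bundled with scikit-learn.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Gradient boosted trees: one of the less data- and compute-hungry
# algorithm families mentioned earlier.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.2%}")
```

What once took a specialized team months of custom engineering is now a free download and an afternoon of tinkering.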
The AI revolution might be late, but there are no excuses anymore.
What do you think? Is the AI revolution late?