We live in a world where half-truths, metaphors and fake news pollute the airwaves. AI is no exception. In recent times, this technology has moved out of the realm of science fiction and into science fact. But how can you work out what’s real and what’s not in the world of AI?
You may look to those working at the cutting edge of technology. According to LinkedIn, there are more than 11,000 futurists and over 350,000 thought leaders — but how qualified are these individuals when it comes to making sound predictions? Are they really leading our thoughts in the right direction?
AI is increasingly prevalent, and humans want to harness its potential to reap the full rewards of this burgeoning phenomenon.
But, when it comes to explaining AI, many tech experts hit a brick wall. They often rely on simple metaphors and assumptions, which fail to convey the complexity — and power — of AI.
Metaphors are one type of model we use to ‘make sense’ of the world. When properly deployed, a metaphor can help us make a point and gain a better understanding of an unknown phenomenon.
But, when metaphors are used incorrectly, we are simply “jumping to conclusions because things look ‘the same’” — to quote leading mathematical researcher Professor Jack Cowan.
For AI, this is an important point. Even when metaphors are used properly, their usefulness is limited when dealing with such complex subject matter. To see why, let’s look at five ways we have used metaphors over the past 2,000 years to try to explain human intelligence:
- 3rd century BCE: The invention of hydraulic engineering led to the popularity of a hydraulic model of human intelligence. The hydraulic metaphor persisted for more than 1,600 years.
- 1600s: the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain.
- 1700s: discoveries in the fields of electricity and chemistry led to new theories of human intelligence — again, largely metaphorical in nature.
- Mid-1800s: inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
- 1940s: predictably, just a few years after the dawn of computer technology, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software.
When it comes to explaining human intelligence, metaphors fail. They oversimplify, often breaking a large, complex problem into meaningless chunks that lead the reader to the wrong conclusions.
AI is a complex technology, one which has taken decades to develop and covers a myriad of possibilities. Simple metaphors do not cut it; they allow false assumptions to prevail. This simplified approach leads to the following myths:
- Our brain is like a computer
- AI will take over humanity
- Technology will make things easier
- Data quality is not so important
- Data Science is for Data Scientists
I hate to break this to you, but each one of these statements is a lie.
This is not because tech experts want to mislead you. It’s just that it is very difficult to explain the world of AI to a non-technical audience.
In attempting to do so, many of the metaphors we regularly use to describe AI are unsuitable at best or, at worst, completely wrong. This leads to misunderstandings.
This has a snowball effect: in our understanding, the machines of science fiction blur with the machines now used in the real world, which are governed by science fact. As a result, we find it difficult to grasp what machines are and are not capable of.
We are faced with a lot of fake news in the age of AI. We’ll be looking at these misconceptions in more detail over the next few posts in this series.