In my last post, “It’s Time to Demystify Machine Learning,” I shared a simple explanation of machine learning: teaching computers to learn by repeatedly correcting a model derived from data until the machine can apply that model quickly and accurately to new data. Now, to look at the other side, I’ll share three things machine learning can’t do well.
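To make that definition concrete, here’s a minimal sketch (my own illustration, not from the earlier post) of what “repeatedly correcting a model” looks like in practice: a tiny training loop that nudges a one-parameter model toward the data, a little less wrong on each pass.

```python
# Learn y ~ w * x by repeatedly correcting the guess for w.
# Toy data that roughly follows y = 2x.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]

w = 0.0                        # initial (wrong) model
for epoch in range(200):       # "repeatedly correcting"
    for x, y in data:
        error = w * x - y      # how wrong the current model is here
        w -= 0.01 * error * x  # correct the model a little

print(f"learned w = {w:.2f}")  # converges near 2.0
```

That’s the whole trick: no understanding, just many small corrections until the errors get small.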
No Failsafe For Pattern Lock-In
Our brains work by collecting sensory data and encoding it in patterns at junctions called synapses. The more often an experience repeats, the stronger those synaptic connections become, which is why practice improves our skills. Our brains also take in live data and use past experience as a filter to quickly and effortlessly assess what’s going on around us. Less frequent or one-time experiences form weaker connections that eventually fall away.
We’ve developed another mechanism, a failsafe, for data that doesn’t match any stored pattern, but it’s slow to engage. In 1991, a tornado formed over Gull Lake in Minnesota. A homeowner shot incredible, real-time video as the storm grew in strength and approached his home. Despite being in real danger, he kept filming until he finally ran into the house, and he never shut off the camera even as the house collapsed around his family. When humans encounter data so unlike our normal patterns, we are slow to react and to revise our mental models.
This is why, in videos from nightclub fires or bombings, we see people who keep dancing, look around or assume it’s part of the show when they’re actually in real danger. A cameraman shooting live video during the Station nightclub fire saw the flames and backed away almost immediately; as a news photographer, his brain was more familiar with dangerous situations than those of the people dancing as the band played on. Eventually the failsafe kicks in once it’s clear we’re in danger. The man filming the tornado and his family all survived, but his slow reaction to abnormal data didn’t do him any favors.
Machine learning models lack this failsafe entirely. When presented with data points far outside the norm, the kind that make a human realize the model should be thrown out and rebuilt, a machine learning model will typically disregard those points as noise rather than change itself. This is one limiting factor for machine learning’s usefulness.
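Here’s a toy sketch of that behavior, assuming a simple streaming model with a hard outlier cutoff (the class name and threshold are my own invention, not any specific product):

```python
# A streaming model that "disregards" abnormal data instead of adapting.
import statistics

class RunningMeanModel:
    """Tracks a running mean; rejects points far outside the norm."""

    def __init__(self, threshold=3.0):
        self.values = []             # accepted observations
        self.threshold = threshold   # z-score cutoff for "abnormal"

    def update(self, x):
        if len(self.values) >= 2:
            mean = statistics.mean(self.values)
            stdev = statistics.stdev(self.values) or 1e-9
            if abs(x - mean) / stdev > self.threshold:
                # A human would think: "my model is wrong, rebuild it."
                # The model instead treats x as noise and ignores it.
                return "outlier ignored"
        self.values.append(x)
        return "model updated"

model = RunningMeanModel()
for reading in [10, 11, 9, 10, 12, 95, 97, 99]:  # regime change at 95
    print(reading, "->", model.update(reading))
# The jump to ~95 is rejected point after point; the model never adapts.
```

The readings around 95 are exactly the signal that the world has changed, yet the model throws them away, one by one, forever.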
A Gap In Trust
Let’s say we order a product from a huge online brand. On the last screen, we type in our credit card information, and most of us get a pit in our stomach. We know it’s safe, but there’s still a nagging feeling as we click submit. Perhaps that feeling turns to hunger, and we head to a restaurant. At the end of the meal, we hand over our credit card, security code on the back and all, and the waiter disappears with it for five minutes. We don’t bat an eye. What’s the difference? Why do we feel so differently about these two situations?
The answer is that we don’t trust computers, at least not yet. In a fast, simple interaction, we form an interpersonal bond with the waiter, and that bond brings a sense of accountability: if an errant charge shows up on our card, we immediately think, “It was the waiter.”
Which situation actually is more secure? The well-established online retailer, obviously. Yet our brains perceive exactly the opposite. Our instinct to trust humans overrides logic, so we act on perceived rather than real security. Even when the computer model finds the right answer, will we trust it?
This concept isn’t new. CBS used one of the first large-scale computers, the UNIVAC, to help predict the outcome of the 1952 presidential election. During the broadcast, anchor Walter Cronkite turned to the computer operators a few times and asked if they had anything to add. The huge machine sat there, presumably doing nothing, and a few on-air jokes were made that UNIVAC was being rude. But it had an answer; the operators held it back because it seemed so improbable. Around 8:30 that evening, UNIVAC had put the odds of Dwight Eisenhower’s victory over Illinois Governor Adlai Stevenson at 100 to 1.
If machine learning told us we had cancer, would we trust it? Or would we consult a human doctor for a second opinion? Our natural distrust of machines is a limiting factor for machine learning’s impact on society.
An Inability To Be Creative
Recently, a video made the rounds on YouTube and Facebook showing a collection of people’s faces talking and moving. It was detailed and high-quality, except that none of those people were real. They were created by a computer model designed to generate photorealistic images of humans.
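The post doesn’t name the model, but systems like this are typically trained adversarially, in the style of a generative adversarial network (GAN). Here’s a hedged, toy sketch of that idea on one-dimensional numbers rather than faces (using PyTorch; the sizes and data are my own choices):

```python
# Two networks in a contest: a generator learns to produce fakes that
# a discriminator can no longer tell apart from real samples.
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "sample".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how real a sample looks.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))             # generated data

    # Train D to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss(D(real), torch.ones(64, 1)) +
              loss(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train G to fool D.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward ~3.0
```

Swap the one-dimensional numbers for images and scale everything up, and you get convincing fake faces. Notice what’s missing: at no point does either network decide what to make. A human chose the data, the objective and the architecture.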
What the computer accomplished was not creativity. No computer spontaneously said, “Hey, I think today I will build a model that generates pictures of people,” and then, “Let me write an article about this experience and post it online.” That is one thing humans still have all to ourselves, at least for the moment: the spark of creativity, the ability to dream up something never seen before, if only to see whether it can be done.
Machine learning is nothing more than teaching computers to learn the way we do so that we can make sense of the vast amounts of data coming from everyday devices in the internet of things. It is not intelligence. Machine learning has significant limitations around pattern lock-in and our willingness to trust it, and it most certainly lacks the spark of creativity. So not only should we temper our excitement about machine learning’s potential, but we also shouldn’t fear it taking over from humans. We should instead think of machine learning as a collaborative tool that humans can use to get business done faster and more efficiently.