Credit: Data Science Central
Quora contribution written by Chomba Bupe.
I am actually not aware of any machine learning (ML) problem that is considered to have been solved, recently or in the past. That tells you a lot about how hard things really are in ML. Of course, if you read media outlets, it may seem like researchers are sweeping the floor clean with deep learning (DL), solving ML problems one after another and leaving no stone unturned. In reality, they are not: researchers attack relatively simpler problems in the hope of collectively solving the bigger ones. That is just how research works.
DeepMind's stated aim is to "solve intelligence and make the world a better place," yet they are busy building game-playing algorithms.
What does game playing have to do with solving intelligence? The idea is to isolate the factors that enable learning in a complex environment, and games offer properties that make them ideal for artificial intelligence (AI) research:
- Games are complex enough, yet have simple sets of rules that must be learnt for the player to be rewarded, just as in real life.
- Games can also be played much faster than real time, so in a single day a bot running on powerful hardware can gain years' worth of gaming experience.
Anyone who is objectively searching for answers to AI/ML problems knows that we are still just tackling low-hanging fruit. That makes a lot of sense: AI is vaguely defined, and if you wish to make any progress at all, you should learn to simplify your objectives and goals.
For example, I remember when I set out to build a machine that "sees": I was very ambitious, up until I realized how hard vision really is. I had to scale the scope of the project down to the level where I could execute on the now simplified and much clearer goal. I can't say I solved vision; it is just a small niche area I tackled, because that is better than not solving anything at all.
Similarly, that is what is going on in ML: researchers are tackling the low-hanging fruit first. For example, object detection is mostly about recovering 2D bounding boxes, with limits on how much objects may vary visually. In reality the problem is harder than that: object poses are not 2D but 3D, and it is not trivial to extend a 2D bounding-box detector into a precise 3D object detection algorithm.
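To see how simplified the 2D formulation is: detections in this setting are typically scored by intersection-over-union (IoU) between axis-aligned boxes, a single scalar that ignores pose entirely. A minimal sketch (the function name and `(x1, y1, x2, y2)` box format are illustrative conventions, not from the original text):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned 2D boxes (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap: 1/7
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes: 1.0
```

Nothing in this score says anything about an object's 3D pose, which is part of why extending such detectors to 3D is non-trivial.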
Simple datasets like MNIST are considered solved, but MNIST is a toy problem compared with the real-world variation in digit appearance. ImageNet is also considered solved, yet it is still not as challenging as the everyday vision problems human visual systems handle. So even when you read about algorithms beating humans on ImageNet, that is just a small part of the full picture.
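A quick illustration of why digit benchmarks count as toy problems: scikit-learn's small 8x8 digits dataset (used here as a stand-in for MNIST, which is a larger download) is handled well by even a plain linear model with no deep learning at all.

```python
# Sketch only: the digits dataset is a small MNIST-like benchmark bundled
# with scikit-learn; a simple linear classifier already scores very high.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)  # plain linear model
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

That a linear baseline gets this close to ceiling is exactly why "solving" such datasets says little about real-world vision.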