Credit: Google News
Machine learning is one of those buzzwords that gets thrown around as a synonym for AI (artificial intelligence). But that is not quite accurate: machine learning is a subset of AI.
The field has also been around for quite some time, with roots going back to the 1950s. It was during this period that IBM’s Arthur L. Samuel created one of the first machine learning applications, a program that played checkers.
So how was this different from any other program? Well, according to Venkat Venkataramani, who is the co-founder and CEO of Rockset, machine learning is “the craft of having computers make decisions without providing explicit instructions, thereby allowing the computers to pattern match complex situations and predict what will happen.”
To pull this off, there need to be large amounts of quality data, sophisticated algorithms and high-powered computers. When Samuel built his program, all three were severely limited, so it was not until the 1990s that machine learning became commercially viable.
“Current trends in machine learning are mainly driven by the structured data collected by enterprises over decades of transactions in various ERP systems,” said Kalyan Kumar B, who is the Corporate Vice President and Global CTO of HCL Technologies. “In addition, the plethora of unstructured data generated by social media is also a contributing factor to new trends. Major machine learning algorithms classify the data, predict variability and, if required, sequence the subsequent action. For example, an online retail app that can classify a user based on their profile data and purchase history allows the retailer to predict the probability of a purchase based on the user’s search history and enables them to target discounts and product recommendations.”
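The retail scenario Kumar describes boils down to scoring the probability of a purchase from a user's profile and history. Here is a minimal, purely illustrative sketch of that idea: the feature names and weights are hypothetical (a real system would learn them from data), but the logistic-style scoring is the standard shape of such a predictor.

```python
import math

# Hypothetical shopper features and hand-set weights, for illustration only.
# A real retailer would learn these weights from historical purchase data.
def purchase_probability(searches_last_week, past_purchases, viewed_discount):
    # Weighted sum of features passed through a sigmoid -> probability in (0, 1).
    z = 0.3 * searches_last_week + 0.8 * past_purchases + 1.2 * viewed_discount - 2.0
    return 1.0 / (1.0 + math.exp(-z))

# A frequent buyer who just viewed a discount scores higher than a new visitor.
p_loyal = purchase_probability(searches_last_week=5, past_purchases=3, viewed_discount=1)
p_new = purchase_probability(searches_last_week=0, past_purchases=0, viewed_discount=0)
```

Scores like these are what let the app rank users for targeted discounts and recommendations.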
Now you’ll also hear another buzzword, which often gets confused with machine learning – that is, deep learning. Keep in mind that this is a subset of machine learning and involves sophisticated systems called neural networks that mimic the operation of the brain. Like machine learning, deep learning has been around since the 1950s. Yet it was during the 1980s and 1990s that the field gained traction, primarily from innovative theories of academics like Geoffrey Hinton, Yoshua Bengio and Yann LeCun. Eventually, mega tech operators like Google, Microsoft and Facebook would invest heavily in this technology. The result has been a revolution in AI. For example, if you use something like Google Translate, then you have seen the power of this technology.
But machine learning – supercharged by deep learning neural networks — is also making strides in the enterprise. Here are just a few examples:
- Mist has built a virtual assistant, called Marvis, that is based on machine learning algorithms that mine insights from Wireless LANs. A network administrator can just ask it questions like “How are the wi-fi access points in the Baker-Berry Library performing?” and Marvis will provide answers based on the data. More importantly, the system gets smarter and smarter over time.
- Barracuda Networks is a top player in the cybersecurity market and machine learning is a critical part of the company’s technology. “We’ve found that this technology is exponentially better at stopping personalized social engineering attacks,” said Asaf Cidon, who is the VP of Email Security for Barracuda Networks. “The biggest advantage of this technology is that it effectively allows us to create a ‘custom’ rule set that is unique to each customer’s environment. In other words, we can use the historical communication patterns of each organization to create a statistical model of what a normal email looks like in that organization. For example, if the CFO of the company always sends emails from certain email addresses, at certain times of the day, and logs in using certain IPs and communicates with certain people, the machine learning will absorb this data. We can also learn and identify all of the links that would be ‘typical’ to appear in an organization’s email system. We then use that knowledge and apply different machine learning classifiers that compare employee behavior with what a normal email would be like in the organization.”
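The per-organization baseline Cidon describes can be sketched in a few lines: learn which sending patterns are typical for each sender from historical mail, then flag messages that deviate. The field names and the single IP-prefix check below are illustrative simplifications, not Barracuda's actual classifiers.

```python
from collections import defaultdict

# Build a baseline of "normal" behavior per sender from historical email.
# Here we track only the /24 network prefix each sender mails from;
# a real system would model many more signals (times, recipients, links).
def build_baseline(history):
    baseline = defaultdict(set)
    for msg in history:
        baseline[msg["sender"]].add(msg["ip"].rsplit(".", 1)[0])
    return baseline

def is_suspicious(msg, baseline):
    known_prefixes = baseline.get(msg["sender"])
    if known_prefixes is None:
        return False  # unknown sender: out of scope for this simple check
    return msg["ip"].rsplit(".", 1)[0] not in known_prefixes

history = [
    {"sender": "cfo@example.com", "ip": "10.0.1.5"},
    {"sender": "cfo@example.com", "ip": "10.0.1.9"},
]
baseline = build_baseline(history)

# The "CFO" suddenly mailing from an unfamiliar network looks anomalous.
flag = is_suspicious({"sender": "cfo@example.com", "ip": "203.0.113.7"}, baseline)
```

The point of the sketch is the custom rule set: the baseline is derived from each organization's own traffic, so the same message can be normal in one company and suspicious in another.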
Of course, machine learning has drawbacks – and the technology is far from achieving true AI. It cannot understand causation or engage in conceptual thinking. There are also potential risks of bias and of overfitting the models (which means the algorithms mistake mere noise in the training data for real patterns).
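Overfitting is easy to demonstrate with synthetic data. In this sketch, a degree-9 polynomial threads through every noisy training point (near-zero training error) but generalizes worse to fresh data from the same process than a simple linear fit; the data and model choices here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.2, size=x.size)  # a linear trend plus noise

# A degree-9 polynomial can interpolate all 10 points exactly,
# while a degree-1 fit only captures the underlying trend.
overfit = np.polyfit(x, y, 9)
honest = np.polyfit(x, y, 1)

train_err_overfit = np.mean((np.polyval(overfit, x) - y) ** 2)
train_err_honest = np.mean((np.polyval(honest, x) - y) ** 2)

# New data drawn from the same process exposes the overfit model:
# between the training points it has learned the noise, not the trend.
x_new = np.linspace(0.05, 0.95, 10)
y_new = 2 * x_new + rng.normal(0, 0.2, size=x_new.size)
test_err_overfit = np.mean((np.polyval(overfit, x_new) - y_new) ** 2)
test_err_honest = np.mean((np.polyval(honest, x_new) - y_new) ** 2)
```

The telltale signature is exactly this gap: the more flexible model wins on training error and loses on held-out error.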
Even something like handling time-series data at scale can be extremely difficult. “An example is the customer journey,” said Anjul Bhambhri, who is the Vice President of Platform Engineering at Adobe. “This kind of dataset involves behavioral data that may have trillions of customer interactions. How important is each of the touch points in the purchase decision? To answer this, you need to find a way to determine a customer’s intent, which is complex and ambiguous. But it is certainly something we are working on.”
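To make the touch-point question concrete, here is a deliberately tiny sketch of one common attribution heuristic: "linear" attribution, which splits credit for a purchase evenly across every touch point that preceded it. The channel names and journeys are hypothetical, and this is a baseline heuristic, not Adobe's approach, which (as Bhambhri notes) must infer intent from trillions of interactions.

```python
from collections import Counter

# Linear attribution: each converting journey distributes one unit of
# credit evenly over its touch points; non-converting journeys earn none.
def linear_attribution(journeys):
    credit = Counter()
    for touchpoints, purchased in journeys:
        if purchased and touchpoints:
            share = 1.0 / len(touchpoints)
            for channel in touchpoints:
                credit[channel] += share
    return credit

# Hypothetical journeys: (ordered touch points, did the customer buy?)
journeys = [
    (["search_ad", "email", "retail_app"], True),
    (["email"], False),
    (["social", "retail_app"], True),
]
credit = linear_attribution(journeys)
```

Even this toy version shows why the problem is hard: equal splitting ignores ordering, timing and intent, which is precisely what the harder modeling work tries to recover.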
Despite all this, machine learning remains an effective way to turn data into valuable insights. And progress is likely to continue at a rapid clip.
“Machine learning is important because its predictive power will disrupt numerous industries,” said Sheldon Fernandez, who is the CEO of DarwinAI. “We are already seeing this in the realm of computer vision, autonomous vehicles and natural language processing. Moreover, the implications of these disruptions may have far-reaching impacts to our quality of life, such as with advancements in medicine, health care and pharmaceuticals.”
Tom serves on the advisory boards of tech startups and can be reached at his site.