In the ever-evolving technological landscape, business needs and outcomes are no
longer static. Organizations across industries are adopting artificial
intelligence (AI) systems to solve complex business problems, design
intelligent and self-sustaining solutions and, essentially, stay competitive at
all times. To this end, continued efforts are being made to reinvent AI systems
so that more can be achieved with less.
Adaptive AI is a key step in that direction. The reason it could outpace traditional
machine learning (ML) models in the near future is its potential to empower
businesses to achieve better outcomes while investing less time, effort and resources.
Why is the traditional machine learning model not up to the task anymore?
A traditional ML model has two pipelines – training and prediction. The training
pipeline collects and ingests data through the various stages of data cleaning,
grouping, transformation, etc. The prediction pipeline analyzes the data to
yield accurate insights and predictions for effective decision making.
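As a minimal sketch of this two-pipeline design (using scikit-learn and a synthetic dataset as assumptions; the article names no specific tooling):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# --- Training pipeline: ingest, clean/transform, fit a batch model ---
X_train, y_train = make_classification(n_samples=1000, n_features=20,
                                       random_state=0)
train_pipeline = Pipeline([
    ("scale", StandardScaler()),       # cleaning / transformation stage
    ("model", LogisticRegression()),   # batch model fitting
])
train_pipeline.fit(X_train, y_train)

# --- Prediction pipeline: score new data with the now-frozen model ---
X_new, _ = make_classification(n_samples=5, n_features=20, random_state=1)
predictions = train_pipeline.predict(X_new)
print(predictions)                     # insights for decision making
```

The important detail is the hand-off: once the training pipeline finishes, the model is frozen, and the prediction pipeline keeps scoring new data against it until the next full retraining run.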
But having two pipelines to cover the miles between ingestion and insight comes
with its share of downsides. Beyond the obvious, surface-level challenges, such as
setting up elaborate infrastructure for the two pipelines and bearing the
associated cost overheads, there is the fact that the turnaround time is
almost always long.
Let’s just say we have an organization with ideal conditions, i.e. one
that reserves a generous budget for AI and is willing to invest
enough time to let the two pipelines wrestle and wrangle all the data. Does that solve problems across the board?
Largely not, because the very nature of traditional AI poses a major challenge
that any organization has to deal with on an ongoing basis.
In traditional AI systems, the learning methodologies deployed in production become less effective when:
- the system’s operational environment changes; or
- the underlying input to the system is altered; or
- the outcome desired by the organization changes.
Any of these conditions or events can significantly affect the functional accuracy
and efficiency of an AI system.
So, where did the traditional ML model fall behind?
Consider the following example. You run a news
website which has its revenue tied to the number of users that click on the
news items posted throughout the day. Now, a user’s browser history and cookies
help you in user profiling and thus serving them interest-focused news content.
But then a large-scale event concerning national security takes place. Let’s
say tensions over the border with your neighboring country escalate and there’s
a growing fear of war breaking out. Against this backdrop, the government
announces that it will hold a press conference in the near future. As
expected, everyone is interested in reading about national affairs, including
those who restrict their dose of news to sports or finance.
Herein lies the challenge for you. Even if you
had batch-trained your model every single day, it would still be sharing items
based on the content consumed the day before since the model is not quick
enough to adapt to the dramatic change in user preferences on the same day. When,
on the following day, the data pertaining to the heightened interest in national
affairs is fed into the next training cycle, users start to receive the related news recommendations. However,
since the data is from the day before, the users may no longer be as interested
in national affairs as they were on the day of the press conference.
While the model is doing its job of refreshing the type of content delivered on a daily
basis, what you would have wanted it to do was take the latest developments
in the country and update the content type by the minute or second.
This holds true for businesses of all stripes. In the highly competitive and
unpredictable business environment of today, your business can’t afford to wait
an entire day for your AI to adapt and deliver.
How Adaptive AI is Different
The Adaptive Learning method employs a single pipeline. With this method, you can
use a continuously enriched learning approach that keeps the system updated and
helps it achieve high performance levels.
The Adaptive Learning process monitors and learns the new changes made
to the input and output values and their associated characteristics. In
addition, it learns from the events that may alter the market behavior in real time
and, hence, maintains its accuracy at all times. Adaptive AI accepts the
feedback received from the operating environment and acts on it to make
adjustments accordingly.
In our work with communications service providers (CSPs) globally, we’ve evaluated
the results generated through Adaptive Learning in a qualitative and
quantitative manner. The results obtained are consistently accurate, have
excellent coverage, and lead to a significant impact on the performance of the
overall system.
The Adaptive Learning process eliminates the hassle of creating a separate training pipeline for ML/AI
systems. The system is flexibly designed to learn from the new observations
while working on older predictions, keeping the processes updated in real time.
This flexibility removes the risk of learning systems becoming obsolete or
working on outdated training samples, which has made the conventional methods
unsustainable. Adaptive Learning tries to solve these problems while building ML
models at scale. Because the model is trained via a streaming approach, it is efficient for
domains with highly sparse datasets where noise handling is important. The pipeline is designed to handle billions
of features across vast datasets while each record can have hundreds of
features, leading to sparse data records. This system works on a single
pipeline as opposed to the conventional ML pipelines that are divided into two
parts, as discussed earlier. This provides quick solutions to proof-of-concepts
and easy deployment in production. The initial performance of the Adaptive
Learning system is comparable to batch-model systems but goes on to surpass
them by acting and learning from the feedback received by the system, making it
far more robust and sustainable in the long term.
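One common way to handle very wide, sparse records in a single streaming pipeline is the hashing trick, which maps an unbounded feature space into a fixed number of columns. A minimal sketch using scikit-learn's `FeatureHasher` (the feature names, sizes and labels below are purely illustrative assumptions):

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

# Each record is a sparse dict of feature name -> value; most features
# are absent from most records.
records = [
    {"device=ios": 1, "plan=prepaid": 1, "region=north": 1},
    {"device=android": 1, "plan=postpaid": 1, "dropped_calls": 3},
    {"device=ios": 1, "plan=postpaid": 1, "region=south": 1},
    {"device=android": 1, "plan=prepaid": 1, "dropped_calls": 5},
]
labels = [0, 1, 0, 1]

# The hashing trick maps arbitrarily many feature names into a fixed
# width, so a huge feature space never needs an explicit vocabulary.
hasher = FeatureHasher(n_features=2**10, input_type="dict")
X = hasher.transform(records)      # SciPy sparse matrix, shape (4, 1024)

model = SGDClassifier(random_state=0)
model.partial_fit(X, labels, classes=[0, 1])  # streaming-friendly update
print(X.shape)
```

Because the hashed matrix has a fixed width and stays sparse, memory stays bounded no matter how many distinct feature names appear over the life of the stream.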
Some Best Practices
As CSPs are major beneficiaries of such an approach, here are a few things they should
keep in mind while running an Adaptive Learning pipeline:
- Data processing steps should be kept
similar for new data sources so that all the observations that the AI system learns from remain consistent.
- The methodology by which the AI system transforms and stores individual
observations should remain the same over the entire duration of the pipeline.
- Feedback to the Adaptive Learning
method should be readily available so that the system remains current.
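The consistency points above can be sketched in one simple pattern: fit the transformation logic once, persist it, and reuse the same stored transform for every new data source rather than re-deriving it per source (the use of scikit-learn's `StandardScaler` and pickle here is an illustrative assumption):

```python
import pickle
import numpy as np
from sklearn.preprocessing import StandardScaler

# Fit the shared transformation once, on the initial observations.
scaler = StandardScaler().fit(np.array([[1.0], [2.0], [3.0]]))
blob = pickle.dumps(scaler)   # persist it alongside the pipeline

# Later, every new data source reuses the *same* stored transform, so
# observations stay comparable for the entire life of the pipeline.
restored = pickle.loads(blob)
new_batch = np.array([[2.0], [4.0]])
print(restored.transform(new_batch))
```

If each source instead fit its own scaler, identical raw values would map to different transformed values, and the observations the system learns from would silently drift apart.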
The Adaptive AI method can replace traditional
supervised (classification/regression) ML methods in virtually all use cases,
with streaming-data scenarios showing the greatest improvement. The
characteristics of Adaptive AI make it highly reliable in the dynamic software
environments of CSPs where inputs/outputs change with every system upgrade. It can play a key role in their digital
transformation across network operations, customer care, marketing, security
and IoT – and help transform their customer experience.
About the Authors
Vishal Nigam is Senior Manager of Analytics (AI and ML) at Guavus, an industry-recognized expert in CSP AI, computational learning, and analytics solutions. Vishal leads Guavus’ Research and Development team in Gurgaon, India, where he and his team are responsible for transforming innovative concepts and customer-stated business needs into a precise technical problem and designing powerful customer solutions using ML and AI. Prior to Guavus, he was at Goldman Sachs and Ola Cabs.
Mudit Jain is an Analytics Manager at Guavus, where he is responsible for developing AI-based solutions for the CSP domain. He has more than 7 years of experience in machine learning and artificial intelligence. Previously, he worked as a machine learning analyst at Capital One and Opera Solutions.