There seem to be countless articles projecting the incredible economic and social growth that will be driven by AI. For example, PwC predicts that AI technologies and applications will increase global GDP by up to 14% between now and 2030 (see Figure 1).
Figure 1: Source: “Notes from the AI frontier: Modeling the impact of AI on the world economy”
Yet so many organizations remain ill-prepared to take advantage of this once-in-a-generation AI opportunity. I believe it's because these organizations are approaching AI with the wrong perspective – a rigid construct that assumes AI programming will be similar to software programming. Let's talk about the steps organizations must take to prepare themselves to exploit the unique financial, economic, customer, societal and environmental aspects of AI.
An effective detective (yes, I read lots of “Hardy Boys” books growing up) understands how to dig and explore, ask lots of provocative and challenging questions, create and test a hypothesis, fail and learn from that testing, and build upon those learnings to create a “new and improved” hypothesis – repeating until the hypothesis is “good enough” to prove the case.
The Detective model is a good analogy for an effective AI Programmer – they have a curiosity to continuously explore, ask provocative and challenging questions, and learn in order to build their case or hypothesis. Unfortunately, that’s not how a software programmer approaches their problem.
A software developer defines the criteria for success, but like an effective detective, an AI Programmer or data scientist discovers the criteria for success.
When writing the blog “Why Is Data Science Different than Software Development?” to highlight the differences between Data Science (AI Programmer) and software development I stated:
“The methodologies and processes that support successful software development do not work for data science (AI) projects according to one simple observation: software development knows, with 100% assurance, the expected outcomes, while data science (AI Programming) – through data exploration and hypothesis testing, failing and learning – discovers those outcomes.”
AI Programming differs from traditional Software Programming in the following areas:
- Needs lots of data…preferably Big Data. The more diverse the data sets, the more variables they contain, and the more granular they are, the better.
- Close and continuous collaboration with business stakeholders and subject matter experts to define the criteria that constitutes business success (Hypothesis Development Canvas).
- Effective Data Exploration and a natural Discovery Curiosity to uncover those variables and metrics that might be better predictors of performance.
- Mastering the Art of Failure and Learning through Failure; if you don’t have enough “might” moments, you’ll never have any breakthrough moments.
- Understanding when “good enough” is “good enough”, especially with respect to the costs of False Positives and False Negatives; the costs of the model being wrong ultimately determine when good enough is actually good enough.
- Adopting an environment of continuous data and model refinement in order to exploit the unique economic characteristics of data and analytics.
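The “good enough” point in the list above can be made concrete: given the business costs of a false positive versus a false negative, the expected cost of a model’s errors determines whether it is acceptable. A minimal sketch, with all costs and counts purely hypothetical:

```python
# Illustrative sketch: judging "good enough" by the business cost of
# being wrong (all numbers are hypothetical).

def expected_cost(false_positives, false_negatives,
                  cost_per_fp, cost_per_fn):
    """Total expected cost of a model's errors."""
    return false_positives * cost_per_fp + false_negatives * cost_per_fn

# Hypothetical fraud-detection scenario: a false positive (blocking a
# legitimate transaction) annoys a customer; a false negative (missing
# actual fraud) costs real money.
model_a = expected_cost(false_positives=120, false_negatives=8,
                        cost_per_fp=5.0, cost_per_fn=500.0)
model_b = expected_cost(false_positives=40, false_negatives=15,
                        cost_per_fp=5.0, cost_per_fn=500.0)

# Model A makes more raw "mistakes", yet costs less where it matters.
print(model_a)  # 120*5 + 8*500  = 4600.0
print(model_b)  # 40*5 + 15*500 = 7700.0
```

The point of the sketch: which model is “good enough” depends entirely on the cost assumptions, which is why those costs must be defined with the business stakeholders up front.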
Heck, we even created this really boss infographic to highlight the differences (with the cool Jason and the Argonauts analogy).
Figure 2: Why Is Data Science Different than Software Development
AI Industry Luminary Peter Norvig’s article “AI Programming: So Much Uncertainty” discusses how different AI programming is from traditional software programming. Table 1 summarizes some of the key differences between traditional Software Programming and AI Programming (Data Science).
| Traditional Software Programming | AI Programming (Data Science) |
| --- | --- |
| Software Programmers tell the computer exactly how to do something step-by-step | Data Scientists build models that learn what to do iteration-by-iteration |
| Software Programming is about the code | AI Programming is about the model |
| Traditional Software deals with Certainty – remove $100 from your checking account and the software takes a definitive action to update all the affected balances without further validation | AI Programming deals with Probabilities – we can’t say for sure “this is fraud” and “this is not”; we can only say so probabilistically, and the result still requires validation as to whether it actually was fraud |
| Traditional Software goes through a formal version control / check-in / check-out update process | AI Programming updates the models themselves, making changes on the fly, without human intervention or human reprogramming |

Table 1: Differences Between Traditional Software Programming and AI Programming
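The certainty-versus-probability distinction in Table 1 can be sketched in a few lines (the fraud threshold and probability scores below are made up for illustration):

```python
# Traditional software: a deterministic, exact operation.
def withdraw(balance, amount):
    """Remove amount from balance -- the outcome is certain."""
    return balance - amount

# AI programming: the model only emits a probability; a human-chosen
# threshold (hypothetical here) turns it into a provisional decision
# that still needs downstream validation.
def flag_fraud(fraud_probability, threshold=0.9):
    """Flag a transaction as probable fraud -- never as certain fraud."""
    return fraud_probability >= threshold

print(withdraw(500, 100))  # 400   -- definitive, no validation needed
print(flag_fraud(0.97))    # True  -- probably fraud; still verify
print(flag_fraud(0.42))    # False -- probably not fraud; not certain
```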
Without human intervention… that’s the key aspiration of AI programming: that we can build AI models that continuously learn and adapt without the need for a human to reprogram them. And if we are to build AI models that know how to continuously learn and refine themselves without human intervention or reprogramming, then it becomes critically important that we invest the time upfront to understand the sources of value – both rewards and penalties.
A “Rational AI Agent” should strive to “do the right thing”, based on what it can perceive and the actions it can perform. The right action is the one that will cause the rational AI agent to be most successful. A rational AI agent should select an action that is expected to maximize its rewards while minimizing penalties, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
A key aspect of defining a rational AI agent is the definition of the objective criterion that measures the progress and success of the agent’s behavior. For example, the objective criterion of an autonomous vacuum-cleaner agent could be to maximize the amount of dirt cleaned up while minimizing the time taken, the electricity consumed, the noise generated, and the banging and wear-and-tear on the furniture and walls.
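One way to picture the vacuum-cleaner objective criterion above is a single weighted score the agent tries to maximize. The weights here are invented purely for illustration; in practice they would come from the stakeholders’ value definitions:

```python
# Hypothetical objective criterion for the autonomous vacuum agent:
# reward dirt cleaned, penalize time, electricity, noise, and collisions.
# All weights are made-up illustrations of conflicting priorities.
WEIGHTS = {
    "dirt_cleaned": 10.0,      # reward
    "minutes_taken": -0.5,     # penalty
    "watt_hours_used": -0.2,   # penalty
    "noise_level": -1.0,       # penalty
    "furniture_bumps": -3.0,   # penalty
}

def objective_score(measurements):
    """Combine the conflicting measures into one score to maximize."""
    return sum(WEIGHTS[k] * v for k, v in measurements.items())

run = {"dirt_cleaned": 12, "minutes_taken": 30,
       "watt_hours_used": 25, "noise_level": 4, "furniture_bumps": 1}
print(objective_score(run))  # 120 - 15 - 5 - 4 - 3 = 93.0
```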
An AI agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt) without human intervention or the need for human reprogramming. Autonomous AI agents should be able to perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration); that is, autonomous AI agents should be smart enough to realize when more data is needed and make decisions that do nothing but gather more data that might impact future decisions (see Figure 3).
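The “gather more data” behavior described above is the classic explore-versus-exploit trade-off. A minimal epsilon-greedy sketch (the epsilon value and action estimates are hypothetical):

```python
import random

def choose_action(value_estimates, epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known action, but
    occasionally explore purely to gather more information that
    might improve future decisions."""
    if random.random() < epsilon:
        return random.choice(list(value_estimates))           # explore
    return max(value_estimates, key=value_estimates.get)      # exploit

estimates = {"action_a": 4.2, "action_b": 3.9, "action_c": 1.0}
print(choose_action(estimates, epsilon=0.0))  # action_a -- pure exploitation
```

With epsilon greater than zero, the agent sometimes picks an action that looks worse today just to learn more about it – exactly the information-gathering behavior of an autonomous agent.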
Figure 3: Source “Intelligent Agents”
In yet another example of the important relationship between AI and Economics (see “Data and Economics 101”), utility determination is a critical aspect in defining both AI and economic success:
- In AI, a utility function assigns values to certain actions that the AI system can take. An AI agent’s preferences over possible outcomes can be captured by a function that maps these outcomes to a utility value; the higher the number the more that agent likes that outcome.
- In economics, utility function is an important concept that measures preferences over a set of goods and services. Utility represents the satisfaction that consumers receive for choosing and consuming a product or service.
So, to create a “rational AI agent” that understands how to take the appropriate actions, the AI programmer must define the utility function across a variety of often-conflicting value dimensions. For example: increase financial value while reducing operational costs and risks, improving customer satisfaction and likelihood to recommend, improving societal value and quality of life, and reducing environmental impact and carbon footprint (see Figure 4).
Figure 4: Value Definition Challenge: Optimizing Across Conflicting Priorities
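One way to picture the conflict across these value dimensions is a utility function with explicit, stakeholder-chosen weights. The dimension names come from the text; the weights and outcome numbers below are hypothetical:

```python
# Hypothetical multi-dimensional utility for a "rational AI agent".
# The weights would have to be set collaboratively by the business
# stakeholders, not by the data scientist alone.
WEIGHTS = {
    "financial_value": 0.30,
    "operational_cost": -0.15,
    "operational_risk": -0.10,
    "customer_satisfaction": 0.20,
    "societal_value": 0.10,
    "environmental_impact": -0.15,
}

def utility(outcome):
    """Score an outcome across all (often conflicting) value dimensions."""
    return sum(WEIGHTS[dim] * outcome.get(dim, 0.0) for dim in WEIGHTS)

# Two candidate actions: one maximizes profit, one balances dimensions.
profit_first = {"financial_value": 100, "operational_cost": 40,
                "environmental_impact": 60, "customer_satisfaction": 20}
balanced = {"financial_value": 70, "operational_cost": 25,
            "environmental_impact": 20, "customer_satisfaction": 60,
            "societal_value": 30}

print(utility(profit_first))  # 19.0
print(utility(balanced))      # 29.25 -- the balanced action wins
```

Change the weights and the “rational” choice changes with them – which is exactly why the utility definition, not the code, is where the hard work of AI programming lives.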
I suppose that some of you might wonder why I included “spiritual value creation” in my set of conflicting priorities in determining utility. Spiritual value creation must be considered if we are to leverage AI programming to create machine learning and deep learning “rational” agents and models that are “good enough” at “doing good” under both short-term and long-term considerations.
Determining when an AI model is “good enough” requires that the AI programmer or data scientist fully understands the holistic utility or value determination associated with the model. Unfortunately, making that utility determination is not part of an AI programmer’s or data scientist’s skill set.
In order to make a holistic utility determination, collaboration across a diverse set of internal and external stakeholders is required to identify the short-term and long-term metrics and KPIs against which AI model progress and success will be measured. The careful weighing of the metrics associated with the financial/economic, operational, customer, societal, environmental and spiritual dimensions must be taken into consideration if we are to master AI programming for the betterment of society – which ultimately should be the goal of every AI programmer and data scientist.
In a world more and more driven by AI models, AI Programming and Data Science cannot effectively ascertain when an AI model is “good enough” unless the model’s utility determination takes into consideration all the dimensions: financial/economic, operational, customer, societal, environmental and spiritual.
Blog key points:
- To exploit the “once in a generation” Artificial Intelligence (AI) opportunity, organizations must reframe how they develop models (AI programming) versus traditional software development approaches
- A good AI Programmer, or Data Scientist, is like a good detective – they have a curiosity to continuously explore, ask provocative and challenging questions, and learn in order to build their case (hypothesis)
- A software developer defines the criteria for success, but like an effective detective, an AI Programmer or data scientist discovers the criteria for success.
- AI model development relies upon the creation of “rational agents” that interact with their environment and learn, guided by the definition of rewards and penalties
- The AI model will try to maximize rewards while minimizing penalties, which requires the careful determination of rewards and penalties across multiple dimensions of utility.
- To create a rational AI agent, value determination must consider the utility dimensions of financial/economic, operational, customer, society, environmental and spiritual.
See the blogs below for more details on leveraging AI, Machine Learning and Deep Learning to create rational agents: