As with anything, boundaries and frameworks need to be established, and AI should be no different.
Whether we know it or not, AI is already present in our everyday lives: in the way social media feeds are organized, the way predictive searches appear on Google, the way music services such as Spotify suggest songs, and the way YouTube recommends videos.
The technology is helping transform the way enterprises do business. For example:
- Horizontal AI is essentially AI infrastructure: the picks and shovels for solving problems with AI. These are companies that sell tools to help their customers implement AI. DataRobot, Domino Data Labs, and Scale are examples of companies that provide "horizontal AI." Specifically, DataRobot provides a fully automated AI product that a business analyst with modest training can use to build virtually any prediction engine, given the right data. Similarly, Domino Data Labs provides an opinionated framework that enables advanced AI engineers to accelerate development with tools, such as version control, that are custom-made for AI applications. Further, a business like Scale outsources the data-labeling problem so that autonomous vehicle teams can focus on building software, not labeling data.
- Vertical AI companies solve a specific vertical problem using AI. For example, ClimaCell uses AI to better predict weather microclimates, Kodiak uses it to operate semi-trucks, Viz.ai uses it to accelerate the identification and treatment of strokes, and Verkada uses it for security and people identification.
According to Data61 principal scientist in strategy and foresight Stefan Hajkowicz, AI creates a “window of problem-solving capabilities”.
AI is going to be able to save many people from cancer, improve mental health through AI-enabled counseling sessions, and help reduce road accidents; the technology promises huge benefits to your life in the future.
Humanity desperately needs it. AI is going to be critical in solving dilemmas in healthcare, for instance, where expenditure is growing at unsustainable rates. It will be a crucial technology for pretty much every sector of our economy.
A report released by PwC in 2017 predicted that AI will boost global GDP by 14%, or $15.7 trillion, by 2030.
What to avoid
Given the impact AI will have, there’s scope to make the technology better.
There are many ways in which we could use AI for social good, but over the last year or two it has become apparent that there are potentially a lot of unintended consequences, not to mention ways in which AI could be used maliciously. As a result, there is huge demand and interest in developing AI that reflects our values.
Evidence of AI gone wrong can be found in the United States, where algorithms were used to provide recommendations on prison sentences. A report from ProPublica concluded that one such system was biased against black defendants, as it consistently recommended longer sentences for them than for white counterparts convicted of the same crime.
The United Nations Educational, Scientific, and Cultural Organisation (UNESCO) recently accused Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa, along with other female-voice digital assistants, of reinforcing commonly held gender biases.
Because most voice assistants speak with a female voice, they send a signal that women are obliging, docile, and eager-to-please helpers, available at the touch of a button or with a blunt voice command like "hey" or "OK." The assistant holds no power of agency beyond what the commander asks of it. It honors commands and responds to queries regardless of tone or hostility. In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment.
Another example is Microsoft's AI bot, Tay, which was originally designed to interact with people online through casual and playful conversation but ended up hoovering up good, bad, and ugly interactions alike. Less than 16 hours after launch, Tay had turned into a brazen anti-Semite, stating, "The Nazis were right."
There are plentiful examples of how our algorithms can inherit the biases that exist in our society if we're not careful.
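To see how this inheritance happens mechanically, consider a minimal sketch (with entirely hypothetical data): a model that simply "learns" the average of past sentencing decisions will faithfully reproduce any bias baked into those decisions, even though the model itself contains no explicit reference to group membership in its logic.

```python
# Toy illustration with hypothetical data: historical decisions where
# group B received longer sentences than group A for the same severity.
# Each record is (group, crime_severity, sentence_years).
history = [
    ("A", 1, 2), ("A", 1, 2), ("A", 2, 4), ("A", 2, 4),
    ("B", 1, 4), ("B", 1, 4), ("B", 2, 6), ("B", 2, 6),
]

def train(records):
    """'Learn' the average historical sentence per (group, severity) pair."""
    sums, counts = {}, {}
    for group, severity, years in records:
        key = (group, severity)
        sums[key] = sums.get(key, 0) + years
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

model = train(history)

# Same crime severity, different group -> different recommendation:
# the bias in the training data passes straight through to the model.
print(model[("A", 1)])  # 2.0
print(model[("B", 1)])  # 4.0
```

Nothing in `train` is malicious; it just summarizes the data it was given. That is the core problem: a model optimized to match biased historical outcomes will reproduce them at scale unless its builders deliberately audit for and correct such disparities.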
However, if AI is carefully programmed to ask the right questions and is designed by diverse teams, it can make far more just decisions.
One of our greatest concerns about existing AI is the people who build these systems: critics say the field is dominated by people working in Silicon Valley who come from elitist backgrounds.
One of the key concerns often raised is that AI is being built largely by young white males in the 20-30 age bracket, because that is who makes up the AI workforce.
I don't think that immediately means they are building AI that is biased, but I think it's worth looking into how that might happen, and into whether they are creating AI that genuinely reflects the diverse world we live in.
Building ethical AI with diversity
Part of the solution to overcoming the systemic biases built into existing AI systems is to have open conversations about ethics, and about how it can be applied to AI, with input from diverse views in terms of culture, gender, age, and socio-economic background.
What we need to do is figure out how to develop systems that incorporate democratic values, and we need to start a discussion within our society about what we want those values to be.
It's all about constant review and revision, and about recognizing that we do evolve as a society and, hopefully, evolve to become morally better.