How your business can use AI ethics as an innovation tool
Artificial intelligence promises great benefits to enterprises. According to research by MarketsandMarkets, the AI market will grow to $190 billion by 2025. Gartner reports that enterprise use of AI has grown by 270% in just four years.
If you are considering an AI implementation, keep an eye on the ethical risks. More and more business owners are realizing that AI ethics can help them design better products and serve as a tool for innovation. Let me guide you through it.
The ethics of artificial intelligence is an academic field that emerged from the need to govern intelligent software systems that directly affect society. But the field has clear business applications, too: Facebook, Google, IBM, and other tech companies have rapidly growing departments that deal with AI ethics on a daily basis.
Artificial intelligence algorithms power everything from social networks to smart vehicles, and every day they face moral dilemmas that affect human lives. How do we teach machines to make decisions that are fair to everyone, including minorities and disadvantaged groups?
Let’s have a look at MIT’s project The Moral Machine, where you can try your hand at the ethical dilemmas a fully autonomous car has to resolve. All of the scenarios feature a car whose brakes have failed:
- Should the car drive straight into a concrete barrier, killing everyone inside, or swerve to save the passengers at the cost of pedestrians’ lives?
- Whose life is more valuable: that of a chief executive or that of a homeless person?
- Whom should the car save: two men or two children?
These scenarios may seem too cruel to be real, but anything can happen on the road, and one day automated vehicles will have to make such difficult choices. First, though, we need to decide: whom do we prioritize? Or should the outcome be left to chance?
You don’t want to delegate such complex decisions to someone without proper training. Here are some recent cases in which companies failed to acknowledge the ethical risks of AI and paid for it in financial losses and reputational damage:
- Facebook ads. Facebook’s business model allows anyone to place ads on its platform and target a specific demographic group. In 2016, ProPublica discovered that Facebook enabled targeting not only by location, gender, or age but also by ethnicity: advertisers could include, or completely exclude, categories such as African Americans, Asian Americans, or Hispanics. For housing ads, this directly violated the Fair Housing Act of 1968, which makes it illegal to advertise housing in a way that discriminates on the basis of race, color, or national origin. After a group of activists filed a lawsuit, Facebook agreed to reconsider its advertising policies. Today, Facebook has a professional team that assesses ethical risks and tracks applicable regulations, and a scandal like this would be far easier to avoid.
- Amazon hiring system. A system that automatically sorts applications seems perfect for a large company. However, in 2018 Amazon, one of the leading producers of AI-powered products, discovered that its hiring tool discriminated against women. The algorithm had been trained on historical data from successful hires over the previous ten years, and since software engineering is a male-dominated field, the system learned to see male candidates as more suitable for any position. It penalized CVs containing the word “women’s,” as in “women’s volleyball team.” Amazon tried to correct the model but couldn’t achieve neutrality, and in the end the project, which had consumed two years and $50 million, was scrapped.
- Criminal risk assessment systems. In 2016, ProPublica published another investigation, this one uncovering racial bias in risk assessment systems used in courtrooms across the United States. It found that COMPAS, one of the most widely used pretrial assessment algorithms, predicted reoffending for white and black defendants with roughly the same accuracy. But when the model was wrong, black defendants were almost twice as likely as white defendants to be labeled high risk without actually reoffending. Racial bias in the US criminal justice system is hardly news, but it appears that societal biases become deeply ingrained in algorithmic logic. Northpointe, the company behind COMPAS, rejected any claim that race influenced the model’s output and even disputed the validity of ProPublica’s research. The deeper problem is that neither the public nor the defendants know on what basis decisions about sentencing, parole, or treatment plans are made. Coverage by the Marshall Project, the Guardian, MIT Technology Review, Politico, and other outlets brought wide attention to the problem, and at least some states reconsidered their use of COMPAS and moved to other options.
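To make that statistic concrete, here is a minimal sketch in Python. All of the counts are invented for illustration (they are not COMPAS data); the point is that two groups can see identical overall accuracy while one bears nearly twice the false positive rate, i.e., people who never reoffend but are still flagged as high risk:

```python
# Toy confusion-matrix counts per group (all numbers invented):
#   tp = reoffended and flagged high risk, fp = did NOT reoffend but flagged,
#   tn = did not reoffend and not flagged, fn = reoffended but not flagged.
counts = {
    "group_a": {"tp": 300, "fp": 220, "tn": 380, "fn": 100},
    "group_b": {"tp": 200, "fp": 120, "tn": 480, "fn": 200},
}

for group, c in counts.items():
    total = c["tp"] + c["fp"] + c["tn"] + c["fn"]
    accuracy = (c["tp"] + c["tn"]) / total
    # False positive rate: of those who did NOT reoffend,
    # what share was labeled high risk anyway?
    fpr = c["fp"] / (c["fp"] + c["tn"])
    print(f"{group}: accuracy = {accuracy:.2f}, false positive rate = {fpr:.2f}")
```

Both groups here come out at 0.68 accuracy, yet group A’s false positive rate is 0.37 against group B’s 0.20. This is exactly why “the model is equally accurate for both groups” is not, by itself, evidence of fairness.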
You might say that not all AI systems make decisions as consequential as criminal risk assessment or hiring. Your product might handle entirely different tasks, but that doesn’t mean it has no ethical impact. On the contrary: because the problem is less obvious, it may take years of research to uncover biases in the data or other issues. So how do you know whether you’re doing a good job of mitigating ethical risks?
Important to note: there is no one-size-fits-all AI ethics solution. Your AI ethics program must be customized to your industry and your business needs. Nonetheless, there are three universal steps that will help you build a robust and sustainable AI ethics strategy.
Earlier we saw how Facebook, Amazon, and Northpointe struggled to account for biases in their data and make their algorithms fair and transparent. The skill they lacked was the ability to look at a problem from the position of minorities and disadvantaged groups.
We were all taught, at school or by our families, that making assumptions about someone based on their gender, race, skin tone, or age is wrong. But let’s be honest: often that lesson was a formality. Biases are not something we hold consciously; they are implicit, and that is what makes them harmful. We still live in a world where stereotypes guide not only personal actions but even government policy.
This can change. It is a good habit to stop designing only for the needs of the majority (there are plenty of products for them already) and to start looking at the needs of minorities. Sounds like a counterintuitive business model?
A couple of years ago, Julie Passanante Elman published a paper about Fitbit fitness trackers, studying how cultural ideas about disability shape the development and adoption of wearable technology. Hundreds of companies produce fitness trackers for people with no special needs, but very few products consider the needs of disabled users. People in wheelchairs don’t fit the standard programs that use smart gadgets to track calories burned. Even today, only Apple provides convenient wearable options for wheelchair users.
Important disclaimer: we are not proposing to exploit vulnerable groups for profit. But to develop a product that makes a difference, a shift in viewpoint can be genuinely helpful.
Your existing infrastructure is probably already compliant with the CCPA (California Consumer Privacy Act) and/or the GDPR (General Data Protection Regulation). If not, fix that as soon as possible. Managing data-related risks is the first step toward addressing the ethical side of your AI system: compliance with these regulations helps protect your consumers’ rights and reduces your reputational, financial, and legal exposure.
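One everyday habit in this direction is data minimization: drop direct identifiers and pseudonymize record keys before data reaches a training pipeline. Here is a minimal sketch; the column names and the `pseudonymize` helper are hypothetical, not a library API:

```python
import hashlib

import pandas as pd

# Columns we never want near a training pipeline (hypothetical schema).
DIRECT_IDENTIFIERS = ["name", "email", "phone"]

def pseudonymize(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Drop direct identifiers and replace the raw user id with a salted
    hash, so records stay joinable internally but are not trivially
    linkable to a person."""
    out = df.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    out["user_id"] = out["user_id"].map(
        lambda uid: hashlib.sha256(f"{salt}{uid}".encode()).hexdigest()[:16]
    )
    return out

users = pd.DataFrame({
    "user_id": [1, 2],
    "name": ["Ann", "Bo"],
    "email": ["a@example.com", "b@example.com"],
    "age_band": ["25-34", "35-44"],
})
print(pseudonymize(users, salt="rotate-this-secret"))
```

Keep in mind that pseudonymized data still counts as personal data under the GDPR; a step like this reduces risk rather than removing your obligations.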
In addition, many companies today develop guidelines for fairness, ethical design, and non-discriminatory use of AI systems. For inspiration, have a look at the principles that Google follows.
There are several approaches to developing an AI ethics strategy: you can hire a full-time AI ethicist, outsource the work to an R&D firm, or train your own employees, starting with those who work in cybersecurity, business development, law, and analytics. TU Delft and the University of Helsinki offer MOOCs that can help you get started.
Like any other business strategy, an AI ethics strategy needs to be monitored. Track how organizational awareness of AI ethics affects product development, and educate software engineers, product managers, and data analysts to help them make the transition.
Re-assessments are also necessary because AI products are not static: a system can be developed with ethics in mind but deployed completely unethically. Make sure you run tests and use both qualitative and quantitative research to determine how the product affects end users; one quantitative check is sketched below.
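As one example of such a quantitative check, here is a minimal sketch of a release-time fairness test. All names and the 0.10 threshold are hypothetical; it compares positive-outcome rates across user groups (a rough demographic-parity check) and fails if the gap drifts past an agreed limit:

```python
import numpy as np

# Maximum acceptable gap in positive-outcome rates between groups:
# a hypothetical threshold you would agree on with your stakeholders.
MAX_GAP = 0.10

def positive_rate(predictions: np.ndarray) -> float:
    """Share of cases that received the positive outcome (e.g., approval)."""
    return float(np.mean(predictions == 1))

def check_outcome_gap(preds_by_group: dict) -> None:
    """Fail loudly if any two groups' positive-outcome rates drift apart."""
    rates = {g: positive_rate(p) for g, p in preds_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    assert gap <= MAX_GAP, f"Outcome gap {gap:.2f} exceeds {MAX_GAP}: {rates}"

# Re-run on every release with fresh holdout traffic (toy data shown here).
check_outcome_gap({
    "group_a": np.array([1, 0, 1, 0, 1, 0]),  # rate 0.50
    "group_b": np.array([1, 0, 0, 1, 0, 1]),  # rate 0.50
})
```

A check like this does not replace qualitative research with affected users, but it makes the quantitative half of a re-assessment repeatable from one release to the next.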
No one says that operationalizing AI ethics is simple. But companies that invest in it reliably reduce their legal and ethical risks. And ultimately, nothing matters more to a business than its clients’ trust. Build an AI solution that deserves it.