AI is one of the most powerful emerging technologies, promising to transform many industries. However, many technology giants have realized that it’s not enough to simply implement AI: it’s not a box to tick on a checklist.
Before making AI technologies accessible to everyone, companies should learn how to use AI responsibly, inclusively, and ethically, as there are many ways for things to go wrong and cause harm. Even now, facial recognition systems misidentify faces (especially those behind masks), voice assistants fail to recognize accents, and AI-driven software suggests incorrect diagnoses or rejects job candidates based on biased algorithms.
Since 2018, many technology corporations, such as IBM, Facebook, and Microsoft, have adopted new ethical principles to increase the fairness of their AI algorithms. Salesforce, the developer of one of the most popular AI-powered customer success platforms, is no exception. Using Salesforce as an example, let’s review how companies can set up a framework for ethical AI.
Since Salesforce made its Einstein AI technology available in each of its products, the company has worked hard to ensure that the technology brings more value than harm. It understood that ethical principles should be incorporated into every stage of product development.
Following that lead, Salesforce founded the Office of Ethical and Humane Use as part of the Office of Equality. The new office operates across the product, legal, policy, and ethics functions, develops the framework for the ethical use of technology, and drives its implementation across all Salesforce products by:
– Safeguarding human rights and protecting customer data
– Leveraging feedback for continuous improvement
– Developing transparent user experience
– Respecting societal values
To nurture the right mindset for creating ethical products, Salesforce enrolls employees in special programs where, regardless of their proficiency in data science, they learn to put ethics at the core of their workflows, interpret AI-powered predictions and recommendations, and identify harmful stereotypes. One such training, called Consequence Training, requires participants to consider all potential intended and unintended consequences of their product or service for users, and to think about how to mitigate potential problems.
As a result, responsibility for maintaining good ethics is now shared by all teams, which encourages each team member to participate in different stages of ethical AI product development, raise questions, report concerns, and surface problems and risks that could otherwise be overlooked.
Salesforce has built a framework that guides the ethical development of AI solutions and pushes engineering and product teams to consider the impact of what they create.
The framework is based on five ethical principles: human rights, privacy, safety, honesty, and inclusion. These principles were worked out through interviews with employees and external consultants about their views on the ethical use of technology.
When an ethical issue arises, the framework calls for communication with industry experts as well as stakeholders from the affected community. The participants analyze the issue by discussing its ethical framing, use cases, and counter-perspectives, and work out a set of recommendations and safeguards for different development stages. For example, during one such brainstorming session, Salesforce decided to prevent its bots from misleading users into thinking they were communicating with a real person.
Salesforce has also established the Data Science Review Board, which encourages and implements best practices in data quality management and model training across the company. Whether during prototyping or product development, the Board ensures that engineering and product teams spot and remove bias from the training datasets for machine learning algorithms.
Salesforce commits to transparency by explaining its models, using clear terms, and empowering users to control their own data and the models that run on it. For example, the company uses model cards that standardize documentation and reveal how machine learning models work, along with their inputs, outputs, working principles, and ethical considerations.
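Model cards are an industry-wide practice rather than a Salesforce-specific format. As a rough illustration, a minimal card can be represented as a simple data structure; the field names and example values below are hypothetical, not Salesforce’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card: a structured summary of a model's purpose,
    inputs/outputs, training data, and ethical considerations.
    (Illustrative sketch only; field names are assumptions.)"""
    model_name: str
    intended_use: str
    inputs: list
    outputs: list
    training_data: str
    ethical_considerations: list = field(default_factory=list)

    def to_markdown(self) -> str:
        # Render the card as a human-readable Markdown summary.
        lines = [
            f"# Model Card: {self.model_name}",
            f"**Intended use:** {self.intended_use}",
            f"**Inputs:** {', '.join(self.inputs)}",
            f"**Outputs:** {', '.join(self.outputs)}",
            f"**Training data:** {self.training_data}",
            "**Ethical considerations:**",
        ]
        lines += [f"- {c}" for c in self.ethical_considerations]
        return "\n".join(lines)

# Hypothetical lead-scoring model documented as a card.
card = ModelCard(
    model_name="LeadScoring-v2",
    intended_use="Rank sales leads by conversion likelihood",
    inputs=["industry", "company_size", "engagement_score"],
    outputs=["conversion_probability"],
    training_data="Historical CRM leads, 2019-2021",
    ethical_considerations=[
        "Excludes age, race, and gender fields",
        "Scores are advisory, not automated decisions",
    ],
)
print(card.to_markdown())
```

The value of the format is less in any particular schema than in forcing every model to ship with the same explicit answers about purpose, data, and risks.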
What’s more, Salesforce consulting specialists confirm that Salesforce products indeed provide a number of features that help users make ethical choices. For instance, the ‘sensitive field’ feature enables admins to mark fields that can contribute to model bias, such as those concerning age, race, or gender. Einstein AI can search for fields that correlate with those marked as ‘sensitive’ and flag them for an admin to review. The admin then decides whether to exclude those fields from the model, reducing possible bias as a result.
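To illustrate the general idea behind flagging proxy fields (Einstein’s actual algorithm is not public, so this is only a simplified sketch with hypothetical field names and thresholds), one can flag numeric fields that correlate strongly with a field marked as sensitive:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxy_fields(records, sensitive_field, threshold=0.8):
    """Flag numeric fields that correlate strongly with a sensitive field,
    so an admin can review them as potential proxies for bias."""
    sensitive = [r[sensitive_field] for r in records]
    flagged = []
    for name in records[0]:
        if name == sensitive_field:
            continue
        values = [r[name] for r in records]
        if abs(pearson(values, sensitive)) >= threshold:
            flagged.append(name)
    return flagged

# Hypothetical records: zip_code_index happens to track age almost exactly,
# making it a likely proxy; engagement does not.
records = [
    {"age": 25, "zip_code_index": 24, "engagement": 7},
    {"age": 40, "zip_code_index": 41, "engagement": 3},
    {"age": 33, "zip_code_index": 32, "engagement": 9},
    {"age": 58, "zip_code_index": 57, "engagement": 4},
]
print(flag_proxy_fields(records, "age"))  # → ['zip_code_index']
```

Real proxy detection handles categorical fields, non-linear relationships, and far larger datasets, but the workflow is the same: the tool surfaces candidates, and a human decides whether to drop them.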
Voice assistants based on natural language processing and automatic speech recognition play an important part in our daily activities and are also transforming enterprise operations. Enterprise-level voice assistants have to deal with much more complex questions, and they are not always expected to joke or show emotion. One thing is clear: voice assistants have started the next wave of AI innovation, so now is the perfect moment to get this technology right when it comes to bias elimination, security, and privacy.
Salesforce has also powered its sales and customer service products with intelligent voice capabilities. To make voice assistants a secure communication channel, Salesforce ensures that only employees with the relevant permissions can access voice data. To that end, the company encrypts the data and automatically strips it of personally identifiable details. Salesforce also designs assistant personas to match its user types in terms of language formality, complexity of answers, range of suggestions, and more.
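As a rough sketch of the PII-stripping step (the patterns and placeholder tokens below are illustrative assumptions, not Salesforce’s implementation), a transcript can be scrubbed of common identifiers before it is stored or routed downstream:

```python
import re

# Illustrative patterns only; production PII detection is far more involved
# (names, addresses, locale-specific formats, ML-based entity recognition).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with placeholder tokens so the stored
    transcript no longer contains the original identifiers."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

transcript = "Call me at 415-555-0199 or email jane.doe@example.com."
print(redact_pii(transcript))
# → Call me at [PHONE] or email [EMAIL].
```

Redaction like this is complementary to encryption: encryption protects data from outsiders, while redaction limits what even authorized insiders and downstream services can see.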
When talking about ethics in AI, we should ask ourselves: ethics in relation to whom? Every product and service represents the values, experience, and biases of its creators. In this regard, it’s important to assemble a team diverse in gender, race, religion, and abilities in order to diversify the corporate ethics culture, apply it at each stage of product development, and release truly inclusive AI solutions.