It pays to be cautious around artificial intelligence (AI), especially when cases of problematic use come to light, like the recent Clearview AI controversy and its use by police forces, or governments’ use of AI to make decisions on welfare and tax fraud or immigration. Yet AI’s use is more ubiquitous than people think (Hey Google. Alexa, help.), and the potential beneficial applications are exciting: finding new antibiotics, providing people with arthritis self-management support, preventing starfish from destroying the Great Barrier Reef and assisting with translation for child refugees, to name just a few examples.
So on offer are AI for Good, AI for Social Good, Ethical AI, Explainable AI (XAI) and Trustworthy AI, all aiming to make the approach to and effects of AI less harmful and more acceptable. As of mid-2019, there were at least 84 publications by companies, government institutions, academic and research institutions and others containing ethical principles or guidelines for AI, which show concern for transparency, justice and fairness, non-maleficence, responsibility and privacy. Human rights frameworks applied to the development and use of AI systems are now also frequently discussed.
The development of AI strategy and policy has been called a “wicked problem”: difficult to explain and hard to solve because of the shifting, interacting layers of social, economic, environmental and other issues. To address these problems, there have been calls to be wary of tech solutionism, to engage multiple stakeholders in iterative and adaptive strategies, and to involve the local people closest to the problem in the design process in order to identify and respond to risks. As the Charities Aid Foundation has argued, civil society organizations^ (CSOs) need to be involved beyond debates around AI: CSOs with knowledge of local communities and of underserved and underrepresented populations are best placed to provide insights into human rights and civil liberties issues.
Tech companies and researchers need not only to consider the potential ethical and societal impacts of the products they make, but also to engage directly with the communities those products may affect. The question remains how to do this in a practical way. A human-centered design approach to AI may be the answer, as it encourages designers to engage deeply with end-users’ problems and needs throughout the design process.
The “other stakeholders” in ethical AI
The “public” is often consulted on national and institutional policies, guidelines and regulations: within the past year Ireland, Malta, Brazil and the World Intellectual Property Organization have launched public consultations.
The “stakeholders” are those typically invited to the table to create guidelines and principles: academics and researchers, companies, governments, institutions and CSOs with dedicated interest in AI issues (note: Canada’s Advisory Council on Artificial Intelligence only includes one AI-focused CSO representative).
And “other stakeholders” could be considered the CSOs that represent and advocate for communities and for underserved and underrepresented populations: those already at a disadvantage and who would be most affected by AI.
All of the principles and guidelines are concerned with creating purposeful and inclusive AI, but they don’t offer a concrete path to achieving this vision. As one review sees it, the burgeoning number of ethical guidelines and principles for AI demonstrates a gap between principle and practice, or between the “what” and the “how” of operationalizing “good ethical practices when developing and deploying AI-driven products and services.” Stakeholder participation is viewed as a key operationalizing requirement for the principle of doing good to others (beneficence).

Engaging organizations in AI design
To involve “other stakeholders”, it’s all well and good to say that CSOs need to make investments to “promote human rights and play a role in advocating and demonstrating responsible, rights‑based use of technology in its work”, including being present at international platforms and engaging in initiatives, but engagement at that level is prohibitively costly for many in terms of time, expertise and funding.
In addition, the reality for most CSOs is that digital strategy and digital delivery skills are low, and a lack of funding, time, skills and supportive culture holds them back from using technology, including AI, to further their cause. The CSOs that are jumping into AI technology themselves are large and well-funded, with the capacity to take on innovative ventures and partnerships. And the CSOs that are effectively engaging in debates around AI are those already working on issues like privacy and human rights, such as those that have drafted and endorsed the Toronto Declaration.
One tool suggested for effective stakeholder engagement is a human-centered design approach*. Google’s AI for Social Good guide also recommends a human-centered design approach in practice for non-profits and social enterprises that want to apply AI to social, humanitarian and environmental challenges. Human-centered design is well suited to bringing the “other stakeholders” into stakeholder engagement throughout all the key stages of development and deployment of AI products and services.
In a practical sense, this can look like IDEO’s approach, which calls on designers to bring people along on the design journey:
- During the Inspiration stage, designers should develop a strategy around who to talk to, what to ask them and what information they need to gather, and then interview and talk directly to the communities they want to serve, as well as immersing themselves in those communities.
- During Ideation, designers hold co-creation sessions to get feedback on ideas and test with communities during rapid prototyping and on the prototype itself.
- During Implementation, communities offer feedback during live prototyping, piloting and further iterations.
However, the onus cannot be on CSOs to make this engagement happen, as they are already overburdened trying to provide their own services and programs and are limited financially. People wanting to create AI products and services need to be proactive in their own outreach: working out how best to run effective consultations, doing considerable research on who should be approached for involvement, and discussing fair compensation for a CSO’s and its clients’ time. And designers should be willing to hear “no, this is not a product you should make.”
The future prospects of AI are both exciting and scary as the positive and negative consequences of technological development are revealed and debated. However, behind all the talk of making AI ethical, trustworthy and good lies the very real task of making these products and systems by working directly with, and listening to, the communities they will serve and affect.
^The term “civil society organizations” is prevalent in many of the documents around ethical AI and casts a wide net around charities, non-governmental organizations, foundations, community groups, faith-based organizations, professional associations, trade unions, social movements, coalitions and advocacy groups.
*Human-centered design here is not synonymous with other approaches called human-centered/human-centric/human-in-the-loop design.
Further reading:
AI Needs an Ethical Compass. This Tool Can Help. (IDEO)
What we talk about when we talk about fair AI (Fionntán O’Donnell, BBC News Labs)
How to stimulate effective public engagement on the ethics of artificial intelligence (Involve)
Q&A: Jessica Fjeld on a New Berkman Klein Study of AI Ethical Principles (Berkman Klein Center)
AI governance map v.2.0 (Nesta)
Credit: BecomingHuman. By: Amy Coulterman