why and how it needs to work with communities

April 10, 2020
Photo by Clarisse Croset on Unsplash

It pays to be cautious around artificial intelligence (AI), especially when cases of problematic use come to light, like the recent Clearview AI controversy and the company’s use by police forces, or governments’ use of AI to make decisions on welfare, tax fraud and immigration. Yet AI is more ubiquitous than people think (Hey Google. Alexa, help.) and the potential beneficial applications are exciting: finding new antibiotics, providing self-management support to people with arthritis, preventing starfish from destroying the Great Barrier Reef and assisting with translation for child refugees, to name just a few.

In response, we are offered AI for Good, AI for Social Good, Ethical AI, Explainable AI (XAI) and Trustworthy AI, all aiming to make the approach to, and effects of, AI less harmful and more acceptable. As of mid-2019, there were at least 84 publications by companies, governments, academic and research institutions and others containing ethical principles or guidelines for AI, showing concern for transparency, justice and fairness, non-maleficence, responsibility and privacy. Human rights frameworks applied to the development and use of AI systems are now frequently discussed.

The development of AI strategy and policy has been called a “wicked problem”: difficult to explain and hard to solve because of the shifting, interacting layers of social, economic, environmental and other issues. To address these problems, there have been calls to be wary of tech solutionism, to engage multiple stakeholders in iterative and adaptive strategies, and to involve the local people closest to the problem in the design process in order to identify and respond to risks. As the Charities Aid Foundation has argued, civil society organizations^ (CSOs) need to be involved beyond debates around AI: CSOs that have knowledge of local communities and of underserved and underrepresented populations are best placed to provide insights into human rights and civil liberties issues.

Tech companies and researchers need not only to consider the potential ethical and societal impacts of the products they make, but also to engage directly with the communities those products may affect. The question remains how to do this in a practical way. A human-centered design approach to AI may be the answer, as it encourages designers to engage deeply with end-users’ problems and needs throughout the design process.

The “other stakeholders” in ethical AI

According to an analysis of key ethical AI statements, the various stakeholders mentioned have been divided into three groups: the “public”, those to be educated and surveyed; “stakeholders”, the experts that make AI happen; and “other stakeholders”, those that have AI happen to them.

The “public” is often consulted on national and institutional policies, guidelines and regulations: within the past year, Ireland, Malta, Brazil and the World Intellectual Property Organization have all launched public consultations.

The “stakeholders” are those typically invited to the table to create guidelines and principles: academics and researchers, companies, governments, institutions and CSOs with dedicated interest in AI issues (note: Canada’s Advisory Council on Artificial Intelligence only includes one AI-focused CSO representative).

And the “other stakeholders” can be understood as CSOs that represent and advocate for communities and for underserved and underrepresented populations: those already at a disadvantage who would be most affected by AI.

Photo by Andy Kelly on Unsplash

All of the principles and guidelines are concerned with creating purposeful and inclusive AI, but they don’t offer any concrete path to achieving this vision. As one review sees it, the burgeoning number of ethical guidelines and principles for AI demonstrates a gap between principle and practice, or between the “what” and the “how” of operationalizing “good ethical practices when developing and deploying AI-driven products and services.” Stakeholder participation is viewed as a key operationalizing requirement for the principle of doing good to others (beneficence).

Engaging organizations in AI design

To involve “other stakeholders”, it’s all well and good to say that CSOs need to make investments to “promote human rights and play a role in advocating and demonstrating responsible, rights‑based use of technology in its work”, including being present on international platforms and engaging in initiatives, but engagement at that level is prohibitively costly for many in time, expertise and funding.

In addition, the reality for most CSOs is that digital strategy and digital delivery skills are low, and a lack of funding, time, skills and organizational culture holds them back from using technology, including AI, to further their cause. The CSOs that are jumping into AI technology for themselves are large and well funded, with the capacity to take on innovative ventures and partnerships. And the CSOs that are effectively engaging in debates around AI are those already working on issues like privacy and human rights, such as those that have drafted and endorsed the Toronto Declaration.

One tool that is suggested for effective stakeholder engagement is a human-centered design approach*. Google’s AI for Social Good guide likewise recommends a human-centered design approach for non-profits and social enterprises that want to apply AI to social, humanitarian and environmental challenges. Human-centered design is ideal for capturing the “other stakeholders” in stakeholder engagement throughout all the key stages of development and deployment of AI products and services.

In a practical sense, this can look like IDEO’s approach, which calls on designers to bring people along on the design journey:

  • During the Inspiration stage, designers should develop a strategy around who to talk to, what to ask them and what information they need to gather, and then interview and talk directly to the communities they want to serve, as well as immersing themselves in those communities.
  • During Ideation, there are co-creation sessions to get feedback on ideas and testing with communities during rapid prototyping and on the prototype.
  • During Implementation, communities offer feedback during live prototyping, piloting and further iterations.

However, the onus cannot be on CSOs to make this engagement happen: they are already overburdened with providing their own services and programs, and they are financially constrained. People wanting to create AI products and services need to be proactive in their own outreach, delving into how best to run effective consultations, researching who should be approached for involvement, and discussing fair compensation for a CSO’s and its clients’ time. And designers should be willing to hear “no, this is not a product you should make.”

The future prospects of AI are both exciting and scary as the positive and negative consequences of technological development are revealed and debated. However, behind all the talk of making AI ethical, trustworthy and good lies the very real task of making these products and systems by directly working with, and listening to, the communities they will serve and affect.

^The term “civil society organizations” is prevalent in much of the literature around ethical AI and casts a wide net around charities, non-governmental organizations, foundations, community groups, faith-based organizations, professional associations, trade unions, social movements, coalitions and advocacy groups.

*Human-centered design here is not synonymous with other approaches labeled human-centered, human-centric or human-in-the-loop design.

Further reading:

AI Needs an Ethical Compass. This Tool Can Help. (IDEO)

What we talk about when we talk about fair AI (Fionntán O’Donnell, BBC News Labs)

How to stimulate effective public engagement on the ethics of artificial intelligence (Involve)

Q&A: Jessica Fjeld on a New Berkman Klein Study of AI Ethical Principles (Berkman Klein Center)

AI governance map v.2.0 (Nesta)

Credit: BecomingHuman. By Amy Coulterman
