The battle for ethical AI at the world’s biggest machine-learning conference

January 25, 2020
in Machine Learning

Facial-recognition algorithms have been at the centre of privacy and ethics debates. Credit: Qilai Shen/Bloomberg/Getty

Diversity and inclusion took centre stage at one of the world’s major artificial-intelligence (AI) conferences in 2018. But at last month’s Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, a meeting that once had a controversial reputation, attention shifted to another big issue in the field: ethics.

The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies — such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. “There is no such thing as a neutral tech platform,” warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs. At the meeting, which hosted a record 13,000 attendees, researchers grappled with how to meaningfully address the ethical and societal implications of their work.
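
One way to make “bias in algorithms” concrete is to measure it. The sketch below shows one common (and deliberately narrow) fairness metric, the demographic parity difference; the function name and toy data are invented for illustration, and a small value on this single check by no means makes a system fair.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: group membership (0/1).
    This captures only one narrow notion of bias.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: a screening model flags 60% of group 0 but only 40% of group 1.
preds = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(preds, groups))  # ~0.2
```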

Ethics gap

Ethicists have long debated the impacts of AI and sought ways to use the technology for good, such as in health care. But researchers are now realizing that they need to embed ethics into the formulation of their research and understand the potential harms of algorithmic injustice, says Meredith Whittaker, an AI researcher at New York University and co-founder of the AI Now Institute, which seeks to understand the social implications of the technology. At the latest NeurIPS, researchers couldn’t “write, talk or think” about these systems without considering possible social harms, she says. “The question is, will the change in the conversation result in the structural change we need to actually ensure these systems don’t cause harm?”

Conferences such as NeurIPS, which, together with two other annual meetings, publishes the majority of papers in AI, bear some responsibility, says Whittaker. “The field has blown up so much there aren’t enough conferences or reviewers. But everybody wants their paper in. So there is huge leverage there,” she says.

But research presented at NeurIPS doesn’t face a specific ethics check as part of the review process. The pitfalls of this were encapsulated by the reaction to one paper presented at the conference. The study claimed to be able to generate faces — including aspects of a person’s age, gender and ethnicity — on the basis of voices. Machine-learning scientists criticized it on Twitter as being transphobic and pseudoscientific.

Potential solutions

One solution could be to introduce ethical review at conferences. NeurIPS 2019 included for the first time a reproducibility checklist for submitted papers. In the future, once accepted, papers could also be checked for responsibility, says Joelle Pineau, a machine-learning scientist at McGill University in Montreal, Canada, and at Facebook, who is on the NeurIPS organizing committee and developed the checklist.

NeurIPS says that an ethics committee is on hand to deal with concerns during the existing review process, but it is considering ways to make its work on ethical and societal impacts more robust. Proposals include asking authors to make a statement about the ethics of their work and training reviewers to spot ethics violations. The organizers of the annual International Conference on Learning Representations — another of the major AI meetings — said they were also discussing the idea of reviewing papers with ethics in mind, following conversations in the community.

AI Now goes a step further: in a report published last month, it called for all machine-learning research papers to include a section on societal harms, as well as the provenance of their data sets. Such considerations should centre on the perspectives of vulnerable groups, which AI tends to impact disproportionately, Abeba Birhane, a cognitive scientist at University College Dublin, told NeurIPS’s Black in AI workshop, at which her study on ‘relational ethics’ won best paper. “Algorithms exclude older workers, trans people, immigrants, children,” said Birhane, citing uses of AI in hiring and surveillance. Developers should ask not only how their algorithm might be used, but whether it is necessary in the first place, she said.
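
To make the AI Now proposal more tangible, here is a hypothetical sketch of a machine-readable provenance-and-harms record that could ship alongside a data set. The field names and values are invented for illustration; they are not taken from the report or any published standard.

```python
# Hypothetical, minimal provenance/harms record for a data set.
dataset_datasheet = {
    "name": "example-hiring-dataset",  # invented example
    "provenance": "public job applications collected 2015-2018",
    "consent_obtained": False,  # surface consent issues explicitly
    "known_gaps": [
        "older workers underrepresented",
        "gender recorded as a binary only",
    ],
    "potential_harms": [
        "screening models trained on it may replicate past hiring bias",
    ],
    "intended_use": "fairness research only; not for deployment",
}

for field, value in dataset_datasheet.items():
    print(f"{field}: {value}")
```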

Business influences

Tech companies — which are responsible for vast amounts of AI research — are also addressing the ethics of their work (Google alone was responsible for 12% of papers at NeurIPS, according to one estimate). But activists say that they must not be allowed to get away with ‘ethics-washing’. Tech companies suffer from a lack of diversity, and although some firms have staff and entire boards dedicated to ethics, campaigners warn that these often have too little power. Their technical solutions — which include efforts to ‘debias’ algorithms — are also often misguided, says Birhane. The approach wrongly suggests that bias-free data sets exist, and fixing algorithms doesn’t solve the root problems in underlying data, she says.
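
For context on what ‘debiasing’ often means in practice, here is a minimal sketch of one standard technique: reweighting training examples so that every (group, label) combination carries equal total weight. The function name and toy arrays are invented for illustration. As Birhane’s criticism implies, this equalizes counts in the data but cannot repair labels that already encode past discrimination.

```python
import numpy as np

def balancing_weights(group, label):
    """Per-example weights giving every (group, label) cell equal total mass.

    Assumes each (group, label) combination occurs at least once.
    """
    group, label = np.asarray(group), np.asarray(label)
    n = len(label)
    n_cells = len(np.unique(group)) * len(np.unique(label))
    weights = np.empty(n, dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            # Rare combinations get large weights, common ones small weights.
            weights[cell] = n / (n_cells * cell.sum())
    return weights

# Example: group 1 rarely has positive labels, so that one row is upweighted.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([1, 1, 0, 0, 1, 0, 0, 0])
print(balancing_weights(group, label))
```

Such weights can be passed to most learners, for example via the sample_weight argument that scikit-learn estimators accept in fit(); the underlying labels stay exactly as biased as they were collected.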

Forcing tech companies to include people from affected groups on ethics boards would help, said Fabian Rogers, a community organizer from New York City. Rogers represents the Atlantic Plaza Towers Tenants Association, which fought to stop its landlord from installing facial-recognition technology without residents’ consent. “Context is everything, and we need to keep that in mind when we’re talking about technology. It’s hard to do that when we don’t have the necessary people to offer that perspective,” he said.

Researchers and tech workers in privileged positions can choose where they work and should vote with their feet, says Whittaker. She worked at Google until last year, and in 2018 organized a walkout of Google staff over the firm’s handling of sexual-harassment claims. Researchers should demand to know the ultimate use of what they are working on, she says.

Another approach would be to change the questions that researchers try to solve, said Ria Kalluri, a machine-learning scientist at Stanford University in California. Researchers could shift power towards the people affected by models and on whose data they are built, she said, by tackling scientific questions that make algorithms more transparent and that create ways for non-experts to challenge a model’s inner workings.
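
One concrete instance of the transparency work Kalluri describes is distilling an opaque model into a small, human-readable surrogate. The sketch below, using scikit-learn on synthetic data, is illustrative only, and the variable names are invented; making deployed systems genuinely contestable is far harder.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real task; a random forest plays the opaque model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Distill: train a depth-3 tree to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A short list of if/then rules that a non-expert can read and contest.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```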
