Amazon aims to promote the development of “fair” systems that minimize bias and address issues of transparency and accountability in AI. Toward that end, the Seattle company today announced that it’ll partner with the National Science Foundation (NSF) to commit up to $10 million in research grants over the next three years focused on fairness in AI and machine learning.
“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” Prem Natarajan, vice president of natural understanding in the Alexa AI group, wrote in a blog post. “Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust.”
Amazon’s partnership with NSF will specifically target explainability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity, with the goal of enabling “broadened acceptance” of AI systems and allowing the U.S. to “further capitalize” on the potential of AI technologies. The two organizations expect proposals, which they’re accepting from today through May 10, to result in new open source tools, publicly available datasets, and publications.
Amazon will provide partial funding for the program, with NSF making award determinations independently and in accordance with its merit review process. In 2020 and 2021, the program is expected to continue with additional calls for letters of intent.
“We are excited to announce this new collaboration with Amazon to fund research focused on fairness in AI,” said Jim Kurose, NSF’s head of computer and information science and engineering. “This program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning.”
With today’s announcement, Amazon joins a growing number of corporations, academic institutions, and consortiums engaged in the study of ethical AI. Already, their collective work has produced algorithmic bias mitigation tools that promise to accelerate progress toward more impartial AI.
In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias. Microsoft launched a solution of its own in May, and in September, Google debuted the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework.
IBM, not to be outdone, in the fall released AI Fairness 360, a cloud-based, fully automated suite that “continually provides [insights]” into how AI systems are making their decisions and recommends adjustments — such as algorithmic tweaks or counterbalancing data — that might lessen the impact of prejudice. And recent research from its Watson and Cloud Platforms group has focused on mitigating bias in AI models, specifically as they relate to facial recognition.
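IBM also maintains an open source Python companion to the suite, aif360, which exposes the same family of mitigation techniques. The sketch below shows the kind of “counterbalancing data” adjustment described above: reweighing training examples so that a protected attribute no longer skews the favorable outcome. The column names, privileged and unprivileged groups, and toy data are purely illustrative, not taken from any of the tools mentioned here.

```python
# Minimal sketch of pre-processing bias mitigation by reweighing, assuming the
# open-source aif360 Python package (pip install aif360). Column names, groups,
# and data are illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy tabular data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable binary outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "score": [0.9, 0.7, 0.4, 0.8, 0.6, 0.3, 0.5, 0.2],
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure group disparity before mitigation.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference before:", before.statistical_parity_difference())

# Reweighing assigns per-instance weights that counterbalance the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
mitigated = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    mitigated, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference after: ", after.statistical_parity_difference())
```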
Today’s blog post, it’s worth noting, comes months after researchers at the Massachusetts Institute of Technology published a study that found Rekognition, Amazon Web Services’ (AWS) object detection API, failed to reliably determine the sex of female and darker-skinned faces in specific scenarios. The study’s coauthors claimed that in experiments conducted over the course of 2018, Rekognition’s facial analysis feature mistakenly identified pictures of women as men 19 percent of the time, and pictures of darker-skinned women as men 31 percent of the time.
Amazon disputed — and continues to dispute — those findings. It says that internally, in tests of an updated version of Rekognition, it observed “no difference” in gender classification accuracy across all ethnicities. And it says that the paper in question failed to make clear the confidence threshold used in the experiments.
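For context on how such a threshold works in practice, the sketch below shows one way a caller might filter Rekognition’s facial analysis output by prediction confidence using the boto3 SDK. The 99 percent cutoff and the image file are illustrative choices, not values specified by Amazon or by the study.

```python
# Minimal sketch of applying a confidence threshold to Rekognition facial
# analysis results via boto3. Threshold and image path are illustrative only.
import boto3

CONFIDENCE_THRESHOLD = 99.0  # hypothetical cutoff, not an Amazon recommendation

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # include gender, age range, and other attributes
)

for face in response["FaceDetails"]:
    gender = face["Gender"]
    if gender["Confidence"] >= CONFIDENCE_THRESHOLD:
        print(f"Gender: {gender['Value']} ({gender['Confidence']:.1f}% confidence)")
    else:
        # Below the threshold, treat the prediction as inconclusive rather than
        # accepting a low-confidence label.
        print("Gender prediction below threshold; treating as inconclusive")
```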