NikolaNews

Google’s ML-fairness-gym lets researchers study the long-term effects of AI’s decisions

February 6, 2020
in Machine Learning

Determining whether an AI system is maintaining fairness in its predictions requires an understanding of models’ short- and long-term effects, which might be informed by disparities in error metrics on a number of static data sets. In some cases, it’s necessary to consider the context in which the AI system operates in addition to the aforementioned error metrics, which is why Google researchers developed ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments.

ML-fairness-gym — which was released as open source on GitHub this week — is designed for researching the long-term effects of automated systems by simulating decision-making with OpenAI’s Gym framework. AI-controlled agents interact with digital environments in a loop: at each step, an agent chooses an action that affects the environment’s state. The environment then reveals an observation that the agent uses to inform its next action, so the environment models the system and dynamics of a problem while the observations serve as data.
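The loop described above can be sketched in a few lines of Python. This is a generic Gym-style loop with made-up agent and environment classes for illustration, not ML-fairness-gym’s actual API:

```python
# A minimal sketch of the Gym-style agent-environment loop described above.
# ThresholdAgent and ToyEnv are illustrative stand-ins, not classes from
# ML-fairness-gym or OpenAI Gym.

class ThresholdAgent:
    """Chooses action 1 (accept) when the observed score clears a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def act(self, observation):
        return 1 if observation >= self.threshold else 0


class ToyEnv:
    """Toy environment whose state (a score) drifts with each decision."""
    def __init__(self, score=600):
        self.score = score

    def step(self, action):
        # The action changes the environment's state ...
        self.score += 10 if action == 1 else -5
        # ... and the environment reveals a new observation to the agent.
        return self.score


agent = ThresholdAgent(threshold=600)
env = ToyEnv()
obs = env.score
for _ in range(5):
    action = agent.act(obs)  # agent picks an action from the last observation
    obs = env.step(action)   # environment updates state, reveals observation
```

The point of the loop structure is exactly the feedback the article emphasizes: each decision changes the state that produces the next observation, so effects compound over time instead of being visible in any single prediction.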


For instance, take the classic lending problem, where the probability that groups of applicants pay back a bank loan is a function of their credit score. The bank acts as the agent, receiving applicants, their scores, and their group membership as environmental observations. It makes a decision — accepting or rejecting a loan — and the environment models whether the applicant successfully repays or defaults, adjusting their credit score accordingly. Throughout, ML-fairness-gym simulates the outcomes so that the effects of the bank’s policies on fairness to applicants can be assessed.
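Under stated assumptions — a made-up score-to-repayment mapping, starting scores, and update rules, not ML-fairness-gym’s actual lending environment — that feedback loop might be sketched like this:

```python
import random

# Hypothetical sketch of the lending dynamic described above. The score
# range, repayment model, and update rules are all assumptions made for
# illustration; this is not ML-fairness-gym's actual lending environment.

def repay_probability(score):
    """Assumed mapping from a 300-850 credit score to repayment probability."""
    return min(max((score - 300) / 550, 0.0), 1.0)

def simulate(policy_threshold, steps=100, seed=0):
    rng = random.Random(seed)
    # Two groups with different (assumed) starting credit scores.
    scores = {"group_a": 620, "group_b": 580}
    for _ in range(steps):
        for group, score in scores.items():
            if score >= policy_threshold:  # the bank's accept/reject policy
                repaid = rng.random() < repay_probability(score)
                # The outcome feeds back into the applicant's credit score.
                scores[group] = score + (15 if repaid else -30)
            # Rejected applicants are unchanged in this toy model.
    return scores

final = simulate(policy_threshold=600)
```

Sweeping `policy_threshold` and comparing the resulting per-group score distributions is the kind of long-run analysis the framework is built for; in this toy run, group_b starts below the threshold and is never served at all, and with this seed a single early default locks group_a out too.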

ML-fairness-gym in this way cleverly avoids the pitfalls of static data set analysis. If the test sets (i.e., corpora used to evaluate model performance) in classical fairness evaluations are generated from existing systems, they may be incomplete or reflect the biases inherent to those systems. Furthermore, the actions informed by the output of AI systems can have effects that might influence their future input.

Above: In the lending problem scenario, this graph illustrates changing credit score distributions for two groups over 100 steps of simulation. (Image credit: Google)

“We created the ML-fairness-gym framework to help ML practitioners bring simulation-based analysis to their ML systems, an approach that has proven effective in many fields for analyzing dynamic systems where closed form analysis is difficult,” wrote Google Research software engineer Hansa Srinivasan in a blog post.

Several environments that simulate the repercussions of different automated decisions are available, including environments for college admissions, lending, attention allocation, and infectious disease. (The ML-fairness-gym team cautions that the environments aren’t meant to be hyper-realistic and that best-performing policies won’t necessarily translate to the real world.) Each comes with a set of experiments corresponding to published papers, meant to provide examples of ways ML-fairness-gym can be used to investigate outcomes.

The researchers recommend using ML-fairness-gym to explore phenomena like censoring in the observation mechanism, errors from the learning algorithm, and interactions between the decision policy and the environment. The simulations allow for the auditing of agents to assess the fairness of decision policies based on observed data, which can motivate data collection policies. And they can be used in concert with reinforcement learning algorithms — algorithms that spur on agents with rewards — to derive new policies with potentially novel fairness properties.
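As a sketch of what such an audit might compute, here is a simple demographic-parity check over a simulated decision log. The `(group, accepted)` log format is an assumption for illustration, not ML-fairness-gym’s actual output:

```python
# Sketch of auditing a decision policy from simulated outcomes. The
# (group, accepted) log format is assumed for illustration.

def acceptance_rates(log):
    """Per-group acceptance rates from a list of (group, accepted) pairs."""
    totals, accepts = {}, {}
    for group, accepted in log:
        totals[group] = totals.get(group, 0) + 1
        accepts[group] = accepts.get(group, 0) + int(accepted)
    return {g: accepts[g] / totals[g] for g in totals}

def demographic_parity_gap(log):
    """Largest difference in acceptance rate between any two groups."""
    rates = acceptance_rates(log)
    return max(rates.values()) - min(rates.values())

log = [("a", True), ("a", True), ("a", False),
       ("b", True), ("b", False), ("b", False)]
gap = demographic_parity_gap(log)
```

Demographic parity is only one of many fairness criteria; the value of running such metrics over simulated rather than static logs is that the gap can be tracked as the policy’s decisions reshape the population over time.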

In recent months, a number of corporations, government agencies, and independent researchers have made attempts at tackling the so-called “black box” problem in AI — the opaqueness of some AI systems — with varying degrees of success.

“Machine learning systems have been increasingly deployed to aid in high-impact decision-making, such as determining criminal sentencing, child welfare assessments, who receives medical attention and many other settings,” continued Srinivasan. “We’re excited about the potential of the ML-fairness-gym to help other researchers and machine learning developers better understand the effects that machine learning algorithms have on our society, and to inform the development of more responsible and fair machine learning systems.”

In 2017, the U.S. Defense Advanced Research Projects Agency launched DARPA XAI, a program that aims to produce “glass box” models that can be easily understood without sacrificing performance. In August, scientists from IBM proposed a “factsheet” for AI that would provide information about a model’s vulnerabilities, bias, susceptibility to adversarial attacks, and other characteristics. A recent Boston University study proposed a framework to improve AI fairness. And Microsoft, IBM, Accenture, and Facebook have developed automated tools to detect and mitigate bias in AI algorithms.

Credit: Google News

© 2019 NikolaNews.com - Global Tech Updates
