
How to stop computers being biased

February 13, 2019, in Machine Learning


One of the first major court cases over how algorithms affect people’s lives came in 2012, after a computer decided to slash Medicaid payments to around 4,000 disabled people in the US state of Idaho based on a database that was riddled with gaps and errors.


More than six years later, Idaho has yet to fix its now-decommissioned computer program. But the falling cost of using computers to make what were once human decisions has seen companies and public bodies roll out similar systems on a mass scale.

In Idaho, it emerged that officials had decided to forge ahead even though tests showed that the corrupt data would produce corrupt results.

“My hunch is that this kind of thing is happening a lot across the United States and across the world as people move to these computerised systems,” said Richard Eppink, the legal director of the American Civil Liberties Union (ACLU) in Idaho, which brought the court case.

“Nobody understands them, they think that somebody else does — but in the end we trust them. Even the people in charge of these programs have this trust that these things are working.”

Today, machine learning algorithms, which are “trained” to make decisions by searching for patterns in large sets of data, are being used in areas as diverse as recruitment, shopping recommendations, healthcare, criminal justice and credit scoring.

Their advantage is greater accuracy and consistency, because they are better able to spot statistical connections and always operate by the same set of rules. But the drawbacks are that it is often impossible to know how an algorithm arrived at its conclusion, and that the programs are only as good as the data they are trained on.


“You feed them your historical data, variables or statistics, and they come up with a profile or model but you have no intuitive understanding of what the algorithm actually learns about you,” said Sandra Wachter, a lawyer and Research Fellow in artificial intelligence at the Oxford Internet Institute.

“Algorithms can of course give unjust results, because we train them with data that is already biased through human decisions. So it’s unsurprising.”

Examples of algorithms going awry are rife: Amazon’s experimental recruitment algorithm ended up screening out female applicants because of a historical overweighting of male employees in the technology industry.
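
The mechanism behind such failures is easy to reproduce. The following sketch is a synthetic toy, not Amazon’s system: a model fitted to historical hiring decisions that favoured men ends up penalising an otherwise identical female candidate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)              # genuine qualification signal
gender = rng.integers(0, 2, n)           # 0 = male, 1 = female (synthetic)

# Historical labels: hiring favoured men regardless of skill.
hired = skill + 1.0 * (gender == 0) + rng.normal(0, 0.5, n) > 0.5

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in every respect except gender:
p_male = model.predict_proba([[1.0, 0]])[0, 1]
p_female = model.predict_proba([[1.0, 1]])[0, 1]
print(f"P(hired | male)   = {p_male:.2f}")
print(f"P(hired | female) = {p_female:.2f}")   # lower, despite equal skill
```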

The e-commerce giant also got in trouble when it used machine learning algorithms to decide where it would roll out its Prime Same Day delivery service; the model cut out primarily black neighbourhoods such as Roxbury in Boston or the South Side of Chicago, denying them the same services as wealthier, white neighbourhoods.

As machine-made decisions become more common, experts are now working out ways to mitigate the bias in the data.

“In the last few years we’ve been forced to open our eyes to the rest of society because AI is going to industry, and industry is putting the products in the hands of everyone,” said Yoshua Bengio, scientific director of the Montreal Institute for Learning Algorithms and a pioneer of deep learning techniques.

Techniques include ways to make an algorithm more transparent, so that those affected can understand how it arrived at a decision. For instance, Google has implemented counterfactual explanations, which let users play with the input variables, such as swapping female for male, to see whether the outcome changes.
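
In spirit, a counterfactual check can be as simple as the sketch below, which assumes a generic classifier and synthetic data rather than Google’s actual tooling: flip one sensitive input and compare the model’s answers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 15, n)
gender = rng.integers(0, 2, n)
# Biased historical approvals: men got an 8-point head start.
approved = income + 8 * (gender == 0) + rng.normal(0, 5, n) > 55

clf = LogisticRegression(max_iter=1000).fit(
    np.column_stack([income, gender]), approved)

applicant = np.array([[52.0, 1]])     # the real applicant
counterfactual = applicant.copy()
counterfactual[0, 1] = 0              # flip only the gender field

p_orig = clf.predict_proba(applicant)[0, 1]
p_cf = clf.predict_proba(counterfactual)[0, 1]
print(f"original {p_orig:.2f} vs counterfactual {p_cf:.2f}")
if abs(p_orig - p_cf) > 0.10:
    print("The decision hinges on gender; flag for review.")
```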

At IBM, researchers recently launched Diversity in Faces, a tool that can tell companies if the faces in their data sets are diverse enough before they start training facial recognition programs.

“AI systems need to see all of us, not just some of us. It’s been reported a number of times that these systems aren’t necessarily fair when you look at different groups of people,” said John Smith, the manager of AI Tech at IBM Research who built the tool.
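
As a rough illustration of that kind of pre-training audit (the annotations below are invented, and the check is far cruder than IBM’s published metrics), one can tally the annotated groups in a dataset and flag the thin ones:

```python
from collections import Counter

# Hypothetical per-image age annotations for a toy face dataset.
age_groups = ["20-39", "20-39", "40-59", "20-39", "60+", "20-39",
              "0-19", "20-39", "40-59", "20-39"]

counts = Counter(age_groups)
total = sum(counts.values())
for group in sorted(counts):
    share = counts[group] / total
    flag = "  <-- under-represented" if share < 0.15 else ""
    print(f"{group:>6}: {share:5.0%}{flag}")
```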

Mr Bengio’s lab is working on designing models that are blind to sensitive information like gender or race when making decisions. “It’s not sufficient to just remove the variable that says gender or race, because that information could be hidden in other places in a subtle way,” he explained. “Race and where you live are highly correlated in the US, for example. We need systems that can automatically pull out that information from the data.”
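
A common way to test for exactly this kind of leakage, sketched here in general terms rather than as the Montreal lab’s method, is to try to predict the removed sensitive attribute from the features that remain; accuracy well above chance means proxies are smuggling it back in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
race = rng.integers(0, 2, n)                  # sensitive attribute (synthetic)
# Neighbourhood code strongly correlated with race, mimicking the
# US race/geography correlation the article mentions.
neighbourhood = race * 10 + rng.integers(0, 3, n)
income = rng.normal(50, 10, n)

X = np.column_stack([neighbourhood, income])  # the "race-blind" features
X_tr, X_te, y_tr, y_te = train_test_split(X, race, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"race recovered from 'blind' features with {acc:.0%} accuracy "
      "(chance would be ~50%)")
```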

Mr Bengio also noted that society “needs to set rules of the game more tightly” around the use of algorithms, because the incentives of companies are not always aligned with the public good.

In Durham in northern England, police have stripped postal address information supplied by the data company Experian out of a program that tries to predict whether people will reoffend after being released from custody.

This variable was removed from the model last year after concerns that it would unfairly punish people from lower-income neighbourhoods.

“People did react to that, because the concern was if you were a human decision-maker classifying risk of reoffending and you knew this person lived in a bad neighbourhood, that might bias your decision,” said Geoffrey Barnes, the Cambridge criminologist who designed the algorithm. “The Experian codes probably had some effect in that direction, so if [removing it] assuages people’s concern about these models, all the better.”


But even with the brightest minds working to screen out unfair decisions, algorithms will never be error-free, because of the complex nature of the decisions they are designed to make. Trade-offs have to be agreed in advance with those deploying the models, and humans have to be empowered to override machines if necessary.

“Human prosecutors make decisions to charge, juries make decisions to convict, judges make decisions to sentence. Every one of those is flawed as well,” said Mr Barnes, who now writes criminal justice algorithms for the Western Australia police force.

“So the question to me is never: ‘Is the model perfect?’ No, it never will be. But is it doing better than flawed human decision-makers would do in its absence? I believe it will.”

But not everyone is convinced the benefits outweigh the dangers.

“The number one way to remove bias is to use a simpler, more transparent method rather than deep learning. I’m not convinced there is a need for [AI] in social decisions,” said David Spiegelhalter, president of the Royal Statistical Society.

“There is an inherent lack of predictability when it comes to people’s behaviour, and collecting huge amounts of data is not going to help. Given the chaotic state of the world, a simpler statistical method is much safer and less opaque.”
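
A logistic regression, offered here as one such simpler method rather than Professor Spiegelhalter’s specific prescription (and fitted to synthetic data), exposes every learned weight for direct scrutiny:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
prior_offences = rng.poisson(1.5, n)
age = rng.normal(35, 10, n)
employed = rng.integers(0, 2, n)
# Synthetic reoffending labels driven by priors and employment.
reoffended = 0.8 * prior_offences - 0.5 * employed + rng.normal(0, 1, n) > 1

X = np.column_stack([prior_offences, age, employed])
model = LogisticRegression(max_iter=1000).fit(X, reoffended)

# Unlike a deep network, every learned weight is open to inspection.
for name, coef in zip(["prior_offences", "age", "employed"], model.coef_[0]):
    print(f"{name:>14}: {coef:+.2f}")
```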

Credit: Google News
