NikolaNews

Why AI bias can’t be solved with more AI

June 29, 2020
in Machine Learning

Alejandro Saucedo says he could spend hours talking about solutions to bias in machine learning algorithms.  

In fact, he has already spent countless hours on the topic via talks at events and in his day-to-day work. 


It’s an area he is uniquely qualified to tackle. He is engineering director of machine learning at London-based Seldon Technologies, and chief scientist at The Institute for Ethical AI and Machine Learning. 

His key thesis is that the bias which creeps into AI – a problem far from hypothetical – cannot be solved with more tech but with the reintroduction of human expertise. 

In recent years countless stories have detailed how AI decision-making has resulted in women being less likely to qualify for loans, minorities being unfairly profiled by police, and facial recognition technology performing more accurately when analysing white, male faces. 

“You are affecting people’s lives,” he tells BusinessCloud, in reference to the magnitude of these automated decisions in the security and defence space, and even in the judicial process. 

Saucedo explains that machine learning processes are, by definition, designed to be discriminatory – but not like this. 

“The purpose of machine learning is to discriminate toward a right answer,” he said. 

“Humans are not born racist, and similarly machine learning algorithms are not by default going to be racist. They are a reflection of the data ingested.” 

If algorithms merely absorb human bias from the biased data we feed them, it follows that removing that bias should unlock the technology’s real potential. 
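The point that a model simply reflects the data it ingests can be made concrete with a toy sketch. Everything below – the groups, outcomes, and counts – is invented for illustration; it is not drawn from any real system:

```python
from collections import Counter

# Hypothetical historical loan decisions, already skewed against group "B".
history = (
    [("A", "approved")] * 80 + [("A", "denied")] * 20 +
    [("B", "approved")] * 30 + [("B", "denied")] * 70
)

def train_majority_model(records):
    """Learn the most common historical outcome per group -- nothing more."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_model(history)
print(model)  # {'A': 'approved', 'B': 'denied'} -- the skew survives intact
```

The "model" has learned nothing malicious; it has faithfully reproduced the discrimination already present in its training data.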

But the discussion often stops at this theoretical level – or acts as a cue for engineers to fine-tune the software in the hopes of a more equitable outcome. 

It’s not that simple, Saucedo suggests. 

“An ethical question of that magnitude shouldn’t fall onto the shoulders of a single data scientist. They will not have the full picture in order to make a call that could have impact on individuals across generations,” he says. 

Instead the approach with the most promise takes one step further back from the problem. 

Going ‘beyond the algorithm’, as he puts it, involves bringing in human experts, increasing regulation, and a much lighter touch when introducing the technology at all. 

“Instead of just dumping an entire encyclopaedia of an industry into a neural network to learn from scratch, you can bring in domain experts to understand how these machines learn,” he explains. 

This approach allows those making the technology to better explain why an algorithm makes the choices it does – something which is almost impossible with the ‘black box’ of a neural network working on its own. 
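The contrast with a black box can be illustrated with a deliberately transparent model. The feature names and weights here are invented for this sketch, not taken from any real lending system:

```python
# A transparent linear scorer: every feature's contribution to the
# decision is readable, unlike a neural network's internal activations.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # illustrative

def score_with_explanation(applicant):
    """Return the score plus each feature's exact contribution to it."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
)
# 'why' shows exactly which feature drove the result:
# income contributed +2.0, debt -1.6, years_employed +1.5
```

A domain expert can inspect `why` and challenge any individual weight – precisely the kind of review a neural network working on its own does not permit.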

For instance, a lawyer could help with the building of a legal AI, to guide and review the machine learning’s output for nuances – even small things like words which are capitalised. 

In this way, he says, the resulting machine learning becomes easier to understand.  

In practice this means automating part of the process and requiring a human for the remainder – what he calls ‘human augmentation’ or ‘human manual remediation’.  
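A minimal sketch of this kind of human augmentation might route predictions by model confidence – automate only what the model is sure about, escalate the rest. The threshold and case data below are hypothetical:

```python
def route(prediction, confidence, threshold=0.9):
    """Automate high-confidence predictions; send the rest to a human."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

# Illustrative cases: (predicted outcome, model confidence)
cases = [("approve", 0.97), ("deny", 0.62), ("approve", 0.91)]
decisions = [route(p, c) for p, c in cases]
# The low-confidence case (0.62) is escalated instead of being automated.
```

Tuning the threshold is exactly the trade-off Saucedo describes: a lower bar automates more and reviews less, a higher bar does the opposite.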

This could slow down the development of potentially lucrative technology battling to win the AI arms race – but it is a trade-off he says would ultimately be good for both business and people. 

“You either take the slow and painful route which works, or you take the quick fix which doesn’t,” he says. 

Saucedo is only calling for red tape which is proportionate to its potential impact. In short, a potential ‘legal sentencing prediction system’ needs more governance than a prototype being tested on a single user. 

He says anyone building machine learning algorithms with societal impact should be asking how they can build a process which still requires review from human domain expertise. 

“If there is no way to introduce a human to review, the question is: should you even be automating that process? If you should, you need to make sure that you have the ethics structure and some form of ethics board to approve those use cases.”  

And while his premise is that bias is not a single engineer’s problem, he says this does not make engineers exempt.  

“It is important as engineers, individuals and as people providing that data to be aware of the implications. Not only because of the bad use cases, but being aware that most of the incorrect applications of machine learning algorithms are not done through malice but through a lack of best practice.” 

This self-regulation might be tough for fast-paced AI firms hoping to make sales, but conscious awareness on the part of everyone building these systems is a professional responsibility, he says.  

And even self-regulation is only the first step. Good ethics alone does not guarantee a lack of blind spots. 

That’s why Saucedo also suggests external regulation – and this doesn’t have to slow down innovation.  

“When you introduce regulations that are embedded with what is needed, things are done the right way. And when they’re done the right way, they’re more efficient and there is more room for innovation.” 

For businesses looking to incorporate machine learning, rather than building it, he points to The Institute for Ethical AI & Machine Learning’s ‘AI-RFX Procurement Framework’. 

The idea is to take the high-level principles created at The Institute – such as the human augmentation mentioned earlier, and trust and privacy by design – and break them down into a security questionnaire.  

“We’ve taken all of these principles, and we realised that understanding and agreeing on exact best-practice is very hard. What is universally agreed is what bad practice is.”  

This, along with access to the right stakeholders to evaluate the data and content, is enough to sort mature AI businesses from those “selling snake oil”.  

The institute is also contributing to some of the official industry standards being created for organisations like the police and with bodies like the ISO, he explains. 

And the work is far from done: for a basic framework and regulation to succeed well enough to be adopted internationally, even the differing ethical traditions of West and East need to be accounted for. 

“In the West you have good and bad, and in the East it is more about balance,” he says. 

There are also the differing concepts of the self versus the community. The considerations quickly become philosophical and messy – a sign that they are a little bit more human.   

“If we want to reach international standards and regulation, we need to be able to align on those foundational components, to know where everyone is coming from,” he says. 


Credit: Google News
