What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

June 24, 2020

It’s a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It’s not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: “This image speaks volumes about the dangers of bias in AI.”

But what’s causing these outputs and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the “zoom and enhance” trope you see in TV and film, but, unlike in Hollywood, real software can’t just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.
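
To see why, consider what classical interpolation does. Here is a minimal sketch using Pillow (the file path is a hypothetical placeholder): bicubic resizing just blends the pixels that are already there, so the output is larger but no more detailed, which is exactly the gap machine learning is used to fill.

```python
from PIL import Image

# Hypothetical placeholder path for a low-resolution face, e.g. 32x32 pixels.
img = Image.open("low_res_face.png")

# Bicubic interpolation only blends pixels that already exist: the result
# is bigger and smoother, but contains no new information.
naive = img.resize((img.width * 32, img.height * 32), Image.Resampling.BICUBIC)
naive.save("naive_upscale.png")
```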

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you’re probably familiar with its work. It’s the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they’re often used to generate fake social media profiles.

A sample of faces created by StyleGAN, the algorithm that powers PULSE.

What PULSE does is use StyleGAN to “imagine” the high-res version of pixelated inputs. It does this not by “enhancing” the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each pixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It’s also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It’s not that the algorithm is “finding” new detail in the image as in the “zoom and enhance” trope; it’s instead inventing new faces that, when downscaled, revert to the input data.
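
In code, the core idea looks roughly like the sketch below: search StyleGAN’s latent space for a face whose downscaled version matches the low-res input. This is a simplified illustration of the technique, not PULSE’s actual implementation; `load_stylegan_generator` and `load_low_res_image` are hypothetical helpers, and the real PULSE adds constraints to keep the search on the latent manifold.

```python
import torch
import torch.nn.functional as F

generator = load_stylegan_generator()  # hypothetical: latent vector -> high-res face
low_res = load_low_res_image()         # hypothetical: e.g. a 1x3x32x32 tensor

z = torch.randn(1, 512, requires_grad=True)  # random starting point in latent space
opt = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    face = generator(z)  # candidate high-res face
    down = F.interpolate(face, size=low_res.shape[-2:], mode="bicubic",
                         align_corners=False)
    loss = F.mse_loss(down, low_res)  # does the candidate pixelate to the input?
    opt.zero_grad()
    loss.backward()
    opt.step()

# `face` is now a brand-new high-res face that merely looks like the input
# when downscaled -- not a recovered version of the original photo.
```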

This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That’s when the racial disparities started to leap out.

PULSE’s creators say the trend is clear: when the algorithm is used to scale up pixelated images, it more often generates faces with Caucasian features.

“This bias is likely inherited from the dataset”

“It does appear that PULSE is producing white faces much more frequently than faces of people of color,” wrote the algorithm’s creators on GitHub. “This bias is likely inherited from the dataset StyleGAN was trained on […] though there could be other factors that we are unaware of.”

In other words, because of the data StyleGAN was trained on, when it’s trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it’s one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data outside that demographic it performs poorly. Not coincidentally, it’s white men who dominate AI research.

But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they’re so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren’t sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below:

I had to try my own method for this problem. Not sure if you can call it an improvement, but by simply starting the gradient descent from different random locations in latent space you can already get more variation in the results. pic.twitter.com/dNaQ1o5l5l

— Mario Klingemann (@quasimondo) June 21, 2020

These faces were generated using “the same concept and the same StyleGAN model” but a different search method than PULSE’s, says Klingemann, who argues we can’t really judge an algorithm from just a few samples. “There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally ‘correct,’” he told The Verge.
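
Klingemann’s tweak amounts to a small change to a search like the one sketched above: restart the optimization from several random points in latent space rather than one. Again a hypothetical sketch, with `optimize_to_match` standing in for the optimization loop shown earlier:

```python
import torch

# Different random starting points converge to different faces, all of
# which downscale to (roughly) the same pixelated input.
candidates = []
for seed in range(8):
    torch.manual_seed(seed)
    z = torch.randn(1, 512, requires_grad=True)
    candidates.append(optimize_to_match(z, low_res))  # hypothetical helper
```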

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it’s not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased — something that the researchers didn’t notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

“People of color are not outliers. We’re not ‘edge cases’ authors can just forget.”

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. “Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems,” says Raji. “People of color are not outliers. We’re not ‘edge cases’ authors can just forget.”

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook’s chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that “ML systems are biased when data is biased,” and adding that this sort of bias is a far more serious problem “in a deployed product than in an academic paper.” The implication being: let’s not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun’s framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using “correct” data does not deal with the larger injustices.

Even “unbiased” data can produce biased results

Others noted that even from the point of view of a purely technical fix, “fair” datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, “fair” datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)
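
A toy simulation makes this concrete. In the sketch below (illustrative assumptions, not data from the article), one model is trained on an exactly “representative” 87/13 mix of two groups whose label-feature relationships differ; it fits the majority’s pattern and underperforms on the minority, even though the dataset faithfully mirrors the population:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 20_000, 10
majority = rng.random(n) < 0.87     # ~87/13 split, echoing the UK example
X = rng.normal(size=(n, d))
w_major = rng.normal(size=d)        # each group's labels follow a
w_minor = rng.normal(size=d)        # different underlying rule
y = np.where(majority, X @ w_major > 0, X @ w_minor > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)  # one model for everyone
pred = clf.predict(X)
print("majority accuracy:", (pred[majority] == y[majority]).mean())
print("minority accuracy:", (pred[~majority] == y[~majority]).mean())
# Typically prints high accuracy for the majority and near-chance
# accuracy for the minority group.
```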

Raji tells The Verge she was also surprised by LeCun’s suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

“Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize,” says Raji. “I literally cannot understand how someone in that position doesn’t acknowledge the role that research has in setting up norms for engineering deployments.” The Verge reached out to LeCun for comment, but didn’t receive a response prior to publication.

Many commercial AI systems, for example, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn’t that it exposes a single flaw in a single algorithm; it’s that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It’s a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: “In case it needed to be said explicitly – This isn’t a call for ‘diversity’ in datasets or ‘improved accuracy’ in performance – it’s a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place.”


