You can trick image-recog AI into, say, mixing up cats and dogs – by abusing scaling code to poison training data • The Register

March 21, 2020
in Machine Learning

Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering.

Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers slated for presentation at technical conferences in May and August this year – events that may or may not take place given the COVID-19 global health crisis.


The papers, titled “Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning” [PDF] and “Backdooring and Poisoning Neural Networks with Image-Scaling Attacks” [PDF], explore how the preprocessing phase involved in machine learning presents an opportunity to fiddle with neural network training in a way that isn’t easily detected. The idea: secretly poison the training data so that the software later makes bad decisions and predictions.

This example image of a cat, provided by the academics, has been modified so that when downscaled by an AI framework for training, it turns into a dog, thus muddying the training dataset

Numerous research projects have demonstrated that neural networks can be manipulated into returning incorrect results, but the researchers say such interventions can be spotted at training or test time through auditing.

“Our findings show that an adversary can significantly conceal image manipulations of current backdoor attacks and clean-label attacks without an impact on their overall attack success rate,” explained Quiring and Rieck in the Backdooring paper. “Moreover, we demonstrate that defenses – designed to detect image scaling attacks – fail in the poisoning scenario.”

Their key insight is that the algorithms used by AI frameworks for image scaling – a common preprocessing step to resize images in a dataset so they all have the same dimensions – do not treat every pixel equally. Instead, these algorithms, found in the imaging libraries the frameworks rely on (OpenCV for Caffe, tf.image for TensorFlow, and Pillow for PyTorch), consider only a third of the pixels when computing the scaled image.

“This imbalanced influence of the source pixels provides a perfect ground for image-scaling attacks,” the academics explained. “The adversary only needs to modify those pixels with high weights to control the scaling and can leave the rest of the image untouched.”
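
To see why that imbalance matters, consider the simplest case, nearest-neighbour downscaling: each output pixel is copied from a single source pixel, so the vast majority of the source never touches the result. The NumPy sketch below illustrates the idea; the exact coordinates a given library samples vary by implementation, so the offsets here are illustrative only.

import numpy as np

# Nearest-neighbour downscaling of a 9x9 "image" to 3x3 reads just one
# source pixel per output pixel. The sampling grid (every third row and
# column) is an assumption for clarity; real libraries map coordinates
# slightly differently.
src = np.arange(81, dtype=np.uint8).reshape(9, 9)  # stand-in image
factor = 3                                         # 9x9 -> 3x3

rows = np.arange(3) * factor   # only rows 0, 3, 6 are ever read
cols = np.arange(3) * factor   # only columns 0, 3, 6 are ever read
small = src[np.ix_(rows, cols)]
print(small)  # 9 pixels decide the output; the other 72 are ignored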

On their explanatory website, the eggheads show how they were able to modify a source image of a cat, without any visible sign of alteration, to make TensorFlow’s nearest scaling algorithm output a dog.
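
For the curious, here is a hedged Python sketch of the mechanics behind that demo, not the authors' code: plant the target image's pixels at exactly the coordinates nearest-neighbour sampling will read, and leave the rest of the source untouched. The real attack described in the papers additionally solves an optimization problem so the planted pixels stay imperceptible; the helper below (poison_nearest, our name) shows only the raw substitution, under the simplifying assumption of integer scale factors and the same sampling grid as the previous sketch.

import numpy as np

def poison_nearest(cat: np.ndarray, dog: np.ndarray) -> np.ndarray:
    """Return a copy of `cat` that nearest-downscales to `dog`.

    Overwrites only the pixels the downscaler samples, so the
    full-size image still looks like `cat` to a human reviewer.
    """
    out = cat.copy()
    sy = cat.shape[0] // dog.shape[0]   # integer scale factors assumed
    sx = cat.shape[1] // dog.shape[1]
    rows = np.arange(dog.shape[0]) * sy
    cols = np.arange(dog.shape[1]) * sx
    out[np.ix_(rows, cols)] = dog       # only ~1/(sy*sx) of pixels change
    return out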

This sort of poisoning attack during the training of machine learning systems can result in unexpected output and incorrect classifier labels. Adversarial examples can have a similar effect, the researchers say, but those only work against a specific machine learning model.

Image scaling attacks “are model-independent and do not depend on knowledge of the learning model, features or training data,” the researchers explained. “The attacks are effective even if neural networks were robust against adversarial examples, as the downscaling can create a perfect image of the target class.”

The attack has implications for facial recognition systems in that it could allow a person to be identified as someone else. It could also be used to meddle with machine learning classifiers such that a neural network in a self-driving car could be made to see an arbitrary object as something else, like a stop sign.

To mitigate the risk of such attacks, the boffins say the area scaling capability implemented in many scaling libraries can help, as can Pillow’s scaling algorithms (so long as its nearest scheme is avoided). They also discuss a defense technique that involves image reconstruction.
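
As a rough illustration of why area scaling helps: it averages every source pixel in each block, so a handful of planted pixels are diluted rather than copied straight through. The block-mean sketch below captures the idea in plain NumPy, assuming an integer downscaling factor; OpenCV exposes the same behaviour as cv2.INTER_AREA.

import numpy as np

def downscale_area(img: np.ndarray, factor: int) -> np.ndarray:
    """Block-average downscaling for an (H, W, C) image.

    Every source pixel contributes equally to the output, so a sparse
    grid of poisoned pixels is blended away instead of copied through.
    """
    h, w = img.shape[0] // factor, img.shape[1] // factor
    blocks = img[:h * factor, :w * factor].reshape(h, factor, w, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)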

The researchers plan to publish their code and dataset on May 1, 2020. They say their work shows the need for more robust defenses against image-scaling attacks, and they observe that other types of data that get scaled, such as audio and video, may be vulnerable to similar manipulation in the context of machine learning. ®

Credit: Google News
