Hey Alexa! Sorry I fooled you …

February 8, 2020
in Machine Learning

A human can likely tell the difference between a turtle and a rifle. Two years ago, Google’s AI wasn’t so sure. For quite some time, a subset of computer science research has been dedicated to better understanding how machine-learning models handle these “adversarial” attacks, which are inputs deliberately created to trick or fool machine-learning algorithms. 

While much of this work has focused on speech and images, recently, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) tested the boundaries of text. They came up with “TextFooler,” a general framework that can successfully attack natural language processing (NLP) systems — the types of systems that let us interact with our Siri and Alexa voice assistants — and “fool” them into making the wrong predictions. 

One could imagine using TextFooler for many applications related to internet safety, such as email spam filtering, hate speech flagging, or “sensitive” political speech text detection — which are all based on text classification models. 

“If those tools are vulnerable to purposeful adversarial attacking, then the consequences may be disastrous,” says Di Jin, MIT PhD student and lead author on a new paper about TextFooler. “These tools need to have effective defense approaches to protect themselves, and in order to make such a safe defense system, we need to first examine the adversarial methods.” 

TextFooler works in two parts: altering a given text, and then using that altered text to test two different language tasks, checking whether it can successfully trick machine-learning models into making the wrong predictions.

The system first identifies the words that most influence the target model’s prediction, and then selects synonyms that fit the context. It does this while maintaining grammar and the original meaning, so the text still looks “human” enough, and keeps substituting until the prediction is altered.
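
To make that first step concrete, here is a minimal sketch of a deletion-based word-importance ranking in Python. The `predict_proba` function is a hypothetical stand-in for the target model's scoring API (returning the model's confidence in its originally predicted label), not part of the released TextFooler code:

```python
# Deletion-based word-importance ranking, sketched from the description above.
# `predict_proba` is a hypothetical stand-in for the target model: it takes a
# string and returns the model's confidence in its originally predicted label.

def rank_words_by_importance(words, predict_proba):
    """Order word indices by how much deleting each word drops model confidence."""
    base = predict_proba(" ".join(words))
    drops = []
    for i in range(len(words)):
        ablated = words[:i] + words[i + 1:]      # sentence with word i removed
        drops.append((base - predict_proba(" ".join(ablated)), i))
    drops.sort(reverse=True)                     # biggest confidence drop first
    return [index for _, index in drops]
```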

Then, the framework is applied to two different tasks: text classification, and entailment (whether one text fragment logically follows from another), with the goal of changing the classification or invalidating the entailment judgment of the original models.
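
Combining that ranking with synonym substitution gives a greedy attack loop like the sketch below, which reuses `rank_words_by_importance` from above. `predict_label`, `predict_proba`, and `get_synonyms` are placeholders rather than the paper's actual components; the real TextFooler also filters candidate synonyms for semantic similarity and grammatical fit before accepting them:

```python
def greedy_attack(text, predict_label, predict_proba, get_synonyms):
    """Swap high-importance words for synonyms until the predicted label flips."""
    words = text.split()
    original_label = predict_label(text)
    for i in rank_words_by_importance(words, predict_proba):
        best, best_conf = None, predict_proba(" ".join(words))
        for candidate in get_synonyms(words[i]):   # contextually fitting synonyms
            trial = words[:i] + [candidate] + words[i + 1:]
            conf = predict_proba(" ".join(trial))  # confidence in the original label
            if conf < best_conf:
                best, best_conf = trial, conf
        if best is not None:
            words = best                           # commit the most damaging swap
            if predict_label(" ".join(words)) != original_label:
                return " ".join(words)             # adversarial example found
    return None                                    # label never flipped; attack failed
```

For entailment, the same loop can be applied to one of the two text fragments while the other is held fixed.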

In one example, TextFooler’s input and output were:

“The characters, cast in impossibly contrived situations, are totally estranged from reality.” 

“The characters, cast in impossibly engineered circumstances, are fully estranged from reality.” 

In this case, when tested on an NLP model, the model classifies the original input correctly but gets the modified input wrong.

In total, TextFooler successfully attacked three target models, including “BERT,” the popular open-source NLP model. It drove the target models’ accuracy from over 90 percent down to under 20 percent by changing only 10 percent of the words in a given text. The team evaluated success on three criteria: whether the attack changed the model’s prediction for classification or entailment; whether the altered text read as similar in meaning to the original, as judged by human readers; and whether the text looked natural enough.
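
Those headline numbers map onto simple quantities: the model's accuracy before and after the attack, and the fraction of words changed. Here is one way such a tally might look, assuming paired lists of original and adversarial texts; every name in this sketch is illustrative rather than taken from the paper's code:

```python
def attack_metrics(originals, adversarials, gold_labels, predict_label):
    """Accuracy before/after attack, plus average fraction of words changed."""
    before = after = 0
    perturb_rates = []
    for orig, adv, gold in zip(originals, adversarials, gold_labels):
        before += predict_label(orig) == gold
        attacked = adv if adv is not None else orig  # failed attacks keep the original
        after += predict_label(attacked) == gold
        if adv is not None:
            pairs = zip(orig.split(), adv.split())
            perturb_rates.append(sum(a != b for a, b in pairs) / len(orig.split()))
    n = len(originals)
    return {"accuracy_before": before / n,           # over 90 percent pre-attack
            "accuracy_after": after / n,             # under 20 percent post-attack
            "avg_words_changed": sum(perturb_rates) / max(len(perturb_rates), 1)}
```

The other two criteria, meaning preservation and naturalness, do not reduce to a word count and are judged by human readers rather than computed.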

The researchers note that while attacking existing models is not the end goal, they hope that this work will help more abstract models generalize to new, unseen data. 

“The system can be used or extended to attack any classification-based NLP models to test their robustness,” says Jin. “On the other hand, the generated adversaries can be used to improve the robustness and generalization of deep-learning models via adversarial training, which is a critical direction of this work.” 
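
A hedged sketch of the adversarial-training direction Jin describes: fold the generated adversaries back into training so the model learns to withstand them. `train_step` and `generate_adversary` are placeholders for whatever training framework and attack are in use, not an API from the paper:

```python
def adversarial_training(model, dataset, train_step, generate_adversary, epochs=3):
    """Augment each training example with an adversarial rewrite of itself."""
    for _ in range(epochs):
        for text, label in dataset:
            train_step(model, text, label)         # ordinary supervised step
            adv = generate_adversary(model, text)  # e.g. a TextFooler-style attack
            if adv is not None:
                # Train on the adversary under the original gold label, pushing the
                # model to become invariant to meaning-preserving substitutions.
                train_step(model, adv, label)
    return model
```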

Jin wrote the paper alongside MIT Professor Peter Szolovits, Zhijing Jin of the University of Hong Kong, and Joey Tianyi Zhou of A*STAR, Singapore. They will present the paper at the AAAI Conference on Artificial Intelligence in New York. 

Credit: Google News
