McAfee CTO Steve Grobman is Wary of AI Models for Cybersecurity

February 16, 2019

Artificial intelligence continues to permeate the information security industry, but Steve Grobman is wary of the technology's limitations and of overstated claims about its effectiveness.

Grobman, senior vice president and CTO of McAfee, spoke about the evolution of artificial intelligence at the 2018 AI World Conference in Boston. McAfee has extolled the benefits of using AI models to enhance threat intelligence, which the company said is enormously valuable for detecting threats and eliminating false positives. But Grobman said he also believes AI and machine learning have limitations for cybersecurity, and he warned that the technology can be designed in a way that produces illusory results.

In a Q&A following AI World, Grobman spoke with TechTarget about the ease with which machine learning and AI models can be manipulated and misrepresented to enterprises, as well as how the barrier to entry for the technology has lowered considerably for threat actors. Here is part one of the conversation with Grobman.

Editor’s note: This interview has been edited for length and clarity.

What are you seeing with artificial intelligence in the cybersecurity field? How does McAfee view it?

Steve Grobman: Both McAfee and really the whole industry have embraced AI as a key tool to help develop a new set of cyberdefense technologies. I do think one of the things that McAfee is doing that is a little bit unique is we’re looking at the limitations of AI, as well as the benefits. One of the things that I think a lot about is how different AI is for cybersecurity defense technology compared to other industries where there’s not an adversary.


Down the street at AI World, I used the analogy that you’re in meteorology and you’re building a model to track hurricanes. As you get really good at tracking hurricanes, it’s not like the laws of physics decide to change on you, and water evaporates differently. But, in cybersecurity, that’s exactly the pattern that we always see. As more and more defense technologies are AI-based, bad actors are going to focus on techniques that are effective at evading AI or poisoning the training data sets. There are a lot of countermeasures that can be used to disrupt AI.
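
To make "poisoning the training data sets" concrete, here is a minimal sketch of the crudest form of poisoning, label flipping, on synthetic data. It is illustrative only; scikit-learn, an arbitrary classifier and arbitrary flip rates are assumptions for the demo, not McAfee's research code.

```python
# Hypothetical illustration of training-set poisoning via label flipping.
# Not McAfee code; synthetic data and an arbitrary classifier chosen for the demo.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def poisoned_accuracy(flip_fraction):
    """Flip a fraction of training labels, then score on clean test data."""
    rng = np.random.default_rng(1)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), size=int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker corrupts these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.2, 0.4):
    print(f"labels flipped: {frac:.0%}  clean-test accuracy: {poisoned_accuracy(frac):.2f}")
```

Real poisoning attacks are more targeted than uniform flipping, but the mechanism is the same: the model faithfully learns whatever the corrupted data teaches it.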

And one of the things that we found in some of our research is a lot of the AI and machine learning models are actually quite fragile and can be evaded. Part of what we’re very focused on is not only building technology that works well today, but looking at what can we do to build more resilient AI models.

One of the things that we’ve done that’s one of the more effective techniques is investigating this field of adversarial machine learning. It’s essentially the field where you’re studying the technology that would cause machine learning to fail or break down. We can then use adversarial-impacted samples and reintroduce them into our training set. And that actually makes our AI models more resilient.

Thinking about the long-term approach instead of just the near term is important. And I do think one of the things I’m very concerned about for the industry is the lack of nuanced understanding of how to look at solutions built on AI and understand whether or not they’re adding real value. And part of my concern is it’s very easy to build an AI solution that looks amazing. But unless you understand exactly how to evaluate it in detail, it actually can be complete garbage.

Speaking of understanding, there seems to be a lot of confusion about AI and machine learning and the differences between the two and what these algorithms actually do for, say, threat detection. For an area that’s received so much buzz and attention, why do you think there’s so much confusion?

Grobman: Actually, artificial intelligence is an awful name, because it’s really not intelligent, and it’s actually quite misleading. And I think what you’re observing is one of the big problems for AI — that people assume the technology is more capable than it actually is. And it is also susceptible to being presented in a very positive fashion.

I wrote a blog post a while ago; I wanted to demonstrate this concept of how a really awful model could be made to look valuable. And I didn't want to do it with cybersecurity, because I wanted to make the point with something everybody understands; cybersecurity is a nuanced and complex field. Instead, I built a machine learning model to predict the Super Bowl. It took as inputs things like regular-season record, offensive strength, defensive strength and a couple of other key inputs.

The model performed phenomenally. It correctly predicted 9 of the 10 games that were sent into the model. And the one game that it got wrong, it actually predicted both teams would win. It's actually funny — when I coded this thing up, that wasn't one of the scenarios I contemplated. It's a good example of a model [that] sometimes doesn't actually understand the nuance of the reality of the world, because you can't have both teams win.

But, other than that, it accurately predicted the games. But the reason I'm not in Vegas making tons of money on sports betting is that I intentionally built the model violating all of the sound principles of data science. I did what we call overtraining of the model: I did not hold back the test set of data from the training set. And because I trained the model on data from the same 10 games that I sent it, it actually learned who the winners of those games were, as opposed to being able to predict the Super Bowl.

If you just send it data from games that it was not trained on, you get a totally different answer. It got about 50% of the games correct, which is clearly no better than flipping a coin. The more critical point that I really wanted to make was if I was a technology vendor selling Super Bowl prediction software, I could walk in and say, ‘This is amazing technology. Let me show you how accurate it is. You know, here’s my neural network; you send in this data and, because of my amazing algorithm, it’s able to predict the outcome of the winners.’
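
The overtraining failure Grobman describes is easy to reproduce. In the sketch below (synthetic data and an arbitrary classifier, not his actual Super Bowl model), the labels are coin flips by construction, so no model can honestly beat 50%; scoring on already-seen games still looks phenomenal.

```python
# Hypothetical reproduction of the overtrained-model illusion.
# Labels are random coin flips, so nothing can genuinely beat 50%.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))     # stand-ins for record, offense, defense, ...
y = rng.integers(0, 2, size=200)  # "winners" chosen by pure chance

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The vendor demo: score on games the model was trained on.
print("seen games:  ", accuracy_score(y_train, model.predict(X_train)))
# The honest evaluation: score on held-out games.
print("unseen games:", accuracy_score(y_test, model.predict(X_test)))
```

The first number is the sales pitch; the second is the coin flip. Holding the test set back from training is the entire difference.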

And going back to cybersecurity, that’s a big part of the problem. It’s very easy to build something that tests well if the builder of the technology is able to actually create the test. And that’s why, as we move more and more into this field, having a technical evaluation of complex technology that is able to understand if it is biased — and if it is actually being tested in a way that will be representative of showing whether or not it’s effective — is going to be really, really important.

Read the source post at TechTarget.

Credit: AI Trends. By: John Desmond
