High-quality Deepfake Videos Made with AI Seen as a National Security Threat

January 24, 2020
in Artificial Intelligence

Deepfake videos so realistic that they cannot be detected as fakes have the FBI concerned that they pose a national security threat. (GETTY IMAGES)

By AI Trends Staff


The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.

The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. “What we’re concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning systems,” stated Chris Piehota, executive assistant director of the FBI’s science and technology division, in an account in WSJPro.

The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned that national security could be compromised by fraudulent videos created to mimic public figures. “As the AI continues to improve and evolve, we’re going to get to a point where there’s no discernible difference between an AI-generated video and an actual video,” Piehota stated.

Chris Piehota, executive assistant director, FBI science and technology division

The word ‘deepfake’ is a portmanteau of “deep learning” and “fake.” It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person’s likeness.

The FBI has created its own deepfakes in a test lab, and these artificial personas have been able to pass some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3-D printers powered with AI models can be used to copy someone’s fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.

Threat to US Elections Seen

Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitude of voters. The AI-enhanced deepfakes can undermine the public’s confidence in democratic institutions, even if proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.

“It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that is deadly for democracy,” she stated in the WSJ Pro account.

Suzanne Spaulding, senior adviser, Center for Strategic and International Studies

Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple, according to an account in Live Science.

A GAN algorithm pits two AI models against each other: one generates content such as photo images, and an adversary tries to guess whether the images are real or fake. The generator starts at a disadvantage, meaning its adversary can easily distinguish real photos from fake ones. But over time, the generator gets better and begins producing content that looks lifelike.
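The adversarial loop described above can be sketched in a few lines. The example below is a toy illustration rather than a real image-generating GAN: the "generator" is a one-parameter linear map trying to match a 1-D Gaussian, the "discriminator" is a logistic classifier, and the gradients are derived by hand. All parameter names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through g(z) = w_g * z + b_g and tries to make its output look real;
# the discriminator d(x) = sigmoid(w_d * x + b_d) tries to tell them apart.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, n = 0.05, 64

for step in range(500):
    z = rng.standard_normal(n)
    real = 4.0 + rng.standard_normal(n)
    fake = w_g * z + b_g

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - p_real) * real) + np.mean(-p_fake * fake))
    b_d += lr * (np.mean(1 - p_real) + np.mean(-p_fake))

    # Generator step: ascend log d(fake) (the "non-saturating" objective),
    # i.e. push generated samples toward where the discriminator says "real"
    p_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - p_fake) * w_d * z)
    b_g += lr * np.mean((1 - p_fake) * w_d)

# As training proceeds, the generator's offset drifts toward the real mean
print(f"learned generator shift b_g = {b_g:.2f}")
```

The same two-player dynamic, scaled up to deep convolutional networks and image data, is what produces photorealistic deepfake faces.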

For an example, see www.thispersondoesnotexist.com, which uses an NVIDIA-developed GAN to create completely fake, and completely lifelike, photos of people.

Example material is starting to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN to alter a video of former President Barack Obama so that his lips moved in sync with the words of a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, a deepfake system generated realistic movies of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files to splice new words into a video of a person talking, making it appear they said something they never said.

All this will cause attentive viewers to be more wary of content on the internet.

High tech is trying to field a defense against deepfakes.

Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.

The clips Google released were created in collaboration with Alphabet subsidiary Jigsaw. They focused on technology and politics, featuring paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.
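With a labeled benchmark like this, evaluating a filtering tool reduces to standard detection metrics: of the clips the filter flags, how many are really deepfakes (precision), and of the deepfakes present, how many does it catch (recall)? A minimal sketch, using made-up ground-truth labels and detector predictions (1 = deepfake, 0 = real):

```python
# Hypothetical ground-truth labels for ten benchmark clips, and the
# verdicts of a detector being evaluated against them.
truth = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
preds = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]

tp = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 1)  # caught fakes
fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)  # missed fakes

precision = tp / (tp + fp)  # flagged clips that really are deepfakes
recall = tp / (tp + fn)     # deepfakes the detector actually caught
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f1, 3))  # prints: 0.8 0.8 0.8
```

Spam filtering is judged the same way, which is why the article's analogy is apt: a deepfake filter that flags everything has perfect recall but useless precision.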

Some researchers are skeptical this approach will be effective. “The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” stated Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the State University of New York at Albany, according to an account in VentureBeat.

Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose; and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.

“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” noted Facebook CTO Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”

The data set contains 100,000-plus videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer. The data does not include any personal user identification and features only participants who have agreed to have their images used. Access to the dataset is gated so that only teams with a license can use it.

The Deepfake Detection Challenge is overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.

Read the source articles in WSJPro, Live Science, Wired and VentureBeat.

Credit: AI Trends. By Benjamin Ross.
