With the emergence of powerful machine learning technologies such as deepfakes and generative neural networks, it has become easier than ever to spread false information. In this article, we will briefly introduce deepfakes and generative neural networks, along with a few ways to spot AI-generated content and protect yourself against misinformation.
I have many relatives who just aren’t well-versed in technology. Some of them believe nearly anything they read, or at least believe it enough to share it on social media. That may not sound so bad, but it depends on what is being shared. Recently, a relative of mine shared a post claiming that inhaling steam could kill the coronavirus. She later told me she didn’t fully believe it, but figured it couldn’t hurt to try. In my opinion, though, sharing this kind of information can definitely cause harm. If people believed that post and felt safe socializing as long as they inhaled some steam every hour, that could lead to further spread of infection and more deaths.
With more and more free blogging platforms like Medium and Tumblr, it is easy for anyone with a laptop and an internet connection to publish online. Few laws censor what people write, including false information, whether it is spread deliberately or not; the main exceptions are hate speech and words that incite violence, which freedom of expression does not protect. It is all the more important now to remember that not everything on the Internet is true. Sadly, even “facts” reported by major news outlets are sometimes twisted to serve the political interests of the media company. Do your own research, and if you feel skeptical, do more research.
Put simply, deepfakes allow us to superimpose a face onto someone else’s body in a video. This means you can replace the face of your loved one with a famous actor and watch them star in your favorite movies. Imagine watching Braveheart starring your dad instead of Mel Gibson. While that is a fun way to use deepfakes, they can be and have been used for much more nefarious purposes.
In an article last year, I covered the U.S. House Intelligence Committee hearing on deepfakes. Governments are justifiably worried that deepfakes could be used to create propaganda videos and spread misinformation quickly and easily. It’s one thing to doubt an article on a random website, and a completely different thing when a leader of a country appears in a video saying outlandish things. People believe videos more readily because they are harder to fake. Deepfake technology, however, makes creating fake videos incredibly easy.
In some countries, politicians have already begun using deepfakes to strengthen their campaigns. In India, politician Manoj Tiwari used deepfakes to create the same video in multiple languages to target different voting pools. See an example of this in the video below.
While in this example deepfakes were used to strengthen a campaign, it is far scarier when they are used to harm someone’s reputation. Deepfakes have been used to superimpose celebrities’ faces onto pornographic videos. There was even a case where a journalist’s reputation was ruined by a pornographic deepfake seen by her friends, family, and social media followers.
This video below sums up the dangers of deepfakes quite nicely:
Imagine if someone created a deepfake showing politicians saying or doing insidious things right before an election. Deepfake technology has the potential to spread misinformation widely, but luckily there are ways to spot it.
At the moment, numerous deepfake detection algorithms and tools are being developed to combat the issue; Reality Defender and Deepstar are two projects worth following. However, most of these tools are still in development and not yet ready for public use.
Until they are, there are ways you can protect yourself from fake content.
1. Investigate the Source of the Video
Was the video posted by a reputable source? Is it hosted on the websites or social media accounts of major news publications, or did it come from a no-name site you’ve never heard of? Chances are that if there is a video of a celebrity or politician doing something newsworthy, it will be covered by major publications.
Do not use social media as your bible. Social media is one of the easiest places to spread misinformation. While many platforms have completely banned the posting of deepfake videos, it is still easy for them to slip through the content moderation systems.
2. Take a Screenshot and Perform a Reverse Image Search
One great way to investigate a video you suspect to be fake is to simply take a screenshot of the video and use it to do a reverse image search.
Google will return any articles that have that video or image and you can tell by the published dates whether or not a recent video is a result of a manipulation of a previous video. You can also check to see if reputable and impartial news publications have already debunked the video as a fake.
To do this, follow these steps:

On Windows:

- Take a screenshot of the video by pressing CTRL + PrtSc
- Open the Paint application
- Press CTRL + V to paste the screenshot
- Crop the screenshot and save the image
- Head to Google.com and click on “Images” in the top right corner
- Click the camera icon and upload your screenshot

On Mac:

- Take a screenshot of the video by pressing Command + Shift + 4
- Draw a box around the video (the screenshot automatically saves to the desktop)
- Head to Google.com and click on “Images” in the top right corner
- Click the camera icon and upload your screenshot
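If the frame you want to check is already hosted online (for example, embedded in the article you are investigating), you can also construct the reverse-image-search URL yourself. The sketch below is a minimal Python example; the `searchbyimage` endpoint and the example image URL are assumptions for illustration, not an official Google API.

```python
import urllib.parse
import webbrowser

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for an image that is
    already hosted online (endpoint assumed, not an official API)."""
    return ("https://www.google.com/searchbyimage?"
            + urllib.parse.urlencode({"image_url": image_url}))

if __name__ == "__main__":
    # Hypothetical frame grabbed from a suspicious video
    url = reverse_image_search_url("https://example.com/frame.jpg")
    webbrowser.open(url)  # opens the results page in your default browser
```

This saves the screenshot-and-upload steps when the image is already on the web, but for a video playing on your screen the manual screenshot route above is still the simplest option.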
3. Think About the Agenda
This is, in a way, the entire message of this article. Whether you are viewing a deepfake video or a Hollywood film, think about the possible agenda behind the content you consume. If a random video shows a famous pro-choice activist making pro-life statements, maybe that video was created to make you doubt them.
Also remember that not all agendas are malicious. Some of my favorite sci-fi films like Armageddon or The Martian were basically huge advertisements for NASA, but I still found a way to enjoy them.
While deepfakes pose threats of their own, generative networks may be even more disruptive. Generative networks are AI algorithms that can create new content, whether images, audio, video, or text. Deepfakes, for example, use generative adversarial networks to create believable videos, and OpenAI recently released GPT-3, an incredibly powerful neural network that can generate text. To see just how natural-sounding the generated text can be, take a look at this article written entirely by GPT-2 (the predecessor of GPT-3).
Before, if you wanted to trick someone with fake news in a written article, you had to write well enough to seem reputable. With GPT-2, however, even people who barely speak English can create fake news articles in minutes. Below is an example of how easy it is.
The following is a fake news article I created using GPT-2. The generated text ends at the line that reads “End of GPT-2 Output” below.
At a February 15, 2003 forum at the Hoover Institution, Alan Greenspan of the Federal Reserve, was asked, “When you began serving as chairman of the Federal Reserve, did you have any evidence that suggested that the attacks of September 11, 2001 were related to the desire to destroy the World Trade Center buildings and the Pentagon?” To this, Greenspan answered, “No.” A subsequent question posed to him was, “Do you believe it was an inside job, done by elements within the U.S. government?” Greenspan replied, “Yes.”
A Declassified letter from the Department of Defense reads, “Revelation that the Muslim Brotherhood penetrated the U.S. government prior to the presidential election is of the gravest importance as it has been confirmed by our own DIA that the Brotherhood controls key positions in the government and intelligence community.”
The president said in a new interview he believes the September 11 attacks were not carried out by Islamic terrorists, but rather by the U.S. government.
President Trump made the statements to Bill O’Reilly in a new interview on “The O’Reilly Factor.” We dug into the comment, but Trump didn’t back it up.
In another tweet, Trump wrote: “9/11 was a terrorist attack. The World Trade Center was almost certainly brought down by government officials that were there to protect it. … @realDonaldTrump needs to clean house immediately. — Donald J. Trump“
During his campaign, Trump repeatedly called for a total and complete ban on Muslims entering the United States and has since appeared to soften his position on his proposals to temporarily bar the entry of people from seven Muslim-majority countries.
One of President Trump’s top advisors told a group of former National Security Council (NSC) staffers that he believes U.S. foreign policy on Iran was greatly influenced by the 2003 invasion of Iraq, according to new reports.
The White House counsellor to President Trump, Keith Kellogg, said he believes former President George W. Bush’s decision to invade Iraq led to a “major pivot” in U.S. foreign policy towards Iran, according to a report published Monday evening by The Atlantic.
The White House official said that the majority of the NSC staff would agree with Kellogg, adding that it is the president’s opinion.
Based on information obtained from several high level sources in the national security community, as well as the very recent publication of the 9/11 Commission report, it is now widely accepted that President George W. Bush and his top advisors did indeed have advance knowledge of the 9/11 terror attacks that were perpetrated by a small group of Saudi Arabians.
Both the Saudi Arabian government and the Bush family have strenuously denied this but, as several current and former government officials and FBI agents have recently told CBS News, ‘It’s impossible to definitively rule it out.’
The CIA, FBI and National Security Agency, which were supposed to be the “eyes and ears” of the United States, apparently had information as early as September 2000 indicating that four of the hijackers had received some type of training in the United States, and that they planned to hijack American commercial airliners to bring down the buildings.
More than a year after the administration declassified thousands of pages of secret documents relating to the 9/11 attacks, a new petition is now demanding the release of all of the remaining material, including the 28 redacted pages.
On Sunday, May 18, 2016, the families of 9/11 victims and survivors filed a petition with White House officials. In a two-page letter sent to the White House Friday, a group of 17 family members and survivors of the attacks criticized Trump’s decision to keep “sensitive, classified information” from the public that their loved ones died fighting.
“To have classified information keep going out to the American public with no public policy consequence is a threat to democracy,” reads the letter, written by the wife of Mike Hennigan, one of the attack’s air traffic controllers. “It is a fundamental right and responsibility of a democratic government to have open, transparent and detailed official records of their affairs.” — End of GPT-2 Output
The fact that I was able to create the above fake article in about 20 minutes is incredible and a little scary. Keep in mind that I have no formal education in machine learning. All I had to do was find an implementation of GPT-2 hosted online, type in the headings as prompts, and click the “generate text” button until the network produced something half-decent.
Is the article perfect? No. Does it sound off at times? Yes. Is it good enough to make people believe the fake news within it? Definitely.
In the end, this article was not meant to scare you. I merely wanted to raise awareness of the AI technologies already out there that could be used to spread false information. I hope that by reading this, you will become more vigilant about double-checking the facts and claims you read or watch online. While deepfakes and generative networks are powerful, they are not foolproof. There are many ways you can make yourself more aware of both deepfake videos and AI-generated text.
In terms of detecting AI-generated text, the creators of GPT-2 released an open-source model designed to detect text written by GPT-2; it is available on GitHub. For those of us with no coding experience, there is a convenient Google Chrome plugin called GPTrue or False, which is incredibly easy to use. After installing it, simply highlight suspicious text and click the GPTrue or False icon. The plugin will then give you a percentage estimate of whether the text was written by a human (see example below).
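For readers comfortable with a little code, the released detector can also be queried through the Hugging Face transformers library. The sketch below is a hedged example: the model name (`roberta-base-openai-detector`) and the `{"label": ..., "score": ...}` result format are assumptions about how the detector is hosted, not an official API, so treat it as illustrative only.

```python
def looks_generated(result: dict, threshold: float = 0.9) -> bool:
    """Interpret a text-classification result of the assumed form
    {"label": "Real" or "Fake", "score": probability}. Only flag text
    as generated when the classifier is confidently saying "Fake"."""
    return result["label"] == "Fake" and result["score"] >= threshold

# Hypothetical usage (requires `pip install transformers` and downloads
# the model on first run; model name is an assumption):
#   from transformers import pipeline
#   detector = pipeline("text-classification",
#                       model="roberta-base-openai-detector")
#   result = detector("Paste the suspicious paragraph here.")[0]
#   print(looks_generated(result))
```

The threshold matters: these classifiers output a confidence score, not a verdict, so a borderline score should prompt more research rather than a firm conclusion.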
The first example shown is fake text from the fake article above, written using GPT-2. The second is real text from the paragraph above, written by me.
Unfortunately, this Chrome plugin did not work on most of the websites I tested it on. This could be due to security measures on those websites that block bots from crawling the page.
To work around this, follow these instructions:
- Sign in to your email account via Google Chrome web browser
- Copy and paste the suspected text into an email
- Send the email to yourself
- Open the email and highlight the text
- Click the GPTrue or False plugin icon
I tested the method above with both the Microsoft Outlook and Gmail web email clients, and the plugin worked just fine.
For those of you who have never heard of deepfakes or GPT-2, I know this must be a lot to take in. While these new technologies do pose some threats to our society, some of the greatest minds in the world are working on ways to create more convenient and widespread methods of instantly detecting AI-generated videos and text.
Hopefully, in the near future, all online videos will come with a built-in detector that can tell viewers whether or not the videos are original videos or manipulated. Until then, all you need to do is exercise caution on social media and other outlets online. If something seems suspicious, investigate it. If you are going to use knowledge you learned from an article to inform an important decision, make sure that the facts are solid. As new technologies emerge, we will constantly be finding ourselves in this position of adjusting to new norms and we must learn to seek the truth out for ourselves. Don’t be afraid, just be vigilant.