Facebook has deleted 18 million pieces of terrorist content in the first three quarters of this year.
The social media company’s policy lead for counter-terrorism and dangerous organisations, Dr Erin Marie Saltman, revealed how ‘machine learning’ was helping the company to censor content from groups including Islamists and white supremacists.
In a speech yesterday at the Institute of International and European Affairs (IIEA) in Dublin, Dr Saltman explained how the company uses the United Nations list when identifying terrorist organisations.
“That is not just Daesh and al-Qa’ida; that does include our entire list of terrorist organisations, which includes some of the white supremacy groups and some of the more regionally located groups,” she said.
Between 98pc and 99pc of what is removed from the site is found by Facebook itself, according to Dr Saltman.
“What that means is that our machine learning tools or our teams that do investigations found the content before anyone flagged it to us.
“So it wasn’t flagged by government or community members. It was flagged internally.”
She said the company had more than 350 people working just on terrorism and dangerous organisations and 35,000 on safety and operations teams around the world.
The company’s office in Dublin employs almost 5,000 people, including staff in content policy and moderation, and is its biggest outside California.
“They might look at a lot of different harm types but they’re reviewing content and they’re getting constantly updated training,” she said.
“We see lots of shifts, we see lots of group names changing,” she said.
One way the company uses machine learning is to identify a specific image and then, if it is deemed necessary to censor, have every copy of it removed across the platform.
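For context on how such matching can work at scale, below is a minimal sketch using a simple perceptual "average hash": near-identical copies of a banned image produce near-identical hashes, so one takedown decision can be applied to every re-upload. The file names, threshold, and helper names here are illustrative assumptions; the article does not describe Facebook's actual system, which relies on more robust industry hashing techniques.

```python
# A minimal sketch of platform-wide image matching, assuming a simple
# perceptual "average hash". File names, the threshold, and the in-memory
# set standing in for a hash database are all illustrative; this is not
# Facebook's actual system.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to a size x size greyscale grid and encode each
    pixel as one bit: 1 if brighter than the grid's mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_banned(path: str, banned: set[int], threshold: int = 5) -> bool:
    """Flag an upload whose hash lies within `threshold` bits of any
    banned hash, so re-encodes and minor crops still match."""
    h = average_hash(path)
    return any(hamming(h, b) <= threshold for b in banned)


if __name__ == "__main__":
    # Hypothetical workflow: hash one image already ruled violating,
    # then screen a new upload against that set.
    banned = {average_hash("known_violating.jpg")}
    print(matches_banned("new_upload.jpg", banned))
```

Comparing hashes by Hamming distance rather than exact equality is what lets one decision cover slightly altered re-uploads, at the cost of a tunable false-positive rate.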
Dr Saltman said the company had a programme on “counter-speech” to help people use Facebook to speak out against hatred and extremism.