Facebook on Tuesday announced it will implement various policy changes that aim to prevent the spread of terrorist and extremist content.
The changes to Facebook’s content policies were prompted by the Christchurch Call that took place in May. At the time, New Zealand Prime Minister Jacinda Ardern rallied 17 other governments and eight tech companies, including Facebook, to jointly agree upon a set of commitments and ongoing collaboration to eliminate terrorist and violent extremist content online, following the horrific terrorist attack in Christchurch.
The policy changes include an updated definition of terrorist organisations, improved technology use when detecting harmful online content, and an expansion of its content reviewing process.
Facebook said its new definition of terrorist organisations will not only focus on organisations that commit violent acts, but also those that act with the “intent to coerce, intimidate and/or influence a civilian population, government, or international organisation”.
This means, according to Facebook, content that attempts to promote violence, particularly when directed toward civilians with the intent to coerce and intimidate, will be banned by the social platform.
In addition, the AI techniques Facebook previously used to remove content from terrorist groups such as ISIS and Al-Qaeda will be expanded and applied to a wider range of dangerous organisations, including white supremacist groups.
Facebook has banned over 200 white supremacist organisations since it prohibited white supremacist, nationalist, and separatist content on Facebook and Instagram in March.
When it introduced that ban, Facebook also added a feature to its search function so that users in the United States who searched for such content would be redirected to resources that help people leave hate groups. As part of the newly announced policy changes, this feature has been expanded to two more countries.
People residing in Australia and Indonesia who search for terms associated with hate and extremism will be redirected to EXIT Australia and ruangobrol.id, respectively.
Facebook’s content reviewing teams will also start identifying content from people and organisations that proclaim or engage in violence leading to real-world harm, rather than focusing solely on counterterrorism.
See also: Morrison sells Australia’s terrorism video streaming plan to the G20
“This new structure was informed by a range of factors, but we were particularly driven by the rise in white supremacist violence and the fact that terrorists increasingly may not be clearly tied to specific terrorist organisations before an attack occurs,” Facebook said.
Ardern has welcomed the policy changes made by Facebook to improve its measures for preventing harm caused online, saying it highlights that “real change is happening”.
“These are the kinds of efforts the Christchurch Call to Action was designed for as we try to eliminate the spread of terrorist and violent extremist content online,” she said.
Governments around the world have been considering how to tighten their rules around what content is permitted on online platforms. The G20 nations came together in July to urge online platforms to meet citizens’ expectations by preventing terrorist content and violent extremism conducive to terrorism from being streamed, uploaded, or re-uploaded.
The Australian government, meanwhile, has officially given its eSafety Commissioner the power to force the nation’s telcos to block certain content during crisis events.
“The shocking events that took place in Christchurch demonstrated how digital platforms and websites can be exploited to host extreme violent and terrorist content,” Australian Prime Minister Scott Morrison said at the time.
“That type of abhorrent material has no place in Australia and we are doing everything we can to deny terrorists the opportunity to glorify their crimes, including taking action locally and globally.”