Facebook, YouTube, and Twitter warn that AI systems could make mistakes

March 18, 2020 · Machine Learning

A day after Facebook announced it would rely more heavily on artificial-intelligence-powered content moderation, some users are complaining that the platform is making mistakes and blocking a slew of legitimate posts and links, including posts with news articles related to the coronavirus pandemic, and flagging them as spam.

While trying to post, users appear to be getting a message that their content — sometimes just a link to an article — violates Facebook’s community standards. “We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users to increase viewership,” read the platform’s rules.

The problem comes as social media platforms continue to combat Covid-19-related misinformation. On social media, some are now floating the idea that Facebook’s decision to send its contracted content moderators home might be the cause of the problem.

Facebook is pushing back against that notion, and the company’s vice president for integrity, Guy Rosen, tweeted that “this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce.” Rosen said the platform is working on restoring the posts.

Recode contacted Facebook for comment, and we’ll update this post if we hear back.

The issue at Facebook serves as a reminder that any type of automated system can still screw up, and that fact might become more apparent as more companies, including Twitter and YouTube, depend on automated content moderation during the coronavirus pandemic. The companies say they’re doing so to comply with social distancing, as many of their employees are forced to work from home. This week, they also warned users that, because of the increase in automated moderation, more posts could get taken down in error.

In a blog post on Monday, YouTube told its creators that the platform will turn to machine learning to help with “some of the work normally done by reviewers.” The company warned that the transition will mean some content will be taken down without human review, and that both users and contributors to the platform might see videos removed from the site that don’t actually violate any of YouTube’s policies.

The company also warned that “unreviewed content may not be available via search, on the homepage, or in recommendations.”

Similarly, Twitter has told users that the platform will increasingly rely on automation and machine learning to remove “abusive and manipulated content.” Still, the company acknowledged that artificial intelligence would be no replacement for human moderators.

“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” said the company in a blog post.

To compensate for potential errors, Twitter said it won’t permanently suspend any accounts “based solely on our automated enforcement systems.” YouTube, too, is making adjustments. “We won’t issue strikes on this content except in cases where we have high confidence that it’s violative,” the company said, adding that creators would have the chance to appeal these decisions.
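The policies Twitter and YouTube describe share a common pattern: automated systems may remove content, but irreversible penalties (strikes, permanent suspensions) are reserved for high-confidence cases or human review. A minimal sketch of that pattern, with entirely hypothetical names and thresholds since neither company has published its actual enforcement logic:

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only.
REMOVE_THRESHOLD = 0.70   # auto-remove above this classifier score
STRIKE_THRESHOLD = 0.95   # "high confidence" bar for a lasting penalty

@dataclass
class Decision:
    remove: bool               # take the content down
    issue_strike: bool         # apply a lasting penalty (still appealable)
    needs_human_review: bool   # queue for a human moderator

def moderate(spam_score: float) -> Decision:
    """Gate enforcement severity on the classifier's confidence."""
    if spam_score >= STRIKE_THRESHOLD:
        # High confidence: remove and penalize.
        return Decision(remove=True, issue_strike=True, needs_human_review=False)
    if spam_score >= REMOVE_THRESHOLD:
        # Medium confidence: remove, but no lasting penalty without a human.
        return Decision(remove=True, issue_strike=False, needs_human_review=True)
    # Low confidence: leave the content up.
    return Decision(remove=False, issue_strike=False, needs_human_review=False)
```

The design choice here is that false positives in the middle band cost only a reversible takedown, never a strike, which is roughly what both companies committed to.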

Facebook, meanwhile, says it’s working with its partners to send its content moderators home and to ensure that they’re paid. The company is also exploring remote content review for some of its moderators on a temporary basis.

“We don’t expect this to impact people using our platform in any noticeable way,” said the company in a statement on Monday. “That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”

The move toward AI moderators isn’t a surprise. For years, tech companies have pushed automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can fester on their platforms. Although AI can make content moderation faster, the technology can also struggle to understand the social context of posts or videos and, as a result, make inaccurate judgments about their meaning. In fact, research has shown that algorithms built to detect hate speech can themselves be biased against Black people, and the technology has been widely criticized for being vulnerable to discriminatory decision-making.

Normally, the shortcomings of AI have led us to rely on human moderators who can better understand nuance. Human content reviewers, however, are by no means a perfect solution either, especially since they can be required to work long hours analyzing traumatic, violent, and offensive words and imagery. Their working conditions have recently come under scrutiny.

But in the age of the coronavirus pandemic, having reviewers working side by side in an office would not only be dangerous for them; it could also risk further spreading the virus to the general public. Keep in mind that these companies might be hesitant to let content reviewers work from home, since those reviewers have access to a great deal of private user information, not to mention highly sensitive content.

Amid the novel coronavirus pandemic, content review is just another way we’re turning to AI for help. As people stay indoors and look to move their in-person interactions online, we’re bound to get a rare look at how well this technology fares when it’s given more control over what we see on the world’s most popular social platforms. Without the influence of human reviewers that we’ve come to expect, this could be a heyday for the robots.

Update, March 17, 2020, 9:45 pm ET: This post has been updated to include new information about Facebook posts being flagged as spam and removed.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Credit: Google News

NikolaNews

NikolaNews.com is an online News Portal which aims to share news about blockchain, AI, Big Data, and Data Privacy and more!

© 2019 NikolaNews.com - Global Tech Updates
