YouTube is about to reposition how its massive online video service handles kids content. Following a record $170 million penalty, announced Wednesday, for violating kids’ data privacy, Google’s YouTube pledged to disable comments, notifications and personalized ads on all videos directed at children. And its machine learning will police YouTube’s sprawling catalog to keep kids videos in line, the company said.
One problem: YouTube’s machine learning was already supposed to be suspending comments on videos featuring young minors. It hasn’t been.
Comment-enabled videos prominently depicting young kids are still easy to find on YouTube. A single YouTube search for one kids-focused subject — “pretend play” — returned more than 100 videos with comments enabled, all prominently featuring infants, preschoolers and other children young enough to still have their baby teeth.
After CNET contacted YouTube with a list of these videos, comments were disabled on nearly half of them.
“We invest significantly in the teams and technologies that allow us to provide minors and families the best protection possible,” YouTube spokeswoman Ivy Choi said. “We’ve suspended comments on hundreds of millions of videos featuring minors in risky situations and implemented a classifier that helps us remove two times the number of violative comments. We continue to disable comments on hundreds of thousands of videos a day and improve our classifiers.”
YouTube is the world’s biggest online video source, with 2 billion monthly users — so big, in fact, it’s the world’s top source for kids videos too. Kids content is one of its most-watched categories, but YouTube has come under fire for a range of scandals involving children. That $170 million penalty addressed the data YouTube collects on kids without parents’ consent. But YouTube has also faced scandals involving videos of child abuse and exploitation and nightmarish content in its YouTube Kids app, pitched as a kid-safe zone.
YouTube’s kids-privacy penalty is only one problem in a parade that the Google-owned site has faced in the last few years, including claims about the content it proliferates and its treatment of some creators. Google is one of the big tech companies facing increasing questions about the power it wields, amid government scrutiny of big tech.
In February, YouTube said it would disable comments on videos with young kids following an outcry over a ring of softcore pedophilia. Some videos featuring young children included comments with predatory links. Clicking on the links would transport viewers to other moments in YouTube videos with a minor in a sexually suggestive position. And once you fall in that rabbit hole, YouTube’s recommendation algorithm appeared to feed you more of the same.
So YouTube said it would suspend comments on videos featuring minors who were 13 and younger, as well as on videos featuring older minors who could be at risk of attracting predatory behavior. The changes would take place “over the next few months,” YouTube said then. YouTube would make an exception for “a small number of channels that actively moderate their comments and take additional steps to protect children,” the company said at the time.
What we found
Six months later, CNET’s single search found more than 100 videos posted in the last month by more than 100 different channels. They all featured young children — babies, toddlers and kids clearly no older than elementary school students. All had comments enabled.
The videos ranged from clips with almost no views on channels with zero subscribers to videos viewed nearly 23 million times. One video had 1,750 comments. Several videos showed children in limited clothing, like a young girl in a bathing suit or a baby in a diaper.
Of the more than 100 videos, YouTube suspended comments on 48 of them after CNET provided a list of links — the video that had 1,750 comments was among them. Generally, these videos with newly disabled comments didn’t have any adults on camera; YouTube said that adult supervision is one of several things it evaluates when disabling comments.
Most comments on these videos appeared innocent: heart-eye emojis, praise about adorableness or feedback about toys.
But occasionally one would raise eyebrows: A video of three girls eating lollipops and playing on fairground rides, for example, had two total comments. One was posted by an account with a swastika avatar going by the name “Kurdish Nazi.” The comment, translated from Kurdish, appeared to be a reference to bitcoin.
(YouTube disabled comments on that video after CNET reached out, which erased the Kurdish Nazi comment from public view.)
Other searches found YouTube videos of older children in scant clothing. The pedophilia-ring scandal earlier this year was triggered by a vlogger exposing predatory links after he searched the term “bikini haul.” In response, YouTube said it would suspend comments on videos featuring children aged 14 to 17, too, if the subject had potential for abuse.
A search for “teen bikini haul” videos posted in the last month returned one video by a girl identifying herself as 16 years old, modeling different swimsuits. It has 75 comments. Another was by an influencer who discloses her age — 17 — and her birthdate in the video’s description. Her video, showing off more than a dozen two-piece suits, has 309 comments, including one comment asking her to “show your uncensored sweet tushy.”
“Whats the point of a try-on haul if you dont show your butt??,” another wrote. “You’re not nude, and you wear these in public right..?? good grief…waste of bandwidth.”
Machines learning minors
One difficulty in relying on algorithms to police videos with minors is that no technology will catch everything, machine learning experts said.
“Whatever they do, it’s never going to be perfect,” said Christo Wilson, a professor of computer science at Northeastern University. “We just have to accept that an adult could be flagged as a child, or it just doesn’t see a child. Regardless of how much machine learning they have, they need to have some sort of human process behind the scenes.”
YouTube has more than 10,000 human moderators tasked with addressing videos that violate any of its policies.
The scale of YouTube, where 500 hours of video are uploaded every minute, adds to the challenge. Even if the baseline success rate of YouTube’s machine learning is high, the number of videos it fails to catch will still be significant.
“You may still only have one needle in a haystack. But add more and more haystacks, and it’ll be easier for someone somewhere to find it,” said Christian Shelton, a professor of computer science at the University of California, Riverside. “The technology will never be perfect. No other solution would also be perfect, but you shouldn’t let the technology off the hook.”
Google and YouTube’s scale works in its favor, in some respects. Algorithms need data to learn, and YouTube has more video, and more data about that video, than anyone else. Machine learning for video, which essentially looks at videos as collections of still frames, also requires a level of computational power that’s more feasible for a company with Google’s resources.
And one of YouTube’s policy changes announced last week could help its machine learning improve. As part of its settlement with the FTC, YouTube will require uploaders to identify videos that are “made for kids,” it said, effectively introducing more labels on its data.
Algorithms need annotations like these to learn, and the more content that’s getting processed, the more the annotations are necessary, according to Arnav Jhala, a computer science professor at North Carolina State University. Algorithms find patterns and correlations between labels and visible features in the frames.
“The more labels they have, the higher correlation they will have, and on unlabeled video, the algorithms will have a higher accuracy,” he said. “But you are dealing with almost an adversary on the other side.”
That is, some uploaders have motives to misidentify their videos.
Trolls, for example, could mislabel inappropriate content as kids videos, aiming to sneak sensitive images in front of children’s eyes. YouTube has previously seen instances of kids videos with self-harm tips spliced into them. A trend of supposedly “child-friendly” YouTube videos that showed familiar kids characters engaged in bizarre, violent or disturbing behavior earned its own moniker: Elsagate. Mislabeled data would pollute the information training a machine-learning algorithm.
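Jhala’s point about labels and correlations can be sketched with a toy classifier. To be clear, this is purely illustrative and not YouTube’s system: each “video” is reduced to two made-up numeric features (hypothetical stand-ins for signals like animation density or voice pitch), and a simple nearest-centroid model learns where each label clusters. It also shows why mislabeled training data is poisonous: every flipped label drags a centroid away from where its class really lives.

```python
# Toy nearest-centroid classifier. Each "video" is a pair of made-up
# numeric features; the labels come from (hypothetical) uploader annotations.

def centroid(points):
    # Average each of the two feature dimensions
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(labeled):
    # labeled: list of (features, label) pairs -> one centroid per label
    by_label = {}
    for feats, label in labeled:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, feats):
    # Classify a new video by its nearest learned centroid
    def dist2(label):
        c = model[label]
        return sum((feats[i] - c[i]) ** 2 for i in range(2))
    return min(model, key=dist2)

# Hand-labeled training data: kids videos cluster high on both features
training = [
    ((0.9, 0.8), "made_for_kids"), ((0.8, 0.9), "made_for_kids"),
    ((1.0, 0.7), "made_for_kids"),
    ((0.1, 0.2), "general"), ((0.2, 0.1), "general"), ((0.0, 0.3), "general"),
]
model = train(training)

print(predict(model, (0.85, 0.75)))  # → made_for_kids
print(predict(model, (0.15, 0.25)))  # → general
```

If a troll flipped some of those labels, the “made_for_kids” centroid would be pulled toward the general cluster, and borderline videos would start landing on the wrong side; real systems learn far richer features, but the dependence on trustworthy labels is the same.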
Beyond that, video that targets kid audiences is “pretty freaking vague” as a directive to give an algorithm, Wilson said.
And YouTube’s track record so far, according to Wilson? “Not great.”