YouTube Expands AI Detection to Age-Gate More Uploads

YouTube has announced an expansion of its automated content detection process, which it currently uses to catch uploads that depict graphic violence, nudity, or hate speech.

Now, YouTube will expand its use of AI tools to cover more types of rule violations, and to flag more uploads as inappropriate for users under the age of 18.

As explained by YouTube:

“Today, our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a video that isn’t appropriate for viewers under 18. Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions.”

When a video is age-restricted, users will need to be signed in to view it.

“If they aren’t, they see a warning and are redirected to find other content that is age-appropriate. Our Community Guidelines include guidance to uploaders about when content should be age-restricted.” 
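In other words, the gate is a simple sign-in-plus-age check that runs before playback. Here's a minimal sketch of what that flow could look like; this is purely illustrative, not YouTube's actual code, and every name in it (`Viewer`, `gatePlayback`, and so on) is hypothetical:

```typescript
// Hypothetical sketch of an age-gate check before video playback.
// None of these types or functions are YouTube's real API.

interface Viewer {
  signedIn: boolean;
  birthDate?: Date; // known only for signed-in accounts
}

type GateResult =
  | { allow: true }
  | { allow: false; action: "prompt_sign_in" | "redirect_age_appropriate" };

function gatePlayback(viewer: Viewer, videoAgeRestricted: boolean): GateResult {
  if (!videoAgeRestricted) return { allow: true };

  // Age-restricted content requires a signed-in viewer...
  if (!viewer.signedIn) return { allow: false, action: "prompt_sign_in" };

  // ...who is verifiably 18 or older.
  if (viewer.birthDate && yearsSince(viewer.birthDate) >= 18) {
    return { allow: true };
  }

  // Under-18 (or unknown-age) viewers see a warning and are
  // redirected to age-appropriate content, per YouTube's description.
  return { allow: false, action: "redirect_age_appropriate" };
}

function yearsSince(date: Date): number {
  const ms = Date.now() - date.getTime();
  return ms / (1000 * 60 * 60 * 24 * 365.25);
}
```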

The expanded enforcement effort will help to keep younger users safe on the platform – which is no doubt a significant concern for the many parents trying to keep their kids entertained during the COVID-19 lockdowns. Indeed, in a recent survey, 64% of respondents indicated that they've been watching more YouTube content during the lockdown period, while for kids, YouTube stars are now key influencers, arguably more so than traditional TV presenters.

YouTube has been known to lead viewers down concerning rabbit holes at times, via its related video recommendations. This new push should mean that fewer of those 'Up Next' clips end up leading younger viewers astray.

And YouTube is expecting to see an increase in content tagged as ‘over 18 only’ as a result:

“Because our use of technology will result in more videos being age-restricted, our policy team took this opportunity to revisit where we draw the line for age-restricted content. After consulting with experts and comparing ourselves against other global content rating frameworks, only minor adjustments were necessary. Our policy pages have been updated to reflect these changes. All the changes outlined above will roll out over the coming months.”

Uploaders will be able to appeal any decision that they believe has been incorrectly applied, but YouTube says that it doesn't anticipate the change having any major impact on creator revenue, because most of the affected videos also violate its advertiser-friendly guidelines, and are therefore not eligible for ads either way.

YouTube has been developing its systems on this front for some time. Last month, YouTube reported that between April and June this year, it removed 11,401,696 videos for violating its content rules, with the vast majority of them being automatically flagged by its systems.


As such, expanding its systems seems like a relatively safe bet – and again, with so many kids spending time on the platform, it's an important move, which could have significant benefits.

But it will also, no doubt, lead to false positives and mistaken restrictions. That could impact YouTube creators looking to monetize their content, but as with all of YouTube's rule changes, adjustment usually takes only a little while, and creators can generally manage the impacts.

It's also a key element in YouTube's ad efforts. Back in 2017, various major brands pulled or reduced their YouTube ad spend after their ads were displayed alongside offensive content. That boycott reportedly cost YouTube millions in revenue, and it's what really sparked YouTube's push to improve its automated detection systems, which has now led to this latest update.

At the same time, YouTube is under increasing pressure to reduce its reliance on human content moderators, with a new lawsuit being brought against the platform over PTSD claims from former moderation staff. As such, YouTube has significant motivation to shift more of this work to AI tools, even if that ends up leading to more incorrect categorizations and restrictions.

YouTube is also adding a new age verification process:

“As part of this process some European users may be asked to provide additional proof of age when attempting to watch mature content. If our systems are unable to establish that a viewer is above the age of 18, we will request that they provide a valid ID or credit card to verify their age.”
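This makes age verification an escalating check: account signals first, then a hard verification step if those are inconclusive. Here's a rough sketch of that decision logic, with hypothetical names throughout and an assumed "deny" outcome for inconclusive non-EU cases, which the announcement doesn't actually specify:

```typescript
// Hypothetical escalation logic for the European age-verification step.

type AgeSignal = "verified_adult" | "verified_minor" | "inconclusive";

type VerificationStep =
  | { step: "allow" }
  | { step: "deny" }
  | { step: "request_id_or_credit_card" }; // hard verification fallback

function verifyAge(signal: AgeSignal, inEU: boolean): VerificationStep {
  switch (signal) {
    case "verified_adult":
      return { step: "allow" };
    case "verified_minor":
      return { step: "deny" };
    case "inconclusive":
      // Per the announcement, EU viewers with inconclusive signals are
      // asked for a valid ID or credit card to establish they are 18+.
      // The non-EU branch here is an assumption, not stated by YouTube.
      return inEU ? { step: "request_id_or_credit_card" } : { step: "deny" };
  }
}
```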

The update is in line with Europe's evolving rules on digital content, including the revised Audiovisual Media Services Directive (AVMSD), which was enacted last year.
