Google uses AI to tackle extreme content

To tackle the rise of illicit and extremist content and hate speech, Google says advanced machine learning is more accurate and faster than human moderators.

The search engine giant is battling the rise of extremist content after finding that advanced programmes using machine learning are more accurate and faster than humans at removing illicit content from YouTube.

One of YouTube’s “major issues” is that users upload up to 400 hours of content every minute, making filtering content a challenge that “only an algorithmic approach is likely to manage”, according to the company.

YouTube announced that its machine learning systems have made “great leaps in tackling the problem”, with help from 15 other institutions, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue.

‘Systems are more accurate than humans’

A YouTube spokesperson told The Guardian: “While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.

“Our initial use of machine learning has more than doubled both the number of videos we have removed for violent extremism, as well as the rate at which we have taken this kind of content down. Over 75% of the videos we have removed for violent extremism over the past month were taken down before receiving a single human flag.”

Google plans to continue developing the machine learning technology and to collaborate with other technology companies to tackle online extremism.

Written by Leah Alger