The case revolves around Section 230 of the Communications Decency Act of 1996, which shields online platforms from liability for user-generated content and for content moderation decisions. At issue is whether that protection extends to recommendations made by artificially intelligent algorithms.
Gonzalez v. Google centres on YouTube's algorithms recommending pro-ISIS content to users. The petitioners argue that the 27-year-old law could not have anticipated modern developments such as AI-driven recommendation systems.
Google argues that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become necessary. “Virtually no modern website would function if users had to sort content themselves,” it said in the filing.
This “abundance of content”, Google argues, means that tech companies have no practical alternative to using algorithms to sort and recommend it.
Google said that, under existing law, a tech company could avoid liability simply by refusing to moderate its platform at all. It warned, however, that this would put the internet at risk of becoming a “virtual cesspool”.
The tech giant also pointed out that YouTube’s community guidelines expressly prohibit terrorism, adult content, violence and “other dangerous or offensive content”, and that it continually tweaks its algorithms to pre-emptively block prohibited content.
It claimed that approximately 95 per cent of videos violating YouTube’s violent extremism policy were automatically detected in the second quarter of 2022.
Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content and, in doing so, has assisted “the rise of ISIS” to prominence.
Google responded by saying that YouTube’s algorithms recommend content to users based on similarities between a piece of content and the content a user is already interested in.