
Google’s DeepMind tries to tackle AI’s sexism and racism

09 December 2021


Trying to improve big, fat, sexist, racist language databases

Google’s AI outfit DeepMind is trying to fix a problem with large language models (LLMs), which are being used for everything from improving Google's search engine to creating text-based fantasy games.

The models suffer from serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning, largely because of the human-written text that populates their training data.

Alphabet's AI lab DeepMind released three research papers examining how scaling up these systems can deliver improvements.

DeepMind research scientist Jack Rae said: "One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued."

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model named Gopher with 280 billion parameters.

Parameters are a quick measure of a language model's size and complexity, meaning that Gopher is larger than OpenAI's GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia's Megatron model (530 billion parameters).

In the AI world, bigger generally means better, with larger models usually offering higher performance. DeepMind's research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarisation.

However, the researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix.

"I think right now it really looks like the model can fail in variety of ways. Some subset of those ways are because the model just doesn't have sufficiently good comprehension of what it's reading, and I feel like, for those class of problems, we are just going to see improved performance with more data and scale."

He said that there were "other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into giving mistruths". The issue here is that scaling up will turn a small racist database into a big, fat, racist database.

In this case no one at DeepMind thinks scale will be the solution. Instead, language models will need "additional training routines", such as feedback from human users.
