
Google thinks its language AI can manage a trillion parameters

14 January 2021


Can’t be worse than what it has now

While firms are under the mistaken belief that they can save a buck or two on translation services using AI, the technology is a long way from being remotely reliable. So Google boffins have been working flat out to make it better.

Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters.

They say their 1.6-trillion-parameter model, which appears to be the largest of its kind to date, achieved up to a four times speedup over Google's previously largest language model (T5-XXL).

As the researchers note in a paper detailing their work, large-scale training is an effective path toward powerful models. Simple architectures, backed by large datasets and parameter counts, surpass far more complicated algorithms. But effective, large-scale training is extremely computationally intensive.

That is why the researchers pursued what they call the Switch Transformer, a "sparsely activated" technique that uses only a subset of the model's weights, or the parameters that transform input data within the model, for any given input.
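To make the idea concrete, here is a minimal sketch of that sort of top-1 expert routing in plain Python and NumPy. The function name, shapes and gating details are our own illustrative assumptions, not the actual Switch Transformer code.

```python
import numpy as np

def switch_layer(tokens, router_w, experts):
    """Route each token to a single expert (top-1), so only a fraction
    of the layer's weights are touched for any given token.
    tokens:   (num_tokens, d_model) activations
    router_w: (d_model, num_experts) router weights
    experts:  list of (d_model, d_model) expert weight matrices
    """
    logits = tokens @ router_w                        # (num_tokens, num_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts
    chosen = probs.argmax(axis=-1)                    # one expert per token

    out = np.zeros_like(tokens)
    for e, weights in enumerate(experts):
        mask = chosen == e
        if mask.any():
            # scale by the router probability so the gate can still be trained
            out[mask] = (tokens[mask] @ weights) * probs[mask, e:e + 1]
    return out

# toy usage: 8 tokens, 4-dimensional model, 2 experts
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
router_w = rng.normal(size=(4, 2))
experts = [rng.normal(size=(4, 4)) for _ in range(2)]
print(switch_layer(tokens, router_w, experts).shape)  # (8, 4)
```

The point is that adding more experts adds more parameters, but each token still only pays the compute cost of a single expert.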

In an experiment, the researchers pretrained several different Switch Transformer models using 32 TPU cores on the Colossal Clean Crawled Corpus, a 750GB dataset of text scraped from Reddit, Wikipedia, and other web sources.

They tasked the models with predicting missing words in passages where 15 percent of the words had been masked out, as well as other challenges, like retrieving text to answer a list of increasingly difficult questions.
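For readers wondering what that pretraining objective looks like in practice, here is a toy sketch of hiding roughly 15 percent of the words in a passage. The function and mask token are purely illustrative, not anything from Google's codebase.

```python
import random

def mask_words(words, mask_rate=0.15, mask_token="[MASK]"):
    """Hide roughly 15 percent of the words; the model must predict them."""
    masked, targets = [], {}
    for i, word in enumerate(words):
        if random.random() < mask_rate:
            targets[i] = word          # remember what was hidden
            masked.append(mask_token)
        else:
            masked.append(word)
    return masked, targets

sentence = "google researchers trained a very large language model on web text".split()
masked, targets = mask_words(sentence)
print(masked)   # the passage with some words replaced by [MASK]
print(targets)  # which words the model is expected to recover
```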

The researchers claim their 1.6-trillion-parameter model with 2,048 experts (Switch-C) exhibited "no training instability at all", in contrast to a smaller model (Switch-XXL) containing 395 billion parameters and 64 experts.

However, on one benchmark -- the Stanford Question Answering Dataset (SQuAD) -- Switch-C scored lower (87.7) than Switch-XXL (89.6), which the researchers attribute to the opaque relationship between fine-tuning quality, computational requirements, and the number of parameters.

The Switch Transformer led to gains in several downstream tasks. For example, it enabled an over seven times pretraining speedup while using the same amount of computational resources, according to the researchers.

They found that the large sparse models could be distilled into smaller, dense models which, after fine-tuning on tasks, kept 30 percent of the larger model's quality gains. In one test where a Switch Transformer model was trained to translate between over 100 different languages, the researchers observed "a universal improvement" across 101 languages, with 91 percent of the languages benefitting from an over four times speedup compared with a baseline model.

"Though this work has focused on extremely large models, we also find that models with as few as two experts improve performance while easily fitting within memory constraints of commonly available GPUs or TPUs", the researchers wrote in the paper.

"We cannot fully preserve the model quality, but compression rates of 10 to 100 times are achievable by distilling our sparse models into dense models while achieving 30 per cent of the quality gain of the expert model."

So still not as good as a real human translator.

 
