
Meta shows off new tarted-up Llama model

19 April 2024


Needs to see off competition from OpenAI

Meta has just unveiled the latest iteration of its mighty Llama AI model to stay in the race with rival tech from the likes of OpenAI and Google.

The firm is touting its new Llama 3 8B and Llama 3 70B models, which pack 8 billion and 70 billion parameters, respectively, as a significant improvement over their forerunners.

Meta confidently asserts that its Llama 3 models, trained on a custom 24,000-GPU cluster, are among the top-tier generative AI models in their size category. To substantiate this claim, Meta highlights the models' performance on well-known AI benchmarks such as MMLU, ARC and DROP. The usefulness of these benchmarks is debated, but they remain one of the few standardised ways to compare AI models.

Meta says the Llama 3 8B model proves its worth by surpassing other open models such as Mistral's Mistral 7B and Google's Gemma 7B on at least nine benchmarks, covering everything from biology, physics and chemistry to maths and commonsense reasoning.

So Mistral 7B and Gemma 7B might not be cutting-edge (Mistral 7B hit the scene last September), and in several of the benchmarks Meta is banging on about, Llama 3 8B only nudges ahead by a smidge. But Meta is also claiming that its beefier model, the Llama 3 70B, can hold its own against the big guns of generative AI, including Google's latest gem, Gemini 1.5 Pro.
