Llama 3 Is Coming in May—Should OpenAI Be Worried?
At a high-profile AI event in London, Meta executives on Tuesday provided the first official confirmation and details about the imminent release of Llama 3, the highly-anticipated next iteration of the company’s open-source large language model.
“Within the next month, actually less, hopefully in a very short period of time, we hope to start rolling out our new suite of next-generation foundation models, Llama 3,” Nick Clegg, Meta’s president of global affairs, announced at Meta AI Day London, as reported by TechCrunch.
Clegg said Llama 3 consists of “a number of different models with different capabilities, different versatilities” that will begin rolling out over this year.
Once it launches, Llama 3 is expected to be the most advanced open-source model available, and Meta is investing heavily in its development. The model was trained with 140 billion parameters, Meta says, twice the capacity of Llama 2. Meta CEO Mark Zuckerberg had teased some of the technical details in January.
“We’re building massive compute infrastructure to support our future roadmap, including 350k H100s by the end of this year—and overall almost 600k H100s equivalents of compute if you include other GPUs,” Zuckerberg said at the time. This amount of computing power is significantly greater than that used by OpenAI to train GPT-4, which was estimated to require around 25,000 GPUs in 90 to 100 days.
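For a rough sense of scale, here is a back-of-envelope sketch of the figures quoted above. It is a loose illustration, not an official comparison: it takes the reported estimates at face value and deliberately ignores differences in GPU generation, interconnect, and utilization.

```python
# Back-of-envelope comparison of the compute figures quoted above.
# All numbers are the article's reported estimates; differences in GPU
# generation and utilization are ignored on purpose.
gpt4_gpus = 25_000                # estimated GPUs used to train GPT-4
gpt4_days = 95                    # midpoint of the 90-100 day estimate
gpt4_gpu_days = gpt4_gpus * gpt4_days

meta_fleet_h100_equiv = 600_000   # Meta's stated year-end fleet, in H100 equivalents

print(f"GPT-4 training run: ~{gpt4_gpu_days:,} GPU-days")
print(f"Meta's planned fleet: {meta_fleet_h100_equiv:,} H100 equivalents available concurrently")

# Caveat: a fleet size and a single training run are not directly comparable;
# this only illustrates the scale gap implied by the quoted figures.
```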
Zuckerberg also revealed that Meta AI, its AI assistant, is set to be powered by Llama 3.
Chris Cox, Meta’s chief product officer, said that Llama 3 will be integrated across Meta’s products.
“Our plan will be to have Llama 3 powering multiple different products and experiences across our family of apps,” he said.
The open-source strategy
The impact of the release of Llama 3 extends far beyond Meta, given the company’s philosophical commitment to developing it as an open-source model, in clear contrast to the closed, proprietary approach taken by rivals like OpenAI with ChatGPT.
By open-sourcing its language models, Meta aims to nurture an ecosystem of open AI development and position the Llama family as the foundation for a diverse range of tools and applications created by third-party developers and researchers.
“It’s very important to realize that innovations always build on prior contributions from others, sometimes very similar ones,” Yann LeCun, Meta’s head of AI research, tweeted last month. “This is why open research is so important: it makes the field advance faster for everyone.”
This open ethos has already spawned a vibrant community rallying around Llama. Independent open-source models such as Mistral and Falcon now sit alongside a wave of fine-tunes of the earlier Llama 2 foundation model, such as Stable Beluga, and several of these community models have matched or outperformed GPT-3.5 on certain benchmarks.
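For a sense of how lightweight this kind of community iteration can be, here is a minimal, hypothetical sketch of attaching LoRA adapters to an open Llama 2 checkpoint using Hugging Face’s transformers and peft libraries. The checkpoint name, adapter rank, and target modules are illustrative assumptions, not a recipe from Meta.

```python
# Hypothetical sketch: wrap an open Llama 2 checkpoint with LoRA adapters so a
# community fine-tune fits on modest hardware. Values are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights get trained
```

From there, a standard training loop over a task-specific dataset produces the kind of derivative models the community has been releasing on top of Llama 2.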
The release of Llama 3 as another open-source foundation model likely paves the way for a new generation of LLMs that will set the bar even higher for quality and efficiency.
“Eh, I think open source will match or beat this year,” Ryan Casey (@ryansweb) tweeted on January 1, 2024.
Challenging OpenAI’s dominance
Llama 3’s open-source premise poses a formidable and multi-layered challenge to OpenAI’s current market dominance and—by extension—to other proprietary models like Claude and Gemini.
The open-source community will soon be able to build on Llama 3, rapidly iterating on variants that could match or exceed GPT-4’s capabilities, just as earlier community models did against GPT-3.5. With training and fine-tuning costs spread across many contributors, the open ecosystem could leapfrog OpenAI’s proprietary development, which demands immense compute resources.
Should open-source offerings regularly achieve parity with commercial ones, enterprises may gravitate toward more accessible and cost-effective ecosystems like Llama rather than relying on, and paying for, OpenAI. GPT-4 is currently among the most expensive models on the market in terms of cost per token.
Further, the open-source community grows stronger as more people get involved. Meta benefits from a huge community building on top of the model, fine-tuning it, developing new tooling, and improving it for free. That makes it easier for Meta to ship better versions of its model while monetizing it through alternative schemes, such as licensing it for commercial use by the largest companies.
In other words, continued momentum and network effects could make it harder for OpenAI’s proprietary models to attract users and customers in the future.
To be sure, OpenAI currently holds a strong revenue lead, and Anthropic can boast the best-performing LLM in the space. But Llama 3 represents another strategic strike by Meta to upend the generative AI landscape.
Of course, much depends on Llama 3’s real-world performance and adoption over the coming year. But the open-source AI community is highly active and already loves Llama 2. Things will get very interesting in the next few months, especially with OpenAI’s GPT-5 said to be right around the corner.