
Meta Unveils Llama 4: New Multimodal AI Models with Mixture‑of‑Experts Architecture and 10 Million‑Token Context

Meta announced the Llama 4 series—Scout, Maverick and Behemoth—featuring multimodal capabilities, Mixture‑of‑Experts design, up to 10 million‑token context windows, and state‑of‑the‑art performance on STEM, multilingual and image benchmarks, with models now downloadable from llama.com and Hugging Face.


Meta released the Llama 4 family, comprising three models—Llama 4 Scout, Llama 4 Maverick and Llama 4 Behemoth—trained on massive amounts of unlabelled text, image and video data to provide broad visual‑understanding abilities.

Meta GenAI head Ahmad Al‑Dahle emphasized that Llama 4 reflects Meta’s long‑term commitment to open‑source AI, believing that open systems will yield the best small, medium and emerging large models.

Google CEO Sundar Pichai congratulated the team on the release, remarking that the AI world never gets boring.

On the LMArena leaderboard, Llama 4 Maverick ranked second overall and became only the fourth model to break a score of 1400; it leads in hard prompts, coding, mathematics and creative writing, and surpasses the previous Llama 3 405B model.

Model characteristics:

Llama 4 Scout has 17 billion active parameters, 16 experts and roughly 109 billion total parameters, with a 10 million-token context window (the longest in the industry); it outperforms Gemma 3, Gemini 2.0 Flash-Lite and Mistral 3.1 on a range of benchmarks.

Llama 4 Maverick also has 17 billion active parameters but uses 128 routed experts plus one shared expert, for about 400 billion total parameters. It delivers top-tier multimodal performance that beats GPT-4o and Gemini 2.0 Flash while activating fewer than half the parameters of comparable models.

Llama 4 Behemoth is a teacher model with 288 billion active parameters, 16 experts and nearly 2 trillion total parameters; it achieves state-of-the-art results on several STEM benchmarks, surpassing GPT-4.5, Claude 3.7 Sonnet and Gemini 2.0 Pro.
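
To make the sparsity concrete, here is a quick back-of-envelope calculation of what fraction of each model's parameters fire per token, using the figures quoted above (treated as rounded):

```python
# Back-of-envelope: fraction of parameters active per token for each
# Llama 4 model, using the figures quoted above (treated as rounded).
models = {
    "Scout":    {"active": 17e9,  "total": 109e9},  # 16 experts
    "Maverick": {"active": 17e9,  "total": 400e9},  # 128 routed + 1 shared expert
    "Behemoth": {"active": 288e9, "total": 2e12},   # 16 experts
}

for name, p in models.items():
    print(f"{name:9s} activates {p['active'] / p['total']:6.1%} of its weights per token")
```

Maverick's per-token compute therefore tracks a 17-billion-parameter dense model even though it stores roughly 400 billion weights.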

All three models support native multimodal interaction—users can upload an image and ask any question about it.
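
For readers who want to try that flow, here is a minimal sketch using the Hugging Face transformers image-text-to-text pipeline. The repo id and the exact chat-message format are assumptions based on how other gated meta-llama checkpoints are served, so check the model card before relying on them:

```python
# Minimal sketch: ask a question about an uploaded image.
# Assumes the checkpoint is served through transformers' standard
# image-text-to-text pipeline; the repo id below is an assumption.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo id
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "What is happening in this picture?"},
        ],
    }
]

print(pipe(text=messages, max_new_tokens=128))
```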

The series adopts a native multimodal design with early fusion of text and visual tokens, and a visual encoder based on MetaCLIP for better alignment.
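
Early fusion means the visual tokens enter the same sequence as the text tokens before the first transformer layer, rather than being bolted on through a separate cross-attention stage. A toy PyTorch sketch of the idea (not Meta's implementation):

```python
import torch
import torch.nn as nn

# Toy early-fusion sketch (not Meta's code): visual and text tokens are
# projected into one embedding space and processed by a single backbone
# from the very first layer.
d_model = 64
text_embed = nn.Embedding(32_000, d_model)    # text vocabulary
vision_proj = nn.Linear(768, d_model)         # maps vision-encoder patch features

text_ids = torch.randint(0, 32_000, (1, 12))  # 12 text tokens
patch_feats = torch.randn(1, 16, 768)         # 16 image patches from the encoder

# Early fusion: splice the visual embeddings into the same sequence as
# the text embeddings before any transformer layer runs.
tokens = torch.cat([vision_proj(patch_feats), text_embed(text_ids)], dim=1)

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
print(backbone(tokens).shape)  # torch.Size([1, 28, 64])
```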

Llama 4 is Meta's first model family to adopt a Mixture-of-Experts (MoE) architecture, in which each token activates only a subset of the total parameters, improving training and inference efficiency under a fixed FLOPs budget.

For example, Llama 4 Maverick alternates dense and MoE layers, with 128 routed experts and one shared expert in each MoE layer, allowing the model to run on a single NVIDIA H100 DGX host or be distributed for higher efficiency.
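
A toy PyTorch sketch of that routing scheme, with one shared expert that sees every token and top-1 routing to one of the routed experts (sizes are illustrative, not Maverick's):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy MoE layer in the spirit described above: every token goes through
# the shared expert, and top-1 routing sends it to exactly one routed
# expert. Sizes are illustrative; Maverick uses 128 routed experts.
d_model, n_experts = 64, 8
experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
shared_expert = nn.Linear(d_model, d_model)
router = nn.Linear(d_model, n_experts)

x = torch.randn(32, d_model)          # 32 tokens
gate = F.softmax(router(x), dim=-1)   # routing probabilities per token
weight, idx = gate.max(dim=-1)        # top-1 expert for each token

out = shared_expert(x)                # shared expert sees every token
for e in range(n_experts):            # each routed expert sees only its tokens
    mask = idx == e
    if mask.any():
        out[mask] = out[mask] + weight[mask, None] * experts[e](x[mask])
print(out.shape)  # torch.Size([32, 64])
```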

The models also use an interleaved-RoPE (iRoPE) architecture, in which some attention layers drop positional embeddings entirely while the rest keep rotary embeddings, paired with inference-time temperature scaling of attention; together these let the models generalize to effectively unlimited context length.
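
Meta has not published the exact interleaving pattern or scaling schedule, so the sketch below only illustrates the two ingredients; both the every-fourth-layer pattern and the logarithmic scaling function are assumptions:

```python
import math
import torch

# Two ingredients of long-context attention, sketched with assumed
# constants: (1) an interleaving pattern where some layers skip
# positional embeddings, (2) position-dependent rescaling of attention
# scores at inference time. Neither the pattern nor the function below
# is a published Meta value.
n_layers = 8
layer_uses_rope = [(i + 1) % 4 != 0 for i in range(n_layers)]  # assumed pattern

def attn_scale(pos: torch.Tensor) -> torch.Tensor:
    # Grows logarithmically with position, sharpening attention so very
    # long contexts do not wash out the softmax (functional form assumed).
    return 1.0 + 0.1 * torch.log1p(pos.float())

q, k = torch.randn(1, 16, 64), torch.randn(1, 16, 64)
scores = q @ k.transpose(-1, -2) / math.sqrt(64)
scores = scores * attn_scale(torch.arange(16))[None, :, None]  # per-query scaling
print(layer_uses_rope, scores.softmax(dim=-1).shape)
```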

Training employed several innovations: MetaP, for reliably setting critical hyper-parameters such as per-layer learning rates and initialization scales; FP8 precision for high FLOPs utilization (390 TFLOPs per GPU); and a data mix of more than 30 trillion tokens (over twice the amount used for Llama 3) spanning diverse text, image and video sources.
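
For a rough sense of scale, the common 6·N·D FLOPs heuristic (N = active parameters, D = training tokens) can be combined with the quoted per-GPU utilization; the cluster size below is an assumption for illustration, not a figure from the article:

```python
# Back-of-envelope pretraining compute using the common ~6*N*D FLOPs
# heuristic (N = active parameters, D = training tokens). The cluster
# size is an assumption for illustration only.
active_params = 17e9   # Scout/Maverick active parameters per token
tokens = 30e12         # >30 trillion tokens, per the article
flops = 6 * active_params * tokens

per_gpu = 390e12       # 390 TFLOPs/s utilization quoted above
gpus = 32_000          # assumed cluster size
days = flops / (per_gpu * gpus) / 86_400
print(f"{flops:.2e} FLOPs ≈ {days:.1f} days on {gpus:,} GPUs")
```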

Mid-training extended the context window to 10 million tokens for Scout, unlocking new use cases such as long-form memory and personalized interactions.

Post-training followed a pipeline of lightweight Supervised Fine-Tuning (SFT) → Online Reinforcement Learning (RL) → Direct Preference Optimization (DPO). Meta pruned more than 50% of "easy" SFT data for the smaller models (and roughly 95% for the 2-trillion-parameter Behemoth) and focused RL on harder prompts, achieving significant gains in reasoning, coding and mathematics.
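
The final DPO stage presumably follows the standard formulation from Rafailov et al.; a minimal sketch of that loss (Meta's exact variant and hyper-parameters are not disclosed in the article):

```python
import torch
import torch.nn.functional as F

# Sketch of the standard DPO objective (Rafailov et al., 2023); Meta's
# exact variant and hyper-parameters are not disclosed in the article.
def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Margin: how much more the policy prefers the chosen answer over
    # the rejected one, relative to the frozen reference model.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage with made-up sequence log-probabilities:
print(dpo_loss(torch.tensor([-10.0]), torch.tensor([-14.0]),
               torch.tensor([-11.0]), torch.tensor([-13.5])))
```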

Meta also built an asynchronous online RL framework and optimized MoE parallelism, improving training speed by roughly tenfold compared to previous distributed systems.

Performance tables in the original article show Llama 4 Maverick leading comparable multimodal models on coding, reasoning, multilingual, long-context and image benchmarks, while Llama 4 Scout sets new marks for context length and image grounding.

Both Llama 4 Scout and Llama 4 Maverick are now publicly downloadable from llama.com and the Meta‑Llama repository on Hugging Face.

llama.com: https://www.llama.com/llama-downloads/

Hugging Face: https://huggingface.co/meta-llama
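
A minimal sketch of pulling the weights programmatically with huggingface_hub; the repo id is an assumption, and the meta-llama repos are gated, so request access on the model page and authenticate first:

```python
# Sketch: pull the weights with huggingface_hub. The repo id is an
# assumption; meta-llama repos are gated, so request access on the
# model page and authenticate (e.g. `huggingface-cli login`) first.
from huggingface_hub import snapshot_download

path = snapshot_download("meta-llama/Llama-4-Scout-17B-16E-Instruct")
print(path)
```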

Reference: https://ai.meta.com/blog/llama-4-multimodal-intelligence/

Tags: multimodal AI · Mixture of Experts · large language model · model training · long context · Llama 4
Written by DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
