Tag

Llama 4

4 articles collected around this technical tag.

DataFunTalk
Apr 8, 2025 · Artificial Intelligence

Meta AI VP Responds to Llama 4 Controversies and Allegations of Benchmark Manipulation

Meta AI Vice President Ahmad Al‑Dahle addressed recent criticisms of the newly released Llama 4 model, denying claims of test‑set cheating, explaining quality variations as post‑release optimization, and acknowledging internal concerns that led to staff resignations and calls for transparency.

Artificial Intelligence · Benchmarking · Llama 4
0 likes · 5 min read
DevOps
Apr 7, 2025 · Artificial Intelligence

Meta Llama 4 Scout, Maverick, and Behemoth: Architecture, NoPE Innovation, and Training Advances

The article introduces Meta's newly open‑sourced Llama 4 series, including Scout with a 10 million‑token context window, Maverick with 400 billion total parameters, and the upcoming Behemoth teacher model, detailing their mixture‑of‑experts architecture, the NoPE positional‑encoding removal, training pipelines, performance benchmarks, and infrastructure improvements for large‑scale AI research.

AI research · Context Window · Llama 4
0 likes · 8 min read
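The NoPE idea mentioned in the summary, dropping explicit positional encodings from attention, can be illustrated with a toy sketch. This is not Meta's implementation; the vectors and helper names below are hypothetical, and the point is only that without a positional term, attention weights depend purely on content, so reordering the keys merely permutes the weights:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a plain list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_scores(q, keys):
    """Scaled dot-product attention weights for one query against all keys.

    With NoPE there is no positional term added to the logits, so each
    weight depends only on the content of q and the corresponding key.
    """
    d = len(q)
    logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    return softmax(logits)

# Toy vectors (hypothetical): swapping two keys swaps the corresponding
# weights and nothing else -- order must be inferred from content.
q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
w = attention_scores(q, keys)
w_swapped = attention_scores(q, [keys[1], keys[0], keys[2]])
assert w[0] == w_swapped[1] and w[1] == w_swapped[0]
```

In a full model this permutation invariance is what lets context length grow without a positional scheme to extrapolate, at the cost of the network having to learn order cues from the data itself.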
DataFunTalk
Apr 7, 2025 · Artificial Intelligence

Llama 4 Open‑Source Release Marred by Performance Failures and Alleged Training‑Data Cheating

Meta's newly released Llama 4 quickly drew controversy as internal leaks alleged training‑data cheating, benchmark over‑optimization, and disappointing code‑generation performance that falls short of even older models, prompting resignations and widespread criticism from the AI community.

AI model performance · Llama 4 · Meta AI
0 likes · 7 min read
DataFunTalk
Apr 6, 2025 · Artificial Intelligence

Meta Unveils Llama 4: New Multimodal AI Models with Mixture‑of‑Experts Architecture and 10 Million‑Token Context

Meta announced the Llama 4 series—Scout, Maverick and Behemoth—featuring multimodal capabilities, Mixture‑of‑Experts design, up to 10 million‑token context windows, and state‑of‑the‑art performance on STEM, multilingual and image benchmarks, with models now downloadable from llama.com and Hugging Face.

Llama 4 · Mixture of Experts · large language model
0 likes · 14 min read
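The Mixture‑of‑Experts design these models use can be sketched at its core: a router scores every expert for each token but activates only the top‑k, which is how a model with hundreds of billions of total parameters keeps per‑token compute modest. A minimal illustrative sketch, with made‑up expert counts and scores rather than Meta's actual router:

```python
import math

def top_k_route(logits, k=2):
    """Select the k highest-scoring experts and renormalize their gates.

    A minimal mixture-of-experts routing sketch: only the chosen experts
    run for this token, so active parameters stay a small fraction of the
    total. Returns (expert_index, gate_weight) pairs.
    """
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

# Router scores for 8 hypothetical experts; the token is sent to the best 2,
# whose gate weights sum to 1 and scale each expert's output.
weights = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.3, 0.2], k=2)
assert [i for i, _ in weights] == [1, 4]
assert abs(sum(w for _, w in weights) - 1.0) < 1e-9
```

Real routers add load‑balancing losses and capacity limits so tokens spread evenly across experts; this sketch shows only the top‑k selection step.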