DeepMind Introduces AlphaEvolve: A Gemini‑Powered Agent for Evolving Advanced Algorithms
DeepMind's AlphaEvolve, a Gemini-powered coding agent, combines large language models with an evolutionary framework to automatically discover, optimize, and evolve advanced algorithms, achieving notable gains in data‑center scheduling, TPU design, matrix multiplication, and even solving longstanding mathematical problems.
DeepMind has launched AlphaEvolve, an AI‑driven "algorithm breeding" system that leverages its Gemini family of large language models (Gemini Flash for broad exploration and Gemini Pro for deep insight) together with an automated evaluator and an evolutionary loop to iteratively improve algorithmic designs.
The workflow lets engineers define an evaluation harness and a code skeleton; a prompt sampler then assembles tasks for the LLMs, the LLMs generate new candidate programs, automated evaluators score the results, and the strongest candidates are stored in a program database that seeds the next generation of prompts.
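The loop described above can be sketched in a few lines. This is a minimal illustration of the sample → generate → evaluate → store cycle, not DeepMind's actual implementation; the function names (`evaluate`, `llm_propose`) and the selection scheme are assumptions made for the sketch.

```python
import random

def evolve(initial_program, evaluate, llm_propose, generations=100, pool_size=20):
    """Toy AlphaEvolve-style evolutionary loop (illustrative only).

    evaluate(program)    -> numeric score (higher is better)
    llm_propose(program) -> a mutated candidate, here standing in for
                            an LLM asked to modify the parent's code
    """
    database = [(evaluate(initial_program), initial_program)]
    for _ in range(generations):
        # Prompt sampling: pick a promising parent from a small tournament.
        _, parent = max(random.sample(database, min(3, len(database))))
        child = llm_propose(parent)       # "LLM" generates a new variant
        score = evaluate(child)           # automated evaluator grades it
        database.append((score, child))
        # Keep only the best candidates to seed the next generation.
        database = sorted(database, reverse=True)[:pool_size]
    return max(database)
```

In the real system the "programs" are source files, the evaluator runs compiled code against benchmarks, and the database also tracks diversity, but the control flow is the same.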
AlphaEvolve is already deployed internally at Google, where it has contributed to several core services.
Engineering impact: AlphaEvolve discovered a simple yet effective heuristic for Google's Borg data-center scheduler, reclaiming about 0.7% of Google's worldwide compute capacity. In TPU chip design, it proposed a Verilog edit that simplified a matrix-multiplication circuit slated for an upcoming TPU. For AI training and inference, it found a smarter way to decompose large matrix-multiplication operations, speeding up a key kernel in Gemini's training by 23% and cutting overall training time by roughly 1%, and it accelerated a FlashAttention kernel implementation on GPUs by up to 32.5%.
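Google has not published the Borg heuristic itself, but the flavor of "simple yet effective" scheduling rules it evolved can be illustrated with a toy example. Everything here is hypothetical, including the idea of scoring machines by leftover-resource imbalance; it is meant only to show the kind of compact function an evaluator can grade against a workload simulator.

```python
def pick_machine(machines, job):
    """Toy greedy placement heuristic (purely illustrative, not Borg's rule).

    Choose the feasible machine whose leftover CPU and memory are most
    balanced, reducing 'stranded' capacity where one resource is exhausted
    long before the other.
    """
    def imbalance(m):
        cpu_left = m["cpu"] - job["cpu"]
        mem_left = m["mem"] - job["mem"]
        return abs(cpu_left - mem_left)

    feasible = [m for m in machines
                if m["cpu"] >= job["cpu"] and m["mem"] >= job["mem"]]
    return min(feasible, key=imbalance) if feasible else None
```

A heuristic like this is a few lines long, easy to audit, and cheap to score in simulation, which is exactly what makes this class of problem a good fit for an evolutionary search.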
Mathematical breakthroughs: AlphaEvolve generated new matrix-multiplication algorithms, including one that multiplies two 4×4 complex-valued matrices using 48 scalar multiplications, surpassing Strassen's 1969 algorithm, which had remained the best known for this setting. Applied to more than 50 open problems from analysis, geometry, combinatorics, and number theory, it rediscovered the best known solutions in about 75% of cases and improved on them in roughly 20%. Notably, it advanced the 11-dimensional kissing-number problem by finding a configuration of 593 outer spheres, raising the known lower bound.
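The 48-multiplication result is easiest to appreciate by counting. A 4×4 product can be treated as a 2×2 product of 2×2 blocks; applying Strassen's 7-multiplication scheme recursively at both levels gives 7² = 49 scalar multiplications, versus 4³ = 64 for the schoolbook method, and AlphaEvolve's algorithm undercuts Strassen by one:

```python
def strassen_mults(n):
    """Scalar multiplications to multiply two n x n matrices (n a power
    of 2) when Strassen's 7-multiplication scheme is applied recursively."""
    return 1 if n == 1 else 7 * strassen_mults(n // 2)

def classical_mults(n):
    """Schoolbook count: one multiplication per (i, j, k) triple."""
    return n ** 3

print(classical_mults(4))  # 64
print(strassen_mults(4))   # 49 -- AlphaEvolve's algorithm needs only 48
```

One saved multiplication sounds marginal, but since fast matrix multiplication is applied recursively to ever larger matrices, a lower count at the base case compounds into an asymptotic improvement.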
Researchers can explore the results via a public Colab notebook ( https://colab.research.google.com/github/google-deepmind/alphaevolve_results/blob/master/mathematical_results.ipynb ) and read the full paper ( https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf ). An early‑access program is also open for academic collaborators ( https://docs.google.com/forms/d/e/1FAIpQLSfaLUgKtUOJWdQtyLNAYb3KAkABAlKDmZoIqPbHtwmy3YXlCg/viewform ).
Overall, AlphaEvolve demonstrates how large language models can evolve beyond code generation to autonomously discover and refine sophisticated algorithms, bridging engineering optimization and fundamental mathematical research.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.