The Future of Large Language Models: From Consumer Q&A to Agentic Workflows
Andrew Ng highlights that large language models are shifting from optimizing simple question‑answering for consumers to supporting complex agentic workflows, including tool usage, computer interaction, and multi‑agent collaboration, signaling a major evolution in AI capabilities.
Recently, Andrew Ng, founder of deeplearning.ai and former Baidu chief scientist, presented an insightful analysis on the evolution of large language models (LLMs) and predicted that the next emerging direction will be the optimization of agentic workflows.
Ng emphasizes that LLMs are transitioning from primarily answering consumer questions to supporting intelligent agent workflows such as tool usage, computer operations, and multi‑agent collaboration.
Model Optimization Toward Agentic Workflows
Earlier, the main goal of LLM training was to answer questions well. Since the launch of ChatGPT, the focus has been on improving user experience for consumer‑level queries and executing human‑directed tasks.
However, as AI applications expand, models are being upgraded to handle agentic tasks, raising their performance and versatility in these scenarios.
Previously, LLMs were fine‑tuned on instruction‑tuned datasets to generate targeted, practical answers for consumer‑oriented use cases.
Now, AI agents demand higher standards of model behavior, requiring capabilities such as self‑reflection, tool‑assisted decision making, detailed planning, and collaboration among multiple agents.
Evolution of Tool Use: From Prompt Engineering to Native Function Support
Tool invocation is a key function of AI agents. For example, answering a weather query requires the model to generate an API call rather than rely solely on its training data.
Before native function calling in models like GPT‑4, developers used complex prompt designs (e.g., ReAct variants) to coax the model into generating function‑call strings, which were then parsed externally.
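To make the older approach concrete, here is a minimal sketch of how an application might parse a tool invocation out of free‑form model text. The `Action: tool[argument]` format and the `get_weather` tool name are illustrative, not any specific framework's syntax:

```python
import re

# Pre-native-function-calling pattern: the prompt instructs the model to
# emit lines like "Action: get_weather[Beijing]", and the application
# extracts them from free text with a regex.
ACTION_PATTERN = re.compile(r"Action:\s*(\w+)\[(.*?)\]")

def parse_action(model_output: str):
    """Extract the first (tool_name, argument) pair, or None if absent."""
    match = ACTION_PATTERN.search(model_output)
    if match is None:
        return None
    return match.group(1), match.group(2)

# Simulated model output in a ReAct-style thought/action format.
output = "Thought: I need current weather data.\nAction: get_weather[Beijing]"
print(parse_action(output))  # ('get_weather', 'Beijing')
```

The fragility of this approach is the core problem: any deviation in the model's output format breaks the regex, which is why native structured function calling was such an improvement.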
With native function‑calling support, tool usage becomes more efficient and reliable, allowing LLMs to autonomously decide which functions to invoke for retrieval‑augmented generation, code execution, email sending, online ordering, and more.
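With native support, the model instead returns a structured tool call (a function name plus JSON arguments) that the application can dispatch directly. The sketch below assumes a generic response shape; the `get_weather` stub and the dict format are illustrative rather than any particular vendor's API:

```python
import json

def get_weather(city: str) -> str:
    # Stub standing in for a real weather API call.
    return f"Sunny in {city}"

# Registry mapping tool names the model may emit to Python callables.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a structured tool call to the registered function."""
    func = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return func(**args)

# Simulated structured response from a function-calling model.
call = {"name": "get_weather", "arguments": '{"city": "Beijing"}'}
print(dispatch(call))  # Sunny in Beijing
```

Because the arguments arrive as machine‑readable JSON rather than prose, the parsing step that made prompt‑engineered tool use brittle disappears entirely.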
Anthropic’s Breakthrough in Computer‑Use Capability
Anthropic announced that Claude can now use a computer, simulating mouse clicks and keyboard actions, enabling direct interaction with computer environments.
This marks a significant breakthrough in native computer‑interaction support from major LLM providers, simplifying development and accelerating the growth of robotic process automation (RPA) and other intelligent applications.
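Conceptually, a computer‑use agent runs a loop in which the model emits low‑level UI actions and an executor applies them to the environment. The action schema below is a hypothetical illustration, not Anthropic's actual API:

```python
from typing import Callable

def execute(action: dict, handlers: dict) -> str:
    """Apply one model-emitted UI action via the matching handler."""
    return handlers[action["type"]](action)

# Stub handlers that record actions instead of driving a real mouse
# and keyboard; a production executor would call OS-level input APIs.
log = []
handlers = {
    "click": lambda a: log.append(f"click at ({a['x']}, {a['y']})") or log[-1],
    "type":  lambda a: log.append(f"type {a['text']!r}") or log[-1],
}

# Simulated action sequence from the model.
for action in [{"type": "click", "x": 120, "y": 48},
               {"type": "type", "text": "weather Beijing"}]:
    execute(action, handlers)

print(log)
```

In a real system the loop would also feed screenshots back to the model after each action, so it can observe the result and decide its next step.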
Future Expectations
As models become better at agentic tasks, future LLMs will evolve from efficient answerers to multifunctional intelligent agents capable of information integration, task allocation, and execution in complex, multi‑tool, multi‑agent settings.
Ng identifies several key points:
Developers are guiding LLMs to perform specific agentic behaviors, often fine‑tuning them for reliability, though premature fine‑tuning should be avoided.
When capabilities like tool use become valuable, major LLM providers will embed them natively, driving substantial performance gains in agentic reasoning and planning over the next few years.
He concludes that the optimization direction of LLMs will increasingly adapt to agentic workflows, leading to major advances.
Following Anthropic’s announcement, Chinese company Zhipu AI introduced AutoGLM, enabling LLMs to operate smartphones for tasks such as hotel booking, searching, ordering food, and acting as office assistants for email and meeting notes, thereby boosting productivity.
Imagine a future where LLMs are not monolithic models but a collection of specialized agents, each fine‑tuned for specific workflow tasks, collaborating dynamically to accomplish complex objectives.
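That collaboration pattern can be sketched minimally as a coordinator routing subtasks to specialized agents and chaining their outputs. Here plain functions stand in for fine‑tuned models, and all names are illustrative:

```python
def research_agent(task: str) -> str:
    # Stand-in for an agent specialized in information gathering.
    return f"notes on {task}"

def writer_agent(notes: str) -> str:
    # Stand-in for an agent specialized in drafting output.
    return f"draft based on {notes}"

AGENTS = {"research": research_agent, "write": writer_agent}

def coordinator(objective: str) -> str:
    """Decompose an objective into steps and chain agent outputs."""
    notes = AGENTS["research"](objective)
    return AGENTS["write"](notes)

print(coordinator("LLM agent trends"))
```

Real multi‑agent frameworks add dynamic task allocation and message passing between agents, but the division of labor shown here is the essential idea.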
Many eagerly anticipate the day when AI can act independently to solve problems.