The Evolution of AI and Intelligent Computing: History, Risks, and China’s Roadmap
This lecture outlines the rapid rise of generative AI models, traces the four eras of computing technology, examines the four stages of intelligent computing, highlights AI safety challenges, and proposes strategic choices for China to advance secure, affordable, and open‑source AI-driven intelligent computing.
AI and Intelligent Computing Development
Artificial intelligence has entered an explosive growth phase driven by generative large‑model AI. In November 2022 OpenAI released ChatGPT, which reached 100 million users within two months and sparked a wave of large‑model development, including Gemini, Wenxin Yiyan (ERNIE Bot), Copilot, LLaMA, SAM, and Sora, making 2022 widely regarded as the "year of large models." AI breakthroughs are increasingly empowering all industries, and the Chinese leadership emphasizes AI as a key driver for high‑quality development.
I. Overview of Computing Technology
The history of computing can be divided into four generations: the mechanical era (abacus), the electronic era (electronic components and computers), the network era (Internet), and the current intelligent‑computing era.
Early devices ranged from manual aids to semi‑automatic calculators, with milestones such as Leibniz's stepped reckoner of 1672. Charles Babbage's Difference Engine and Analytical Engine introduced automatic mechanical computation, and Ada Lovelace wrote the first published algorithm, effectively separating software from hardware.
In the first half of the 20th century, Boolean algebra, the Turing machine, the von Neumann architecture, and the transistor formed the scientific foundations of modern computing, enabling rapid industry growth.
Since the first electronic computer ENIAC in 1946, five platform types have emerged: high‑performance computing, enterprise servers, personal computers, smartphones, and embedded computers. A sixth “intelligent‑computing” platform is still emerging.
Modern computing development is often described as three eras: IT 1.0 (electronic computing, 1950‑1970), IT 2.0 (network computing, 1980‑2020) and IT 3.0 (intelligent computing, from 2020), which adds the “thing” dimension and embeds AI across cloud, edge and devices.
II. Intelligent Computing Development
Intelligent computing, encompassing AI technologies and their hardware, has progressed through four stages: general‑purpose computers, logic‑reasoning expert systems, deep‑learning systems, and large‑model systems.
The first stage began with ENIAC, the first general‑purpose electronic computer (1946), inspired by Turing's and von Neumann's attempts to emulate brain‑like reasoning.
The second stage (around 1990) featured expert systems that encoded human knowledge as symbolic rules, but suffered from knowledge‑base construction costs and limited scalability.
The third stage (circa 2014) introduced deep‑learning systems, dramatically improving pattern‑recognition performance through large neural networks and specialized hardware such as GPUs and AI‑accelerators.
The fourth stage (2020 onward) is dominated by large‑model systems. Models such as GPT‑3 and ChatGPT combine massive parameter counts (175 billion for GPT‑3) with training corpora running from hundreds of billions to trillions of tokens, and their training requires thousands of high‑end GPUs. New AI‑centric supercomputers built around thousands of H100‑class chips are being constructed to meet these demands.
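The compute demands described above can be put on a back‑of‑envelope footing with the widely used approximation that training a dense transformer costs about 6 FLOPs per parameter per token. This is a rule of thumb, not a figure from the lecture, and the GPU throughput and utilization numbers below are illustrative assumptions:

```python
# Back-of-envelope training-compute estimate using the common
# "FLOPs ~ 6 * parameters * tokens" rule of thumb for dense transformers.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

def gpu_days(total_flops: float, gpu_flops_per_s: float,
             utilization: float = 0.4) -> float:
    """Days of single-GPU time at the given sustained utilization."""
    seconds = total_flops / (gpu_flops_per_s * utilization)
    return seconds / 86_400

# Illustrative numbers: a 175B-parameter model trained on 300B tokens,
# on a GPU sustaining 1e15 FLOP/s peak (a hypothetical round figure).
flops = training_flops(175e9, 300e9)
days = gpu_days(flops, 1e15)
print(f"total FLOPs: {flops:.2e}")
print(f"single-GPU days at 40% utilization: {days:,.0f}")
```

Dividing the single‑GPU days by the size of a GPU cluster gives a rough wall‑clock estimate, which is why training runs of this scale require thousands of accelerators.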
Large models bring three transformative effects: scaling laws under which performance improves predictably with model size, data, and compute; explosive demand for computing resources; and profound impacts on the labor market.
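The scaling‑law effect is often summarized by an empirical power‑law fit of the following form (one common parameterization from the literature, not stated in the lecture; E, A, B, alpha, and beta are fitted constants):

```latex
% Loss falls predictably as parameters N and training tokens D grow.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The practical consequence is that, within the fitted regime, adding parameters or training data buys a predictable reduction in loss, which is what drives the race for ever‑larger models and clusters.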
Future AI frontiers include multimodal large models, video‑generation models such as Sora, embodied intelligence for robots and autonomous vehicles, and AI‑for‑research (AI4R) that could accelerate scientific discovery.
III. AI Safety Risks
AI’s rapid progress also creates significant safety risks that must be addressed technically and legally.
Key threats include widespread misinformation (deep‑fake digital avatars, fabricated videos of leaders, fake news generated by LLMs), voice‑cloning fraud, illicit generation of non‑consensual sexual content, and the “hallucination” problem where models produce false statements or biased outputs.
Large models also raise trustworthiness concerns: factual errors, political bias, susceptibility to adversarial prompts, and data‑security issues where user inputs become part of training data.
Regulatory responses have emerged worldwide, including China's AI ethics guidelines (2021) and machine‑learning security standards (2022), the EU AI Act (2024), and the U.S. Blueprint for an AI Bill of Rights (2022).
IV. China’s Intelligent‑Computing Challenges
China faces several obstacles: lagging behind the U.S. in core AI talent, algorithms, foundational models, training data and compute; export bans on high‑end chips (A100, H100, B200); a weak domestic AI‑software ecosystem compared with NVIDIA’s CUDA; and high cost and technical barriers for AI adoption in non‑Internet industries.
V. China’s Path Choices for Intelligent Computing
Three strategic routes are proposed:
Path A – Compatibility with the U.S.‑led "A" ecosystem: adopt CUDA‑compatible hardware and software, but accept limited control and heavy dependence.
Path B – Closed, specialized "B" ecosystem: build proprietary stacks for vertical domains (military, weather, judiciary) on domestic chips, offering full control but limited openness.
Path C – Global, open‑source "C" ecosystem: leverage open standards (e.g., RISC‑V) and shared models and data to create a collaborative, low‑cost stack.
Further choices include whether to prioritize algorithmic breakthroughs or new infrastructure that delivers data, compute, and algorithms as foundational services, and how to balance AI investment between the virtual and real economies; China favors strong support for the real sector (manufacturing, energy, transportation).
Ultimately, China should pursue affordable, trustworthy AI that narrows the digital divide, empowers small‑and‑medium enterprises, and sustains high‑quality development of its manufacturing and other core industries.