
ChatGPT Boom Fuels Surge in AI Chip Demand, Boosting Nvidia, Samsung, and SK Hynix

The explosive growth of ChatGPT and other AI chatbots is driving unprecedented demand for high‑performance AI chips and high‑bandwidth memory, positioning Nvidia as the primary beneficiary while also creating significant market opportunities for Samsung, SK Hynix, and other semiconductor manufacturers.


ChatGPT’s user base has skyrocketed from one million to one hundred million within just two months, setting a new adoption record for AI chatbots.

The surge is reshaping the search market, with Microsoft’s Bing, Google’s Bard, Baidu, Alibaba, Yandex, Naver, Kakao and others planning to integrate AI chatbots into their services.

AI chatbots require training on massive datasets and fast computation, driving a sharp increase in demand for AI chips, with companies such as Nvidia, Samsung, and SK Hynix positioned to benefit. However, Nvidia’s server‑grade AI chips face heat and power‑consumption challenges, which is prompting Google, Amazon AWS, Samsung, SK Hynix, Baidu, and others to develop their own specialized AI chips.

Industry experts note that while traditional AI chips dominate today, they are approaching physical limits, and the future of AI processing may lie in quantum chips.

Nvidia is the biggest beneficiary of the ChatGPT wave because its AI GPUs are essential for training and inference of large models, effectively monopolizing the global server AI‑chip market.

OpenAI’s partnership with Nvidia and Microsoft has resulted in an Azure HPC cloud supercomputer comprising over 285,000 CPU cores and more than 10,000 AI chips, with each ChatGPT query costing roughly US $0.02 in GPU usage.
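The article’s ~US $0.02‑per‑query figure makes for a simple back‑of‑envelope serving‑cost estimate. A minimal sketch, where the cost per query comes from the article but the daily query volume is purely an illustrative assumption (not a reported number):

```python
# Back-of-envelope estimate of ChatGPT's GPU serving cost.
# COST_PER_QUERY_USD is the ~$0.02-per-query figure cited in the article;
# ASSUMED_QUERIES_PER_DAY is a hypothetical volume for illustration only.

COST_PER_QUERY_USD = 0.02
ASSUMED_QUERIES_PER_DAY = 10_000_000  # assumption, not a reported figure

daily_cost = COST_PER_QUERY_USD * ASSUMED_QUERIES_PER_DAY
annual_cost = daily_cost * 365

print(f"Daily GPU cost:  ${daily_cost:,.0f}")   # $200,000 under these assumptions
print(f"Annual GPU cost: ${annual_cost:,.0f}")  # $73,000,000 under these assumptions
```

Even at this hypothetical volume, GPU inference cost runs to tens of millions of dollars a year, which helps explain why analysts expect such large incremental chip sales for Nvidia.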

Analysts estimate that ChatGPT could generate $3‑$11 billion in additional sales for Nvidia within the next year, and data‑center revenue is expected to overtake gaming revenue, reaching 57% of Nvidia’s total sales.

While Intel and AMD also produce server AI chips, they lack Nvidia’s CUDA parallel‑processing advantage and hold only single‑digit market shares; Apple, Qualcomm and others focus on edge‑device AI chips.

The demand for high‑bandwidth memory (HBM) in Nvidia GPUs benefits Samsung, SK Hynix and Micron, as AI workloads require HBM3 for training and GDDR6/7 for inference, driving a surge in memory orders.

Both Samsung and SK Hynix anticipate growing demand for large‑capacity, high‑performance HBM and DRAM in servers, with SK Hynix noting a rapid shift from 64 GB to 128 GB server memory.

Although HBM offers superior bandwidth, its price is roughly three times that of the highest‑performance DRAM, making it a high‑margin product for memory manufacturers.

To address power‑consumption and performance bottlenecks, the industry is exploring low‑power AI chips and processing‑in‑memory (PIM) solutions, with Korea investing billions of won in PIM development.

Beyond traditional AI chips, quantum computing is seen as a potential breakthrough; however, practical quantum chips are still years away, with IBM, SK Telecom and China’s Benyuan Quantum pursuing large‑scale quantum processors.

Source: TechWeb (http://www.techweb.com.cn/viewpoint/2023-02-14/2919494.shtml).

Tags: ChatGPT, Nvidia, market analysis, semiconductor, AI hardware, AI chips, HBM
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
