Operations · 11 min read

How to Diagnose and Resolve 900% CPU Spikes in MySQL and Java Processes

This article explains common scenarios that cause MySQL or Java processes to consume 900% CPU, walks through step‑by‑step diagnosis using Linux tools, and provides concrete optimization techniques such as indexing, caching, thread analysis, and code adjustments to bring CPU usage back to normal levels.


CPU usage exceeding 200% is a frequent production issue. This article focuses on two extreme cases, one MySQL and one Java, in which a single process reached 900% CPU, and shows how to diagnose and resolve each.

Scenario 1 – MySQL CPU Spike

When many concurrent, poorly performing SQL statements run without proper indexes, CPU can skyrocket. Enabling slow‑query logging makes it worse, because every slow statement also incurs logging overhead.

Diagnosis steps:

Use top to confirm mysqld is the culprit.

Run show processlist; to find heavy‑weight sessions.

Inspect the execution plan of the offending SQL, checking for missing indexes or large data scans.
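The steps above can be sketched in a MySQL session. The table and column names below reuse the article's later example (user, user_code); the query value is illustrative:

```sql
-- Find long-running sessions (watch the Time and State columns).
SHOW FULL PROCESSLIST;

-- Inspect the execution plan of a suspect statement.
EXPLAIN SELECT id
FROM user
WHERE user_code = 'xxxxx';
-- type = ALL with a large "rows" estimate means a full table scan,
-- i.e. there is no usable index on user_code.
```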

Remediation process:

Kill the problematic threads and observe CPU drop.

Add missing indexes, rewrite inefficient queries, and tune memory parameters.

Limit connection counts and enable caching (e.g., Redis) to reduce query frequency.
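A minimal sketch of the remediation steps, assuming MySQL 5.7+; the thread id and connection limit are illustrative values, not from the original incident:

```sql
-- Terminate the runaway session (id taken from SHOW PROCESSLIST).
KILL 12345;

-- Cap concurrent sessions; tune the value to your workload.
SET GLOBAL max_connections = 200;

-- Avoid extra logging overhead while the server is under pressure.
SET GLOBAL slow_query_log = 'OFF';
```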

Real MySQL case

A production query without an index on user_code caused CPU to stay above 900%.

Key commands used:

show processlist;
select id from user where user_code = 'xxxxx';
show index from user;

After adding the missing index and disabling the slow‑query log, CPU dropped to 70‑80%.
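The fix described above could look like the following; the index name is illustrative:

```sql
-- Add the missing index on the filtered column.
CREATE INDEX idx_user_code ON user (user_code);

-- Verify the optimizer now uses it (key column should show idx_user_code).
EXPLAIN SELECT id FROM user WHERE user_code = 'xxxxx';
```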

Scenario 2 – Java CPU Spike

Java processes normally stay within 100‑200% CPU, but under high concurrency, infinite loops, runaway garbage collection, or selector spin can push usage to 900%.

Diagnosis steps:

Identify the high‑CPU PID with top.

List threads of that PID using top -Hp <PID>.

Convert the thread ID to hex: printf "%x\n" <tid>.

Extract the thread stack with jstack -l <PID> > jstack_result.txt and grep the hex nid.

Locate the offending method in the source code.
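The pipeline above can be sketched as a shell session; the PID (4321) and thread ID (4389) are illustrative values:

```shell
# 1) top -b -n 1                 -> find the Java PID, e.g. 4321
# 2) top -b -n 1 -H -p 4321      -> list its threads; note the hottest TID, e.g. 4389
# 3) Convert the decimal TID to hex; jstack reports thread ids as hex "nid" values:
printf "%x\n" 4389               # prints 1125
# 4) jstack -l 4321 > jstack_result.txt
# 5) grep "nid=0x1125" jstack_result.txt   -> the hot thread's stack trace
```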

Typical culprits include empty loops, excessive object creation causing GC, and selector spin.

Code example causing CPU spin

The original loop continuously polled an empty LinkedBlockingQueue:

while (isRunning) {
    // Busy-wait: when the queue is empty, retry immediately,
    // burning a full CPU core without ever yielding.
    if (dataQueue.isEmpty()) {
        continue;
    }
    // Note: poll() can still return null if another consumer drains
    // the queue between the isEmpty() check and this call.
    byte[] buffer = device.getMinicap().dataQueue.poll();
    int len = buffer.length;
}

Because the queue stayed empty, the loop spun and consumed CPU.

Fix by using the blocking take() method, which waits for data instead of busy‑waiting:

while (isRunning) {
    try {
        // take() parks the thread until an element is available,
        // so an empty queue costs no CPU.
        byte[] buffer = device.getMinicap().dataQueue.take();
        // process buffer
    } catch (InterruptedException e) {
        // Restore the interrupt flag so callers can observe it,
        // and let the loop condition decide whether to exit.
        Thread.currentThread().interrupt();
    }
}

After redeploying, the Java process CPU fell below 10% and remained stable.
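A self-contained sketch of this blocking-consumer pattern; the class and variable names are illustrative, not from the original service:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingConsumerDemo {
    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<byte[]> dataQueue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                // take() parks the thread until an element arrives,
                // so waiting on an empty queue costs no CPU.
                byte[] buffer = dataQueue.take();
                System.out.println("consumed " + buffer.length + " bytes");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        TimeUnit.MILLISECONDS.sleep(100); // consumer is parked, not spinning
        dataQueue.put(new byte[16]);      // wakes the consumer
        consumer.join();
    }
}
```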

Key Takeaways

Avoid enabling slow‑query logs during high load; they can exacerbate CPU pressure.

Use show processlist and jstack to pinpoint problematic SQL or Java threads.

Add missing indexes, employ caching, and tune memory parameters for MySQL.

Replace busy‑polling loops with blocking queue operations in Java.

Consider connection limits and GC tuning as part of overall performance optimization.

Tags: Java, Performance Optimization, Linux, MySQL, Troubleshooting, CPU
Written by

Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
