
SQL Slow Query Diagnosis and Optimization Techniques with a Practical Case Study

This article explains how to recognize slow‑query symptoms, collect problematic SQL statements, apply common query‑writing and indexing techniques, and read execution plans, then walks through a complete MySQL optimization case study.


When a query runs slowly, look first for system‑level signs: high CPU usage, severe I/O wait, long page response times, or timeout errors in application logs. Then check SQL‑level signs: overly long statements, long execution times, full‑table scans, or large rows and cost values in the execution plan.

Use sar or top to check the current system status, or monitoring tools like Prometheus and Grafana to observe server metrics.
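A quick command‑line sketch of those checks, assuming a Linux host (sar comes from the sysstat package, so it is shown commented out; the sampling intervals are illustrative):

```shell
# sar -u 1 3    # CPU utilization: %user, %system, %iowait over 3 one-second samples
# sar -d 1 3    # per-device I/O throughput and %util
uptime               # load averages at a glance
cat /proc/loadavg    # 1-, 5-, and 15-minute load averages plus run-queue size
```

A sustained high %iowait in sar output, or load averages well above the CPU count, point toward the I/O‑bound or CPU‑bound symptoms described above.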

Different databases provide different ways to obtain slow queries: MySQL offers the slow‑query log, LoadRunner, or Percona's pt‑query‑digest; Oracle provides AWR reports, internal views such as v$session_wait, and Grid Control; DM (DaMeng) also supports AWR, LoadRunner, and its own performance‑monitoring tools.
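For MySQL, the slow‑query log can be switched on at runtime; the one‑second threshold below is illustrative and should be tuned to your workload:

```sql
-- Enable the slow-query log at runtime (values are illustrative):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                    -- log statements slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';   -- also catch unindexed queries
SHOW VARIABLES LIKE 'slow_query_log_file';         -- where the log is written
```

The resulting log file can then be summarized with mysqldumpslow (bundled with MySQL) or pt‑query‑digest to rank the worst offenders.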

Common SQL writing tips include using appropriate indexes, preferring UNION ALL over UNION, avoiding SELECT *, indexing join columns, simplifying complex statements, and steering clear of patterns like WHERE 1=1 or ORDER BY RAND().
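Two of those tips as a short sketch, written against the case‑study tables a and b defined below (the filter values are made up for illustration):

```sql
-- UNION deduplicates via an extra sort/temporary table; UNION ALL skips that work
-- and is the right choice whenever duplicates are acceptable or impossible:
SELECT seller_name FROM a WHERE id < 100
UNION ALL
SELECT seller_name FROM b WHERE id < 100;

-- Name only the columns you need instead of SELECT *,
-- so the query can be served from a covering index where one exists:
SELECT id, seller_name FROM a WHERE seller_id = 42;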

Before optimizing, always examine the execution plan with EXPLAIN. The plan shows fields such as id, select_type, table, type, possible_keys, key, key_len, rows, filtered, and extra. A type of ALL, or an extra containing Using filesort or Using temporary, indicates a problematic query.
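Reading a plan is as simple as prefixing the statement with EXPLAIN; this sketch uses the case‑study tables defined below with a made‑up filter value:

```sql
EXPLAIN
SELECT a.seller_id, b.user_name
FROM a JOIN b ON a.seller_name = b.seller_name
WHERE b.user_id = 17;
-- Warning signs in the output:
--   type = ALL                         -> full-table scan
--   rows in the thousands or millions  -> large scan estimate
--   extra = Using filesort / Using temporary -> costly sort or temp table
```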

Below is a concrete optimization case for MySQL.

Table structures:

CREATE TABLE `a` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `seller_id` bigint(20) DEFAULT NULL,
    `seller_name` varchar(100) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
    `gmt_create` varchar(30) DEFAULT NULL,
    PRIMARY KEY (`id`)
);
CREATE TABLE `b` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `seller_name` varchar(100) DEFAULT NULL,
    `user_id` varchar(50) DEFAULT NULL,
    `user_name` varchar(100) DEFAULT NULL,
    `sales` bigint(20) DEFAULT NULL,
    `gmt_create` varchar(30) DEFAULT NULL,
    PRIMARY KEY (`id`)
);
CREATE TABLE `c` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `user_id` varchar(50) DEFAULT NULL,
    `order_id` varchar(100) DEFAULT NULL,
    `state` bigint(20) DEFAULT NULL,
    `gmt_create` varchar(30) DEFAULT NULL,
    PRIMARY KEY (`id`)
);

Original query (search orders within ±10 hours for a user):

SELECT a.seller_id,
       a.seller_name,
       b.user_name,
       c.state
FROM a, b, c
WHERE a.seller_name = b.seller_name
  AND b.user_id = c.user_id
  AND c.user_id = 17
  AND a.gmt_create BETWEEN DATE_ADD(NOW(), INTERVAL -600 MINUTE)
                      AND DATE_ADD(NOW(), INTERVAL 600 MINUTE)
ORDER BY a.gmt_create;

Initial execution time and plan showed full‑table scans (type=ALL) and large row counts, confirming a “bad smell”.
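The diagnosis itself is just EXPLAIN on the original query, followed by SHOW WARNINGS to surface the implicit type conversion (c.user_id is a varchar column compared with the integer literal 17):

```sql
EXPLAIN
SELECT a.seller_id, a.seller_name, b.user_name, c.state
FROM a, b, c
WHERE a.seller_name = b.seller_name
  AND b.user_id = c.user_id
  AND c.user_id = 17
  AND a.gmt_create BETWEEN DATE_ADD(NOW(), INTERVAL -600 MINUTE)
                       AND DATE_ADD(NOW(), INTERVAL 600 MINUTE)
ORDER BY a.gmt_create;
-- Expect type=ALL on every table: no secondary indexes exist yet, and the
-- varchar-vs-integer comparison on c.user_id would block index use anyway.
SHOW WARNINGS;  -- notes about the implicit type conversion appear here
```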

Optimization steps:

Align data types: change user_id from varchar(50) to int in tables b and c to avoid implicit conversion.

Add indexes on join columns: an index on c.user_id and a composite index on b(user_id, seller_name), which also covers lookups on b.user_id alone.

Create an index on a.seller_name to speed up the join with b .

Use a composite index covering a(gmt_create, seller_name, seller_id) to eliminate temporary tables and sorting.

SQL statements applied:

ALTER TABLE b MODIFY `user_id` INT(10) DEFAULT NULL;
ALTER TABLE c MODIFY `user_id` INT(10) DEFAULT NULL;
ALTER TABLE c ADD INDEX `idx_user_id`(`user_id`);
ALTER TABLE b ADD INDEX `idx_user_id_sell_name`(`user_id`,`seller_name`);
ALTER TABLE a ADD INDEX `idx_sellname_gmt_sellid`(`gmt_create`,`seller_name`,`seller_id`);

After these changes, execution time dropped dramatically and the new execution plan no longer contained ALL or costly extra warnings.

Further refinement converted a.gmt_create from varchar(30) to a proper DATETIME column, so the BETWEEN range compares real timestamps rather than strings:

ALTER TABLE a MODIFY `gmt_create` DATETIME DEFAULT NULL;

Final checks confirmed that the query now runs efficiently with a clean execution plan.
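The verification step is simply a re‑run of EXPLAIN on the (now explicitly joined) query, compared against the baseline plan; the expected plan changes below follow from the indexes added above:

```sql
EXPLAIN
SELECT a.seller_id, a.seller_name, b.user_name, c.state
FROM a
JOIN b ON a.seller_name = b.seller_name
JOIN c ON b.user_id = c.user_id
WHERE c.user_id = 17
  AND a.gmt_create BETWEEN DATE_ADD(NOW(), INTERVAL -600 MINUTE)
                       AND DATE_ADD(NOW(), INTERVAL 600 MINUTE)
ORDER BY a.gmt_create;
-- After the changes: type should be ref/range rather than ALL, key should name
-- the new indexes, and extra should no longer show Using temporary or
-- Using filesort, since the leading gmt_create column of the composite index
-- on a serves both the BETWEEN range and the ORDER BY.
```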

Optimization summary:

Inspect the execution plan with EXPLAIN.

Check warnings via SHOW WARNINGS.

Review table structures and indexes.

Based on the plan, adjust data types, add appropriate indexes, and rewrite SQL if needed.

Validate improvements by comparing execution times and plans.

Iterate the process until performance meets expectations.

Tags: SQL, Indexing, MySQL, Database Optimization, slow query, execution plan
Written by

Full-Stack Internet Architecture

Introducing full-stack Internet architecture technologies centered on Java
