How Online DDL Impacts TPS/QPS in OceanBase: A Comprehensive Test
This article presents a systematic performance test of OceanBase's Online DDL capability: it measures the TPS/QPS impact of a broad set of DDL operations and walks through the test environment, scripts, results, and conclusions, providing a basis for reliable database schema changes.
1. Test Objectives
Validate OceanBase's Online DDL capability – assess compatibility and stability of online transactions (DML) under continuous workload.
Quantify performance impact – measure how different DDL actions (index changes, column modifications, partition adjustments) affect transaction throughput (TPS/QPS) and latency.
Provide optimization guidance – supply data to choose appropriate DDL execution windows and configuration parameters for production.
2. Test Significance
Risk prediction – identify high‑risk DDL types that could cause service interruption.
Performance baseline – establish OceanBase performance benchmarks under mixed‑load scenarios.
Technical selection reference – compare Online vs. Offline DDL implementations.
3. Test Methodology
Test Toolchain
Sysbench – generates OLTP read/write mixed load.
Shell script – automates execution of DDL command sequences.
Python script – parses logs and visualizes TPS/QPS trends.
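To make the toolchain concrete: sysbench's `--report-interval` output has a fixed, regex-friendly shape, which is what the Python script relies on. A minimal sketch of that parsing (the sample line below is illustrative of the sysbench 1.0 report format; the `(r/w/o: ...)` part is elided):

```python
import re

# A representative sysbench --report-interval line (illustrative values):
line = "[ 10s ] thds: 30 tps: 1543.21 qps: 30864.20 (r/w/o: ...) lat (ms,95%): 27.17 err/s: 0.00 reconn/s: 0.00"

# Pull out elapsed time, throughput, and 95th-percentile latency.
elapsed = int(re.search(r'\[ (\d+)s \]', line).group(1))
tps = float(re.search(r'tps: ([\d.]+)', line).group(1))
qps = float(re.search(r'qps: ([\d.]+)', line).group(1))
lat = float(re.search(r'lat \(ms,95%\): ([\d.]+)', line).group(1))
print(elapsed, tps, qps, lat)  # 10 1543.21 30864.2 27.17
```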
4. Test Process and Scripts
4.1 Test Environment Preparation
Generate two million test rows:
<code>/usr/bin/sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=dbtest1 --mysql-port=2881 --mysql-db=sbtest \
--mysql-user=root@spytest --mysql-password='your_password' \
--table_size=2000000 --tables=1 --threads=30 \
--report-interval=10 --time=60 --db-driver=mysql prepare</code>
Initial table schema:
<code>CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`) BLOCK_SIZE 16384 LOCAL
) AUTO_INCREMENT=2000001 CHARSET=utf8mb4 ROW_FORMAT=DYNAMIC COMPRESSION='zstd_1.3.8' REPLICA_NUM=3 BLOCK_SIZE=16384 USE_BLOOM_FILTER=FALSE TABLET_SIZE=134217728 PCTFREE=0;</code>
4.2 DDL Commands to Test
<code>/* Add index */ ALTER TABLE sbtest1 ADD INDEX idx_k1 (k), ADD INDEX idx_c1(c);
/* Drop index */ ALTER TABLE sbtest1 DROP INDEX idx_k1;
/* Mixed index */ ALTER TABLE sbtest1 ADD INDEX idx_k_c (k, c), DROP INDEX idx_c1;
/* Rename index */ ALTER TABLE sbtest1 RENAME INDEX idx_k_c TO idx_k_c_new;
/* Append column */ ALTER TABLE sbtest1 ADD COLUMN c_new VARCHAR(60) NOT NULL DEFAULT 'abc';
/* Insert column after k */ ALTER TABLE sbtest1 ADD COLUMN c_new2 VARCHAR(60) AFTER k;
/* Add virtual column */ ALTER TABLE sbtest1 ADD COLUMN k_virtual INT AS (k % 100) VIRTUAL;
/* Add generated column */ ALTER TABLE sbtest1 ADD COLUMN c_hash CHAR(32) AS (MD5(c)) STORED;
/* Drop generated column */ ALTER TABLE sbtest1 DROP COLUMN c_hash;
/* Drop virtual column */ ALTER TABLE sbtest1 DROP COLUMN k_virtual;
/* Modify column to NULL */ ALTER TABLE sbtest1 MODIFY COLUMN c_new VARCHAR(60) NULL DEFAULT 'abc';
/* Modify column to NOT NULL */ ALTER TABLE sbtest1 MODIFY COLUMN c_new VARCHAR(60) NOT NULL DEFAULT 'abc';
/* Set column default */ ALTER TABLE sbtest1 MODIFY COLUMN c_new2 VARCHAR(60) DEFAULT '';
/* Remove column default */ ALTER TABLE sbtest1 MODIFY COLUMN c_new2 VARCHAR(60);
/* Drop column */ ALTER TABLE sbtest1 DROP COLUMN c_new2;
/* Rename column */ ALTER TABLE sbtest1 CHANGE COLUMN c_new c_new_new VARCHAR(60) NOT NULL DEFAULT 'abc';
/* Extend CHAR length */ ALTER TABLE sbtest1 MODIFY COLUMN c CHAR(200) NOT NULL;
/* Extend VARCHAR length */ ALTER TABLE sbtest1 MODIFY COLUMN c_new_new VARCHAR(200) NOT NULL DEFAULT 'abc';
/* Add DECIMAL column */ ALTER TABLE sbtest1 ADD COLUMN d1 DECIMAL(5,2) NOT NULL DEFAULT 0.00;
/* Extend DECIMAL precision */ ALTER TABLE sbtest1 MODIFY COLUMN d1 DECIMAL(10,4) NOT NULL DEFAULT 0.00;
/* Mixed column change */ ALTER TABLE sbtest1 MODIFY COLUMN c CHAR(255) NOT NULL, ADD COLUMN ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
/* Modify column type */ ALTER TABLE sbtest1 MODIFY COLUMN k VARCHAR(255) NOT NULL DEFAULT '';
/* Change compression */ ALTER TABLE sbtest1 COMPRESSION='lz4_1.9.1';
/* Add unique constraint */ ALTER TABLE sbtest1 ADD UNIQUE uk_pad (pad) GLOBAL;
/* Change row format */ ALTER TABLE sbtest1 ROW_FORMAT=COMPACT;
/* Change character set */ ALTER TABLE sbtest1 CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
/* Reorder column */ ALTER TABLE sbtest1 MODIFY COLUMN pad CHAR(60) AFTER k;
/* Convert to partitioned table */ ALTER TABLE sbtest1 PARTITION BY RANGE(id) (
PARTITION p0 VALUES LESS THAN (500000),
PARTITION p1 VALUES LESS THAN (1000000),
PARTITION p2 VALUES LESS THAN (1500000),
PARTITION p3 VALUES LESS THAN (3000000)
);
/* Add partition */ ALTER TABLE sbtest1 ADD PARTITION (PARTITION p4 VALUES LESS THAN MAXVALUE);
/* Drop partition */ ALTER TABLE sbtest1 DROP PARTITION p0;
/* Truncate partition */ ALTER TABLE sbtest1 TRUNCATE PARTITION p1;
/* Adjust partition parameters */ ALTER TABLE sbtest1 SET TABLET_SIZE=268435456 PCTFREE=10;
/* Adjust auto‑increment */ ALTER TABLE sbtest1 AUTO_INCREMENT=90000001;
/* Remove primary key auto‑increment */ ALTER TABLE sbtest1 MODIFY COLUMN id int(11) NOT NULL;
/* Add new auto‑increment column */ ALTER TABLE sbtest1 ADD COLUMN id2 int(11) NOT NULL AUTO_INCREMENT;
/* Remove auto‑increment from column */ ALTER TABLE sbtest1 MODIFY COLUMN id2 int(11) NOT NULL DEFAULT 0;
/* Add auto‑increment to primary key */ ALTER TABLE sbtest1 MODIFY COLUMN id int(11) NOT NULL AUTO_INCREMENT;
/* Attempt to drop primary key (fails) */ ALTER TABLE sbtest1 DROP COLUMN id;</code>
4.3 Test Procedure
Set ob_query_timeout to 300 s to avoid sysbench query timeouts during long-running DDL.
Run sysbench against sbtest1 and write its output to sysbench.log.
Execute each DDL statement, recording its start and end timestamps (in seconds since the test started) into ddl_timestamps.csv.
After all DDL statements have run, stop sysbench and run the Python parsing script to generate the performance plots.
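The CSV written in step 3 makes downstream analysis straightforward: each row yields both a human-readable label (from the leading `/* ... */` comment each command in config/ddl_commands.conf carries) and the DDL's wall-clock duration. A minimal sketch, assuming the `"<ddl>",start,end` row layout the test script writes (the `parse_ddl_row` helper is illustrative):

```python
import csv
import io
import re

def parse_ddl_row(row):
    """Turn one ddl_timestamps.csv row into (label, duration_seconds).

    The label comes from the leading /* ... */ comment on the command;
    the duration is end_sec - start_sec.
    """
    ddl, start_sec, end_sec = row
    m = re.match(r'/\*\s*(.*?)\s*\*/', ddl)
    label = m.group(1) if m else ddl
    return label, int(end_sec) - int(start_sec)

# One row in the layout the test script appends:
sample = '"/* Add index */ ALTER TABLE sbtest1 ADD INDEX idx_k1 (k)",10,35\n'
for row in csv.reader(io.StringIO(sample)):
    print(parse_ddl_row(row))  # ('Add index', 25)
```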
4.4 Test Script
<code>#!/bin/bash
OB_HOST="dbtest1"
OB_PORT=2881
OB_USER="root@spytest"
OB_PASS="your_password"
OB_DB="sbtest"
echo "Cleaning environment..."
rm -f ddl_timestamps.csv sysbench.log
mysql -h$OB_HOST -P$OB_PORT -u$OB_USER -p$OB_PASS -D$OB_DB -e "DROP TABLE IF EXISTS sbtest1"
echo "Preparing test data..."
/usr/bin/sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=$OB_HOST --mysql-port=$OB_PORT \
--mysql-db=$OB_DB --mysql-user=$OB_USER --mysql-password=$OB_PASS \
--table_size=2000000 --tables=1 --threads=30 --report-interval=10 \
--time=60 --db-driver=mysql prepare
echo "Starting sysbench load..."
/usr/bin/sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=$OB_HOST --mysql-user=$OB_USER \
--mysql-password=$OB_PASS --mysql-port=$OB_PORT --mysql-db=$OB_DB \
--table_size=2000000 --tables=1 --threads=30 --report-interval=1 \
--time=999999 --db-ps-mode=disable --db-driver=mysql run > sysbench.log &
SYSBENCH_PID=$!
TEST_START=$(date +%s)
sleep 10
TOTAL_CMD_NUM=$(wc -l < config/ddl_commands.conf)
CURRENT_CMD_NUM=0
while IFS= read -r ddl_cmd; do
    CURRENT_CMD_NUM=$((CURRENT_CMD_NUM + 1))
    echo "[$(date +'%T')] Starting DDL [$CURRENT_CMD_NUM / $TOTAL_CMD_NUM]: $ddl_cmd"
    start_sec=$(( $(date +%s) - TEST_START ))
    mysql -h$OB_HOST -P$OB_PORT -u$OB_USER -p$OB_PASS -D$OB_DB -e "$ddl_cmd"
    end_sec=$(( $(date +%s) - TEST_START ))
    echo "\"$ddl_cmd\",$start_sec,$end_sec" >> ddl_timestamps.csv
    sleep 20
done < config/ddl_commands.conf
kill $SYSBENCH_PID
wait
echo "Sysbench stopped"
python3.12 scripts/parse_log.py</code>
4.5 Plotting Script
<code>import pandas as pd
import matplotlib.pyplot as plt
import re
from matplotlib.ticker import AutoMinorLocator, MultipleLocator

def parse_sysbench_log(log_path):
    """Extract (elapsed_seconds, tps, qps, latency) from sysbench interval reports."""
    data = []
    with open(log_path) as f:
        for line in f:
            time_match = re.search(r'\[ (\d+)s \]', line)
            if not time_match:
                continue
            elapsed_seconds = int(time_match.group(1))
            tps_match = re.search(r'tps: ([\d.]+)', line)
            qps_match = re.search(r'qps: ([\d.]+)', line)
            lat_match = re.search(r'lat \(ms,95%\): ([\d.]+)', line)
            if tps_match and qps_match and lat_match:
                tps = float(tps_match.group(1))
                qps = float(qps_match.group(1))
                latency = float(lat_match.group(1))
                data.append([elapsed_seconds, tps, qps, latency])
    return pd.DataFrame(data, columns=['elapsed_seconds', 'tps', 'qps', 'latency'])

plt.rcParams['font.sans-serif'] = ['Heiti TC']  # set the font before plotting
ddl_df = pd.read_csv('ddl_timestamps.csv', names=['ddl', 'start_sec', 'end_sec'])
metrics_df = parse_sysbench_log('sysbench.log')
fig, ax1 = plt.subplots(figsize=(55, 8))
ax1.plot(metrics_df['elapsed_seconds'], metrics_df['tps'], 'b-', label='TPS')
ax1.set_xlabel('Time (s)')
ax1.set_ylabel('TPS', color='b')
ax1.tick_params(axis='y', labelcolor='b')
ax1.set_ylim(bottom=0, top=metrics_df['tps'].max() * 1.1)
ax1.xaxis.set_major_locator(MultipleLocator(100))
ax1.xaxis.set_minor_locator(AutoMinorLocator(10))
ax2 = ax1.twinx()
ax2.plot(metrics_df['elapsed_seconds'], metrics_df['qps'], 'r--', label='QPS')
ax2.set_ylabel('QPS', color='r')
ax2.tick_params(axis='y', labelcolor='r')
ax2.set_ylim(bottom=0, top=metrics_df['qps'].max() * 1.1)
# Shade each DDL's execution window and annotate it with the comment label.
for _, row in ddl_df.iterrows():
    ax2.axvspan(row['start_sec'], row['end_sec'], color='red', alpha=0.3, label=row['ddl'])
    ax2.text(row['start_sec'], metrics_df['tps'].max(), row['ddl'].split()[1], rotation=45)
plt.xlim(left=0)
lg = plt.legend(bbox_to_anchor=(1.05, 1.0), loc='upper left')
plt.savefig('results/ddl_impact.png', bbox_extra_artists=(lg,), bbox_inches='tight')</code>
5. Test Results
Results with 2,000,000 rows:
Results with 20,000,000 rows:
In the charts, the blue line represents TPS and the red line represents QPS. Both data volumes show similar TPS/QPS fluctuations when the same DDL statements are executed. In OceanBase, an Offline DDL causes a brief QPS drop after it is issued, while Online DDL behaves as documented, with no visible impact.
Conclusion: the test validates the official documentation on Online/Offline DDL. Combining these results with an internal operations-control platform makes it possible to define approval rules that let Online DDL pass automatically, avoiding on-call DBA intervention except for Offline DDL.
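The visual comparison can also be quantified by contrasting mean TPS inside each DDL window against the baseline outside all windows. A plain-Python sketch of that computation (the `ddl_impact` helper is illustrative; the plotting script's DataFrames carry the same fields):

```python
def ddl_impact(samples, windows):
    """Quantify per-DDL throughput impact.

    samples: list of (elapsed_seconds, tps) pairs from the sysbench log.
    windows: list of (ddl, start_sec, end_sec) from ddl_timestamps.csv.
    Returns [(ddl, mean_tps_in_window, drop_pct_vs_baseline), ...].
    """
    def inside(t, w):
        return w[1] <= t <= w[2]

    # Baseline: mean TPS over samples outside every DDL window.
    outside = [tps for t, tps in samples if not any(inside(t, w) for w in windows)]
    baseline = sum(outside) / len(outside)

    result = []
    for w in windows:
        in_tps = [tps for t, tps in samples if inside(t, w)]
        mean_in = sum(in_tps) / len(in_tps)
        result.append((w[0], mean_in, 100.0 * (1 - mean_in / baseline)))
    return result

# Synthetic example: TPS halves while a DDL runs from t=5 to t=6.
samples = [(t, 50.0 if 5 <= t <= 6 else 100.0) for t in range(1, 11)]
print(ddl_impact(samples, [('add index', 5, 6)]))  # [('add index', 50.0, 50.0)]
```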
6. Additional Notes
(1) Tests of auto‑increment column and primary‑key modifications are still pending.
(2) OMS (OceanBase Migration Service) can be used to carry out an Offline DDL as an online migration, reducing the TPS/QPS impact.
7. References
Online DDL and Offline DDL official documentation
https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000001503358
Xiaolei Talks DB – sharing daily database operations insights, from distributed databases to cloud migration. Author: Dai Xiaolei, 10+ years of database operations and development experience.