
Performance Evaluation of Inserting Hundreds of Millions of Records into MySQL Using Java, MyBatis, and JDBC

This article experimentally compares three Java-based insertion strategies—MyBatis without a transaction, plain JDBC with and without a transaction, and JDBC batch processing with and without a transaction—measuring how each affects the time required to insert tens of millions of rows into MySQL.


The author needed a large data set to evaluate MySQL query performance, so random person records (ID, name, gender, age, email, phone, address) were generated and inserted into a MySQL database for testing.

Three insertion strategies were tested in five scenarios:

MyBatis lightweight framework insertion (no transaction)

Direct JDBC handling (with and without transaction)

JDBC batch processing (with and without transaction)
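All three strategies rely on a RandomValue helper class whose source the article never shows. A rough, self-contained stand-in (every name and format below is a guess, not the author's actual implementation) might look like:

```java
import java.util.Random;

// Hypothetical stand-in for the article's RandomValue helper (its source is not shown).
public class RandomValueSketch {
    private static final Random RND = new Random();
    private static final String[] SURNAMES = {"Zhang", "Wang", "Li", "Zhao", "Liu"};
    public static String name_sex = "M"; // the article reads a field, not a method

    // Random int in [start, end)
    public static int getNum(int start, int end) {
        return start + RND.nextInt(end - start);
    }

    public static String getChineseName() {
        return SURNAMES[RND.nextInt(SURNAMES.length)] + getNum(1, 1000);
    }

    // Random lowercase local part of length in [min, max), at a fixed domain
    public static String getEmail(int min, int max) {
        int len = getNum(min, max);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < len; i++) sb.append((char) ('a' + RND.nextInt(26)));
        return sb.append("@example.com").toString();
    }

    // 11-digit phone-like string starting with 1
    public static String getTel() {
        StringBuilder sb = new StringBuilder("1");
        for (int i = 0; i < 10; i++) sb.append(RND.nextInt(10));
        return sb.toString();
    }

    public static String getRoad() {
        return "Road-" + getNum(1, 1000);
    }
}
```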

Strategy 1 – MyBatis without transaction

MyBatis performs ORM mapping for every record, and without a transaction each insert is auto-committed individually, which makes bulk inserts slow.

// No transaction in this code
private long begin = 33112001; // starting id
private long end = begin + 100000; // rows to insert per loop
private String url = "jdbc:mysql://localhost:3306/bigdata?useServerPrepStmts=false&rewriteBatchedStatements=true&useUnicode=true&characterEncoding=UTF-8";
private String user = "root";
private String password = "0203";

@org.junit.Test
public void insertBigData2(){
    ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
    PersonMapper pMapper = (PersonMapper) context.getBean("personMapper");
    Person person = new Person();
    long bTime = System.currentTimeMillis();
    for(int i=0;i<5000000;i++){
        person.setId(i);
        person.setName(RandomValue.getChineseName());
        person.setSex(RandomValue.name_sex);
        person.setAge(RandomValue.getNum(1,100));
        person.setEmail(RandomValue.getEmail(4,15));
        person.setTel(RandomValue.getTel());
        person.setAddress(RandomValue.getRoad());
        pMapper.insert(person);
        begin++;
    }
    long eTime = System.currentTimeMillis();
    System.out.println("Time to insert 5,000,000 rows (ms): " + (eTime - bTime));
}
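The article does not show the MyBatis mapping behind pMapper.insert(person). A minimal mapper entry might look like the following (the table and column names are assumed to match the Person fields; this XML is not from the original article):

```xml
<!-- Hypothetical PersonMapper.xml entry -->
<insert id="insert" parameterType="Person">
    INSERT INTO person (id, name, sex, age, email, tel, address)
    VALUES (#{id}, #{name}, #{sex}, #{age}, #{email}, #{tel}, #{address})
</insert>
```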

The test was stopped after about 520,000 rows had been inserted, which took roughly 7–9 minutes; inserting 10,000 rows took 28.6 seconds.

Strategy 2 – JDBC direct handling

Two modes were evaluated: with transaction (auto‑commit disabled) and without transaction.

// Transaction enabled in this code
private long begin = 33112001; // starting id
private long end = begin + 10000; // rows to insert per loop iteration
private String url = "jdbc:mysql://localhost:3306/bigdata?useServerPrepStmts=false&rewriteBatchedStatements=true&useUnicode=true&characterEncoding=UTF-8";
private String user = "root";
private String password = "0203";

@org.junit.Test
public void insertBigData3(){
    Connection conn = null;
    PreparedStatement pstm = null;
    try{
        Class.forName("com.mysql.jdbc.Driver");
        conn = DriverManager.getConnection(url, user, password);
        conn.setAutoCommit(false);
        String sql = "INSERT INTO person VALUES (?,?,?,?,?,?,?)";
        pstm = conn.prepareStatement(sql);
        long bTime1 = System.currentTimeMillis();
        for(int i=0;i<10;i++){
            long bTime = System.currentTimeMillis();
            while(begin < end){
                pstm.setLong(1, begin);
                pstm.setString(2, RandomValue.getChineseName());
                pstm.setString(3, RandomValue.name_sex);
                pstm.setInt(4, RandomValue.getNum(1,100));
                pstm.setString(5, RandomValue.getEmail(4,15));
                pstm.setString(6, RandomValue.getTel());
                pstm.setString(7, RandomValue.getRoad());
                pstm.execute();
                begin++;
            }
            conn.commit();
            end += 10000;
            long eTime = System.currentTimeMillis();
            System.out.println("Time to insert 10,000 rows (ms): " + (eTime - bTime));
        }
        long eTime1 = System.currentTimeMillis();
        System.out.println("Total time to insert 100,000 rows (ms): " + (eTime1 - bTime1));
    }catch(SQLException e){
        e.printStackTrace();
    }catch(ClassNotFoundException e1){
        e1.printStackTrace();
    }finally{
        // Close JDBC resources in reverse order of acquisition
        try{
            if(pstm != null) pstm.close();
            if(conn != null) conn.close();
        }catch(SQLException e2){e2.printStackTrace();}
    }
}

Without transaction, inserting 100,000 rows took about 212 seconds (≈21.2 s per 10,000 rows). With transaction, the same amount took about 39 seconds (≈3.9 s per 10,000 rows).
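As a quick sanity check on those figures, the throughput difference can be computed directly. The timings below are the ones reported above; the class itself is purely illustrative arithmetic, not part of the benchmark:

```java
// Illustrative arithmetic only: converts the reported timings into rows/second.
public class JdbcThroughput {
    static double rowsPerSecond(long rows, double seconds) {
        return rows / seconds;
    }

    public static void main(String[] args) {
        double noTx = rowsPerSecond(100_000, 212.0);   // JDBC, auto-commit per row
        double withTx = rowsPerSecond(100_000, 39.0);  // JDBC, one commit per 10,000 rows
        System.out.printf("no tx: %.0f rows/s, with tx: %.0f rows/s, speedup: %.1fx%n",
                noTx, withTx, withTx / noTx);
    }
}
```

Wrapping the inserts in a transaction gives roughly a 5.4x speedup, because MySQL no longer flushes and commits after every single row.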

Strategy 3 – JDBC batch processing

Batch processing required enabling the rewriteBatchedStatements flag in the connection URL, keeping the PreparedStatement creation outside the loop, and replacing the per-row execute() call with addBatch() inside the loop plus a single executeBatch() afterwards.

String url = "jdbc:mysql://localhost:3306/User?rewriteBatchedStatements=true&useServerPrepStmts=false";

private long begin = 33112001; // starting id
private long end = begin + 100000; // rows to insert per loop iteration
private String url = "jdbc:mysql://localhost:3306/bigdata?useServerPrepStmts=false&rewriteBatchedStatements=true&useUnicode=true&characterEncoding=UTF-8";
private String user = "root";
private String password = "0203";

@org.junit.Test
public void insertBigData(){
    Connection conn = null;
    PreparedStatement pstm = null;
    try{
        Class.forName("com.mysql.jdbc.Driver");
        conn = DriverManager.getConnection(url, user, password);
        //conn.setAutoCommit(false); // optional
        String sql = "INSERT INTO person VALUES (?,?,?,?,?,?,?)";
        pstm = conn.prepareStatement(sql);
        long bTime1 = System.currentTimeMillis();
        for(int i=0;i<10;i++){
            long bTime = System.currentTimeMillis();
            while(begin < end){
                pstm.setLong(1, begin);
                pstm.setString(2, RandomValue.getChineseName());
                pstm.setString(3, RandomValue.name_sex);
                pstm.setInt(4, RandomValue.getNum(1,100));
                pstm.setString(5, RandomValue.getEmail(4,15));
                pstm.setString(6, RandomValue.getTel());
                pstm.setString(7, RandomValue.getRoad());
                pstm.addBatch();
                begin++;
            }
            pstm.executeBatch();
            //conn.commit(); // optional
            end += 100000;
            long eTime = System.currentTimeMillis();
            System.out.println("Time to insert 100,000 rows (ms): " + (eTime - bTime));
        }
        long eTime1 = System.currentTimeMillis();
        System.out.println("Total time to insert 1,000,000 rows (ms): " + (eTime1 - bTime1));
    }catch(SQLException e){
        e.printStackTrace();
    }catch(ClassNotFoundException e1){
        e1.printStackTrace();
    }finally{
        // Close JDBC resources in reverse order of acquisition
        try{
            if(pstm != null) pstm.close();
            if(conn != null) conn.close();
        }catch(SQLException e2){e2.printStackTrace();}
    }
}

Results:

Without transaction, batch inserting 1,000,000 rows took ~21.7 s (≈2.1 s per 100,000 rows).

With transaction, the same volume took ~19.0 s (≈1.9 s per 100,000 rows).
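Much of this speedup comes from rewriteBatchedStatements=true, which lets MySQL Connector/J collapse a batch of identical parameterized INSERTs into one multi-row INSERT, so the server parses and executes a single statement per batch. A toy sketch of the rewriting idea (not the driver's actual implementation):

```java
import java.util.Collections;

// Toy illustration of what rewriteBatchedStatements=true effectively does:
// N addBatch()'ed rows become one multi-row INSERT statement.
public class BatchRewriteSketch {
    static String rewrite(String insertPrefix, String valuesTemplate, int batchSize) {
        return insertPrefix + " VALUES "
                + String.join(",", Collections.nCopies(batchSize, valuesTemplate));
    }

    public static void main(String[] args) {
        // e.g. INSERT INTO person VALUES (?,?,?,?,?,?,?),(?,?,?,?,?,?,?),(?,?,?,?,?,?,?)
        System.out.println(rewrite("INSERT INTO person", "(?,?,?,?,?,?,?)", 3));
    }
}
```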

Conclusion

Combining JDBC batch processing with transaction management yields the fastest insertion speed for massive single-table data loads; in the author's final test, inserting 100 million rows took about 174.8 seconds.

Tags: Java, Batch Processing, Performance Testing, MySQL, Database Optimization, MyBatis, JDBC
Written by

Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
