How to Build a High‑Availability GreatSQL MGR Cluster with Docker‑Compose
This article explains why distributed architecture matters for high‑performance internet systems, introduces GreatSQL as a distributed relational database, compares it with MySQL Community Edition, and walks step by step through a Docker‑Compose setup: creating, starting, and verifying a three‑node MGR cluster, plus integrating it with the PIG microservice platform.
For internet‑facing high‑performance, high‑concurrency, high‑availability systems, distributed architecture is an inevitable evolution.
SpringCloud and Service Mesh address application‑layer scaling and deployment.
Redis Cluster and Codis solve distributed caching.
Kubernetes advances the operating‑system layer for better resource efficiency.
At the database layer, sharding, high availability, and other governance concerns are increasingly handled by distributed (NewSQL) databases, a trend that shows no sign of slowing.
What Is GreatSQL
GreatSQL is a native distributed relational database featuring dynamic scaling, strong data consistency, and high‑availability clustering. Built on a shared‑nothing architecture, it combines data redundancy, replica management, sharding, and MPP technology to deliver high performance, and it supports dynamic node expansion. It is widely used in core systems across finance, telecom, energy, government, and internet companies, and is fully compatible with domestic operating systems, chips, and other hardware.
GreatSQL can serve as a free, drop‑in replacement for MySQL or Percona Server in production environments.
GreatSQL vs. MySQL Community Edition
Compared with MySQL Community Edition, GreatSQL adds:
geographic tags and a new flow‑control algorithm
InnoDB parallel query and transaction‑lock optimizations
better handling of network partitions, large transactions, and node‑failure recovery
improved consistent‑read performance and higher MGR throughput
no data loss on multi‑primary writes or single‑primary failover
faster cluster startup, robust disk‑full handling, and a fix for TCP self‑connect issues
Building an MGR Cluster
docker‑compose Setup
Use docker‑compose to create a three‑node MGR cluster for testing.
Additional network configuration and password settings are included for immediate use.
<code>version: '3'
services:
  mgr1:
    image: greatsql/greatsql
    container_name: mgr1
    hostname: mgr1
    restart: unless-stopped
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: root
      MYSQL_INIT_MGR: 1
      MYSQL_MGR_LOCAL: '172.27.0.2:33061'
      MYSQL_MGR_SEEDS: '172.27.0.2:33061,172.27.0.3:33061,172.27.0.4:33061'
    extra_hosts:
      - "mgr1:172.27.0.2"
      - "mgr2:172.27.0.3"
      - "mgr3:172.27.0.4"
    ports:
      - 3306:3306
    networks:
      mgr-net:
        ipv4_address: 172.27.0.2
  mgr2:
    image: greatsql/greatsql
    container_name: mgr2
    hostname: mgr2
    restart: unless-stopped
    depends_on:
      - "mgr1"
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: root
      MYSQL_INIT_MGR: 1
      MYSQL_MGR_LOCAL: '172.27.0.3:33061'
      MYSQL_MGR_SEEDS: '172.27.0.2:33061,172.27.0.3:33061,172.27.0.4:33061'
    extra_hosts:
      - "mgr1:172.27.0.2"
      - "mgr2:172.27.0.3"
      - "mgr3:172.27.0.4"
    networks:
      mgr-net:
        ipv4_address: 172.27.0.3
  mgr3:
    image: greatsql/greatsql
    container_name: mgr3
    hostname: mgr3
    restart: unless-stopped
    depends_on:
      - "mgr2"
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: root
      MYSQL_INIT_MGR: 1
      MYSQL_MGR_LOCAL: '172.27.0.4:33061'
      MYSQL_MGR_SEEDS: '172.27.0.2:33061,172.27.0.3:33061,172.27.0.4:33061'
    extra_hosts:
      - "mgr1:172.27.0.2"
      - "mgr2:172.27.0.3"
      - "mgr3:172.27.0.4"
    networks:
      mgr-net:
        ipv4_address: 172.27.0.4
networks:
  mgr-net:
    ipam:
      config:
        - subnet: 172.27.0.0/16</code>
Start MGR Service
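The three service definitions above differ only in the node name and IP address; everything else is boilerplate. A small Python sketch (a hypothetical helper, not part of the official image or article) can generate them and avoid copy‑paste drift:

```python
# Sketch: generate the per-node MGR service definitions used above.
# Node names and IPs match the compose file; only those vary per node.
NODES = {"mgr1": "172.27.0.2", "mgr2": "172.27.0.3", "mgr3": "172.27.0.4"}
SEEDS = ",".join(f"{ip}:33061" for ip in NODES.values())

def service(name: str, ip: str) -> dict:
    """Build one MGR service entry; name and IP are the only variables."""
    return {
        "image": "greatsql/greatsql",
        "container_name": name,
        "hostname": name,
        "restart": "unless-stopped",
        "environment": {
            "TZ": "Asia/Shanghai",
            "MYSQL_ROOT_PASSWORD": "root",
            "MYSQL_INIT_MGR": 1,
            "MYSQL_MGR_LOCAL": f"{ip}:33061",
            "MYSQL_MGR_SEEDS": SEEDS,
        },
        "extra_hosts": [f"{n}:{i}" for n, i in NODES.items()],
        "networks": {"mgr-net": {"ipv4_address": ip}},
    }

compose = {
    "version": "3",
    "services": {name: service(name, ip) for name, ip in NODES.items()},
    "networks": {"mgr-net": {"ipam": {"config": [{"subnet": "172.27.0.0/16"}]}}},
}
# mgr1 additionally publishes port 3306 to the host, as in the file above.
compose["services"]["mgr1"]["ports"] = ["3306:3306"]
print(compose["services"]["mgr2"]["environment"]["MYSQL_MGR_LOCAL"])  # → 172.27.0.3:33061
```

Dump the dict with PyYAML's `yaml.safe_dump` if you want the file regenerated; after writing it, `docker-compose up -d` brings up all three containers.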
Enter the PRIMARY node container (mgr1) and enable group replication.
<code>[root@greatsql]# docker exec -it mgr1 bash
[root@mgr1 /]# mysql
...
[root@GreatSQL][(none)]> set global group_replication_bootstrap_group=ON;
[root@GreatSQL][(none)]> start group_replication;
[root@GreatSQL][(none)]> set global group_replication_bootstrap_group=OFF;</code>
Turn the bootstrap flag back off once the group is up, so a later restart of mgr1 joins the existing group instead of bootstrapping a new one. Then start replication on the other two nodes.
<code>[root@greatsql]# docker exec -it mgr2 bash
[root@mgr2 /]# mysql
...
[root@GreatSQL][(none)]> start group_replication;
Query OK, 0 rows affected (2.76 sec)
[root@greatsql]# docker exec -it mgr3 bash
[root@mgr3 /]# mysql
...
[root@GreatSQL][(none)]> start group_replication;
Query OK, 0 rows affected (2.10 sec)</code>
Check MGR Status
<code>select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 07455a01-0898-11ec-ac58-0242ac1b0002 | mgr1 | 3306 | ONLINE | PRIMARY | 8.0.25 |
| group_replication_applier | 124a583d-0898-11ec-bbb7-0242ac1b0003 | mgr2 | 3306 | ONLINE | SECONDARY | 8.0.25 |
| group_replication_applier | 1bf7df25-0898-11ec-9dd5-0242ac1b0004 | mgr3 | 3306 | ONLINE | SECONDARY | 8.0.25 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.01 sec)</code>
PIG Microservice Integration
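The cluster is healthy when every member is ONLINE and, in single‑primary mode, exactly one member holds the PRIMARY role. A small Python sketch (a hypothetical helper) of that check, applied to rows shaped like the query output above:

```python
def cluster_healthy(members: list[dict]) -> bool:
    """True when all members are ONLINE and exactly one is PRIMARY
    (single-primary mode, as in the replication_group_members output)."""
    all_online = all(m["MEMBER_STATE"] == "ONLINE" for m in members)
    primaries = sum(m["MEMBER_ROLE"] == "PRIMARY" for m in members)
    return all_online and primaries == 1

# Rows mirroring the query result above (IDs omitted for brevity).
rows = [
    {"MEMBER_HOST": "mgr1", "MEMBER_STATE": "ONLINE", "MEMBER_ROLE": "PRIMARY"},
    {"MEMBER_HOST": "mgr2", "MEMBER_STATE": "ONLINE", "MEMBER_ROLE": "SECONDARY"},
    {"MEMBER_HOST": "mgr3", "MEMBER_STATE": "ONLINE", "MEMBER_ROLE": "SECONDARY"},
]
print(cluster_healthy(rows))  # → True
```

In practice you would feed it rows fetched from `performance_schema.replication_group_members` via any MySQL client library.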
Modify the hosts mapping of the existing pig‑mysql service to point to GreatSQL.
The latest PIG master (3.3) is fully compatible with GreatSQL 8.0.25‑15.
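Assuming PIG's compose file has its services connect to a host named pig‑mysql (the service name here is an assumption; check your own deployment), one minimal switch is an extra_hosts entry that resolves that name to the GreatSQL PRIMARY, for example:

<code>  pig-gateway:                     # any PIG service that connects to pig-mysql
    extra_hosts:
      - "pig-mysql:172.27.0.2"     # resolve pig-mysql to the GreatSQL PRIMARY (mgr1)</code>
Because GreatSQL is protocol‑compatible with MySQL, no datasource driver or JDBC URL changes should be needed beyond the host mapping.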
Conclusion
The PIG microservice platform currently supports the following enterprise‑grade distributed databases:
GreatSQL 8.0.25‑15
TiDB 4.x
OceanBase 3.1
References
SpringCloud: https://spring.io/projects/spring-cloud
Service Mesh: https://istio.io/latest/about/service-mesh
Codis: https://github.com/CodisLabs/codis
GreatSQL background: https://www.greatdb.com
PIG: https://pig4cloud.com
Java Architecture Diary