
Step-by-Step Guide to Deploying a Nacos Cluster on Linux and Understanding Its Raft Leader Election

This article provides a comprehensive tutorial on setting up a Nacos cluster on Linux, covering environment preparation, database configuration, application property tuning, cluster file creation, startup procedures, microservice integration, and an in‑depth explanation of the Raft‑based leader election and data synchronization mechanisms.

Top Architect

Official recommendations suggest placing all services under a single VIP and exposing them via a domain name for better readability and easy IP replacement; the recommended access pattern is http://nacos.com:port/openAPI (internal SLB only).

Nacos cluster design highlights:

Microservices should access services via domain names rather than direct IPs, using DNS to resolve IPs and hide backend changes.

Nacos provides built‑in inter‑node communication on ports 8848 (API & data sync) and 7848 (leader election); a MySQL instance is required for persistent configuration, permissions, and history.

Each server should expose a virtual IP (VIP) bound by DNS to avoid exposing physical IPs and to provide a unified entry point.
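The VIP pattern described above is typically realized with an internal load balancer in front of the three nodes. A minimal sketch using nginx for console/Open API access over HTTP — the domain `nacos.example.com` and the upstream addresses are placeholder assumptions, not values from the article:

```nginx
# upstream pool of the three Nacos nodes (placeholder addresses)
upstream nacos-cluster {
    server ip1:8848;
    server ip2:8848;
    server ip3:8848;
}

server {
    listen 80;
    server_name nacos.example.com;  # the domain bound to the VIP

    location /nacos/ {
        proxy_pass http://nacos-cluster;
    }
}
```

Microservices then resolve the domain via DNS, so backend nodes can be swapped without touching client configuration.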

Nacos Cluster Deployment

Linux Deployment

Step 1 – Environment preparation. A Nacos cluster needs at least three nodes (an odd number) to form a valid Raft group. Install JDK 1.8 on each node and set JAVA_HOME.

Prepare a MySQL 5.7/8.0 instance for storing Nacos configuration and user data.
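The JDK setup from step 1 can be scripted per node; the install path below is an assumed example and should match wherever your JDK actually lives:

```shell
# point JAVA_HOME at the JDK 1.8 install (path is an assumed example)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME=$JAVA_HOME"
```

Adding the same two exports to each node's shell profile keeps the setting across reboots.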

Step 2 – Download Nacos. Visit https://github.com/alibaba/nacos/releases/, download version 2.0.2, upload the tarball to /usr/data on each server, and extract it:

tar -xvf nacos-server-2.0.2.tar.gz

Step 3 – Configure the database. Create a database named nacos_config and execute the script /usr/data/nacos/conf/nacos-mysql.sql to create the required tables (config_*, users, roles, permissions).
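Creating the database before loading the script might look like this; the database name comes from the article, while the character-set choice is an assumption:

```sql
-- create the schema that nacos-mysql.sql will populate
CREATE DATABASE nacos_config DEFAULT CHARACTER SET utf8mb4;
```

After creating it, load the tables with `mysql -u root -p nacos_config < /usr/data/nacos/conf/nacos-mysql.sql`.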

Step 4 – Configure Nacos data source. Edit /usr/data/nacos/conf/application.properties and set the MySQL connection, e.g.:

### Set the database platform to MySQL
spring.datasource.platform=mysql
### Count of DB: total number of databases
db.num=1
### Connect URL of DB
db.url.0=jdbc:mysql://xxx:3306/nacos_config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user=root
db.password=root

Step 5 – Cluster node configuration. Copy the example file cluster.conf.example to cluster.conf and list the three node addresses:

ip1:8848
ip2:8848
ip3:8848

Place the same cluster.conf on every Nacos server.
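Generating the file once and copying it out is less error-prone than editing it per node; ip1–ip3 below are the same placeholders as in the listing above:

```shell
# write cluster.conf (replace ip1..ip3 with the real node addresses)
cat > /tmp/cluster.conf <<'EOF'
ip1:8848
ip2:8848
ip3:8848
EOF
cat /tmp/cluster.conf
```

Then distribute it to each node, e.g. `scp /tmp/cluster.conf root@ip2:/usr/data/nacos/conf/`.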

Step 6 – Start Nacos servers. Run the startup script on each node:

sh /usr/data/nacos/bin/startup.sh

Do not add the "-m" flag; the default mode is cluster. Monitor the startup with tail -f /usr/data/nacos/logs/start.out. A successful startup logs a message such as "INFO Nacos started successfully in cluster mode. use external storage".

After all nodes report UP, access the console via http://ip:8848/nacos/#/clusterManagement to verify the cluster list.

Step 7 – Microservice integration. In a Spring Cloud application, configure the discovery address and credentials, for example:

# Application name (also the service ID)
spring.application.name=sample-service
# Nacos server addresses
spring.cloud.nacos.discovery.server-addr=ip1:8848,ip2:8848,ip3:8848
# Credentials
spring.cloud.nacos.discovery.username=nacos
spring.cloud.nacos.discovery.password=nacos
# Service port
server.port=9000

After starting the service, the three console URLs (http://ip1:8848/nacos/#/serviceManagement, etc.) will show identical service lists, confirming data synchronization.

Nacos Cluster Working Principle

Raft Leader Election

Nacos uses the Raft algorithm to elect a Leader node, which holds the authority to process data and issue commands. Each node can be a Leader, Candidate, or Follower.

Election triggers include:

Initial startup when no Leader exists.

Change in cluster membership.

Leader failure.

Terms (numeric epochs) are incremented for each election. A Candidate becomes Leader only after receiving votes from a majority of nodes.

Typical election flow:

All nodes start as Followers with term 0.

When a Follower's election timeout expires without hearing from a Leader, it increments its term, becomes a Candidate, and requests votes from the other nodes.

If a majority is obtained, the Candidate becomes Leader; if the vote splits, the election times out and a new election starts in the next term.

If the Leader crashes, remaining nodes hold a new election.

As long as the number of UP nodes is at least ⌊N/2⌋ + 1 (a majority), the cluster can elect a Leader and accept writes; with fewer nodes it may still answer basic read requests but cannot guarantee consistency.
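The majority rule reduces to one line of arithmetic; nothing below is Nacos-specific, it is just the Raft quorum formula:

```shell
# quorum (majority) size for an N-node Raft group: floor(N/2) + 1
for n in 3 5 7; do
  echo "$n nodes -> quorum $(( n / 2 + 1 ))"
done
```

So a 3-node cluster tolerates one failed node, and a 5-node cluster tolerates two.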

Data Synchronization Between Nodes

Only the Leader can write data. When a microservice registers with a Follower, the Follower forwards the registration request to the Leader, which appends it to its log and replicates the entry to the Followers. Followers acknowledge (ACK); once a majority has ACKed, the Leader commits the entry and returns a success response to the microservice.

If a Follower is unavailable, the Leader keeps retrying until the node catches up.
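The commit decision in the write path above is simply counting ACKs against the quorum. A minimal sketch — the node count and ACK sequence are made-up values, not Nacos internals:

```shell
n=3        # cluster size
acks=1     # the leader's own append counts as the first ack
acks=$(( acks + 1 ))   # follower 1 ACKs
acks=$(( acks + 1 ))   # follower 2 ACKs
quorum=$(( n / 2 + 1 ))
if [ "$acks" -ge "$quorum" ]; then
  echo "committed ($acks/$n acks, quorum $quorum)"
else
  echo "pending"
fi
```

This is why a lagging Follower does not block writes: the Leader answers the client as soon as a majority has acknowledged, and keeps retrying the stragglers in the background.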

Overall, the article walks through the entire Nacos cluster setup, explains the Raft‑based leader election, and details how microservices achieve consistent registration across the cluster.

Microservices · Service Discovery · Linux · Nacos · Raft · Cluster Deployment
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.