
Peer-to-Peer Decentralized Architecture and High‑Availability Configuration for Eureka Server Cluster

This article explains how to build a peer‑to‑peer, decentralized Eureka service‑registry cluster, covering the underlying architecture, CAP trade‑offs, Maven and YAML configurations for multiple peers, client load‑balancing, and a step‑by‑step test deployment.


Introduction

Deploying multiple Eureka instances eliminates single‑point failures and achieves high availability. Nodes communicate in a peer‑to‑peer fashion, forming a decentralized distributed architecture.

Peer‑to‑Peer Decentralized Architecture: Solving the Single‑Point Problem

By deploying several Eureka instances, each node registers with the others via peer-to-peer communication, keeping registry data synchronized without a central master.

In this setup, peer nodes register each other using the serviceUrl property, allowing every node to know the full list of services.

If a Eureka server goes down, clients automatically switch to another server in their list. When the failed server recovers, it rejoins the cluster and resumes participating in replication. This contrasts with ZooKeeper's centralized leader/follower model, which depends on a single elected leader.

Eureka Client Load Balancing

After a microservice registers with the registry (Eureka acknowledges with HTTP 204), the client periodically fetches the service list and caches it locally, enabling client-side load balancing for Feign and Ribbon calls.
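Conceptually, the client rotates across the instances in its local cache without touching the registry on the request path. The class below is an illustrative sketch of that round-robin selection, not Ribbon's actual API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin chooser over a locally cached instance list,
// mimicking what a Ribbon-style client does after fetching the registry.
class RoundRobinChooser {
    private final List<String> cachedInstances;            // e.g. host:port entries
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinChooser(List<String> cachedInstances) {
        this.cachedInstances = cachedInstances;
    }

    String choose() {
        // Rotate through the cached list; no registry round-trip per request.
        int i = Math.abs(position.getAndIncrement() % cachedInstances.size());
        return cachedInstances.get(i);
    }
}
```

Because the list is cached, calls keep succeeding for a while even if every registry node is briefly unreachable.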

Peer Node Communication Mechanism

Each peer node copies the full registration list from other peers, so every node maintains an identical replica of all services.

CAP Principle

The CAP theorem states that a distributed system can guarantee at most two of consistency (C), availability (A), and partition tolerance (P) at the same time. It became a practical concern as microservice architectures pushed data into distributed stores.

Eureka follows the AP model: it prioritizes availability and partition tolerance, allowing nodes to keep serving requests even when isolated, at the cost of temporary data inconsistency.

Eureka Server Cluster High‑Availability Configuration

Create three projects: dcp-eureka-peer1, dcp-eureka-peer2, and dcp-eureka-peer3, and add the Spring Cloud Eureka Server dependency to each:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
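The starter's version is normally supplied by the Spring Cloud BOM. If the parent POM doesn't already import it, a sketch of the dependencyManagement section (the release-train version shown is illustrative; use the one matching your Spring Boot version):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>Hoxton.SR12</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>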

Configure each peer's bootstrap.yml:

server:
  port: 8761

spring:
  application:
    name: dcp-eureka-peer1

eureka:
  client:
    # In a peer cluster, each server registers with and fetches from its
    # peers so that registrations replicate (both default to true).
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://dcp-eureka-peer2:8762/eureka/,http://dcp-eureka-peer3:8763/eureka/

Repeat the same for peer2 (port 8762) and peer3 (port 8763) with the appropriate defaultZone URLs.
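For instance, peer2's bootstrap.yml would look like this (peer3 mirrors it, pointing at peer1 and peer2):

server:
  port: 8762

spring:
  application:
    name: dcp-eureka-peer2

eureka:
  client:
    service-url:
      defaultZone: http://dcp-eureka-peer1:8761/eureka/,http://dcp-eureka-peer3:8763/eureka/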

Add host entries:

127.0.0.1 dcp-eureka-peer1
127.0.0.1 dcp-eureka-peer2
127.0.0.1 dcp-eureka-peer3

Start class for each peer:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class EurekaPeer1 {
  public static void main(String[] args) {
    SpringApplication.run(EurekaPeer1.class, args);
  }
}

Eureka Client Configuration

Create a service project dcp-hellworld-service with the following Maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Configure the client's bootstrap.yml:

server:
  port: 8001

spring:
  application:
    name: dcp-hellworld-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://dcp-eureka-peer1:8761/eureka/,http://dcp-eureka-peer2:8762/eureka/,http://dcp-eureka-peer3:8763/eureka/
    registry-fetch-interval-seconds: 30
  instance:
    status-page-url-path: /info
    instance-id: ${spring.application.name}:${random.value}
    prefer-ip-address: true
    lease-renewal-interval-in-seconds: 15
    # Must exceed the renewal interval, or instances expire between heartbeats.
    lease-expiration-duration-in-seconds: 45

The key parameter is eureka.client.serviceUrl.defaultZone, which points to all three Eureka peers so the client can fail over if one is unreachable.
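The comma-separated list gives the client a fallback order: it tries each registry URL in turn and uses the first one that responds. A minimal sketch of that idea (illustrative names, not the DiscoveryClient internals):

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative failover over the defaultZone list: try each registry URL
// in order and return the first one that is reachable.
class RegistryFailover {
    static String firstReachable(List<String> zoneUrls, Predicate<String> isUp) {
        for (String url : zoneUrls) {
            if (isUp.test(url)) {
                return url;  // the client sticks with this registry
            }
        }
        throw new IllegalStateException("no Eureka peer reachable");
    }
}
```

The `isUp` predicate stands in for the real health check; in practice the client's HTTP call to the registry either succeeds or times out.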

Small Test

Start each peer; the logs show peer discovery, e.g.:

c.n.eureka.cluster.PeerEurekaNodes : Adding new peer nodes [http://dcp-eureka-peer3:8763/eureka/, http://dcp-eureka-peer2:8762/eureka/]

Start the client service; the registration log shows HTTP 204:

com.netflix.discovery.DiscoveryClient : DiscoveryClient_DCP-HELLWORLD-SERVICE/dcp-hellworld-service:... - registration status: 204

Open http://localhost:8761/ to view the Eureka dashboard, peer node information, registered microservices, and host details (images omitted for brevity).

Conclusion

Deploying a three‑node Eureka cluster on Kubernetes or Docker ensures that if any node fails, the platform automatically removes the faulty container and starts a new one, maintaining continuous high availability for service discovery.

Tags: Microservices, Backend Development, high availability, service discovery, Eureka, Spring Cloud, Peer-to-Peer
Written by

Full-Stack Internet Architecture

Introducing full-stack Internet architecture technologies centered on Java
