Comprehensive Guide to Using Ctrip's Open‑Source Apollo Distributed Configuration Center
This article provides an in‑depth tutorial on Apollo, Ctrip's open‑source distributed configuration center, covering its concepts, features, architecture, four management dimensions, client design, Maven integration, SpringBoot implementation, testing procedures, and deployment on Kubernetes with Docker.
Apollo is an open‑source configuration management center developed by Ctrip that centralizes configuration for applications across environments, clusters, and namespaces, offering real‑time updates, gray releases, version control, permission management, and API access.
The core model consists of three steps: users modify and publish configurations, the configuration service notifies Apollo clients of updates, and clients pull the latest configuration and apply it locally.
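This three-step loop can be sketched with a minimal in-memory stand-in. The class and method names below are illustrative only, not Apollo's actual client API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal stand-in for the publish -> notify -> pull loop; names are illustrative.
class ConfigServer {
    private final Map<String, String> store = new HashMap<>();
    private Consumer<String> onChange = key -> {};

    // The client registers interest in change notifications (step 2).
    void subscribe(Consumer<String> listener) { this.onChange = listener; }

    // An operator publishes a new value (step 1); subscribers are notified.
    void publish(String key, String value) {
        store.put(key, value);
        onChange.accept(key);
    }

    // The client pulls the latest value after a notification (step 3).
    String pull(String key) { return store.get(key); }
}

class ConfigClient {
    final Map<String, String> localCache = new HashMap<>();

    ConfigClient(ConfigServer server) {
        // On notification, pull the new value and apply it locally.
        server.subscribe(key -> localCache.put(key, server.pull(key)));
    }
}
```

In the real system the notification travels over a long-polling HTTP connection rather than an in-process callback, but the ordering of the three steps is the same.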
Apollo manages configurations along four dimensions—application, environment (FAT, UAT, DEV, PRO), cluster, and namespace (public, private, inherited)—allowing fine‑grained control of settings such as database URLs or feature flags.
Clients cache configurations locally (e.g., /opt/data/{appId}/config-cache on Linux) to ensure availability when the server is unreachable, storing files named {appId}+{cluster}+{namespace}.properties.
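The naming convention can be reproduced with a small helper. This is a sketch for illustration only; the real client derives these paths internally:

```java
// Builds a local cache file path in the Apollo style:
// {baseDir}/{appId}/config-cache/{appId}+{cluster}+{namespace}.properties
// Sketch only; not part of the apollo-client API.
class CacheFileNamer {
    static String cacheFilePath(String baseDir, String appId, String cluster, String namespace) {
        String fileName = appId + "+" + cluster + "+" + namespace + ".properties";
        return baseDir + "/" + appId + "/config-cache/" + fileName;
    }
}
```

For the demo application below, this yields a path like /opt/data/apollo-test/config-cache/apollo-test+default+application.properties.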
The client maintains a long‑polling HTTP connection to receive push notifications; if no change occurs within 60 seconds the server returns HTTP 304 and the client re‑issues the poll. As a safety net, the client also periodically pulls the full configuration (default every 5 minutes, configurable via apollo.refreshInterval).
Overall system design includes Config Service for client reads/pushes, Admin Service for portal management, both stateless and registered with Eureka, with a Meta Server handling service discovery and load‑balancing.
High availability is achieved through multi‑instance, stateless services; failures of individual config or admin nodes have no impact, while full service outages fall back to local cache.
To create a SpringBoot demo project, add the following Maven dependencies:
<dependency>
    <groupId>com.ctrip.framework.apollo</groupId>
    <artifactId>apollo-client</artifactId>
    <version>1.4.0</version>
</dependency>

Configure application.yml with Apollo settings, for example:
server:
  port: 8080
app:
  id: apollo-test
apollo:
  meta: http://192.168.2.11:30002
  cluster: default
  cacheDir: /opt/data/
  bootstrap:
    enabled: true
    namespaces: application
  eagerLoad:
    enabled: false

Create a controller that injects a configuration key:
@RestController
public class TestController {

    // Injects the "test" key from Apollo; falls back to "defaultValue" if the key is absent
    @Value("${test:defaultValue}")
    private String test;

    @GetMapping("/test")
    public String test() {
        return "The value of test is: " + test;
    }
}

Define the SpringBoot entry point:
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Run the application with JVM arguments that specify the environment and config service, e.g., -Dapollo.configService=http://192.168.2.11:30002 -Denv=DEV. The service reads the test key from Apollo, and changes published in the portal take effect in near real time; rollbacks or deletions revert the value to the default or the locally cached value as described above.
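Assuming the Maven build produces target/apollo-test.jar (the jar name here is an assumption), the launch command would look like:

```shell
java -Denv=DEV \
     -Dapollo.configService=http://192.168.2.11:30002 \
     -jar target/apollo-test.jar
```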
For Kubernetes deployment, first build a Docker image using the following Dockerfile:
FROM openjdk:8u222-jre-slim
VOLUME /tmp
ADD target/*.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS "-XX:MaxRAMPercentage=80.0 -Duser.timezone=Asia/Shanghai"
ENV APP_OPTS ""
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar $APP_OPTS"]

Deploy the image with a Kubernetes manifest that sets the JAVA_OPTS and APP_OPTS environment variables to pass Apollo configuration (e.g., --app.id=apollo-demo, --apollo.meta=http://service-apollo-config-server-dev.mydlqcloud:8080, etc.). The service is exposed via a NodePort, so http://{node-ip}:31080/test returns the value stored in Apollo.
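A minimal manifest along those lines might look like the following; the resource names, image tag, and Apollo meta address are assumptions based on the example values above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apollo-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apollo-demo
  template:
    metadata:
      labels:
        app: apollo-demo
    spec:
      containers:
        - name: apollo-demo
          image: apollo-demo:latest   # image name/tag is an assumption
          env:
            - name: JAVA_OPTS
              value: "-XX:MaxRAMPercentage=80.0"
            - name: APP_OPTS
              value: "--app.id=apollo-demo --apollo.meta=http://service-apollo-config-server-dev.mydlqcloud:8080"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: apollo-demo
spec:
  type: NodePort
  selector:
    app: apollo-demo
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31080
```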
Code Ape Tech Column
Former Ant Group P8 engineer and pure technologist, sharing full‑stack Java, interview, and career advice through a column. Site: java-family.cn