ElasticJob‑Lite Overview and Implementation Details
This article provides a comprehensive overview of ElasticJob‑Lite, a distributed scheduling solution for high‑concurrency environments, covering its architecture, ZooKeeper‑based registration and sharding mechanisms, core listener components, key code examples, and the UI console deployment.
ElasticJob‑Lite is a distributed scheduling solution designed for internet‑scale, high‑concurrency tasks. It relies on ZooKeeper for coordination, using leader election to ensure high availability and scalability.
Architecture : The system registers each job instance as a ZooKeeper node, stores configuration (e.g., the sharding count) under /config, and maintains instance, leader, server, and sharding nodes. When a new instance starts, it creates an instance node containing its IP and a random identifier.
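The node layout above can be sketched with a small path-building helper. This is illustrative only: the path names follow the structure the article describes, and exact layouts may differ between ElasticJob versions.

```java
// Illustrative sketch of the per-job ZooKeeper node layout described above.
// Exact node names may differ between ElasticJob versions.
public class JobNodePath {

    private final String jobName;

    public JobNodePath(final String jobName) {
        this.jobName = jobName;
    }

    // /{jobName}/config holds serialized job configuration such as the sharding count.
    public String configPath() {
        return "/" + jobName + "/config";
    }

    // /{jobName}/instances/{instanceId} is an ephemeral node per running instance.
    public String instancePath(final String instanceId) {
        return "/" + jobName + "/instances/" + instanceId;
    }

    // /{jobName}/sharding/{item}/instance records which instance owns shard {item}.
    public String shardingInstancePath(final int item) {
        return "/" + jobName + "/sharding/" + item + "/instance";
    }

    public static void main(String[] args) {
        JobNodePath path = new JobNodePath("myJob");
        System.out.println(path.configPath());            // /myJob/config
        System.out.println(path.shardingInstancePath(0)); // /myJob/sharding/0/instance
    }
}
```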
Registration : Upon startup, an application writes an instance node to ZooKeeper. The sharding count determines how many shards the job is split into for concurrent execution; the link between registration and sharding is that shard assignment is computed from the set of registered instance nodes.
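As a minimal sketch of the instance identifier written at registration time: ElasticJob's JobInstance combines the local IP with a process-level identifier using the "@-@" delimiter; the concrete values here are illustrative.

```java
// Sketch of how an instance identifier can be formed at registration time.
// The "@-@" delimiter matches ElasticJob's JobInstance id format; the IP and
// process id below are illustrative values.
public class InstanceId {

    private static final String DELIMITER = "@-@";

    // Builds an id such as "192.168.0.1@-@1234".
    public static String of(final String ip, final String processId) {
        return ip + DELIMITER + processId;
    }

    public static void main(String[] args) {
        System.out.println(of("192.168.0.1", "1234")); // 192.168.0.1@-@1234
    }
}
```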
Listeners : ElasticJob‑Lite starts several listeners via listenerManager.startAllListeners(), including ElectionListener, ShardingListener, FailoverListener, MonitorExecutionListener, ShutdownListener, TriggerListener, RescheduleListener, GuaranteeListener, and a connection‑state listener. These listeners react to changes in ZooKeeper nodes, such as configuration updates or server status changes.
Sharding Process : When the /config node changes and the sharding total count is non‑zero, the master node creates a /leader/sharding/necessary node, updates the local cache, and proceeds with sharding. The sharding algorithm (default AverageAllocationJobShardingStrategy) distributes shards among available instances, writes instance IDs into /sharding/{index}/instance nodes, and cleans up temporary nodes.
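The idea behind the default average-allocation strategy can be sketched as follows: each instance receives an equal block of shard indices, and any remainder is handed out one index at a time to the instances at the front of the list. This is a simplified standalone sketch, not ElasticJob's actual class.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the average-allocation idea behind
// AverageAllocationJobShardingStrategy: equal blocks of shard indices per
// instance, with leftover indices distributed to the first instances.
public class AverageAllocation {

    public static Map<String, List<Integer>> share(final List<String> instances,
                                                   final int shardingTotalCount) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        int itemCountPerInstance = shardingTotalCount / instances.size();
        int count = 0;
        for (String instance : instances) {
            List<Integer> items = new ArrayList<>();
            // Assign a contiguous block of shard indices to this instance.
            for (int i = count * itemCountPerInstance; i < (count + 1) * itemCountPerInstance; i++) {
                items.add(i);
            }
            result.put(instance, items);
            count++;
        }
        // Hand out the remaining indices one-by-one to the leading instances.
        int aliquant = shardingTotalCount % instances.size();
        int startIndex = instances.size() * itemCountPerInstance;
        for (int i = 0; i < aliquant; i++) {
            result.get(instances.get(i)).add(startIndex + i);
        }
        return result;
    }

    public static void main(String[] args) {
        // 10 shards across 3 instances: the first instance gets the extra shard.
        System.out.println(share(List.of("a", "b", "c"), 10));
        // prints {a=[0, 1, 2, 9], b=[3, 4, 5], c=[6, 7, 8]}
    }
}
```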
Key Code Snippets :
// Registers startup info: starts all listeners, runs leader election, and
// persists server/instance state to the registry center.
public void registerStartUpInfo(final boolean enabled) {
    listenerManager.startAllListeners();
    leaderService.electLeader();
    serverService.persistOnline(enabled);
    instanceService.persistOnline();
    if (!reconcileService.isRunning()) {
        reconcileService.startAsync();
    }
}

// Starts every node listener plus the registry-center connection-state listener.
public void startAllListeners() {
    electionListenerManager.start();
    shardingListenerManager.start();
    failoverListenerManager.start();
    monitorExecutionListenerManager.start();
    shutdownListenerManager.start();
    triggerListenerManager.start();
    rescheduleListenerManager.start();
    guaranteeListenerManager.start();
    jobNodeStorage.addConnectionStateListener(regCenterConnectionStateListener);
}

// Performs sharding only when the /leader/sharding/necessary flag is set;
// non-leader nodes block until the leader finishes sharding.
public void shardingIfNecessary() {
    List<JobInstance> availableJobInstances = instanceService.getAvailableJobInstances();
    if (!isNeedSharding() || availableJobInstances.isEmpty()) {
        return;
    }
    if (!leaderService.isLeaderUntilBlock()) {
        blockUntilShardingCompleted();
        return;
    }
    // ... load config, set processing node, reset sharding info, apply strategy, execute transaction ...
}

Console UI : ElasticJob‑Lite‑UI (3.x) provides a separate management console for monitoring job status, events, and configuration. Deployment instructions are available in external tutorials; the console reads ZooKeeper nodes and database event tables to display real‑time information and allows operations such as modifying node data or triggering jobs.
Overall, the article walks through the registration, sharding, and listener mechanisms, and provides practical code examples to help developers understand and extend ElasticJob‑Lite.
ZCY Technology
ZCY Technology Team (Zero), based in Hangzhou, is a growth-oriented team passionate about technology and craftsmanship. With around 500 members, we are building comprehensive engineering, project management, and talent development systems. We are committed to innovation and creating a cloud service ecosystem for government and enterprise procurement. We look forward to your joining us.