How to Pick a Task Scheduling Framework: Quartz, ElasticJob, XXL‑JOB
This article compares popular Java task scheduling solutions—including Quartz, ElasticJob‑Lite, XXL‑JOB, and custom implementations—explaining their core components, clustering strategies, distributed lock mechanisms, and practical code examples to help engineers choose the right framework for their needs.
A reader's comment about choosing a task-scheduling framework prompted the author to walk through the core logic of building a scheduling system.
1 Quartz
Quartz is an open‑source Java task‑scheduling framework and a common starting point for many Java engineers.
Quartz’s core consists of three components:
Job – represents the scheduled task.
Trigger – defines the schedule (time rule) for executing the task; a Job can have multiple Triggers, but a Trigger is bound to a single Job.
Scheduler – the scheduling container, obtained from a SchedulerFactory, that registers Jobs and Triggers and executes tasks according to the Trigger rules.
In standalone mode, Quartz's default JobStore is RAMJobStore, which keeps Triggers and Jobs in memory.
The core execution class is QuartzSchedulerThread .
1. The scheduler thread fetches the Triggers due to fire from the JobStore and updates their states.
2. It fires each Trigger, computes its next fire time, updates its status, and persists the changes.
3. It creates the concrete Job instance and runs it on a worker thread pool.
Cluster deployment requires creating the Quartz tables for your database (MySQL, Oracle, etc.) and switching to a JDBC-backed JobStore (a JobStoreSupport subclass). The cluster relies on row-level locks in the database to guarantee that only one node triggers a task.
MySQL lock example: Quartz acquires the lock with a SELECT ... FOR UPDATE on its locks table. The table name is built from the configurable QRTZ_ prefix, and the row is identified by the scheduler instance name plus a lock identifier such as TRIGGER_ACCESS or STATE_ACCESS.
This architecture solves distributed scheduling, ensuring a task runs on only one node; however, heavy contention on DB locks can degrade performance under many short‑lived tasks.
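The row-lock query described above can be sketched in plain Java. The table prefix and lock names follow Quartz conventions (QRTZ_, TRIGGER_ACCESS, STATE_ACCESS), but the helper itself is illustrative, not Quartz source:

```java
// Sketch of how a Quartz-style cluster lock query is assembled.
// In a real cluster, the node that wins this SELECT ... FOR UPDATE
// holds the lock until its transaction commits.
public class QuartzLockSql {

    // Quartz stores cluster locks as rows in {prefix}LOCKS, one per lock name.
    public static String selectForUpdate(String tablePrefix, String schedName, String lockName) {
        return "SELECT * FROM " + tablePrefix + "LOCKS"
                + " WHERE SCHED_NAME = '" + schedName + "'"
                + " AND LOCK_NAME = '" + lockName + "'"
                + " FOR UPDATE";
    }
}
```

A real implementation would use a PreparedStatement with bind parameters rather than string concatenation; the string form is shown only to make the lock row visible.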
2 Distributed Lock Mode
Quartz’s cluster mode can be intrusive; some teams prefer a distributed‑lock approach.
Scenario: an e‑commerce order is cancelled if not paid within a timeout.
Typical implementation uses a Spring @Scheduled task that runs every two minutes:
<code>@Scheduled(cron = "0 */2 * * * ?")
public void doTask() {
    log.info("scheduled task started");
    // execute order closing logic
    orderService.closeExpireUnpayOrders();
    log.info("scheduled task finished");
}
</code>In a clustered environment, multiple instances may execute the same task simultaneously, causing duplicate processing.
Solution: acquire a Redis distributed lock before executing the task:
<code>@Scheduled(cron = "0 */2 * * * ?")
public void doTask() {
    log.info("scheduled task started");
    String lockName = "closeExpireUnpayOrdersLock";
    RedisLock redisLock = redisClient.getLock(lockName);
    // try to acquire the lock: wait up to 3 seconds, auto-release after 300 seconds (5 minutes)
    boolean locked = redisLock.tryLock(3, 300, TimeUnit.SECONDS);
    if (!locked) {
        log.info("failed to acquire distributed lock: {}", lockName);
        return;
    }
    try {
        orderService.closeExpireUnpayOrders();
    } finally {
        redisLock.unlock();
    }
    log.info("scheduled task finished");
}
</code>Redis offers excellent read/write performance; the lock can also be replaced by a Zookeeper lock.
However, this combination has two drawbacks:
In a distributed deployment only the node holding the lock does any work, so the other nodes' runs are wasted, and the workload cannot be sharded across nodes.
Manual triggering requires additional code.
3 ElasticJob‑Lite
ElasticJob‑Lite is a lightweight, decentralized solution delivered as a jar.
Define a task class that implements SimpleJob :
<code>public class MyElasticJob implements SimpleJob {
    @Override
    public void execute(ShardingContext context) {
        // ElasticJob invokes this once per shard item assigned to this instance
        switch (context.getShardingItem()) {
            case 0:
                // do something for shard 0
                break;
            case 1:
                // do something for shard 1
                break;
            case 2:
                // do something for shard 2
                break;
            // case n: ...
        }
    }
}
</code>For example: five jobs (A–E), where job E is split into four shards distributed across two machines.
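The shard distribution can be illustrated with a small sketch in the spirit of ElasticJob's AverageAllocationJobShardingStrategy (the class below is a hypothetical reimplementation, not ElasticJob source): shard items are split evenly across instances, with leftovers going to the first instances.

```java
import java.util.ArrayList;
import java.util.List;

// Average-allocation sharding sketch: returns, for each instance,
// the list of shard item numbers it should process.
public class AverageSharding {

    public static List<List<Integer>> allocate(int instanceCount, int shardingTotal) {
        List<List<Integer>> result = new ArrayList<>();
        for (int i = 0; i < instanceCount; i++) result.add(new ArrayList<>());

        int base = shardingTotal / instanceCount;   // shards every instance gets
        int extra = shardingTotal % instanceCount;  // leftover shards for the first instances
        int item = 0;
        for (int i = 0; i < instanceCount; i++) {
            int count = base + (i < extra ? 1 : 0);
            for (int j = 0; j < count; j++) result.get(i).add(item++);
        }
        return result;
    }
}
```

With 4 shards and 2 machines this yields [0, 1] on the first machine and [2, 3] on the second, matching the job-E example above.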
ElasticJob ultimately relies on Quartz for execution but uses Zookeeper for coordination, allowing load‑balanced distribution of tasks to Quartz Scheduler containers.
From a user perspective it is simple, but the scheduler and executor still run inside the same JVM, requiring load‑balancing logic; frequent restarts trigger leader election, which is relatively heavyweight.
The console shows job status but its functionality is limited.
4 Centralized Approaches
The principle is to separate the scheduling center from the executors, allowing both to scale independently.
4.1 MQ Mode
In the author's experience, a Quartz cluster sends trigger messages to RabbitMQ; business services consume the messages and execute the tasks. This leverages MQ's decoupling but creates a strong dependency on the message queue's availability and performance characteristics.
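The shape of that pattern can be sketched with an in-memory queue standing in for RabbitMQ (the TriggerMessage type and method names are illustrative assumptions, not the author's actual code): the scheduling side only publishes a trigger message, and the business service consumes it and runs the task.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// MQ-mode sketch: scheduler publishes, business consumer executes.
public class MqTriggerSketch {

    record TriggerMessage(String jobName, long fireTime) {}

    private final Queue<TriggerMessage> queue = new ArrayDeque<>(); // stands in for RabbitMQ

    // scheduler side: fire-and-forget publish, no business logic here
    public void publish(String jobName, long fireTime) {
        queue.offer(new TriggerMessage(jobName, fireTime));
    }

    // consumer side: business service pulls a message and executes the task
    public String consumeOne() {
        TriggerMessage msg = queue.poll();
        return msg == null ? null : "executed:" + msg.jobName();
    }
}
```

Because the queue may re-deliver messages, the consumer side must be idempotent, which is also one of the best practices listed later.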
4.2 XXL‑JOB
XXL‑JOB is a distributed task‑scheduling platform designed for rapid development, simplicity, lightweight, and easy extensibility.
Architecture uses a server-worker model. The scheduling center is a Spring Boot application listening on port 8080; the executor embeds a server listening on port 9994.
Executors register themselves; the center maintains an online executor list and routes tasks based on strategies such as random, broadcast, and sharding.
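Under the sharding strategy, each executor receives a pair (shardIndex, shardTotal) and must decide which rows are its own. A common convention is a modulo filter; the helper below is a hypothetical sketch of that convention, not XXL-JOB source code:

```java
import java.util.List;
import java.util.stream.Collectors;

// Executor-side sharding filter: keep only the rows whose id
// maps to this executor's shard index.
public class ShardFilter {

    public static List<Long> myRows(List<Long> allIds, int shardIndex, int shardTotal) {
        return allIds.stream()
                .filter(id -> id % shardTotal == shardIndex) // this shard's slice of the data
                .collect(Collectors.toList());
    }
}
```

With two executors, executor 0 processes the even ids and executor 1 the odd ids, so a broadcast trigger still results in each row being handled exactly once.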
The core scheduling class JobScheduleHelper starts two threads: scheduleThread and ringThread .
scheduleThread periodically loads tasks from the database using a row‑level lock (SQL example):
<code>Connection conn = XxlJobAdminConfig.getAdminConfig().getDataSource().getConnection();
connAutoCommit = conn.getAutoCommit();
conn.setAutoCommit(false);
PreparedStatement preparedStatement = conn.prepareStatement(
        "select * from xxl_job_lock where lock_name = 'schedule_lock' for update");
preparedStatement.execute();
// trigger task scheduling (pseudo-code)
for (XxlJobInfo jobInfo : scheduleList) {
    // ...
}
conn.commit();
</code>scheduleThread submits tasks that are already due to a trigger thread pool; tasks due within the next five seconds are placed into a ringData structure (a time ring keyed by second).
ringThread periodically pulls tasks out of ringData and executes them via the thread pool.
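The ringData idea can be sketched as a 60-slot map keyed by second-of-minute: scheduleThread hashes near-term tasks into the slot for their fire second, and ringThread drains the slot for the current second on every tick. Field and method names below are illustrative, not the XXL-JOB source:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a second-of-minute time ring.
public class TimeRing {

    private final Map<Integer, List<String>> ringData = new HashMap<>();

    // place a job into the slot for the second it should fire in
    public void push(long fireTimeMillis, String jobId) {
        int slot = (int) ((fireTimeMillis / 1000) % 60);
        ringData.computeIfAbsent(slot, k -> new ArrayList<>()).add(jobId);
    }

    // what the ticking thread does: drain and return the jobs in this slot
    public List<String> pull(int secondOfMinute) {
        List<String> due = ringData.remove(secondOfMinute % 60);
        return due == null ? List.of() : due;
    }
}
```

Because slots repeat every 60 seconds, this only works for tasks placed at most one minute ahead, which matches the five-second window described above.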
5 Self‑Developed Scheduler
In 2018 the author built a custom scheduler compatible with an internal RPC framework, studying XXL‑JOB source code and Alibaba Cloud SchedulerX.
SchedulerX architecture consists of a console, a server, and a client (worker) that registers with the server.
SchedulerX console – creates and manages tasks.
SchedulerX server – core scheduling component.
SchedulerX client – each application process acts as a worker.
The author chose RocketMQ’s remoting module for communication, registering processors such as CallBackProcessor , HeartBeatProcessor , and TriggerTaskProcessor .
<code>public void registerProcessor(int requestCode, NettyRequestProcessor processor, ExecutorService executor);
</code> <code>public interface NettyRequestProcessor {
RemotingCommand processRequest(ChannelHandlerContext ctx, RemotingCommand request) throws Exception;
boolean rejectRequest();
}
</code>The custom scheduler also ran Quartz in cluster mode for stability, handling roughly 40–50 million task executions over four months, though the database row-level lock limits horizontal scalability.
A later demo version removes the external registry, adds Zookeeper-based coordination, and replaces Quartz with a Dubbo-style time-wheel implementation.
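A hashed time wheel of the kind used in Dubbo and Netty can be sketched as follows. Tasks are bucketed by (delay mod wheelSize) with a remaining-rounds counter, and each tick fires the bucket whose rounds have elapsed. This is a teaching sketch under those assumptions, not a production implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal hashed time wheel: schedule(name, delayTicks) with delayTicks >= 1,
// then call tick() once per time unit to fire due tasks.
public class TimeWheel {

    private static class TimerTask {
        final String name;
        int remainingRounds; // full wheel revolutions left before firing
        TimerTask(String name, int remainingRounds) {
            this.name = name;
            this.remainingRounds = remainingRounds;
        }
    }

    private final List<List<TimerTask>> buckets = new ArrayList<>();
    private final int wheelSize;
    private int currentSlot = 0;

    public TimeWheel(int wheelSize) {
        this.wheelSize = wheelSize;
        for (int i = 0; i < wheelSize; i++) buckets.add(new ArrayList<>());
    }

    // schedule a task 'delayTicks' ticks from now (delayTicks >= 1)
    public void schedule(String name, int delayTicks) {
        int slot = (currentSlot + delayTicks) % wheelSize;
        buckets.get(slot).add(new TimerTask(name, (delayTicks - 1) / wheelSize));
    }

    // advance one tick; return the names of tasks that fired on this tick
    public List<String> tick() {
        currentSlot = (currentSlot + 1) % wheelSize;
        List<String> fired = new ArrayList<>();
        buckets.get(currentSlot).removeIf(t -> {
            if (t.remainingRounds == 0) { fired.add(t.name); return true; }
            t.remainingRounds--;
            return false;
        });
        return fired;
    }
}
```

Unlike Quartz's database polling, insertion and expiry here are O(1) per task, which is why time wheels suit schedulers with many short-lived timers.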
6 Technology Selection
A comparison table (image) places open‑source frameworks (Quartz, ElasticJob) alongside commercial SchedulerX.
Framework‑level solutions are lightweight; centralized products offer clearer architecture and richer features such as map‑reduce sharding and workflow.
XXL‑JOB provides an out‑of‑the‑box experience that satisfies most teams.
Key best practices: ensure task idempotency and have proper troubleshooting tooling in place (logs, jstack thread dumps, timeout settings).
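The idempotency practice can be sketched as a check-then-execute guard keyed by a unique execution id. A real system would back the check with a database unique constraint or a Redis SETNX; the in-memory set below (a hypothetical helper) just illustrates the shape:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Idempotency guard sketch: each execution key runs at most once,
// so a duplicate trigger (retry, re-delivery, double fire) is harmless.
public class IdempotentTask {

    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    // returns true if the work actually ran, false if it was a duplicate trigger
    public boolean runOnce(String executionKey, Runnable work) {
        if (!processed.add(executionKey)) {
            return false; // already handled
        }
        work.run();
        return true;
    }
}
```

The execution key should identify the business operation (e.g. order id plus fire time), not the scheduler invocation, so that retries from any node are deduplicated.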
7 Conclusion
Both ElasticJob and XXL-JOB were open-sourced in 2015. The author reflects on personal growth and the importance of creativity, and looks forward to exploring newer systems such as PowerJob.
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.