
JSF 1.7.6 Preheat Strategy: Practice and Performance Test Report

This report details the background, implementation, and results of using the JSF 1.7.6 dynamic preheat strategy to mitigate performance spikes during service deployments on JD's VOP platform, comparing scenarios with and without preheat and providing concrete monitoring data and configuration guidance.

JD Retail Technology

The JD VOP platform provides API integration for enterprise internal procurement malls, requiring high‑concurrency, high‑availability interfaces as the service scales to thousands of SaaS customers. Frequent deployment‑induced alerts highlighted the need for a smoother rollout mechanism.

JSF 1.7.6 introduces a dynamic preheat strategy that adjusts traffic weights for newly launched nodes, allowing a small‑volume warm‑up period defined by configurable rules. This feature promises to reduce sudden latency spikes and improve provider stability.
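The report does not spell out JSF's internal weighting rule, but dynamic preheat of this kind is commonly implemented as a linear weight ramp over the warm-up window. The sketch below illustrates that idea under a Dubbo-style linear-interpolation assumption; the class and method names are illustrative, not JSF's actual API.

```java
// Illustrative linear warm-up weight calculation. Assumption: the node's
// effective weight grows linearly from a small initial value to its full
// weight over the configured warm-up period (JSF's exact formula is not
// given in the report).
public class WarmupWeight {

    /**
     * @param uptimeMs   milliseconds since the provider node started
     * @param warmupMs   configured warm-up period (e.g. 60_000 for 60 s)
     * @param baseWeight the node's normal traffic weight
     * @param initWeight the initial weight during warm-up (e.g. 1)
     * @return the effective weight the load balancer should use right now
     */
    public static int effectiveWeight(long uptimeMs, long warmupMs,
                                      int baseWeight, int initWeight) {
        if (uptimeMs >= warmupMs) {
            return baseWeight; // warm-up finished: full traffic share
        }
        // Linear interpolation between initWeight and baseWeight.
        int ramped = (int) (initWeight
                + (baseWeight - initWeight) * uptimeMs / warmupMs);
        return Math.max(initWeight, Math.min(ramped, baseWeight));
    }

    public static void main(String[] args) {
        // A node 15 s into a 60 s warm-up, base weight 100, initial weight 1:
        System.out.println(effectiveWeight(15_000, 60_000, 100, 1)); // 25
        System.out.println(effectiveWeight(60_000, 60_000, 100, 1)); // 100
    }
}
```

With an initial weight of 1 and a 60 s period, as in the test below, a fresh node starts at a tiny traffic share and reaches its full weight only once the window elapses.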

Two test scenarios were executed: (1) external service exposure where certain APIs timed out shortly after application start‑up, and (2) provider‑side API release causing JSF timeout requests. Both scenarios triggered TP99 and availability alarms.

The test environment consisted of five 4‑core/8 GB servers: four providers (11.94.2.225, 11.94.13.242, 11.94.65.31, 11.94.65.45) and one consumer (11.38.181.175). The experiments exercised the HTTP consumer endpoint https://bizapi.jd.com/api/area/getTown and the provider method com.jd.ka.vop.soa.address.sdk.provider.QueryAddressOpenProvider#queryJdAreaIdList.

Testing steps: a load generator simulated stable traffic, then a 50 % gradual rollout was performed on two provider machines. Monitoring screenshots (provider and consumer UMP graphs) captured latency and error metrics before and after preheat configuration.

Without preheat, the rollout produced noticeable latency spikes and error bursts. After enabling preheat with an initial weight of 1 and a 60 s ramp‑up period, the same rollout reduced the performance impact by roughly 2.8‑15× on one machine and left the other virtually unaffected (≈16 ms), while the baseline query TP99 held steady at around 11 ms.
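The effect of a low initial weight can be seen with a simple weighted-random selection over provider nodes: a warming node at weight 1 next to fully warmed peers at weight 100 receives only a small fraction of requests. This is a minimal sketch of weight-proportional routing, not JSF's load-balancer implementation; all names are illustrative.

```java
import java.util.List;
import java.util.Random;

// Illustrative weight-proportional provider selection: each request goes
// to node i with probability weight_i / sum(weights).
public class WeightedPick {

    /** Pick an index with probability proportional to its weight. */
    public static int pick(List<Integer> weights, Random rnd) {
        int total = weights.stream().mapToInt(Integer::intValue).sum();
        int r = rnd.nextInt(total);
        for (int i = 0; i < weights.size(); i++) {
            r -= weights.get(i);
            if (r < 0) {
                return i; // request routed to node i
            }
        }
        return weights.size() - 1; // unreachable with positive weights
    }

    /** Count how often {@code target} is picked over {@code trials} draws. */
    public static int countHits(List<Integer> weights, int target,
                                int trials, long seed) {
        Random rnd = new Random(seed);
        int hits = 0;
        for (int i = 0; i < trials; i++) {
            if (pick(weights, rnd) == target) {
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Three warmed providers (weight 100) plus one fresh node (weight 1):
        // the fresh node sees roughly 1/301 ≈ 0.33 % of 100,000 requests.
        System.out.println(countHits(List.of(100, 100, 100, 1), 3, 100_000, 42L));
    }
}
```

As the fresh node's weight ramps toward 100 over the warm-up window, its traffic share converges to an equal split with its peers, which is what smooths the cold-start spike observed in the test.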

The conclusion is that automatic preheat effectively smooths cold‑start latency for providers, dramatically reducing deployment‑induced performance degradation. Future work includes broader adoption of the preheat feature across services and further tuning of weight and period parameters.

Tags: deployment · load balancing · performance testing · JSF · preheat
Written by

JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.
