
Iterative Evolution of JD Search EE System: Adaptive Exploration, Scenario Modeling, Scoring‑Insertion Consistency, and Context‑Aware Brand Store Detection

This article details the multi‑stage evolution of JD's search Explore‑Exploit (EE) system—covering an adaptive dynamic detection model, scenario‑modeling upgrades, end‑to‑end scoring and insertion consistency, and context‑aware brand/store dimension detection—demonstrating how each iteration improves result diversity, user experience, and key online metrics while maintaining search efficiency.

DataFunSummit

Introduction – E‑commerce search often suffers from a head effect in which high‑traffic items dominate exposure, leaving little opportunity for high‑quality mid‑ and long‑tail products. JD's EE system interleaves such items into ranking results to improve diversity and user experience.

1. Adaptive Dynamic Detection Model – The original EE model relied only on exposure and scoring confidence, ignoring differences in user intent ("browse" vs. "buy"). The new adaptive model introduces an Explore‑Net alongside the existing Exploit‑Net, explicitly modeling user exploration preference and framing browsing depth as a regression task. Key upgrades include:

Differentiated user‑preference modeling ("browse" vs. "buy").

Incorporating browsing depth as a sub‑task to capture exploration intent.

Increasing the weight of exploration features in the model.

Feature engineering removes unrelated features (e.g., query length) and applies log‑scaled smoothing to depth labels. A convex "U‑shaped" sample‑weight scheme balances shallow and deep sessions.
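The depth‑label smoothing and "U‑shaped" weighting described above can be sketched as follows. This is a minimal illustration, not JD's implementation: the `pivot` (mid‑depth reference point) and `floor` (minimum weight) parameters are hypothetical, chosen only to show how shallow and deep sessions both get up‑weighted relative to mid‑depth ones.

```python
import math

def smooth_depth_label(depth: int) -> float:
    """Log-scale a raw browsing-depth label to damp its heavy tail."""
    return math.log1p(depth)

def u_shaped_weight(depth: int, pivot: float = 20.0, floor: float = 1.0) -> float:
    """Convex 'U-shaped' sample weight in log-depth space: sessions far
    from the mid-depth pivot (very shallow or very deep) are up-weighted,
    balancing their contribution to the depth-regression loss."""
    return floor + (smooth_depth_label(depth) - math.log1p(pivot)) ** 2

# Weight grows toward both ends of the depth range, bottoming out at the pivot.
weights = {d: round(u_shaped_weight(d), 3) for d in (1, 5, 20, 50, 200)}
```

The quadratic in log space is just one convex choice; any function that is minimal at a mid‑depth reference and rises toward both extremes would realize the same idea.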

2. Scenario Modeling Upgrade – The previous single‑task click‑rate model was extended to a multi‑task framework that jointly predicts click‑through rate (CTR) and click‑through conversion rate (CTCVR). A lightweight shared‑bottom architecture with two task towers preserves low latency while better capturing conversion signals. Fusion strategies (weighted sum, multiplication, exponentiation) were evaluated, and the weighted‑sum approach delivered the best offline and online performance.
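The three fusion strategies compared above can be written down directly. This is a hedged sketch: the weights `alpha` and `p` are illustrative placeholders, not the values JD tuned; the article only states that the weighted‑sum form won out.

```python
def fuse_weighted_sum(ctr: float, ctcvr: float, alpha: float = 0.7) -> float:
    """Weighted sum of the two task heads -- the strategy the article
    reports as best both offline and online. `alpha` is hypothetical."""
    return alpha * ctr + (1 - alpha) * ctcvr

def fuse_multiply(ctr: float, ctcvr: float) -> float:
    """Multiplicative fusion: score collapses if either head is near zero."""
    return ctr * ctcvr

def fuse_exponent(ctr: float, ctcvr: float, p: float = 0.5) -> float:
    """Exponentiation fusion: a power term softens one head's influence."""
    return ctr * (ctcvr ** p)
```

One intuition for the weighted sum winning: it degrades gracefully when CTCVR estimates are noisy or sparse, whereas multiplicative forms amplify that noise.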

3. Scoring and Insertion End‑to‑End Consistency Upgrade – Previously, scoring and insertion were controlled by separate models, causing mismatched aggressiveness. The upgraded pipeline ties the EE model’s browsing‑depth prediction directly to the number of items to insert, ensuring that deeper sessions receive more exploratory items and that scoring and insertion remain consistent.
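A minimal sketch of tying insertion volume to the predicted depth, under assumed parameters: the interval of ten positions per exploratory item and the cap of five are hypothetical illustrations of the coupling, not JD's actual configuration.

```python
def num_insertions(pred_depth: float,
                   positions_per_insert: int = 10,
                   cap: int = 5) -> int:
    """Derive the exploratory-item quota directly from the EE model's
    predicted browsing depth, so scoring and insertion share one signal:
    deeper predicted sessions receive more inserted items."""
    return min(cap, max(0, int(pred_depth // positions_per_insert)))
```

Because the same depth prediction drives both the exploration score and the quota, an aggressive score can no longer be paired with a conservative insertion policy (or vice versa), which is the consistency the upgrade targets.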

4. Context‑Aware Brand Store Dimension Detection – To mitigate brand/store concentration, the system now perceives query intent (brand, model, or store keywords) and the distribution of top‑k results. When a brand or store exceeds a predefined share threshold, the EE mechanism disables insertion for that dimension, preventing further head‑effect amplification.
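The top‑k distribution check can be sketched as a simple share computation. The 40% threshold and the `"brand"` field name are assumptions for illustration; the article says only that a predefined share threshold gates insertion per dimension.

```python
from collections import Counter

def disabled_dimensions(top_k_items: list, key: str = "brand",
                        threshold: float = 0.4) -> set:
    """Return the brand/store values whose share of the top-k results
    exceeds the threshold; EE insertion is disabled for those values
    so the head effect is not amplified further."""
    if not top_k_items:
        return set()
    counts = Counter(item[key] for item in top_k_items)
    total = len(top_k_items)
    return {v for v, c in counts.items() if c / total > threshold}
```

In practice the same check would run per dimension (brand and store separately), combined with the query‑intent signal: if the user explicitly asked for a brand, a high share of that brand is expected rather than a concentration problem.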

Experimental Results

• Exploration–Exploitation Analysis : Average exploration strength grows with browsing depth, confirming session‑level differentiation.

• Insertion Position Analysis : Deeper sessions see earlier insertion positions, indicating stronger exploration.

• Online Metrics : With search efficiency unchanged, EE core metrics improved, with item liquidity and exploration‑success rate each up by ~0.5%.

• Multi‑Dimensional Detection : Brand/store‑aware filtering reduced head‑brand insertions and increased overall result diversity across queries such as "gas water heater" and "dishwasher".

| Metric | XGB Model | Explore‑Net |
| --- | --- | --- |
| RMSLE | 0.2053 | 0.0903 |
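For reference, RMSLE (root mean squared logarithmic error), the metric used to compare the XGB baseline against Explore‑Net's depth regression, is:

```python
import math

def rmsle(y_true: list, y_pred: list) -> float:
    """Root Mean Squared Logarithmic Error: RMSE computed on log1p-scaled
    values, which penalizes relative rather than absolute depth errors --
    a natural fit for heavy-tailed browsing-depth labels."""
    n = len(y_true)
    return math.sqrt(
        sum((math.log1p(p) - math.log1p(t)) ** 2
            for t, p in zip(y_true, y_pred)) / n
    )
```

The log1p scaling matches the log‑smoothed depth labels described earlier: under‑predicting a depth of 200 by 20 positions costs far less than under‑predicting a depth of 5 by the same amount.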

Conclusion & Outlook – The iterative upgrades—from adaptive exploration to scenario modeling, end‑to‑end consistency, and context‑aware detection—demonstrate a systematic approach to alleviating the Matthew effect in e‑commerce search. Future work includes expanding training data, extending EE to the full ranking pipeline, and broadening product representation for better mid‑ and long‑tail coverage.

Tags: e-commerce, machine learning, multi-task learning, search ranking, online experimentation, adaptive modeling, explore‑exploit
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
