
Learnable Index Structures for Large‑Scale Retrieval: Deep Retrieval Model and Training Methods

This article introduces ByteDance's Deep Retrieval (DR) framework, describing its learnable index structure that aligns embedding training with retrieval objectives, detailing the core model, structure‑loss training via EM and online EM algorithms, beam‑search serving, multi‑task learning, and practical insights from Q&A.

DataFunTalk

Traditional recall algorithms combine two‑tower models with approximate nearest neighbor (ANN) or maximum inner product search (MIPS) indexes such as fast ball tree (FBT) or hierarchical navigable small world (HNSW) graphs. Because the embeddings are trained on one objective while the index is built to optimize another, the two stages are mismatched.

ByteDance's AML team proposes a learnable index structure—Deep Retrieval (DR)—that unifies embedding training and index construction, improving recall for both advertising and recommendation/search scenarios.

Core Model of Deep Retrieval

DR organizes items into a structure of D layers with K nodes per layer (a K×D grid); each length‑D path through the layers acts as a cluster that can contain many items. Crucially, an item may belong to multiple paths, preserving multi‑semantic information (e.g., "Tongrentang" appearing in both traditional‑medicine and comedy clusters).
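The structure above can be sketched with plain Python containers. All names here are illustrative, not ByteDance's actual API: an item maps to a list of length‑D paths, and an inverted index from path to items is what serving consults after beam search.

```python
from collections import defaultdict

# Illustrative DR index layout: D layers of K nodes; a path is a length-D
# tuple of node ids, and each item may map to several paths (multi-semantic).
K, D = 100, 3  # K nodes per layer, D layers

# item -> list of paths; one item can live in multiple paths
item_paths = {
    "tongrentang": [(12, 40, 7),   # e.g. a "traditional medicine" cluster
                    (85, 3, 91)],  # e.g. a "comedy" cluster
}

# inverted index: path -> items, consulted at serving time after beam search
path_to_items = defaultdict(list)
for item, paths in item_paths.items():
    for p in paths:
        path_to_items[p].append(item)

print(path_to_items[(12, 40, 7)])  # -> ['tongrentang']
```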

Structure Loss in Training

The probability of a path given a user embedding x is factorized as p(c|x)=p(c₁|x)p(c₂|c₁,x)p(c₃|c₂,c₁,x). For a positive pair (x, y) with known path π(y), the log‑likelihood is maximized by summing log probabilities of each node along the path.
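A minimal sketch of this chained factorization, assuming one plausible parameterization in which each layer's logits condition on the user embedding concatenated with embeddings of the nodes chosen at earlier layers (the weight shapes and embedding scheme are our assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

K, D, dim = 5, 3, 4  # toy sizes: K nodes per layer, D layers, embedding dim
rng = np.random.default_rng(0)
x = rng.normal(size=dim)                 # user embedding
node_emb = rng.normal(size=(D, K, dim))  # per-layer node embeddings (assumed)
# layer d sees x plus d earlier node embeddings, hence growing input width
W = [rng.normal(size=(K, dim * (d + 1))) for d in range(D)]

def path_log_prob(path, x):
    """log p(c|x) = sum_d log p(c_d | c_1, ..., c_{d-1}, x)."""
    logp, context = 0.0, x
    for d, c in enumerate(path):
        probs = softmax(W[d] @ context)          # p(c_d | context)
        logp += np.log(probs[c])
        context = np.concatenate([context, node_emb[d, c]])  # extend context
    return logp

lp = path_log_prob((1, 3, 0), x)
assert lp < 0.0  # a valid log-probability
```

For a positive pair (x, y), the structure loss is simply the negative of this quantity evaluated at the known path π(y).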

Beam Search in Serving

During inference, a beam search selects the top‑B nodes at each level (typically B≈10), propagates them downward, and finally merges items from the selected paths, enabling efficient retrieval without exhaustive enumeration.
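A hedged sketch of that layer-by-layer beam search, with per-layer scorers abstracted as callables (the interface is ours, chosen for brevity):

```python
import heapq
import math

def beam_search(layer_scores, B=10):
    """Keep the top-B partial paths while extending layer by layer.

    layer_scores: list of D callables; layer_scores[d](prefix) returns
    (node_id, log_prob) pairs for extending `prefix` at layer d.
    """
    beams = [((), 0.0)]  # (path prefix, accumulated log-prob)
    for d in range(len(layer_scores)):
        candidates = []
        for prefix, lp in beams:
            for node, node_lp in layer_scores[d](prefix):
                candidates.append((prefix + (node,), lp + node_lp))
        beams = heapq.nlargest(B, candidates, key=lambda t: t[1])
    return beams

# Toy demo: 2 layers, 3 nodes each, scores independent of the prefix.
def make_layer(scores):
    return lambda prefix: list(enumerate(scores))

layers = [make_layer([math.log(0.5), math.log(0.3), math.log(0.2)]),
          make_layer([math.log(0.6), math.log(0.4), math.log(1e-9)])]
top = beam_search(layers, B=2)
print(top[0][0])  # -> (0, 0), the highest-probability path
```

The items attached to the B surviving paths are then merged to form the candidate set, so retrieval cost grows with B and D rather than with the catalog size.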

Training the Structure Model

The model parameters θ and the item‑to‑path mapping π are jointly optimized with an EM algorithm. The E‑step updates θ via gradient‑based methods and accumulates hidden scores s[v, c] for each item‑path pair; the M‑step updates π by selecting high‑scoring paths for each item while applying a path‑size penalty f(x) = x⁴ (scaled by a coefficient α) on the number of items assigned to a path, so that items do not pile up on a few popular paths.
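The penalized M‑step can be sketched greedily: each item takes its highest‑scoring paths after subtracting the marginal penalty α·((n+1)⁴ − n⁴) for adding one more item to a path of current size n. The greedy ordering and the names here are our assumptions, kept minimal for illustration:

```python
def assign_paths(scores, J=2, alpha=1e-7):
    """M-step sketch: pick J paths per item, discouraging crowded paths.

    scores: dict item -> dict path -> accumulated hidden score s[v, c].
    """
    path_size = {}  # how many items each path has absorbed so far
    mapping = {}
    for item, s in scores.items():
        ranked = []
        for path, sc in s.items():
            n = path_size.get(path, 0)
            # marginal cost of growing this path by one item, with f(x) = x**4
            penalty = alpha * ((n + 1) ** 4 - n ** 4)
            ranked.append((sc - penalty, path))
        ranked.sort(reverse=True)
        chosen = [p for _, p in ranked[:J]]
        mapping[item] = chosen
        for p in chosen:
            path_size[p] = path_size.get(p, 0) + 1
    return mapping

# Toy demo: with a strong penalty, the second item is pushed off the
# path the first item already took, even though its raw score is higher.
scores = {"a": {(0,): 1.0, (1,): 0.9},
          "b": {(0,): 1.0, (1,): 0.2}}
mapping = assign_paths(scores, J=1, alpha=0.5)
print(mapping)  # -> {'a': [(0,)], 'b': [(1,)]}
```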

Online EM for Streaming Data

For streaming training, a sliding‑window average of hidden scores and a fixed‑size hidden‑path set are maintained in the E‑step, while the M‑step periodically reads these from a Parameter Server, recomputes true paths with the penalty, and writes them back.
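The streaming E‑step bookkeeping can be sketched as a decayed running average of hidden scores per (item, path), keeping only a fixed‑size set of candidate paths per item. The parameter server is mocked as a plain dict, and the decay factor and set size S are our assumptions:

```python
def update_scores(store, item, new_scores, decay=0.9, S=3):
    """Online E-step sketch: decay old scores, fold in fresh ones,
    and retain only the S best candidate paths for this item."""
    old = store.get(item, {})
    merged = {p: decay * v for p, v in old.items()}  # age existing scores
    for p, sc in new_scores.items():
        merged[p] = merged.get(p, 0.0) + (1 - decay) * sc
    # fixed-size hidden-path set: keep the S highest-scoring paths
    top = dict(sorted(merged.items(), key=lambda kv: -kv[1])[:S])
    store[item] = top
    return top

# Toy demo standing in for a parameter-server write-back.
store = {}
kept = update_scores(store, "a",
                     {(0,): 1.0, (1,): 0.5, (2,): 0.4, (3,): 0.1},
                     decay=0.5, S=2)
print(sorted(kept))  # -> [(0,), (1,)]
```

The periodic M‑step then reads these scores back, reassigns paths with the size penalty, and writes the new mapping π to the parameter server.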

Multi‑Task Learning

DR adopts a multi‑task framework: the structure loss trains the index, while separate field‑aware factorization machine (FFM) or neural network models train user/item embeddings for a reranker. Beam‑searched candidates are first filtered by the reranker before final ranking, balancing the coarse and fine retrieval stages.
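The two‑stage flow can be sketched in a few lines, with the reranker abstracted as a hypothetical scoring callable (the function names are ours):

```python
def retrieve(path_to_items, selected_paths, rerank_score, top_n=5):
    """Two-stage sketch: merge items from beam-searched paths, then
    keep the top_n items by a (hypothetical) reranker score."""
    candidates = {item
                  for p in selected_paths
                  for item in path_to_items.get(p, [])}
    return sorted(candidates, key=rerank_score, reverse=True)[:top_n]

# Toy demo: coarse stage returns {'a', 'b', 'c'}; fine stage keeps 2.
path_to_items = {(0,): ["a", "b"], (1,): ["c"]}
rerank = {"a": 0.1, "b": 0.9, "c": 0.5}
result = retrieve(path_to_items, [(0,), (1,)], lambda i: rerank[i], top_n=2)
print(result)  # -> ['b', 'c']
```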

Discussion and Q&A Highlights

DR clusters focus on user behavior rather than pure item similarity, yielding higher diversity than traditional ANN.

Items can belong to multiple paths, and the model works well for ad and recommendation products, though it may reduce relevance for pure search.

EM convergence is typically achieved after 3‑5 M‑steps offline or 5‑10 M‑steps online.

Without multi‑task learning, paths become imbalanced; the penalty and negative examples in the reranker mitigate this.

The model has been deployed across many ByteDance products with successful online performance.

The session concludes with thanks to the audience.

Tags: multi-task learning, recommendation systems, beam search, deep retrieval, EM algorithm, learnable index
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
