Sequence Feature Modeling in Large-Scale Recommendation Systems and Fast Deployment with EasyRec
This article reviews the evolution of behavior‑sequence modeling methods—from pooling and target‑attention to RNN, capsule, transformer, and graph neural networks—explains their industrial relevance, and demonstrates how to quickly apply these techniques in the EasyRec framework with practical configuration examples.
The article begins with an overview of sequence features in recommendation and advertising, emphasizing that modeling users' historical behavior sequences can capture dynamic preferences and improve click‑through‑rate (CTR) predictions.
It then surveys major families of sequence‑modeling approaches:
Pooling methods (e.g., the YouTube DNN), which treat all historical items equally and aggregate them with mean, sum, or max pooling.
Target‑Attention methods (e.g., DIN, DSTN) that assign attention scores to each historical item based on its relevance to the target item.
RNN‑based methods (DIEN, DUPN, HUP, DHAN) that capture temporal order and evolving user interests.
Capsule methods (MIND, ComiRec) that perform dynamic routing to obtain multiple user‑interest vectors for diverse recall.
Transformer methods (ATRank, BST, DSIN, TISSA, SDM, KFAtt, DFN, SIM, DMT, AliSearch) that leverage multi‑head self‑attention to model long‑range dependencies.
Graph Neural Network methods (SURGE) that construct an interest graph from interaction sequences and apply graph convolutions to aggregate contextual information.
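To make the contrast between the first two families concrete, here is a minimal NumPy sketch of mean pooling versus a DIN-style target attention over the same behavior sequence. The dot-product scorer is a deliberate simplification: DIN actually scores each item with a small MLP over the item, the target, and their interactions.

```python
import numpy as np

rng = np.random.default_rng(0)
seq = rng.normal(size=(16, 8))   # 16 historical item embeddings, dim 8
target = rng.normal(size=(8,))   # target (candidate) item embedding

# Pooling family: every historical item contributes equally.
mean_pooled = seq.mean(axis=0)

# Target-attention family (DIN-style, simplified): weight each historical
# item by its relevance to the target item, then take a weighted sum.
scores = seq @ target                    # dot-product relevance scores
weights = np.exp(scores - scores.max())  # numerically stable softmax
weights /= weights.sum()
attended = (weights[:, None] * seq).sum(axis=0)

print(mean_pooled.shape, attended.shape)  # both (8,)
```

Both aggregations produce a fixed-size user-interest vector, but the attended one changes with the target item, which is what lets target attention capture target-dependent preferences.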
After the methodological review, the article introduces EasyRec, an open-source large-scale distributed recommendation framework from Alibaba Cloud. It outlines the steps needed to configure and run EasyRec, including data preparation, environment setup, and model training.
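Assuming you work from the open-source repository, that workflow can be sketched as shell commands. `train_eval` and `export` are EasyRec's documented entry points, but `my_din.config` is a placeholder for your own pipeline config, and you should check the repo's README for the current install instructions.

```shell
# Fetch and install EasyRec (follow the repo README if this drifts)
git clone https://github.com/alibaba/EasyRec.git
cd EasyRec
pip install .

# Train and evaluate with a pipeline config (my_din.config is a placeholder)
python -m easy_rec.python.train_eval --pipeline_config_path my_din.config

# Export the trained model for online serving
python -m easy_rec.python.export --pipeline_config_path my_din.config \
    --export_dir ./export
```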
For practical deployment, a DIN-based configuration example is provided. The two sequence features are defined in the feature config as follows:
feature_configs: {
  input_names: 'tag_brand_list'
  feature_type: SequenceFeature
  separator: '|'
  hash_bucket_size: 100000
  embedding_dim: 16
}
feature_configs: {
  input_names: 'tag_category_list'
  feature_type: SequenceFeature
  separator: '|'
  hash_bucket_size: 100000
  embedding_dim: 16
}

The raw input format for these features uses ':' to separate name and value, '#' to separate multiple features, and ';' to separate multiple sequences, e.g.,
4281|4281|4281|4281|4281|4281|4281|4281|4281|4281|4281|4281|4281|4281|4526|4526,283837|283837|283837|283837|283837|283837|283837|283837|283837|283837|283837|283837|283837|283837|367594|367594

After offline training, the model can be exported and deployed for online serving, enabling rapid application of cutting-edge sequence-feature techniques in real-world recommendation scenarios.
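To make the row format concrete, here is a small Python sketch that parses a shortened version of the example row above into bucketed id lists, following the separators visible in the example (items within each sequence joined by '|', the two feature sequences separated by ','). The md5-based `hash_bucket` helper is an illustrative stand-in I introduce here, not EasyRec's actual hash function.

```python
import hashlib

# Shortened example row: two sequence features separated by ',',
# items within each sequence separated by '|' (the configured separator).
raw = "4281|4281|4526,283837|283837|367594"

def hash_bucket(token: str, num_buckets: int = 100000) -> int:
    """Map a string token to a bucket id (illustrative stand-in for the
    hash_bucket_size lookup; not EasyRec's real hashing)."""
    return int(hashlib.md5(token.encode()).hexdigest(), 16) % num_buckets

brand_seq, category_seq = raw.split(',')
tag_brand_ids = [hash_bucket(t) for t in brand_seq.split('|')]
tag_category_ids = [hash_bucket(t) for t in category_seq.split('|')]
print(tag_brand_ids, tag_category_ids)
```

Each resulting id list would then be looked up in the corresponding 16-dimensional embedding table before sequence aggregation.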
The article concludes with a comprehensive reference list of the cited papers and resources.
DataFunTalk