
Optimizing Fliggy Search Ranking with Product Inclusion Relationships: The DIRN Model

This article presents the DIRN model, which leverages product inclusion graphs and graph‑based embeddings to address the challenges of ranking both single‑item and complex travel products on Fliggy, demonstrating significant CTR, CVR, and GMV improvements through offline experiments and online A/B testing.

DataFunSummit

Fliggy (Alibaba's travel platform) distinguishes between single‑item products (e.g., hotel, ticket) and complex products (e.g., tour packages) that consist of multiple single items, creating a product‑inclusion relationship that complicates search ranking.
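The product-inclusion relationship can be pictured as a bipartite graph between complex products and the single items they contain. A toy sketch (hypothetical product IDs, not Fliggy's actual data model):

```python
# Toy inclusion graph: complex products "contain" single-item products.
# All IDs here are made up for illustration.
inclusion = {
    "beijing_package": ["hotel_A", "ticket_greatwall"],   # complex -> singles
    "shanghai_package": ["hotel_B", "ticket_disney"],
}

def singles_of(complex_id):
    """Single items contained in a complex product."""
    return inclusion.get(complex_id, [])

def packages_containing(single_id):
    """Complex products that include a given single item."""
    return [c for c, items in inclusion.items() if single_id in items]

print(packages_containing("hotel_A"))  # -> ['beijing_package']
```

Edges in this graph are exactly what complicates ranking: a click on `hotel_A` carries signal about `beijing_package`, and vice versa.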

The ranking pipeline consists of Query Planner, recall (text, attribute, LBS, vector, graph), coarse ranking, and fine ranking, where fine ranking uses a Learning‑to‑Rank (LTR) model fed with CTR, CVR, GMV, and category predictions.
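The article does not give Fliggy's fusion formula, but a common LTR-style pattern for combining such predictions is a weighted multiplicative blend, sketched below with entirely hypothetical weights:

```python
def rank_score(ctr, cvr, price, alpha=1.0, beta=1.0, gamma=0.3):
    """Illustrative multiplicative fusion of fine-ranking predictions.

    ctr * cvr approximates per-impression purchase probability, and the
    price term nudges the score toward GMV. The exponents are hypothetical
    tuning knobs, not Fliggy's published parameters.
    """
    return (ctr ** alpha) * (cvr ** beta) * (price ** gamma)
```

In practice such weights are tuned online against the business metrics (CTR, CVR, GMV) the article reports.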

Three key challenges arise from product inclusion: (1) extracting relational information from the inclusion graph, (2) quantifying association strength between items connected by paths, and (3) incorporating both item‑wise similarity and path‑based association into user interest modeling.
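Challenge (2) hinges on finding paths through the inclusion graph between a historical item and a candidate. Simplifying to an undirected, unweighted graph (the paper uses Dijkstra-derived paths scored with GraphSAGE embeddings), a breadth-first shortest-path search looks like this:

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path in an undirected, unweighted adjacency dict.
    Returns the node list, or None if src and dst are disconnected."""
    if src == dst:
        return [src]
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == dst:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Toy data: two hotels linked through a shared tour package.
g = {
    "hotel_A": ["package_1"],
    "hotel_B": ["package_1"],
    "package_1": ["hotel_A", "hotel_B"],
}
print(shortest_path(g, "hotel_A", "hotel_B"))  # -> ['hotel_A', 'package_1', 'hotel_B']
```

The association strength between the endpoints can then be scored from the embeddings of the nodes along such paths, which is the role of the relation-path interest layer described next.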

To address these, the authors propose the DIRN model, which comprises three modules: (a) a graph‑based embedding generator (GraphSAGE pre‑trained on the inclusion graph), (b) a representation‑based interest layer that computes attention‑weighted similarity between candidate items and the user's historical click sequence, and (c) a relation‑path interest layer (AMU) that quantifies the strongest association among all shortest paths between historical and candidate items using Dijkstra‑derived paths and GraphSAGE embeddings.
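Module (b), the representation-based interest layer, can be sketched as target attention over the user's click history: the candidate's embedding keys an attention distribution over historical item embeddings, whose weighted sum is the user's interest vector. A minimal numpy sketch (dimensions and scoring function are illustrative, not DIRN's exact architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # shift for numerical stability
    return e / e.sum()

def interest_vector(history_emb, candidate_emb):
    """Attention-weighted sum of historical click embeddings, keyed by
    the candidate item (a sketch of DIRN's interest layer)."""
    scores = history_emb @ candidate_emb   # (T,) dot-product relevance
    weights = softmax(scores)              # attention distribution
    return weights @ history_emb           # (d,) interest vector

rng = np.random.default_rng(0)
hist = rng.normal(size=(5, 8))   # 5 clicked items, 8-dim (toy) embeddings
cand = rng.normal(size=8)        # candidate item embedding
v = interest_vector(hist, cand)
```

With GraphSAGE-pretrained embeddings as inputs, the same attention mechanism picks out historical items that are close to the candidate in the inclusion graph, not just textually similar.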

The final feature vector concatenates dynamic interest, path‑based interest, similarity scores, user and item attributes, and context features, which are fed into a fully‑connected network to predict click or purchase probability.
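The concatenate-then-predict step can be sketched as follows, with hypothetical feature-group sizes and a two-layer tower standing in for DIRN's fully-connected network:

```python
import numpy as np

def mlp_forward(features, W1, b1, W2, b2):
    """Two-layer fully-connected net with ReLU hidden layer and sigmoid
    output: a minimal stand-in for DIRN's prediction tower."""
    h = np.maximum(0.0, features @ W1 + b1)   # hidden ReLU layer
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))       # click/purchase probability

rng = np.random.default_rng(1)
# Hypothetical feature groups, concatenated into one input vector.
dynamic_interest = rng.normal(size=8)    # attention-based interest
path_interest = rng.normal(size=8)       # relation-path interest
sim_scores = rng.normal(size=2)          # item-wise similarity scores
user_item_ctx = rng.normal(size=6)       # user, item, context features
x = np.concatenate([dynamic_interest, path_interest, sim_scores, user_item_ctx])

p = mlp_forward(x, rng.normal(size=(24, 16)), np.zeros(16), rng.normal(size=16), 0.0)
```

Training then fits the tower's weights against click or purchase labels; here the random weights merely show the data flow.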

Offline experiments on 156 million exposure‑click samples (over 1.3 million items, including 300,000 complex items) show that DIRN outperforms strong baselines such as DMR, achieving a 0.75 % absolute AUC gain for CTR and superior LogLoss. Ablation studies confirm the effectiveness of both the graph embedding layer and the relation‑path interest layer.

Online deployment consists of offline training (GraphSAGE pre‑training on AliGraph, path association computation, ESMM multi‑task CTR/CVR training) and real‑time serving on Alibaba's TPP platform, where the CTR/CVR scores are combined with other estimations in an LTR model for final ranking.
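The ESMM multi-task setup factorizes the conversion estimate over the full impression space: the CVR tower models P(buy | click), and multiplying by pCTR yields pCTCVR, so both tasks train on exposure data without click-sample selection bias. A minimal sketch:

```python
def esmm_scores(p_ctr, p_cvr_given_click):
    """ESMM factorization: pCTCVR = pCTR * P(buy | click).

    Both factors come from towers sharing an embedding layer in the real
    model; here they are plain probabilities for illustration.
    """
    p_ctcvr = p_ctr * p_cvr_given_click
    return p_ctr, p_ctcvr

ctr, ctcvr = esmm_scores(0.08, 0.10)   # toy probabilities
```

At serving time these scores feed the LTR model alongside the other estimations for final ranking.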

Live A/B testing demonstrates that DIRN improves CTR by 1.7 %, CVR by 2.4 %, and GMV by 6.0 % compared with the DMR baseline.

The presentation concludes with a Q&A covering GraphSAGE embedding benefits, embedding dimensionality, unified modeling of single and package items, and the use of attention‑based user interest vectors.

Tags: Alibaba, machine learning, CTR prediction, search ranking, Graph Neural Networks, DIRN, product inclusion
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
