WeChat 'Kan Kan' Content Understanding: Architecture and Techniques for Recommendation
This article details the technical architecture behind WeChat's 'Kan Kan' content understanding platform, covering text and multimedia analysis, tag extraction, entity recognition, knowledge graph construction, and how these components enhance recommendation recall, ranking, and user engagement across the ecosystem.
WeChat's "Kan Kan" recommendation product relies on a comprehensive content understanding platform that extracts semantic information from massive heterogeneous data sources.
The platform normalizes incoming content and performs multi-dimensional analysis. Text understanding covers classification with LSTM, TextCNN, fastText, and BERT, plus tag extraction via TF-IDF, LDA, TextRank, CRF, and deep LSTM-CRF models. Multimedia understanding spans video classification, multi-label prediction, face detection, OCR, and multimodal embedding. Knowledge graph construction includes entity, relation, and attribute extraction, followed by fusion, reasoning, and embedding.
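To make the TF-IDF side of tag extraction concrete, here is a minimal sketch in plain Python. The corpus, pre-tokenized documents, and tie-breaking behavior are illustrative assumptions, not the production pipeline, which would run over a far larger corpus with proper Chinese segmentation.

```python
import math
from collections import Counter

def tfidf_tags(docs, top_k=3):
    """Score each term per document by TF-IDF and return the top_k as candidate tags.

    docs: list of pre-tokenized documents (lists of terms).
    """
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    result = []
    for doc in docs:
        tf = Counter(doc)
        # TF-IDF = (term count / doc length) * log(corpus size / document frequency)
        scores = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        result.append([t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]])
    return result

docs = [
    ["wechat", "video", "video", "recommendation"],
    ["wechat", "graph", "entity"],
    ["video", "cover", "gif"],
]
tags = tfidf_tags(docs, top_k=1)
# "recommendation" outranks "video" in doc 0: it is rarer across the corpus.
```

Terms that occur in every document get an IDF of zero, so corpus-wide stopwords are suppressed automatically.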
Tag and entity pipelines use both unsupervised methods (TF‑IDF, LDA, TextRank) and supervised deep models (BiLSTM‑CRF, BERT) to generate high‑quality tags, which are further mapped to the internal taxonomy through Tag2Tag and Context2Tag strategies.
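Among the unsupervised methods, TextRank ranks keyword candidates by running PageRank over a word co-occurrence graph. A compact sketch, with window size, damping factor, and iteration count as assumed hyperparameters:

```python
from collections import defaultdict

def textrank_keywords(words, window=2, damping=0.85, iters=30, top_k=3):
    """Rank keywords by PageRank over a co-occurrence graph.

    Two words are linked if they appear within `window` tokens of each other.
    """
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    # Iterative PageRank: each node shares its score equally among neighbors.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(score[u] / len(neighbors[u]) for u in neighbors[w])
            for w in neighbors
        }
    return sorted(score, key=score.get, reverse=True)[:top_k]

tokens = ["content", "tag", "content", "entity", "content", "graph"]
# Highly connected words ("content", "entity") surface as top keywords.
```

Supervised models like BiLSTM-CRF then refine such candidates with sequence labels, which TF-IDF and TextRank cannot provide on their own.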
Video processing includes frame sampling, TSN‑ResNet50 classification, video‑shuffle feature exchange, NetVLAD pooling, cover‑image selection via K‑Means clustering, aesthetic scoring, and GIF generation; embeddings are learned from visual, facial, OCR and audio modalities and applied to retrieval, deduplication and downstream recommendation.
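The TSN-style frame sampling mentioned above draws one frame per equal-length temporal segment, giving sparse but global coverage of a video. A minimal sketch of just the index-sampling step (assuming the frame count is at least the segment count; actual decoding and ResNet50 inference are omitted):

```python
import random

def tsn_sample_indices(num_frames, num_segments, seed=None):
    """TSN-style sparse sampling: split the video into equal segments
    and draw one random frame index from each segment.

    Assumes num_frames >= num_segments.
    """
    rng = random.Random(seed)
    # Segment boundaries, e.g. 300 frames / 3 segments -> [0, 100, 200, 300].
    bounds = [i * num_frames // num_segments for i in range(num_segments + 1)]
    return [rng.randrange(bounds[i], bounds[i + 1]) for i in range(num_segments)]

indices = tsn_sample_indices(num_frames=300, num_segments=3, seed=0)
# One index from [0, 100), one from [100, 200), one from [200, 300).
```

Sampling per segment rather than densely keeps inference cost flat regardless of video length while still observing the whole timeline.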
To drive business impact, the extracted content signals are fed into full-link features for recall, coarse-ranking, and fine-ranking models, and into target-prediction models (DNN, PNN (Product-based Neural Network), DeepFM, xDeepFM) that estimate click-through, share, and view-value scores for new items.
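The FM component shared by DeepFM-style models scores pairwise feature interactions efficiently via the identity sum_{i<j} v_i . v_j = 0.5 * sum_d ((sum_i v_id)^2 - sum_i v_id^2). A small sketch of that term alone (the embedding values are made up; the full models add first-order and deep components):

```python
def fm_second_order(embeddings):
    """Factorization-machine pairwise interaction term, as used in the
    FM side of DeepFM: sum over all feature pairs of their embedding dot
    products, computed in O(n * dim) via the square-of-sums trick.
    """
    dim = len(embeddings[0])
    # Per-dimension sum and sum of squares across all feature embeddings.
    total = [sum(v[d] for v in embeddings) for d in range(dim)]
    sq = [sum(v[d] ** 2 for v in embeddings) for d in range(dim)]
    return 0.5 * sum(total[d] ** 2 - sq[d] for d in range(dim))

# Three toy feature embeddings; pairwise dots are 2 + 0 + 3 = 5.
score = fm_second_order([[1, 0], [2, 1], [0, 3]])
```

The trick avoids the O(n^2) loop over feature pairs, which matters when ranking models consume hundreds of sparse feature fields per item.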
Applications span full‑link feature injection, content probing for cold‑start items, construction of quality content libraries for specific user groups (e.g., elderly, high‑share videos), and intelligent creative tools such as automatic cover selection, GIF creation and title generation.
The system is fully service-oriented, with NLP and vision services deployed on internal platforms, A/B testing pipelines, and real-time content understanding loops that keep the recommendation ecosystem up to date.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.