
Cold‑Start Optimization for Content Recommendation on Alibaba’s Home‑Decor Platform

Alibaba’s home‑decor platform introduced a two‑stage cold‑start pipeline—Uniform Guarantee and Boost Amplification—combined with a Wide & Deep content‑potential model that predicts new‑item popularity, dramatically reducing exposure latency while lifting click‑through rates by ~8 % and overall exposure by ~13 %.

DaTaobao Tech

The article describes a series of cold‑start improvements for the "Every Square Meter" (每平每屋) content channel of Alibaba’s home‑decor platform. The platform serves a large variety of visual content (2D/3D images, videos, VR tours) and faces challenges in delivering fresh, high‑quality items to both creators and consumers.

Background: Traditional recommendation pipelines favor head content, causing long‑tail items to receive little exposure. To address this, the team introduced a two‑stage cold‑start flow: Uniform Guarantee and Boost Amplification.

Uniform Guarantee ensures that newly published items with low exposure (pv < x) receive a baseline amount of impressions for a fixed period, while limiting the daily flow per creator based on a historical efficiency index.
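The article does not publish the actual thresholds or quota formula, but the guarantee logic can be sketched roughly as follows. All constants (`PV_THRESHOLD`, `GUARANTEE_DAYS`, `BASE_DAILY_QUOTA`) and the linear scaling by creator efficiency are illustrative assumptions, not values from the article:

```python
from dataclasses import dataclass

# Illustrative constants -- the real thresholds are not disclosed.
PV_THRESHOLD = 100       # items below this lifetime exposure still qualify
GUARANTEE_DAYS = 3       # length of the fixed guarantee window
BASE_DAILY_QUOTA = 500   # baseline guaranteed impressions per day

@dataclass
class Item:
    pv: int                    # lifetime page views so far
    age_days: int              # days since publication
    creator_efficiency: float  # creator's historical efficiency index in [0, 1]

def daily_quota(item: Item) -> int:
    """Guaranteed impressions for today; 0 once the item leaves the window."""
    if item.pv >= PV_THRESHOLD or item.age_days > GUARANTEE_DAYS:
        return 0
    # Cap the per-creator daily flow by scaling with historical efficiency.
    return int(BASE_DAILY_QUOTA * item.creator_efficiency)
```

A fresh item from an average creator would thus draw a fraction of the baseline quota each day until it either accumulates enough exposure or the window expires.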

Boost Amplification dynamically increases traffic for items that have passed the uniform stage and achieve a click‑through rate above a threshold. Items are assigned to one of several boost levels, and their allocated traffic grows with real‑time performance.
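The level table below is a minimal sketch of this tiered scheme; the number of levels, CTR floors, and multipliers are all hypothetical, since the article only describes the mechanism qualitatively:

```python
# Hypothetical boost-level table: (CTR floor, traffic multiplier),
# ordered from highest tier to lowest.
BOOST_LEVELS = [(0.08, 4.0), (0.05, 2.0), (0.03, 1.5)]
CTR_THRESHOLD = 0.03  # minimum real-time CTR to enter boosting at all

def boost_multiplier(ctr: float) -> float:
    """Map an item's real-time CTR to its traffic multiplier."""
    if ctr < CTR_THRESHOLD:
        return 1.0  # below threshold: no amplification
    for floor, mult in BOOST_LEVELS:
        if ctr >= floor:
            return mult
    return 1.0
```

Because the multiplier is re-evaluated against real-time CTR, an item's allocated traffic grows (or stalls) with its live performance, matching the behavior described above.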

Because cold‑start items lack feedback data, a dedicated recall‑ranking pipeline was built. A Content Potential Prediction Model predicts the probability that a new item will become popular within seven days. The model uses content attributes (style, space, product IDs, price) and visual embeddings from cover images, deliberately avoiding interaction‑based features. Two labeling schemes were explored: (1) strict PV/CTR thresholds, and (2) dimension‑wise CTR normalization.
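The second labeling scheme, dimension-wise CTR normalization, can be sketched as follows: each item's CTR is compared against the mean CTR of its own dimension bucket (here keyed by `style`, an assumption), so that genres with naturally different engagement levels are labeled fairly. The field names and the mean-based cutoff are illustrative:

```python
from collections import defaultdict

def dimension_normalized_labels(items):
    """Label items hot (1) / not hot (0) relative to their own dimension.

    items: list of dicts with keys 'style', 'clicks', 'pv'.
    Returns a list of 0/1 labels aligned with the input order.
    """
    by_dim = defaultdict(list)
    for idx, it in enumerate(items):
        by_dim[it['style']].append(idx)

    labels = [0] * len(items)
    for idx_list in by_dim.values():
        ctrs = [items[i]['clicks'] / items[i]['pv'] for i in idx_list]
        mean_ctr = sum(ctrs) / len(ctrs)
        # An item is "hot" only if it beats its own bucket's average CTR.
        for i, ctr in zip(idx_list, ctrs):
            labels[i] = 1 if ctr > mean_ctr else 0
    return labels
```

Scheme (1), by contrast, would simply apply one global PV/CTR cutoff to every item regardless of dimension.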

The model is based on a Wide & Deep architecture, concatenating image embeddings with sparse feature embeddings. Sample confidence is weighted by exposure PV during training.
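A minimal numpy sketch of this forward pass and of PV-weighted sample confidence is given below. All shapes, the single hidden layer, and the `log1p(pv)` weighting are assumptions for illustration; the article specifies only the Wide & Deep structure, the embedding concatenation, and the PV-based confidence weighting:

```python
import numpy as np

def wide_and_deep(image_emb, sparse_embs, W_wide, W1, W2):
    """Toy Wide & Deep forward pass producing P(item becomes hot in 7 days)."""
    # Concatenate the cover-image embedding with the sparse feature embeddings.
    x = np.concatenate([image_emb] + sparse_embs)
    h = np.maximum(0.0, W1 @ x)        # deep part: one ReLU hidden layer
    deep_out = W2 @ h
    wide_out = W_wide @ x              # wide part: linear over raw inputs
    logit = wide_out + deep_out
    return 1.0 / (1.0 + np.exp(-logit))

def pv_weighted_loss(p, y, pv):
    """Log loss weighted by exposure PV: high-exposure samples count more."""
    w = np.log1p(pv)  # illustrative confidence weight; exact form not disclosed
    return -w * (y * np.log(p) + (1 - y) * np.log(1 - p))
```

With all weights at zero the logit is 0 and the predicted probability is 0.5, which is a convenient sanity check when wiring the pieces together.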

Offline evaluation shows that the first labeling scheme yields higher precision for the top 10 % of hot items. A 7‑day online A/B test demonstrates significant lifts: uniform guarantee reduces time to first exposure, while boost amplification improves metrics such as pCTR (+8 %), uCTR (+8 %), and overall exposure ratio (+13 %). The potential‑score feature also improves ranking and recall cut‑offs.

Conclusion: The redesigned cold‑start pipeline dramatically shortens the exposure latency for new content, enhances traffic freshness, and increases the efficiency of cold‑start traffic. Future work includes finer‑grained strategies per content genre and incorporating more real‑time features.

Tags: Alibaba · machine learning · recommendation · A/B testing · cold start · content distribution
Written by DaTaobao Tech, the official account of DaTaobao Technology.