
Angel Recommendation Algorithms for Game Platforms

This article introduces Tencent's self‑developed Angel distributed machine‑learning platform and demonstrates how its recommendation algorithms, including classic linear models, collaborative filtering, and the non‑linear DeepFM model, are applied to game recommendation scenarios such as Steam, WeGame, and the Tesla platform.

DataFunSummit

Angel is Tencent's self‑developed distributed high‑performance machine‑learning platform, supporting machine learning, deep learning, graph computing, and federated learning, and its recommendation algorithms are applied in many Tencent scenarios.

Game platform recommendation

The Steam platform uses a tag-based recommendation method, collecting tags directly from user selections. Because the catalog is large and deep, user-chosen tags provide a ready-made abstraction of item features that would otherwise have to be learned, for example through collaborative-filtering ALS factorization.
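As a minimal sketch of this tag-based matching (the tag vocabulary and game names below are hypothetical), a user's selected tags and a game's tags can be compared as multi-hot vectors under cosine similarity:

```python
from math import sqrt

def tag_vector(tags, vocab):
    """Multi-hot vector over a fixed tag vocabulary."""
    return [1.0 if t in tags else 0.0 for t in vocab]

def cosine(a, b):
    """Cosine similarity between two tag vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["rpg", "shooter", "indie", "strategy"]
user = tag_vector({"rpg", "indie"}, vocab)
game_a = tag_vector({"rpg", "strategy"}, vocab)   # shares "rpg" with the user
game_b = tag_vector({"shooter"}, vocab)           # shares nothing
```

Games sharing more tags with the user score higher, with no matrix factorization required.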

WeGame’s recommendation pipeline does not rely on manually extracted tags; instead it applies collaborative filtering (CF) and DeepFM directly to user behavior data.

Tesla platform recommendation algorithm

The Tesla platform can be trialed at https://cloud.tencent.com . By following the wiki documentation, users can generate models and define parameters. Traditional algorithms such as CF‑ALS can be used, with hyper‑parameters (Rank, Lambda, Alpha) tuned iteratively.
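The CF-ALS approach can be sketched in plain NumPy. This is an illustrative alternating-least-squares loop, not Tesla's implementation; the toy rating matrix is invented, `rank` and `lam` stand in for the Rank and Lambda hyper-parameters, and the Alpha confidence weight of implicit-feedback ALS is omitted for brevity:

```python
import numpy as np

def als(R, rank=2, lam=0.1, iters=20, seed=0):
    """Plain ALS on a dense rating matrix R: alternately solve ridge
    regressions for user factors U and item factors V."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        # Closed-form ridge solutions, holding the other factor fixed.
        U = R @ V @ np.linalg.inv(V.T @ V + I)
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V

R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])
U, V = als(R, rank=2)
pred = U @ V.T  # predicted scores for every (user, item) pair
```

Tuning then amounts to sweeping `rank` and `lam` (and, for implicit ALS, Alpha) against held-out error, as described in the wiki documentation.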

Linear features of classic algorithms

Content‑based tag recommendation is often subjective, while collaborative filtering (item‑based, user‑based, or hybrid) suffers from sparse matrices and long‑tail recommendation problems. To address long‑tail items, clustering can be performed before applying CF, and user/item profiles can be combined with DeepFM.

Non‑linear features of DeepFM

DeepFM extends FM by accepting not only UserID and ItemID but also additional user/item features, allowing second‑order feature interactions. Its non‑linear part resembles CNN weight decomposition, and visualizations (e.g., heatmaps) can be used to interpret learned representations.
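The second-order interaction term shared by FM and DeepFM does not require enumerating all feature pairs: the standard O(nk) reformulation computes it from per-factor sums. The sketch below (random data, illustrative only) checks that identity against a brute-force pairwise sum:

```python
import numpy as np

def fm_pairwise(x, V):
    """Second-order FM term via the O(nk) identity:
    sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f ((sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2)."""
    s = V.T @ x                                   # per-factor weighted sums
    return 0.5 * float(np.sum(s * s) - np.sum((V * V).T @ (x * x)))

def fm_pairwise_naive(x, V):
    """O(n^2 k) brute force over all feature pairs, for comparison."""
    n = len(x)
    return float(sum(V[i] @ V[j] * x[i] * x[j]
                     for i in range(n) for j in range(i + 1, n)))

rng = np.random.default_rng(0)
x = rng.normal(size=6)        # one example's feature values
V = rng.normal(size=(6, 4))   # latent factor vectors, one row per feature
```

The embedding rows `V[i]` are exactly the vectors that heatmap visualizations inspect.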

DeepFM automatically embeds categorical features and can adjust feature dimensions based on error feedback, effectively performing automatic clustering and mitigating cold‑start issues.

DeepFM application process

Data preprocessing typically uses a Vector Assembler to combine fields into a feature vector, which is then normalized. Categorical columns can be transformed into sparse matrices via Feature Hasher, and the resulting vector is fed into DeepFM.
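A minimal stand-in for the Feature Hasher step (not the Spark API itself; field names below are invented) shows the hashing trick: each categorical `(name, value)` pair is mapped into a fixed-length vector, with collisions accepted by design:

```python
import hashlib

def feature_hash(features, dim=16):
    """Hash each (name, value) categorical feature into a fixed-length
    vector bucket; collisions simply add up."""
    vec = [0.0] * dim
    for name, value in features:
        h = int(hashlib.md5(f"{name}={value}".encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

row = [("os", "windows"), ("genre", "moba"), ("region", "cn")]
hashed = feature_hash(row, dim=16)
```

The resulting fixed-width vector can then be assembled with the numeric columns and fed to DeepFM regardless of how many distinct categorical values appear in training.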

Validation AUC correlates with click‑through rate; higher AUC generally indicates higher CTR.
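AUC itself can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, which is why it tracks ranking quality for CTR. A small sketch with toy labels and scores:

```python
def auc(labels, scores):
    """AUC as the probability a random positive outranks a random
    negative; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```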

Target data collected from specific positions should be modeled separately to preserve feature relevance.

Increasing training data size noticeably improves AUC.

Post‑ranking filters should remove already owned or played items and prioritize new or hot games.
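The post-ranking rule above can be sketched as a filter-then-reorder step (game IDs and the owned/hot sets are hypothetical). Python's stable sort keeps the model's score order within each group while surfacing hot titles first:

```python
def post_rank(candidates, owned, hot):
    """Drop games the user already owns or has played, then move 'hot'
    titles to the front; otherwise preserve model score order."""
    kept = [g for g in candidates if g not in owned]
    # Stable sort: key is False (sorts first) for hot games, True otherwise.
    return sorted(kept, key=lambda g: g not in hot)
```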

FM traditionally requires user_id, leading to large label spaces and cold‑start problems; DeepFM can rely on rich feature inputs to alleviate this.

Finally, thanks to the TEG team for providing a robust Spark in‑memory computing platform and cutting‑edge machine‑learning algorithms on the Tesla platform.

Thank you for listening.

machine learning · recommendation · collaborative filtering · Game Platform · DeepFM
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
