
Applying Angel Recommendation Algorithms to Game Recommendation: From Linear Classic Models to DeepFM

This article explains how Tencent's Angel platform powers game recommendation on Steam, Wegame, and the Tesla platform, comparing traditional linear tag‑based and collaborative‑filtering methods with the non‑linear DeepFM model and offering practical deployment tips.

DataFunTalk

Angel is Tencent's self‑developed, high‑performance distributed machine‑learning platform supporting machine learning, deep learning, graph computing, and federated learning; its deep‑learning component is widely used across Tencent. This talk introduces the application of Angel's recommendation algorithms in game recommendation.

On the Steam platform, game recommendation relies on user‑selected tags, which serve as explicit substitutes for the feature vectors that collaborative‑filtering ALS would otherwise learn. Wegame, by contrast, relies on collaborative filtering and DeepFM rather than manually extracted tags.

The Tesla platform offers a trial URL (https://cloud.tencent.com) where users can follow the wiki documentation to generate models, configure parameters, and run recommendations. It also supports traditional algorithms such as CF‑ALS, whose hyper‑parameters (Rank, Lambda, and Alpha) are tuned iteratively.
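The names Rank, Lambda, and Alpha match the latent dimension, regularization strength, and implicit‑feedback confidence weight of the classic implicit‑feedback ALS formulation. The talk includes no code, so the following is only a minimal NumPy sketch, on a made‑up toy matrix, of how these three hyper‑parameters enter the alternating solves:

```python
import numpy as np

def implicit_als(R, rank=8, reg_lambda=0.1, alpha=40.0, iters=10, seed=0):
    """Minimal implicit-feedback ALS sketch.

    R:          user x item matrix of raw interactions (e.g. play time).
    rank:       latent dimension (the "Rank" hyper-parameter).
    reg_lambda: L2 regularization strength ("Lambda").
    alpha:      confidence scaling for observed interactions ("Alpha").
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = (R > 0).astype(float)            # binary preference targets
    C = 1.0 + alpha * R                  # per-entry confidence weights
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    I = np.eye(rank)
    for _ in range(iters):
        # Fix item factors, solve a regularized least-squares per user...
        for u in range(n_users):
            Cu = np.diag(C[u])
            U[u] = np.linalg.solve(V.T @ Cu @ V + reg_lambda * I,
                                   V.T @ Cu @ P[u])
        # ...then fix user factors and solve per item.
        for i in range(n_items):
            Ci = np.diag(C[:, i])
            V[i] = np.linalg.solve(U.T @ Ci @ U + reg_lambda * I,
                                   U.T @ Ci @ P[:, i])
    return U, V

# Tiny toy interaction matrix: 3 users x 4 games.
R = np.array([[5., 0., 3., 0.],
              [0., 4., 0., 2.],
              [4., 0., 5., 0.]])
U, V = implicit_als(R, rank=2)
scores = U @ V.T   # predicted preference for every user-game pair
```

Raising Rank increases model capacity, Lambda trades fit against overfitting, and Alpha controls how strongly observed play counts are trusted over missing entries.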

Classic linear algorithms include content‑based tag recommendation (subjective) and collaborative filtering (item‑based, user‑based, or hybrid), which face sparse matrix and long‑tail challenges. A common solution is to cluster items first and then apply collaborative filtering, while integrating user and item profiles into DeepFM.
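As a concrete sketch of the item‑based variant (toy data, illustrative function names): cosine similarity between item columns scores unseen items from a user's history; the clustering workaround mentioned above would simply restrict this similarity computation to items within one cluster, shrinking the sparse matrix each CF run sees.

```python
import numpy as np

def item_similarity(R):
    """Item-item cosine similarity from a user x item interaction matrix."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0              # avoid division by zero
    Rn = R / norms
    return Rn.T @ Rn

def recommend_for_user(R, sim, user, top_k=2):
    """Score items by similarity-weighted sum over the user's history,
    masking items the user has already interacted with."""
    scores = sim @ R[user]
    scores[R[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_k]

# Toy user x item matrix: users 0 and 1 overlap, user 2 likes the long tail.
R = np.array([[1., 1., 0., 0.],
              [1., 1., 1., 0.],
              [0., 0., 1., 1.]])
sim = item_similarity(R)
recs = recommend_for_user(R, sim, user=0)   # user 0's top unseen items
```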

DeepFM extends the input beyond user‑ID and item‑ID to include additional user and item features and performs second‑order feature crossing, similar in spirit to weight decomposition in CNNs. Its embedding component handles large numbers of categorical features, automatically mapping them to dense vectors and enabling dimensionality reduction; these embeddings can stand in for the user‑ID to mitigate cold‑start problems.
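The second‑order crossing is the FM term $\sum_{i<j}\langle v_i, v_j\rangle x_i x_j$, which factorizes each pairwise interaction weight into a dot product of per‑feature embeddings and can be computed in $O(kn)$ rather than $O(kn^2)$. A quick NumPy check of that identity (dimensions and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 6, 3                  # k = latent (embedding) dimension
x = rng.random(n_features)            # feature values for one sample
V = rng.normal(size=(n_features, k))  # one k-dim latent vector per feature

# Naive O(k n^2) pairwise sum: sum_{i<j} <v_i, v_j> x_i x_j
naive = sum(V[i] @ V[j] * x[i] * x[j]
            for i in range(n_features) for j in range(i + 1, n_features))

# FM's O(k n) reformulation:
# 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
linear_sq = (V.T @ x) ** 2            # (sum_i v_if x_i)^2, per factor f
sq_sum = (V ** 2).T @ (x ** 2)        # sum_i v_if^2 x_i^2, per factor f
fast = 0.5 * float((linear_sq - sq_sum).sum())
```

Because every pair shares the same embeddings, the model can estimate crosses between features that never co‑occur in training, which is exactly what helps with sparse categorical inputs.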

In practice, the data pipeline uses VectorAssembler to combine fields into a single vector and standardize it, while FeatureHasher converts categorical columns into sparse vectors; the resulting feature vector is fed into the DeepFM model.
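To make the hashing step concrete, here is a toy, dependency‑free imitation of what FeatureHasher does conceptually: each `column=value` term is hashed into a fixed‑width vector. (Spark's implementation uses MurmurHash3 and signed/unsigned bucket details differ; md5 here is purely for the sketch.)

```python
import hashlib
import numpy as np

def hash_features(row, n_buckets=16):
    """Toy feature hashing: map "column=value" strings into a fixed-width
    vector. Collisions are possible by design; the width is a trade-off
    between memory and collision rate."""
    vec = np.zeros(n_buckets)
    for col, val in row.items():
        key = f"{col}={val}"
        idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % n_buckets
        vec[idx] += 1.0
    return vec

# Hypothetical categorical columns for one user-game sample.
row = {"region": "EU", "genre": "MOBA", "device": "PC"}
v = hash_features(row)
```

The appeal is that no vocabulary needs to be maintained: new category values at serving time hash into the same fixed‑width space the model was trained on.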

Key practical tips:

- Validation AUC often correlates with click‑through rate.
- Target data should be scene‑specific and not merged across different positions.
- Increasing training data size improves AUC.
- Post‑ranking filters out already owned or played items and re‑ranks in favor of new or hot games.
- FM requires user_id, but DeepFM can operate without it when rich features are available.
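The post‑ranking step can be sketched as a plain filter‑then‑boost pass over the model's scored candidates (all names and the boost value below are hypothetical):

```python
def post_rank(scored, owned, new_or_hot, boost=0.2):
    """Drop games the user already owns or has played, then add a fixed
    boost to new/hot titles before the final descending sort."""
    filtered = [(g, s + boost if g in new_or_hot else s)
                for g, s in scored if g not in owned]
    return sorted(filtered, key=lambda gs: gs[1], reverse=True)

# Hypothetical model output: (game, score) pairs.
scored = [("game_a", 0.9), ("game_b", 0.8), ("game_c", 0.75), ("game_d", 0.5)]
result = post_rank(scored, owned={"game_a"}, new_or_hot={"game_d"})
```

Keeping this logic outside the model makes business rules (ownership filters, promotion of new releases) adjustable without retraining.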

The article concludes with acknowledgments to the TEG team for their work on the Tesla platform and the Spark in‑memory computing platform, and thanks the audience.

Tags: machine learning, AI, collaborative filtering, recommendation systems, DeepFM, gaming platforms
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
