Artificial Intelligence

Building Graph Algorithm Tasks on Tencent Cloud TI-ONE with Angel

This article introduces Tencent Cloud's TI-ONE AI platform, explains its built‑in Angel algorithm support, demonstrates how to visually construct a graph‑algorithm workflow such as GraphSage, and outlines the resource configuration, execution, and result retrieval process for developers.

DataFunTalk

TI-ONE is a one‑stop machine‑learning service platform for AI engineers, offering end‑to‑end support from data preprocessing to model evaluation, and includes a rich set of algorithm components and frameworks such as PyTorch, TensorFlow, and Angel.

The platform addresses common challenges faced by AI practitioners, including limited GPU resources, rapidly evolving frameworks, high learning curves, time‑consuming hyper‑parameter tuning, and costly product deployment cycles.

TI-ONE provides solutions such as on‑demand compute resources, drag‑and‑drop task design, integration of popular deep‑learning frameworks, built‑in algorithm libraries (CNN, RNN, clustering, etc.), flexible execution modes, one‑click deployment, and notebook‑style interactive modeling.

Angel is supported on TI-ONE in two forms: the Spark-on-Angel framework for running custom code, and pre-packaged Angel algorithm components (graph algorithms, PyTorch on Angel, also known as PyTONA, and machine-learning algorithms), each with detailed usage documentation.

Users can create custom training jobs by dragging the Spark‑on‑Angel component onto the canvas, configuring job JARs, main classes, parameters, and resource specifications (executor, driver, PS nodes), and then monitor logs through the integrated console.
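The job JAR, main class, parameters, and executor/driver/PS resource fields described above map onto the shape of a standard `spark-submit` invocation. The sketch below is illustrative only: the class name, JAR name, COS paths, and resource numbers are placeholders, and the `spark.ps.*` keys follow the configuration names documented for Spark-on-Angel (treat the exact keys as an assumption to verify against the platform docs):

```shell
# Illustrative shape of a Spark-on-Angel job submission.
# All concrete values (class, JAR, paths, sizes) are placeholders.
spark-submit \
  --master yarn --deploy-mode cluster \
  --class com.example.MyAngelJob \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 8g \
  --driver-memory 4g \
  --conf spark.ps.instances=2 \
  --conf spark.ps.cores=2 \
  --conf spark.ps.memory=8g \
  my-angel-job.jar \
  --input cosn://my-bucket/path/in --output cosn://my-bucket/path/out
```

On TI-ONE these same fields are filled in through the component's side panel rather than on the command line, but thinking of them as `spark-submit` flags clarifies what each resource setting (executor, driver, PS node) controls.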

The visual modeling canvas allows users to select algorithm modules, connect them automatically, adjust parameters via side menus, and execute the workflow to generate results, logs, and model artifacts stored in COS.

A concrete example shows building a GraphSage graph‑algorithm task: the GraphSage component is dragged onto the canvas, COS data is linked, and various I/O and algorithm parameters (batch size, learning rate, partitions, etc.) are configured; resources ranging from 2‑core/4 GB to 64‑core/256 GB can be allocated and are released automatically after use.
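To make concrete what the GraphSage component computes with its neighbor-sampling parameters, here is a minimal, framework-free sketch of one mean-aggregation layer. Everything here (function names, the toy graph, the fixed `sample_size`) is illustrative and is not TI-ONE or Angel API; the real component trains learnable weights on top of this aggregation, which the sketch omits:

```python
import random

def graphsage_layer(adj, features, sample_size=2, seed=0):
    """One GraphSAGE-style propagation step with mean aggregation.

    For each node, sample up to `sample_size` neighbors, average their
    feature vectors, and concatenate that mean onto the node's own
    features. `adj` maps node -> list of neighbor ids; `features` maps
    node -> feature vector (list of floats).
    """
    rng = random.Random(seed)
    out = {}
    for node, neighbors in adj.items():
        k = min(sample_size, len(neighbors))
        sampled = rng.sample(neighbors, k) if k else []
        dim = len(features[node])
        agg = [0.0] * dim
        for n in sampled:
            for i, v in enumerate(features[n]):
                agg[i] += v
        if sampled:
            agg = [v / len(sampled) for v in agg]
        # Concatenate self features with the aggregated neighbor mean.
        out[node] = features[node] + agg
    return out
```

In the real component, batch size controls how many such nodes are processed per step, the learning rate scales the weight updates applied after aggregation, and the partition count determines how the graph is sharded across executors and parameter servers.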

After execution, users can view logs, retrieve model links, and access results directly from COS, completing the end‑to‑end pipeline from data upload to model deployment within TI-ONE.

Tags: machine learning, visualization, AI platform, Tencent Cloud, TI-ONE, GraphSAGE, Angel
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
