
Insights on Graph Computing: Technology, Applications, and Future Directions

Professor Chen Wenguang discusses how graph computing, which originates from graph theory, offers a powerful way to model relationships across industries; its rapid development in China; the challenges of scaling; its integration with AI via graph neural networks; and the academia–industry collaboration needed to advance the field.


Graph computing is grounded in graph theory; the "graph" here means a network of nodes and edges, not an image. It studies the relationships between entities and enables the description, analysis, and computation of complex connections. Gartner predicts that by 2025 graph technology will be used in 80% of data and analytics innovations, underscoring its growing importance in finance, manufacturing, energy, and even brain science.

In a Founder Park interview, Professor Chen Wenguang explains that a graph consists of nodes and edges, illustrating examples such as social networks, road networks, and directed versus undirected graphs. He emphasizes that graph data can naturally represent massive relationships, often involving billions of nodes and edges.
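The node-and-edge structure Chen describes can be made concrete with a small sketch (an illustration, not code from the interview): an adjacency-list representation that covers both his examples, undirected graphs such as social networks and directed graphs such as one-way road networks. All names here are made up for illustration.

```python
from collections import defaultdict

def build_graph(edges, directed=False):
    """Build an adjacency-list graph from (source, target) pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        if not directed:
            adj[v].add(u)  # undirected: the edge is traversable both ways
    return adj

# A tiny social network (undirected: friendship is mutual)
friends = build_graph([("ann", "bo"), ("bo", "cy")])

# A road network with one-way streets (directed)
roads = build_graph([("A", "B"), ("B", "C")], directed=True)
```

Real systems at the scale Chen mentions, billions of nodes and edges, use compressed and partitioned variants of this same idea rather than in-process dictionaries.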

He contrasts traditional relational databases, which store data in tables, with graph databases that natively capture relationships, arguing that graphs provide a more expressive way to model connections than lists or trees.
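A toy comparison illustrates the contrast (this example is mine, not Chen's): answering "who are Ann's friends' friends?" from a flat edge table requires a scan, effectively a self-join, per hop, while graph-native adjacency storage answers each hop with a direct neighbor lookup.

```python
# Relational style: relationships stored as rows in a flat table.
edge_table = [("ann", "bo"), ("bo", "cy"), ("bo", "di")]

def two_hop_table(person):
    # Each hop scans every row: a self-join in miniature.
    hop1 = {b for a, b in edge_table if a == person}
    return {b for a, b in edge_table if a in hop1}

# Graph style: each node keeps its neighbors directly.
adjacency = {"ann": {"bo"}, "bo": {"cy", "di"}}

def two_hop_graph(person):
    # Each hop is a constant-time neighbor lookup, no scanning.
    return {n2 for n1 in adjacency.get(person, ())
               for n2 in adjacency.get(n1, ())}
```

Both return the same answer; the difference is that the table approach re-pays the join cost on every hop, which is why deep traversals favor graph storage.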

Graph computing can be viewed narrowly—performing calculations on a static graph (e.g., shortest path in a road network)—or broadly—handling dynamic graphs with changing edge attributes or streaming data, which also encompasses graph databases.
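The narrow view's canonical example, shortest path in a road network, can be sketched with Dijkstra's algorithm over a weighted adjacency dict (a minimal illustration; the road names and weights are invented):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm. graph: {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None  # goal unreachable from start

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": [("D", 5)]}
```

The broad view changes the problem: if edge weights stream in (live traffic, say), the system must update answers incrementally instead of recomputing from a static snapshot.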

The development of graph computing in China accelerated when Chen, after early work on compiler optimizations, shifted to graph research in 2010. His team built a distributed in‑memory graph system in 2016 that outperformed existing frameworks like GraphX by up to 100× while using only one‑tenth of the memory.

Industry applications span financial risk control (detecting circular guarantees and real‑time fraud detection within 20 ms), power‑grid fault analysis, e‑commerce recommendation, epidemic contact tracing, and even Ant Forest’s energy‑stealing feature, all leveraging graph structures.
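Circular guarantees are, structurally, cycles in a directed graph of who guarantees whom, so the detection reduces to depth-first search for a back edge. A minimal sketch (the firm names are hypothetical; production risk-control systems operate on far larger graphs under the latency budgets mentioned above):

```python
def find_cycle(guarantees):
    """DFS cycle detection. guarantees: {company: [companies it guarantees]}.
    Returns one cycle as a list of nodes, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current path / done
    color = {}

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for nxt in guarantees.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:      # back edge: cycle found
                return path[path.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt, path)
                if found:
                    return found
        color[node] = BLACK
        path.pop()
        return None

    for node in list(guarantees):
        if color.get(node, WHITE) == WHITE:
            found = dfs(node, [])
            if found:
                return found
    return None

g = {"FirmA": ["FirmB"], "FirmB": ["FirmC"], "FirmC": ["FirmA"]}
```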

Integration with AI has given rise to graph neural networks (GNNs), which embed node and edge features into vectors for neural processing, improving tasks such as credit scoring in Sesame Credit.
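The core GNN operation, embedding nodes as vectors by aggregating over neighbors, can be sketched as a single mean-aggregation message-passing layer (a simplified GCN-style illustration of the general idea, not the architecture used in any Ant Group product):

```python
import numpy as np

def message_passing_layer(adj, features, weight):
    """One round of mean-aggregation message passing.
    adj: {node_index: [neighbor indices]}, features: (N, d) array."""
    aggregated = np.zeros_like(features)
    for node in range(features.shape[0]):
        nbrs = adj.get(node, []) + [node]       # include a self-loop
        aggregated[node] = features[nbrs].mean(axis=0)
    # Linear transform plus ReLU maps each node to a new embedding
    return np.maximum(aggregated @ weight, 0.0)

# Tiny example: a 3-node path graph with 2-dimensional features
adj = {0: [1], 1: [0, 2], 2: [1]}
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.eye(2)                                    # identity weights for clarity
h = message_passing_layer(adj, x, w)
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is what makes relational signals usable by downstream scoring models.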

Chen also explores using graph computing for brain simulation, aiming to model neuronal connections and dynamics, a challenging interdisciplinary effort shared by academia and industry.

Challenges ahead include attracting talent, creating market feedback loops, and bridging the gap between academic research and practical enterprise needs. Collaboration initiatives, such as open‑source graph database benchmarks and joint patent releases by companies like Ant Group, aim to align academic problems with real‑world data and scenarios.

Globally, competition involves traditional database vendors (Oracle, SAP), dedicated graph companies (Neo4j, TigerGraph), and domestic startups. Ant Group holds strong internal capabilities but seeks to improve product generalization for external customers.

The interview concludes with reflections on the importance of defining original problems, fostering academia‑industry partnerships, and encouraging innovative research to drive the next wave of graph computing advancements.

Written by AntTech
Technology is the core driver of Ant's future creation.
