LOGIN: Large‑Model‑Assisted Graph Neural Networks for User Behavior Risk Control
This article summarizes recent work from the Chinese Academy of Sciences on graph machine learning for user behavior risk control. It introduces the LOGIN framework, which uses large language models as consultants to iteratively enhance GNN training, and validates the approach through extensive experiments on homophilous and heterophilous graph benchmarks.
Graph machine learning offers natural advantages for risk‑control tasks because it can preserve the full interaction graph of users and logs, avoiding lossy feature engineering and enabling multi‑account relational analysis, complex interaction modeling, and automatic node representation learning.
Applying GNNs to risk control faces three major challenges: severe class imbalance of fraudulent samples, adversarial attacks that require robust models, and continual distribution drift as new fraud patterns emerge, all of which demand specialized training strategies.
To address these issues, the authors propose a new paradigm called LLMs‑as‑Consultants and instantiate it with the LOGIN method. LOGIN consists of four stages: (1) selecting uncertain (hard) nodes via dropout‑based variance; (2) constructing textual prompts that describe the local sub‑graph of each hard node; (3) consulting a large language model for predictions and explanations; and (4) feeding the LLM feedback back into the GNN—enhancing node attributes when the LLM is correct and performing structure denoising when it is wrong.
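Stage (1), selecting hard nodes by dropout-based variance, can be sketched in plain NumPy: run several stochastic forward passes with random dropout masks and rank nodes by how much their predicted class probabilities vary. The linear "classifier" below is a toy stand-in for a trained GNN, and all function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def mc_dropout_uncertainty(features, weights, n_passes=20, drop_p=0.5, seed=0):
    """Estimate per-node predictive uncertainty via Monte Carlo dropout.

    features: (N, D) node feature matrix (toy stand-in for GNN inputs)
    weights:  (D, C) classifier weights
    Returns a length-N array: mean variance of class probabilities across passes.
    """
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_passes):
        mask = rng.random(features.shape) > drop_p          # random dropout mask
        logits = (features * mask / (1 - drop_p)) @ weights  # inverted dropout
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs.append(e / e.sum(axis=1, keepdims=True))       # softmax per node
    return np.stack(probs).var(axis=0).mean(axis=1)          # (N,)

def select_hard_nodes(features, weights, k):
    """Pick the k nodes whose predictions vary most under dropout."""
    var = mc_dropout_uncertainty(features, weights)
    return np.argsort(var)[-k:]  # indices of the k most uncertain nodes

rng = np.random.default_rng(1)
X, W = rng.normal(size=(10, 8)), rng.normal(size=(8, 3))
hard = select_hard_nodes(X, W, k=3)
print(hard)  # indices of the 3 "hardest" nodes in the toy graph
```

High variance across dropout passes means the model is unsure about that node, which is exactly the case where a second opinion from an LLM consultant is worth the query cost.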
The training follows a transductive setting: both training and test nodes reside in the same graph, and the graph is iteratively refined by adding enriched attributes or removing suspicious edges before re‑training the GNN. During inference, only the enhanced GNN is used.
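One refinement round of stage (4) might look like the following sketch. The `consult_llm` callable, the string-concatenation form of attribute enrichment, and the pick-one-incident-edge pruning heuristic are all illustrative assumptions; the paper's actual prompting and structure-denoising logic is more involved.

```python
def refine_graph(edges, attrs, labels, hard_nodes, consult_llm):
    """One LOGIN-style refinement round (sketch): for each hard node, ask the
    LLM 'consultant' for a prediction and an explanation, then either enrich
    the node's attributes (LLM correct) or prune an incident edge (LLM wrong).
    """
    edges = set(edges)
    for v in hard_nodes:
        pred, explanation = consult_llm(v, attrs[v], edges)
        if pred == labels[v]:
            # feature enhancement: fold the explanation into the node's text
            attrs[v] = attrs[v] + " | " + explanation
        else:
            # structure denoising: drop one suspicious incident edge
            incident = sorted(e for e in edges if v in e)
            if incident:
                edges.discard(incident[0])
    return sorted(edges), attrs

# toy demo with a fake consultant that always answers "benign":
# correct on node 0 (attributes enriched), wrong on node 1 (edge pruned)
attrs = {0: "user A", 1: "user B", 2: "user C"}
labels = {0: "benign", 1: "fraud", 2: "benign"}
edges = [(0, 1), (1, 2)]

def fake_llm(v, attr, edges):
    return ("benign", "low-risk login pattern")  # hypothetical LLM response

new_edges, new_attrs = refine_graph(edges, attrs, labels, [0, 1], fake_llm)
print(new_edges)   # edge (0, 1) removed after the wrong answer on node 1
print(new_attrs[0])  # node 0's text now carries the LLM's explanation
```

After each such round the GNN is re-trained on the refined graph, matching the transductive loop described above; at inference time only the final GNN runs, so no LLM calls are needed online.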
Extensive experiments on homophilous (e.g., PubMed) and heterophilous (e.g., Texas) benchmark graphs show that vanilla GNNs augmented with LOGIN match or surpass state-of-the-art GNNs. Ablation studies confirm that both the feature-enhancement and structure-denoising components matter, and comparisons with the earlier LLMs-as-Predictors and LLMs-as-Enhancers paradigms show LOGIN's consistent advantage.
The work concludes that large models can serve as effective consultants to improve GNN‑based risk‑control systems, and outlines future challenges such as scaling to billions of user interactions, token‑budget constraints for prompt construction, and reducing latency and cost of LLM queries.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.