
Explainability in Graph Neural Networks: A Taxonomic Survey

This article surveys recent advances in graph neural network explainability: it systematically categorizes instance‑level and model‑level methods, reviews datasets and evaluation metrics, proposes new benchmark graph datasets for interpretable GNN research, and highlights future research directions.

DataFunTalk

Abstract

Recent progress in deep learning interpretability for images and text has not been matched in the graph domain. This survey systematically reviews GNN explanation techniques, introduces a unified taxonomy, provides benchmark graph datasets, and discusses evaluation metrics, offering a comprehensive foundation for future research.

Introduction

Explaining black‑box models is essential for trust, especially in safety‑critical applications. Explanation methods are divided into input‑dependent (instance‑level) and input‑independent (model‑level) approaches, but graph‑specific interpretability remains under‑explored.

Overall Framework

Explanation techniques are organized into two major categories: instance‑level methods that identify important nodes, edges, or features for a specific graph, and model‑level methods that reveal general graph patterns influencing GNN behavior. Figure 1 (in the original paper) illustrates this taxonomy.

Methodology

Instance‑Level Methods

Gradient/Feature‑Based Methods (e.g., SA, Guided BP, CAM, Grad‑CAM)
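The common thread of these methods is to use the gradient of the model's output with respect to the input as an importance signal. A minimal sketch of the idea, assuming a toy single linear GCN layer on a 4‑node graph (real implementations such as SA use autograd on a trained model; the graph, features, and weights here are made up for illustration):

```python
import numpy as np

# Toy 4-node graph: adjacency, self-loops, symmetric normalisation (GCN-style).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
d = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d[:, None] * A_hat * d[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # node features
W = rng.normal(size=(3, 2))          # layer weights

# Target: the class-0 score of node 2 under one linear GCN layer,
# score = (A_norm @ X @ W)[2, 0].
# For this linear layer the gradient w.r.t. X is exact:
grad = np.outer(A_norm[2], W[:, 0])  # d(score)/dX, shape (4, 3)

# Saliency map (SA-style): absolute gradient; node importance sums over features.
saliency = np.abs(grad)
node_importance = saliency.sum(axis=1)
print(node_importance)
```

Grad‑CAM‑style variants additionally weight intermediate feature maps by these gradients rather than attributing directly to the input.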

Perturbation‑Based Methods (e.g., GNNExplainer, PGExplainer, GraphMask, ZORRO, Causal Screening)
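Perturbation methods score graph components by how much the prediction changes when they are masked out. GNNExplainer and PGExplainer learn soft masks by optimization; the sketch below uses a simpler greedy occlusion over edges to convey the same intuition (toy graph and weights are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))

def gcn_score(A, X, W, node=2, cls=0):
    """Class score for one node under a single linear GCN layer."""
    A_hat = A + np.eye(len(A))
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d[:, None] * A_hat * d[None, :]
    return (A_norm @ X @ W)[node, cls]

base = gcn_score(A, X, W)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]

# Occlude each edge in turn and record the drop in the target score.
importance = {}
for (i, j) in edges:
    A_pert = A.copy()
    A_pert[i, j] = A_pert[j, i] = 0.0
    importance[(i, j)] = base - gcn_score(A_pert, X, W)

# Edges ranked by how much removing them perturbs the prediction.
ranked = sorted(importance, key=lambda e: abs(importance[e]), reverse=True)
print(ranked)
```

The learned‑mask methods replace this exhaustive loop with a differentiable mask trained to keep the prediction while selecting few edges.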

Surrogate Methods (e.g., GraphLime, RelEx, PGM‑Explainer)
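Surrogate methods fit a simple, interpretable model to the black‑box GNN's behavior in the neighborhood of the instance being explained. A LIME‑style sketch, assuming a hypothetical black‑box scoring function for a target node (GraphLime actually uses a nonlinear HSIC‑Lasso surrogate; a plain local linear fit is used here only to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box stand-in for "GNN output at the target node".
w_true = np.array([2.0, -1.0, 0.5])
def black_box(x):
    return np.tanh(x @ w_true)

x0 = np.array([0.3, -0.2, 0.8])      # features of the node being explained

# Sample perturbations near x0, query the black box, and fit a linear
# surrogate that is faithful locally.
Z = x0 + 0.1 * rng.normal(size=(200, 3))
y = black_box(Z)
Z1 = np.column_stack([Z, np.ones(len(Z))])       # add an intercept column
coef, *_ = np.linalg.lstsq(Z1, y, rcond=None)

# The surrogate's weights serve as feature-importance scores.
feature_importance = np.abs(coef[:3])
print(feature_importance.argsort()[::-1])        # features ranked by |weight|
```

PGM‑Explainer follows the same recipe but fits a probabilistic graphical model instead of a linear one.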

Decomposition Methods (e.g., LRP, Excitation BP, GNN‑LRP)

Model‑Level Methods

At the time of the survey, the sole representative is XGNN, which trains a reinforcement‑learning‑based graph generator to produce graph patterns that maximize a target prediction.

Evaluation

Due to the lack of ground‑truth explanations, the survey summarizes common datasets (synthetic motifs, sentiment graphs, molecular graphs) and evaluation metrics such as fidelity/infidelity, sparsity, stability, and accuracy. These metrics assess how well explanations align with model behavior and human‑interpretable patterns.
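Two of these metrics are simple enough to state directly: fidelity measures how much the prediction drops when the explanation is removed from the input, and sparsity measures how small the explanation is relative to the whole graph. A minimal sketch with a toy scoring "model" (the normalized degree of a target node) and a hypothetical explainer output:

```python
# Toy scoring "model": normalised degree of a target node (illustration only).
def model_score(edges, target=2, n_nodes=4):
    deg = sum(1 for e in edges if target in e)
    return deg / (n_nodes - 1)

all_edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
explanation = [(1, 2), (2, 3)]       # edges a hypothetical explainer selected

# Fidelity+: score drop when the explanation edges are removed.
# A larger drop means the explanation captured what the model relied on.
pruned = [e for e in all_edges if e not in explanation]
fidelity = model_score(all_edges) - model_score(pruned)

# Sparsity: fraction of the graph the explanation leaves out; higher is
# more compact (an explanation covering the whole graph is uninformative).
sparsity = 1.0 - len(explanation) / len(all_edges)

print(round(fidelity, 3), round(sparsity, 3))    # → 0.667 0.5
```

Infidelity and stability are defined analogously via perturbations of the input, while accuracy compares explanations against planted ground‑truth motifs in synthetic datasets.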

Conclusion

The survey provides a thorough taxonomy of GNN explanation methods and detailed analyses of each technique, introduces three human‑understandable sentiment graph datasets for future benchmarking, and highlights the need for standardized datasets and metrics in this emerging field.

machine learning · GNN · Graph Neural Networks · explainability · Interpretability · Survey · benchmark datasets
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
