Distribution-aware Graph Prompt Tuning (DAGPrompT) for Heterophilic Graphs
Distribution‑aware Graph Prompt Tuning (DAGPrompT) tackles the mismatch between pre‑training objectives and downstream tasks on heterophilic graphs by combining low‑rank GLoRA adaptation with hop‑specific prompts and recasting downstream tasks as link prediction, yielding up to a 4.79% accuracy gain and an average 2.43% improvement in few‑shot node classification.
The pre-train‑then‑fine‑tune paradigm has advanced Graph Neural Networks (GNNs), but mismatches between pre‑training objectives and downstream tasks limit performance, especially on heterophilic graphs where neighboring nodes often have different labels.
Existing graph prompting methods freeze the GNN encoder and use simple prompts, which hampers adaptation to distribution shifts and ignores the diversity of node hops. To address these issues, we propose Distribution‑aware Graph Prompt Tuning (DAGPrompT). DAGPrompT introduces a GLoRA module that jointly optimizes low‑rank approximations of the GNN’s projection matrix and message‑passing mechanism, and a hop‑specific prompting system that dynamically adjusts to structural and semantic changes across hops.
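To make the GLoRA idea concrete, here is a minimal numpy sketch of low‑rank adaptation applied to a frozen GNN projection matrix. This is an illustration under simplifying assumptions (a single GCN‑style layer, toy data), not the paper's implementation; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, n = 16, 8, 2, 5

# Frozen pre-trained projection matrix (never updated during prompting).
W = rng.normal(size=(d_in, d_out))

# Learnable low-rank factors: the effective weight is W + A @ B,
# so only rank-`rank` degrees of freedom are tuned.
A = np.zeros((d_in, rank))            # zero init => starts at the frozen model
B = rng.normal(size=(rank, d_out)) * 0.01

def gcn_layer(adj_norm, X):
    """One message-passing step with the low-rank-adapted projection."""
    W_eff = W + A @ B
    return np.maximum(adj_norm @ X @ W_eff, 0.0)  # ReLU

# Toy symmetrically normalized adjacency (path graph with self-loops).
adj = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
deg = adj.sum(1)
adj_norm = adj / np.sqrt(np.outer(deg, deg))
X = rng.normal(size=(n, d_in))

H = gcn_layer(adj_norm, X)
print(H.shape)  # (5, 8)
```

Because `A` is zero-initialized, the adapted layer starts out identical to the frozen model, and tuning only `A` and `B` keeps the number of trainable parameters small.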
We formalize the prompting stage as a task‑reconstruction problem, converting downstream tasks (node classification, graph classification) into link‑prediction forms via pseudo nodes or pseudo graphs. The objective maximizes the probability of correct label prediction on the reconstructed task.
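A hypothetical numpy sketch of this reconstruction for node classification: each class is represented by a learnable pseudo node ("class token"), and classification becomes predicting which class token a node is most likely linked to. Variable names and the dot-product link score are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_class, n_nodes = 8, 3, 4

# Node embeddings from the frozen encoder (toy values here).
H = rng.normal(size=(n_nodes, d))

# One learnable pseudo-node embedding per class.
class_tokens = rng.normal(size=(n_class, d))

# Node classification as link prediction: score each (node, class-token)
# pair, then predict the class whose token is the most likely link.
scores = H @ class_tokens.T                                     # (n_nodes, n_class)
probs = np.exp(scores) / np.exp(scores).sum(1, keepdims=True)   # softmax over classes
pred = probs.argmax(1)
print(pred.shape)  # (4,)
```

Maximizing the probability of the correct label then reduces to maximizing the link score between a node and its ground-truth class token.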
During pre‑training, we use unsupervised link prediction. In the prompting phase, the GLoRA module adjusts frozen GNN parameters with learnable low‑rank matrices, while the hierarchical prompting strategy computes similarity matrices for each layer’s node embeddings and class tokens, then fuses them with learnable weights.
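The hop-specific fusion can be sketched as follows: one similarity matrix per layer (hop), combined with softmax-normalized learnable weights. This is a minimal toy version under our own naming, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_class, d, n_layers = 4, 3, 8, 3

# Per-layer (per-hop) node embeddings and per-layer class tokens.
H_layers = [rng.normal(size=(n_nodes, d)) for _ in range(n_layers)]
tokens   = [rng.normal(size=(n_class, d)) for _ in range(n_layers)]

# Learnable per-hop fusion weights, normalized with a softmax.
w = np.zeros(n_layers)
alpha = np.exp(w) / np.exp(w).sum()

# Similarity matrix per hop, then a weighted fusion across hops.
sims = [H_layers[k] @ tokens[k].T for k in range(n_layers)]  # each (n_nodes, n_class)
fused = sum(alpha[k] * sims[k] for k in range(n_layers))

pred = fused.argmax(1)
print(fused.shape)  # (4, 3)
```

Letting each hop keep its own class tokens and similarity matrix is what allows the prompt to adapt to the structural and semantic shifts across hops that heterophilic graphs exhibit.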
Experiments on ten datasets and fourteen baselines for few‑shot node classification show that DAGPrompT consistently outperforms existing methods, achieving up to a 4.79% accuracy gain on heterophilic graphs (e.g., Texas) and an average improvement of 2.43% across all datasets.
Our results demonstrate that distribution‑aware prompting and low‑rank adaptation effectively bridge the gap between pre‑training and downstream tasks on complex, heterophilic graphs.
Alimama Tech