Diffusion-Guided Graph Data Augmentation

Maria Marrium1, Arif Mahmood1, Muhammad Haris Khan2, Muhammad Saad Shakeel3, Wenxiong Kang3
1Information Technology University, Lahore, Pakistan
2MBZUAI, Abu Dhabi, UAE
3South China University of Technology, Guangdong, China
NeurIPS 2025

Abstract

Graph Neural Networks (GNNs) have achieved remarkable success in a wide range of applications. However, when trained on limited or low-diversity datasets, GNNs are prone to overfitting and memorization, which impairs their generalization. To address this, graph data augmentation (GDA) has emerged as a crucial technique for enhancing the performance and generalization of GNNs.

Traditional GDA methods employ simple transformations that yield limited performance gains. Although recent diffusion-based augmentation methods offer improved results, they remain scarce, task-specific, and constrained by class labels.

In this work, we propose D-GDA, a more general and effective diffusion-based GDA framework that is task-agnostic and label-free. For better training stability and reduced computational cost, we employ a graph variational auto-encoder (GVAE) to learn a compact latent graph representation. A diffusion model is used in the learned latent space to generate both consistent and diverse augmentations.

Key Contributions

1. Task-Agnostic Framework

We propose D-GDA, a label-free, diffusion-based graph data augmentation framework that excels in node classification, link prediction, and graph classification across semi-supervised, supervised, and long-tailed settings. It supports test-time augmentation for enhanced performance.

2. Neighborhood-Aware Node Generation

D-GDA leverages a graph variational auto-encoder and a latent diffusion model for the proposed neighborhood-aware node generation, ensuring that augmentations preserve the local graph structure while introducing meaningful diversity.

3. Target Sample Selector

D-GDA introduces a Target Sample Selector to identify effective candidates for augmentation, resulting in overall performance improvement for a fixed augmentation budget by focusing on challenging regions in the training data space.

4. Enhanced ML Safety

D-GDA enhances ML safety measures including calibration, resistance to corruption, and prediction consistency. It is more robust against adversarial attacks (Random, DICE, GF, Meta-Attack) and converges to flatter minima for improved generalization.

Method Overview

D-GDA comprises three main components that work together to generate high-quality augmentations, plus a label-free test-time augmentation capability at inference:

1. Target Sample Selector (TSS)

Identifies samples that would benefit most from augmentation using entropy-based uncertainty estimation. For a fixed augmentation budget, TSS selects nodes with high prediction uncertainty, focusing augmentation on challenging regions of the data space.
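
As a concrete illustration, the selection rule can be as simple as ranking nodes by predictive entropy. The minimal PyTorch sketch below assumes logits from a partially trained GNN; the function name and budget value are illustrative, not taken from the paper's code.

import torch
import torch.nn.functional as F

@torch.no_grad()
def select_targets(logits: torch.Tensor, budget: int) -> torch.Tensor:
    # logits: [num_nodes, num_classes] scores from a (partially trained) GNN.
    probs = F.softmax(logits, dim=-1)
    # Shannon entropy of the predictive distribution, per node.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Spend the fixed augmentation budget on the most uncertain nodes.
    return entropy.topk(budget).indices

# Usage: target_idx = select_targets(gnn(x, edge_index), budget=100)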

2. Graph Variational Autoencoder (GVAE)

Learns compact latent representations of graph structure using the following modules (a minimal sketch follows the list):

  • GCN-based Encoder: Maps node features and adjacency to latent space
  • Feature Decoder: Reconstructs node features from latent representations
  • Link Predictor: Reconstructs graph structure from latent space
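
A minimal self-contained sketch of these three pieces; the dense normalized adjacency, layer sizes, and inner-product link predictor are simplifying assumptions, not the paper's exact architecture:

import torch
import torch.nn as nn

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2} on a dense adjacency.
    a = adj + torch.eye(adj.size(0), device=adj.device)
    d = a.sum(dim=-1).pow(-0.5)
    return d.unsqueeze(-1) * a * d.unsqueeze(0)

class GVAE(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, lat_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)      # first GCN layer
        self.lin_mu = nn.Linear(hid_dim, lat_dim)   # second GCN layer -> mean
        self.lin_lv = nn.Linear(hid_dim, lat_dim)   # second GCN layer -> log-variance
        self.feat_dec = nn.Sequential(              # feature decoder
            nn.Linear(lat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, in_dim))

    def encode(self, x, a_norm):
        h = torch.relu(a_norm @ self.lin1(x))       # GCN propagation
        return a_norm @ self.lin_mu(h), a_norm @ self.lin_lv(h)

    def forward(self, x, a_norm):
        mu, logvar = self.encode(x, a_norm)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        x_rec = self.feat_dec(z)                    # reconstruct node features
        a_rec = torch.sigmoid(z @ z.t())            # inner-product link predictor
        return x_rec, a_rec, mu, logvar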

3. Latent Diffusion Model (LDM)

Generates diverse augmentations conditioned on neighborhood embeddings. Unlike class-label conditioning, our neighborhood-aware conditioning ensures generated nodes preserve local graph structure while introducing meaningful diversity.
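
The sketch below shows one DDPM-style training step in the GVAE latent space. The MLP denoiser, the linear beta schedule, and using the mean of a node's neighbors' latents as the condition are illustrative assumptions:

import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)       # standard DDPM schedule

class Denoiser(nn.Module):
    def __init__(self, lat_dim: int, hid: int = 256):
        super().__init__()
        # Input: noisy latent + neighborhood condition + scalar timestep.
        self.net = nn.Sequential(nn.Linear(2 * lat_dim + 1, hid), nn.SiLU(),
                                 nn.Linear(hid, lat_dim))

    def forward(self, z_t, cond, t):
        t_emb = (t.float() / T).unsqueeze(-1)       # crude time embedding
        return self.net(torch.cat([z_t, cond, t_emb], dim=-1))

def diffusion_loss(model, z0, cond):
    # cond for node i could be z[neighbors(i)].mean(0), preserving locality.
    t = torch.randint(0, T, (z0.size(0),))
    ab = alpha_bar[t].unsqueeze(-1)
    eps = torch.randn_like(z0)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps    # forward noising
    return ((model(z_t, cond, t) - eps) ** 2).mean()  # predict the noise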

4. Test-Time Augmentation

Unlike label-dependent methods, D-GDA enables test-time augmentation for improved inference performance without requiring labels.
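
For instance, label-free inference can average class probabilities over several generated views of the test graph; generate_view below is a hypothetical hook into the trained GVAE + diffusion pipeline, and n_views is an arbitrary choice:

import torch
import torch.nn.functional as F

@torch.no_grad()
def tta_predict(gnn, x, edge_index, generate_view, n_views: int = 8):
    probs = F.softmax(gnn(x, edge_index), dim=-1)   # original graph
    for _ in range(n_views):
        x_aug = generate_view(x, edge_index)        # label-free augmented view
        probs = probs + F.softmax(gnn(x_aug, edge_index), dim=-1)
    return probs / (n_views + 1)                    # averaged prediction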

Figure 1: Overall architecture of D-GDA showing (A) Node Classification, (B) Link Prediction, and (C) Graph Classification pipelines.

Experimental Results

Performance Comparison for Node Classification

Performance Comparison for Link Prediction

Performance Comparison for Graph Classification

ML Safety Measures

D-GDA significantly enhances machine learning safety across multiple dimensions:

  • Calibration: Better alignment between predicted probabilities and actual correctness (an ECE sketch follows this list).
  • Corruption Robustness: Improved resistance to Gaussian, shot, impulse noise, and feature shifts.
  • Prediction Consistency: More stable predictions under minor input perturbations.
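
Calibration improvements of this kind are typically quantified with the expected calibration error (ECE); the 10-bin version below is a standard formulation, included as an assumed evaluation choice rather than the paper's exact metric code:

import torch

def ece(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 10) -> float:
    conf, pred = probs.max(dim=-1)                  # confidence and prediction
    correct = pred.eq(labels).float()
    err, edges = 0.0, torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():                              # bin weight * |accuracy - confidence|
            err = err + mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return float(err)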

Adversarial Robustness

D-GDA also substantially improves robustness against a range of adversarial attacks:

  • Random: Add/drop edges randomly.
  • DICE: Deletes intra-class edges and adds inter-class ones (sketched after this list).
  • GF-Attack: Optimizes a low-rank loss for structural perturbations.
  • Meta-Attack: Uses meta-gradient-based loss maximization.
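
As an example of what these structural perturbations look like, here is a hedged sketch of a DICE-style attack on a dense adjacency matrix; splitting the perturbation budget evenly between deletions and additions is a simplification:

import torch

def dice_attack(adj: torch.Tensor, labels: torch.Tensor, n_perturb: int):
    adj = adj.clone()
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    del_cand = ((adj > 0) & same).nonzero()         # intra-class edges to delete
    add_cand = ((adj == 0) & ~same).nonzero()       # inter-class non-edges to add
    for cand, val in ((del_cand, 0.0), (add_cand, 1.0)):
        pick = cand[torch.randperm(len(cand))[: n_perturb // 2]]
        adj[pick[:, 0], pick[:, 1]] = val
        adj[pick[:, 1], pick[:, 0]] = val           # keep the graph undirected
    return adj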

Diversity vs. Consistency

Figure 2: D-GDA achieves an optimal balance between diversity and consistency, outperforming existing methods across multiple datasets.

Citation

If you find our work useful, please cite:

@inproceedings{marrium2025dgda,
  title={Diffusion-Guided Graph Data Augmentation},
  author={Marrium, Maria and Mahmood, Arif and Khan, Muhammad Haris and Shakeel, Muhammad Saad and Kang, Wenxiong},
  booktitle={39th Conference on Neural Information Processing Systems (NeurIPS)},
  year={2025}
}