# Papers with Code

November 7, 2023

| # | Title | Type | Venue | Code | Year |
|---|-------|------|-------|------|------|
| 0 | Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective | ⚔ Attack | ICLR | Code | 2023 |
| 1 | Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning | ⚔ Attack | AAAI | Code | 2023 |
| 2 | GUAP: Graph Universal Attack Through Adversarial Patching | ⚔ Attack | arXiv | Code | 2023 |
| 3 | Node Injection for Class-specific Network Poisoning | ⚔ Attack | arXiv | Code | 2023 |
| 4 | Unnoticeable Backdoor Attacks on Graph Neural Networks | ⚔ Attack | WWW | Code | 2023 |
| 5 | Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem | ⚔ Attack | WSDM | Code | 2022 |
| 6 | Inference Attacks Against Graph Neural Networks | ⚔ Attack | USENIX Security | Code | 2022 |
| 7 | Model Stealing Attacks Against Inductive Graph Neural Networks | ⚔ Attack | IEEE Symposium on Security and Privacy | Code | 2022 |
| 8 | Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation | ⚔ Attack | WWW | Code | 2022 |
| 9 | Neighboring Backdoor Attacks on Graph Convolutional Network | ⚔ Attack | arXiv | Code | 2022 |
| 10 | Understanding and Improving Graph Injection Attack by Promoting Unnoticeability | ⚔ Attack | ICLR | Code | 2022 |
| 11 | Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs | ⚔ Attack | AAAI | Code | 2022 |
| 12 | Black-box Node Injection Attack for Graph Neural Networks | ⚔ Attack | arXiv | Code | 2022 |
| 13 | Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization | ⚔ Attack | Asia CCS | Code | 2022 |
| 14 | Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees | ⚔ Attack | CVPR | Code | 2022 |
| 15 | Transferable Graph Backdoor Attack | ⚔ Attack | RAID | Code | 2022 |
| 16 | Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors | ⚔ Attack | IJCAI | Code | 2022 |
| 17 | Are Gradients on Graph Structure Reliable in Gray-box Attacks? | ⚔ Attack | CIKM | Code | 2022 |
| 18 | BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection | ⚔ Attack | ICDM | Code | 2022 |
| 19 | Sparse Vicious Attacks on Graph Neural Networks | ⚔ Attack | arXiv | Code | 2022 |
| 20 | Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks | ⚔ Attack | ICDM | Code | 2022 |
| 21 | Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection | ⚔ Attack | arXiv | Code | 2022 |
| 22 | GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections | ⚔ Attack | arXiv | Code | 2022 |
| 23 | Are Defenses for Graph Neural Networks Robust? | ⚔ Attack | NeurIPS | Code | 2022 |
| 24 | Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias | ⚔ Attack | NeurIPS | Code | 2022 |
| 25 | Structack: Structure-based Adversarial Attacks on Graph Neural Networks | ⚔ Attack | ACM Hypertext | Code | 2021 |
| 26 | Graph Adversarial Attack via Rewiring | ⚔ Attack | KDD | Code | 2021 |
| 27 | TDGIA: Effective Injection Attacks on Graph Neural Networks | ⚔ Attack | KDD | Code | 2021 |
| 28 | Adversarial Attack on Large Scale Graph | ⚔ Attack | TKDE | Code | 2021 |
| 29 | SAGE: Intrusion Alert-driven Attack Graph Extractor | ⚔ Attack | KDD Workshop | Code | 2021 |
| 30 | Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models | ⚔ Attack | arXiv | Code | 2021 |
| 31 | VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning | ⚔ Attack | PAKDD | Code | 2021 |
| 32 | GraphAttacker: A General Multi-Task Graph Attack Framework | ⚔ Attack | arXiv | Code | 2021 |
| 33 | Graph Stochastic Neural Networks for Semi-supervised Learning | ⚔ Attack | arXiv | Code | 2021 |
| 34 | Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings | ⚔ Attack | arXiv | Code | 2021 |
| 35 | Single-Node Attack for Fooling Graph Neural Networks | ⚔ Attack | KDD Workshop | Code | 2021 |
| 36 | Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | ⚔ Attack | ACL | Code | 2021 |
| 37 | Single Node Injection Attack against Graph Neural Networks | ⚔ Attack | CIKM | Code | 2021 |
| 38 | Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications | ⚔ Attack | ICDM | Code | 2021 |
| 39 | Robustness of Graph Neural Networks at Scale | ⚔ Attack | NeurIPS | Code | 2021 |
| 40 | Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models | ⚔ Attack | IJCAI | Code | 2021 |
| 41 | Adversarial Attacks on Graph Classification via Bayesian Optimisation | ⚔ Attack | NeurIPS | Code | 2021 |
| 42 | Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | ⚔ Attack | EMNLP | Code | 2021 |
| 43 | UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction | ⚔ Attack | ICCAD | Code | 2021 |
| 44 | GraphMI: Extracting Private Graph Data from Graph Neural Networks | ⚔ Attack | IJCAI | Code | 2021 |
| 45 | Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation | ⚔ Attack | ICLR | Code | 2020 |
| 46 | Towards More Practical Adversarial Attacks on Graph Neural Networks | ⚔ Attack | NeurIPS | Code | 2020 |
| 47 | Adversarial Label-Flipping Attack and Defense for Graph Neural Networks | ⚔ Attack | ICDM | Code | 2020 |
| 48 | Exploratory Adversarial Attacks on Graph Neural Networks | ⚔ Attack | ICDM | Code | 2020 |
| 49 | A Targeted Universal Attack on Graph Convolutional Network | ⚔ Attack | arXiv | Code | 2020 |
| 50 | Backdoor Attacks to Graph Neural Networks | ⚔ Attack | SACMAT | Code | 2020 |
| 51 | Adversarial Attack on Community Detection by Hiding Individuals | ⚔ Attack | WWW | Code | 2020 |
| 52 | A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models | ⚔ Attack | AAAI | Code | 2020 |
| 53 | Scalable Attack on Graph Data by Injecting Vicious Nodes | ⚔ Attack | ECML-PKDD | Code | 2020 |
| 54 | Network disruption: maximizing disagreement and polarization in social networks | ⚔ Attack | arXiv | Code | 2020 |
| 55 | Structured Adversarial Attack Towards General Implementation and Better Interpretability | ⚔ Attack | ICLR | Code | 2019 |
| 56 | PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks | ⚔ Attack | ICLR | Code | 2019 |
| 57 | Adversarial Attacks on Node Embeddings via Graph Poisoning | ⚔ Attack | ICML | Code | 2019 |
| 58 | Adversarial Attacks on Graph Neural Networks via Meta Learning | ⚔ Attack | ICLR | Code | 2019 |
| 59 | Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective | ⚔ Attack | IJCAI | Code | 2019 |
| 60 | Adversarial Examples on Graph Data: Deep Insights into Attack and Defense | ⚔ Attack | IJCAI | Code | 2019 |
| 61 | A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | ⚔ Attack | NeurIPS | Code | 2019 |
| 62 | Adversarial Attacks on Neural Networks for Graph Data | ⚔ Attack | KDD | Code | 2018 |
| 63 | Adversarial Attack on Graph Structured Data | ⚔ Attack | ICML | Code | 2018 |
| 64 | Adversarial Sets for Regularising Neural Link Predictors | ⚔ Attack | UAI | Code | 2017 |
| 65 | Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions | 🛡 Defense | NeurIPS | Code | 2023 |
| 66 | Empowering Graph Representation Learning with Test-Time Graph Transformation | 🛡 Defense | ICLR | Code | 2023 |
| 67 | Robust Training of Graph Neural Networks via Noise Governance | 🛡 Defense | WSDM | Code | 2023 |
| 68 | Self-Supervised Graph Structure Refinement for Graph Neural Networks | 🛡 Defense | WSDM | Code | 2023 |
| 69 | Revisiting Robustness in Graph Machine Learning | 🛡 Defense | ICLR | Code | 2023 |
| 70 | Unsupervised Adversarially-Robust Representation Learning on Graphs | 🛡 Defense | AAAI | Code | 2022 |
| 71 | Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels | 🛡 Defense | WSDM | Code | 2022 |
| 72 | Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization | 🛡 Defense | arXiv | Code | 2022 |
| 73 | Graph Neural Network for Local Corruption Recovery | 🛡 Defense | arXiv | Code | 2022 |
| 74 | Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision | 🛡 Defense | AAAI | Code | 2022 |
| 75 | SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation | 🛡 Defense | WWW | Code | 2022 |
| 76 | GUARD: Graph Universal Adversarial Defense | 🛡 Defense | arXiv | Code | 2022 |
| 77 | Bayesian Robust Graph Contrastive Learning | 🛡 Defense | arXiv | Code | 2022 |
| 78 | Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN | 🛡 Defense | KDD | Code | 2022 |
| 79 | Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond | 🛡 Defense | CVPR | Code | 2022 |
| 80 | How does Heterophily Impact Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications | 🛡 Defense | KDD | Code | 2022 |
| 81 | Robust Graph Neural Networks using Weighted Graph Laplacian | 🛡 Defense | SPCOM | Code | 2022 |
| 82 | Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation | 🛡 Defense | KDD | Code | 2022 |
| 83 | Robust Node Classification on Graphs: Jointly from Bayesian Label Transition and Topology-based Label Propagation | 🛡 Defense | CIKM | Code | 2022 |
| 84 | On the Robustness of Graph Neural Diffusion to Topology Perturbations | 🛡 Defense | NeurIPS | Code | 2022 |
| 85 | Spectral Adversarial Training for Robust Graph Neural Network | 🛡 Defense | TKDE | Code | 2022 |
| 86 | You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets | 🛡 Defense | LoG | Code | 2022 |
| 87 | Learning to Drop: Robust Graph Neural Network via Topological Denoising | 🛡 Defense | WSDM | Code | 2021 |
| 88 | Understanding Structural Vulnerability in Graph Convolutional Networks | 🛡 Defense | IJCAI | Code | 2021 |
| 89 | A Robust and Generalized Framework for Adversarial Graph Embedding | 🛡 Defense | arXiv | Code | 2021 |
| 90 | Information Obfuscation of Graph Neural Network | 🛡 Defense | ICML | Code | 2021 |
| 91 | Elastic Graph Neural Networks | 🛡 Defense | ICML | Code | 2021 |
| 92 | Node Similarity Preserving Graph Convolutional Networks | 🛡 Defense | WSDM | Code | 2021 |
| 93 | NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | 🛡 Defense | TKDE | Code | 2021 |
| 94 | Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering | 🛡 Defense | AAAI | Code | 2021 |
| 95 | Unveiling the potential of Graph Neural Networks for robust Intrusion Detection | 🛡 Defense | arXiv | Code | 2021 |
| 96 | A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks | 🛡 Defense | ICICS | Code | 2021 |
| 97 | Node Feature Kernels Increase Graph Convolutional Network Robustness | 🛡 Defense | arXiv | Code | 2021 |
| 98 | Not All Low-Pass Filters are Robust in Graph Convolutional Networks | 🛡 Defense | NeurIPS | Code | 2021 |
| 99 | Graph Neural Networks with Adaptive Residual | 🛡 Defense | NeurIPS | Code | 2021 |
| 100 | Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification | 🛡 Defense | NeurIPS | Code | 2021 |
| 101 | Topological Relational Learning on Graphs | 🛡 Defense | NeurIPS | Code | 2021 |
| 102 | Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings | 🛡 Defense | NeurIPS | Code | 2020 |
| 103 | Graph Random Neural Networks for Semi-Supervised Learning on Graphs | 🛡 Defense | NeurIPS | Code | 2020 |
| 104 | Reliable Graph Neural Networks via Robust Aggregation | 🛡 Defense | NeurIPS | Code | 2020 |
| 105 | Graph Adversarial Networks: Protecting Information against Adversarial Attacks | 🛡 Defense | ICLR OpenReview | Code | 2020 |
| 106 | A Feature-Importance-Aware and Robust Aggregator for GCN | 🛡 Defense | CIKM | Code | 2020 |
| 107 | Graph Information Bottleneck | 🛡 Defense | NeurIPS | Code | 2020 |
| 108 | Graph Contrastive Learning with Augmentations | 🛡 Defense | NeurIPS | Code | 2020 |
| 109 | Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks | 🛡 Defense | N/A | Code | 2020 |
| 110 | Adversarial Privacy Preserving Graph Embedding against Inference Attack | 🛡 Defense | arXiv | Code | 2020 |
| 111 | GNNGuard: Defending Graph Neural Networks against Adversarial Attacks | 🛡 Defense | NeurIPS | Code | 2020 |
| 112 | Transferring Robustness for Graph Neural Network Against Poisoning Attacks | 🛡 Defense | WSDM | Code | 2020 |
| 113 | All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs | 🛡 Defense | WSDM | Code | 2020 |
| 114 | Robust Detection of Adaptive Spammers by Nash Reinforcement Learning | 🛡 Defense | KDD | Code | 2020 |
| 115 | Graph Structure Learning for Robust Graph Neural Networks | 🛡 Defense | KDD | Code | 2020 |
| 116 | On The Stability of Polynomial Spectral Graph Filters | 🛡 Defense | ICASSP | Code | 2020 |
| 117 | On the Robustness of Cascade Diffusion under Node Attacks | 🛡 Defense | WWW | Code | 2020 |
| 118 | Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters | 🛡 Defense | CIKM | Code | 2020 |
| 119 | DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder | 🛡 Defense | arXiv | Code | 2020 |
| 120 | Graph-Revised Convolutional Network | 🛡 Defense | ECML-PKDD | Code | 2020 |
| 121 | Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure | 🛡 Defense | TKDE | Code | 2019 |
| 122 | Bayesian graph convolutional neural networks for semi-supervised classification | 🛡 Defense | AAAI | Code | 2019 |
| 123 | Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning | 🛡 Defense | arXiv | Code | 2019 |
| 124 | Adversarial Training Methods for Network Embedding | 🛡 Defense | WWW | Code | 2019 |
| 125 | Batch Virtual Adversarial Training for Graph Convolutional Networks | 🛡 Defense | ICML | Code | 2019 |
| 126 | Latent Adversarial Training of Graph Convolution Networks | 🛡 Defense | LRGSD@ICML | Code | 2019 |
| 127 | Characterizing Malicious Edges targeting on Graph Neural Networks | 🛡 Defense | ICLR OpenReview | Code | 2019 |
| 128 | Robust Graph Convolutional Networks Against Adversarial Attacks | 🛡 Defense | KDD | Code | 2019 |
| 129 | Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications | 🛡 Defense | NAACL | Code | 2019 |
| 130 | Adversarial Personalized Ranking for Recommendation | 🛡 Defense | SIGIR | Code | 2018 |
| 131 | Hierarchical Randomized Smoothing | 🔐 Certification | NeurIPS | Code | 2023 |
| 132 | (Provable) Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More | 🔐 Certification | NeurIPS | Code | 2023 |
| 133 | Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks | 🔐 Certification | NeurIPS | Code | 2022 |
| 134 | Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation | 🔐 Certification | KDD | Code | 2021 |
| 135 | Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks | 🔐 Certification | ICLR | Code | 2021 |
| 136 | Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks | 🔐 Certification | NeurIPS | Code | 2020 |
| 137 | Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More | 🔐 Certification | ICML | Code | 2020 |
| 138 | Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation | 🔐 Certification | KDD | Code | 2020 |
| 139 | Certifiable Robustness and Robust Training for Graph Convolutional Networks | 🔐 Certification | KDD | Code | 2019 |
| 140 | Certifiable Robustness to Graph Perturbations | 🔐 Certification | NeurIPS | Code | 2019 |
| 141 | Towards a Unified Framework for Fair and Stable Graph Representation Learning | ⚖ Stability | UAI | Code | 2021 |
| 142 | Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data | ⚖ Stability | NeurIPS | Code | 2021 |
| 143 | When Do GNNs Work: Understanding and Improving Neighborhood Aggregation | ⚖ Stability | IJCAI Workshop | Code | 2019 |
| 144 | Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts | 🚀 Others | arXiv | Code | 2023 |
| 145 | A Systematic Evaluation of Node Embedding Robustness | 🚀 Others | LoG | Code | 2022 |
| 146 | FLAG: Adversarial Data Augmentation for Graph Neural Networks | 🚀 Others | arXiv | Code | 2020 |
| 147 | Training Robust Graph Neural Network by Applying Lipschitz Constant Constraint | 🚀 Others | CentraleSupélec | Code | 2020 |
| 148 | DeepRobust: a Platform for Adversarial Attacks and Defenses | ⚙ Toolbox | AAAI | DeepRobust | 2021 |
| 149 | GreatX: A graph reliability toolbox based on PyTorch and PyTorch Geometric | ⚙ Toolbox | arXiv | GreatX | 2022 |
| 150 | Evaluating Graph Vulnerability and Robustness using TIGER | ⚙ Toolbox | arXiv | TIGER | 2021 |
| 151 | Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks | ⚙ Toolbox | NeurIPS | Graph Robustness Benchmark (GRB) | 2021 |