Awesome AI Security
March 8, 2026
A curated list of AI security resources, inspired by awesome-adversarial-machine-learning and awesome-ml-for-cybersecurity.
Legend:
| Type | Icon |
|---|---|
| Research | |
| Slides | |
| Video | |
| Website / Blog post | |
| Code | |
| Other | |
Keywords:
▲ Adversarial examples
▲ Evasion
▲ Poisoning
| Type | Title |
|---|---|
| Research | Poisoning Behavioral Malware Clustering |
| Research | Efficient Label Contamination Attacks Against Black-Box Learning Models |
| Research | Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization |
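The label contamination attacks listed above share one core idea: flipping a few training labels shifts the learned decision boundary enough to misclassify clean test points. A minimal, self-contained sketch with a 1-D nearest-centroid classifier (all data and names here are illustrative, not taken from any of the papers):

```python
# Minimal label-flipping (label contamination) poisoning sketch against a
# 1-D nearest-centroid classifier. Data and names are purely illustrative.

def train(data):
    """data: list of (feature, label) -> per-class mean (centroid)."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda y: abs(model[y] - x))

def accuracy(model, samples):
    return sum(predict(model, x) == y for x, y in samples) / len(samples)

clean = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]
test = [(0.5, 0), (2.1, 1)]

# Attacker flips a single training label: (4.0, 1) becomes (4.0, 0),
# dragging the class-0 centroid toward class 1's region.
poisoned = [(x, 0) if x == 4.0 else (x, y) for x, y in clean]

print(accuracy(train(clean), test))     # 1.0
print(accuracy(train(poisoned), test))  # 0.5 -- the boundary shifted
```

One flipped label out of four moves the class-0 centroid from 0.5 to about 1.67, so the test point at 2.1 now falls on the wrong side of the boundary.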
▲ Feature selection
| Type | Title |
|---|---|
| Research | Is Feature Selection Secure against Training Data Poisoning? |
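The paper above asks whether an attacker can steer which features a selection procedure keeps. A hedged toy sketch of that threat (not the paper's actual attack): a selector that scores each feature by its unnormalized correlation with the label can be redirected from the informative feature to a noise feature by injecting a few extreme poisoning points.

```python
# Toy poisoning of feature selection. The selector scores feature f by
# |sum_i x_if * s_i|, where s_i = +1/-1 is the signed label, and keeps
# the top-scoring feature. The scoring rule and data are illustrative.

def select_feature(data):
    """data: list of ((x0, x1), label) -> index of highest-scoring feature."""
    scores = [0.0, 0.0]
    for (x0, x1), y in data:
        s = 1 if y == 1 else -1
        scores[0] += x0 * s
        scores[1] += x1 * s
    return max(range(2), key=lambda f: abs(scores[f]))

# Feature 0 is informative; feature 1 is pure noise on clean data.
clean = [((1.0, 0.0), 1), ((1.0, 0.0), 1), ((-1.0, 0.0), 0), ((-1.0, 0.0), 0)]

# Two poisoning points with extreme values on the noise feature.
poison = [((0.0, 5.0), 1), ((0.0, -5.0), 0)]

print(select_feature(clean))           # 0 -- the informative feature
print(select_feature(clean + poison))  # 1 -- the attacker's target
```

Two crafted points suffice because the noise feature's score jumps from 0 to 10 while the informative feature's score stays at 4.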
▲ Misc
▲ Code
▲ Links
| Type | Title |
|---|---|
| Website / Blog post | EvadeML - Machine Learning in the Presence of Adversaries |
| Website / Blog post | Adversarial Machine Learning - PRA Lab |
| Website / Blog post | Adversarial Examples and their implications |
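For the adversarial-example and evasion resources above, the core mechanism can be sketched in a few lines. For a linear classifier, stepping each input coordinate against the sign of the corresponding weight (the linear-model special case of the fast gradient sign method) flips the prediction while keeping the perturbation small and bounded. The model and numbers below are hypothetical:

```python
# FGSM-style evasion on a linear classifier: perturbing x by -eps * sign(w)
# per coordinate maximally decreases the score w . x under an L-infinity
# budget of eps. Weights and inputs are illustrative.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_linear(w, x, eps):
    """Adversarial copy of x inside an L-infinity ball of radius eps."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [1.0, 2.0]   # linear model: predict class 1 if w . x > 0, else class 0
x = [0.5, 0.5]   # clean input, score = 1.5 -> class 1
x_adv = fgsm_linear(w, x, eps=0.6)

print(score(w, x) > 0)      # True  (class 1)
print(score(w, x_adv) > 0)  # False (evaded: class 0)
```

No coordinate moves by more than 0.6, yet the score drops from 1.5 to about -1.5, which is why small worst-case perturbations are so effective against high-dimensional linear score functions.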