🧠 NullSec Adversarial

March 7, 2026


Adversarial Machine Learning Attack Toolkit


Evasion, poisoning, and extraction attacks against ML models


🎯 Overview

NullSec Adversarial is a comprehensive toolkit for testing machine learning model robustness. It implements state-of-the-art adversarial attacks โ€” evasion (FGSM, PGD, C&W, AutoAttack), model extraction, membership inference, and model inversion โ€” across image classifiers, NLP models, and tabular ML pipelines.
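The evasion attacks listed above all share one core idea, which FGSM states most directly: perturb the input a small step in the direction that increases the model's loss, x_adv = x + ε · sign(∇ₓL). A minimal NumPy sketch against a toy logistic-regression classifier illustrates the mechanics; every name here is illustrative and not part of this toolkit's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx).

    For logistic loss with p = sigmoid(w.x + b), the input
    gradient works out to dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    # Clip back into the valid input range [0, 1]
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

w = np.array([2.0, -1.5])
b = 0.1
x = np.array([0.6, 0.4])   # clean sample, classified positive
y = 1.0                    # true label
x_adv = fgsm(x, w, b, y, eps=0.3)

print(sigmoid(w @ x + b) > 0.5)      # True: clean sample is positive
print(sigmoid(w @ x_adv + b) > 0.5)  # False: perturbation flips the prediction
```

PGD, also in the matrix below, is simply this step applied iteratively with a projection back into the ε-ball after each iteration.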

⚡ Features

| Feature | Description |
|---|---|
| Evasion Attacks | FGSM, PGD, C&W, DeepFool, AutoAttack |
| Model Extraction | Query-based model stealing with knockoff networks |
| Membership Inference | Determine whether a sample was in the training data |
| Model Inversion | Reconstruct training data from model outputs |
| Transferability | Generate transferable adversarial examples |
| Defence Evaluation | Test adversarial training and certified defences |
| Framework Support | PyTorch, TensorFlow, scikit-learn, ONNX |

Attack Matrix

| Attack | Type | Domain | Threat Model |
|---|---|---|---|
| FGSM | Evasion | Image/NLP | White-box |
| PGD | Evasion | Image/NLP | White-box |
| C&W | Evasion | Image | White-box |
| AutoAttack | Evasion | Image | White-box |
| HopSkipJump | Evasion | Image | Black-box |
| Knockoff Nets | Extraction | Any | Black-box |
| Shadow Models | Membership | Any | Black-box |
| MI-FACE | Inversion | Image | White-box |
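The shadow-model membership attack in the matrix ultimately exploits one signal: models tend to be more confident on samples they were trained on than on held-out data. The simplest baseline that exploits this is a bare confidence threshold, sketched below in NumPy; the function name, threshold value, and confidence arrays are all hypothetical illustrations, not this toolkit's API:

```python
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Guess 'member' whenever the model's max softmax confidence
    exceeds the threshold; overconfidence suggests memorization."""
    return confidences > threshold

# Hypothetical max-confidence scores observed from a target model
member_conf = np.array([0.99, 0.97, 0.95])     # training samples (memorized)
nonmember_conf = np.array([0.70, 0.85, 0.60])  # held-out samples

print(membership_guess(member_conf))     # all guessed as members
print(membership_guess(nonmember_conf))  # all guessed as non-members
```

Shadow-model attacks refine this by training attacker-controlled replicas of the target to learn a per-class decision rule instead of one fixed threshold.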

🚀 Quick Start

```bash
# Run a PGD evasion attack on an image classifier
nullsec-adversarial evasion pgd --model resnet50.onnx --input samples/ --eps 0.03

# Black-box model extraction
nullsec-adversarial extract --target-url http://api.example.com/predict --queries 10000

# Membership inference attack
nullsec-adversarial membership --model target.pt --members train.csv --non-members test.csv

# Evaluate adversarial robustness
nullsec-adversarial benchmark --model model.pt --dataset cifar10 --attacks all
```
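The extract command performs query-based model stealing: choose inputs, record the black-box responses, and fit a surrogate that mimics them. Stripped to a linear toy where least squares recovers the target exactly, the loop looks like this; the target, query budget, and function names are illustrative assumptions, not the toolkit's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(3, 5))   # hidden weights the attacker never sees

def query_target(X):
    """Stand-in for the remote prediction API: returns logits for a batch."""
    return X @ W_target.T

# Attacker-chosen queries and the observed responses
X_queries = rng.normal(size=(200, 5))
Y = query_target(X_queries)

# Fit surrogate weights by least squares: min ||X W - Y||
W_surrogate, *_ = np.linalg.lstsq(X_queries, Y, rcond=None)

print(np.allclose(W_surrogate.T, W_target, atol=1e-6))  # surrogate matches target
```

Real extraction targets are nonlinear, so knockoff-style attacks replace the least-squares fit with training a student network on the (query, response) pairs; the query-record-fit structure is unchanged.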
Related Projects

| Project | Description |
|---|---|
| nullsec-llmred | LLM red-teaming framework |
| nullsec-datapoisoning | Training data poisoning detection |
| nullsec-modelaudit | ML model security auditing |
| nullsec-promptinject | Prompt injection payloads |
| nullsec-linux | Security Linux distro (140+ tools) |

For authorized ML security testing only. Do not use against models or systems without explicit permission.

📜 License

MIT License โ€” @bad-antics


Part of the NullSec AI/ML Security Suite