โ˜ ๏ธ NullSec DataPoisoning

March 7, 2026

โ˜ ๏ธ NullSec DataPoisoning

Training Data Poisoning Detection & Simulation


*Detect, simulate, and defend against training data poisoning attacks*


## 🎯 Overview

NullSec DataPoisoning provides tools for detecting and simulating data poisoning attacks against machine learning pipelines. It implements backdoor injection (BadNets, Trojaning), clean-label attacks, and gradient-based poisoning, alongside detection methods like spectral signatures, activation clustering, and STRIP.
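To make the simplest of these attacks concrete, a BadNets-style patch trigger can be sketched in a few lines of NumPy. This is an illustrative sketch, not the tool's internal implementation; every name in it is hypothetical:

```python
import numpy as np

def inject_patch_trigger(images, labels, target_label, poison_rate=0.01,
                         patch_size=3, patch_value=1.0, seed=0):
    """BadNets-style poisoning: stamp a small patch in the corner of a
    random subset of images and flip their labels to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(len(images) * poison_rate))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_label
    return images, labels, idx

# Tiny demo on blank 28x28 "images".
imgs = np.zeros((100, 28, 28))
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = inject_patch_trigger(imgs, lbls, target_label=0,
                                           poison_rate=0.05)
```

A model trained on such a set behaves normally on clean inputs but predicts the target class whenever the patch appears, which is exactly what the detection methods below try to surface.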

## ⚡ Features

| Feature | Description |
|---------|-------------|
| Backdoor Injection | BadNets, Trojan, blend, and warp triggers |
| Clean-Label Attacks | Feature collision, convex polytope, Witches' Brew |
| Detection Engine | Spectral signatures, activation clustering, STRIP |
| Neural Cleanse | Reverse-engineer trigger patterns from poisoned models |
| Dataset Audit | Scan datasets for anomalous samples and label flips |
| Pipeline Scanner | Audit ML pipelines for poisoning entry points |
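Of the detection methods above, spectral signatures are the easiest to sketch: within one class, center the learned representations, project them onto the top singular direction, and treat the samples with the largest squared projections as suspect. A minimal NumPy sketch (not the tool's API) with synthetic data:

```python
import numpy as np

def spectral_scores(reps):
    """Spectral-signature outlier scores for one class's representations.
    Backdoored samples tend to concentrate along the top principal
    direction of the centered representation matrix."""
    centered = reps - reps.mean(axis=0)
    # Top right-singular vector of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, (95, 32))
poisoned = rng.normal(0, 1, (5, 32)) + 6.0     # shifted sub-population
scores = spectral_scores(np.vstack([clean, poisoned]))
suspects = np.argsort(scores)[-5:]             # highest-scoring rows
```

On this toy data the five shifted rows dominate the scores; in practice the representations would come from a penultimate layer of the trained model, and the top-scoring fraction per class is removed before retraining.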

## 📋 Attack & Defence Matrix

| Technique | Category | Type |
|-----------|----------|------|
| BadNets | Backdoor | Attack |
| Trojan Attack | Backdoor | Attack |
| Clean-Label FC | Poisoning | Attack |
| Witches' Brew | Poisoning | Attack |
| Spectral Signatures | Statistical | Defence |
| Activation Clustering | Neural | Defence |
| STRIP | Runtime | Defence |
| Neural Cleanse | Reverse Engineering | Defence |
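The one runtime defence in the matrix, STRIP, superimposes clean samples onto a suspect input and measures the entropy of the resulting predictions: a backdoor trigger survives the blending, so poisoned inputs keep producing confident (low-entropy) target-class predictions. A hedged sketch, with a stub classifier standing in for a real model (all names hypothetical):

```python
import numpy as np

def strip_entropy(x, overlay_pool, predict, n=8, alpha=0.5, seed=0):
    """STRIP score: mean prediction entropy over copies of `x`
    superimposed with clean overlay samples."""
    rng = np.random.default_rng(seed)
    entropies = []
    for i in rng.choice(len(overlay_pool), size=n, replace=False):
        blended = alpha * x + (1 - alpha) * overlay_pool[i]
        p = predict(blended)
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return float(np.mean(entropies))

# Stub classifier standing in for a suspect model: a lit corner pixel
# (the "trigger") forces a confident class-0 prediction.
def stub_predict(img):
    if img[-1, -1] > 0.4:
        return np.array([0.97, 0.01, 0.01, 0.01])
    return np.array([0.25, 0.25, 0.25, 0.25])

pool = [np.random.default_rng(i).random((8, 8)) * 0.3 for i in range(20)]
clean_x = np.zeros((8, 8))
triggered_x = np.zeros((8, 8))
triggered_x[-1, -1] = 1.0
```

Here `strip_entropy(triggered_x, ...)` comes out well below `strip_entropy(clean_x, ...)`, so thresholding the score separates trojaned inputs from clean ones at inference time.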

## 🚀 Quick Start

```bash
# Scan a dataset for poisoning indicators
nullsec-datapoisoning scan --dataset training_data/ --model model.pt

# Simulate a backdoor attack
nullsec-datapoisoning inject --dataset clean.csv --trigger patch --target-label 0 --poison-rate 0.01

# Run Neural Cleanse detection
nullsec-datapoisoning cleanse --model suspect_model.pt --num-classes 10

# Audit an ML pipeline config
nullsec-datapoisoning audit --pipeline pipeline.yaml
```
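One cheap heuristic behind the kind of dataset audit that `scan` performs (a sketch under assumed data shapes, not the shipped algorithm) is to flag samples whose label disagrees with most of their nearest neighbours, which catches crude label-flip poisoning:

```python
import numpy as np

def flag_label_flips(X, y, k=5, threshold=0.8):
    """Flag samples whose label disagrees with at least `threshold` of
    their k nearest neighbours -- a cheap label-flip heuristic."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbour
    neighbours = np.argsort(dists, axis=1)[:, :k]
    disagreement = (y[neighbours] != y[:, None]).mean(axis=1)
    return np.where(disagreement >= threshold)[0]

# Two well-separated clusters; flip one label in the first cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(10, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
y[0] = 1                            # simulated label-flip poison
flagged = flag_label_flips(X, y)
```

The pairwise-distance matrix makes this O(n²), so for real datasets a k-d tree or approximate nearest-neighbour index would replace the brute-force step.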
## 🔗 Related Projects

| Project | Description |
|---------|-------------|
| nullsec-adversarial | Adversarial ML attack toolkit |
| nullsec-modelaudit | ML model security auditing |
| nullsec-llmred | LLM red-teaming framework |
| nullsec-promptinject | Prompt injection payloads |
| nullsec-linux | Security Linux distro (140+ tools) |

**For authorized ML security research only.** Poisoning production training data without authorization is illegal.

## 📜 License

MIT License · @bad-antics


*Part of the NullSec AI/ML Security Suite*