README.md

January 26, 2026 · View on GitHub

Superagent

Superagent SDK

Make your AI apps safe.

Website · Docs · Discord · HuggingFace

Y Combinator GitHub stars MIT License


An open-source SDK for AI agent safety. Block prompt injections, redact PII and secrets, scan repositories for threats, and run red team scenarios against your agent.

Features

Guard

Detect and block prompt injections, malicious instructions, and unsafe tool calls at runtime.

TypeScript:

import { createClient } from "safety-agent";

const client = createClient();

const result = await client.guard({
  input: userMessage
});

if (result.classification === "block") {
  console.log("Blocked:", result.violation_types);
}

Python:

from safety_agent import create_client

client = create_client()

result = await client.guard(input=user_message)

if result.classification == "block":
    print("Blocked:", result.violation_types)

Redact

Remove PII, PHI, and secrets from text automatically.

TypeScript:

const result = await client.redact({
  input: "My email is john@example.com and SSN is 123-45-6789",
  model: "openai/gpt-4o-mini"
});

console.log(result.redacted);
// "My email is <EMAIL_REDACTED> and SSN is <SSN_REDACTED>"

Python:

result = await client.redact(
    input="My email is john@example.com and SSN is 123-45-6789",
    model="openai/gpt-4o-mini"
)

print(result.redacted)
# "My email is <EMAIL_REDACTED> and SSN is <SSN_REDACTED>"

Scan

Analyze repositories for AI agent-targeted attacks such as repo poisoning and malicious instructions.

TypeScript:

const result = await client.scan({
  repo: "https://github.com/user/repo"
});

console.log(result.result);  // Security report
console.log(`Cost: $${result.usage.cost.toFixed(4)}`);

Python:

result = await client.scan(repo="https://github.com/user/repo")

print(result.result)  # Security report
print(f"Cost: ${result.usage.cost:.4f}")

Test

Run red team scenarios against your production agent. (Coming soon)

const result = await client.test({
  endpoint: "https://your-agent.com/chat",
  scenarios: ["prompt_injection", "data_exfiltration"]
});

console.log(result.findings);  // Vulnerabilities discovered

Get Started

Sign up at superagent.sh to get your API key.

TypeScript:

npm install safety-agent

Python:

uv add safety-agent

Set your API key:

export SUPERAGENT_API_KEY=your-key

Integration Options

OptionDescriptionLink
TypeScript SDKEmbed guard, redact, and scan directly in your appsdk/typescript
Python SDKEmbed guard, redact, and scan directly in Python appssdk/python
CLICommand-line tool for testing and automationcli
MCP ServerUse with Claude Code and Claude Desktopmcp

Why Superagent SDK?

  • Works with any model — OpenAI, Anthropic, Google, Groq, Bedrock, and more
  • Open-weight models — Run Guard on your infrastructure with 50-100ms latency
  • Low latency — Optimized for runtime use
  • Open source — MIT license with full transparency

Open-Weight Models

Run Guard on your own infrastructure. No API calls, no data leaving your environment.

ModelParametersUse Case
superagent-guard-0.6b0.6BFast inference, edge deployment
superagent-guard-1.7b1.7BBalanced speed and accuracy
superagent-guard-4b4BMaximum accuracy

GGUF versions for CPU: 0.6b-gguf · 1.7b-gguf · 4b-gguf

Resources

License

MIT