PraisonAI 🦞

May 9, 2026 · View on GitHub

PraisonAI Logo

Total Downloads Latest Stable Version License MCP Registry

PraisonAI 🦞 — Hire a 24/7 AI Workforce. Stop writing boilerplate and start shipping autonomous, self-improving agents that research, plan, and execute tasks across your apps. From one agent to an entire organization, deployed in 5 lines of code.

curl -fsSL https://praison.ai/install.sh | bash

Highlighted by Elon Musk

PraisonAI Dashboard

PraisonAI AgentFlow

 ██████╗ ██████╗  █████╗ ██╗███████╗ ██████╗ ███╗   ██╗     █████╗ ██╗
 ██╔══██╗██╔══██╗██╔══██╗██║██╔════╝██╔═══██╗████╗  ██║    ██╔══██╗██║
 ██████╔╝██████╔╝███████║██║███████╗██║   ██║██╔██╗ ██║    ███████║██║
 ██╔═══╝ ██╔══██╗██╔══██║██║╚════██║██║   ██║██║╚██╗██║    ██╔══██║██║
 ██║     ██║  ██║██║  ██║██║███████║╚██████╔╝██║ ╚████║    ██║  ██║██║
 ╚═╝     ╚═╝  ╚═╝╚═╝  ╚═╝╚═╝╚══════╝ ╚═════╝ ╚═╝  ╚═══╝    ╚═╝  ╚═╝╚═╝

 pip install praisonai

PraisonAI command execution

export TAVILY_API_KEY=xxxxx

Documentation


🎯 Use Cases

AI agents solving real-world problems across industries:

Use Case | Description
🔍 Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically
💻 Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements
✍️ Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams
📊 Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically
🤖 Customer Support | Deploy 24/7 support bots on Telegram, Discord, Slack with memory and knowledge-backed responses
⚙️ Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct

🚀 Meet your first Agent (Under 1 Minute)

  1. Install the lightweight core SDK:
pip install praisonaiagents
export OPENAI_API_KEY="your-api-key"
  2. Run your first autonomous agent:
from praisonaiagents import Agent

# Give your agent a goal, and watch it work.
agent = Agent(instructions="You are a senior data analyst.")
agent.start("Analyze the top 3 tech trends of 2026 and format as a markdown table.")

🌌 The PraisonAI Ecosystem

Start simple with the core SDK, or expand to full visual builders and dashboards when you're ready.

  • Core SDK (praisonaiagents): For pure Python development. pip install praisonaiagents
  • 💻 PraisonAI CLI (praisonai): For terminal-based developers. pip install praisonai
  • 🦞 Claw Dashboard: Connect agents directly to Telegram, Slack, or Discord. pip install "praisonai[claw]"
  • 🔗 Flow Visual Builder: Drag-and-drop workflow creation. pip install "praisonai[flow]"
  • 🤖 PraisonAI UI: Clean chat interface. pip install "praisonai[ui]"

JavaScript SDK

npm install praisonai

🧠 Supported Providers & Features

Powered by 100+ LLMs (OpenAI, Anthropic, Gemini & local models).

OpenAI Anthropic Google Gemini DeepSeek Azure Ollama Groq Mistral Cerebras Cohere OpenRouter Perplexity Fireworks AWS Bedrock xAI Grok Vertex AI HuggingFace Together AI Databricks Replicate Cloudflare

View all 24 providers with examples
Provider | Example
OpenAI | Example
Anthropic | Example
Google Gemini | Example
Ollama | Example
Groq | Example
DeepSeek | Example
xAI Grok | Example
Mistral | Example
Cohere | Example
Perplexity | Example
Fireworks | Example
Together AI | Example
OpenRouter | Example
HuggingFace | Example
Azure OpenAI | Example
AWS Bedrock | Example
Google Vertex | Example
Databricks | Example
Cloudflare | Example
AI21 | Example
Replicate | Example
SageMaker | Example
Moonshot | Example
vLLM | Example
Highlighted by Elon Musk

"Grok 3 customer support" — Elon Musk quoting PraisonAI's tutorial



🌟 Why PraisonAI?

Feature | How
🔌 MCP Protocol — stdio, HTTP, WebSocket, SSE | tools=MCP("npx ...")
🧠 Planning Mode — plan → execute → reason | planning=True
🔍 Deep Research — multi-step autonomous research | Docs
🤖 External Agents — orchestrate Claude Code, Gemini CLI, Codex | Docs
🔄 Agent Handoffs — seamless conversation passing | handoff=True
🛡️ Guardrails — input/output validation | Docs
Web Search + Fetch — native browsing | web_search=True
🪞 Self Reflection — agent reviews its own output | Docs
🔀 Workflow Patterns — route, parallel, loop, repeat | Docs
🧠 Memory (zero deps) — works out of the box | memory=True
View all 25 features
Feature | How
💡 Prompt Caching — reduce latency + cost | prompt_caching=True
💾 Sessions + Auto-Save — persistent state across restarts | auto_save="my-project"
💭 Thinking Budgets — control reasoning depth | thinking_budget=1024
📚 RAG + Quality-Based RAG — auto quality scoring retrieval | Docs
📊 Model Router — auto-routes to cheapest capable model | Docs
🧊 Shadow Git Checkpoints — auto-rollback on failure | Docs
📑 A2A Protocol — agent-to-agent interop | Docs
📝 Context Compaction — never hit token limits | Docs
📑 Telemetry — OpenTelemetry traces, spans, metrics | Docs
📜 Policy Engine — declarative agent behavior control | Docs
🔄 Background Tasks — fire-and-forget agents | Docs
🔁 Doom Loop Detection — auto-recovery from stuck agents | Docs
🕸️ Graph Memory — Neo4j-style relationship tracking | Docs
🏖️ Sandbox Execution — isolated code execution | Docs
🖥️ Bot Gateway — multi-agent routing across channels | Docs

📘 Using Python Code

1. Single Agent

from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")

2. Multi Agents

from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()

3. MCP (Model Context Protocol)

from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)

📖 Full MCP docs — stdio, HTTP, WebSocket, SSE transports

4. Custom Tools

from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Safely evaluate a numeric arithmetic expression."""
    import ast
    import operator
    
    # Define allowed operations
    _OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
        ast.UAdd: operator.pos,
    }
    
    def _safe_eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        elif isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
        elif isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_safe_eval(node.operand))
        else:
            raise ValueError("Unsupported expression")
    
    try:
        return _safe_eval(ast.parse(expression, mode="eval").body)
    except (ValueError, SyntaxError, TypeError, ZeroDivisionError, OverflowError):
        raise ValueError("Invalid arithmetic expression")

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")

⚠️ Security Note: Never use eval(), exec(), or subprocess in tool functions that process LLM-generated or user-supplied input. Always validate and sanitize inputs to prevent code injection attacks.

📖 Full tools docs — BaseTool, tool packages, 100+ built-in tools
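The rejection behaviour is easy to check in isolation. The sketch below (plain Python, no PraisonAI imports) uses the same AST-walking pattern as the calculate tool above: pure arithmetic evaluates, while an injection-style string is refused because function calls are not in the allow-list.

```python
import ast
import operator

# Allow-list of arithmetic operations; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a numeric expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_eval("15*4"))  # plain arithmetic works
try:
    safe_eval("__import__('os').system('ls')")  # injection attempt
except ValueError as err:
    print("rejected:", err)
```

Because the walker only descends into constants, binary, and unary arithmetic nodes, a Call node (such as `__import__`) can never execute.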

5. Persistence (Databases)

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces

📖 Full persistence docs — PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more

6. PraisonAI Claw 🦞 (Dashboard UI)

Connect your AI agents to Telegram, Discord, Slack, WhatsApp and more — all from a single command.

pip install "praisonai[claw]"
praisonai claw

Open http://localhost:8082 — the dashboard comes with 13 built-in pages: Chat, Agents, Memory, Knowledge, Channels, Guardrails, Cron, and more. Add messaging channels directly from the UI.

📖 Full Claw docs — platform tokens, CLI options, Docker, and YAML agent mode

7. Langflow Integration 🔗 (Visual Flow Builder)

Build multi-agent workflows visually with drag-and-drop components in Langflow.

pip install "praisonai[flow]"
praisonai flow

Open http://localhost:7861 — use the Agent and Agent Team components to create sequential or parallel workflows. Connect Chat Input → Agent Team → Chat Output for instant multi-agent pipelines.

📖 Full Flow docs — visual agent building, component reference, and deployment

8. PraisonAI UI 🤖 (Clean Chat)

Lightweight chat interface for your AI agents.

pip install "praisonai[ui]"
praisonai ui

📄 Using YAML (No Code)

Example 1: Two Agents Working Together

Create agents.yaml:

framework: praisonai
topic: "Write a blog post about AI"

agents:
  researcher:
    role: Research Analyst
    goal: Research AI trends and gather information
    instructions: "Find accurate information about AI trends"
    
  writer:
    role: Content Writer
    goal: Write engaging blog posts
    instructions: "Write clear, engaging content based on research"

Run with:

praisonai agents.yaml

The agents automatically work together sequentially: the researcher runs first and its output is passed to the writer.
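Conceptually, a sequential run just threads each agent's output into the next agent's input. The plain-Python sketch below illustrates that data flow with ordinary functions standing in for agents; it is an illustration of the pattern, not PraisonAI internals.

```python
# Conceptual sketch of a sequential multi-agent run: each "agent" is a
# stand-in function, and each output feeds the next agent's input.

def researcher(topic: str) -> str:
    # Stand-in for the Research Analyst agent.
    return f"Notes on {topic}: trend A, trend B"

def writer(notes: str) -> str:
    # Stand-in for the Content Writer agent.
    return f"Blog post based on: {notes}"

def run_sequential(agents, topic: str) -> str:
    """Hand the previous agent's output forward as the next input."""
    result = topic
    for agent in agents:
        result = agent(result)
    return result

print(run_sequential([researcher, writer], "AI"))
```

Swapping the order of the list changes the pipeline, which is exactly what reordering agents in agents.yaml does.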

Example 2: Agent with Custom Tool

Create two files in the same folder:

agents.yaml:

framework: praisonai
topic: "Calculate the sum of 25 and 15"

agents:
  calculator_agent:
    role: Calculator
    goal: Perform calculations
    instructions: "Use the add_numbers tool to help with calculations"
    tools:
      - add_numbers

tools.py:

def add_numbers(a: float, b: float) -> float:
    """
    Add two numbers together.
    
    Args:
        a: First number
        b: Second number
    
    Returns:
        The sum of a and b
    """
    return a + b

Run with:

praisonai agents.yaml

💡 Tips:

  • Use the function name (e.g., add_numbers) in the tools list, not the file name
  • Tools in tools.py are automatically discovered
  • The function's docstring helps the AI understand how to use it
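To see why the docstring and type hints matter, here is a sketch of how a framework might derive a tool description from the function itself with Python's inspect module. describe_tool is a hypothetical helper for illustration, not a PraisonAI API; this is roughly the metadata an agent can pass to the LLM.

```python
import inspect

def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

def describe_tool(fn):
    """Build a minimal tool description from the function's own metadata."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,                      # matches the name in the tools list
        "description": inspect.getdoc(fn),        # the docstring guides the model
        "parameters": {name: p.annotation.__name__
                       for name, p in sig.parameters.items()},
    }

print(describe_tool(add_numbers))
```

If the docstring is missing or the hints are absent, the derived description is empty or untyped, which is why well-documented tool functions work better.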

🎯 CLI Quick Reference

Category | Commands
Execution | praisonai, --auto, --interactive, --chat
Research | research, --query-rewrite, --deep-research
Planning | --planning, --planning-tools, --planning-reasoning
Workflows | workflow run, workflow list, workflow auto
Memory | memory show, memory add, memory search, memory clear
Knowledge | knowledge add, knowledge query, knowledge list
Sessions | session list, session resume, session delete
Tools | tools list, tools info, tools search
MCP | mcp list, mcp create, mcp enable
Development | commit, docs, checkpoint, hooks
Scheduling | schedule start, schedule list, schedule stop

📖 Full CLI reference


✨ Key Features

🤖 Core Agents
Feature | Code | Docs
Single Agent | Example | 📖
Multi Agents | Example | 📖
Auto Agents | Example | 📖
Self Reflection AI Agents | Example | 📖
Reasoning AI Agents | Example | 📖
Multi Modal AI Agents | Example | 📖

🔄 Workflows
Feature | Code | Docs
Simple Workflow | Example | 📖
Workflow with Agents | Example | 📖
Agentic Routing (route()) | Example | 📖
Parallel Execution (parallel()) | Example | 📖
Loop over List/CSV (loop()) | Example | 📖
Evaluator-Optimizer (repeat()) | Example | 📖
Conditional Steps | Example | 📖
Workflow Branching | Example | 📖
Workflow Early Stop | Example | 📖
Workflow Checkpoints | Example | 📖

💻 Code & Development
Feature | Code | Docs
Code Interpreter Agents | Example | 📖
AI Code Editing Tools | Example | 📖
External Agents (All) | Example | 📖
Claude Code CLI | Example | 📖
Gemini CLI | Example | 📖
Codex CLI | Example | 📖
Cursor CLI | Example | 📖

🧠 Memory & Knowledge
Feature | Code | Docs
Memory (Short & Long Term) | Example | 📖
File-Based Memory | Example | 📖
Claude Memory Tool | Example | 📖
Add Custom Knowledge | Example | 📖
RAG Agents | Example | 📖
Chat with PDF Agents | Example | 📖
Data Readers (PDF, DOCX, etc.) | CLI | 📖
Vector Store Selection | CLI | 📖
Retrieval Strategies | CLI | 📖
Rerankers | CLI | 📖
Index Types (Vector/Keyword/Hybrid) | CLI | 📖
Query Engines (Sub-Question, etc.) | CLI | 📖

🔬 Research & Intelligence
Feature | Code | Docs
Deep Research Agents | Example | 📖
Query Rewriter Agent | Example | 📖
Native Web Search | Example | 📖
Built-in Search Tools | Example | 📖
Unified Web Search | Example | 📖
Web Fetch (Anthropic) | Example | 📖

📋 Planning & Execution
Feature | Code | Docs
Planning Mode | Example | 📖
Planning Tools | Example | 📖
Planning Reasoning | Example | 📖
Prompt Chaining | Example | 📖
Evaluator Optimiser | Example | 📖
Orchestrator Workers | Example | 📖

👥 Specialized Agents
Feature | Code | Docs
Data Analyst Agent | Example | 📖
Finance Agent | Example | 📖
Shopping Agent | Example | 📖
Recommendation Agent | Example | 📖
Wikipedia Agent | Example | 📖
Programming Agent | Example | 📖
Math Agents | Example | 📖
Markdown Agent | Example | 📖
Prompt Expander Agent | Example | 📖

🎨 Media & Multimodal
Feature | Code | Docs
Image Generation Agent | Example | 📖
Image to Text Agent | Example | 📖
Video Agent | Example | 📖
Camera Integration | Example | 📖

🔌 Protocols & Integration
Feature | Code | Docs
MCP Transports | Example | 📖
WebSocket MCP | Example | 📖
MCP Security | Example | 📖
MCP Resumability | Example | 📖
MCP Config Management | Docs | 📖
LangChain Integrated Agents | Example | 📖

🛡️ Safety & Control
Feature | Code | Docs
Guardrails | Example | 📖
Human Approval | Example | 📖
Rules & Instructions | Docs | 📖

⚙️ Advanced Features
Feature | Code | Docs
Async & Parallel Processing | Example | 📖
Parallelisation | Example | 📖
Repetitive Agents | Example | 📖
Agent Handoffs | Example | 📖
Stateful Agents | Example | 📖
Autonomous Workflow | Example | 📖
Structured Output Agents | Example | 📖
Model Router | Example | 📖
Prompt Caching | Example | 📖
Fast Context | Example | 📖

🛠️ Tools & Configuration
Feature | Code | Docs
100+ Custom Tools | Example | 📖
YAML Configuration | Example | 📖
100+ LLM Support | Example | 📖
Callback Agents | Example | 📖
Hooks | Example | 📖
Middleware System | Example | 📖
Configurable Model | Example | 📖
Rate Limiter | Example | 📖
Injected Tool State | Example | 📖
Shadow Git Checkpoints | Example | 📖
Background Tasks | Example | 📖
Policy Engine | Example | 📖
Thinking Budgets | Example | 📖
Output Styles | Example | 📖
Context Compaction | Example | 📖

📊 Monitoring & Management
Feature | Code | Docs
Sessions Management | Example | 📖
Auto-Save Sessions | Docs | 📖
History in Context | Docs | 📖
Telemetry | Example | 📖
Langfuse Tracing | Docs | 📖
Project Docs (.praison/docs/) | Docs | 📖
AI Commit Messages | Docs | 📖
@Mentions in Prompts | Docs | 📖

🖥️ CLI Features
Feature | Code | Docs
Slash Commands | Example | 📖
Autonomy Modes | Example | 📖
Cost Tracking | Example | 📖
Repository Map | Example | 📖
Interactive TUI | Example | 📖
Git Integration | Example | 📖
Sandbox Execution | Example | 📖
CLI Compare | Example | 📖
Profile/Benchmark | Docs | 📖
Auto Mode | Docs | 📖
Init | Docs | 📖
File Input | Docs | 📖
Final Agent | Docs | 📖
Max Tokens | Docs | 📖

🧪 Evaluation
Feature | Code | Docs
Accuracy Evaluation | Example | 📖
Performance Evaluation | Example | 📖
Reliability Evaluation | Example | 📖
Criteria Evaluation | Example | 📖

🎯 Agent Skills
Feature | Code | Docs
Skills Management | Example | 📖
Custom Skills | Example | 📖

⏰ 24/7 Scheduling
Feature | Code | Docs
Agent Scheduler | Example | 📖

💻 Using JavaScript Code

npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');

⚡ Performance

PraisonAI is built for speed, with agent instantiation in under 4 μs. This reduces overhead, improves responsiveness, and helps multi-agent systems scale efficiently in real-world production workloads.

Performance Metric | PraisonAI
Avg Instantiation Time | 3.77 μs
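Numbers like these are typically gathered with a micro-benchmark. The sketch below shows the general methodology using Python's timeit with a trivial stand-in class (TinyAgent is hypothetical, not the official benchmark, and absolute results vary by machine):

```python
import timeit

class TinyAgent:
    """Stand-in object; a real Agent constructor does more work."""
    def __init__(self, instructions: str):
        self.instructions = instructions

# Time many constructions and report the per-instance average.
n = 100_000
seconds = timeit.timeit(lambda: TinyAgent("helper"), number=n)
print(f"avg instantiation: {seconds / n * 1e6:.2f} us over {n} runs")
```

Averaging over many iterations smooths out scheduler noise, which matters when the quantity being measured is in the microsecond range.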


⭐ Star History

Star History Chart


πŸ” Langfuse Tracing

pip install "praisonai[langfuse]"
praisonai langfuse

PraisonAI Langfuse Tracing


🎓 Video Tutorials

Learn PraisonAI through our comprehensive video series:

View all 22 video tutorials
Topic | Video
AI Agents with Self Reflection | Self Reflection
Reasoning Data Generating Agent | Reasoning Data
AI Agents with Reasoning | Reasoning
Multimodal AI Agents | Multimodal
AI Agents Workflow | Workflow
Async AI Agents | Async
Mini AI Agents | Mini
AI Agents with Memory | Memory
Repetitive Agents | Repetitive
Introduction | Introduction
Tools Overview | Tools Overview
Custom Tools | Custom Tools
Firecrawl Integration | Firecrawl
User Interface | UI
Crawl4AI Integration | Crawl4AI
Chat Interface | Chat
Code Interface | Code
Mem0 Integration | Mem0
Training | Training
Realtime Voice Interface | Realtime
Call Interface | Call
Reasoning Extract Agents | Reasoning Extract

👥 Contributing

We welcome contributions! Fork the repo, create a branch, and submit a PR → Contributing Guide.


❓ FAQ & Troubleshooting

ModuleNotFoundError: No module named 'praisonaiagents'

Install the package:

pip install praisonaiagents
API key not found / Authentication error

Ensure your API key is set:

export OPENAI_API_KEY=your_key_here

For other providers, see Models docs.

How do I use a local model (Ollama)?
# Start Ollama server first
ollama serve

# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1

See Models docs for more details.

How do I persist conversations to a database?

Use the db parameter:

from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)

See Persistence docs for supported databases.

How do I enable agent memory?
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)

See Memory docs for more options.

How do I run multiple agents together?
from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()

See Agents docs for more examples.

How do I use MCP tools?
from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)

See MCP docs for all transport options.

Getting Help


Made with ❀️ by the PraisonAI Team

📚 Documentation • GitHub • ▶️ YouTube • 𝕏 X • 💼 LinkedIn