
May 9, 2026

FIM One Banner

Python 3.11+ CI License Discord Follow on X

🌐 English | 🇨🇳 中文 | 🇯🇵 日本語 | 🇰🇷 한국어 | 🇩🇪 Deutsch | 🇫🇷 Français

All-in-One Agent Platform for Global × China Enterprises. Wire every system you already run — global SaaS to the China stack — through one agent core.

🌐 Website · 📖 Docs · 📋 Changelog · 🐛 Report Bug · 💬 Discord · 🐦 Twitter · 🏆 Product Hunt

Tip

☁️ Skip the setup — try FIM One on Cloud. A managed version is live at cloud.fim.ai — no Docker, no API keys, no config. Sign in and start connecting your systems in seconds. Early access, feedback welcome.


Overview

Global enterprises run a sprawl of systems that don't talk to each other — ERP, CRM, OA, HR, finance, databases, IM platforms across regions. FIM One is the all-in-one agent platform that wires every system you already run into one agent core — global SaaS on one side, the full China stack (Feishu, WeCom, DingTalk, DM, Kingbase, etc.) on the other. One brain. Every system. Global SaaS × China Stack.

| Mode | What it is | Access |
|------|------------|--------|
| Standalone | General-purpose AI assistant — search, code, KB | Portal |
| Copilot | AI embedded in a host system's UI | iframe / widget / embed |
| Hub | Central AI orchestration across all connected systems | Portal / API |
```mermaid
graph LR
    ERP <--> Hub["🧠 FIM One Agent Core"]
    Database <--> Hub
    Lark <--> Hub
    Hub <--> CRM
    Hub <--> OA
    Hub <--> API[Custom API]
```

Screenshots

Dashboard — stats, activity trends, token usage, and quick access to agents and conversations.

Dashboard

Agent Chat — ReAct reasoning with multi-step tool calling against a connected database.

Agent Chat

DAG Planner — LLM-generated execution plan with parallel steps and live status tracking.

DAG Planner

Demo

Using Agents

Using Agents

Using Planner Mode

Using Planner Mode

Quick Start

```bash
git clone https://github.com/fim-ai/fim-one.git
cd fim-one

cp example.env .env
# Edit .env: set LLM_API_KEY (and optionally LLM_BASE_URL, LLM_MODEL)

docker compose up --build -d
```

Open http://localhost:3000 — on first launch you'll create an admin account. That's it.

```bash
docker compose up -d          # start
docker compose down           # stop
docker compose logs -f        # view logs
```

Local Development

Prerequisites: Python 3.11+, uv, Node.js 18+, pnpm.

```bash
git clone https://github.com/fim-ai/fim-one.git && cd fim-one

cp example.env .env           # Edit: set LLM_API_KEY

uv sync --all-extras
cd frontend && pnpm install && cd ..

./start.sh dev                # hot reload: Python --reload + Next.js HMR
```

| Command | What starts | URL |
|---------|-------------|-----|
| `./start.sh` | Next.js + FastAPI | localhost:3000 (UI) + :8000 |
| `./start.sh dev` | Same, with hot reload | Same |
| `./start.sh dev:api` | API only, dev mode (hot reload) | localhost:8000 |
| `./start.sh dev:ui` | Frontend only, dev mode (HMR) | localhost:3000 |
| `./start.sh api` | FastAPI only (headless) | localhost:8000/api |

For production deployment (Docker, reverse proxy, zero-downtime updates), see the Deployment Guide.

Key Features

Cross-Border Connectivity

  • Three delivery modes β€” Standalone assistant, embedded Copilot, or central Hub; same agent core.
  • Any system, one pattern β€” Connect APIs, databases, MCP servers. Actions auto-register as agent tools with auth injection. Progressive disclosure meta-tools reduce token usage by 80%+ across all tool types.
  • Database connectors β€” PostgreSQL, MySQL, Oracle, SQL Server, and enterprise databases common in China (DM, KingbaseES, GBase, Highgo) that most global platforms can't reach. Schema introspection and AI-powered annotation.
  • Three ways to build β€” Import OpenAPI spec, AI chat builder, or connect MCP servers directly.

Planning & Execution

  • Dynamic DAG planning β€” LLM decomposes goals into dependency graphs at runtime. No hard-coded workflows.
  • Concurrent execution β€” Independent steps run in parallel via asyncio; auto re-plan up to 3 rounds.
  • ReAct agent β€” Structured reasoning-and-acting loop with automatic error recovery.
  • Agent harness β€” Production-grade execution environment: ContextGuard for 5-layer token-budget management, progressive-disclosure meta-tools to keep the tool surface tractable, and self-reflection loops to counter goal drift.
  • Hook System β€” Deterministic enforcement that runs outside the LLM loop. First shipped: FeishuGateHook gates sensitive tool calls behind a human approval card posted to a Feishu group. Extensible to audit logging, read-only-mode guards, and rate limits (v0.9).
  • Content guardrails β€” Three-layer safety: tool-permission hooks (actions), credential / SSRF / MCP-auth checks (protocols), and content guardrails (input/output text). Default jailbreak-phrase detector aborts the turn before the LLM is invoked, saving tokens and surfacing a clear blocked notice in chat. Output guardrails optional via FIM_GUARDRAILS_OUTPUT.
  • Auto-routing β€” Classifies queries and routes to optimal mode (ReAct or DAG). Configurable via AUTO_ROUTING.
  • Extended thinking β€” Chain-of-thought for OpenAI o-series, Gemini 2.5+, Claude.
  • Prompt-cache observability β€” Anthropic prompt-cache read/create token counts captured per turn, surfaced in the chat done payload and logged so operators can verify cache hits and detect relay stations that don't honor the discount.

Workflow & Tools

  • Visual workflow editor β€” 12 node types, drag-and-drop canvas (React Flow v12), import/export as JSON.
  • Smart file handling β€” Uploaded files auto-inlined into context (small) or readable on-demand via read_uploaded_file tool. Intelligent document processing: PDFs, DOCX, and PPTX files get vision-aware processing with embedded image extraction when the model supports vision. Smart PDF mode extracts text from text-rich pages and renders scanned pages as images.
  • Universal document conversion β€” Built-in convert_to_markdown tool turns PDF / Word / Excel / PowerPoint / HTML / images / audio / Outlook .msg / EPUB / YouTube transcripts into clean Markdown via Microsoft MarkItDown. Vision-capable LLMs OCR embedded images and scanned pages β€” works with Claude, Gemini, Bedrock, and any LiteLLM-supported provider, no per-provider adapter code.
  • Pluggable tools β€” Python, Node.js, shell exec with optional Docker sandbox (CODE_EXEC_BACKEND=docker).
  • V4A patch editing β€” Beyond find_replace, agents can apply line-hunk patches with fuzzy whitespace matching via file_ops.apply_patch β€” robust to multi-line edits where exact-substring match would be brittle.
  • Full RAG pipeline β€” Jina embedding + LanceDB + hybrid retrieval + reranker + inline [N] citations. Vision-aware ingestion routes scanned PDFs and Office embedded images through the workspace's default vision LLM for OCR.
  • Tool artifacts β€” Rich outputs (HTML previews, files) rendered in-chat.

Messaging Channels (v0.8)

  • Org-scoped IM bridge β€” BaseChannel abstraction for outbound messaging across Slack, Microsoft Teams, Discord, Feishu (Lark), WeCom, and DingTalk. First shipping implementation is Feishu; Slack / Teams / WeCom / Email are next on the v0.9 roadmap.
  • Fernet-encrypted credentials β€” App secrets and encrypt keys encrypted at rest; every inbound callback signature-verified.
  • Interactive approval cards β€” Channel-native GateHook (Feishu today, Slack/Teams next) posts an Approve / Reject card to your group when a sensitive tool call fires; the tool blocks until a group member taps a verdict. Human-in-the-loop approval without a custom workflow engine.
  • Configurable approval routing per agent β€” Three modes (Auto / Inline only / Channel only) with an approver-scope selector (initiator / agent owner / any org member). One audit path stamps approver_user_id and decided_at whether the verdict came from chat or from the channel. Auto mode falls back to inline if no channel is linked, so agents always get a real approval UX.
  • Task-completion notifications β€” Long-running ReAct or DAG agents can push a summary card to the org's channel when work finishes. Configurable per-agent in Settings β†’ Agent β†’ Notifications.
  • Browse-and-pick UI β€” No copying raw channel IDs from the vendor console; the portal calls the IM platform's API and shows a group picker.

Platform

  • Multi-tenant β€” JWT auth, org isolation, admin panel with usage analytics and connector metrics. Multi-worker support via WORKERS=N with a Redis interrupt broker for cross-worker relay.
  • Marketplace β€” Publish and subscribe to agents, connectors, KBs, skills, workflows.
  • Global skills (SOPs) β€” Reusable operating procedures loaded for every user; progressive mode cuts tokens ~80%.
  • Stripe billing & per-user quotas β€” Optional Pro-plan upgrade via Stripe Checkout + Customer Portal. Quota chain (per-user override β†’ plan tier β†’ system default) with 0 for unlimited. Admin feature flag gates the entire pipeline; private deployments without Stripe stay clean.
  • Evaluation Center β€” Test-dataset management, parallel eval runs with LLM-graded judgments, per-case pass/fail/latency/token results viewer with auto-polling.
  • Conversation recovery β€” Synthetic tool_result rows persist after interrupted turns; clients auto-reconnect dropped SSE streams via /chat/resume with exponential backoff and a "Reconnecting…" indicator.
  • 6 languages β€” EN, ZH, JA, KO, DE, FR. Translations are fully automated β€” single glossary drives every LLM translation call (JSON, MDX, README), pre-commit hook refuses manual edits to generated locale files.
  • First-run setup wizard, dark/light theme, command palette, streaming SSE, DAG visualization.

Deep dive: Architecture · Hook System · Channels · Execution Modes · Why FIM One · Competitive Landscape

Architecture

```mermaid
graph TB
    subgraph app["Application Layer"]
        a["Portal · API · iframe · Feishu · Slack · WeCom · DingTalk · Teams · Email · Contract Systems · Custom Webhooks"]
    end
    subgraph mid["FIM One"]
        direction LR
        m1["Connectors<br/>+ MCP Hub"] ~~~ m2["Orch Engine<br/>ReAct / DAG"] ~~~ m3["RAG /<br/>Knowledge"] ~~~ m5["Hook System<br/>+ Channels"] ~~~ m4["Auth /<br/>Admin"]
    end
    subgraph biz["Business Systems"]
        b["ERP · CRM · OA · Finance · Databases · Contract Mgmt · Custom APIs"]
    end
    app --> mid --> biz
```

Each connector and channel is a standardized bridge — the agent doesn't know or care whether it's talking to SAP, a custom contract system, or a Feishu group. The Hook System runs platform code outside the LLM loop for approvals, audit, and rate limits; Channels carry outbound notifications and approval cards to external IM platforms. See Connector Architecture, Hook System, and Channels for details.
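The approval-gate pattern (a hook that deterministically intercepts a sensitive tool call before it executes) can be sketched as follows; the function names and tool IDs are illustrative, not FIM One's actual hook classes:

```python
# Sketch of an approval gate running outside the LLM loop: the gate decides
# deterministically, before the tool executes, and the model never sees the
# enforcement logic. Names are illustrative, not FIM One's actual hook API.
SENSITIVE = {"erp.delete_order", "db.execute_write"}

def gate_hook(tool_name: str, approved: bool) -> str:
    if tool_name not in SENSITIVE:
        return "allow"                             # non-sensitive: pass through
    return "allow" if approved else "blocked"      # sensitive: needs a human verdict

def call_tool(tool_name: str, approved: bool = False) -> str:
    verdict = gate_hook(tool_name, approved)
    if verdict != "allow":
        return f"{tool_name}: blocked pending approval"
    return f"{tool_name}: executed"

print(call_tool("crm.search"))                       # not gated
print(call_tool("db.execute_write"))                 # gated, no approval yet
print(call_tool("db.execute_write", approved=True))  # gated, approved
```

Because the gate is ordinary platform code, its decision can't be argued away by a prompt; the LLM only ever sees the blocked or executed result.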

Configuration

FIM One works with any OpenAI-compatible provider:

| Provider | LLM_API_KEY | LLM_BASE_URL | LLM_MODEL |
|----------|-------------|--------------|-----------|
| OpenAI | `sk-...` | (default) | `gpt-4o` |
| DeepSeek | `sk-...` | `https://api.deepseek.com/v1` | `deepseek-chat` |
| Anthropic | `sk-ant-...` | `https://api.anthropic.com/v1` | `claude-sonnet-4-6` |
| Ollama (local) | `ollama` | `http://localhost:11434/v1` | `qwen2.5:14b` |

Minimal .env:

```bash
LLM_API_KEY=sk-your-key
# LLM_BASE_URL=https://api.openai.com/v1   # default
# LLM_MODEL=gpt-4o                         # default
JINA_API_KEY=jina_...                       # unlocks web tools + RAG
```

Full reference: Environment Variables

Tech Stack

| Layer | Technology |
|-------|------------|
| Backend | Python 3.11+, FastAPI, SQLAlchemy, Alembic, asyncio |
| Frontend | Next.js 14, React 18, Tailwind CSS, shadcn/ui, React Flow v12 |
| AI / RAG | OpenAI-compatible LLMs, Jina AI (embed + search), LanceDB |
| Database | SQLite (dev) / PostgreSQL (prod) |
| Messaging | BaseChannel abstraction (Slack, Teams, Discord, Feishu/Lark, WeCom, DingTalk), Fernet-encrypted credentials, HMAC signature verification |
| Infra | Docker, uv, pnpm, SSE streaming |
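Chat output streams to the client over SSE. A minimal sketch of parsing a text/event-stream body into events (simplified relative to the full SSE spec, which also handles comments, `retry`, and `id` fields):

```python
# Minimal SSE parser sketch: split a text/event-stream body into events.
# Simplified relative to the full spec (no comments, retry, or id fields).
def parse_sse(raw: str) -> list[dict[str, str]]:
    events = []
    for block in raw.strip().split("\n\n"):        # a blank line ends an event
        event = {"event": "message", "data": ""}
        data_lines = []
        for line in block.split("\n"):
            field, _, value = line.partition(":")
            value = value.lstrip(" ")
            if field == "event":
                event["event"] = value
            elif field == "data":
                data_lines.append(value)           # multiple data lines join with \n
        event["data"] = "\n".join(data_lines)
        events.append(event)
    return events

raw = "event: token\ndata: Hel\n\nevent: token\ndata: lo\n\nevent: done\ndata: {}\n\n"
print([e["event"] for e in parse_sse(raw)])
```

The event names (`token`, `done`) here are placeholders; in production you would use a streaming HTTP client rather than buffering the whole body.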

Development

```bash
uv sync --all-extras           # install dependencies
pytest                         # run tests
pytest --cov=fim_one           # with coverage
ruff check src/ tests/         # lint
mypy src/                      # type check
bash scripts/setup-hooks.sh    # install git hooks (enables auto i18n)
```

Roadmap

See the full Roadmap for version history and planned features.

FAQ

Common questions about deployment, LLM providers, system requirements, and more — see the FAQ.

Contributing

We welcome contributions of all kinds — code, docs, translations, bug reports, and ideas.

Pioneer Program: The first 100 contributors who get a PR merged are recognized as Founding Contributors with permanent credits, a badge, and priority issue support. Learn more →

Security: To report a vulnerability, please open a GitHub issue with the [SECURITY] tag. For sensitive disclosures, contact us via Discord DM.

Star History

Star History Chart

Activity

Alt

Contributors

Thanks to these wonderful people (emoji key):

- Tao An — 💻 🚧 🎨 📖 📆 🤔 🚇
- Teo Gonzalez Collazo — 💻 ⚠️
- Houx. — 💻 🐛

This project follows the all-contributors specification. Contributions of any kind welcome!

License

FIM One Source Available License. This is not an OSI-approved open source license.

Permitted: internal use, modification, distribution with license intact, embedding in non-competing applications.

Restricted: multi-tenant SaaS, competing agent platforms, white-labeling, removing branding.

For commercial licensing inquiries, please open an issue on GitHub.

See LICENSE for full terms.


🌐 Website · 📖 Docs · 📋 Changelog · 🐛 Report Bug · 💬 Discord · 🐦 Twitter · 🏆 Product Hunt