πŸš€ Coro Code

September 18, 2025

Language: English | δΈ­ζ–‡

A high-performance AI coding agent written in Rust with a rich terminal UI

demo



Coro Code is a high-performance AI coding agent written in Rust with a rich terminal UI. Formerly known as Trae Agent Rust, it remains compatible with the original tool spec while focusing on speed, reliability, and great UX.

✨ Highlights

  • πŸš€ High Performance: Written in Rust for speed and memory safety
  • 🎨 Rich Terminal UI: Beautiful, interactive interface with real-time updates
  • πŸ”§ Easy Configuration: Support for multiple LLM providers with flexible config options
  • πŸ› οΈ Powerful Tools: Built-in bash execution, file operations, and extensible tool system
  • πŸ”„ Environment Variables: Comprehensive support for API keys, base URLs, and model configuration
  • πŸ“¦ Cross-Platform: Works seamlessly on macOS, Linux, and Windows

πŸš€ Quick Start

πŸ“‹ Prerequisites

  • πŸ¦€ Rust stable (1.70+)
  • πŸ”‘ An API key (OpenAI recommended; Anthropic/Google coming soon)

πŸ“¦ Install

cargo install --git https://github.com/Blushyes/coro-code --bin coro

▢️ Run

# Interactive mode (recommended)
coro

# Single task
coro "Fix the bug in main.rs"

Configuration

Option A: Environment variables

# OpenAI
export OPENAI_API_KEY="your_openai_api_key"
export OPENAI_MODEL="gpt-4o"

# Optional: Custom base URL and model for OpenAI-compatible APIs
export OPENAI_BASE_URL="https://api.deepseek.com"
export OPENAI_MODEL="deepseek-chat"

# Or use generic overrides for any protocol
export CORO_BASE_URL="https://api.custom.com"
export CORO_MODEL="custom-model"
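The resolution order between the generic and protocol-specific variables can be sketched in Rust. This is a minimal illustration assuming the generic `CORO_*` override wins over the protocol-specific variable (as "override" suggests); `resolve_model` is a hypothetical helper, not Coro's actual internals:

```rust
// Sketch of override precedence: generic CORO_* value first, then the
// protocol-specific value, then a built-in default. In the real agent the
// two Options would come from std::env::var("CORO_MODEL") etc.
fn resolve_model(coro_model: Option<&str>, openai_model: Option<&str>, default: &str) -> String {
    coro_model
        .or(openai_model)
        .unwrap_or(default)
        .to_string()
}

fn main() {
    // Only the protocol-specific variable is set:
    assert_eq!(resolve_model(None, Some("deepseek-chat"), "gpt-4o"), "deepseek-chat");
    // The generic override takes precedence:
    assert_eq!(resolve_model(Some("custom-model"), Some("deepseek-chat"), "gpt-4o"), "custom-model");
    // Nothing set: fall back to the default:
    assert_eq!(resolve_model(None, None, "gpt-4o"), "gpt-4o");
    println!("precedence ok");
}
```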

Option B: Configuration file

Create a coro.json file:

{
  "protocol": "openai",
  "base_url": "https://api.deepseek.com",
  "api_key": "your-api-key",
  "model": "deepseek-chat",
  "params": {
    "max_tokens": 131072,
    "temperature": 0.7,
    "top_p": 0.9
  }
}
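If you prefer the command line, the same file can be created with a heredoc (the values simply mirror the example above):

```shell
# Create coro.json in the current directory with the example configuration.
cat > coro.json <<'EOF'
{
  "protocol": "openai",
  "base_url": "https://api.deepseek.com",
  "api_key": "your-api-key",
  "model": "deepseek-chat",
  "params": { "max_tokens": 131072, "temperature": 0.7, "top_p": 0.9 }
}
EOF
```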

Usage

# Interactive mode
coro

# Direct command
coro "Help me refactor this function"

# With specific config
coro --config custom.json "Analyze this codebase"

πŸ€– Supported Models

| Provider | Models | Status |
|----------|--------|--------|
| 🟒 OpenAI | gpt-4o, gpt-4o-mini | βœ… Ready |
| 🟑 Anthropic | claude-3.5 family | 🚧 Coming |
| πŸ”΅ Google | gemini-1.5 family | 🚧 Coming |

πŸ”§ Environment Variables Reference

| Variable | Description | Example |
|----------|-------------|---------|
| `OPENAI_API_KEY` | OpenAI API key | `sk-...` |
| `OPENAI_BASE_URL` | Custom base URL for OpenAI-compatible APIs | `https://api.deepseek.com` |
| `OPENAI_MODEL` | Custom model for OpenAI-compatible APIs | `gpt-4o`, `deepseek-chat` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
| `ANTHROPIC_BASE_URL` | Custom base URL for the Anthropic API | `https://api.anthropic.com` |
| `ANTHROPIC_MODEL` | Custom model for the Anthropic API | `claude-3-5-sonnet-20241022` |
| `GOOGLE_API_KEY` | Google AI API key | `AIza...` |
| `GOOGLE_BASE_URL` | Custom base URL for the Google AI API | `https://generativelanguage.googleapis.com` |
| `GOOGLE_MODEL` | Custom model for the Google AI API | `gemini-pro`, `gemini-1.5-pro` |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key | `...` |
| `AZURE_OPENAI_BASE_URL` | Azure OpenAI endpoint | `https://your-resource.openai.azure.com` |
| `AZURE_OPENAI_MODEL` | Custom model for Azure OpenAI | `gpt-4`, `gpt-35-turbo` |
| `CORO_BASE_URL` | Generic base URL override (any protocol) | `https://api.custom.com` |
| `CORO_PROTOCOL` | Force a specific protocol | `openai`, `anthropic` |
| `CORO_MODEL` | Generic model override (any protocol) | `gpt-4o`, `claude-3-5-sonnet` |

πŸ—ΊοΈ Roadmap

Status Legend: βœ… Completed | 🚧 In Progress | πŸ“‹ Planned

πŸš€ Phase 1: Core Experience
| Priority | Status | Feature | Description |
|----------|--------|---------|-------------|
| πŸ”₯ High | 🚧 | First-time Setup Management | Guided wizard (detect/create openai.json or env vars), API key validation, default models & examples |
| πŸ”₯ High | βœ… | Refactor Config Loading Logic | Unified priority (CLI args > env vars > JSON file), friendly error messages & diagnostics, optional hot reload |
| πŸ”₯ High | πŸ“‹ | Tool Call Permission System | Tool/command/directory whitelist, interactive confirmation, privilege escalation & sensitive operation warnings |
🎨 Phase 2: User Experience Enhancement
| Priority | Status | Feature | Description |
|----------|--------|---------|-------------|
| 🟑 Medium | πŸ“‹ | CORO.md Custom Prompts Support | Project/subdirectory level overrides, scenario templates (bugfix/refactor/docs/test) |
| 🟑 Medium | 🚧 | UI Layout Optimization & Unification | Header/Status/Input style consistency, keyboard shortcut & interaction consistency improvements |
| 🟑 Medium | πŸ“‹ | Trajectory Replay & Export | Trajectory visualization, one-click replay, export to JSON/Markdown |
| 🎨 Low | πŸ“‹ | Logo Design (gemini-cli style) | Visual identity design |
πŸ€– Phase 3: Intelligence & Performance
| Priority | Status | Feature | Description |
|----------|--------|---------|-------------|
| 🟑 Medium | πŸ“‹ | Multi-model & Auto Routing | Automatic model selection by task type, auto-downgrade on failure & retry strategies |
| 🟑 Medium | πŸ“‹ | Context Optimization & Caching | File summary caching, duplicate reference deduplication, token budget control |
| 🟑 Medium | βœ… | Token Compression | Intelligent context compression, selective token reduction, adaptive context windows |
| πŸ”΅ Low | πŸ“‹ | MCP Extension Ecosystem | Common provider presets & templates, one-click start/stop of external tools |
🌐 Phase 4: Platform & Ecosystem
| Priority | Status | Feature | Description |
|----------|--------|---------|-------------|
| πŸ”΅ Low | πŸ“‹ | Core WASM Support | Browser/plugin environment readiness, isomorphic tool interface & minimal runtime |
| πŸ”΅ Low | πŸ“‹ | Cross-platform Enhancement | macOS/Linux/Windows/WSL detail adaptation & stability improvements |
| πŸ”΅ Low | πŸ“‹ | Plugin Tool System | Third-party tool registration spec, version & dependency declaration |
πŸ›‘οΈ Phase 5: Security & Quality
| Priority | Status | Feature | Description |
|----------|--------|---------|-------------|
| 🟑 Medium | πŸ“‹ | Security & Rate Limiting | Sandbox mode (restricted bash/network switches), concurrency & rate limiting |
| πŸ”΅ Low | πŸ“‹ | Testing & Benchmarks | End-to-end test cases, performance benchmarks & comparison reports |

πŸ› οΈ Development

Context Export/Restore (Persistence)

The core supports exporting the conversation and execution context to JSON and restoring it later:

use coro_core::agent::{AgentBuilder, PersistedAgentContext};

// Export
let json = agent.export_context_json()?;                    // as JSON string
agent.export_context_to_file(".coro/context.json")?;       // or to file

// Restore
agent.restore_context_from_json(&json)?;                    // from JSON
agent.restore_context_from_file(".coro/context.json")?;    // or from file

// Work with the structured snapshot directly:
let snap = agent.export_context_snapshot()?;
let json2 = snap.to_json()?;
let snap2 = PersistedAgentContext::from_json(&json2)?;
agent.restore_context_from_snapshot(snap2)?;

Notes:

  • Snapshot contains conversation_history, AgentExecutionContext, and optional AgentConfig.
  • On restore, saved config (if present) is applied; missing tool-result pairs are handled automatically on next execution.
  • No need to manually re-inject a system prompt; the agent handles that as needed.

Pre-commit Hooks

We strongly recommend setting up pre-commit hooks to maintain code quality. The repository includes scripts to automatically install hooks that run formatting, linting, and tests before each commit.

Choose the appropriate script for your platform:

# Linux/macOS
./scripts/setup-pre-commit-hooks.sh

# Windows PowerShell
.\scripts\setup-pre-commit-hooks.ps1

# Windows Command Prompt
scripts\setup-pre-commit-hooks.bat

The pre-commit hook will automatically run:

  • Code formatting (cargo fmt --check)
  • Linting (cargo clippy)
  • Tests (cargo test)

For more details, see scripts/README.md.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Set up pre-commit hooks (recommended)
  4. Make your changes
  5. Ensure all tests pass
  6. Submit a pull request

πŸ“„ License

Dual licensed; you may choose either of the licenses included in the repository.

πŸ™ Acknowledgments

  • Trae Agent for the original Python implementation and spec
  • iocraft for the beautiful terminal UI framework
  • OpenAI, Anthropic, and Google for model APIs
  • Rust community for the amazing ecosystem

Made with ❀️ in Rust