Contributors

December 5, 2025 · View on GitHub

๐Ÿ” Observability๐Ÿ•ธ๏ธ Agent Tracing๐Ÿš‚ LLM Routing
๐Ÿ’ฐ Cost & Latency Tracking๐Ÿ“š Datasets & Fine-tuning๐ŸŽ›๏ธ Automatic Fallbacks




Docs • Changelog • Bug reports • See Helicone in Action! (Free)

Helicone is an AI Gateway & LLM Observability Platform for AI Engineers

  • ๐ŸŒ AI Gateway: Access 100+ AI models with 1 API key through the OpenAI API with intelligent routing and automatic fallbacks. Get started in 2 minutes.
  • ๐Ÿ”Œ Quick integration: One-line of code to log all your requests from OpenAI, Anthropic, LangChain, Gemini, Vercel AI SDK, and more.
  • ๐Ÿ“Š Observe: Inspect and debug traces & sessions for agents, chatbots, document processing pipelines, and more
  • ๐Ÿ“ˆ Analyze: Track metrics like cost, latency, quality, and more. Export to PostHog in one-line for custom dashboards
  • ๐ŸŽฎ Playground: Rapidly test and iterate on prompts, sessions and traces in our UI.
  • ๐Ÿง  Prompt Management: Version prompts using production data. Deploy prompts through the AI Gateway without code changes. Your prompts remain under your control, always accessible.
  • ๐ŸŽ›๏ธ Fine-tune: Fine-tune with one of our fine-tuning partners: OpenPipe or Autonomi (more coming soon)
  • ๐Ÿ›ก๏ธ Enterprise Ready: SOC 2 and GDPR compliant
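The automatic-fallback behavior is handled by the gateway server-side, but the idea is easy to picture. Below is a hypothetical client-side sketch (the names `withFallback` and `Completion` are our own, not a Helicone API): try each model in order and keep the first success.

```typescript
// Hypothetical sketch of the fallback idea the AI Gateway applies server-side.
// `Completion` and `withFallback` are illustrative names, not a Helicone API.
type Completion = { model: string; content: string };

async function withFallback(
  models: string[],
  call: (model: string) => Promise<Completion>
): Promise<Completion> {
  let lastError: unknown;
  for (const model of models) {
    try {
      // The first model that responds successfully wins.
      return await call(model);
    } catch (err) {
      // Remember the failure and move on to the next model.
      lastError = err;
    }
  }
  throw lastError ?? new Error("no models configured");
}
```

With the gateway, this comes for free: if one provider errors, the request can be retried against the next configured model without any client-side code.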

๐ŸŽ Generous monthly free tier (10k requests/month) - No credit card required!

Open-Source LLM Observability & AI Gateway Platform

Quick Start ⚡️

  1. Get your API key by signing up here and add credits at helicone.ai/credits

  2. Update the baseURL in your code and add your API key.

    import OpenAI from "openai";
    
    const client = new OpenAI({
      baseURL: "https://ai-gateway.helicone.ai",
      apiKey: process.env.HELICONE_API_KEY,
    });
    
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",  // claude-sonnet-4, gemini-2.0-flash or any model from https://www.helicone.ai/models
      messages: [{ role: "user", content: "Hello!" }]
    });
    
  3. 🎉 You're all set! View your logs at Helicone and access 100+ models through one API.
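Because the gateway speaks the OpenAI chat-completions protocol, any HTTP client works, and requests can carry metadata for filtering in the dashboard. Here is a minimal `fetch` sketch; the `Helicone-User-Id` and `Helicone-Property-*` header names are taken from Helicone's docs, so verify them there before relying on this.

```typescript
// Sketch: calling the gateway directly and tagging the request with metadata.
// Assumption: Helicone-User-Id / Helicone-Property-* headers as described in
// Helicone's docs; the /chat/completions path mirrors the OpenAI API.
const heliconeHeaders = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
  "Helicone-User-Id": "user-123",            // group requests by end user
  "Helicone-Property-Feature": "onboarding", // arbitrary custom property
};

async function chat(model: string, prompt: string): Promise<unknown> {
  const res = await fetch("https://ai-gateway.helicone.ai/chat/completions", {
    method: "POST",
    headers: heliconeHeaders,
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Gateway error: ${res.status}`);
  return res.json();
}
```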

Self-Hosting Open Source LLM Observability

Docker

Helicone is simple to self-host and update. To get started locally, just use our docker-compose file.

# Clone the repository
git clone https://github.com/Helicone/helicone.git
cd helicone/docker
cp .env.example .env

# Start the services
./helicone-compose.sh helicone up

Helm

For Enterprise workloads, we also have a production-ready Helm chart available. To access, contact us at enterprise@helicone.ai.

Manual deployment is not recommended. Please use Docker or Helm. If you must, follow the instructions here.

Architecture

Helicone comprises six services:

  • Web: Frontend platform (Next.js)
  • Worker: Proxy logging (Cloudflare Workers)
  • Jawn: Dedicated server for collecting and serving logs (Express + tsoa)
  • Supabase: Application database and auth
  • ClickHouse: Analytics database
  • Minio: Object storage for logs
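To make the division of labor concrete, here is an illustrative sketch of the kind of record that flows through the pipeline: the Worker proxies and times a request, Jawn ingests the record, ClickHouse stores the analytics row, and Minio stores raw bodies. The field names here are hypothetical, not Helicone's actual schema.

```typescript
// Illustration only: the shape of data flowing Worker -> Jawn -> ClickHouse/Minio.
// Field names are hypothetical; see the actual schemas in the Helicone repo.
interface RequestLog {
  requestId: string;
  model: string;
  latencyMs: number;
  costUsd: number;
  createdAt: string;
}

// The Worker measures the proxied request and hands a record like this to Jawn.
function buildLog(requestId: string, model: string, startedAt: number): RequestLog {
  return {
    requestId,
    model,
    latencyMs: Date.now() - startedAt,
    costUsd: 0, // computed later from model pricing tables
    createdAt: new Date().toISOString(),
  };
}
```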

Integrations 🔌

Inference Providers

| Integration | Supports | Description |
|---|---|---|
| AI Gateway | JS/TS, Python, cURL | Unified API for 100+ providers with intelligent routing, automatic fallbacks, and unified observability |
| Async Logging (OpenLLMetry) | JS/TS, Python | Asynchronous logging for multiple LLM platforms |
| OpenAI | JS/TS, Python | Inference provider |
| Azure OpenAI | JS/TS, Python | Inference provider |
| Anthropic | JS/TS, Python | Inference provider |
| Ollama | JS/TS | Run and use large language models locally |
| AWS Bedrock | JS/TS | Inference provider |
| Gemini API | JS/TS | Inference provider |
| Gemini Vertex AI | JS/TS | Gemini models on Google Cloud's Vertex AI |
| Vercel AI | JS/TS | AI SDK for building AI-powered applications |
| Anyscale | JS/TS, Python | Inference provider |
| TogetherAI | JS/TS, Python | Inference provider |
| Hyperbolic | JS/TS, Python | Inference provider |
| Groq | JS/TS, Python | High-performance models |
| DeepInfra | JS/TS, Python | Serverless AI inference for various models |
| Fireworks AI | JS/TS, Python | Fast inference API for open-source LLMs |

Frameworks

| Framework | Supports | Description |
|---|---|---|
| LangChain | JS/TS, Python | Use AI Gateway with LangChain for unified provider access |
| LlamaIndex | Python | Framework for building LLM-powered data applications |
| LangGraph | Python | Build stateful, multi-actor applications with LLMs |
| Vercel AI SDK | JS/TS | AI SDK for building AI-powered applications |
| Semantic Kernel | C#, Python | Microsoft's AI orchestration framework |
| CrewAI | Python | Framework for orchestrating role-playing AI agents |
| ModelFusion | JS/TS | Abstraction layer for integrating AI models into JavaScript and TypeScript applications |
| PostHog | JS/TS, Python, cURL | Product analytics platform; build custom dashboards |
| RAGAS | Python | Evaluation framework for retrieval-augmented generation |
| Open WebUI | JS/TS | Web interface for interacting with local LLMs |
| MetaGPT | YAML | Multi-agent framework |
| Open Devin | Docker | AI software engineer |
| Mem0 EmbedChain | Python | Framework for building RAG applications |
| Dify | No code required | LLMOps platform for AI-native application development |

This list may be out of date. Don't see your provider or framework? Check out the latest integrations in our docs. If not found there, request a new integration by contacting help@helicone.ai.

Contributing

We โค๏ธ our contributors! We warmly welcome contributions for documentation, integrations, costs, and feature requests.

If you have an idea for how Helicone can be better, create a GitHub issue.

License

Helicone is licensed under the Apache v2.0 License.

Additional Resources

For more information, visit our documentation.
