Ch
May 10, 2026
Table of Contents
- Overview
- Vision
- Demo
- Quick Start
- Features
- Installation
- Configuration
- Usage
- Platform Compatibility
- Website
- Development
- Contributing
- Uninstall
- License
Overview
Ch is a lightweight, Go-based CLI tool for AI interaction. As the successor to the now-deprecated Cha project, Ch delivers the same core functionality with over 10x faster startup and significantly improved performance. Ch prioritizes speed and efficiency, making it ideal for developers who need rapid AI interaction with minimal overhead and full user control.
Vision
Ch provides direct terminal access to powerful AI models with minimal overhead, transparent operations, and explicit user control. It integrates seamlessly into developer environments, minimizing context switching and empowering users to leverage AI's full potential through explicit control and flexible, user-driven interactions without automated decisions or hidden costs.
Demo
Watch the full demo on YouTube (published on October 20, 2025) to see Ch in action with interactive chat, multi-platform switching, shell session recording, code export, and more. It demonstrates how Ch keeps a lightweight, focused core while remaining powerful through integration with other CLI tools.
Quick Start
Install:
curl -fsSL https://raw.githubusercontent.com/MehmetMHY/ch/main/install.sh | bash
Configure:
export OPENAI_API_KEY="your-api-key-here"
Start using:
ch "What are the key features of Go programming language?"
Features
- High Performance: Built for speed with minimal startup overhead
- Multi-Platform Support: OpenAI, OpenRouter, Groq, DeepSeek, Anthropic, XAI, Together, Google Gemini, Mistral AI, Amazon Bedrock, and Ollama
- Multi-Region Support: Switch between regional endpoints for platforms like Amazon Bedrock (22 AWS regions)
- Interactive & Direct Modes: Chat interactively or run single queries
- Unix Piping: Pipe any command output or file content directly to Ch
- Seamless Pipe Output: Automatically suppresses colors and UI elements when output is piped, perfect for shell pipelines and automation
- Smart File Handling: Load text files, PDFs, Word docs (DOCX/ODT/RTF), spreadsheets (XLSX/CSV), images (with OCR text extraction), and directories
- Advanced Export: Interactive chat export with fzf selection and editor integration
- AI-Suggested Filenames: When exporting, the current model proposes short snake_case filenames based on chat context. Configurable and fully optional, with a graceful fallback to the deterministic hash-based names.
- Code Block Export: Extract and save markdown code blocks with proper file extensions
- Session State Viewer: Check current session details like model, platform, and token usage
- Token Counting: Estimate token usage for files with model-aware tokenization
- Text Editor Integration: Use your preferred editor for complex prompts
- Dynamic Switching: Change models and platforms mid-conversation
- Smart Model Sorting: Model lists are sorted newest-first using API-provided timestamps, with alphabetical fallback for platforms that don't provide them
- Chat Backtracking: Revert to any point in conversation history
- Session Continuation: Automatically save and restore sessions to continue conversations later
- Session History Search: Search and load any previous session from history with fuzzy or exact matching. Supports time-based filters (1d, 1w, 1m, 1y), epoch ranges, and direct session file loading.
- Code Dump: Package entire directories for AI analysis (text and document files only)
- Shell Session Recording: Record terminal sessions and provide them as context to the model
- Web Scraping & Search: Built-in URL scraping and web search capabilities
- Thinking/Reasoning Display: Shows model thinking tokens (reasoning) in gray before the response, supporting reasoning_content, reasoning (Ollama), and <think> tag formats
- Clipboard Integration: Copy AI responses to clipboard with cross-platform support
- Colored Output: Platform and model names displayed in distinct colors
Installation
curl -fsSL https://raw.githubusercontent.com/MehmetMHY/ch/main/install.sh | bash
Alternative methods:
# using wget
wget -qO- https://raw.githubusercontent.com/MehmetMHY/ch/main/install.sh | bash
# manual clone and install
git clone https://github.com/MehmetMHY/ch.git
cd ch
./install.sh
Uninstall:
# safe uninstall with confirmation prompt (recommended)
./install.sh --safe-uninstall
# or uninstall without confirmation
./install.sh --uninstall
The installer automatically:
- Checks for Go 1.21+ and dependencies (fzf, yt-dlp, tesseract).
- Installs missing dependencies via system package managers (apt, brew, pkg, etc.).
- Builds and installs Ch to ~/.ch/bin/ch, with temporary files in ~/.ch/tmp/.
- Attempts to create a global symlink at /usr/local/bin/ch (or $PREFIX/bin/ch on Android/Termux).
- If symlink creation fails due to permissions, it automatically installs to ~/.ch/bin and provides instructions to add it to your PATH.
- Gracefully handles missing tesseract development libraries by building without OCR support. If tesseract dev headers are missing, the app still installs and works normally; image-to-text extraction is simply disabled.
Configuration
API Keys
Set up API keys for your chosen platforms. OPENAI_API_KEY is required for core functionality, and BRAVE_API_KEY is required for the web search feature.
Important Note on API Keys
By default, Ch uses the openai platform. If you run ch without setting the OPENAI_API_KEY, you will see an error. Here’s how to get started:
- Set the API Key: If you want to use OpenAI, set the environment variable: export OPENAI_API_KEY="your-openai-key"
- Switch Platforms: Use a different platform that you have configured. For example, to use Groq: ch -p groq "Hello"
- Use a Local Model: For a completely free and offline experience, use Ollama: ch -p ollama "Hello"
# required
export OPENAI_API_KEY="your-openai-key"
export BRAVE_API_KEY="your-brave-api-key" # for web search
# optional
export OPENROUTER_API_KEY="your-openrouter-key"
export GROQ_API_KEY="your-groq-key"
export DEEP_SEEK_API_KEY="your-deepseek-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export XAI_API_KEY="your-xai-key"
export TOGETHER_API_KEY="your-together-key"
export GEMINI_API_KEY="your-gemini-key"
export MISTRAL_API_KEY="your-mistral-key"
export AWS_BEDROCK_API_KEY="your-bedrock-key"
You can find links to obtain API keys below:
| Platform | Get API Key |
|---|---|
| OpenAI | https://openai.com/api/ |
| Brave Search | https://brave.com/search/api/ |
| OpenRouter | https://openrouter.ai/settings/keys |
| Google Gemini | https://ai.google.dev/gemini-api/docs/api-key |
| xAI | https://x.ai/api |
| Groq | https://console.groq.com/keys |
| Mistral AI | https://docs.mistral.ai/getting-started/quickstart |
| Anthropic | https://console.anthropic.com/ |
| Together AI | https://docs.together.ai/docs/quickstart |
| DeepSeek | https://api-docs.deepseek.com/ |
| Amazon Bedrock | https://aws.amazon.com/bedrock/ |
Default Settings
Customize default platform and model via environment variables:
# default: openai
export CH_DEFAULT_PLATFORM="groq"
# default: gpt-5.4-mini
export CH_DEFAULT_MODEL="llama3-8b-8192"
Config File
For persistent configuration, create ~/.ch/config.json to override default settings without needing environment variables:
{
"default_model": "grok-4-fast-non-reasoning",
"current_platform": "xai",
"preferred_editor": "vim",
"show_search_results": true,
"show_thinking": true,
"num_search_results": 10,
"search_country": "us",
"search_lang": "en",
"system_prompt": "You are a helpful assistant.",
"slow_model_patterns": ["^o\\d+", "^gpt-5$"]
}
Available config options:
- default_model - Set default model (automatically sets current_model if not specified)
- current_model - Set current active model
- current_platform - Set default platform
- current_base_url - Set default base URL/region for multi-region platforms like Amazon Bedrock
- preferred_editor - Set preferred text editor (default: "vim")
- show_search_results - Show/hide web search results (default: false)
- num_search_results - Number of search results to display (default: 5)
- search_country - Set the country for web searches (default: "us")
- search_lang - Set the language for web searches (default: "en")
- system_prompt - Customize the system prompt
- enable_session_save - Enable/disable automatic session saving for continuation (default: false)
- save_all_sessions - Save all sessions with timestamps instead of overwriting the latest (default: false). When enabled, each session gets a unique timestamped file; when disabled, only the latest session is kept
- show_thinking - Show/hide model thinking/reasoning tokens (default: true). When enabled, thinking content is displayed in gray before the response. Supports reasoning_content, reasoning (Ollama), and <think> tag formats
- slow_model_patterns - List of regex patterns for models that should use non-streaming mode with a loading animation (default: empty). Example: ["^o\\d+", "^gpt-5$"]
- shallow_load_dirs - Directories to load with only 1-level depth for !l and !e operations (default: major system directories like /, /home/, /usr/, $HOME, etc.). Set to [] to disable.
- ai_name_disable - Disable AI-suggested filenames in !e export modes (default: false). When false, the current model is asked to propose short snake_case filenames before each export filename prompt.
- ai_name_char_threshold - Minimum non-system chat content (in characters) before AI-suggested filenames are generated (default: 500). Below this, the AI naming step is skipped.
- ai_name_count - Number of AI-suggested filename candidates to request per export (default: 8).
- ai_name_timeout_seconds - Cancel the AI naming request after this many seconds and fall back to the hash list (default: 30).
- ai_name_prompt - Instruction sent to the model when generating filename suggestions. Use {count} as a placeholder for ai_name_count. The default asks for output as a single fenced text code block.
- Plus all other configuration options using snake_case JSON field names
For a complete list of all configuration options and their defaults, see internal/config/config.go. Note that the config file takes precedence over environment variables and provides a convenient way to customize Ch persistently, without setting environment variables for each session.
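For example, to enable automatic session continuation and keep every session as its own timestamped file, a minimal config could look like this (illustrative; only the keys you want to override need to be present):

```json
{
  "enable_session_save": true,
  "save_all_sessions": true
}
```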
Local & Open-Source Setup
Ch supports local models via Ollama, allowing you to run it without relying on third-party services. This provides a completely private, open-source, and offline-capable environment.
- Install Ollama: Follow the official instructions at ollama.com.
- Pull a model: ollama pull llama3
- Run Ch with Ollama: ch -p ollama "What is the capital of France?"
Since Ollama runs locally, no API key is required.
Usage
Basic Usage
# interactive mode
ch
# direct query
ch "Explain quantum computing"
# platform-specific query
ch -p groq "Write a Go function to reverse a string"
# model-specific query
ch -m gpt-4o "Create a REST API in Python"
# platform and model together
ch -o "openai|gpt-4o" "Create a REST API in Python"
# export code blocks to files
ch -e "Write a Python script to sort a list"
# load and display file content
ch -l document.pdf
ch -l document.docx # or .odt, .rtf
ch -l spreadsheet.xlsx
ch -l screenshot.png
# scrape web content
ch -l https://example.com
ch -l https://youtube.com/watch?v=example
# count tokens in files
ch -t ./README.md
ch -m "gpt-4" -t ./main.go
# disable session saving for this run (only works if enable_session_save is true in config)
ch -n "What is AI?"
ch --no-history "Explain quantum computing"
# piping support (colors/UI automatically suppressed)
cat main.py | ch "What does this code do?"
echo "hello world" | ch "Translate to Spanish"
ls -la | ch "Summarize this directory"
# perfect for shell pipelines and automation
ch "list 5 fruits" | grep apple
ch "explain golang" > output.txt
ch -w "golang features" | head -10
# session continuation - automatically saves and restores conversations
ch -c # continue last session interactively
ch -c "follow up question" # continue with a new query
ch -c /path/to/history.json # load custom history file and continue
ch -a # fuzzy search and load a previous session
ch -hs # same as -a (alias for --history)
ch -a exact # exact match search for previous sessions
ch -a 1w # filter sessions from the last week
ch -a 1776500000-1776542796 # filter sessions by epoch range
ch -a ch_session_latest.json # load a specific session file directly
ch --clear # clear all temporary files and sessions
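As a concrete illustration of the epoch-range filter above, a range covering roughly the last 24 hours can be built with standard shell tools (assuming `date +%s` is available):

```shell
# build an epoch range for the last 24 hours
end=$(date +%s)          # current time as a Unix epoch
start=$((end - 86400))   # 24 hours earlier
echo "${start}-${end}"
# then pass it to ch: ch -a "${start}-${end}"
```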
Interactive Commands
When in interactive mode (ch), use these commands:
- !q - exit interface
- !h - help page
- !c - clear chat history
- !b - backtrack messages
- !t [buff] - text editor mode
- \ - multi-line mode (exit with \)
- !m - switch models
- !o - select from all models
- !p - switch platforms
- !l [dir] - load files/dirs
- !a [filter] - search and load sessions (filters: 1d, 1w, 1m, 1y, exact, epoch ranges, session files)
- !x - record shell session
- !s [url] - scrape URL(s) or from history
- !w [query] - web search or from history
- !d - generate codedump
- !e [file] - export chat(s)
- !y - add to clipboard
- cc - quick copy latest response
- ctrl+c - clear prompt input
- ctrl+d - exit completely
Advanced Features
Code Export (-e flag):
- Automatically detects programming languages
- Saves with proper file extensions
- Supports 25+ languages and file types
Interactive Export (!e and !e [file]):
Offers three modes for exporting chat history:
- turn export: Select individual user prompts and bot responses to export. Use the >all option to quickly select everything. Opens editor for final review before saving.
- block export: Extracts all code blocks from your entire chat history. Lets you save each snippet individually, intelligently suggesting file names and extensions based on the code's language and content. Presents a prioritized list of suggested new names and existing files (marked with [w] for overwrite).
- manual export: Allows you to select specific chat entries, which are then combined into a single file for you to edit and save manually. Also benefits from the smart file-saving interface.
Optional: Provide a filename (!e output.txt) to skip the file selection step and save directly to that file.
AI-Suggested Filenames:
When you reach a filename selection step in any !e export mode, Ch asks the currently selected model to propose a few short snake_case filenames that summarize what's being saved. The model receives the full chat history plus the content being exported, so the suggestions are context-aware. AI suggestions appear at the top of the fzf list (always with a .txt extension), followed by the regular ch_<hash>.<ext> options and existing files in the directory.
Behavior notes:
- A spinner ("Loading...") is shown while the model responds. Pressing Ctrl+C aborts the export, matching how other spinners behave in Ch.
- If the chat history is short (default: under 500 characters of non-system content), AI naming is skipped automatically, since there isn't enough context for useful names yet.
- If the model takes longer than the configured timeout (default 30 seconds) to respond, the request is cancelled and Ch falls back to the hash-based list.
- If the model fails, returns nothing usable, or AI naming is disabled, Ch silently falls back to the existing hash-based filename list.
- Output is parsed from a fenced code block tagged text for reliable extraction across providers, and each name is sanitized (lowercase, [a-z0-9_] only, deduped, capped at 40 characters).
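The sanitization rules above can be approximated with standard tools. This is an illustrative sketch of the rules (minus deduplication), not Ch's actual implementation; the sanitize function name is hypothetical:

```shell
# illustrative sketch of the filename sanitization rules:
# lowercase, spaces to underscores, keep only [a-z0-9_], cap at 40 chars
sanitize() {
  printf '%s' "$1" |
    tr '[:upper:]' '[:lower:]' |  # lowercase
    tr ' ' '_' |                  # spaces -> underscores
    tr -cd 'a-z0-9_' |            # drop anything outside [a-z0-9_]
    cut -c1-40                    # cap at 40 characters
}
sanitize "My Export File!"   # -> my_export_file
```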
Configurable via ~/.ch/config.json. Example showing all four keys with their defaults:
{
"ai_name_disable": false,
"ai_name_char_threshold": 500,
"ai_name_count": 8,
"ai_name_timeout_seconds": 30,
"ai_name_prompt": "Based on the conversation above, propose {count} short filenames that best summarize what's being saved.\n\nRules for each name:\n- lowercase only\n- words separated by underscores\n- 1 to 4 words per name\n- no file extension\n- no punctuation, no quotes, no spaces, no commentary\n\nOutput format: respond with EXACTLY one fenced code block tagged \"text\", containing one filename per line and nothing else. Example:\n\n```text\nhello_world\napi_request_handler\nparse_json\n```\n\nDo not include any text before or after the code block."
}
Set ai_name_disable to true to skip AI naming entirely. Use {count} as a placeholder in ai_name_prompt to substitute ai_name_count at request time.
URL Scraping (!s and -l with URLs):
- Supports regular web pages and YouTube videos
- Extracts clean text content from web pages using a built-in parser
- YouTube videos include metadata and subtitle extraction via yt-dlp
- Multiple URL support: !s https://site1.com https://site2.com
- Interactive URL selection: When called without arguments (!s), scans chat history for all URLs, removes duplicates, and presents them via fzf for multi-selection with the tab key
- Integrated with file loading: ch -l https://example.com
Web Search (!w):
- Built-in Brave Search integration via the Brave Search API
- Requires BRAVE_API_KEY to be set in your environment variables
- Usage: !w "search query" or !w to select a sentence from chat history
- Results are automatically added to conversation context
- No need for external tools, but requires an API key
Clipboard Copy (!y):
- Four copy modes: turn copy (select individual prompts and responses), block copy (extract code blocks), manual copy (select responses with editor), link copy (select URLs)
- Use the >all option at the top of any list to quickly select everything
- Edit content in your preferred editor before copying (manual mode)
- Cross-platform clipboard support (macOS, Linux, Android/Termux, Windows)
- Usage: !y then select mode and items to copy
Web Content Interaction
The -s and -w flags in the terminal CLI are used for web content interaction:
-s flag (Scrape URL)
- Usage: ch -s <URL>
- Function: Scrapes content from the specified URL.
- Supports scraping normal web pages and YouTube videos.
- For normal web pages, it fetches and extracts clean text content from the HTML.
- For YouTube URLs, it uses yt-dlp to extract metadata and subtitles.
- The scraped content is printed directly to the terminal.
-w flag (Web Search)
- Usage: ch -w <search query>
- Function: Performs a web search using the Brave Search API.
- Requires the BRAVE_API_KEY environment variable to be set.
- Fetches search results from Brave Search.
- Prints the formatted search results (title, URL, description) to the terminal.
Both flags help integrate external web content and search results into your CLI workflow with Ch.
Platform Compatibility
Ch supports multiple AI platforms with seamless switching:
| Platform | Models | Environment Variable | Regions/Endpoints |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, etc. | OPENAI_API_KEY | 1 |
| OpenRouter | Various models | OPENROUTER_API_KEY | 1 |
| Groq | Llama3, Mixtral, etc. | GROQ_API_KEY | 1 |
| DeepSeek | DeepSeek-Chat, etc. | DEEP_SEEK_API_KEY | 1 |
| Anthropic | Claude-3.5, etc. | ANTHROPIC_API_KEY | 1 |
| xAI | Grok models | XAI_API_KEY | 1 |
| Together | Llama3, Mixtral, etc. | TOGETHER_API_KEY | 1 |
| Google | Gemini models | GEMINI_API_KEY | 1 |
| Mistral | Mistral-tiny, small, etc. | MISTRAL_API_KEY | 1 |
| Amazon Bedrock | Claude, Llama, Mistral, etc | AWS_BEDROCK_API_KEY | 22 |
| Ollama | Local models (Llama3, etc) | (none) | 1 |
Switch platforms during conversation:
!p groq
!p anthropic
!m gpt-4o
Multi-Region Platforms:
Some platforms like Amazon Bedrock support multiple regions. When switching to a multi-region platform, you'll be prompted to select a region before choosing a model:
!p amazon
# Prompts: region: (select from 22 AWS regions)
# Prompts: model: (select from available models in that region)
Supported AWS Bedrock regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Seoul, Osaka, Mumbai, Hyderabad, Singapore, Sydney), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), South America (São Paulo), AWS GovCloud (US-East, US-West), and FIPS endpoints.
Website
The project website is hosted on GitHub Pages at: https://mehmetmhy.github.io/ch/
The website source is located in the docs/ directory. To run the website locally:
cd docs
./run.sh
Development
Prerequisites
- Go 1.26.0 or higher
- fzf for interactive selections
- yt-dlp for YouTube video scraping
- Tesseract OCR (optional) for image-to-text extraction from images. The installer will warn you if it's missing.
- BRAVE_API_KEY for web search (see API Keys)
- Clipboard utilities (auto-detected): pbcopy, xclip, xsel, wl-copy, termux-clipboard-set
- Vim (though the Helix editor is also recommended)
Build from Source
git clone https://github.com/MehmetMHY/ch.git
cd ch
# build locally without installing
./install.sh -b
Build Options
# using the install script (local build options)
./install.sh -b # build locally without installing
./install.sh -r -b # refresh/update all dependencies and build
./install.sh -v # update version in Makefile interactively
./install.sh -h # show help with all options
# using Make directly
make install # install to $GOPATH/bin
make clean # clean build artifacts
make test # run tests
make lint # run linter
make fmt # format code
make dev # build and run in dev mode
Version Management
Update the project version interactively:
./install.sh -v
This will:
- Display the current version from Makefile
- Offer semantic version bump options (patch, minor, major)
- Allow custom version input
- Update the VERSION in Makefile automatically
Contributing
Contributions are welcome! Here's how to get started:
- Report Issues: Open an issue for bugs or feature requests
- Submit Pull Requests: Fork, make changes, and submit a PR
- Improve Documentation: Help enhance README, examples, or guides
Development Setup
git clone https://github.com/MehmetMHY/ch.git
cd ch
# refresh dependencies and build
./install.sh -r -b
make dev
Code Standards
- Follow existing Go conventions
- Run make fmt and make lint before submitting
- Test your changes thoroughly
- Update documentation as needed
- To add new slow models, add regex patterns to slow_model_patterns in ~/.ch/config.json
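To check whether a model name would match one of these patterns before adding it, the regexes can be tested with grep -E (note that grep -E uses [0-9] where Go's regexp also accepts \d; the patterns below mirror the config example):

```shell
# test candidate slow_model_patterns against a model name
model="o3-mini"
for pat in '^o[0-9]+' '^gpt-5$'; do
  if printf '%s' "$model" | grep -Eq "$pat"; then
    echo "match: $model matches $pat"
  fi
done
```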
Uninstall
Use --safe-uninstall for a confirmation prompt before deletion (recommended). The --uninstall flag deletes immediately without confirmation.
# safe uninstall with confirmation prompt (recommended)
./install.sh --safe-uninstall
# or uninstall without confirmation
./install.sh --uninstall
Manual uninstall:
# manual uninstall for Unix-based systems
sudo rm -f /usr/local/bin/ch
rm -rf ~/.ch
# manual uninstall for Android/Termux systems
rm -f $PREFIX/bin/ch
rm -rf ~/.ch
Clean Temporary Files
If you want to safely remove all Ch temporary files without uninstalling the application:
[ -d "${HOME}/.ch/tmp/" ] && rm -rf "${HOME}/.ch/tmp/"
This is useful for reclaiming disk space if temporary files from shell sessions, file loads, or other operations have accumulated.
License
Ch is licensed under the MIT License. See LICENSE for details.