About
May 12, 2026
fish-ai adds AI functionality to Fish.
It's awesome! I built it to make my life easier, and I hope it will make
yours easier too. Here is the complete sales pitch:
- It can turn a comment into a shell command and vice versa, which means less time spent reading manpages, googling and copy-pasting from Stack Overflow. Great when working with `git`, `kubectl`, `curl` and other tools with loads of parameters and switches.
- Did you make a typo? It can also fix a broken command (similarly to `thefuck`).
- Not sure what to type next or just lazy? Let the LLM autocomplete your commands with a built-in fuzzy finder.
- Everything is done using two (configurable) keyboard shortcuts, no mouse needed!
- It can be hooked up to the LLM of your choice (even a self-hosted one!).
- The whole thing is open source, hopefully somewhat easy to read and around 2000 lines of code, which means that you can audit the code yourself in an afternoon.
- Install and update with ease using `fisher`.
- Tested on both macOS and the most common Linux distributions.
- Does not interfere with `fzf.fish`, `tide` or any of the other plugins you're already using!
- Does not wrap your shell, install telemetry or force you to switch to a proprietary terminal emulator.
This plugin was originally based on Tom Dörr's codex.fish repository.
Without Tom, this repository would not exist!
If you like it, please add a ⭐.
Bug fixes are welcome! I consider this project largely feature complete. Before opening a PR for a feature request, consider opening an issue where you explain what you want to add and why, and we can talk about it first.
🎥 Demo
👨‍🔧 How to install
Install fish-ai
Make sure `git` and either `uv`, or
a supported version of Python
along with `pip` and `venv`, are installed. Then grab the plugin using
`fisher`:

```shell
fisher install realiserad/fish-ai
```
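If the installation fails, you can check that each prerequisite is actually available. A minimal sketch, assuming `python3` is your Python interpreter (users of `uv` only need `git` and `uv`):

```shell
# Check the prerequisites for fish-ai: git, plus either uv or
# Python 3 with pip and venv.
command -v git                                        # path to git, if installed
python3 --version                                     # Python version
python3 -m pip --version                              # pip is available
python3 -m venv --help > /dev/null && echo "venv OK"  # venv module works
```

Each command prints something (or fails) so you can see at a glance which prerequisite is missing.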
Create a configuration
Create a configuration file `$XDG_CONFIG_HOME/fish-ai.ini` (use
`~/.config/fish-ai.ini` if `$XDG_CONFIG_HOME` is not set) where
you specify which LLM fish-ai should talk to. If you're not sure,
use GitHub Models.
Anthropic
To use Anthropic:
```ini
[anthropic]
provider = anthropic
api_key = <your API key>
model = claude-sonnet-4-6
```
Azure OpenAI
To use Azure OpenAI:
```ini
[fish-ai]
configuration = azure

[azure]
provider = azure
server = https://<your instance>.openai.azure.com
model = <your deployment name>
api_key = <your API key>
```
Bedrock
AWS Bedrock provides LLMs hosted by AWS. They can be accessed either through the Mantle gateway or the Converse API.
If no api_key is configured, a short-term token is automatically
generated from your
AWS credentials.
You can also specify an api_key directly if you prefer to use a
Bedrock API key.
Use aws_profile to select a named profile from your AWS configuration. If omitted,
the default credential chain is used.
Available model IDs are listed in the Bedrock documentation.
Converse API
To use the Converse API:
```ini
[fish-ai]
configuration = aws-converse

[aws-converse]
provider = bedrock
bedrock_api = converse
model = anthropic.claude-haiku-4-5-20251001-v1:0
aws_region = us-east-1
aws_profile = default
```
It requires the bedrock:InvokeModel permission.
Mantle gateway
To use the Mantle gateway:
```ini
[fish-ai]
configuration = aws-mantle

[aws-mantle]
provider = bedrock
model = anthropic.claude-haiku-4-5
aws_region = us-east-1
aws_profile = default
```
It requires the bedrock-mantle:CreateInference permission.
Cohere
To use Cohere:
```ini
[cohere]
provider = cohere
api_key = <your API key>
model = command-a-03-2025
```
DeepSeek
To use DeepSeek:
```ini
[deepseek]
provider = deepseek
api_key = <your API key>
model = deepseek-chat
```
GitHub Models
To use GitHub Models:
```ini
[fish-ai]
configuration = github

[github]
provider = self-hosted
server = https://models.github.ai/inference
api_key = <paste GitHub PAT here>
model = gpt-4o-mini
```
You can create a personal access token (PAT) in your GitHub settings. The PAT does not require any permissions.
Google
To use Gemini from Google:
```ini
[google]
provider = google
api_key = <your API key>
model = gemini-3.1-pro-preview
```
Groq
To use Groq:
```ini
[groq]
provider = groq
api_key = <your API key>
```
OpenAI
To use OpenAI:
```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>
organization = <your organization>
```
OpenRouter
To use OpenRouter:
```ini
[fish-ai]
configuration = openrouter

[openrouter]
provider = self-hosted
server = https://openrouter.ai/api/v1
model = google/gemini-3-flash-preview
api_key = <your API key>
extra_body = {"reasoning": {"effort": "minimal", "exclude": true}}
```
Self-hosted
To use a self-hosted LLM (behind an OpenAI-compatible API):
```ini
[fish-ai]
configuration = self-hosted

[self-hosted]
provider = self-hosted
server = https://<your server>:<port>/v1
model = <your model>
api_key = <your API key>
```
If you are self-hosting, my recommendation is to use Ollama with Llama 3.3 70B. An out-of-the-box configuration running on localhost could then look something like this:
```ini
[fish-ai]
configuration = local-llama

[local-llama]
provider = self-hosted
model = llama3.3
server = http://localhost:11434/v1
```
Available models are listed in the Ollama model library.
Put the API key on your keyring
Instead of putting the API key in the configuration file, you can let
fish-ai load it from your keyring. To save a new API key or transfer
an existing API key to your keyring, run `fish_ai_put_api_key`.
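For example, a configuration section can omit `api_key` entirely and rely on the keyring instead. A sketch reusing the OpenAI provider from above:

```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
# No api_key here; store it on the keyring with fish_ai_put_api_key
```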
How to use
Transform comments into commands and vice versa
Type a comment (anything starting with #), and press Ctrl + P to turn it
into a shell command! Note that if your comment is very brief or vague, the LLM
may decide to improve the comment instead of providing a shell command. You
then need to press Ctrl + P again.
You can also run it in reverse. Type a command and press Ctrl + P to turn it into a comment explaining what the command does.
Autocomplete commands
Begin typing your command or comment and press Ctrl + Space to display a list
of completions in fzf (it is bundled
with the plugin, no need to install it separately).
To refine the results, type some instructions and press Ctrl + P
inside fzf.
Suggest fixes
If a command fails, you can immediately press Ctrl + Space at the command prompt
to let fish-ai suggest a fix!
🤸 Additional options
You can tweak the behaviour of fish-ai by putting additional options in your
fish-ai.ini configuration file.
Change the default key bindings
By default, fish-ai binds to Ctrl + P and Ctrl + Space. You
may want to change this if there is interference with any existing key
bindings on your system.
To change the key bindings, set keymap_1 (defaults to Ctrl + P)
and keymap_2 (defaults to Ctrl + Space) to the key binding escape
sequence of the key binding you want to use.
To get the correct key binding escape sequence, use
fish_key_reader.
For example, if you have the following output from `fish_key_reader`:

```console
$ fish_key_reader
Press a key:
bind ctrl-p 'do something'
$ fish_key_reader
Press a key:
bind ctrl-space 'do something'
```
Then put the following in your configuration file:
```ini
[fish-ai]
keymap_1 = 'ctrl-p'
keymap_2 = 'ctrl-space'
```
Restart the shell for the changes to take effect.
Explain in a different language
To explain shell commands in a different language, set the language option
to the name of the language. For example:
```ini
[fish-ai]
language = Swedish
```
This will only work well if the LLM you are using has been trained on a dataset with the chosen language.
Number of completions
To change the number of completions suggested by the LLM when pressing
Ctrl + Space, set the completions option. The default value is 5.
Here is an example of how you can increase the number of completions to 10:
```ini
[fish-ai]
completions = 10
```
To change the number of refined completions suggested by the LLM when pressing
Ctrl + P in fzf, set the refined_completions option. The default value
is 3.
```ini
[fish-ai]
refined_completions = 5
```
Personalise completions using commandline history
You can personalise completions suggested by the LLM by sending an excerpt of your commandline history.
To enable it, specify the maximum number of commands from the history
to send to the LLM using the history_size option. The default value
is 0 (do not send any commandline history).
```ini
[fish-ai]
history_size = 5
```
If you enable this option, consider the use of sponge
to automatically remove broken commands from your commandline history.
Preview pipes
To send the output of a pipe to the LLM when completing a command, use the
preview_pipe option.
```ini
[fish-ai]
preview_pipe = True
```
This will send the output of the longest consecutive pipe after the last
unterminated parenthesis before the cursor. For example, if you autocomplete
az vm list | jq, the output from az vm list will be sent to the LLM.
This behaviour is disabled by default, as it may slow down the completion process and lead to commands being executed twice.
Configure the progress indicator
You can change the progress indicator (the default is ⏳) shown when the plugin is waiting for a response from the LLM.
To change the default, set the progress_indicator option to zero or
more characters.
```ini
[fish-ai]
progress_indicator = wait...
```
Use custom headers
You can send custom HTTP headers using the headers option. Specify one
or more headers using comma-separated Key: Value pairs. For example:
```ini
[fish-ai]
headers = Header-1: value1, Header-2: value2
```
Switch between contexts
You can switch between different sections in the configuration using the
fish_ai_switch_context command.
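For example, you can keep a cloud setup and a local setup side by side and jump between them. A sketch reusing the providers shown above:

```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>

[local-llama]
provider = self-hosted
model = llama3.3
server = http://localhost:11434/v1
```

Running `fish_ai_switch_context` then lets you pick which of the two sections is active.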
💾 Data privacy
When using the plugin, fish-ai submits the name of your OS and the
commandline buffer to the LLM.
When you codify or complete a command, it also sends the contents of any
files you mention (as long as the file is readable), and when you explain
or complete a command, the output from <command> --help is provided to
the LLM for reference.
fish-ai can also send an excerpt of your commandline history
when completing a command. This is disabled by default.
Finally, to fix the previous command, the previous commandline buffer, along with any terminal output and the corresponding exit code is sent to the LLM.
If you are concerned with data privacy, you should use a self-hosted LLM. When hosted locally, no data ever leaves your machine.
Redaction of sensitive information
The plugin attempts to redact sensitive information from the prompt
before submitting it to the LLM. Sensitive information is replaced by
the <REDACTED> placeholder.
The following information is redacted:
- Passwords and API keys supplied as commandline arguments
- PEM-encoded private keys stored in files
- Bearer tokens, provided to e.g. cURL
If you trust the LLM provider (e.g. because you are hosting locally),
you can disable redaction using the `redact = False` option.
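For example (assuming the option belongs in the [fish-ai] section, like the other global options):

```ini
[fish-ai]
redact = False
```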