LazyShell
July 1, 2025 · View on GitHub
A smart CLI tool that generates and executes shell commands using AI
LazyShell is a command-line interface that helps you quickly generate and execute shell commands using AI. It supports multiple AI providers and provides an interactive configuration system for easy setup.
Features
- Generates shell commands from natural language descriptions
- Supports multiple AI providers (Groq, Google Gemini, OpenRouter, Anthropic, OpenAI, Ollama, Mistral, LMStudio)
- Interactive configuration system - no manual environment setup needed
- Safe execution with confirmation prompt
- Fast and lightweight
- Automatic fallback to environment variables
- Persistent configuration storage
- Automatic clipboard integration - generated commands are copied to the clipboard
- Built-in evaluation system for testing AI performance
- Model benchmarking capabilities
- LLM judge evaluation system
- CI/CD integration with automated quality checks
- System-aware command generation - detects OS, distro, and package manager
- Command refinement - iteratively improve commands with AI feedback
Installation
Using npm
npm install -g lazyshell
Using yarn
yarn global add lazyshell
Using pnpm
pnpm add -g lazyshell
Using bun (recommended)
bun add -g lazyshell
Using Install Script (experimental)
curl -fsSL https://raw.githubusercontent.com/bernoussama/lazyshell/main/install | bash
Quick Start
1. First Run: LazyShell will automatically prompt you to select an AI provider and enter your API key:
   lazyshell "find all files larger than 100MB"
   # or use the short alias
   lsh "find all files larger than 100MB"
2. Interactive Setup: Choose from the supported providers:
   - Groq - Fast LLaMA models with great performance
   - Google Gemini - Google's latest AI models
   - OpenRouter - Access to multiple models, including free options
   - Anthropic Claude - Powerful reasoning capabilities
   - OpenAI - GPT models, including GPT-4
   - Ollama - Local models (no API key required)
   - Mistral - Mistral AI models for code generation
   - LMStudio - Local models via LMStudio (experimental, no API key required)
3. Automatic Configuration: Your preferences are saved to ~/.lazyshell/config.json and used for future runs.
4. Clipboard Integration: Generated commands are automatically copied to your clipboard for easy pasting.
Configuration
Interactive Setup (Recommended)
On first run, LazyShell will guide you through:
- Selecting your preferred AI provider
- Entering your API key (if required)
- Automatically saving the configuration
Configuration Management
# Open configuration UI
lazyshell config
Manual Environment Variables (Optional)
You can still use environment variables as before:
export GROQ_API_KEY='your-api-key-here'
# OR
export GOOGLE_GENERATIVE_AI_API_KEY='your-api-key-here'
# OR
export OPENROUTER_API_KEY='your-api-key-here'
# OR
export ANTHROPIC_API_KEY='your-api-key-here'
# OR
export OPENAI_API_KEY='your-api-key-here'
Note: Ollama and LMStudio don't require API keys as they run models locally.
Configuration File Location
- Linux/macOS: ~/.lazyshell/config.json
- Windows: %USERPROFILE%\.lazyshell\config.json
Supported AI Providers
| Provider | Models | API Key Required | Notes |
|---|---|---|---|
| Groq | LLaMA 3.3 70B | Yes | Fast inference, excellent performance |
| Google Gemini | Gemini 2.0 Flash Lite | Yes | Latest Google AI models |
| OpenRouter | Multiple models | Yes | Includes free tier options |
| Anthropic | Claude 3.5 Haiku | Yes | Advanced reasoning capabilities |
| OpenAI | GPT-4o Mini | Yes | Industry standard models |
| Ollama | Local models | No | Run models locally |
| Mistral | Devstral Small | Yes | Code-optimized models |
| LMStudio | Local models | No | Experimental - Local models via LMStudio |
Usage Examples
Basic Usage
lazyshell "your natural language command description"
# or use the short alias
lsh "your natural language command description"
Silent Mode
lazyshell -s "find all JavaScript files" # No explanation, just the command
lsh --silent "show disk usage" # Same with long flag
Examples
# Find files
lazyshell "find all JavaScript files modified in the last 7 days"
# System monitoring
lazyshell "show disk usage sorted by size"
# Process management
lazyshell "find all running node processes"
# Docker operations
lazyshell "list all docker containers with their memory usage"
# File operations
lazyshell "compress all .log files in this directory"
# Package management (system-aware)
lazyshell "install docker" # Uses apt/yum/pacman/etc based on your distro
Interactive Features
- Execute: Run the generated command immediately
- Refine: Modify your prompt to get a better command
- Cancel: Exit without running anything
- Clipboard: Commands are automatically copied for manual execution
System Intelligence
LazyShell automatically detects your system environment:
- Operating System: Linux, macOS, Windows
- Linux Distribution: Ubuntu, Fedora, Arch, etc.
- Package Manager: apt, yum, dnf, pacman, zypper, etc.
- Shell: bash, zsh, fish, etc.
- Current Directory: Provides context for relative paths
This enables LazyShell to generate system-appropriate commands and suggest the right package manager for installations.
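As a rough illustration of the distro-to-package-manager mapping described above (the function name and lookup table are hypothetical, not LazyShell's actual detection code), the core idea can be sketched as:

```typescript
// Sketch: map a Linux distro ID (the ID= field of /etc/os-release)
// to its default package manager. Table contents are an assumption.
function packageManagerFor(osReleaseId: string): string {
  const table: Record<string, string> = {
    ubuntu: "apt",
    debian: "apt",
    fedora: "dnf",
    rhel: "yum",
    centos: "yum",
    arch: "pacman",
    opensuse: "zypper",
    alpine: "apk",
  };
  return table[osReleaseId.toLowerCase()] ?? "unknown";
}

// A real implementation would first read the ID, e.g.:
// const id = /^ID=(.*)$/m.exec(readFileSync("/etc/os-release", "utf8"))?.[1];
```

With context like this in the prompt, "install docker" can be turned into `sudo apt install docker.io` on Ubuntu but `sudo pacman -S docker` on Arch.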
Evaluation System
LazyShell includes a flexible evaluation system for testing and benchmarking AI performance:
import { runEval, Levenshtein, LLMJudge, createLLMJudge } from './lib/eval';
await runEval("My Eval", {
// Test data function
data: async () => {
return [{ input: "Hello", expected: "Hello World!" }];
},
// Task to perform
task: async (input) => {
return input + " World!";
},
// Scoring methods
scorers: [Levenshtein, LLMJudge],
});
Built-in Scorers
- ExactMatch: Perfect string matching
- Levenshtein: Edit distance similarity
- Contains: Substring matching
- LLMJudge: AI-powered quality evaluation
- createLLMJudge: Custom AI judges with specific criteria
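To make the Levenshtein scorer concrete, here is a minimal sketch of an edit-distance-based similarity score in the spirit of the built-in one (the actual implementation in lib/eval may differ):

```typescript
// Classic dynamic-programming edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Score in [0, 1]: 1 means identical, 0 means maximally different.
function levenshteinScore(output: string, expected: string): number {
  const max = Math.max(output.length, expected.length);
  return max === 0 ? 1 : 1 - editDistance(output, expected) / max;
}
```

Normalizing by the longer string's length keeps scores comparable across test cases of different sizes.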
LLM Judge Features
- AI-Powered Evaluation: Uses LLMs to evaluate command quality without expected outputs
- Multiple Criteria: Quality, correctness, security, efficiency assessments
- Rate Limiting: Built-in retry logic and exponential backoff
- Configurable Models: Use different AI models for judging
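The retry-with-exponential-backoff behavior mentioned above can be sketched as follows (names and defaults are illustrative, not LazyShell's internals):

```typescript
// Delay before the nth retry (0-based), doubling each time and capped at maxMs.
function backoffDelay(attempt: number, baseMs = 500, maxMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Run fn, retrying on failure with exponentially increasing delays.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < maxAttempts - 1) {
        await new Promise(res => setTimeout(res, backoffDelay(attempt, baseMs)));
      }
    }
  }
  throw lastErr;
}
```

A rate-limited judge call would then be wrapped as `withRetry(() => judge(prompt))`, so transient 429 responses are absorbed rather than failing the evaluation.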
Features
- Generic TypeScript interfaces for any evaluation task
- Multiple scoring methods per evaluation
- Async support for LLM-based tasks
- Detailed scoring reports with averages
- Error handling for failed test cases
See docs/EVALUATION.md for complete documentation.
Model Benchmarking
LazyShell includes comprehensive benchmarking capabilities to compare AI model performance:
Running Benchmarks
# Build and run benchmarks
bun run build
bun dist/bench_models.mjs
Benchmark Features
- Multi-Model Testing: Compare Groq, Gemini, Ollama, Mistral, and OpenRouter models
- Performance Metrics: Response time, success rate, and output quality
- Standardized Prompts: Consistent test cases across all models
- JSON Reports: Detailed results saved to the benchmark-results/ directory
Available Models
- llama-3.3-70b-versatile (Groq)
- gemini-2.0-flash-lite (Google)
- devstral-small-2505 (Mistral)
- ollama3.2 (Ollama)
- or-devstral (OpenRouter)
CI Evaluations
LazyShell includes automated quality assessments that run in CI to ensure consistent performance:
Overview
- Automated Testing: Runs on every PR and push to main/develop
- Threshold-Based: Configurable quality thresholds that must be met
- LLM Judges: Uses AI to evaluate command quality, correctness, security, and efficiency
- GitHub Actions: Integrated with CI/CD pipeline
Quick Setup
- Add GROQ_API_KEY to your GitHub repository secrets
- Evaluations run automatically with a 70% threshold by default
- CI fails if quality scores drop below the threshold
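The threshold gate itself is simple; a hedged sketch (function names are illustrative, not the actual ci-eval.ts code):

```typescript
// Mean of all per-test-case scores in a run.
function averageScore(scores: number[]): number {
  if (scores.length === 0) throw new Error("no scores to average");
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}

// CI passes only if the average meets the configured threshold (70% by default).
function passesThreshold(scores: number[], threshold = 0.7): boolean {
  return averageScore(scores) >= threshold;
}
```

In CI, a failing gate would translate to a non-zero exit code, e.g. `process.exit(passesThreshold(scores) ? 0 : 1)`.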
Local Testing
# Run CI evaluations locally
bun run eval:ci
Custom Evaluation Scripts
# Run basic evaluations
bun run build && bun dist/lib/basic.eval.mjs
# Run LLM judge evaluation
bun run build && bun dist/lib/llm-judge.eval.mjs
# Test AI library
bun run build && bun dist/test-ai-lib.mjs
# Run example evaluations
bun run build && bun dist/lib/example.eval.mjs
See docs/CI_EVALUATIONS.md for complete setup and configuration guide.
Development
Prerequisites
- Bun (recommended)
Setup
1. Clone the repository:
   git clone https://github.com/bernoussama/lazyshell.git
   cd lazyshell
2. Install dependencies:
   bun install
3. Build the project:
   bun run build
4. Link the package for local development:
   bun link --global
Available Scripts
bun x # Quick run with jiti (development)
bun run build # Compile TypeScript with pkgroll
bun run typecheck # Type checking only
bun run lint # Check code formatting and linting
bun run lint:fix # Fix formatting and linting issues
bun run eval:ci # Run CI evaluations locally
bun run release:patch # Build, version bump, publish, and push
bun run prerelease # Build, prerelease version, publish, and push
Project Structure
src/
├── index.ts               # Main CLI entry point
├── utils.ts               # Utility functions (command execution, history)
├── bench_models.ts        # Model benchmarking script
├── test-ai-lib.ts         # AI library testing script
├── commands/
│   └── config.ts          # Configuration UI command
├── helpers/
│   ├── index.ts           # Helper exports
│   └── package-manager.ts # System package manager detection
└── lib/
    ├── ai.ts              # AI provider integrations and command generation
    ├── config.ts          # Configuration management
    ├── eval.ts            # Evaluation framework
    ├── basic.eval.ts      # Basic evaluation examples
    ├── ci-eval.ts         # CI evaluation script
    ├── example.eval.ts    # Example evaluation scenarios
    └── llm-judge.eval.ts  # LLM judge evaluation examples
Development Features
- TypeScript: Full type safety and modern JavaScript features
- pkgroll: Modern bundling with tree-shaking
- jiti: Fast development with TypeScript execution
- Watch Mode: Auto-compilation during development
- Modular Architecture: Clean separation of concerns
- ESM: Modern ES modules throughout
Troubleshooting
Configuration Issues
- Invalid configuration: Delete ~/.lazyshell/config.json to reset, or use lazyshell config
- API key errors: Run lazyshell config to re-enter your API key
- Provider not working: Try switching to a different provider in the configuration
Environment Variables
LazyShell will automatically fall back to environment variables if the config file is invalid or incomplete.
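The fallback order can be sketched as a pure function (interface and names are assumptions for illustration, not LazyShell's actual API):

```typescript
interface ProviderConfig {
  provider?: string;
  apiKey?: string;
}

// Prefer the saved config; otherwise fall back to well-known environment
// variables, in the order the providers are documented above.
function resolveApiKey(
  config: ProviderConfig,
  env: Record<string, string | undefined>
): string | undefined {
  if (config.apiKey) return config.apiKey;
  return (
    env.GROQ_API_KEY ??
    env.GOOGLE_GENERATIVE_AI_API_KEY ??
    env.OPENROUTER_API_KEY ??
    env.ANTHROPIC_API_KEY ??
    env.OPENAI_API_KEY
  );
}
```

Passing `process.env` as the second argument keeps the function easy to unit-test with a fake environment.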
Common Issues
- Clipboard not working: Ensure your system supports clipboard operations
- Model timeout: Some models (especially Ollama) may take longer to respond
- Rate limiting: Built-in retry logic handles temporary rate limits
- Command not found: Make sure the package is properly installed globally
Debug Mode
For troubleshooting, you can check:
- Configuration file:
~/.lazyshell/config.json - System detection: The AI considers your OS, distro, and package manager
- Command history: Generated commands are added to your shell history
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Development Guidelines
- Follow TypeScript best practices
- Add tests for new features
- Update documentation as needed
- Run evaluations before submitting PRs
- Use the KISS principle (Keep It Simple, Stupid)
- Follow GitHub flow (create feature branches)
License
This project is licensed under the GPL-3.0 License - see the LICENSE file for details.
Acknowledgments
- Built with Commander.js
- Interactive prompts powered by @clack/prompts
- Clipboard integration via @napi-rs/clipboard
- AI SDK integration with Vercel AI SDK
- Bundled with pkgroll
- Powered by AI models from multiple providers
- Inspired by the need to be lazy (in a good way!)