Claude Code Best Practices

April 18, 2026 · View on GitHub



Claude Code is Anthropic's official AI coding tool for the command line. It's not just a code-completion tool: it's an Agent that understands your entire project and executes complex tasks autonomously.


Core Concepts

Before diving into tips, get familiar with these core concepts:

| Concept | Description | Use Case |
| --- | --- | --- |
| Subagent | Child-process Agent that executes tasks independently | Parallel processing, context isolation |
| Command | Shortcuts starting with `/` | Quick access to common operations |
| Skill | Methodology files under `.claude/skills/` | Teach the AI how to do things |
| Hook | Scripts that run before/after tool calls | Automated validation, notifications |
| MCP Server | Model Context Protocol service | Extend AI capabilities (databases, APIs, etc.) |
| Memory | Persistent memory | Retain context across conversations |
| Checkpoint | Auto-saved snapshots | Safe rollback |

Getting Started

Installation

# Install globally via npm
npm install -g @anthropic-ai/claude-code

# Or run directly with npx
npx @anthropic-ai/claude-code

First Run

cd /your/project
claude

Once in interactive mode, try these:

# Understand the project
> What does this project do? Walk me through the architecture.

# Small task
> Add a truncate function to utils/string.ts that cuts strings at a given length and appends an ellipsis.

# Big task (just describe it — Claude Code plans and executes automatically)
> Refactor all endpoints under src/api/ to use a unified error handling format.

CLAUDE.md — Project Configuration File

Create a CLAUDE.md in your project root. Claude Code reads it on every startup:

# Project Overview
This is an e-commerce admin dashboard built with Next.js 14 + TypeScript.

# Code Conventions
- Use TypeScript strict mode
- Components use PascalCase
- API routes go under src/app/api/
- Tests use Vitest, placed in __tests__/

# Common Commands
- Dev server: pnpm dev
- Run tests: pnpm test
- Type check: pnpm typecheck
- Lint: pnpm lint

# Important Notes
- Do NOT modify anything under src/legacy/ — it's a compatibility layer for the old version
- Database migrations must be generated via drizzle-kit — never write raw SQL

Prompting Tips

1. Provide Full Context in One Go

❌ Bad: Add a login feature.
✅ Good: Add a JWT login endpoint under src/app/api/. Use bcrypt for password
    verification, set token expiry to 7 days, and return errors in a unified
    { code, message } format. Follow the style of src/app/api/register/route.ts.

2. Point to Reference Files

Follow the style of src/components/UserTable.tsx.
Create a new src/components/OrderTable.tsx with
pagination, sorting, and search using TanStack Table.

3. Analyze First, Act Second

Read src/services/payment.ts and src/services/order.ts first.
Analyze the current payment flow for issues,
propose improvements, and wait for my confirmation before making changes.

4. Limit the Scope

Only modify src/utils/date.ts.
Don't touch other files. Don't add new dependencies.

5. Draw Red Lines with Negatives

Implement a caching layer.
Don't use Redis — use in-memory caching.
Don't add new dependencies — use a plain Map.
Don't change any existing function signatures.

Advanced Techniques

Agent Capabilities

Claude Code is an Agent by design — describe the task and it plans and executes autonomously:

> Add unit tests to the entire project, targeting 80% coverage.

Claude Code will: read code → make a plan → write tests → run tests → fix failures → verify passing.

Tip: No special commands or flags needed. Just describe the desired outcome. Claude Code decides on its own whether multi-step execution is necessary.

Subagent Parallelism

Split large tasks across multiple Subagents running in parallel:

Do these three things in parallel:
1. Add pagination parameters to src/api/users.ts
2. Add date filtering to src/api/orders.ts
3. Add search to src/api/products.ts
Use subagents to execute each one independently.

Skill Files

Skills are one of the most powerful features. Place methodology files under .claude/skills/ and the AI loads them automatically:

.claude/skills/
├── brainstorming.md      # Requirements analysis workflow
├── debugging.md          # Debugging methodology
├── code-review.md        # Code review standards
└── verification.md       # Pre-completion verification
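A skill file is plain markdown. A minimal sketch (the filename and wording below are hypothetical; adapt them to your own workflow):

```markdown
<!-- .claude/skills/debugging.md (hypothetical content) -->
# Debugging Methodology

Trigger this skill when the user reports a bug or a failing test.

1. Reproduce the problem first: run the failing command and read the output.
2. Form 2-3 hypotheses about the root cause before touching any code.
3. Fix the most likely cause, re-run, and confirm the failure disappears.
4. Gotcha: never "fix" a failure by deleting the failing test.
```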

Quick-install superpowers-zh (20 battle-tested skills):

cd /your/project
npx superpowers-zh
# Auto-detects project type, installs to .claude/skills/
# Also supports Cursor, Gemini CLI, etc.

After installation, Claude Code loads these skills automatically. Invoke them with / commands — e.g., /brainstorming for requirements analysis, /debugging for systematic debugging. See superpowers-zh.

Need specialized roles? Use the 211 AI expert personas from agency-agents-zh:

# Copy role files to .claude/skills/ to use them
# Examples: database optimizer, security engineer, code reviewer, etc.

Reference roles in CLAUDE.md:

# Roles
When I say "review as a security expert", act according to .claude/skills/security-engineer.md.

Hook Automation

Hooks run scripts automatically before or after tool calls:

// .claude/settings.json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "echo 'About to run a command'" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "pnpm lint --fix" }
        ]
      }
    ]
  }
}

Hook trigger points include PreToolUse, PostToolUse, Notification, etc. Each trigger uses matcher to match tool names. See the official docs.

Use cases:

  • Auto-run related tests after every file edit
  • Auto-lint before every commit
  • Auto-add copyright headers to new files
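As a sketch of the "block dangerous ops" use case: a PreToolUse hook receives the pending tool call as JSON on stdin. The guard below is a rough substring heuristic, not real command parsing, and the blocked patterns are illustrative only:

```shell
# Sketch of a PreToolUse guard. Claude Code passes the pending tool call
# as JSON on stdin; this function takes that payload as its argument.
# The blocked patterns are illustrative, extend them for your project.
block_dangerous() {
  case "$1" in
    *'rm -rf'*|*'git push --force'*|*'drop table'*)
      echo "Blocked: dangerous command detected" >&2
      return 2   # exit code 2 signals a blocking error to Claude Code
      ;;
  esac
  return 0
}

# Typical wiring in a script such as .claude/hooks/guard.sh:
#   block_dangerous "$(cat)"
#   exit $?
```

Register the script under `PreToolUse` with a `Bash` matcher, as in the settings.json example above.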

Git Worktree Isolation

Use worktrees for experimental changes without affecting the main branch:

Do this refactor in a git worktree.
If the result is good, merge it. If not, discard it.
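The underlying git commands, sketched as a self-contained demo (it builds a throwaway repo so it can run anywhere; in a real project you would run only the worktree commands, and the branch name is a placeholder):

```shell
# Self-contained demo of the worktree flow in a throwaway repo.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# 1. Create an isolated worktree on a new experimental branch.
git worktree add -q "$repo-experiment" -b refactor/error-handling

# 2. ...let Claude Code work inside "$repo-experiment"...

# 3. Happy with the result? Merge refactor/error-handling as usual.
#    Not happy? Discard the worktree and the branch:
git worktree remove "$repo-experiment"
git branch -q -D refactor/error-handling
```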

Memory Across Conversations

Claude Code stores memory under ~/.claude/ for cross-conversation use:

Remember: this project deploys on Vercel,
CI uses GitHub Actions, and the database is Supabase PostgreSQL.
Factor these in for all deployment-related tasks going forward.

Multi-Agent Orchestration

When tasks are complex enough to need multiple AI roles collaborating (e.g., architect designs → developer implements → reviewer audits), use agency-orchestrator for YAML-based orchestration:

# workflow.yaml
name: New Feature Development
steps:
  - agent: Software Architect
    task: Analyze requirements, produce technical design
    output: design.md

  - agent: Backend Developer
    task: Implement code per design.md
    input: design.md

  - agent: Code Reviewer
    task: Review code quality and security

Best for: team-scale complex tasks, deliverables requiring multiple review rounds, standardized development workflows.


Debugging Tips

1. Let It Read Logs — Don't Let It Guess

❌ Bad: Tests are failing, fix them.
✅ Good: Run pnpm test src/api/users.test.ts,
    show me the failure output, analyze the root cause, then fix.

2. Narrow the Scope

Look only at lines 45-80 of src/services/auth.ts.
A user reports "token still shows expired after refresh".
Analyze possible causes first — don't change anything yet.

3. Diff Analysis

Check git log for recent changes to src/api/payment.ts.
Compare logic before and after to find which commit introduced this bug.

Tips Cheat Sheet (60+)

Organized by category, one tip per row. Bookmark this section — it's all you need.

Prompting (12)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Analyze before acting | Have Claude read the code and propose a plan. Confirm before it makes changes. Prevents premature rewrites |
| 2 | Start over | Not happy? Say "Scrap this approach. Use what you've learned to design an elegant solution from scratch" |
| 3 | Quiz me | "Review this change and ask me questions until you're confident I understand it, then open the PR" |
| 4 | Limit scope | "Only modify this one file. Don't touch anything else. Don't add new dependencies" |
| 5 | Specify reference files | "Follow the style of src/api/users.ts when writing orders.ts" — 10x better than a vague description |
| 6 | Draw red lines with negatives | "Don't use Redis. Don't add new dependencies. Don't change function signatures" |
| 7 | Ask for 2-3 options | "Give me 2 options with pros and cons. I'll pick one, then you implement it" |
| 8 | Confirm step by step | "Pause after each step and wait for my confirmation. Don't do everything at once" |
| 9 | Define done criteria | "Definition of done: all tests pass + zero TypeScript errors + lint passes" |
| 10 | Use ultrathink for deep reasoning | Start your prompt with "ultrathink" or "think really hard" to trigger extended thinking |
| 11 | Let it write commit messages | "Write a commit message that explains why this change was made, not what changed" |
| 12 | English prompts are more precise | For complex technical tasks, English prompts yield more precise results. Use your native language for simple tasks |

CLAUDE.md (10)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Keep it under 200 lines | Too long and the AI ignores the bottom. For large projects, split into `.claude/rules/` |
| 2 | The "run tests" test | If anyone opens Claude Code and says "run tests" and it succeeds, your CLAUDE.md is good enough |
| 3 | Use settings.json over "don't" | "Don't modify file X" in CLAUDE.md gets ignored easily. Permission controls in settings.json are more reliable |
| 4 | Use `<important>` tags | Wrap critical rules in `<important>` tags — the model pays more attention to them |
| 5 | List common commands | Include dev, test, build, lint, and deploy commands so the AI doesn't have to guess |
| 6 | List prohibitions | Explicitly state which files are off-limits, which methods are banned, which dependencies must not be added |
| 7 | Document directory structure | Describe key directories so the AI knows where things live |
| 8 | Layer your CLAUDE.md files | Global rules in the root, module-specific rules in subdirectories (e.g., src/api/CLAUDE.md) |
| 9 | Update regularly | As the project evolves, keep CLAUDE.md current. Stale rules are worse than no rules |
| 10 | Use `.claude/rules/` for conditional loading | In large projects, use globs to load rules by file type instead of cramming everything in one file |
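As an example of "use settings.json over don't": a deny-rule block in `.claude/settings.json` might look like this (the paths are placeholders; check the official permission-rule syntax for the exact format):

```json
{
  "permissions": {
    "deny": [
      "Edit(src/legacy/**)",
      "Write(src/legacy/**)",
      "Bash(rm -rf:*)"
    ]
  }
}
```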

Agent & Subagent (10)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Split subagents by feature | Create a "payment module agent" — not a generic "backend engineer" |
| 2 | Adversarial testing | One agent writes code, another (with independent context) hunts for bugs |
| 3 | Write skill descriptions for the model | Write "trigger when the user wants to do X" — not a human-friendly summary |
| 4 | Add gotchas to skills | Record mistakes Claude has made inside skill files. Highest signal-to-noise content you can write |
| 5 | `context: fork` for isolation | Run a skill in an isolated subagent; the main context only sees the final result |
| 6 | One agent, one clear task | A single agent with one task succeeds far more often than one agent juggling five |
| 7 | Pass files between agents, not messages | Have agent A write output to a file; agent B reads the file. More reliable than verbal handoffs |
| 8 | Use worktrees for experiments | Run subagents in git worktrees for experimental changes. Discard if unhappy |
| 9 | Extend capabilities with MCP | Connect databases, call APIs, query docs — MCP makes agents 10x more capable |
| 10 | Custom commands for repetitive tasks | Wrap frequent operations into `/command` shortcuts for one-click execution |
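For the custom-commands tip: a command is just a markdown prompt file under `.claude/commands/`, and the filename becomes the slash command. A hypothetical example:

```markdown
<!-- .claude/commands/fix-lint.md (hypothetical) -->
Run `pnpm lint`, read the errors, and fix them one file at a time.
Do not disable or suppress any lint rules.
Extra instructions from the user, if any: $ARGUMENTS
```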

Hooks (8)

| # | Tip | Details |
| --- | --- | --- |
| 1 | PostToolUse auto-format | Auto-run prettier/eslint --fix after Claude writes code to prevent formatting issues |
| 2 | PostToolUse auto-test | Auto-run related tests after every file modification to catch issues early |
| 3 | PreToolUse block dangerous ops | Check commands before Bash tool execution; block rm -rf and similar |
| 4 | Notification hook for alerts | Auto-send Slack/webhook notifications when long tasks complete |
| 5 | Stop hook for forced verification | Remind Claude to verify its own output at the end of each turn |
| 6 | Block execution with exit code 2 | Per the hooks docs, exit code 2 from a hook script blocks the tool call (other non-zero codes only surface an error); use this to enforce rules |
| 7 | Hook stdout as context | Claude sees the hook's stdout, so you can pass extra information through it |
| 8 | Match tool names precisely | Matcher supports `Write\|Edit`, `Bash`, etc. Don't use `*` to match everything |

Workflow (12)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Manually /compact at 50% context | Don't wait for auto-compaction. Proactive compaction keeps AI quality high |
| 2 | Esc Esc to revert to checkpoint | Gone off track? Roll back via checkpoint instead of trying to fix in a polluted context |
| 3 | Keep PRs small and focused | Aim for a median of ~120 lines per PR. Split large changes into multiple PRs |
| 4 | Finish migrations before new features | A half-migrated codebase makes the AI pick wrong patterns. Keep the codebase clean |
| 5 | New conversation for new tasks | Start fresh for each independent task. Stale context from old conversations degrades quality |
| 6 | Start with plan mode | For complex tasks, enter plan mode (/plan) first. Confirm the plan, then execute |
| 7 | Fast iterations > one perfect shot | Ship an MVP, verify it works, then refine. Don't expect perfection from a single prompt |
| 8 | Let Claude run tests itself | Don't run them for it. Let it run, read output, and fix. That's an agent's sweet spot |
| 9 | Use --resume to continue | Interrupted? Use `claude --resume` to restore context and keep going |
| 10 | Use --print for non-interactive tasks | In CI/CD: `claude --print "check code style"` for automation |
| 11 | Headless mode for batch tasks | `claude -p "task" --output-format json` is ideal for scripted invocations |
| 12 | Parallel worktrees for throughput | Run multiple Claude instances in separate worktrees to process modules in parallel |
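Combining the `--print`/`-p` tips, a hypothetical CI step (GitHub Actions syntax; the prompt, output filename, and secret name are all placeholders):

```yaml
# Hypothetical CI step: non-interactive Claude Code invocation
- name: AI style check
  run: |
    npm install -g @anthropic-ai/claude-code
    claude -p "Check the code style of the changed files" \
      --output-format json > review.json
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```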

Git & PRs (8)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Let Claude write PR descriptions | "Read the git diff and write a PR description explaining what changed and why" |
| 2 | Squash merge for clean history | AI's intermediate commits are noisy. Squash to keep only the final result |
| 3 | Isolate with feature branches | One branch per task. Claude's changes never touch main directly |
| 4 | Use Claude for code review | "Read this PR's diff and review it for security, performance, and maintainability" |
| 5 | Self-check before committing | "Pre-commit check: any leftover console.log, TODO, or hardcoded values?" |
| 6 | Don't amend the previous commit | Claude sometimes uses `--amend` and overwrites your prior commit. Explicitly say "create a new commit" |
| 7 | Use git stash to protect your work | Have Claude `git stash` before making changes. Unhappy? `git stash pop` to restore |
| 8 | Let Claude resolve merge conflicts | "Look at the conflicting files, resolve by business logic, and preserve valid changes from both sides" |

Debugging (10)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Let it run commands and read output | "Run pnpm test and show me the failure output" is 10x better than "tests are broken, fix them" |
| 2 | Narrow scope before asking | "Look only at auth.ts lines 45-80" beats "this module has a bug" |
| 3 | Use git log to compare | "Check recent changes, compare before and after, find which commit introduced the bug" |
| 4 | Reproduce before fixing | "Write a test case that reproduces this bug first, then fix it and confirm the test goes from red to green" |
| 5 | Binary search with bisect | "Use git bisect to find the commit that introduced this bug" |
| 6 | Read logs, don't guess | "Check the last 50 lines of logs/error.log and analyze the error" |
| 7 | Add temporary logging | "Add console.log at key points to print variable values, run once, and review the output" |
| 8 | Compare working vs. broken | "User A works fine, user B doesn't. Compare the two requests for differences" |
| 9 | Check environment differences | "Works locally but not in production? Compare env vars, dependency versions, and Node version" |
| 10 | Don't fix blindly | "Give me 3 possible causes with investigation steps. I'll confirm before you change any code" |
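The bisect tip, sketched as a self-contained demo in a throwaway repo (in a real project you would replace the `grep` check with your actual failing test command):

```shell
# Build a small history where the third commit introduces a "bug".
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.name demo && git config user.email demo@example.com
echo ok > app.txt && git add app.txt && git commit -qm "c1: works"
git commit -q --allow-empty -m "c2: unrelated"
echo broken > app.txt && git commit -qam "c3: introduces the bug"
bug=$(git rev-parse HEAD)
git commit -q --allow-empty -m "c4: more work"

# Bisect between the known-good root commit and the broken HEAD,
# using the check itself as the oracle (exit 0 = good, non-zero = bad).
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null 2>&1
git bisect run sh -c 'grep -qx ok app.txt' >/dev/null 2>&1
found=$(git rev-parse refs/bisect/bad)
git bisect reset >/dev/null 2>&1
echo "first bad commit: $found"
```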

Cost & Performance (6)

| # | Tip | Details |
| --- | --- | --- |
| 1 | Use Haiku for simple tasks | `claude --model haiku` for straightforward work — 10x cheaper |
| 2 | Use headless mode for batch jobs | Scripted batch calls consume fewer tokens than interactive mode |
| 3 | A precise CLAUDE.md saves tokens | The more precise your context, the fewer files the AI needs to read, and the lower the cost |
| 4 | Avoid repeatedly reading large files | Tell Claude the exact line range instead of having it read the entire file every time |
| 5 | Use /compact to free context | Compress long conversations promptly to reduce per-turn token consumption |
| 6 | Monitor usage | Check API usage regularly and set budget caps to avoid overspending |

For more tips, see claude-code-best-practice.


Common Pitfalls

| Pitfall | Description | Solution |
| --- | --- | --- |
| Context overflow | Conversations get too long, AI gets worse | Start new conversations regularly; pass context via CLAUDE.md |
| Hallucinated APIs | AI invents APIs that don't exist | Have it check docs or grep to confirm first |
| Over-refactoring | You asked to fix a bug, it rewrites the whole file | Explicitly say "only fix this one thing, don't refactor" |
| Tests not run | AI says "done" without verifying | Use a verification skill or hook to force validation |
| Missing error handling | Quick implementation with no edge case handling | Explicitly require error handling in your prompt |

👉 Deep dive: Claude Code Pitfalls — 8 real-world traps, each with Symptom / Cause / Recovery / Prevention


Configuration Templates

Copy these directly into your project:

| Template | Purpose |
| --- | --- |
| CLAUDE.md | Project configuration file template — copy to project root and customize |
| settings.json | Permission configuration template — copy to `.claude/settings.json` |

Further Reading