SLM Mesh (SuperLocalMemory Mesh)

April 17, 2026 · View on GitHub

Peer-to-peer communication for AI coding agents.

npm version · License: Elastic-2.0 · Tests: 480 passing · Coverage: 100%

Part of the Qualixar research initiative by Varun Pratap Bhardwaj.

SLM stands for SuperLocalMemory — the local-first AI memory system. SLM Mesh is the communication layer that wires AI agent sessions together.


Your AI sessions can finally talk to each other.

https://github.com/user-attachments/assets/1016ec92-8d71-4570-89a8-3e512850557c

3 AI agents across VS Code, iTerm2, and Antigravity — discovering each other, sharing state, and coordinating in real-time.


The Problem

Every developer running parallel AI coding sessions hits the same wall: sessions are completely isolated. Session A fixes a database race condition. Session B is building a feature that touches the same database. Session B has no idea what Session A just did.

You become the message bus — copy-pasting context between terminals, losing time, losing focus.

This is not a Claude Code problem. This is not a Cursor problem. This is an AI agent architecture problem. Every tool — Claude Code, Cursor, Windsurf, Aider, Codex — has isolated sessions. SLM Mesh fixes that.

Quick Start

# Install
npm install -g slm-mesh

# Add to Claude Code
claude mcp add --scope user slm-mesh -- npx slm-mesh

# Optional: Add slash commands (works in every project)
mkdir -p ~/.claude/commands
cp $(npm root -g)/slm-mesh/skills/*.md ~/.claude/commands/
# Now type /mesh-peers, /mesh-send, /mesh-lock, /mesh-status, /mesh-sync in any session

Zero config. Zero cloud. Zero dangerous flags. Works with any MCP-compatible AI coding agent.

No Dangerous Flags Required

Some tools require --dangerously-skip-permissions to work. SLM Mesh does not. It runs entirely on localhost with bearer token authentication. No network exposure. No elevated permissions. No flags to explain to your security team.

How It Works

Developer starts AI agent session
  → Agent spawns SLM Mesh MCP server (stdio)
    → MCP server auto-starts broker on localhost (if not running)
    → MCP server registers with broker, gets peer ID
    → Broker opens Unix Domain Socket for real-time push (<100ms)
    → 8 tools available to the agent

Developer closes session
  → MCP server unregisters, broker releases locks, notifies other peers
  → When no peers remain, broker auto-shuts down after 60s

Everything runs on localhost. No cloud. No telemetry. Your data never leaves your machine.
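The heartbeat-based lifecycle above can be modeled in a few lines. This is an illustrative sketch of the liveness rules using the documented default timings (15 s heartbeat, 30 s to stale, 60 s to removal), not the broker's actual implementation:

```python
import time

# Liveness thresholds from the documented defaults (illustrative model,
# not the broker's actual implementation).
HEARTBEAT_MS = 15_000  # peers ping the broker this often
STALE_MS = 30_000      # no heartbeat for this long -> marked stale
DEAD_MS = 60_000       # stale for this long -> removed from the mesh

def classify_peer(last_heartbeat_ms: int, now_ms: int) -> str:
    """Classify a peer by the age of its last heartbeat."""
    age = now_ms - last_heartbeat_ms
    if age < STALE_MS:
        return "alive"
    if age < DEAD_MS:
        return "stale"
    return "dead"

now = int(time.time() * 1000)
print(classify_peer(now - 5_000, now))   # alive: seen recently
print(classify_peer(now - 45_000, now))  # stale: missed heartbeats
print(classify_peer(now - 90_000, now))  # dead: removed from the mesh
```

The two-stage stale/dead distinction lets peers survive brief pauses (a suspended laptop, a busy event loop) without being evicted immediately.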

Features

SLM Mesh is built on 6 pillars:

| Pillar | What It Does |
| --- | --- |
| Peer Discovery | Auto-detect all running AI agent sessions. Register on start, deregister on shutdown, heartbeat to detect crashes. Scope by machine, directory, or git repo. |
| Direct Messaging | Send structured messages between sessions with delivery confirmation and queryable history. |
| Broadcast | One-to-all message delivery for config changes, alerts, and coordination. |
| Shared State | Key-value scratchpad accessible by all peers. Namespaced by project. |
| File Coordination | Advisory file locks prevent two agents from editing the same file. Auto-expire after a configurable timeout. |
| Event Bus | Subscribe to peer_joined, peer_left, state_changed, file_locked, file_unlocked, and custom events. |
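The File Coordination semantics (advisory, owner-tracked, auto-expiring) can be sketched as a small in-memory table. This models the idea only; the real broker persists locks in SQLite and the 600-second default mirrors the 10-minute lock TTL:

```python
import time

# A minimal sketch of advisory file locking with auto-expiring TTLs.
# Illustrative model of the File Coordination pillar, not the broker's code.
class LockTable:
    def __init__(self):
        self._locks = {}  # path -> (owner, expires_at)

    def acquire(self, path, owner, ttl_s=600, now=None):
        now = time.time() if now is None else now
        holder = self._locks.get(path)
        if holder and holder[1] > now and holder[0] != owner:
            return False  # someone else holds a live lock
        self._locks[path] = (owner, now + ttl_s)
        return True

    def release(self, path, owner):
        holder = self._locks.get(path)
        if holder and holder[0] == owner:
            del self._locks[path]
            return True
        return False

table = LockTable()
table.acquire("src/db.ts", "session-a", ttl_s=600, now=0)  # True
table.acquire("src/db.ts", "session-b", now=10)            # False: held
table.acquire("src/db.ts", "session-b", now=700)           # True: expired
```

Because the locks are advisory, agents must check them before editing; nothing at the filesystem level blocks a write. The auto-expiry is what prevents a crashed session from holding a file hostage.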

Installation

npm install -g slm-mesh

npx (no install)

npx slm-mesh

MCP Setup: Claude Code

claude mcp add --scope user slm-mesh -- npx slm-mesh

MCP Setup: Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "slm-mesh": {
      "command": "npx",
      "args": ["slm-mesh"]
    }
  }
}

MCP Setup: VS Code / Windsurf / Other MCP Agents

Add to your MCP settings:

{
  "mcpServers": {
    "slm-mesh": {
      "command": "npx",
      "args": ["slm-mesh"]
    }
  }
}

8 MCP Tools

When connected via MCP, your AI agent gets these tools:

| Tool | Description |
| --- | --- |
| mesh_peers | Discover other AI agent sessions on this machine (scope: machine, directory, or repo) |
| mesh_summary | Set a description of what you are working on (visible to other agents) |
| mesh_send | Send a message to a specific peer or broadcast to all (to: "all") |
| mesh_inbox | Read messages from other sessions (filter: unread or all) |
| mesh_state | Read or write shared key-value state (get, set, list, delete) |
| mesh_lock | Advisory file locking (lock, unlock, query) with auto-expire |
| mesh_events | Read or subscribe to mesh events (peer_joined, state_changed, etc.) |
| mesh_status | Check broker health, peer count, message stats |
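On the wire, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. The sketch below shows the shape of a `mesh_send` broadcast; the `to: "all"` argument comes from the table above, while the exact argument names should be taken from the schemas the server advertises:

```python
import json

# Shape of an MCP tool invocation (JSON-RPC 2.0, per the Model Context
# Protocol). Argument names are illustrative; consult the tool schemas
# exposed by the server for the authoritative contract.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "mesh_send",
        "arguments": {"to": "all", "message": "Migrating the users table"},
    },
}
print(json.dumps(request, indent=2))
```

Your agent builds and sends these requests for you; the shape only matters if you are writing your own MCP client.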

CLI

SLM Mesh includes a full CLI for humans and scripts:

# Broker
slm-mesh start              # Start broker (foreground)
slm-mesh stop               # Stop broker
slm-mesh status             # Health check + stats

# Discovery
slm-mesh peers              # List active sessions

# Messaging
slm-mesh send <id> "message"
slm-mesh broadcast "message"

# Shared State
slm-mesh state set key value
slm-mesh state get key

# Locks
slm-mesh lock list

# Events
slm-mesh events

# JSON mode (for scripts)
slm-mesh status --json
slm-mesh peers --json

Python Client

pip install slm-mesh

from slm_mesh import SLMMeshClient

client = SLMMeshClient()
peers = client.peers()
# my_id is this session's peer ID, assigned when the session registers
client.send(my_id, peers[0].id, "What are you working on?")

The Python client wraps the broker HTTP API. Zero dependencies (stdlib only). The broker must be running (auto-started by any MCP connection or slm-mesh start).
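Since the client wraps the broker HTTP API, an authenticated request can also be built by hand. This is a sketch under stated assumptions: the `/peers` endpoint path is hypothetical, and the shipped client handles token discovery for you:

```python
import urllib.request

# A sketch of building an authenticated request to the broker HTTP API.
# The /peers path is a hypothetical example endpoint; the shipped Python
# client wraps these details.
def build_request(token: str, path: str = "/peers") -> urllib.request.Request:
    """Attach the broker's bearer token to a localhost request."""
    return urllib.request.Request(
        f"http://127.0.0.1:7899{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_request("example-token")
# urllib.request.urlopen(req) performs the call once a broker is running.
```

Every request needs the bearer token; the broker rejects unauthenticated calls even from localhost.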

Architecture

┌──────────────────────────────────────────────────────┐
│                   SLM Mesh v1.0.0                    │
│                                                      │
│  ┌───────────────┐     ┌───────────────────────────┐ │
│  │ Broker        │     │ MCP Server (per session)  │ │
│  │ (auto-start)  │◄───►│ 8 tools for AI agents     │ │
│  │ localhost     │     │ Registers with broker     │ │
│  │ SQLite + UDS  │     │ Receives push via UDS     │ │
│  └───────────────┘     └───────────────────────────┘ │
│         ▲                                            │
│         │              ┌───────────────────────────┐ │
│         └─────────────►│ CLI (standalone)          │ │
│                        │ slm-mesh peers/send/...   │ │
│                        └───────────────────────────┘ │
│                                                      │
│  ┌──────────────────────────────────────────────────┐│
│  │ Adapter Layer                                    ││
│  │ Backend: SQLite (default) | Custom               ││
│  │ Memory Bridge: SuperLocalMemory (optional)       ││
│  └──────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────┘
  • Broker — One per machine. Auto-starts on first use, auto-stops when idle. SQLite with WAL mode. Real-time push via Unix Domain Sockets.
  • MCP Server — One per AI agent session. Stdio transport. Registers with broker. Exposes 8 tools.
  • CLI — Standalone binary. HTTP to broker. For humans and scripts.
  • Adapters — Pluggable storage backends and optional memory bridges.

Security

  • Localhost only — Broker binds to 127.0.0.1. Cannot be overridden to bind to 0.0.0.0.
  • Bearer token auth — Random 32-byte token generated per broker session. All requests require Authorization: Bearer <token>.
  • No shell injection — All process spawning uses execFileSync with argument arrays.
  • Input validation — UUID peer IDs, 64KB max payload, 500 char max summary, rate limiting (100 req/10s per peer).
  • File permissions — Database, token, PID files created with 0o600. Data directory with 0o700.
  • No telemetry — Nothing phones home. No analytics. No tracking.
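The file-permission hardening can be reproduced in a few lines. This sketch shows how owner-only secrets are created on POSIX systems (0o600 files inside a 0o700 directory, as listed above); the broker's actual code may differ in detail:

```python
import os
import stat
import tempfile

# Creating secrets with owner-only permissions, mirroring the broker's
# policy for its database, token, and PID files. Illustrative sketch.
def write_private(path: str, data: bytes) -> None:
    # O_EXCL refuses pre-existing files; the 0o600 mode applies at creation,
    # so there is no window where the file is group- or world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)

tmp = tempfile.mkdtemp()          # mkdtemp creates the directory as 0o700
write_private(os.path.join(tmp, "token"), b"secret")
mode = stat.S_IMODE(os.stat(os.path.join(tmp, "token")).st_mode)
print(oct(mode))  # 0o600
```

Setting the mode in `os.open` rather than via a later `chmod` avoids a race where another local user could read the token between creation and restriction.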

Configuration

All configuration is optional. Defaults work out of the box.

| Variable | Default | Description |
| --- | --- | --- |
| SLM_MESH_PORT | 7899 | Broker HTTP port |
| SLM_MESH_DATA_DIR | ~/.slm-mesh/ | Data directory |
| SLM_MESH_HOST | 127.0.0.1 | Broker bind address (localhost only) |
| SLM_MESH_HEARTBEAT_MS | 15000 | Heartbeat interval |
| SLM_MESH_STALE_MS | 30000 | Time before peer marked stale |
| SLM_MESH_DEAD_MS | 60000 | Time before stale peer removed |
| SLM_MESH_LOCK_TTL_MIN | 10 | Default lock timeout (minutes) |
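An environment-with-defaults lookup like the one the broker presumably performs can be sketched as follows; the defaults mirror the table above, though the broker's own parsing and validation may differ:

```python
import os

# Documented defaults from the configuration table above.
# Illustrative sketch; the broker's own parsing may differ.
DEFAULTS = {
    "SLM_MESH_PORT": "7899",
    "SLM_MESH_HOST": "127.0.0.1",
    "SLM_MESH_HEARTBEAT_MS": "15000",
    "SLM_MESH_STALE_MS": "30000",
    "SLM_MESH_DEAD_MS": "60000",
    "SLM_MESH_LOCK_TTL_MIN": "10",
}

def setting(name: str) -> str:
    """Return the environment override, or the documented default."""
    return os.environ.get(name, DEFAULTS[name])

port = int(setting("SLM_MESH_PORT"))
```

Because every variable has a working default, exporting one of these is only needed to override a single behavior, such as moving the broker off port 7899.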

Agent Compatibility

SLM Mesh works with any AI coding agent that supports the Model Context Protocol:

| Agent | Status |
| --- | --- |
| Claude Code | Supported |
| Cursor | Supported |
| VS Code (Copilot) | Supported |
| Windsurf | Supported |
| Aider | Supported |
| Codex | Supported |
| Any MCP client | Supported |

Agent auto-detection — SLM Mesh detects which agent spawned it by inspecting the process tree and environment variables. This metadata is visible to other peers.

SLM Mesh vs claude-peers

Inspired by the growing need for inter-session communication in AI coding workflows, SLM Mesh takes a production-first approach with persistence, security, and agent-agnostic design.

claude-peers proved the demand. SLM Mesh is the production-grade answer.

| Capability | SLM Mesh | claude-peers |
| --- | --- | --- |
| MCP tools | 8 | 4 |
| Peer discovery | Scoped (machine/dir/repo) | Machine only |
| Direct messaging | Yes | Yes |
| Broadcast | Yes | Yes |
| Shared state | Yes | No |
| File locking | Yes | No |
| Event bus | Yes | No |
| CLI | Full (with --json) | No |
| Python client | Yes | No |
| Agent-agnostic | Any MCP agent | Claude Code only |
| Dangerous flags | Not required | Required |
| Test coverage | 480 tests, 100% lines | 0 tests |
| Bearer token auth | Yes | No |
| Rate limiting | Yes | No |
| Runtime | Node.js | Bun |

Documentation

Full documentation is available in the docs/ folder.

Contributing

Contributions are welcome. See CONTRIBUTING.md for guidelines.

git clone https://github.com/qualixar/slm-mesh.git
cd slm-mesh
npm install
npm test           # 480 tests
npm run typecheck  # 0 errors
npm run build      # Production build

We use TDD and require 100% line coverage for all changes.

License

Elastic License 2.0 — Copyright 2026 Varun Pratap Bhardwaj.

The Qualixar Ecosystem

Qualixar is a research initiative building the operating system for AI agents:

| Product | Role | Description |
| --- | --- | --- |
| SuperLocalMemory | The Brain | Local-first AI memory — persistent semantic memory for coding agents |
| SLM Mesh | The Nervous System | Peer-to-peer communication — carries signals between agent sessions |
| Qualixar OS | The Body | Agent orchestration — the full operating system for AI agent teams |

Each product works independently. Together, they form a complete agent operating system.

SLM Mesh can optionally bridge messages to SuperLocalMemory for cross-session recall — but it works perfectly standalone with zero dependencies on other Qualixar products.




⭐ Support This Project

If this project solves a real problem for you, please star the repo — it helps other developers discover Qualixar and signals that the AI agent reliability community is growing. Every star matters.

Star History Chart


Part of the Qualixar AI Agent Reliability Platform

Qualixar is building the open-source infrastructure for AI agent reliability engineering. Seven products, seven peer-reviewed papers, one coherent platform. Each tool solves one reliability pillar:

| Product | Purpose | Install | Paper |
| --- | --- | --- | --- |
| SuperLocalMemory | Persistent memory + learning for AI agents | npx superlocalmemory | arXiv:2604.04514 |
| Qualixar OS | Universal agent runtime (13 execution topologies) | npx qualixar-os | arXiv:2604.06392 |
| SLM Mesh | P2P coordination across AI agent sessions | npm i slm-mesh | |
| SLM MCP Hub | Federate 430+ MCP tools through one gateway | pip install slm-mcp-hub | |
| AgentAssay | Token-efficient AI agent testing | pip install agentassay | arXiv:2603.02601 |
| AgentAssert | Behavioral contracts + drift detection | pip install agentassert-abc | arXiv:2602.22302 |
| SkillFortify | Formal verification for AI agent skills | pip install skillfortify | arXiv:2603.00195 |

Zero cloud dependency. Local-first. EU AI Act compliant.

Start here → qualixar.com · All papers on Qualixar HuggingFace