May 13, 2026
Table of Contents
- Project Overview
- Use Cases
- Quick Start
- Architecture Methods
- Benchmarks
- Evaluation
- Citations
- Stay Tuned
- Contributing
Project Overview
EverOS is a unified home for applying, building, and evaluating long-term memory in self-evolving agents. The repository is organized around three essential parts:
| Part | What it gives you | Start here |
|---|---|---|
| Use cases | Apps, demos, and integrations showing how memory changes real agent workflows. | use-cases/ |
| Architecture methods | Memory systems and algorithms you can run, extend, or compare. | methods/ |
| Benchmarks | Open evaluation suites for memory quality and agent self-evolution. | benchmarks/ |
At the center of EverOS is EverCore, a long-term memory operating system for agents. If you are new to the project, scan the use cases first to see what memory enables, then follow the Quick Start to run EverCore locally. The architecture and benchmark sections below give you the deeper reference material when you are ready to compare systems or reproduce results.
Use Cases
Use cases show what persistent memory makes possible in real products and workflows. Some examples are packaged in this repository; others point to external demos or integrations you can study and adapt.
| Use case | Description |
|---|---|
| Rokid AI Assistant with EverOS | Connect EverOS within Rokid Glasses, enabling long-term memory for all of your smart activities. Coming soon |
| Creative Assistant with Memory | A creative assistant with long-term memory, so you never lose your creative work. Coming soon |
| Earth Online Memory Game | Earth Online is a memory-aware productivity game that turns everyday planning into a living quest log. |
| Multi-Agent Orchestration Platform | Golutra presents a multi-agent workforce for engineering teams, extending the IDE model from a single assistant to coordinated agents. |
| Your Personal Tasting Universe | Record, visualize, and explore your tasting journey through an immersive 3D star map. |
| EverOS Open Her | Build AI that feels. An open-source persona engine in which personality emerges from neural drives, not prompts. Inspired by Her. |
| Browser Agent for Personal Memory | Ruminer brings persistent memory to a browser agent so it can carry personal context across web tasks. |
| EverMem Sync with EverOS | One command to connect any AI coding CLI to EverMemOS long-term memory. |
| MCO - Orchestrate AI Coding Agents | MCO equips your primary agent with an agent team that can work together to solve complex tasks. |
| Study Buddy with Self-Evolving Memory | Study proactively with an agent that has self-evolving memory. |
| Alzheimer’s Memory Assistant | Empowering individuals with advanced memory support and daily assistance. |
| Memory-Driven Multi-Agent NPC Experience | An iOS sci-fi mystery game where players explore and uncover the truth. |
| Mobi Companion | An iOS app where users create, nurture, and live with a personalized AI companion called Mobi. |
| AI Wearable with Memory | A context-native AI wearable that listens to everyday life and converts conversations into memory. |
| OpenClaw Agent Memory | A 24/7 agent workflow with continuous learning memory across sessions. |
| Live2D Character with Memory | Add long-term memory to a real-time Live2D character, powered by TEN Framework. |
| Computer-Use with Memory | Run screenshot-based analysis with computer-use and store the results in memory. |
| Game of Thrones Memories | A demonstration of AI memory infrastructure through an interactive Q&A experience with A Game of Thrones. |
| Claude Code Plugin | Persistent memory for Claude Code. Automatically saves and recalls context from past coding sessions. |
| Memory Graph Visualization | Explore stored entities and relationships in a graph interface. Frontend demo; backend integration is in progress. |
Quick Start
Choose the path that matches your goal:
```bash
git clone https://github.com/EverMind-AI/EverOS.git
cd EverOS
```
| Goal | Component | Entry Point |
|---|---|---|
| Build agents with long-term memory | EverCore | methods/EverCore/ |
| Explore the hypergraph memory architecture | HyperMem | methods/HyperMem/ |
| Evaluate memory system quality | EverMemBench | benchmarks/EverMemBench/ |
| Measure agent self-evolution | EvoAgentBench | benchmarks/EvoAgentBench/ |
| Adapt an example app or integration | Use cases | use-cases/ |
Each component has its own installation guide, dependency configuration, and usage examples.
EverCore
The fastest way to run a memory system locally is to start with EverCore:
```bash
cd methods/EverCore

# Start Docker services
docker compose up -d

# Install dependencies
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync

# Configure API keys
cp env.template .env
# Edit .env and set:
# - LLM_API_KEY (for memory extraction)
# - VECTORIZE_API_KEY (for embedding/rerank)

# Start server
uv run python src/run.py

# Verify installation
curl http://localhost:1995/health
# Expected response: {"status": "healthy", ...}
```
Server runs at http://localhost:1995 · Full Setup Guide
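Since the Docker services and the server take a moment to come up, it can help to poll the health endpoint before issuing requests. A minimal sketch, assuming only the `/health` endpoint and `{"status": "healthy", ...}` response shown above:

```python
import time

import requests

HEALTH_URL = "http://localhost:1995/health"


def wait_for_healthy(timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll the EverCore health endpoint until it reports healthy or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            resp = requests.get(HEALTH_URL, timeout=2)
            if resp.ok and resp.json().get("status") == "healthy":
                return True
        except requests.RequestException:
            pass  # server not up yet; retry after a short pause
        time.sleep(interval)
    return False
```

Call `wait_for_healthy()` once at startup and fail fast if it returns `False`, rather than letting the first memory request error out.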
Basic Usage
Store and retrieve memories with simple Python code:
```python
import requests

API_BASE = "http://localhost:1995/api/v1"

# 1. Store a conversation memory
requests.post(f"{API_BASE}/memories", json={
    "message_id": "msg_001",
    "create_time": "2025-02-01T10:00:00+00:00",
    "sender": "user_001",
    "content": "I love playing soccer on weekends"
})

# 2. Search for relevant memories
response = requests.get(f"{API_BASE}/memories/search", json={
    "query": "What sports does the user like?",
    "user_id": "user_001",
    "memory_types": ["episodic_memory"],
    "retrieve_method": "hybrid"
})

result = response.json().get("result", {})
for memory_group in result.get("memories", []):
    print(f"Memory: {memory_group}")
```
More Examples · API Reference · Interactive Demos
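If you call these endpoints from several places, a thin wrapper keeps the request shapes in one spot. This is an illustrative sketch built only on the two endpoints shown above; the class and method names are our own, not part of the EverCore API:

```python
import requests


class MemoryClient:
    """Illustrative convenience wrapper around the two endpoints used above."""

    def __init__(self, base_url: str = "http://localhost:1995/api/v1"):
        self.base_url = base_url

    def store(self, message_id, sender, content, create_time):
        """Store one conversation message as a memory."""
        resp = requests.post(f"{self.base_url}/memories", json={
            "message_id": message_id,
            "create_time": create_time,
            "sender": sender,
            "content": content,
        })
        resp.raise_for_status()
        return resp.json()

    def search(self, query, user_id,
               memory_types=("episodic_memory",), retrieve_method="hybrid"):
        """Search memories and return the list of memory groups."""
        resp = requests.get(f"{self.base_url}/memories/search", json={
            "query": query,
            "user_id": user_id,
            "memory_types": list(memory_types),
            "retrieve_method": retrieve_method,
        })
        resp.raise_for_status()
        return resp.json().get("result", {}).get("memories", [])
```

Using `raise_for_status()` surfaces server errors immediately instead of silently returning empty results.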
Architecture Methods
These are the memory architectures currently included in EverOS. Use them as runnable systems, research references, or starting points for your own agent memory layer.
| Method | Description | Results |
|---|---|---|
| EverCore | A self-organizing memory operating system inspired by biological imprinting. Extracts, structures, and retrieves long-term knowledge from conversations so agents can remember, understand, and continuously evolve. | LoCoMo 93.05% · LongMemEval 83.00% |
| HyperMem | A hypergraph-based hierarchical memory architecture that captures high-order associations through hyperedges, with topic, event, and fact layers for coarse-to-fine conversation retrieval. | LoCoMo 92.73% |
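To make the "high-order associations" idea concrete: unlike a pairwise graph edge, a hyperedge can connect any number of nodes at once. The toy sketch below is not HyperMem's implementation, just an illustration of hyperedges tagged with a coarse-to-fine layer (topic, event, or fact):

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Hyperedge:
    """One hyperedge links any number of nodes, not just a pair."""
    label: str
    layer: str           # "topic", "event", or "fact" (coarse to fine)
    nodes: frozenset


class HypergraphMemory:
    def __init__(self):
        self.edges = []
        self.index = defaultdict(list)  # node -> hyperedges touching it

    def add(self, label, layer, nodes):
        edge = Hyperedge(label, layer, frozenset(nodes))
        self.edges.append(edge)
        for node in edge.nodes:
            self.index[node].append(edge)
        return edge

    def retrieve(self, node, layer=None):
        """Look up hyperedges touching a node, optionally filtered by layer."""
        hits = self.index.get(node, [])
        return [e for e in hits if layer is None or e.layer == layer]


mem = HypergraphMemory()
mem.add("weekend-sports", "topic", {"user", "soccer", "weekends"})
mem.add("plays-soccer", "fact", {"user", "soccer"})
print([e.label for e in mem.retrieve("soccer", layer="fact")])  # ['plays-soccer']
```

The layer filter mirrors coarse-to-fine retrieval: resolve a query to a topic first, then narrow to events and facts within it.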
Benchmarks
These benchmarks provide shared standards for measuring memory quality and agent self-evolution across systems.
| Benchmark | Description |
|---|---|
| EverMemBench | Three-layer memory quality evaluation: factual recall, applied reasoning, and personalized generalization. |
| EvoAgentBench | Agent self-evolution evaluation through longitudinal growth curves, transfer efficiency, error avoidance, and skill-hit quality. |
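As a rough illustration of what a longitudinal growth curve measures (this is not EvoAgentBench's actual scoring), one can track an agent's cumulative success rate session by session:

```python
def growth_curve(session_results):
    """Cumulative success rate after each session.

    session_results: one inner list per session, each boolean marking
    whether a task attempted in that session succeeded.
    """
    curve, successes, total = [], 0, 0
    for session in session_results:
        successes += sum(session)
        total += len(session)
        curve.append(successes / total)
    return curve


# A self-evolving agent should trend upward as its memory accumulates:
print(growth_curve([[False, True], [True, True], [True, True, True]]))
```

A flat curve suggests the agent is not benefiting from accumulated memory; a rising one is the signature of self-evolution.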
Evaluation
Use the evaluation runner to reproduce EverCore results or compare another memory system against the same benchmark tasks.
Benchmark Results
Supported Benchmarks
- LoCoMo — Long-context memory benchmark with single/multi-hop reasoning
- LongMemEval — Multi-session conversation evaluation
- PersonaMem — Persona-based memory evaluation
Run Evaluations
```bash
cd methods/EverCore

# Install evaluation dependencies
uv sync --group evaluation

# Run smoke test (quick verification)
uv run python -m evaluation.cli --dataset locomo --system everos --smoke

# Run full evaluation
uv run python -m evaluation.cli --dataset locomo --system everos

# View results
cat evaluation/results/locomo-everos/report.txt
```
Full Evaluation Guide · Complete Results
Citations
If EverOS helps your research, please cite the relevant paper:
```bibtex
@article{hu2026evermemos,
  title   = {EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning},
  author  = {Chuanrui Hu and Xingze Gao and Zuyi Zhou and Dannong Xu and Yi Bai and Xintong Li and Hui Zhang and Tong Li and Chong Zhang and Lidong Bing and Yafeng Deng},
  journal = {arXiv preprint arXiv:2601.02163},
  year    = {2026}
}

@article{yue2026hypermem,
  title   = {HyperMem: Hypergraph Memory for Long-Term Conversations},
  author  = {Juwei Yue and Chuanrui Hu and Jiawei Sheng and Zuyi Zhou and Wenyuan Zhang and Tingwen Liu and Li Guo and Yafeng Deng},
  journal = {arXiv preprint arXiv:2604.08256},
  year    = {2026}
}

@article{hu2026evaluating,
  title   = {Evaluating Long-Horizon Memory for Multi-Party Collaborative Dialogues},
  author  = {Chuanrui Hu and Tong Li and Xingze Gao and Hongda Chen and Yi Bai and Dannong Xu and Tianwei Lin and Xiaohong Li and Yunyun Han and Jian Pei and Yafeng Deng},
  journal = {arXiv preprint arXiv:2602.01313},
  year    = {2026}
}
```
Stay Tuned
Star the repo or join the community links above to follow new architecture methods, benchmark releases, and memory-enabled use cases.
Contributing
Contributions are welcome across the whole repository: architecture methods, benchmark coverage, use-case examples, documentation, and bug fixes. Browse Issues to find a good entry point, then open a PR when you are ready.
Tip
All kinds of contributions are welcome 🎉
Help make EverOS better. Code, documentation, benchmark reports, use-case write-ups, and integration examples are all valuable. Share your projects on social media to inspire others.
Connect with one of the EverOS maintainers @elliotchen200 on 𝕏 or @cyfyifanchen on GitHub for project updates, discussions, and collaboration opportunities.
Code Contributors
Contribution Guidelines
Read the Contribution Guidelines for setup, pull request expectations, and use-case submission notes. For responsible disclosure, see the Security Policy.
License, Conduct, and Acknowledgments
Apache 2.0 • Code of Conduct • Acknowledgments