🧠 Second Brain AI agent
April 5, 2026 · View on GitHub
Introducing the Second Brain AI Agent Project: Empowering Your Personal Knowledge Management
Are you overwhelmed with the information you collect daily? Do you often find yourself lost in a sea of markdown files, videos, web pages, and PDFs? What if there's a way to seamlessly index, search, and even interact with all this content like never before? Welcome to the future of Personal Knowledge Management: The Second Brain AI Agent Project.
🌟 Inspired by Tiago Forte's Second Brain Concept
Tiago Forte's groundbreaking idea of the Second Brain has revolutionized the way we think about note-taking. It's not just about jotting down ideas; it's about creating a powerful tool that enhances learning and creativity. Learn more about Building a Second Brain by Tiago Forte here.
💼 What Can the Second Brain AI Agent Project Do for You?
- Automated Indexing: No more manually sorting through files! Automatically index the content of your markdown files along with contained links, such as PDF documents, YouTube videos, and web pages.
- MCP-Powered Retrieval: Use the built-in Model Context Protocol (MCP) server to pull the most relevant context from your notes and plug it into the LLM or workflow of your choice.
- Effortless Integration: Whether you follow the Second Brain method or have your own unique way of note-taking, our system seamlessly integrates with your style, helping you harness the true power of your information.
- Enhanced Productivity: Spend less time organizing and more time innovating. By accessing your information faster and more efficiently, you can focus on what truly matters.
✅ Who Can Benefit?
- Professionals: Streamline your workflow and find exactly what you need in seconds.
- Students: Make study sessions more productive by quickly accessing and understanding your notes.
- Researchers: Dive deep into your research without getting lost in information overload.
- Creatives: Free your creativity by organizing your thoughts and ideas effortlessly.
🚀 Get Started Today
Don't let your notes and content overwhelm you. Make them your allies in growth, innovation, and productivity. Join us in transforming the way you manage your personal knowledge and take the leap into the future.
Details
If you take notes using markdown files, as in the Second Brain method or in your own way, this project automatically indexes the content of the markdown files and the links they contain (PDF documents, YouTube videos, web pages) and lets you ask questions about your content using an OpenAI Large Language Model.
The system is built on top of the LangChain framework and the ChromaDB vector store.
The system takes as input a directory where you store your markdown notes. For example, I take my notes with Obsidian. The system then processes any change in these files automatically with the following pipeline:
graph TD
  A[Markdown files from your editor] --> B[Text files from markdown and pointers]
  B --> C[Text Chunks]
  C --> D[Vector Database]
  D --> E[AI Agent]
From a markdown file, transform_md.py extracts the text, then follows the links inside the file (PDF documents, URLs, YouTube videos) and transforms them into text as well.
Supported link formats:
- Local PDFs: ~/Documents/report.pdf or /home/user/papers/research.pdf
- Remote PDFs: https://arxiv.org/pdf/2305.04091.pdf
- Web pages: https://example.com/article
- YouTube videos: https://www.youtube.com/watch?v=VIDEO_ID
- File URLs: file:///path/to/document.pdf
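A rough sketch of how these link formats could be told apart. The classify_link helper and its return labels are hypothetical, not part of transform_md.py:

```python
from urllib.parse import urlparse

def classify_link(link: str) -> str:
    """Classify a link found in a markdown note into a content type.

    Hypothetical helper: labels are illustrative, not the project's own.
    """
    parsed = urlparse(link)
    if parsed.scheme in ("http", "https"):
        # YouTube links get special handling (transcript extraction)
        if parsed.netloc.endswith("youtube.com") or parsed.netloc == "youtu.be":
            return "youtube"
        if parsed.path.lower().endswith(".pdf"):
            return "remote-pdf"
        return "web"
    if parsed.scheme == "file":
        return "file-url"
    # No scheme: treat it as a local filesystem path
    if link.lower().endswith(".pdf"):
        return "local-pdf"
    return "unknown"
```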
There is some support for extracting history data from the markdown files: if there is an ## History section, or the file name contains History, the file is split into multiple parts according to <day> <month> <year> sections such as ### 10 Sep 2023.
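Splitting on such dated headings can be sketched with a regular expression. The split_history helper below is a hypothetical illustration of the idea, not the project's actual code:

```python
import re
from datetime import datetime

# Matches dated section headings such as "### 10 Sep 2023"
DATE_HEADING = re.compile(r"^### (\d{1,2} [A-Z][a-z]{2} \d{4})$", re.MULTILINE)

def split_history(text: str):
    """Split an ## History section body into (date, text) parts."""
    parts = DATE_HEADING.split(text)
    entries = []
    # re.split with a capture group yields [preamble, date1, body1, date2, ...]
    for date_str, body in zip(parts[1::2], parts[2::2]):
        day = datetime.strptime(date_str, "%d %b %Y").date()
        entries.append((day, body.strip()))
    return entries
```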
From these text files, transform_txt.py breaks the text into chunks, creates vector embeddings, and then stores these embeddings in a vector database.
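The real pipeline uses LangChain's text splitters, but the core idea can be sketched as fixed-size chunking with overlap, so that context is not lost at chunk boundaries. chunk_text is a hypothetical helper and the sizes are illustrative:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50):
    """Naive fixed-size chunking with overlap between consecutive chunks.

    Hypothetical sketch; the project itself relies on LangChain splitters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Advance by chunk_size minus overlap so chunks share a margin
        start += chunk_size - overlap
    return chunks
```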
To be able to manipulate dates for activity reports, the system relies on some naming conventions. First, filenames containing History, Journal or StatusReport are considered journals, with one entry per date in this format: ## 02 Dec 2024. Other files can have an ## History section with entries in this format: ### 02 Dec 2024.
To classify documents, the second brain agent uses the concept of one domain per document. The domain metadata is computed for each document by removing numbers and these strings: At, Journal, Project, Notes and History. This is handy if you name documents like WorkoutHistory202412.md: the domain is then Workout.
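The naming rule can be sketched as follows; compute_domain is a hypothetical helper mirroring the description above, not the project's actual code:

```python
import re
from pathlib import Path

# Strings stripped from filenames when deriving the domain, per the
# convention described in the text.
STOP_WORDS = ("At", "Journal", "Project", "Notes", "History")

def compute_domain(filename: str) -> str:
    """Derive a domain from a note filename by removing digits and stop words."""
    stem = Path(filename).stem
    stem = re.sub(r"\d+", "", stem)  # drop numbers such as 202412
    for word in STOP_WORDS:
        stem = stem.replace(word, "")
    return stem
```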
To know which domain to use to filter documents, the second brain agent uses a special document that can be set in the .env file via the SBA_ORG_DOC variable and that defaults to SecondBrainOrganization.md. This document describes the mapping between domains and other concepts, if you want, for example, to separate work and personal activities.
MCP Server
The Second Brain Agent relies on the Second Brain MCP (Model Context Protocol) server, which provides programmatic access to the vector database and the document retrieval system.
MCP Server Features
- Retrieve Context: Use search_documents to stream back the most relevant chunks from your notes
- Search Documents: Perform semantic search across your documents with metadata filtering
- Document Management: Get document counts, metadata, and list available domains
- Domain-based Search: Search within specific domains (work, personal, etc.)
- Recent Documents: Retrieve recently accessed documents
Using the MCP Server
1. Install the MCP server:
   uv add fastmcp
2. Run the MCP server:
   uv run python mcp_server.py
3. Test the server:
   make test
4. Configure MCP clients using the mcp_config.json file:
   {
     "mcpServers": {
       "second-brain-agent": {
         "command": "/your/path/to/second-brain-agent/mcp-server.sh"
       }
     }
   }
Available MCP Tools
- search_documents: Search for documents using semantic similarity
- get_document_count: Get the total number of documents
- get_domains: List all available domains
- get_recent_documents: Get recently accessed documents
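Under the hood, MCP clients invoke these tools over JSON-RPC 2.0 using the tools/call method. A minimal sketch of the request shape (the argument names query and k are illustrative, not taken from the server's actual schema):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as MCP clients send it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical call: search for the 3 most relevant chunks about LangChain
request = make_tool_call(1, "search_documents", {"query": "LangChain", "k": 3})
```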
Installation
You need a Python 3 interpreter, uv and the inotify-tools installed. All this has been tested under Fedora Linux 42 on my laptop and Ubuntu latest in the CI workflows. Let me know if it works on your system.
Get the source code:
$ git clone https://github.com/flepied/second-brain-agent.git
Copy the example .env file and edit it to suit your settings:
$ cp example.env .env
Install the dependencies using uv:
$ uv sync --all-extras
chromadb is intentionally not installed through uv. This repository loads the
Python package directly from a sibling checkout at ../chroma/chromadb so local
development can reuse the Chroma Python sources without building the native
Chroma package. The actual Chroma server still runs in Docker.
The Chroma container persists its database under /chroma/chroma, so
compose.yaml bind-mounts $DSTDIR/Db there. If that mount
target changes, sba-txt will rebuild the vector database on the next start.
Then to activate the virtual environment, do:
$ source .venv/bin/activate
systemd services
To install systemd services that automatically manage the different scripts when the operating system starts, use the following command (needs sudo access):
$ ./install-systemd-services.sh
To see the output of the md and txt services:
$ journalctl --unit=sba-md.service --user
$ journalctl --unit=sba-txt.service --user
Using the MCP server
The MCP server is now the single interface to explore your second brain. Once the environment is configured you can:
# Start the MCP server
$ uv run python mcp_server.py
To experiment without a dedicated MCP client, the repository ships with a small helper script:
$ uv run python example_mcp_usage.py
The script showcases how to call the exposed tools (document search, counts, domains, and recents) and prints sample results in the terminal. Check the output for an example MCP client configuration snippet that you can paste into Cursor or any other MCP-compatible tool.
Command-line helper
Prefer a quick terminal search? Use the CLI wrapper:
$ uv run python qa.py "What did I learn about LangChain last month?" -k 3
# Filter example: limit to history documents
$ uv run python qa.py "Summarize last quarter highlights" --filter '{"type": {"$eq": "history"}}'
It prints the top matches with their sources so you can jump straight into the relevant files.
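The --filter value follows Chroma's where-filter syntax. A quick sketch of building such filters in Python; the type and domain metadata keys follow the conventions described earlier, and the combined filter is an assumption about how you might scope a query:

```python
import json

# Single condition: only documents whose type metadata is "history".
only_history = {"type": {"$eq": "history"}}

# Combined condition with $and: history entries from a (hypothetical)
# "Workout" domain.
workout_history = {
    "$and": [
        {"type": {"$eq": "history"}},
        {"domain": {"$eq": "Workout"}},
    ]
}

# qa.py expects the filter as a JSON string on the command line.
filter_arg = json.dumps(workout_history)
```

Chroma also accepts operators such as $ne, $in and $or in the same shape.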
Development
Install the test dependencies using uv:
$ uv sync --extra test
And then run the tests, like this:
# Sync the test environment
$ make sync-test
# Run all tests (unit + integration)
$ make test
# Run only unit tests (no external dependencies required)
$ make test-unit
# Run only integration tests (requires vector database)
$ make test-integration
Note: Integration tests require a running vector database and are automatically excluded during pre-commit hooks. Unit tests run without external dependencies and are suitable for CI/CD pipelines.
Full Integration Testing
For comprehensive testing of the entire system including the vector database and MCP server:
$ ./integration-test.sh
This script:
- Sets up a complete test environment with ChromaDB
- Processes test documents through the system
- Runs pytest integration tests to validate MCP server functionality
- Tests document lifecycle (create, modify, delete)
- Provides end-to-end validation of the system
Note: This requires docker-compose/podman-compose and will create temporary test data.
pre-commit
Before submitting a PR, make sure to activate pre-commit:
$ uv run pre-commit install