Your codebase,
remembered.

Mnemo gives AI coding agents persistent memory of your project — decisions made, files in scope, blockers resolved — across every session.

$ npm install -g mnemo-cli
~/projects/api — mnemo
$ mnemo feat context
# FEAT: payment-integration
Branch: feature/payment-integration
Status: in-progress
Last updated: 2026-04-25
## Relevant Files
· src/routes/payments.ts # main route handler
· src/services/stripe.ts
· src/models/order.ts
## Decisions
· 2026-04-20 Stripe Checkout, not Payment Intents — simpler for MVP
· 2026-04-22 Orders stay PENDING until webhook confirms payment
## Current Status
Webhook handler implemented, writing tests.
The problem

Agents start blind. Every session.

Every AI coding session starts the same way — the agent re-reads your files, re-traces your imports, re-asks what you're building. All the context from yesterday is gone.

The cold-start tax

On a medium-sized codebase, an AI agent spends 2,000–8,000 tokens just mapping the project before writing a single line of code. Every. Single. Session.

Decisions don't persist

You chose Stripe Checkout over Payment Intents. You scoped three files. You resolved a webhook blocker. None of that exists next session — you explain it again from scratch.

How it works

Three layers of memory

Mnemo maintains three complementary indexes, updated automatically on every git commit.

Layer 1

Semantic Index

Natural-language search over your entire codebase using local embeddings. No API keys, no internet, no code sent anywhere. Runs entirely on-device via ONNX Runtime.
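The core idea can be sketched in a few lines: embed every file, embed the query the same way, and rank by cosine similarity. This is an illustrative sketch only, not Mnemo's actual implementation — the toy 3-dimensional vectors and file paths are stand-ins (a real model like all-MiniLM-L6-v2 emits 384-dimensional embeddings).

```typescript
// Illustrative sketch of embedding-based semantic search — not Mnemo's code.
type Embedded = { path: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank indexed files by similarity to the query vector, best first.
function search(index: Embedded[], query: number[], topK = 2): string[] {
  return [...index]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, topK)
    .map((e) => e.path);
}

// Toy 3-dimensional "embeddings" standing in for real model output.
const index: Embedded[] = [
  { path: "src/routes/payments.ts", vector: [0.9, 0.1, 0.0] },
  { path: "src/models/order.ts", vector: [0.2, 0.8, 0.1] },
  { path: "src/services/stripe.ts", vector: [0.7, 0.3, 0.2] },
];

console.log(search(index, [1, 0, 0])); // the two files nearest the query vector
```

Because both the model and the index live on your machine, the query never leaves it.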

Layer 2

Structural Graph

File-level dependency graph via Tree-sitter. Know which files import what, which are affected by a change, and what symbols live where — without reading a single file.
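The "which files are affected by a change" query amounts to walking a dependency graph backwards. A minimal sketch, assuming hypothetical file contents (Mnemo uses Tree-sitter for real parsing; a regex stands in for it here):

```typescript
// Illustrative sketch of a file-level dependency graph — not Mnemo's code.
const files: Record<string, string> = {
  "src/routes/payments.ts": 'import { createSession } from "../services/stripe";',
  "src/services/stripe.ts": 'import { Order } from "../models/order";',
  "src/models/order.ts": "export interface Order { id: string }",
};

// Map each file to the project files it imports (crude path resolution).
function buildGraph(src: Record<string, string>): Map<string, string[]> {
  const graph = new Map<string, string[]>();
  for (const [path, code] of Object.entries(src)) {
    const deps = [...code.matchAll(/from "\.\.\/(.+?)"/g)].map(
      (m) => `src/${m[1]}.ts`,
    );
    graph.set(path, deps);
  }
  return graph;
}

// "What breaks if this file changes?" = follow the reverse edges transitively.
function affectedBy(graph: Map<string, string[]>, changed: string): string[] {
  const out: string[] = [];
  for (const [path, deps] of graph) {
    if (deps.includes(changed)) out.push(path, ...affectedBy(graph, path));
  }
  return out;
}

const graph = buildGraph(files);
console.log(affectedBy(graph, "src/models/order.ts")); // files that transitively import order.ts
```

An agent can answer impact questions from this graph without opening any of the files involved.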

Layer 3 · The differentiator

FEAT Context Cache

Per-feature decisions, linked files, blockers, and status — persisted across sessions as structured events. Your agent reads this at session start and already knows where you left off.
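One way to picture "persisted as structured events": an append-only log that each session replays into the current feature context. This is a conceptual sketch under assumed event shapes, not Mnemo's actual storage format:

```typescript
// Illustrative sketch of a per-feature event log — not Mnemo's storage format.
type FeatEvent =
  | { kind: "decision"; date: string; text: string }
  | { kind: "link-file"; date: string; path: string }
  | { kind: "status"; date: string; text: string };

interface FeatContext {
  decisions: string[];
  files: string[];
  status: string;
}

// Replaying the log at session start reconstructs where you left off.
function replay(events: FeatEvent[]): FeatContext {
  const ctx: FeatContext = { decisions: [], files: [], status: "unknown" };
  for (const e of events) {
    if (e.kind === "decision") ctx.decisions.push(`${e.date} ${e.text}`);
    else if (e.kind === "link-file") ctx.files.push(e.path);
    else ctx.status = e.text; // latest status event wins
  }
  return ctx;
}

const log: FeatEvent[] = [
  { kind: "decision", date: "2026-04-20", text: "Stripe Checkout, not Payment Intents" },
  { kind: "link-file", date: "2026-04-20", path: "src/routes/payments.ts" },
  { kind: "status", date: "2026-04-25", text: "Webhook handler implemented, writing tests." },
];

console.log(replay(log).status);
```

Appending events rather than overwriting a document is what lets decisions survive across sessions with their dates intact.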

Quick start

Up and running in two minutes

Four steps and your AI agent has persistent memory.

1. Install

npm install -g mnemo-cli
2. Initialize and index your project

mnemo init    # installs git hook, creates ~/.mnemo/ config
mnemo update  # indexes codebase (~88MB ONNX model on first run; incremental on subsequent runs)
3. Start tracking a feature

mnemo feat start payment-flow
mnemo feat decision "Using Stripe Checkout — simpler for MVP"
mnemo feat link-file src/routes/payments.ts
4. Wire up your AI agent

mnemo install claude    # Claude Code
mnemo install codex     # OpenAI Codex / ChatGPT
mnemo install copilot   # GitHub Copilot
mnemo install cursor    # Cursor
mnemo install windsurf  # Windsurf
Agent support

Works with every major AI coding agent

One install command per agent. Mnemo generates the right config file for each one — and keeps it up to date.

Claude Code
mnemo install claude

CLAUDE.md + skill file + MCP server

GitHub Copilot
mnemo install copilot

copilot-instructions.md + skill file

OpenAI Codex
mnemo install codex

AGENTS.md + skill file

Cursor
mnemo install cursor

.cursor/rules + skill file

Windsurf
mnemo install windsurf

.windsurfrules + skill file

Also available as an MCP server for any MCP-compatible client: mnemo mcp serve
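Most MCP-compatible clients register stdio servers in a JSON config. The exact file name and location vary by client (check your client's documentation); a typical entry would look something like:

```json
{
  "mcpServers": {
    "mnemo": {
      "command": "mnemo",
      "args": ["mcp", "serve"]
    }
  }
}
```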

Everything stays on your machine

All indexes are stored in ~/.mnemo/. The default embedding model (all-MiniLM-L6-v2) runs entirely on-device via ONNX Runtime. No code is ever sent to any server. Optionally use Ollama or OpenAI for better embedding quality.