Your codebase,
remembered.

Switch between Claude, Copilot, Cursor, and Codex without re-explaining a thing. Cross Context keeps persistent memory of your work — readable by any agent, any session.

$ npm install -g cross-context
View on GitHub · Read the docs →
~/projects/api — xctx
$ xctx feat context
# FEAT: payment-integration
Status: in-progress — webhook handler done, writing tests
## Your Memory
· I prefer explicit error types over generic throws
· Always write the test before the implementation
## Project Memory
· Auth uses JWT with 15min expiry, refresh token in httpOnly cookie
· All DB queries go through src/db/query.ts — never raw SQL
## Decisions
· 2026-05-01 Stripe Checkout, not Payment Intents — simpler for MVP
## Relevant Files
· src/routes/payments.ts # main route handler
· src/services/stripe.ts
· src/models/order.ts
The problem

Every agent switch costs you an hour.

You're deep in a feature. You've explained the architecture, scoped the right files, made the hard calls. Then the tokens run out — or you need a second opinion from a different model. And you start from zero.

The token wall

Claude Code hits its context limit mid-feature. You open ChatGPT. The codebase structure, the architectural decision you just made, the three files in scope — gone. Fifteen minutes of re-explaining before you write a single line.

The session reset

You come back the next morning. The agent has no memory of yesterday. It re-reads your file tree, re-asks what framework you're using, re-discovers what you spent an hour explaining last session.

The solution

Memory that lives outside the agent.

Cross Context stores everything in ~/.xctx/ — not inside Claude, not inside Copilot. Any agent you wire up reads the same persistent memory. Switch agents, start a new session, bring in a colleague — the context is always there.
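As a rough sketch of the idea (the actual on-disk layout is up to xctx and may differ between versions — this tree is purely illustrative, mapping the memory layers described below onto plausible directories):

```
~/.xctx/
├── memory/      # developer + project memory (hypothetical path)
├── features/    # per-feature context: status, decisions, linked files (hypothetical path)
└── index/       # on-device codebase embeddings (hypothetical path)
```

Every agent reads from and writes to this one location, which is what makes the memory portable.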

Claude Code → writes context → ~/.xctx/ ← reads context ← ChatGPT · Copilot · Cursor · Windsurf
Five layers of memory

Everything an agent needs to pick up where you left off.

01

Feature Context

What you're building right now — linked files, decisions, blockers, and current status. Updated as you work, read at every session start.

xctx feat context
02

Project Memory

Architectural patterns and constraints for this codebase. Auth setup, DB conventions, API contracts — the things every agent should know before touching a file.

xctx memory add --project
03

Developer Memory

Your personal patterns, preferences, and expertise. Follows you across every project and every agent — not tied to any codebase.

xctx memory add --user
04

Codebase Index

Natural-language search over your entire codebase. Runs fully on-device — no API keys, no code sent anywhere.

xctx search "…"
05

Dependency Graph

File-level dependency map via Tree-sitter. Know what imports what and what breaks before you change anything.

xctx graph deps
Quick start

Up and running in two minutes.

1

Install

npm install -g cross-context
2

Initialize and index your project

xctx init    # installs git hook, sets up ~/.xctx/
xctx update  # indexes your codebase (~88MB model on first run)
3

Start tracking your work

xctx feat start payment-flow
xctx feat decision "Stripe Checkout — simpler for MVP"
xctx feat link-file src/routes/payments.ts
4

Wire up your agents

xctx install claude    # Claude Code
xctx install codex     # OpenAI Codex / ChatGPT
xctx install copilot   # GitHub Copilot
xctx install cursor    # Cursor
xctx install windsurf  # Windsurf
Agent support

One command per agent. Works with all of them.

Each install generates the right config file for that agent and registers the MCP server. Switch agents anytime — the memory stays.

Claude Code
xctx install claude

CLAUDE.md + skill file + MCP

GitHub Copilot
xctx install copilot

copilot-instructions.md + skill + MCP

OpenAI Codex
xctx install codex

AGENTS.md + skill file + MCP

Cursor
xctx install cursor

.cursor/rules + skill + MCP

Windsurf
xctx install windsurf

.windsurfrules + skill + MCP

Also available as an MCP server for any MCP-compatible client: xctx mcp serve
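For a generic MCP client, registering the server usually amounts to a small config entry along these lines (illustrative only; the schema and file location vary by client, and the server name "cross-context" is an arbitrary label — only the `xctx mcp serve` command comes from the docs above):

```json
{
  "mcpServers": {
    "cross-context": {
      "command": "xctx",
      "args": ["mcp", "serve"]
    }
  }
}
```

Check your client's MCP documentation for the exact key names and config path.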

Everything stays on your machine

All memory is stored in ~/.xctx/. The default embedding model (all-MiniLM-L6-v2) runs entirely on-device via ONNX Runtime, so by default no code, decisions, or context ever leaves your machine. If you want higher embedding quality, you can optionally switch to Ollama (still local) or OpenAI (hosted).