Get started with Cross Context

Cross Context keeps five layers of persistent memory about your codebase and your work. Any agent you wire up reads them at session start — switch from Claude to Copilot mid-feature and it picks up exactly where you left off.

Installation #

Install Cross Context globally via npm. Node.js 20 or later is required.

npm install -g cross-context

Verify the installation:

xctx --version

Initialize a project #

Run xctx init inside any git repository. It installs a post-commit hook and creates the project config under ~/.xctx/.

cd my-project
xctx init

Cross Context identifies your project by hashing its git remote URL — the same remote always maps to the same project, so your context survives re-clones and directory moves.

No git remote? Cross Context falls back to the absolute path of the repository root. You can always set a remote later with git remote add origin <url> and re-run xctx init.

Wire up your agent #

Run xctx install <agent> to generate the right config file for your AI coding agent. Cross Context keeps it up to date automatically after every commit.

xctx install claude
xctx install copilot
xctx install codex
xctx install cursor
xctx install windsurf

| Agent | Command | Generated files |
| --- | --- | --- |
| Claude Code | xctx install claude | CLAUDE.md + .claude/skills/xctx.md + MCP in ~/.claude/settings.json |
| GitHub Copilot | xctx install copilot | .github/copilot-instructions.md + .github/skills/xctx/SKILL.md + MCP in .vscode/mcp.json |
| OpenAI Codex | xctx install codex | AGENTS.md + .agents/skills/xctx/SKILL.md + MCP in ~/.codex/config.toml |
| Cursor | xctx install cursor | .cursor/rules/xctx.mdc + .cursor/skills/xctx/SKILL.md + MCP in .cursor/mcp.json |
| Windsurf | xctx install windsurf | .windsurfrules + .windsurf/skills/xctx/SKILL.md + MCP in ~/.codeium/windsurf/mcp_config.json |
| Any MCP client | xctx mcp serve stdio | MCP server |
All agents get MCP integration automatically. Each xctx install <agent> registers the MCP server in the agent's native config. Agents call get_feat_context, search_codebase, record_decision, and other tools natively — no shell permissions needed. Restart your agent once after install.

Switching agents? The memory lives in ~/.xctx/, not inside any agent. Wire up multiple agents and they all read the same context — decisions, files, and status — regardless of which agent wrote it.

Feature Context (Layer 1) #

The working memory of your active feature — linked files, decisions, blockers, and current status. Updated as you work. Read by any agent at session start. When you switch agents, this is what lets the new one continue without a briefing.

Each feature is an append-only event log (events.jsonl) rendered into a context.md your agent reads at session start.

# Start a feature
xctx feat start payment-flow

# Record decisions as you make them
xctx feat decision "Using Stripe Checkout — simpler for MVP"

# Link the files in scope
xctx feat link-file src/routes/payments.ts

# Note blockers
xctx feat blocker "Webhook signature verification failing on localhost"

# Print what any agent reads at session start
xctx feat context

# Mark done — distills insights into memory
xctx feat done

Tip: xctx install <agent> injects xctx feat context automatically at every session start — you don't need to paste it manually.

Project Memory (Layer 2) #

Architectural patterns and constraints that apply across your entire codebase — not tied to any single feature. Auth setup, DB conventions, API contracts — things every agent should know before writing a line.

xctx memory add --project "Auth uses JWT with 15min expiry, refresh token in httpOnly cookie"
xctx memory add --project "All DB queries go through src/db/query.ts — never raw SQL"
xctx memory list --project

Project memory appears at the top of every xctx feat context output, so agents always see it alongside the active feature.

Developer Memory (Layer 3) #

Your personal patterns and preferences — follows you across every project and every agent, not just this codebase. Agents learn how you work, not just what you're building.

xctx memory add --user "I prefer explicit error types over generic Error throws"
xctx memory add --user "Always write the test before the implementation"
xctx memory list --user

Codebase Index (Layer 4) #

Semantic search over your codebase using local embeddings. Uses all-MiniLM-L6-v2 (~88 MB) via ONNX Runtime — no API keys, no network calls, no code leaves your machine.

xctx search "JWT authentication middleware"
xctx search "database connection pooling" --limit 5
xctx search "auth guards" --include-tests
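Under the hood, a query like the ones above becomes a nearest-neighbour lookup over embedding vectors. Here is a minimal sketch of the ranking step, with toy 3-dimensional vectors standing in for the 384-dimensional ones all-MiniLM-L6-v2 produces; the file names and scores are invented for illustration.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, index, limit=5):
    """Rank indexed chunks by similarity to the query vector.
    `index` maps chunk id -> embedding vector."""
    scored = [(cosine(query_vec, vec), cid) for cid, vec in index.items()]
    scored.sort(reverse=True)
    return [cid for _, cid in scored[:limit]]

# Toy index: in the real tool these vectors come from the embedding model.
index = {
    "src/auth/jwt.ts":   [0.9, 0.1, 0.0],
    "src/db/pool.ts":    [0.1, 0.9, 0.1],
    "src/routes/pay.ts": [0.2, 0.2, 0.9],
}
print(search([0.8, 0.2, 0.1], index, limit=2))
# → ['src/auth/jwt.ts', 'src/routes/pay.ts']
```

The real index stores vectors in SQLite via sqlite-vec rather than a Python dict, but the scoring is the same idea.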

The index is updated automatically on every git commit via the post-commit hook installed by xctx init.

To index immediately without committing:

xctx update

xctx update is incremental by default: it hashes every file and re-indexes only those whose content changed. Even on a large project (2,000+ files), subsequent runs complete in seconds because unchanged files are skipped.
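The change detection can be sketched as follows. This is an illustrative reconstruction: the manifest file and the .ts-only glob are assumptions, not the tool's actual storage format.

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: Path, manifest_path: Path) -> list[Path]:
    """Compare current file hashes against the stored manifest and
    return only files whose content changed (or are new)."""
    old = {}
    if manifest_path.exists():
        old = json.loads(manifest_path.read_text())
    current = {str(p): file_hash(p) for p in root.rglob("*.ts") if p.is_file()}
    # Persist the new manifest so the next run skips unchanged files.
    manifest_path.write_text(json.dumps(current))
    return [Path(p) for p, h in current.items() if old.get(p) != h]
```

Only the files this returns need re-embedding, which is why repeat runs are fast.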

Dependency Graph (Layer 5) #

File-level dependency graph built with Tree-sitter. Tells your agent which files import what, which are affected by a change, and what symbols live where — without reading file contents.

# Files this file imports
xctx graph deps src/services/auth.ts

# Files that import this file
xctx graph refs src/routes/payments.ts

# Transitive dependents (BFS, max depth 3)
xctx graph affected src/services/auth.ts

# Top-level functions and classes in a file
xctx graph symbols src/services/auth.ts
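The affected traversal above is a breadth-first walk over reverse-dependency edges, capped at three hops. Here is a sketch under the assumption that the graph is available as a plain importers map (the real tool reads it from graph.db); the file names are invented.

```python
from collections import deque

def affected(start: str, importers: dict[str, list[str]], max_depth: int = 3) -> set[str]:
    """Transitive dependents of `start`: walk reverse-dependency
    edges breadth-first, stopping after max_depth hops."""
    seen = {start}
    out: set[str] = set()
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand beyond the depth cap
        for dep in importers.get(node, []):
            if dep not in seen:
                seen.add(dep)
                out.add(dep)
                queue.append((dep, depth + 1))
    return out

# importers maps a file to the files that import it.
importers = {
    "src/services/auth.ts": ["src/routes/login.ts", "src/middleware/guard.ts"],
    "src/middleware/guard.ts": ["src/routes/payments.ts"],
}
print(sorted(affected("src/services/auth.ts", importers)))
# → ['src/middleware/guard.ts', 'src/routes/login.ts', 'src/routes/payments.ts']
```

Note that payments.ts appears even though it never imports auth.ts directly — it depends on it through guard.ts, which is exactly what the depth-limited BFS captures.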

xctx init #

Initialize Cross Context for the current git repository.

xctx init [--force]

Creates ~/.xctx/projects/<project-id>/ and installs the post-commit hook at .git/hooks/post-commit.

| Flag | Description |
| --- | --- |
| --force | Overwrite existing config and hook |

xctx feat #

Manage feature contexts (Layer 1).

| Subcommand | Description |
| --- | --- |
| feat start <name> | Create and activate a feature |
| feat done | Mark the active feature done |
| feat context [--no-suggest] | Print the rendered context.md (suppress file suggestions with --no-suggest) |
| feat list | List all features |
| feat switch <name> | Switch the active feature |
| feat decision <text> | Record a decision |
| feat blocker <text> | Record a blocker |
| feat link-file <path> | Add a file to the active feature |
| feat unlink-file <path> | Remove a linked file |
| feat note <text> | Add a free-form note |
| feat suggest-files [--limit n] | Suggest files to link based on current context |

xctx install #

xctx install claude|copilot|codex|cursor|windsurf

Generates the agent-specific config file and adds instructions for the agent to load Cross Context at session start.

xctx mcp serve #

xctx mcp serve stdio

Starts a stdio-based MCP server exposing Cross Context tools to any MCP-compatible client. Useful for custom agent setups or IDE extensions that speak the Model Context Protocol.
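For orientation, here is a much-simplified sketch of what such a server looks like. MCP's stdio transport frames JSON-RPC 2.0 messages one per line; this sketch handles only tools/list, is nowhere near spec-complete, and every detail is illustrative rather than taken from the tool's source.

```python
import json
import sys

# Tool names from the install section above; the set is abbreviated here.
TOOLS = ["get_feat_context", "search_codebase", "record_decision"]

def handle(request: dict) -> dict:
    """Answer a single JSON-RPC 2.0 request. Only tools/list is sketched;
    a real MCP server also implements initialize, tools/call, and more."""
    rid = request.get("id")
    if request.get("method") == "tools/list":
        return {"jsonrpc": "2.0", "id": rid,
                "result": {"tools": [{"name": n} for n in TOOLS]}}
    return {"jsonrpc": "2.0", "id": rid,
            "error": {"code": -32601, "message": "Method not found"}}

def serve_stdio() -> None:
    # stdio transport: one newline-delimited JSON-RPC message per line
    # on stdin, one response per line on stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

A client launches the process, writes requests to its stdin, and reads responses from its stdout — which is why no port or network configuration is involved.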

Data model #

Each feature is an append-only event log. Events are never mutated; the context.md snapshot is re-derived from the log rather than edited in place.

// FeatureEvent (events.jsonl)
{
  "id":        "evt_01j...",
  "type":      "decision" | "blocker" | "link-file" | "note" | ...,
  "payload":   "<text or path>",
  "timestamp": "2026-05-02T14:32:00Z"
}

The context.md file is derived from this log on every write — it is never the source of truth.
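The append-then-derive cycle can be sketched like this. Field names follow the event example above, but the rendered markdown format is an assumption, and real events also carry a ULID-style id (evt_…) omitted here.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def append_event(feat_dir: Path, type_: str, payload: str) -> None:
    """Append one immutable event, then re-derive context.md from the full log."""
    event = {
        "type": type_,
        "payload": payload,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with (feat_dir / "events.jsonl").open("a") as f:
        f.write(json.dumps(event) + "\n")
    render_context(feat_dir)

def render_context(feat_dir: Path) -> None:
    """Derive context.md from events.jsonl — the log is the source of truth."""
    lines = (feat_dir / "events.jsonl").read_text().splitlines()
    events = [json.loads(l) for l in lines]
    decisions = [e["payload"] for e in events if e["type"] == "decision"]
    blockers = [e["payload"] for e in events if e["type"] == "blocker"]
    md = "## Decisions\n" + "".join(f"- {d}\n" for d in decisions)
    md += "## Blockers\n" + "".join(f"- {b}\n" for b in blockers)
    (feat_dir / "context.md").write_text(md)
```

Because the snapshot is always rebuilt from the whole log, hand-editing context.md is pointless — the next event overwrites it.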

Local storage layout #

~/.xctx/
  config.json
  projects/
    {project-id}/          # xxh3(git remote)[0:16]
      meta.json
      index.db             # vector index (SQLite + sqlite-vec)
      graph.db             # structural dependency graph (SQLite)
      feats/
        {feat-name}/
          events.jsonl     # source of truth (append-only)
          context.md       # derived — do not edit
          meta.json
      active_feat          # name of active feature (plain text)

All data lives under ~/.xctx/. Nothing is written to your repository except the git hook and the agent config file.

Privacy #

Cross Context is designed to run entirely offline by default. The semantic index uses all-MiniLM-L6-v2 downloaded once from Hugging Face (~88 MB) and cached locally. No code or context is ever sent to any external server.

Optionally, you can use Ollama (local) or OpenAI (cloud) for better embedding quality:

xctx config set embedding.provider ollama
xctx config set embedding.provider openai

When using OpenAI, only code chunks are sent to the embedding API. Cross Context never sends your feature context, decisions, or blockers to any external service.