Shouldn't your AI know
what you're building?
ContextStream gives every AI tool persistent memory, semantic code search, and learned guardrails — so you stop repeating yourself and start shipping faster.
See the difference in Claude Code
Same task. Same agent. One has ContextStream — the other doesn't.
“ContextStream gave me the same 'developer intuition' that a senior engineer on your team would have, letting me implement a massive, multi-file feature quickly and accurately.”
“The product is excellent, and I love the pace and direction of development. I’m already recommending ContextStream to my colleagues.”
— Early access developer
Search that understands what you mean
Six specialized modes — from semantic understanding to exact symbol lookup. Your agents find the right code on the first try.
That’s real money you’re not burning on redundant context.
Every token earns its keep
ContextStream doesn't just store context — it filters, deduplicates, and compresses so your agents get exactly what they need and nothing they don't.
Intent-Based Filtering
Only sends context relevant to the current task. Focused briefings, not your entire codebase history.
Deduplicated Context
Automatically strips repeated context across sessions. No more paying to explain the same thing twice.
Progressive Compression
Older context is intelligently compressed while preserving key decisions and reasoning.
Pre-loaded Knowledge
Context is indexed and ready before the agent starts. Zero warm-up, instant understanding.
Local MCP server, secure cloud storage
The MCP server runs locally. If you enable code indexing, selected files are sent to our API to generate embeddings and build your encrypted search index. Secrets are excluded by default (.env, keys, dependencies), and you can add project or global ignore rules via .contextstream/ignore. All data is encrypted at rest and never used for AI training.
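For example, a `.contextstream/ignore` file might look like the sketch below — this assumes gitignore-style patterns, which is an assumption on our part; check the docs for the exact syntax:

```
# Secrets are excluded by default; listed here for clarity
.env
*.pem
*.key

# Skip dependencies and build output
node_modules/
dist/

# Skip anything else you don't want indexed
docs/internal/
```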
What we store:
Embeddings, file metadata, indexed file content (depending on which features you enable), dependency graphs, decisions, lessons learned, and memory events. All data is encrypted at rest.
What gets sent over the wire:
- Indexing: File content is sent for embedding generation, code analysis, and search indexing.
- Queries: Your search query is sent; matching code and context are returned from the stored index.
“But don’t AI tools already have memory?”
Built-in memory is limited. ContextStream is the infrastructure.
Memory is just the beginning
Ask questions about your codebase, trace decisions back to code, and understand the impact of changes before you make them.
Impact Analysis
"What breaks if I change the UserService class?"
See all dependencies and side effects before refactoring.
Decision History
"Why did we choose PostgreSQL over MongoDB?"
Recall past decisions with full context and reasoning.
Semantic Code Search
"Find where we handle rate limiting"
Search by meaning, not just keywords. Find code by intent.
Knowledge Graph
"Show dependencies for the auth module"
Pro includes Graph-Lite for module-level links; Elite unlocks full graph layers.
GEMINI PRO 3.1 ON KNOWLEDGE GRAPH
“ContextStream's graph analysis allowed me to quickly parse the relationships in engine.rs... making writing search_nodes_by_name a breeze.”
Correct once. Never again.
When you correct your AI, ContextStream captures the lesson and prevents the same mistake across every future session — automatically.
AI: Done! I've pushed the changes directly to main.
You: NO! You need to run the test suite first. CI is completely broken and the team is blocked.

Next session:
AI: I'll push these changes to main now—
Lesson: "Always run the full test suite before any git push"
AI: Hold on — let me run the test suite first. I have a lesson about this.
AI: All 247 tests passing — safely pushing to main now.
Automatic capture
Corrections are extracted into structured lessons — no manual work needed.
Always-on guardrails
Lessons persist across sessions and tools. Your AI never forgets.
Team-wide protection
One person's correction becomes a guardrail for every teammate.
Up and running in under two minutes
Copy-paste one config block. Ask one question. Watch your AI actually remember.
Install & run the setup wizard (30 seconds)
Run the installer — it handles everything:
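The one-line install command (the same script shown in the quick start at the bottom of this page):

```
curl -fsSL https://contextstream.io/scripts/mcp.sh | bash
```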
The wizard will sign you in, create your API key, and configure your editors automatically.
Manual config (if you skipped the wizard)
Paste this into your Cursor / Claude MCP settings. VS Code: use .vscode/mcp.json with "servers" instead of "mcpServers". Tip: use "inputs" for the API key so it stays out of version control.
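For VS Code specifically, the equivalent `.vscode/mcp.json` might look like the sketch below. The `inputs` / `servers` field names and the `${input:…}` placeholder follow VS Code's MCP config format, and the input `id` is our own illustrative choice — verify against current VS Code documentation:

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "contextstream-api-key",
      "description": "ContextStream API key",
      "password": true
    }
  ],
  "servers": {
    "contextstream": {
      "command": "contextstream-mcp",
      "args": [],
      "env": {
        "CONTEXTSTREAM_API_KEY": "${input:contextstream-api-key}",
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io"
      }
    }
  }
}
```

Prompting for the key via `inputs` keeps it out of version control, as the tip above suggests.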
{
"mcpServers": {
"contextstream": {
"args": [],
"command": "contextstream-mcp",
"env": {
"CONTEXTSTREAM_API_KEY": "your-api-key-here",
"CONTEXTSTREAM_API_URL": "https://api.contextstream.io"
}
}
}
}

Try it out (30 seconds)
Open Cursor or Claude, start a new chat, and type:
"Initialize session and remember that I prefer TypeScript with strict mode, and we use PostgreSQL for this project."

Then start a brand new conversation and ask:
"What are my preferences for this project?"

It remembers. Across sessions. Across tools. Forever.
Your workflow, before and after
Same tasks. Wildly different vibes.
You explain the auth system. Close chat. Open new chat. Explain auth again. Contemplate career in goat farming.
AI recalls your auth decisions from last month — JWT choice, refresh token strategy, that weird edge case at 2am. All of it.
Switch from Cursor to Claude. Lose all context. Start over. Wonder if your AI has amnesia or just doesn't care.
Same memory everywhere. Cursor, Claude, VS Code — your AI knows you the way your barista knows your order.
New team member joins. Spends 3 days asking "why did we..." questions. Everyone pretends to remember.
Shared workspace memory. New hires get full context from day one. Onboarding speedrun unlocked.
"Why did we build it this way?" No one remembers. The Slack thread is gone. The doc is from 2019. You're on your own.
Decisions linked to code. Ask why, get the reasoning and the commit. Your codebase finally has a diary.
You push without tests. AI doesn't say a word. You push without tests again. AI still silent. Chaos reigns.
"Hey, you told me to always remind you about tests." Your AI actually learns from your mistakes — so you don't repeat them.
Your agents finally know what you know
Every decision, preference, and lesson your team captures — indexed, searchable, and growing automatically. This is what happens when your AI actually remembers.
“Being able to query workspace knowledge meant I didn't have to repeatedly re-read the same API models... allowing me to focus on the complex logic of assembling 12 parallel data streams through tokio::join! rather than hunting for definitions.”
Indexing Activity
Files indexed over the past week
Language Distribution
Top languages by line count
Join developers who are done repeating themselves
ContextStream is in active development with real users shipping faster every day. Early adopters get direct access to the team and influence on the roadmap.
“I was tired of explaining our auth setup to Claude every session. Now I just say ‘init session’ and it already knows. It’s like my AI finally graduated from short-term memory.”
— Early access user
Your code stays yours
Your data is encrypted, never used for training, and you control who has access. Security isn’t a feature — it’s the foundation.
Encrypted at Rest
AES-256 encryption at rest, including backups. Industry-standard protection for every byte.
No Training on Your Data
We never use your code to train AI models. Your intellectual property remains yours.
You Control Access
Workspace permissions and API keys give you full control over who sees your data.
Delete Anytime
Delete workspace data anytime from the dashboard. See our Privacy Policy for retention details.
Technical Security Specifications
TLS 1.3 for all data in transit
AES-256 encryption at rest
Infrastructure that scales
Strong security practices
Data encrypted at rest with secure cloud storage
API key authentication for all requests
Frequently Asked Questions
ContextStream is an MCP (Model Context Protocol) server that gives your AI coding assistants persistent memory and code intelligence. It runs locally on your machine, indexes your codebase, and provides context to AI tools like Cursor, Claude Code, and VS Code. When you chat with an AI, ContextStream automatically retrieves relevant decisions, code context, and documentation — so you never have to repeat yourself.
Any tool that supports MCP (Model Context Protocol). That includes Cursor, Claude Code, VS Code with Copilot, Windsurf, and more. Because it uses an open standard, your memory and context are never locked into a single tool.
Every decision, preference, and lesson learned gets stored in your knowledge graph. When you start a new session — even in a different tool — ContextStream provides that context automatically. Close Cursor on Friday, open Claude on Monday, and your AI still knows your architecture, your preferences, and the decisions your team made last sprint.
All data is encrypted at rest with AES-256, transmitted over TLS 1.3, and never used to train AI models. You control access with workspace permissions and API keys, and you can delete your workspace data at any time. Retention and deletion behavior are described in our Privacy Policy.
There's a free plan to get started with generous limits. Paid plans start at $10/month for power users and teams who need persistent storage, advanced code analysis, and team-shared memory. Check our pricing page for full details.
ContextStream maps how your files and modules connect. When you ask "what happens if I change this function?", it traces the dependency graph and tells you exactly which files, tests, and features would be affected — before you make the change.
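Conceptually, impact analysis is a reverse-dependency traversal: follow "who depends on this?" edges until nothing new turns up. A minimal sketch of the idea — not ContextStream's actual implementation; the names and graph are illustrative:

```typescript
// Reverse dependency graph: module -> modules that depend on it.
type ReverseDeps = Map<string, string[]>;

// Collect everything transitively affected by changing `changed`.
function impactedBy(graph: ReverseDeps, changed: string): Set<string> {
  const impacted = new Set<string>();
  const stack = [changed];
  while (stack.length > 0) {
    const node = stack.pop()!;
    for (const dependent of graph.get(node) ?? []) {
      if (!impacted.has(dependent)) {
        impacted.add(dependent);
        stack.push(dependent); // follow transitive dependents
      }
    }
  }
  return impacted;
}

// Hypothetical graph: a controller and a test depend on UserService,
// and the route table depends on the controller.
const deps: ReverseDeps = new Map([
  ["UserService", ["AuthController", "UserService.test"]],
  ["AuthController", ["routes"]],
]);
console.log([...impactedBy(deps, "UserService")]);
// → ["AuthController", "UserService.test", "routes"]
```

Changing `UserService` flags not just its direct dependents but also `routes`, which only touches it through the controller — that transitive reach is what makes the warning useful before a refactor.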
Stop repeating yourself.
Setup takes 2 minutes. Your AI remembers across sessions and tools.
Seriously. We won't make you explain this page to your AI. It'll just know.
Install the ContextStream MCP server
curl -fsSL https://contextstream.io/scripts/mcp.sh | bash

Works with Cursor, Claude Code, VS Code, and any MCP-compatible tool.