Connect Your AI Tools
ContextStream integrates with AI tools via the Model Context Protocol (MCP). Give Claude, Cursor, Windsurf, Cline, Kilo Code, and Roo Code persistent memory across all your conversations.
What is MCP?
The Model Context Protocol (MCP) is an open standard that allows AI assistants to connect to external tools and data sources. With ContextStream's MCP server, your AI tools can:
- Remember conversations and decisions across sessions
- Search your codebase and documentation semantically
- Build and query knowledge graphs
- Share context between different AI tools
Prerequisites
- A ContextStream account with an API key
- Node.js 18+ installed on your system
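To confirm the prerequisites, you can run a quick check from a terminal. This is only a sanity check (assuming node and npx are already on your PATH), not a required installation step:
```
# Node.js must report v18 or newer
node --version

# npx ships with npm and is used to launch the MCP server on demand
npx --version
```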
Cursor / VS Code
Add ContextStream to your Cursor or VS Code MCP configuration:
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
After editing the config, restart your editor for changes to take effect.
Codex CLI
To use ContextStream with the Codex CLI, add the MCP server configuration to your ~/.codex/config.toml file:
```
[mcp_servers.contextstream]
command = "npx"
args = ["-y", "@contextstream/mcp-server"]

[mcp_servers.contextstream.env]
CONTEXTSTREAM_API_URL = "https://api.contextstream.io"
CONTEXTSTREAM_API_KEY = "your_api_key"
```
After editing the config, restart Codex so it can load the ContextStream MCP server.
Claude Code (CLI)
Add ContextStream to Claude Code using the CLI command:
```
claude mcp add contextstream -e CONTEXTSTREAM_API_URL=https://api.contextstream.io -e CONTEXTSTREAM_API_KEY=your_api_key -- npx -y @contextstream/mcp-server
```
Or manually edit your settings file at ~/.claude/settings.json:
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
After adding the MCP server, restart Claude Code for changes to take effect. You can verify the server is loaded by running claude mcp list.
Claude Desktop (GUI App)
Add ContextStream to the Claude Desktop application:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
After editing the config, quit and restart Claude Desktop for changes to take effect.
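On macOS, you can create and open the config file from a terminal before pasting in the snippet above. This is a convenience sketch only; the path is the one listed above, and on Windows you would edit the %APPDATA%\Claude file instead:
```
# Create the config file if it does not exist yet, then open it in TextEdit
mkdir -p "$HOME/Library/Application Support/Claude"
touch "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
open -e "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```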
Windsurf (Codeium)
Windsurf supports MCP servers through its configuration:
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```

Cline
Add ContextStream to your Cline MCP configuration. Click the MCP Servers icon in Cline, select the "Configure" tab, then click "Configure MCP Servers" to edit:
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
After editing the config, restart Cline for changes to take effect. You can also use alwaysAllow to auto-approve specific tools.
Kilo Code
Add ContextStream to your Kilo Code MCP configuration. You can configure MCP servers globally or per-project:
Global: Click Settings → MCP Servers → Installed → Edit Global MCP to open mcp_settings.json
Project: .kilocode/mcp.json in your project root
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
Project-level configs take precedence over global configs. Restart Kilo Code after editing.
Roo Code
Add ContextStream to your Roo Code MCP configuration. You can configure MCP servers globally or per-project:
Global: Click the settings icon → Edit Global MCP to open mcp_settings.json
Project: .roo/mcp.json in your project root
```
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}
```
Project-level configs take precedence over global configs. Restart Roo Code after editing.
Improve Automatic ContextStream Usage
Recommended
ContextStream's auto-context feature loads workspace context automatically when you use any ContextStream tool. However, AI assistants may not always proactively save decisions or recall past context. Adding editor AI rules improves consistency and ensures the AI automatically captures decisions, preferences, and important context throughout your conversations.
You can add ContextStream rules at two levels: Global (applies to all projects) or Project (applies to one project).
Global Rules (All Projects)
Add these once and they'll apply to every project automatically:
| Editor | Global Rules Location |
|---|---|
| Windsurf | ~/.codeium/windsurf/memories/global_rules.md |
| Cursor | Settings → General → Rules for AI |
| Cline | ~/Documents/Cline/Rules/ folder |
| Kilo Code | ~/.kilocode/rules/ folder |
| Roo Code | ~/.roo/rules/ folder |
| Claude Code | ~/.claude/CLAUDE.md |
Project Rules (Single Project)
Add these to a specific project. For Windsurf and Cursor folder-based rules, set the activation mode to "Always On" so the rules are always active.
| Editor | Project Rules Location |
|---|---|
| Windsurf | .windsurfrules or .windsurf/rules/*.md |
| Cursor | .cursorrules or .cursor/rules/*.mdc |
| Cline | .clinerules file or .clinerules/ folder |
| Kilo Code | .kilocode/rules/ folder |
| Roo Code | .roorules file or .roo/rules/ folder |
| Claude Code | CLAUDE.md in project root |
| Aider | .aider.conf.yml in project root |
Activation Modes (Windsurf, Cursor, Kilo Code & Roo Code)
When using folder-based rules (e.g., .windsurf/rules/), each rule file has an activation mode:
- Always On — Always active (recommended for ContextStream)
- Manual — Only when you @mention the rule
- Model Decision — AI decides based on description
- Glob — Active for matching file patterns
Global rules (global_rules.md) and root-level files (.windsurfrules) are always active.
Add this content to your rules file. Here are examples for each editor:
Claude Code
For Claude Code, create a CLAUDE.md file in your project root or add to your global ~/.claude/CLAUDE.md:
## ContextStream Integration
You have access to ContextStream MCP tools for persistent memory and context.
## TL;DR - REQUIRED EVERY MESSAGE
| Message | What to Call |
|---------|--------------|
| **1st message** | `mcp__contextstream__session_init(folder_path="...", context_hint="<user's message>")` |
| **2nd+ messages** | `mcp__contextstream__context_smart(user_message="<user's message>")` |
| **After completing task** | `mcp__contextstream__session_capture(...)` - MUST capture decisions/insights |
**NO EXCEPTIONS.** Do not skip even if you think you have enough context.
---
### Why context_smart is Required (Even After session_init)
**Common mistake:** "session_init already gave me context, I don't need context_smart"
**This is WRONG:**
- `session_init` returns the last ~10 items **BY TIME** (chronological)
- `context_smart` **SEARCHES** for items **RELEVANT to THIS message** (semantic)
**Without context_smart, you WILL miss relevant older context.**
---
### When to Capture (MANDATORY)
| When | Tool | Example |
|------|------|---------|
| User makes a decision | `mcp__contextstream__session_capture` | "Let's use PostgreSQL" → capture as decision |
| User states preference | `mcp__contextstream__session_capture` | "I prefer TypeScript" → capture as preference |
| You complete a task | `mcp__contextstream__session_capture` | Capture what was done |
| Need past context | `mcp__contextstream__session_recall` | "What did we decide about X?" |
**You MUST capture after completing any significant task.**

Windsurf
For Windsurf, create a .windsurfrules file in your project root or add to your global ~/.codeium/windsurf/memories/global_rules.md:
## ContextStream Integration
You have access to ContextStream MCP tools for persistent memory and context.
## TL;DR - REQUIRED EVERY MESSAGE
| Message | What to Call |
|---------|--------------|
| **1st message** | `session_init(folder_path="...", context_hint="<user's message>")` |
| **2nd+ messages** | `context_smart(user_message="<user's message>")` |
| **After completing task** | `session_capture(...)` - MUST capture decisions/insights |
**NO EXCEPTIONS.** Do not skip even if you think you have enough context.
---
### Why context_smart is Required (Even After session_init)
**Common mistake:** "session_init already gave me context, I don't need context_smart"
**This is WRONG:**
- `session_init` returns the last ~10 items **BY TIME** (chronological)
- `context_smart` **SEARCHES** for items **RELEVANT to THIS message** (semantic)
**Without context_smart, you WILL miss relevant older context.**
---
### When to Capture (MANDATORY)
| When | Tool | Example |
|------|------|---------|
| User makes a decision | `session_capture` | "Let's use PostgreSQL" → capture as decision |
| User states preference | `session_capture` | "I prefer TypeScript" → capture as preference |
| You complete a task | `session_capture` | Capture what was done |
| Need past context | `session_recall` | "What did we decide about X?" |
**You MUST capture after completing any significant task.**
---
### Quick Examples
```
# First message - user asks about auth
session_init(folder_path="/path/to/project", context_hint="how should I implement auth?")
# Second message - user asks about database
context_smart(user_message="what database should I use?")
# Returns: W:Maker|P:myproject|D:Use PostgreSQL|D:No ORMs|M:DB schema at...
# User says "Let's use Redis for caching"
session_capture(event_type="decision", title="Caching Choice", content="Using Redis")
# Check past decisions
session_recall(query="what did we decide about caching?")
```

Kilo Code
For Kilo Code, create a Markdown file in .kilocode/rules/:
## ContextStream Integration
You have access to ContextStream MCP tools for persistent memory and context.
## TL;DR - REQUIRED EVERY MESSAGE
| Message | What to Call |
|---------|--------------|
| **1st message** | `session_init(folder_path="...", context_hint="<user's message>")` |
| **2nd+ messages** | `context_smart(user_message="<user's message>")` |
| **After completing task** | `session_capture(...)` - MUST capture decisions/insights |
**NO EXCEPTIONS.** Do not skip even if you think you have enough context.
---
### Why context_smart is Required (Even After session_init)
**Common mistake:** "session_init already gave me context, I don't need context_smart"
**This is WRONG:**
- `session_init` returns the last ~10 items **BY TIME** (chronological)
- `context_smart` **SEARCHES** for items **RELEVANT to THIS message** (semantic)
**Without context_smart, you WILL miss relevant older context.**
---
### When to Capture (MANDATORY)
| When | Tool | Example |
|------|------|---------|
| User makes a decision | `session_capture` | "Let's use PostgreSQL" → capture as decision |
| User states preference | `session_capture` | "I prefer TypeScript" → capture as preference |
| You complete a task | `session_capture` | Capture what was done |
| Need past context | `session_recall` | "What did we decide about X?" |
**You MUST capture after completing any significant task.**

Cline
For Cline, create a .clinerules file in your project root or use the .clinerules/ folder:
## ContextStream Integration
You have access to ContextStream MCP tools for persistent memory and context.
## TL;DR - REQUIRED EVERY MESSAGE
| Message | What to Call |
|---------|--------------|
| **1st message** | `session_init(folder_path="...", context_hint="<user's message>")` |
| **2nd+ messages** | `context_smart(user_message="<user's message>")` |
| **After completing task** | `session_capture(...)` - MUST capture decisions/insights |
**NO EXCEPTIONS.** Do not skip even if you think you have enough context.
---
### Why context_smart is Required (Even After session_init)
**Common mistake:** "session_init already gave me context, I don't need context_smart"
**This is WRONG:**
- `session_init` returns the last ~10 items **BY TIME** (chronological)
- `context_smart` **SEARCHES** for items **RELEVANT to THIS message** (semantic)
**Without context_smart, you WILL miss relevant older context.**
---
### When to Capture (MANDATORY)
| When | Tool | Example |
|------|------|---------|
| User makes a decision | `session_capture` | "Let's use PostgreSQL" → capture as decision |
| User states preference | `session_capture` | "I prefer TypeScript" → capture as preference |
| You complete a task | `session_capture` | Capture what was done |
| Need past context | `session_recall` | "What did we decide about X?" |
**You MUST capture after completing any significant task.**

Roo Code
For Roo Code, create a .roorules file or use the .roo/rules/ folder:
## ContextStream Integration
You have access to ContextStream MCP tools for persistent memory and context.
## TL;DR - REQUIRED EVERY MESSAGE
| Message | What to Call |
|---------|--------------|
| **1st message** | `session_init(folder_path="...", context_hint="<user's message>")` |
| **2nd+ messages** | `context_smart(user_message="<user's message>")` |
| **After completing task** | `session_capture(...)` - MUST capture decisions/insights |
**NO EXCEPTIONS.** Do not skip even if you think you have enough context.
---
### Why context_smart is Required (Even After session_init)
**Common mistake:** "session_init already gave me context, I don't need context_smart"
**This is WRONG:**
- `session_init` returns the last ~10 items **BY TIME** (chronological)
- `context_smart` **SEARCHES** for items **RELEVANT to THIS message** (semantic)
**Without context_smart, you WILL miss relevant older context.**
---
### When to Capture (MANDATORY)
| When | Tool | Example |
|------|------|---------|
| User makes a decision | `session_capture` | "Let's use PostgreSQL" → capture as decision |
| User states preference | `session_capture` | "I prefer TypeScript" → capture as preference |
| You complete a task | `session_capture` | Capture what was done |
| Need past context | `session_recall` | "What did we decide about X?" |
**You MUST capture after completing any significant task.**

Auto-Generate Rules
You can also ask the AI to generate these rules automatically by saying: "Use generate_editor_rules to create ContextStream rules for this project"
Token-Saving Context Tools
Save ~80% on AI Tokens
These tools let AI editors use ContextStream for context instead of including full chat history in every prompt. This dramatically reduces token usage and cost while maintaining context quality.
Instead of AI editors sending your entire conversation history with every prompt, these tools retrieve only relevant context on-demand:
| Tool | Purpose | Token Usage |
|---|---|---|
| context_smart | KEY TOOL - Get minified relevant context before every response | ~200 tokens |
| session_summary | Compact workspace summary for conversation start | ~500 tokens |
| ai_context_budget | Get context within a specified token budget | Custom budget |
| session_compress | Extract key info from chat history, store as memory | N/A (saves data) |
| session_delta | Get incremental context changes since timestamp | ~100 tokens |
How context_smart Works
The AI calls context_smart with each user message to get only relevant context:
```
# User asks: "how should I implement auth?"
# AI calls: context_smart(user_message="how should I implement auth?")

# Returns minified format:
W:Maker|P:contextstream|D:Use JWT for auth|D:No session cookies|M:Auth API at /auth/...

# Type codes: W=Workspace, P=Project, D=Decision, M=Memory, I=Insight
# ~200 tokens instead of ~5,000 tokens for full chat history
```
Token Savings Example
❌ Traditional (Full History)
- Turn 1: 2,000 tokens
- Turn 5: 10,000 tokens
- Turn 10: 20,000+ tokens
- Total: ~50,000 tokens
✅ With ContextStream
- Turn 1: 500 tokens (summary)
- Turn 5: 700 tokens (smart context)
- Turn 10: 800 tokens (smart context)
- Total: ~8,000 tokens (84% savings)
To enable automatic token-saving, add editor rules (see above) that instruct the AI to call context_smart before every response.
All 69 MCP Tools
ContextStream provides 69 MCP tools organized into categories. You don't need to memorize tool names—just ask naturally and the AI will use the right tool. Each tool below includes an example prompt you can use.
Tip: Once your session is initialized (which happens automatically), most features work by simply asking questions like "What did we decide about the database?" or "Remember that we chose React".
Essential Session Tools
- session_init: Initialize session and load workspace context, decisions, and memory
- session_capture: Capture decisions, preferences, bugs, features, and insights
- session_recall: Search past context using natural language
- session_remember: Quick way to store important context
- context_smart: Get minified relevant context for a query (call before every response)

Token-Saving Tools
- session_summary: Get compact context summary (~500 tokens)
- session_compress: Extract and store key info from chat history
- ai_context_budget: Get context within a token budget
- session_delta: Get changes since a timestamp

Memory & Context
- session_smart_search: Search with automatic context enrichment
- session_get_user_context: Get user preferences and coding style
- memory_create_event: Create structured memory events with metadata
- memory_search: Search existing memory and notes
- memory_decisions: List past decisions for reference
- memory_timeline: View chronological history of a workspace
- memory_summary: Get condensed summary of workspace memory
- memory_list_events: List all memory events
- memory_get_event: Get a specific memory event
- memory_update_event: Update an existing memory event
- memory_delete_event: Delete a memory event
- memory_distill_event: Extract key insights from a memory event
- memory_bulk_ingest: Bulk import multiple memory events

Knowledge Graph
- memory_create_node: Create a knowledge node with relations
- memory_list_nodes: List knowledge graph nodes
- memory_get_node: Get a specific knowledge node
- memory_update_node: Update a knowledge node
- memory_delete_node: Delete a knowledge node
- memory_supersede_node: Replace a node with updated info (maintains history)
- graph_related: Find related knowledge nodes
- graph_path: Find path between two nodes
- graph_decisions: Query decision history from the graph
- graph_contradictions: Find contradicting information

Code Search
- search_semantic: Find code by meaning/intent (best for 'how does X work?')
- search_hybrid: Combined semantic + keyword search (most versatile)
- search_keyword: Exact text/symbol search
- search_pattern: Regex pattern search
- search_suggestions: Get search suggestions based on partial query

AI Context Building
- ai_context: Build LLM context for a query (docs + code + memory)
- ai_enhanced_context: Deeper analysis with more context
- ai_embeddings: Generate embeddings for text
- ai_plan: Generate development plan from description
- ai_tasks: Generate tasks from plan or description

Code Analysis
- graph_dependencies: Find what a file/function depends on
- graph_impact: Analyze impact of changing something
- graph_call_path: Trace call paths between functions
- graph_circular_dependencies: Detect circular imports/dependencies
- graph_unused_code: Find dead/unused code

Project Management
- projects_list: List all projects
- projects_get: Get project details
- projects_create: Create a new project
- projects_update: Rename or update a project
- projects_delete: Delete a project and its contents
- projects_overview: Get project summary and stats
- projects_files: List indexed files in a project
- projects_statistics: Get detailed code statistics
- projects_index: Trigger re-indexing of project
- projects_index_status: Check indexing status
- projects_ingest_local: Index a local directory

Workspace Organization
- workspaces_list: List accessible workspaces
- workspaces_get: Get workspace details
- workspaces_create: Create a new workspace
- workspaces_update: Rename or update a workspace
- workspaces_delete: Delete a workspace and all its contents
- workspaces_overview: Get workspace summary
- workspaces_content: List content in workspace
- workspaces_analytics: Get workspace usage analytics
- workspace_associate: Link a folder to a workspace

Utilities
- auth_me: Get current user profile
- generate_editor_rules: Generate AI rules files for editors

Usage Examples
Once connected, you can ask your AI assistant things like:
"Remember that we decided to use PostgreSQL for the database"
"What were our previous decisions about authentication?"
"Search our codebase for how we handle API rate limiting"
"Show me related context about the payment system"
Troubleshooting
MCP server not starting
Ensure Node.js 18+ is installed and npx is available in your PATH. Try running npx @contextstream/mcp-server manually to check for errors.
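If the cause isn't obvious, a few terminal commands can narrow it down. This is a rough diagnostic sketch (output varies by system); when launched manually, the server communicates over stdin/stdout, so a clean start usually just sits waiting for MCP requests. Press Ctrl+C to exit.
```
# Confirm node and npx resolve on your PATH
command -v node && node --version
command -v npx

# Run the server by hand to surface startup or authentication errors
CONTEXTSTREAM_API_URL=https://api.contextstream.io \
CONTEXTSTREAM_API_KEY=your_api_key \
npx -y @contextstream/mcp-server
```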
Authentication errors
Verify your API key is correct and has not expired. You can generate a new key from your ContextStream dashboard.
Tools not appearing
Restart your AI application after modifying the config. Check the application logs for MCP connection errors.