Add MCP HTTP/SSE server and complete n8n integration
Major Changes:
- Implemented MCP HTTP/SSE transport server for n8n and web clients
- Created mcp_server/http_server.py with FastAPI for JSON-RPC 2.0 over HTTP
- Added health check endpoint (/health) for container monitoring
- Refactored mcp-server/ to mcp_server/ (Python module structure)
- Updated Dockerfile.mcp to run HTTP server with health checks

MCP Server Features:
- 7 memory tools exposed via MCP (add, search, get, update, delete)
- HTTP/SSE transport on port 8765 for n8n integration
- stdio transport for Claude Code integration
- JSON-RPC 2.0 protocol implementation
- CORS support for web clients

n8n Integration:
- Successfully tested with AI Agent workflows
- MCP Client Tool configuration documented
- Working webhook endpoint tested and verified
- System prompt optimized for automatic user_id usage

Documentation:
- Created comprehensive Mintlify documentation site
- Added docs/mcp/introduction.mdx - MCP server overview
- Added docs/mcp/installation.mdx - Installation guide
- Added docs/mcp/tools.mdx - Complete tool reference
- Added docs/examples/n8n.mdx - n8n integration guide
- Added docs/examples/claude-code.mdx - Claude Code setup
- Updated README.md with MCP HTTP server info
- Updated roadmap to mark Phase 1 as complete

Bug Fixes:
- Fixed synchronized delete operations across Supabase and Neo4j
- Updated memory_service.py with proper error handling
- Fixed Neo4j connection issues in delete operations

Configuration:
- Added MCP_HOST and MCP_PORT environment variables
- Updated .env.example with MCP server configuration
- Updated docker-compose.yml with MCP container health checks

Testing:
- Added test scripts for MCP HTTP endpoint verification
- Created test workflows in n8n
- Verified all 7 memory tools working correctly
- Tested synchronized operations across both stores

Version: 1.0.0
Status: Phase 1 Complete - Production Ready

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
18
docs/ecosystem.config.js
Normal file
@@ -0,0 +1,18 @@
module.exports = {
  apps: [
    {
      name: 'mem0-docs',
      cwd: '/home/klas/mem0/docs',
      script: 'mintlify',
      args: 'dev --no-open',
      interpreter: 'none',
      instances: 1,
      autorestart: true,
      watch: false,
      max_memory_restart: '500M',
      env: {
        NODE_ENV: 'production'
      }
    }
  ]
};
422
docs/examples/claude-code.mdx
Normal file
@@ -0,0 +1,422 @@
---
title: 'Claude Code Integration'
description: 'Use T6 Mem0 v2 with Claude Code for AI-powered development'
---

# Claude Code Integration

Integrate the T6 Mem0 v2 MCP server with Claude Code to give your AI coding assistant persistent memory across sessions.

## Prerequisites

- Claude Code CLI installed
- T6 Mem0 v2 MCP server installed locally
- Python 3.11+ environment
- Running Supabase and Neo4j instances

## Installation

### 1. Install Dependencies

```bash
cd /path/to/t6_mem0_v2
pip install -r requirements.txt
```

### 2. Configure Environment

Create a `.env` file with the required credentials:

```bash
# OpenAI
OPENAI_API_KEY=your_openai_key_here

# Supabase (Vector Store)
SUPABASE_CONNECTION_STRING=postgresql://user:pass@host:port/database

# Neo4j (Graph Store)
NEO4J_URI=neo4j://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password

# Mem0 Configuration
MEM0_COLLECTION_NAME=t6_memories
MEM0_EMBEDDING_DIMS=1536
MEM0_VERSION=v1.1
```

### 3. Verify MCP Server

Test the stdio transport:

```bash
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | python -m mcp_server.main
```

Expected output:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {"name": "add_memory", "description": "Add new memory from messages..."},
      {"name": "search_memories", "description": "Search memories by semantic similarity..."},
      ...
    ]
  }
}
```
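If you script against the server rather than piping through `echo`, the same JSON-RPC 2.0 envelope can be built and checked with the standard library. The sketch below only illustrates the envelope shape shown above; the `is_valid_reply` helper is illustrative, not part of the project:

```python
import json

# Build the same tools/list request as the echo example above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
payload = json.dumps(request)

def is_valid_reply(raw: str, expected_id: int) -> bool:
    """Check the JSON-RPC 2.0 envelope: version, matching id,
    and exactly one of 'result' or 'error' present."""
    msg = json.loads(raw)
    return (
        msg.get("jsonrpc") == "2.0"
        and msg.get("id") == expected_id
        and ("result" in msg) != ("error" in msg)
    )

sample = '{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}'
print(is_valid_reply(sample, 1))  # prints True
```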

## Claude Code Configuration

### Option 1: MCP Server Configuration (Recommended)

Add to your Claude Code MCP settings file (`~/.config/claude/mcp.json`):

```json
{
  "mcpServers": {
    "t6-mem0": {
      "command": "python",
      "args": ["-m", "mcp_server.main"],
      "cwd": "/path/to/t6_mem0_v2",
      "env": {
        "OPENAI_API_KEY": "${OPENAI_API_KEY}",
        "SUPABASE_CONNECTION_STRING": "${SUPABASE_CONNECTION_STRING}",
        "NEO4J_URI": "neo4j://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "${NEO4J_PASSWORD}",
        "MEM0_COLLECTION_NAME": "t6_memories",
        "MEM0_EMBEDDING_DIMS": "1536",
        "MEM0_VERSION": "v1.1"
      }
    }
  }
}
```

### Option 2: Direct Python Integration

Use the MCP SDK directly in Python:

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Configure server
server_params = StdioServerParameters(
    command="python",
    args=["-m", "mcp_server.main"],
    env={
        "OPENAI_API_KEY": "your_key_here",
        "SUPABASE_CONNECTION_STRING": "postgresql://...",
        "NEO4J_URI": "neo4j://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "your_password"
    }
)

# Connect and use
async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize session
        await session.initialize()

        # List available tools
        tools = await session.list_tools()
        print(f"Available tools: {[tool.name for tool in tools.tools]}")

        # Add a memory
        result = await session.call_tool(
            "add_memory",
            arguments={
                "messages": [
                    {"role": "user", "content": "I prefer TypeScript over JavaScript"},
                    {"role": "assistant", "content": "Got it, I'll remember that!"}
                ],
                "user_id": "developer_123"
            }
        )

        # Search memories
        results = await session.call_tool(
            "search_memories",
            arguments={
                "query": "What languages does the developer prefer?",
                "user_id": "developer_123",
                "limit": 5
            }
        )
```

## Usage Examples

### Example 1: Storing Code Preferences

```python
# User tells Claude Code their preferences:
#   "I prefer using async/await over callbacks in JavaScript"

# Claude Code automatically calls add_memory
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [
            {
                "role": "user",
                "content": "I prefer using async/await over callbacks in JavaScript"
            },
            {
                "role": "assistant",
                "content": "I'll remember your preference for async/await!"
            }
        ],
        "user_id": "developer_123",
        "metadata": {
            "category": "coding_preference",
            "language": "javascript"
        }
    }
)
```

### Example 2: Recalling Project Context

```python
# Later, in a new session, the user asks:
#   "How should I structure this async function?"

# Claude Code searches memories first
memories = await session.call_tool(
    "search_memories",
    arguments={
        "query": "JavaScript async preferences",
        "user_id": "developer_123",
        "limit": 3
    }
)

# Claude uses the retrieved context to provide a personalized response:
# "Based on your preference for async/await, here's how I'd structure it..."
```

### Example 3: Project-Specific Memory

```python
# Store project-specific information
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [
            {
                "role": "user",
                "content": "This project uses Supabase for the database and Neo4j for the knowledge graph"
            },
            {
                "role": "assistant",
                "content": "Got it! I'll remember the tech stack for this project."
            }
        ],
        "user_id": "developer_123",
        "agent_id": "project_t6_mem0",
        "metadata": {
            "project": "t6_mem0_v2",
            "category": "tech_stack"
        }
    }
)
```

## Available Tools in Claude Code

Once configured, these tools are automatically available:

| Tool | Description | Use Case |
|------|-------------|----------|
| `add_memory` | Store information | Save preferences, project details, learned patterns |
| `search_memories` | Semantic search | Find relevant context from past conversations |
| `get_all_memories` | Get all memories | Review everything Claude knows about you |
| `update_memory` | Modify a memory | Correct or update stored information |
| `delete_memory` | Remove a specific memory | Clear outdated information |
| `delete_all_memories` | Clear all memories | Start fresh for a new project |
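Since every tool call must carry the same `user_id`, a thin client-side wrapper can bind it once. This is a minimal sketch; the `MemoryClient` class is illustrative, not part of the project:

```python
class MemoryClient:
    """Bind a user_id once and forward tool calls to an MCP session."""

    def __init__(self, session, user_id: str):
        self._session = session
        self._user_id = user_id

    async def call(self, tool: str, **arguments):
        # Inject the bound user_id into every call's arguments.
        arguments["user_id"] = self._user_id
        return await self._session.call_tool(tool, arguments=arguments)

# Usage (inside an active ClientSession):
#   client = MemoryClient(session, "developer_123")
#   await client.call("search_memories", query="preferences", limit=5)
```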

## Best Practices

### 1. Use Meaningful User IDs

```python
# Good - descriptive IDs
user_id = "developer_john_doe"
agent_id = "project_ecommerce_backend"

# Avoid - generic IDs
user_id = "user1"
agent_id = "agent"
```

### 2. Add Rich Metadata

```python
metadata = {
    "project": "t6_mem0_v2",
    "category": "bug_fix",
    "file": "mcp_server/http_server.py",
    "timestamp": "2025-10-15T10:30:00Z",
    "session_id": "abc-123-def"
}
```

### 3. Search Before Adding

```python
# Check if information already exists
existing = await session.call_tool(
    "search_memories",
    arguments={
        "query": "Python coding style preferences",
        "user_id": "developer_123"
    }
)

# Only add if not found or needs updating
if not existing or needs_update:
    await session.call_tool("add_memory", ...)
```

### 4. Regular Cleanup

```python
# Periodically clean up old project memories
await session.call_tool(
    "delete_all_memories",
    arguments={
        "agent_id": "old_project_archived"
    }
)
```

## Troubleshooting

### MCP Server Won't Start

**Error**: `ModuleNotFoundError: No module named 'mcp_server'`

**Solution**: Ensure you're running from the correct directory:

```bash
cd /path/to/t6_mem0_v2
python -m mcp_server.main
```

### Database Connection Errors

**Error**: `Cannot connect to Supabase/Neo4j`

**Solution**: Verify services are running and credentials are correct:

```bash
# Test Neo4j
curl http://localhost:7474

# Test Supabase connection
psql $SUPABASE_CONNECTION_STRING -c "SELECT 1"
```

### Environment Variables Not Loading

**Error**: `KeyError: 'OPENAI_API_KEY'`

**Solution**: Load the `.env` file or set the variables directly:

```bash
# Load from .env (skip comment lines, which would break a plain cat | xargs)
export $(grep -v '^#' .env | xargs)

# Or set directly
export OPENAI_API_KEY=your_key_here
```
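In a Python entry point, the same loading can be done without shell tricks. A minimal stdlib-only sketch of a `.env` loader (the real project may use a library such as `python-dotenv` instead; `load_env` is a hypothetical helper):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments,
    and export them into os.environ."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")  # split on the first '=' only
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```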

### Slow Response Times

**Issue**: Tool calls taking longer than expected

**Solutions**:
- Check network latency to Supabase
- Verify Neo4j indexes are created
- Reduce the `limit` parameter in search queries
- Consider caching frequently accessed memories

## Advanced Usage

### Custom Memory Categories

```python
# Define custom categories
CATEGORIES = {
    "preferences": "User coding preferences and style",
    "bugs": "Known bugs and their solutions",
    "architecture": "System design decisions",
    "dependencies": "Project dependencies and versions"
}

# Store with a category
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [...],
        "metadata": {
            "category": "architecture",
            "importance": "high"
        }
    }
)
```

### Multi-Agent Collaboration

```python
# Different agents for different purposes
AGENTS = {
    "code_reviewer": "Reviews code for best practices",
    "debugger": "Helps debug issues",
    "architect": "Provides architectural guidance"
}

# Store agent-specific knowledge
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [...],
        "user_id": "developer_123",
        "agent_id": "code_reviewer",
        "metadata": {"role": "code_review"}
    }
)
```

### Session Management

```python
import uuid
from datetime import datetime

# Create session tracking
session_id = str(uuid.uuid4())
session_start = datetime.now().isoformat()

# Store with session context
metadata = {
    "session_id": session_id,
    "session_start": session_start,
    "context": "debugging_authentication"
}
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Complete reference for all 7 MCP tools
  </Card>
  <Card title="n8n Integration" icon="workflow" href="/examples/n8n">
    Use MCP in n8n workflows
  </Card>
</CardGroup>
371
docs/examples/n8n.mdx
Normal file
@@ -0,0 +1,371 @@
---
title: 'n8n Integration'
description: 'Use T6 Mem0 v2 with n8n AI Agent workflows'
---

# n8n Integration Guide

Integrate the T6 Mem0 v2 MCP server with n8n AI Agent workflows to give your AI assistants persistent memory capabilities.

## Prerequisites

- Running n8n instance
- T6 Mem0 v2 MCP server deployed (see [Installation](/mcp/installation))
- OpenAI API key configured in n8n
- Both services on the same Docker network (recommended)

## Network Configuration

For Docker deployments, ensure n8n and the MCP server are on the same network:

```bash
# Find MCP container IP
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Example output: 172.21.0.14

# Verify connectivity from n8n network
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```

## Creating an AI Agent Workflow

### Step 1: Add Webhook or Chat Trigger

For manual testing, use **When chat message received**:

```json
{
  "name": "When chat message received",
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "parameters": {
    "options": {}
  }
}
```

For production webhooks, use **Webhook**:

```json
{
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "parameters": {
    "path": "mem0-chat",
    "httpMethod": "POST",
    "responseMode": "responseNode",
    "options": {}
  }
}
```

### Step 2: Add AI Agent Node

```json
{
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "promptType": "auto",
    "text": "={{ $json.chatInput }}",
    "hasOutputParser": false,
    "options": {
      "systemMessage": "You are a helpful AI assistant with persistent memory powered by mem0.\n\n⚠️ CRITICAL: You MUST use user_id=\"chat_user\" in EVERY memory tool call. Never ask the user for their user_id.\n\n📝 How to use memory tools:\n\n1. add_memory - Store new information\n   Example call: {\"messages\": [{\"role\": \"user\", \"content\": \"I love Python\"}, {\"role\": \"assistant\", \"content\": \"Noted!\"}], \"user_id\": \"chat_user\"}\n\n2. get_all_memories - Retrieve everything you know about the user\n   Example call: {\"user_id\": \"chat_user\"}\n   Use this when user asks \"what do you know about me?\" or similar\n\n3. search_memories - Find specific information\n   Example call: {\"query\": \"programming languages\", \"user_id\": \"chat_user\"}\n\n4. delete_all_memories - Clear all memories\n   Example call: {\"user_id\": \"chat_user\"}\n\n💡 Tips:\n- When user shares personal info, immediately call add_memory\n- When user asks about themselves, call get_all_memories\n- Always format messages as array with role and content\n- Be conversational and friendly\n\nRemember: ALWAYS use user_id=\"chat_user\" in every single tool call!"
    }
  }
}
```

### Step 3: Add MCP Client Tool

This is the critical node that connects to the mem0 MCP server:

```json
{
  "name": "MCP Client",
  "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
  "parameters": {
    "endpointUrl": "http://172.21.0.14:8765/mcp",
    "serverTransport": "httpStreamable",
    "authentication": "none",
    "include": "all"
  }
}
```

**Important Configuration**:
- **endpointUrl**: Use the Docker network IP of your MCP container (find it with `docker inspect t6-mem0-mcp`)
- **serverTransport**: Must be `httpStreamable` for HTTP/SSE transport
- **authentication**: Set to `none` (no authentication required)
- **include**: Set to `all` to expose all 7 memory tools

### Step 4: Add OpenAI Chat Model

```json
{
  "name": "OpenAI Chat Model",
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "parameters": {
    "model": "gpt-4o-mini",
    "options": {
      "temperature": 0.7
    }
  }
}
```

<Warning>
Make sure to use `lmChatOpenAi` (not `lmOpenAi`) for chat models like gpt-4o-mini. Using the wrong node type will cause errors.
</Warning>

### Step 5: Connect the Nodes

Connect the nodes in this order:

1. **Trigger** → **AI Agent**
2. **MCP Client** → **AI Agent** (to the Tools port)
3. **OpenAI Chat Model** → **AI Agent** (to the Model port)

## Complete Workflow Example

Here's a complete working workflow you can import:

```json
{
  "name": "AI Agent with Mem0",
  "nodes": [
    {
      "id": "webhook",
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300],
      "parameters": {
        "path": "mem0-chat",
        "httpMethod": "POST",
        "responseMode": "responseNode"
      }
    },
    {
      "id": "agent",
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [450, 300],
      "parameters": {
        "promptType": "auto",
        "text": "={{ $json.body.message }}",
        "options": {
          "systemMessage": "You are a helpful AI assistant with persistent memory.\n\nALWAYS use user_id=\"chat_user\" in every memory tool call."
        }
      }
    },
    {
      "id": "mcp",
      "name": "MCP Client",
      "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
      "position": [450, 150],
      "parameters": {
        "endpointUrl": "http://172.21.0.14:8765/mcp",
        "serverTransport": "httpStreamable",
        "authentication": "none",
        "include": "all"
      }
    },
    {
      "id": "openai",
      "name": "OpenAI Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [450, 450],
      "parameters": {
        "model": "gpt-4o-mini",
        "options": {"temperature": 0.7}
      }
    },
    {
      "id": "respond",
      "name": "Respond to Webhook",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [650, 300],
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { \"response\": $json.output } }}"
      }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "AI Agent", "type": "main", "index": 0}]]
    },
    "AI Agent": {
      "main": [[{"node": "Respond to Webhook", "type": "main", "index": 0}]]
    },
    "MCP Client": {
      "main": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]
    },
    "OpenAI Chat Model": {
      "main": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]
    }
  },
  "active": false,
  "settings": {},
  "tags": []
}
```

## Testing the Workflow

### Manual Testing

1. **Activate** the workflow in the n8n UI
2. Open the chat interface (if using the chat trigger)
3. Try these test messages:

```
Test 1: Store memory
User: "My name is Alice and I love Python programming"
Expected: Agent confirms storing the information

Test 2: Retrieve memories
User: "What do you know about me?"
Expected: Agent lists stored memories about Alice and Python

Test 3: Search
User: "What programming languages do I like?"
Expected: Agent finds and mentions Python

Test 4: Add more
User: "I also enjoy hiking on weekends"
Expected: Agent stores the new hobby

Test 5: Verify
User: "Tell me everything you remember"
Expected: Agent lists all memories including name, Python, and hiking
```

### Webhook Testing

For production webhook workflows:

```bash
# Activate the workflow first in the n8n UI

# Send a test message
curl -X POST "https://your-n8n-domain.com/webhook/mem0-chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "My name is Bob and I work as a software engineer"
  }'

# Expected response
{
  "response": "Got it, Bob! I've noted that you work as a software engineer..."
}
```
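The same call can be scripted from Python with only the standard library. The domain and path below are the placeholders from the `curl` example, and `build_webhook_request` is a hypothetical helper; substitute your own n8n URL:

```python
import json
import urllib.request

def build_webhook_request(base_url: str, message: str) -> urllib.request.Request:
    """Build the POST request the workflow's Webhook node expects."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/webhook/mem0-chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request(
    "https://your-n8n-domain.com",
    "My name is Bob and I work as a software engineer",
)
# Sending requires the workflow to be active:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```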

## Troubleshooting

### MCP Client Can't Connect

**Error**: "Failed to connect to MCP server"

**Solutions**:

1. Verify the MCP server is running:
   ```bash
   curl http://172.21.0.14:8765/health
   ```
2. Check Docker network connectivity:
   ```bash
   docker run --rm --network localai alpine/curl:latest \
     curl -s http://172.21.0.14:8765/health
   ```
3. Verify both containers are on the same network:
   ```bash
   docker network inspect localai
   ```

### Agent Asks for User ID

**Error**: Agent responds "Could you please provide me with your user ID?"

**Solution**: Update the system message to explicitly include `user_id` in the examples:

```
CRITICAL: You MUST use user_id="chat_user" in EVERY memory tool call.

Example: {"messages": [...], "user_id": "chat_user"}
```

### Webhook Not Registered

**Error**: `{"code":404,"message":"The requested webhook is not registered"}`

**Solutions**:

1. Activate the workflow in the n8n UI
2. Check that the webhook path matches your URL
3. Verify the workflow is saved and active

### Wrong Model Type Error

**Error**: "Your chosen OpenAI model is a chat model and not a text-in/text-out LLM"

**Solution**: Use the `@n8n/n8n-nodes-langchain.lmChatOpenAi` node type, not `lmOpenAi`

## Advanced Configuration

### Dynamic User IDs

To use dynamic user IDs based on webhook input:

```javascript
// In the AI Agent system message
"Use user_id from the webhook data: user_id=\"{{ $json.body.user_id }}\""

// Webhook payload
{
  "user_id": "user_12345",
  "message": "Remember this information"
}
```

### Multiple Agents

To support multiple agents with separate memories:

```javascript
// System message
"You are Agent Alpha. Use agent_id=\"agent_alpha\" in all memory calls."

// Tool call example
{
  "messages": [...],
  "agent_id": "agent_alpha",
  "user_id": "user_123"
}
```

### Custom Metadata

Add context to stored memories:

```javascript
// In the add_memory call
{
  "messages": [...],
  "user_id": "chat_user",
  "metadata": {
    "source": "webhook",
    "session_id": "{{ $json.session_id }}",
    "timestamp": "{{ $now }}"
  }
}
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Detailed documentation for all MCP tools
  </Card>
  <Card title="Claude Code" icon="code" href="/examples/claude-code">
    Use MCP with Claude Code
  </Card>
</CardGroup>
6
docs/favicon.svg
Normal file
@@ -0,0 +1,6 @@
<svg width="32" height="32" xmlns="http://www.w3.org/2000/svg">
  <rect width="32" height="32" rx="6" fill="#0D9373"/>
  <text x="16" y="23" font-family="Arial, sans-serif" font-size="18" font-weight="bold" fill="white" text-anchor="middle">
    M
  </text>
</svg>
55
docs/images/hero-dark.svg
Normal file
@@ -0,0 +1,55 @@
<svg width="800" height="400" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <linearGradient id="grad2" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#07C983;stop-opacity:0.2" />
      <stop offset="100%" style="stop-color:#0D9373;stop-opacity:0.3" />
    </linearGradient>
  </defs>

  <!-- Background -->
  <rect width="800" height="400" fill="#0f1117"/>
  <rect width="800" height="400" fill="url(#grad2)"/>

  <!-- Grid pattern -->
  <g opacity="0.2">
    <line x1="0" y1="100" x2="800" y2="100" stroke="#07C983" stroke-width="1"/>
    <line x1="0" y1="200" x2="800" y2="200" stroke="#07C983" stroke-width="1"/>
    <line x1="0" y1="300" x2="800" y2="300" stroke="#07C983" stroke-width="1"/>
    <line x1="200" y1="0" x2="200" y2="400" stroke="#07C983" stroke-width="1"/>
    <line x1="400" y1="0" x2="400" y2="400" stroke="#07C983" stroke-width="1"/>
    <line x1="600" y1="0" x2="600" y2="400" stroke="#07C983" stroke-width="1"/>
  </g>

  <!-- Memory nodes -->
  <circle cx="200" cy="150" r="40" fill="#07C983" opacity="0.4"/>
  <circle cx="400" cy="200" r="50" fill="#0D9373" opacity="0.4"/>
  <circle cx="600" cy="150" r="35" fill="#07C983" opacity="0.4"/>

  <!-- Connection lines -->
  <line x1="200" y1="150" x2="400" y2="200" stroke="#07C983" stroke-width="2" opacity="0.4"/>
  <line x1="400" y1="200" x2="600" y2="150" stroke="#0D9373" stroke-width="2" opacity="0.4"/>

  <!-- Main text -->
  <text x="400" y="100" font-family="Arial, sans-serif" font-size="48" font-weight="bold" fill="#07C983" text-anchor="middle">
    T6 Mem0 v2
  </text>
  <text x="400" y="140" font-family="Arial, sans-serif" font-size="24" fill="#ccc" text-anchor="middle">
    Memory System for LLM Applications
  </text>

  <!-- Feature icons/text -->
  <g transform="translate(150, 280)">
    <circle cx="0" cy="0" r="30" fill="#07C983" opacity="0.3"/>
    <text x="0" y="5" font-family="Arial, sans-serif" font-size="24" fill="#07C983" text-anchor="middle" font-weight="bold">MCP</text>
  </g>

  <g transform="translate(400, 280)">
    <circle cx="0" cy="0" r="30" fill="#0D9373" opacity="0.3"/>
    <text x="0" y="5" font-family="Arial, sans-serif" font-size="24" fill="#07C983" text-anchor="middle" font-weight="bold">API</text>
  </g>

  <g transform="translate(650, 280)">
    <circle cx="0" cy="0" r="30" fill="#07C983" opacity="0.3"/>
    <text x="0" y="8" font-family="Arial, sans-serif" font-size="20" fill="#07C983" text-anchor="middle" font-weight="bold">Graph</text>
  </g>
</svg>
54
docs/images/hero-light.svg
Normal file
@@ -0,0 +1,54 @@
<svg width="800" height="400" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <linearGradient id="grad1" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#0D9373;stop-opacity:0.1" />
      <stop offset="100%" style="stop-color:#07C983;stop-opacity:0.2" />
    </linearGradient>
  </defs>

  <!-- Background -->
  <rect width="800" height="400" fill="url(#grad1)"/>

  <!-- Grid pattern -->
  <g opacity="0.1">
    <line x1="0" y1="100" x2="800" y2="100" stroke="#0D9373" stroke-width="1"/>
    <line x1="0" y1="200" x2="800" y2="200" stroke="#0D9373" stroke-width="1"/>
    <line x1="0" y1="300" x2="800" y2="300" stroke="#0D9373" stroke-width="1"/>
    <line x1="200" y1="0" x2="200" y2="400" stroke="#0D9373" stroke-width="1"/>
    <line x1="400" y1="0" x2="400" y2="400" stroke="#0D9373" stroke-width="1"/>
    <line x1="600" y1="0" x2="600" y2="400" stroke="#0D9373" stroke-width="1"/>
  </g>

  <!-- Memory nodes -->
  <circle cx="200" cy="150" r="40" fill="#0D9373" opacity="0.3"/>
  <circle cx="400" cy="200" r="50" fill="#07C983" opacity="0.3"/>
  <circle cx="600" cy="150" r="35" fill="#0D9373" opacity="0.3"/>

  <!-- Connection lines -->
  <line x1="200" y1="150" x2="400" y2="200" stroke="#0D9373" stroke-width="2" opacity="0.3"/>
  <line x1="400" y1="200" x2="600" y2="150" stroke="#07C983" stroke-width="2" opacity="0.3"/>

  <!-- Main text -->
  <text x="400" y="100" font-family="Arial, sans-serif" font-size="48" font-weight="bold" fill="#0D9373" text-anchor="middle">
    T6 Mem0 v2
  </text>
  <text x="400" y="140" font-family="Arial, sans-serif" font-size="24" fill="#666" text-anchor="middle">
    Memory System for LLM Applications
  </text>

  <!-- Feature icons/text -->
  <g transform="translate(150, 280)">
    <circle cx="0" cy="0" r="30" fill="#0D9373" opacity="0.2"/>
    <text x="0" y="5" font-family="Arial, sans-serif" font-size="24" fill="#0D9373" text-anchor="middle" font-weight="bold">MCP</text>
  </g>

  <g transform="translate(400, 280)">
    <circle cx="0" cy="0" r="30" fill="#07C983" opacity="0.2"/>
    <text x="0" y="5" font-family="Arial, sans-serif" font-size="24" fill="#0D9373" text-anchor="middle" font-weight="bold">API</text>
  </g>

  <g transform="translate(650, 280)">
    <circle cx="0" cy="0" r="30" fill="#0D9373" opacity="0.2"/>
    <text x="0" y="8" font-family="Arial, sans-serif" font-size="20" fill="#0D9373" text-anchor="middle" font-weight="bold">Graph</text>
  </g>
</svg>
8
docs/logo/dark.svg
Normal file
@@ -0,0 +1,8 @@
<svg width="120" height="40" xmlns="http://www.w3.org/2000/svg">
  <text x="10" y="28" font-family="Arial, sans-serif" font-size="24" font-weight="bold" fill="#07C983">
    Mem0
  </text>
  <text x="78" y="28" font-family="Arial, sans-serif" font-size="16" fill="#ccc">
    v2
  </text>
</svg>
After Width: | Height: | Size: 294 B
8
docs/logo/light.svg
Normal file
@@ -0,0 +1,8 @@
<svg width="120" height="40" xmlns="http://www.w3.org/2000/svg">
  <text x="10" y="28" font-family="Arial, sans-serif" font-size="24" font-weight="bold" fill="#0D9373">
    Mem0
  </text>
  <text x="78" y="28" font-family="Arial, sans-serif" font-size="16" fill="#666">
    v2
  </text>
</svg>
After Width: | Height: | Size: 294 B
230
docs/mcp/installation.mdx
Normal file
@@ -0,0 +1,230 @@
---
title: 'MCP Server Installation'
description: 'Install and configure the T6 Mem0 v2 MCP server'
---

# Installing the MCP Server

The MCP server can be run in two modes: HTTP/SSE for web integrations, or stdio for local tool usage.

## Prerequisites

- Python 3.11+
- Running Supabase instance (vector store)
- Running Neo4j instance (graph store)
- OpenAI API key

## Environment Setup

Create a `.env` file with the required configuration:

```bash
# OpenAI
OPENAI_API_KEY=your_openai_key_here

# Supabase (Vector Store)
SUPABASE_CONNECTION_STRING=postgresql://user:pass@host:port/database

# Neo4j (Graph Store)
NEO4J_URI=neo4j://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password

# MCP Server
MCP_HOST=0.0.0.0
MCP_PORT=8765

# Mem0 Configuration
MEM0_COLLECTION_NAME=t6_memories
MEM0_EMBEDDING_DIMS=1536
MEM0_VERSION=v1.1
```

## Installation Methods

### Method 1: Docker (Recommended)

The easiest way to run the MCP server is with Docker Compose:

```bash
# Clone the repository
git clone https://git.colsys.tech/klas/t6_mem0_v2
cd t6_mem0_v2

# Copy and configure environment
cp .env.example .env
# Edit .env with your settings

# Start all services
docker compose up -d

# The MCP HTTP server will be available at http://localhost:8765
```

**Health Check**:
```bash
curl http://localhost:8765/health
# {"status":"healthy","service":"t6-mem0-v2-mcp-http","transport":"http-streamable"}
```

### Method 2: Local Python

For development or local usage:

```bash
# Install dependencies
pip install -r requirements.txt

# Run the HTTP server
python -m mcp_server.http_server

# Or run the stdio server (for Claude Code)
python -m mcp_server.main
```

## Verify Installation

### Test HTTP Endpoint

```bash
curl -X POST "http://localhost:8765/mcp" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {}
  }'
```

Expected response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "add_memory",
        "description": "Add new memory from messages...",
        "inputSchema": {...}
      },
      // ... 6 more tools
    ]
  }
}
```
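
The same check can be scripted; a minimal Python sketch using only the standard library (assumes the server is reachable on `localhost:8765`, as in the curl example above):

```python
import json
from urllib import request

# JSON-RPC 2.0 request body for listing the server's tools.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

def list_tools(url: str = "http://localhost:8765/mcp") -> list:
    """POST a tools/list request and return the `tools` array."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["result"]["tools"]
```

Against a running server, `list_tools()` should return seven entries, one per memory tool.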

### Test stdio Server

```bash
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | python -m mcp_server.main
```

## Docker Configuration

The MCP server is configured in `docker-compose.yml`:

```yaml
mcp-server:
  build:
    context: .
    dockerfile: docker/Dockerfile.mcp
  container_name: t6-mem0-mcp
  restart: unless-stopped
  ports:
    - "8765:8765"
  environment:
    - OPENAI_API_KEY=${OPENAI_API_KEY}
    - SUPABASE_CONNECTION_STRING=${SUPABASE_CONNECTION_STRING}
    - NEO4J_URI=neo4j://neo4j:7687
    - NEO4J_USER=${NEO4J_USER}
    - NEO4J_PASSWORD=${NEO4J_PASSWORD}
    - MCP_HOST=0.0.0.0
    - MCP_PORT=8765
  depends_on:
    neo4j:
      condition: service_healthy
  networks:
    - localai
  healthcheck:
    test: ["CMD-SHELL", "curl -f http://localhost:8765/health || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
```

## Network Configuration

For n8n integration on the same Docker network:

```yaml
# Add to your n8n docker-compose.yml
networks:
  localai:
    external: true

services:
  n8n:
    networks:
      - localai
```

Then use the internal Docker network IP in n8n:
```
http://172.21.0.14:8765/mcp
```

Find the MCP container IP:
```bash
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```

## Troubleshooting

### Container Won't Start

Check the logs:
```bash
docker logs t6-mem0-mcp --tail 50
```

Common issues:
- Missing environment variables
- Cannot connect to Neo4j or Supabase
- Port 8765 already in use

### Health Check Failing

Verify the backing services are reachable:
```bash
# Test Neo4j connection
docker exec t6-mem0-mcp curl http://neo4j:7474

# Test from the host
curl http://localhost:8765/health
```

### n8n Can't Connect

1. Verify both containers are on the same Docker network:
```bash
docker network inspect localai
```

2. Test connectivity from the n8n container's network:
```bash
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Learn about available MCP tools
  </Card>
  <Card title="n8n Integration" icon="workflow" href="/examples/n8n">
    Use MCP in n8n workflows
  </Card>
</CardGroup>
117
docs/mcp/introduction.mdx
Normal file
@@ -0,0 +1,117 @@
---
title: 'MCP Server Introduction'
description: 'Model Context Protocol server for AI-powered memory operations'
---

# MCP Server Overview

The T6 Mem0 v2 MCP (Model Context Protocol) server provides a standardized interface for AI assistants and agents to interact with the memory system. It exposes all memory operations as MCP tools that can be used by any MCP-compatible client.

## What is MCP?

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. Created by Anthropic, it enables:

- **Universal tool access** - One protocol works across all AI assistants
- **Secure communication** - Structured message format with validation
- **Rich capabilities** - Tools, resources, and prompts in a single protocol

## Features

- ✅ **7 Memory Tools** - Complete CRUD operations for memories
- ✅ **HTTP/SSE Transport** - Compatible with n8n and web-based clients
- ✅ **stdio Transport** - Compatible with Claude Code and terminal-based clients
- ✅ **Synchronized Operations** - Ensures both Supabase and Neo4j stay in sync
- ✅ **Type-safe** - Full schema validation for all operations

## Available Tools

| Tool | Description |
|------|-------------|
| `add_memory` | Store new memories from conversation messages |
| `search_memories` | Semantic search across stored memories |
| `get_memory` | Retrieve a specific memory by ID |
| `get_all_memories` | Get all memories for a user or agent |
| `update_memory` | Update existing memory content |
| `delete_memory` | Delete a specific memory |
| `delete_all_memories` | Delete all memories for a user/agent |

## Transport Options

### HTTP/SSE Transport

Best for:
- n8n workflows
- Web applications
- REST API integrations
- Remote access

**Endpoint**: `http://localhost:8765/mcp`

### stdio Transport

Best for:
- Claude Code integration
- Local development tools
- Command-line applications
- Direct Python integration

**Usage**: Run as a subprocess with JSON-RPC over stdin/stdout

## Quick Example

```javascript
// Using the n8n MCP Client Tool
{
  "endpointUrl": "http://172.21.0.14:8765/mcp",
  "serverTransport": "httpStreamable",
  "authentication": "none",
  "include": "all"
}
```

```python
# Using the Python MCP SDK
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["-m", "mcp_server.main"]
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # List available tools
        tools = await session.list_tools()

        # Call a tool
        result = await session.call_tool(
            "add_memory",
            arguments={
                "messages": [
                    {"role": "user", "content": "I love Python"},
                    {"role": "assistant", "content": "Noted!"}
                ],
                "user_id": "user_123"
            }
        )
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Installation" icon="download" href="/mcp/installation">
    Set up the MCP server locally or in Docker
  </Card>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Detailed documentation for all available tools
  </Card>
  <Card title="n8n Integration" icon="workflow" href="/examples/n8n">
    Use MCP tools in n8n AI Agent workflows
  </Card>
  <Card title="Claude Code" icon="code" href="/examples/claude-code">
    Integrate with Claude Code for AI-powered coding
  </Card>
</CardGroup>
384
docs/mcp/tools.mdx
Normal file
@@ -0,0 +1,384 @@
|
||||
---
|
||||
title: 'MCP Tool Reference'
|
||||
description: 'Complete reference for all 7 memory operation tools'
|
||||
---
|
||||
|
||||
# MCP Tool Reference
|
||||
|
||||
The T6 Mem0 v2 MCP server provides 7 tools for complete memory lifecycle management. All tools use JSON-RPC 2.0 protocol and support both HTTP/SSE and stdio transports.
|
||||
|
||||
## add_memory
|
||||
|
||||
Store new memories extracted from conversation messages.
|
||||
|
||||
### Parameters
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
|-----------|------|----------|-------------|
|
||||
| `messages` | Array | Yes | Array of message objects with `role` and `content` |
|
||||
| `user_id` | String | No | User identifier for memory association |
|
||||
| `agent_id` | String | No | Agent identifier for memory association |
|
||||
| `metadata` | Object | No | Additional metadata to store with memories |
|
||||
|
||||
### Example Request
|
||||
|
||||
```json
|
||||
{
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"method": "tools/call",
|
||||
"params": {
|
||||
"name": "add_memory",
|
||||
"arguments": {
|
||||
"messages": [
|
||||
{"role": "user", "content": "I love Python programming"},
|
||||
{"role": "assistant", "content": "Great! I'll remember that."}
|
||||
],
|
||||
"user_id": "user_123",
|
||||
"metadata": {"source": "chat", "session_id": "abc-123"}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Example Response
|
||||
|
||||
```json
|
||||
{
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"result": {
|
||||
"content": [
|
||||
{
|
||||
"type": "text",
|
||||
"text": "Added 1 memories for user user_123"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|

## search_memories

Search memories using semantic similarity matching.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `query` | String | Yes | Search query text |
| `user_id` | String | No | Filter by user ID |
| `agent_id` | String | No | Filter by agent ID |
| `limit` | Integer | No | Maximum results (default: 10, max: 50) |

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_memories",
    "arguments": {
      "query": "What programming languages does the user like?",
      "user_id": "user_123",
      "limit": 5
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 2 memories:\n1. ID: mem_abc123 - User loves Python programming (score: 0.92)\n2. ID: mem_def456 - User interested in JavaScript (score: 0.78)"
      }
    ]
  }
}
```

## get_memory

Retrieve a specific memory by its ID.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `memory_id` | String | Yes | Unique memory identifier |

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_memory",
    "arguments": {
      "memory_id": "mem_abc123"
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Memory: User loves Python programming\nCreated: 2025-10-15T10:30:00Z\nUser: user_123"
      }
    ]
  }
}
```

## get_all_memories

Retrieve all memories for a specific user or agent.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user_id` | String | No* | User identifier |
| `agent_id` | String | No* | Agent identifier |

*At least one of `user_id` or `agent_id` must be provided.

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_all_memories",
    "arguments": {
      "user_id": "user_123"
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 3 memories for user user_123:\n1. User loves Python programming\n2. User interested in JavaScript\n3. User works as software engineer"
      }
    ]
  }
}
```

## update_memory

Update the content of an existing memory.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `memory_id` | String | Yes | Unique memory identifier |
| `data` | String | Yes | New memory content |

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "update_memory",
    "arguments": {
      "memory_id": "mem_abc123",
      "data": "User is an expert Python developer"
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Memory mem_abc123 updated successfully"
      }
    ]
  }
}
```

## delete_memory

Delete a specific memory by ID.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `memory_id` | String | Yes | Unique memory identifier |

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "delete_memory",
    "arguments": {
      "memory_id": "mem_abc123"
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Memory mem_abc123 deleted successfully from both vector and graph stores"
      }
    ]
  }
}
```

## delete_all_memories

Delete all memories for a specific user or agent.

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `user_id` | String | No* | User identifier |
| `agent_id` | String | No* | Agent identifier |

*At least one of `user_id` or `agent_id` must be provided.

<Warning>
This operation is irreversible. All memories for the specified user/agent will be permanently deleted from both Supabase (vector store) and Neo4j (graph store).
</Warning>

### Example Request

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "delete_all_memories",
    "arguments": {
      "user_id": "user_123"
    }
  }
}
```

### Example Response

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Deleted 3 memories for user user_123"
      }
    ]
  }
}
```

## Error Responses

All tools return standardized error responses:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32603,
    "message": "Internal error: Memory not found",
    "data": {
      "type": "MemoryNotFoundError",
      "details": "No memory exists with ID mem_xyz789"
    }
  }
}
```

### Common Error Codes

| Code | Description |
|------|-------------|
| `-32700` | Parse error - Invalid JSON |
| `-32600` | Invalid request - Missing required fields |
| `-32601` | Method not found - Unknown tool name |
| `-32602` | Invalid params - Invalid arguments |
| `-32603` | Internal error - Server-side error |
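
Client code can branch on these codes when unwrapping responses; a minimal Python sketch (the `unwrap` helper is illustrative, not part of the server):

```python
# Map the documented JSON-RPC error codes to short labels.
JSONRPC_ERRORS = {
    -32700: "parse error",
    -32600: "invalid request",
    -32601: "method not found",
    -32602: "invalid params",
    -32603: "internal error",
}

def unwrap(response: dict) -> dict:
    """Return the `result` of a JSON-RPC response, or raise on error."""
    if "error" in response:
        err = response["error"]
        label = JSONRPC_ERRORS.get(err["code"], "unknown error")
        raise RuntimeError(f"{label} ({err['code']}): {err['message']}")
    return response["result"]
```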

## Synchronized Operations

<Info>
All delete operations (both `delete_memory` and `delete_all_memories`) are synchronized across both storage backends:
- **Supabase (Vector Store)**: Removes embeddings and memory records
- **Neo4j (Graph Store)**: Removes nodes and relationships

This ensures data consistency across the entire memory system.
</Info>
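
The synchronization amounts to a delete that must succeed in both backends; an illustrative Python sketch (the store objects and method names are hypothetical stand-ins, the real logic lives in `memory_service.py`):

```python
# Illustrative sketch of a synchronized delete across the two stores.
# `vector_store` and `graph_store` are hypothetical stand-ins for the
# Supabase and Neo4j clients.
def delete_memory_sync(memory_id: str, vector_store, graph_store) -> None:
    vector_store.delete(memory_id)          # embeddings + memory record
    try:
        graph_store.delete_node(memory_id)  # nodes + relationships
    except Exception as exc:
        # Surface the inconsistency instead of silently leaving the
        # stores out of sync.
        raise RuntimeError(
            f"memory {memory_id} removed from vector store but not graph store"
        ) from exc
```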

## Next Steps

<CardGroup cols={2}>
  <Card title="n8n Integration" icon="workflow" href="/examples/n8n">
    Use MCP tools in n8n workflows
  </Card>
  <Card title="Claude Code" icon="code" href="/examples/claude-code">
    Integrate with Claude Code
  </Card>
</CardGroup>