Add MCP HTTP/SSE server and complete n8n integration

Major Changes:
- Implemented MCP HTTP/SSE transport server for n8n and web clients
- Created mcp_server/http_server.py with FastAPI for JSON-RPC 2.0 over HTTP
- Added health check endpoint (/health) for container monitoring
- Refactored mcp-server/ to mcp_server/ (Python module structure)
- Updated Dockerfile.mcp to run HTTP server with health checks

MCP Server Features:
- 7 memory tools exposed via MCP (add, search, get, update, delete)
- HTTP/SSE transport on port 8765 for n8n integration
- stdio transport for Claude Code integration
- JSON-RPC 2.0 protocol implementation
- CORS support for web clients

n8n Integration:
- Successfully tested with AI Agent workflows
- MCP Client Tool configuration documented
- Working webhook endpoint tested and verified
- System prompt optimized for automatic user_id usage

Documentation:
- Created comprehensive Mintlify documentation site
- Added docs/mcp/introduction.mdx - MCP server overview
- Added docs/mcp/installation.mdx - Installation guide
- Added docs/mcp/tools.mdx - Complete tool reference
- Added docs/examples/n8n.mdx - n8n integration guide
- Added docs/examples/claude-code.mdx - Claude Code setup
- Updated README.md with MCP HTTP server info
- Updated roadmap to mark Phase 1 as complete

Bug Fixes:
- Fixed synchronized delete operations across Supabase and Neo4j
- Updated memory_service.py with proper error handling
- Fixed Neo4j connection issues in delete operations

Configuration:
- Added MCP_HOST and MCP_PORT environment variables
- Updated .env.example with MCP server configuration
- Updated docker-compose.yml with MCP container health checks

Testing:
- Added test scripts for MCP HTTP endpoint verification
- Created test workflows in n8n
- Verified all 7 memory tools working correctly
- Tested synchronized operations across both stores

Version: 1.0.0
Status: Phase 1 Complete - Production Ready

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
docs/examples/claude-code.mdx (new file, 422 lines)
---
title: 'Claude Code Integration'
description: 'Use T6 Mem0 v2 with Claude Code for AI-powered development'
---

# Claude Code Integration

Integrate the T6 Mem0 v2 MCP server with Claude Code to give your AI coding assistant persistent memory across sessions.

## Prerequisites

- Claude Code CLI installed
- T6 Mem0 v2 MCP server installed locally
- Python 3.11+ environment
- Running Supabase and Neo4j instances

## Installation

### 1. Install Dependencies

```bash
cd /path/to/t6_mem0_v2
pip install -r requirements.txt
```

### 2. Configure Environment

Create a `.env` file with the required credentials:

```bash
# OpenAI
OPENAI_API_KEY=your_openai_key_here

# Supabase (Vector Store)
SUPABASE_CONNECTION_STRING=postgresql://user:pass@host:port/database

# Neo4j (Graph Store)
NEO4J_URI=neo4j://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password

# Mem0 Configuration
MEM0_COLLECTION_NAME=t6_memories
MEM0_EMBEDDING_DIMS=1536
MEM0_VERSION=v1.1
```
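
Before starting the server, it can help to fail fast if any required variable is missing. A minimal sketch (the variable list simply mirrors the `.env` example above; adjust it to your deployment):

```python
import os

# Variables the server is expected to need (mirrors the .env example above)
REQUIRED_VARS = [
    "OPENAI_API_KEY",
    "SUPABASE_CONNECTION_STRING",
    "NEO4J_URI",
    "NEO4J_USER",
    "NEO4J_PASSWORD",
]

def check_env(env=None):
    """Return the names of required variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Hypothetical usage, e.g. at the top of a launch script:
# missing = check_env()
# if missing:
#     raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```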

### 3. Verify MCP Server

Test the stdio transport:

```bash
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | python -m mcp_server.main
```

Expected output:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {"name": "add_memory", "description": "Add new memory from messages..."},
      {"name": "search_memories", "description": "Search memories by semantic similarity..."},
      ...
    ]
  }
}
```
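
The same smoke test can be scripted. A sketch that builds the JSON-RPC 2.0 request programmatically (the piping step is illustrative and assumes you run it from the `t6_mem0_v2` checkout with dependencies installed):

```python
import json
import subprocess  # used only in the commented usage sketch below

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request line, newline-terminated for stdio transport."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params or {},
    }) + "\n"

# Hypothetical usage -- pipe the request into the server process:
# proc = subprocess.run(
#     ["python", "-m", "mcp_server.main"],
#     input=jsonrpc_request("tools/list"),
#     capture_output=True, text=True,
# )
# print(proc.stdout)
```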

## Claude Code Configuration

### Option 1: MCP Server Configuration (Recommended)

Add to your Claude Code MCP settings file (`~/.config/claude/mcp.json`):

```json
{
  "mcpServers": {
    "t6-mem0": {
      "command": "python",
      "args": ["-m", "mcp_server.main"],
      "cwd": "/path/to/t6_mem0_v2",
      "env": {
        "OPENAI_API_KEY": "${OPENAI_API_KEY}",
        "SUPABASE_CONNECTION_STRING": "${SUPABASE_CONNECTION_STRING}",
        "NEO4J_URI": "neo4j://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "${NEO4J_PASSWORD}",
        "MEM0_COLLECTION_NAME": "t6_memories",
        "MEM0_EMBEDDING_DIMS": "1536",
        "MEM0_VERSION": "v1.1"
      }
    }
  }
}
```

### Option 2: Direct Python Integration

Use the MCP SDK directly in Python:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Configure server
server_params = StdioServerParameters(
    command="python",
    args=["-m", "mcp_server.main"],
    env={
        "OPENAI_API_KEY": "your_key_here",
        "SUPABASE_CONNECTION_STRING": "postgresql://...",
        "NEO4J_URI": "neo4j://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "your_password"
    }
)

async def main():
    # Connect and use
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize session
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")

            # Add a memory
            result = await session.call_tool(
                "add_memory",
                arguments={
                    "messages": [
                        {"role": "user", "content": "I prefer TypeScript over JavaScript"},
                        {"role": "assistant", "content": "Got it, I'll remember that!"}
                    ],
                    "user_id": "developer_123"
                }
            )

            # Search memories
            results = await session.call_tool(
                "search_memories",
                arguments={
                    "query": "What languages does the developer prefer?",
                    "user_id": "developer_123",
                    "limit": 5
                }
            )

asyncio.run(main())
```

## Usage Examples

### Example 1: Storing Code Preferences

```python
# User tells Claude Code their preferences:
#   "I prefer using async/await over callbacks in JavaScript"

# Claude Code automatically calls add_memory
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [
            {
                "role": "user",
                "content": "I prefer using async/await over callbacks in JavaScript"
            },
            {
                "role": "assistant",
                "content": "I'll remember your preference for async/await!"
            }
        ],
        "user_id": "developer_123",
        "metadata": {
            "category": "coding_preference",
            "language": "javascript"
        }
    }
)
```

### Example 2: Recalling Project Context

```python
# Later, in a new session, the user asks:
#   "How should I structure this async function?"

# Claude Code searches memories first
memories = await session.call_tool(
    "search_memories",
    arguments={
        "query": "JavaScript async preferences",
        "user_id": "developer_123",
        "limit": 3
    }
)

# Claude uses the retrieved context to provide a personalized response:
# "Based on your preference for async/await, here's how I'd structure it..."
```

### Example 3: Project-Specific Memory

```python
# Store project-specific information
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [
            {
                "role": "user",
                "content": "This project uses Supabase for the database and Neo4j for the knowledge graph"
            },
            {
                "role": "assistant",
                "content": "Got it! I'll remember the tech stack for this project."
            }
        ],
        "user_id": "developer_123",
        "agent_id": "project_t6_mem0",
        "metadata": {
            "project": "t6_mem0_v2",
            "category": "tech_stack"
        }
    }
)
```

## Available Tools in Claude Code

Once configured, these tools are automatically available:

| Tool | Description | Use Case |
|------|-------------|----------|
| `add_memory` | Store information | Save preferences, project details, learned patterns |
| `search_memories` | Semantic search | Find relevant context from past conversations |
| `get_all_memories` | Get all memories | Review everything Claude knows about you |
| `update_memory` | Modify memory | Correct or update stored information |
| `delete_memory` | Remove specific memory | Clear outdated information |
| `delete_all_memories` | Clear all memories | Start fresh for a new project |

## Best Practices

### 1. Use Meaningful User IDs

```python
# Good - descriptive IDs
user_id = "developer_john_doe"
agent_id = "project_ecommerce_backend"

# Avoid - generic IDs
user_id = "user1"
agent_id = "agent"
```

### 2. Add Rich Metadata

```python
metadata = {
    "project": "t6_mem0_v2",
    "category": "bug_fix",
    "file": "mcp_server/http_server.py",
    "timestamp": "2025-10-15T10:30:00Z",
    "session_id": "abc-123-def"
}
```

### 3. Search Before Adding

```python
# Check if the information already exists
existing = await session.call_tool(
    "search_memories",
    arguments={
        "query": "Python coding style preferences",
        "user_id": "developer_123"
    }
)

# Only add if not found or needs updating
if not existing or needs_update:
    await session.call_tool("add_memory", ...)
```

### 4. Regular Cleanup

```python
# Periodically clean up old project memories
await session.call_tool(
    "delete_all_memories",
    arguments={
        "agent_id": "old_project_archived"
    }
)
```

## Troubleshooting

### MCP Server Won't Start

**Error**: `ModuleNotFoundError: No module named 'mcp_server'`

**Solution**: Ensure you're running from the correct directory:

```bash
cd /path/to/t6_mem0_v2
python -m mcp_server.main
```

### Database Connection Errors

**Error**: `Cannot connect to Supabase/Neo4j`

**Solution**: Verify services are running and credentials are correct:

```bash
# Test Neo4j
curl http://localhost:7474

# Test Supabase connection
psql $SUPABASE_CONNECTION_STRING -c "SELECT 1"
```

### Environment Variables Not Loading

**Error**: `KeyError: 'OPENAI_API_KEY'`

**Solution**: Load the `.env` file or set environment variables:

```bash
# Load from .env
export $(cat .env | xargs)

# Or set directly
export OPENAI_API_KEY=your_key_here
```
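
If you prefer to load the file from Python without extra dependencies, a small stdlib-only parser is enough for simple `KEY=value` files like the `.env` example above (note the `export $(cat .env | xargs)` trick breaks on values containing spaces). A sketch:

```python
import os

def load_dotenv(path=".env", env=None):
    """Populate `env` (default os.environ) from simple KEY=value lines.

    Skips blanks and comments; no multi-line values or variable expansion --
    just enough for the flat .env format shown earlier.
    """
    if env is None:
        env = os.environ
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip().strip("'\"")
    env.update(loaded)
    return loaded
```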

### Slow Response Times

**Issue**: Tool calls taking longer than expected

**Solutions**:
- Check network latency to Supabase
- Verify Neo4j indexes are created
- Reduce the `limit` parameter in search queries
- Consider caching frequently accessed memories

## Advanced Usage

### Custom Memory Categories

```python
# Define custom categories
CATEGORIES = {
    "preferences": "User coding preferences and style",
    "bugs": "Known bugs and their solutions",
    "architecture": "System design decisions",
    "dependencies": "Project dependencies and versions"
}

# Store with a category
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [...],
        "metadata": {
            "category": "architecture",
            "importance": "high"
        }
    }
)
```

### Multi-Agent Collaboration

```python
# Different agents for different purposes
AGENTS = {
    "code_reviewer": "Reviews code for best practices",
    "debugger": "Helps debug issues",
    "architect": "Provides architectural guidance"
}

# Store agent-specific knowledge
await session.call_tool(
    "add_memory",
    arguments={
        "messages": [...],
        "user_id": "developer_123",
        "agent_id": "code_reviewer",
        "metadata": {"role": "code_review"}
    }
)
```

### Session Management

```python
import uuid
from datetime import datetime

# Create session tracking
session_id = str(uuid.uuid4())
session_start = datetime.now().isoformat()

# Store with session context
metadata = {
    "session_id": session_id,
    "session_start": session_start,
    "context": "debugging_authentication"
}
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Complete reference for all 7 MCP tools
  </Card>
  <Card title="n8n Integration" icon="workflow" href="/examples/n8n">
    Use MCP in n8n workflows
  </Card>
</CardGroup>

docs/examples/n8n.mdx (new file, 371 lines)

---
title: 'n8n Integration'
description: 'Use T6 Mem0 v2 with n8n AI Agent workflows'
---

# n8n Integration Guide

Integrate the T6 Mem0 v2 MCP server with n8n AI Agent workflows to give your AI assistants persistent memory capabilities.

## Prerequisites

- Running n8n instance
- T6 Mem0 v2 MCP server deployed (see [Installation](/mcp/installation))
- OpenAI API key configured in n8n
- Both services on the same Docker network (recommended)

## Network Configuration

For Docker deployments, ensure n8n and the MCP server are on the same network:

```bash
# Find the MCP container IP
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Example output: 172.21.0.14

# Verify connectivity from the n8n network
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```
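
The same health probe can also be scripted with only the Python standard library, which can be handy in a startup or monitoring script. A sketch (the host shown in the comment is just the example IP from above):

```python
from urllib.request import urlopen

def health_url(host, port=8765):
    """Build the MCP server's /health endpoint URL."""
    return f"http://{host}:{port}/health"

def is_healthy(host, port=8765, timeout=3):
    """Return True if the /health endpoint answers with HTTP 200."""
    try:
        with urlopen(health_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, timeouts, DNS failures
        return False

# Example (uses the Docker network IP found above):
# print(is_healthy("172.21.0.14"))
```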

## Creating an AI Agent Workflow

### Step 1: Add Webhook or Chat Trigger

For manual testing, use **When chat message received**:

```json
{
  "name": "When chat message received",
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "parameters": {
    "options": {}
  }
}
```

For production webhooks, use **Webhook**:

```json
{
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "parameters": {
    "path": "mem0-chat",
    "httpMethod": "POST",
    "responseMode": "responseNode",
    "options": {}
  }
}
```

### Step 2: Add AI Agent Node

```json
{
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "promptType": "auto",
    "text": "={{ $json.chatInput }}",
    "hasOutputParser": false,
    "options": {
      "systemMessage": "You are a helpful AI assistant with persistent memory powered by mem0.\n\n⚠️ CRITICAL: You MUST use user_id=\"chat_user\" in EVERY memory tool call. Never ask the user for their user_id.\n\n📝 How to use memory tools:\n\n1. add_memory - Store new information\n   Example call: {\"messages\": [{\"role\": \"user\", \"content\": \"I love Python\"}, {\"role\": \"assistant\", \"content\": \"Noted!\"}], \"user_id\": \"chat_user\"}\n\n2. get_all_memories - Retrieve everything you know about the user\n   Example call: {\"user_id\": \"chat_user\"}\n   Use this when user asks \"what do you know about me?\" or similar\n\n3. search_memories - Find specific information\n   Example call: {\"query\": \"programming languages\", \"user_id\": \"chat_user\"}\n\n4. delete_all_memories - Clear all memories\n   Example call: {\"user_id\": \"chat_user\"}\n\n💡 Tips:\n- When user shares personal info, immediately call add_memory\n- When user asks about themselves, call get_all_memories\n- Always format messages as array with role and content\n- Be conversational and friendly\n\nRemember: ALWAYS use user_id=\"chat_user\" in every single tool call!"
    }
  }
}
```

### Step 3: Add MCP Client Tool

This is the critical node that connects to the mem0 MCP server:

```json
{
  "name": "MCP Client",
  "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
  "parameters": {
    "endpointUrl": "http://172.21.0.14:8765/mcp",
    "serverTransport": "httpStreamable",
    "authentication": "none",
    "include": "all"
  }
}
```

**Important Configuration**:
- **endpointUrl**: Use the Docker network IP of your MCP container (find it with `docker inspect t6-mem0-mcp`)
- **serverTransport**: Must be `httpStreamable` for HTTP/SSE transport
- **authentication**: Set to `none` (no authentication required)
- **include**: Set to `all` to expose all 7 memory tools

### Step 4: Add OpenAI Chat Model

```json
{
  "name": "OpenAI Chat Model",
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "parameters": {
    "model": "gpt-4o-mini",
    "options": {
      "temperature": 0.7
    }
  }
}
```

<Warning>
Make sure to use `lmChatOpenAi` (not `lmOpenAi`) for chat models like gpt-4o-mini. Using the wrong node type will cause errors.
</Warning>

### Step 5: Connect the Nodes

Connect the nodes in this order:

1. **Trigger** → **AI Agent**
2. **MCP Client** → **AI Agent** (to the Tools port)
3. **OpenAI Chat Model** → **AI Agent** (to the Model port)

## Complete Workflow Example

Here's a complete working workflow you can import:

```json
{
  "name": "AI Agent with Mem0",
  "nodes": [
    {
      "id": "webhook",
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300],
      "parameters": {
        "path": "mem0-chat",
        "httpMethod": "POST",
        "responseMode": "responseNode"
      }
    },
    {
      "id": "agent",
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [450, 300],
      "parameters": {
        "promptType": "auto",
        "text": "={{ $json.body.message }}",
        "options": {
          "systemMessage": "You are a helpful AI assistant with persistent memory.\n\nALWAYS use user_id=\"chat_user\" in every memory tool call."
        }
      }
    },
    {
      "id": "mcp",
      "name": "MCP Client",
      "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
      "position": [450, 150],
      "parameters": {
        "endpointUrl": "http://172.21.0.14:8765/mcp",
        "serverTransport": "httpStreamable",
        "authentication": "none",
        "include": "all"
      }
    },
    {
      "id": "openai",
      "name": "OpenAI Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [450, 450],
      "parameters": {
        "model": "gpt-4o-mini",
        "options": {"temperature": 0.7}
      }
    },
    {
      "id": "respond",
      "name": "Respond to Webhook",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [650, 300],
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { \"response\": $json.output } }}"
      }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "AI Agent", "type": "main", "index": 0}]]
    },
    "AI Agent": {
      "main": [[{"node": "Respond to Webhook", "type": "main", "index": 0}]]
    },
    "MCP Client": {
      "ai_tool": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]
    }
  },
  "active": false,
  "settings": {},
  "tags": []
}
```

## Testing the Workflow

### Manual Testing

1. **Activate** the workflow in the n8n UI
2. Open the chat interface (if using the chat trigger)
3. Try these test messages:

```
Test 1: Store memory
User: "My name is Alice and I love Python programming"
Expected: Agent confirms storing the information

Test 2: Retrieve memories
User: "What do you know about me?"
Expected: Agent lists stored memories about Alice and Python

Test 3: Search
User: "What programming languages do I like?"
Expected: Agent finds and mentions Python

Test 4: Add more
User: "I also enjoy hiking on weekends"
Expected: Agent stores the new hobby

Test 5: Verify
User: "Tell me everything you remember"
Expected: Agent lists all memories including name, Python, and hiking
```

### Webhook Testing

For production webhook workflows:

```bash
# Activate the workflow first in the n8n UI

# Send a test message
curl -X POST "https://your-n8n-domain.com/webhook/mem0-chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "My name is Bob and I work as a software engineer"
  }'

# Expected response
{
  "response": "Got it, Bob! I've noted that you work as a software engineer..."
}
```
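
The same call can be made from Python with only the standard library. A sketch (the URL is the placeholder from the `curl` example; substitute your n8n domain):

```python
import json
from urllib.request import Request, urlopen

def build_request(url, message):
    """Build a POST request matching the webhook's expected JSON body."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage:
# req = build_request("https://your-n8n-domain.com/webhook/mem0-chat",
#                     "My name is Bob and I work as a software engineer")
# with urlopen(req) as resp:
#     print(json.load(resp)["response"])
```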

## Troubleshooting

### MCP Client Can't Connect

**Error**: "Failed to connect to MCP server"

**Solutions**:

1. Verify the MCP server is running:
```bash
curl http://172.21.0.14:8765/health
```

2. Check Docker network connectivity:
```bash
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```

3. Verify both containers are on the same network:
```bash
docker network inspect localai
```

### Agent Asks for User ID

**Error**: Agent responds "Could you please provide me with your user ID?"

**Solution**: Update the system message to explicitly include user_id in examples:

```
CRITICAL: You MUST use user_id="chat_user" in EVERY memory tool call.

Example: {"messages": [...], "user_id": "chat_user"}
```

### Webhook Not Registered

**Error**: `{"code":404,"message":"The requested webhook is not registered"}`

**Solutions**:

1. Activate the workflow in the n8n UI
2. Check that the webhook path matches your URL
3. Verify the workflow is saved and active

### Wrong Model Type Error

**Error**: "Your chosen OpenAI model is a chat model and not a text-in/text-out LLM"

**Solution**: Use the `@n8n/n8n-nodes-langchain.lmChatOpenAi` node type, not `lmOpenAi`

## Advanced Configuration

### Dynamic User IDs

To use dynamic user IDs based on webhook input:

```javascript
// In the AI Agent system message
"Use user_id from the webhook data: user_id=\"{{ $json.body.user_id }}\""

// Webhook payload
{
  "user_id": "user_12345",
  "message": "Remember this information"
}
```

### Multiple Agents

To support multiple agents with separate memories:

```javascript
// System message
"You are Agent Alpha. Use agent_id=\"agent_alpha\" in all memory calls."

// Tool call example
{
  "messages": [...],
  "agent_id": "agent_alpha",
  "user_id": "user_123"
}
```

### Custom Metadata

Add context to stored memories:

```javascript
// In the add_memory call
{
  "messages": [...],
  "user_id": "chat_user",
  "metadata": {
    "source": "webhook",
    "session_id": "{{ $json.session_id }}",
    "timestamp": "{{ $now }}"
  }
}
```

## Next Steps

<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Detailed documentation for all MCP tools
  </Card>
  <Card title="Claude Code" icon="code" href="/examples/claude-code">
    Use MCP with Claude Code
  </Card>
</CardGroup>