t6_mem0_v2/docs/examples/n8n.mdx
Commit 1998bef6f4 (Claude Code): Add MCP HTTP/SSE server and complete n8n integration
Major Changes:
- Implemented MCP HTTP/SSE transport server for n8n and web clients
- Created mcp_server/http_server.py with FastAPI for JSON-RPC 2.0 over HTTP
- Added health check endpoint (/health) for container monitoring
- Refactored mcp-server/ to mcp_server/ (Python module structure)
- Updated Dockerfile.mcp to run HTTP server with health checks

MCP Server Features:
- 7 memory tools exposed via MCP, covering add, search, get, update, and delete operations
- HTTP/SSE transport on port 8765 for n8n integration
- stdio transport for Claude Code integration
- JSON-RPC 2.0 protocol implementation
- CORS support for web clients

n8n Integration:
- Successfully tested with AI Agent workflows
- MCP Client Tool configuration documented
- Working webhook endpoint tested and verified
- System prompt optimized for automatic user_id usage

Documentation:
- Created comprehensive Mintlify documentation site
- Added docs/mcp/introduction.mdx - MCP server overview
- Added docs/mcp/installation.mdx - Installation guide
- Added docs/mcp/tools.mdx - Complete tool reference
- Added docs/examples/n8n.mdx - n8n integration guide
- Added docs/examples/claude-code.mdx - Claude Code setup
- Updated README.md with MCP HTTP server info
- Updated roadmap to mark Phase 1 as complete

Bug Fixes:
- Fixed synchronized delete operations across Supabase and Neo4j
- Updated memory_service.py with proper error handling
- Fixed Neo4j connection issues in delete operations

Configuration:
- Added MCP_HOST and MCP_PORT environment variables
- Updated .env.example with MCP server configuration
- Updated docker-compose.yml with MCP container health checks

Testing:
- Added test scripts for MCP HTTP endpoint verification
- Created test workflows in n8n
- Verified all 7 memory tools working correctly
- Tested synchronized operations across both stores

Version: 1.0.0
Status: Phase 1 Complete - Production Ready

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-15 13:56:41 +02:00

---
title: 'n8n Integration'
description: 'Use T6 Mem0 v2 with n8n AI Agent workflows'
---
# n8n Integration Guide
Integrate the T6 Mem0 v2 MCP server with n8n AI Agent workflows to give your AI assistants persistent memory capabilities.
## Prerequisites
- Running n8n instance
- T6 Mem0 v2 MCP server deployed (see [Installation](/mcp/installation))
- OpenAI API key configured in n8n
- Both services on the same Docker network (recommended)
## Network Configuration
For Docker deployments, ensure n8n and the MCP server are on the same network:
```bash
# Find MCP container IP
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Example output: 172.21.0.14

# Verify connectivity from n8n network
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```
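If the two containers are not yet on a shared network, attach the MCP container to n8n's network before wiring anything up. A minimal sketch, assuming the network is named `localai` as in the check above:
```bash
# Attach the MCP container to the shared network (name assumed to be "localai")
docker network connect localai t6-mem0-mcp

# Confirm the container now has an address on that network
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
```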
## Creating an AI Agent Workflow
### Step 1: Add Webhook or Chat Trigger
For manual testing, use **When chat message received**:
```json
{
  "name": "When chat message received",
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "parameters": {
    "options": {}
  }
}
```
For production webhooks, use **Webhook**:
```json
{
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "parameters": {
    "path": "mem0-chat",
    "httpMethod": "POST",
    "responseMode": "responseNode",
    "options": {}
  }
}
```
### Step 2: Add AI Agent Node
```json
{
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "promptType": "auto",
    "text": "={{ $json.chatInput }}",
    "hasOutputParser": false,
    "options": {
      "systemMessage": "You are a helpful AI assistant with persistent memory powered by mem0.\n\n⚠ CRITICAL: You MUST use user_id=\"chat_user\" in EVERY memory tool call. Never ask the user for their user_id.\n\n📝 How to use memory tools:\n\n1. add_memory - Store new information\n Example call: {\"messages\": [{\"role\": \"user\", \"content\": \"I love Python\"}, {\"role\": \"assistant\", \"content\": \"Noted!\"}], \"user_id\": \"chat_user\"}\n\n2. get_all_memories - Retrieve everything you know about the user\n Example call: {\"user_id\": \"chat_user\"}\n Use this when user asks \"what do you know about me?\" or similar\n\n3. search_memories - Find specific information\n Example call: {\"query\": \"programming languages\", \"user_id\": \"chat_user\"}\n\n4. delete_all_memories - Clear all memories\n Example call: {\"user_id\": \"chat_user\"}\n\n💡 Tips:\n- When user shares personal info, immediately call add_memory\n- When user asks about themselves, call get_all_memories\n- Always format messages as array with role and content\n- Be conversational and friendly\n\nRemember: ALWAYS use user_id=\"chat_user\" in every single tool call!"
    }
  }
}
```
### Step 3: Add MCP Client Tool
This is the critical node that connects to the mem0 MCP server:
```json
{
  "name": "MCP Client",
  "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
  "parameters": {
    "endpointUrl": "http://172.21.0.14:8765/mcp",
    "serverTransport": "httpStreamable",
    "authentication": "none",
    "include": "all"
  }
}
```
**Important Configuration**:
- **endpointUrl**: Use the Docker network IP of your MCP container (find with `docker inspect t6-mem0-mcp`)
- **serverTransport**: Must be `httpStreamable` for HTTP/SSE transport
- **authentication**: Set to `none` (no authentication required)
- **include**: Set to `all` to expose all 7 memory tools
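Before wiring the node into a workflow, it can help to confirm that the endpoint speaks JSON-RPC at all. A quick check, assuming the server implements the standard MCP `tools/list` method (adjust the IP to your container's address):
```bash
# Ask the MCP server which tools it exposes (standard MCP tools/list request)
curl -s -X POST http://172.21.0.14:8765/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
```
A successful response is a JSON-RPC result listing the 7 memory tools with their input schemas.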
### Step 4: Add OpenAI Chat Model
```json
{
  "name": "OpenAI Chat Model",
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "parameters": {
    "model": "gpt-4o-mini",
    "options": {
      "temperature": 0.7
    }
  }
}
```
<Warning>
Make sure to use `lmChatOpenAi` (not `lmOpenAi`) for chat models like gpt-4o-mini. Using the wrong node type will cause errors.
</Warning>
### Step 5: Connect the Nodes
Connect nodes in this order:
1. **Trigger** → **AI Agent**
2. **MCP Client** → **AI Agent** (to Tools port)
3. **OpenAI Chat Model** → **AI Agent** (to Model port)
## Complete Workflow Example
Here's a complete working workflow you can import:
```json
{
  "name": "AI Agent with Mem0",
  "nodes": [
    {
      "id": "webhook",
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300],
      "parameters": {
        "path": "mem0-chat",
        "httpMethod": "POST",
        "responseMode": "responseNode"
      }
    },
    {
      "id": "agent",
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [450, 300],
      "parameters": {
        "promptType": "auto",
        "text": "={{ $json.body.message }}",
        "options": {
          "systemMessage": "You are a helpful AI assistant with persistent memory.\n\nALWAYS use user_id=\"chat_user\" in every memory tool call."
        }
      }
    },
    {
      "id": "mcp",
      "name": "MCP Client",
      "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
      "position": [450, 150],
      "parameters": {
        "endpointUrl": "http://172.21.0.14:8765/mcp",
        "serverTransport": "httpStreamable",
        "authentication": "none",
        "include": "all"
      }
    },
    {
      "id": "openai",
      "name": "OpenAI Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [450, 450],
      "parameters": {
        "model": "gpt-4o-mini",
        "options": {"temperature": 0.7}
      }
    },
    {
      "id": "respond",
      "name": "Respond to Webhook",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [650, 300],
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { \"response\": $json.output } }}"
      }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "AI Agent", "type": "main", "index": 0}]]
    },
    "AI Agent": {
      "main": [[{"node": "Respond to Webhook", "type": "main", "index": 0}]]
    },
    "MCP Client": {
      "ai_tool": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]
    }
  },
  "active": false,
  "settings": {},
  "tags": []
}
```
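You can import this JSON through the n8n UI, or from a shell inside the n8n container using the n8n CLI. The invocation below is a sketch assuming a standard install; `mem0-workflow.json` is a placeholder filename:
```bash
# Save the workflow above as mem0-workflow.json, then import it
n8n import:workflow --input=mem0-workflow.json
```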
## Testing the Workflow
### Manual Testing
1. **Activate** the workflow in the n8n UI
2. Open the chat interface (if using chat trigger)
3. Try these test messages:
```
Test 1: Store memory
User: "My name is Alice and I love Python programming"
Expected: Agent confirms storing the information

Test 2: Retrieve memories
User: "What do you know about me?"
Expected: Agent lists stored memories about Alice and Python

Test 3: Search
User: "What programming languages do I like?"
Expected: Agent finds and mentions Python

Test 4: Add more
User: "I also enjoy hiking on weekends"
Expected: Agent stores the new hobby

Test 5: Verify
User: "Tell me everything you remember"
Expected: Agent lists all memories including name, Python, and hiking
```
### Webhook Testing
For production webhook workflows:
```bash
# Activate the workflow first in the n8n UI

# Send a test message
curl -X POST "https://your-n8n-domain.com/webhook/mem0-chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "My name is Bob and I work as a software engineer"
  }'

# Expected response
{
  "response": "Got it, Bob! I've noted that you work as a software engineer..."
}
```
## Troubleshooting
### MCP Client Can't Connect
**Error**: "Failed to connect to MCP server"
**Solutions**:
1. Verify the MCP server is running:
   ```bash
   curl http://172.21.0.14:8765/health
   ```
2. Check Docker network connectivity from the n8n side:
   ```bash
   docker run --rm --network localai alpine/curl:latest \
     curl -s http://172.21.0.14:8765/health
   ```
3. Verify both containers are on the same network:
   ```bash
   docker network inspect localai
   ```
### Agent Asks for User ID
**Error**: Agent responds "Could you please provide me with your user ID?"
**Solution**: Update the system message to explicitly include user_id in the examples:
```
CRITICAL: You MUST use user_id="chat_user" in EVERY memory tool call.
Example: {"messages": [...], "user_id": "chat_user"}
```
### Webhook Not Registered
**Error**: `{"code":404,"message":"The requested webhook is not registered"}`
**Solutions**:
1. Activate the workflow in the n8n UI (you can confirm activation from the command line, as shown below)
2. Check that the webhook path matches your URL
3. Verify the workflow is saved and active
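If the n8n public API is enabled on your instance, you can also confirm the workflow is active without opening the UI. A minimal check, assuming you have generated an n8n API key:
```bash
# List active workflows via the n8n public API (requires an API key)
curl -s "https://your-n8n-domain.com/api/v1/workflows?active=true" \
  -H "X-N8N-API-KEY: $N8N_API_KEY"
```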
### Wrong Model Type Error
**Error**: "Your chosen OpenAI model is a chat model and not a text-in/text-out LLM"
**Solution**: Use `@n8n/n8n-nodes-langchain.lmChatOpenAi` node type, not `lmOpenAi`
## Advanced Configuration
### Dynamic User IDs
To use dynamic user IDs based on webhook input:
```javascript
// In AI Agent system message
"Use user_id from the webhook data: user_id=\"{{ $json.body.user_id }}\""

// Webhook payload
{
  "user_id": "user_12345",
  "message": "Remember this information"
}
```
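A quick way to exercise this pattern is to call the webhook from earlier with a per-caller user_id:
```bash
# Each caller supplies its own user_id, so memories stay separated per user
curl -X POST "https://your-n8n-domain.com/webhook/mem0-chat" \
  -H "Content-Type: application/json" \
  -d '{"user_id": "user_12345", "message": "Remember this information"}'
```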
### Multiple Agents
To support multiple agents with separate memories:
```javascript
// System message
"You are Agent Alpha. Use agent_id=\"agent_alpha\" in all memory calls."

// Tool call example
{
  "messages": [...],
  "agent_id": "agent_alpha",
  "user_id": "user_123"
}
```
### Custom Metadata
Add context to stored memories:
```javascript
// In add_memory call
{
  "messages": [...],
  "user_id": "chat_user",
  "metadata": {
    "source": "webhook",
    "session_id": "{{ $json.session_id }}",
    "timestamp": "{{ $now }}"
  }
}
```
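To verify metadata end to end without going through n8n, you can invoke the tool directly over JSON-RPC. This is a sketch assuming the server implements the standard MCP `tools/call` method; the `session_id` value is a placeholder:
```bash
# Call add_memory directly with custom metadata (standard MCP tools/call request)
curl -s -X POST http://172.21.0.14:8765/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "add_memory",
      "arguments": {
        "messages": [{"role": "user", "content": "I prefer dark mode"}],
        "user_id": "chat_user",
        "metadata": {"source": "manual_test", "session_id": "sess_001"}
      }
    }
  }'
```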
## Next Steps
<CardGroup cols={2}>
  <Card title="Tool Reference" icon="wrench" href="/mcp/tools">
    Detailed documentation for all MCP tools
  </Card>
  <Card title="Claude Code" icon="code" href="/examples/claude-code">
    Use MCP with Claude Code
  </Card>
</CardGroup>