---
title: 'n8n Integration'
description: 'Use T6 Mem0 v2 with n8n AI Agent workflows'
---
# n8n Integration Guide
Integrate the T6 Mem0 v2 MCP server with n8n AI Agent workflows to give your AI assistants persistent memory capabilities.
## Prerequisites
- Running n8n instance
- T6 Mem0 v2 MCP server deployed (see [Installation](/mcp/installation))
- OpenAI API key configured in n8n
- Both services on the same Docker network (recommended)
## Network Configuration
For Docker deployments, ensure n8n and the MCP server are on the same network:
```bash
# Find MCP container IP
docker inspect t6-mem0-mcp --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Example output: 172.21.0.14
# Verify connectivity from n8n network
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```
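If the two containers are not already on a shared network, you can attach them manually. A minimal sketch, assuming the network is named `localai` and the containers are named `t6-mem0-mcp` and `n8n` (adjust the names to match your deployment):

```bash
# Create the shared network if it does not exist yet
docker network create localai 2>/dev/null || true

# Attach both containers (container names assumed; check with `docker ps`)
docker network connect localai t6-mem0-mcp
docker network connect localai n8n
```

After attaching, re-run the health check above to confirm the MCP server is reachable from the n8n side.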
## Creating an AI Agent Workflow
### Step 1: Add Webhook or Chat Trigger
For manual testing, use **When chat message received**:
```json
{
  "name": "When chat message received",
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "parameters": {
    "options": {}
  }
}
```
For production webhooks, use **Webhook**:
```json
{
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "parameters": {
    "path": "mem0-chat",
    "httpMethod": "POST",
    "responseMode": "responseNode",
    "options": {}
  }
}
```
### Step 2: Add AI Agent Node
```json
{
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "promptType": "auto",
    "text": "={{ $json.chatInput }}",
    "hasOutputParser": false,
    "options": {
      "systemMessage": "You are a helpful AI assistant with persistent memory powered by mem0.\n\n⚠️ CRITICAL: You MUST use user_id=\"chat_user\" in EVERY memory tool call. Never ask the user for their user_id.\n\n📋 How to use memory tools:\n\n1. add_memory - Store new information\n   Example call: {\"messages\": [{\"role\": \"user\", \"content\": \"I love Python\"}, {\"role\": \"assistant\", \"content\": \"Noted!\"}], \"user_id\": \"chat_user\"}\n\n2. get_all_memories - Retrieve everything you know about the user\n   Example call: {\"user_id\": \"chat_user\"}\n   Use this when user asks \"what do you know about me?\" or similar\n\n3. search_memories - Find specific information\n   Example call: {\"query\": \"programming languages\", \"user_id\": \"chat_user\"}\n\n4. delete_all_memories - Clear all memories\n   Example call: {\"user_id\": \"chat_user\"}\n\n💡 Tips:\n- When user shares personal info, immediately call add_memory\n- When user asks about themselves, call get_all_memories\n- Always format messages as array with role and content\n- Be conversational and friendly\n\nRemember: ALWAYS use user_id=\"chat_user\" in every single tool call!"
    }
  }
}
```
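The tool-call argument shapes the system message describes can be sketched in Python. The helper names below (`build_add_memory`, `build_search_memories`) are illustrative only, not part of the MCP server; the sketch just makes the expected payload structure explicit:

```python
# Illustrative helpers (not part of the mem0 MCP API) showing the
# argument shapes the system message instructs the agent to use.
USER_ID = "chat_user"  # fixed ID mandated by the system message


def build_add_memory(user_text: str, assistant_text: str) -> dict:
    """Payload for add_memory: a messages array plus the fixed user_id."""
    return {
        "messages": [
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ],
        "user_id": USER_ID,
    }


def build_search_memories(query: str) -> dict:
    """Payload for search_memories: a query string plus the fixed user_id."""
    return {"query": query, "user_id": USER_ID}
```

Note that every builder hard-codes `user_id`, mirroring the "never ask the user for their user_id" rule above.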
### Step 3: Add MCP Client Tool
This is the critical node that connects to the mem0 MCP server:
```json
{
  "name": "MCP Client",
  "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
  "parameters": {
    "endpointUrl": "http://172.21.0.14:8765/mcp",
    "serverTransport": "httpStreamable",
    "authentication": "none",
    "include": "all"
  }
}
```
**Important Configuration**:
- **endpointUrl**: Use the Docker network IP of your MCP container (find with `docker inspect t6-mem0-mcp`)
- **serverTransport**: Must be `httpStreamable` for HTTP/SSE transport
- **authentication**: Set to `none` (no authentication required)
- **include**: Set to `all` to expose all 7 memory tools
### Step 4: Add OpenAI Chat Model
```json
{
  "name": "OpenAI Chat Model",
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "parameters": {
    "model": "gpt-4o-mini",
    "options": {
      "temperature": 0.7
    }
  }
}
```
Make sure to use `lmChatOpenAi` (not `lmOpenAi`) for chat models like gpt-4o-mini. Using the wrong node type will cause errors.
### Step 5: Connect the Nodes
Connect nodes in this order:
1. **Trigger** ā **AI Agent**
2. **MCP Client** ā **AI Agent** (to Tools port)
3. **OpenAI Chat Model** ā **AI Agent** (to Model port)
## Complete Workflow Example
Here's a complete working workflow you can import:
```json
{
  "name": "AI Agent with Mem0",
  "nodes": [
    {
      "id": "webhook",
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300],
      "parameters": {
        "path": "mem0-chat",
        "httpMethod": "POST",
        "responseMode": "responseNode"
      }
    },
    {
      "id": "agent",
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "position": [450, 300],
      "parameters": {
        "promptType": "auto",
        "text": "={{ $json.body.message }}",
        "options": {
          "systemMessage": "You are a helpful AI assistant with persistent memory.\n\nALWAYS use user_id=\"chat_user\" in every memory tool call."
        }
      }
    },
    {
      "id": "mcp",
      "name": "MCP Client",
      "type": "@n8n/n8n-nodes-langchain.toolMcpClient",
      "position": [450, 150],
      "parameters": {
        "endpointUrl": "http://172.21.0.14:8765/mcp",
        "serverTransport": "httpStreamable",
        "authentication": "none",
        "include": "all"
      }
    },
    {
      "id": "openai",
      "name": "OpenAI Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [450, 450],
      "parameters": {
        "model": "gpt-4o-mini",
        "options": {"temperature": 0.7}
      }
    },
    {
      "id": "respond",
      "name": "Respond to Webhook",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [650, 300],
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { \"response\": $json.output } }}"
      }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "AI Agent", "type": "main", "index": 0}]]
    },
    "AI Agent": {
      "main": [[{"node": "Respond to Webhook", "type": "main", "index": 0}]]
    },
    "MCP Client": {
      "ai_tool": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]
    }
  },
  "active": false,
  "settings": {},
  "tags": []
}
```
## Testing the Workflow
### Manual Testing
1. **Activate** the workflow in n8n UI
2. Open the chat interface (if using chat trigger)
3. Try these test messages:
```
Test 1: Store memory
User: "My name is Alice and I love Python programming"
Expected: Agent confirms storing the information

Test 2: Retrieve memories
User: "What do you know about me?"
Expected: Agent lists stored memories about Alice and Python

Test 3: Search
User: "What programming languages do I like?"
Expected: Agent finds and mentions Python

Test 4: Add more
User: "I also enjoy hiking on weekends"
Expected: Agent stores the new hobby

Test 5: Verify
User: "Tell me everything you remember"
Expected: Agent lists all memories including name, Python, and hiking
```
### Webhook Testing
For production webhook workflows:
```bash
# Activate the workflow first in n8n UI
# Send test message
curl -X POST "https://your-n8n-domain.com/webhook/mem0-chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "My name is Bob and I work as a software engineer"
  }'

# Expected response
{
  "response": "Got it, Bob! I've noted that you work as a software engineer..."
}
```
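The curl call above can also be driven from code. A minimal Python client sketch using only the standard library, assuming the `{"message": ...}` request body and `{"response": ...}` reply shape shown above (the webhook URL is a placeholder; the network call itself is left commented out):

```python
import json
from urllib import request

WEBHOOK_URL = "https://your-n8n-domain.com/webhook/mem0-chat"  # placeholder


def build_chat_request(message: str) -> request.Request:
    """Build the POST request matching the curl example above."""
    body = json.dumps({"message": message}).encode("utf-8")
    return request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_chat_response(raw: bytes) -> str:
    """Extract the agent's reply from the {"response": ...} body."""
    return json.loads(raw)["response"]


# Actually sending the request would look like:
#   with request.urlopen(build_chat_request("Hello")) as resp:
#       print(parse_chat_response(resp.read()))
```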
## Troubleshooting
### MCP Client Can't Connect
**Error**: "Failed to connect to MCP server"
**Solutions**:
1. Verify MCP server is running:
```bash
curl http://172.21.0.14:8765/health
```
2. Check Docker network connectivity:
```bash
docker run --rm --network localai alpine/curl:latest \
  curl -s http://172.21.0.14:8765/health
```
3. Verify both containers are on same network:
```bash
docker network inspect localai
```
### Agent Asks for User ID
**Error**: Agent responds "Could you please provide me with your user ID?"
**Solution**: Update system message to explicitly include user_id in examples:
```
CRITICAL: You MUST use user_id="chat_user" in EVERY memory tool call.
Example: {"messages": [...], "user_id": "chat_user"}
```
### Webhook Not Registered
**Error**: `{"code":404,"message":"The requested webhook is not registered"}`
**Solutions**:
1. Activate the workflow in n8n UI
2. Check webhook path matches your URL
3. Verify workflow is saved and active
### Wrong Model Type Error
**Error**: "Your chosen OpenAI model is a chat model and not a text-in/text-out LLM"
**Solution**: Use `@n8n/n8n-nodes-langchain.lmChatOpenAi` node type, not `lmOpenAi`
## Advanced Configuration
### Dynamic User IDs
To use dynamic user IDs based on webhook input:
```javascript
// In AI Agent system message
"Use user_id from the webhook data: user_id=\"{{ $json.body.user_id }}\""
// Webhook payload
{
"user_id": "user_12345",
"message": "Remember this information"
}
```
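The same resolution logic can be sketched in Python. `resolve_user_id` is a hypothetical helper (n8n evaluates the `{{ $json.body.user_id }}` expression for you), but it shows the fallback behavior you likely want when the webhook payload omits `user_id`:

```python
def resolve_user_id(webhook_body: dict, default: str = "chat_user") -> str:
    """Pick the memory user_id from the webhook payload, falling back
    to the fixed ID used elsewhere in this guide."""
    user_id = webhook_body.get("user_id")
    # Treat a missing or empty ID as anonymous traffic
    return user_id if user_id else default
```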
### Multiple Agents
To support multiple agents with separate memories:
```javascript
// System message
"You are Agent Alpha. Use agent_id=\"agent_alpha\" in all memory calls."
// Tool call example
{
  "messages": [...],
  "agent_id": "agent_alpha",
  "user_id": "user_123"
}
```
### Custom Metadata
Add context to stored memories:
```javascript
// In add_memory call
{
  "messages": [...],
  "user_id": "chat_user",
  "metadata": {
    "source": "webhook",
    "session_id": "{{ $json.session_id }}",
    "timestamp": "{{ $now }}"
  }
}
```
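Enriching a payload with metadata can be sketched as a small Python helper. `with_metadata` is illustrative, not part of the mem0 API, and it uses a Unix timestamp where the n8n example above uses the `$now` expression:

```python
import time


def with_metadata(payload: dict, session_id: str, source: str = "webhook") -> dict:
    """Return a copy of an add_memory payload with metadata attached.
    The metadata keys mirror the example above; the helper itself is
    illustrative, not part of the mem0 API."""
    enriched = dict(payload)  # shallow copy so the original stays untouched
    enriched["metadata"] = {
        "source": source,
        "session_id": session_id,
        "timestamp": int(time.time()),
    }
    return enriched
```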
## Next Steps
- **Tools Reference**: detailed documentation for all MCP tools
- **Claude Code**: use MCP with Claude Code