Add Docker networking support for N8N and container integration
- Added docker-compose.api-localai.yml for Docker network integration
- Updated config.py to support dynamic Supabase connection strings via environment variables
- Enhanced documentation with Docker network deployment instructions
- Added specific N8N workflow integration guidance
- Solved Docker networking issues for container-to-container communication

Key improvements:

* Container-to-container API access for N8N workflows
* Automatic service dependency resolution (Ollama, Supabase)
* Comprehensive deployment options for different use cases
* Production-ready Docker network configuration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -101,7 +101,7 @@ graph TB
| **Supabase** | ✅ Ready | Self-hosted database with pgvector on localhost:8000 |
| **Ollama** | ✅ Ready | 21+ local models available on localhost:11434 |
| **Mem0 Core** | ✅ Ready | Memory management system v0.1.115 |
- | **REST API** | ✅ Ready | FastAPI server with full CRUD, auth, and testing on localhost:8080 |
+ | **REST API** | ✅ Ready | FastAPI server with full CRUD, auth, testing, and Docker networking support |
## Getting Started
@@ -97,6 +97,28 @@ Mem0 provides a comprehensive REST API server built with FastAPI. The implementa
The Docker deployment automatically configures external access on `0.0.0.0:8080`.
</Note>
</Tab>
<Tab title="Docker Network Integration ✅ For N8N & Container Services">
For integration with N8N workflows or other containerized services:
```bash
# Deploy to existing Docker network (e.g., localai)
docker-compose -f docker-compose.api-localai.yml up -d
# Find the container IP address
docker inspect mem0-api-localai --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```
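
The inspect output above can be captured into a shell variable so later commands don't hard-code the IP. A minimal sketch, assuming a running `mem0-api-localai` container and access to the Docker daemon (`MEM0_IP` is an illustrative variable name):

```bash
# Capture the container IP once and reuse it in later commands
# (requires a running mem0-api-localai container)
MEM0_IP=$(docker inspect mem0-api-localai \
  --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
echo "API base URL: http://${MEM0_IP}:8080"
```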
**Usage in N8N HTTP Request Node:**
- **URL**: `http://172.21.0.17:8080/v1/memories` (use actual container IP)
- **Method**: POST
- **Headers**: `Authorization: Bearer mem0_dev_key_123456789`
- **Body**: JSON object with `messages`, `user_id`, and `metadata`
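
For reference, the request the N8N node sends can be sketched as a curl call. The IP and dev key are the example values above; the message content and metadata values are illustrative, so substitute your own:

```bash
# Example POST to the memories endpoint (uses the example container IP
# and dev API key shown above; adjust both for your deployment)
curl -s --max-time 5 -X POST "http://172.21.0.17:8080/v1/memories" \
  -H "Authorization: Bearer mem0_dev_key_123456789" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Remember that I prefer dark mode"}],
    "user_id": "n8n_user",
    "metadata": {"source": "n8n_workflow"}
  }'
```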
<Note>
**Perfect for Docker ecosystems!** Automatically handles Ollama and Supabase connections within the same network. Use container IP addresses for reliable service-to-service communication.
</Note>
</Tab>
</Tabs>
## API Endpoints
@@ -75,9 +75,28 @@ Our Phase 2 implementation provides a production-ready REST API with two deploym
The Docker deployment automatically configures the API to accept external connections on `0.0.0.0:8080`.
</Note>
</Tab>
<Tab title="Docker Network Integration ✅ For N8N/Containers">
For integration with N8N or other Docker containers on custom networks:
```bash
# Deploy to localai network (or your custom network)
docker-compose -f docker-compose.api-localai.yml up -d
# Find container IP for connections
docker inspect mem0-api-localai --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```
- **Access:** `http://CONTAINER_IP:8080` (from within the Docker network)
- **Example:** `http://172.21.0.17:8080`
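
One way to verify container-to-container access is to run a throwaway curl container on the same network. A sketch, assuming the network is named `localai` and using the public `curlimages/curl` image:

```bash
# Hit the API's /docs endpoint from another container on the same network;
# an HTTP 200 status code indicates the server is reachable
docker run --rm --network localai curlimages/curl \
  -s -o /dev/null -w '%{http_code}\n' http://172.21.0.17:8080/docs
```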
<Note>
Perfect for N8N workflows and Docker-to-Docker communication. Automatically handles service dependencies like Ollama and Supabase connections.
</Note>
</Tab>
</Tabs>
- Both options provide:
+ All deployment options provide:
- Interactive documentation at `/docs`
- Full authentication and rate limiting
- Comprehensive error handling
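
A quick smoke test against the local deployment might look like the following (assumes the `localhost:8080` deployment described above):

```bash
# Expect an HTTP 200 status code if the API server is up
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/docs
```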