---
title: REST API Server
icon: "server"
iconType: "solid"
---
<Note>
**Phase 2 Complete ✅** - The REST API implementation is fully functional and production-ready as of 2025-07-31.
</Note>
Mem0 provides a comprehensive REST API server built with FastAPI. The implementation features complete CRUD operations, authentication, rate limiting, and robust error handling. The API includes OpenAPI documentation accessible at `/docs` when the server is running.
<Frame caption="APIs supported by Mem0 REST API Server">
<img src="/images/rest-api-server.png"/>
</Frame>
## Features ✅ Complete
- **Memory Management:** Full CRUD operations (Create, Read, Update, Delete)
- **User Management:** User-specific memory operations and statistics
- **Search & Retrieval:** Advanced search with filtering and pagination
- **Authentication:** API key-based authentication with Bearer tokens
- **Rate Limiting:** Configurable rate limiting (100 req/min default)
- **Admin Operations:** Protected admin endpoints for system management
- **Health Monitoring:** Health checks and system status endpoints
- **Error Handling:** Comprehensive error responses with structured format
- **OpenAPI Documentation:** Interactive API documentation at `/docs`
- **Async Operations:** Non-blocking operations for scalability
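
Because the default limit is 100 requests per minute, clients should be prepared for throttled responses. Below is a minimal retry sketch; it assumes the server signals throttling with HTTP 429 (the conventional status code for rate limiting, not confirmed by this page), so adjust to the actual error responses your deployment returns:

```python
import time


def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff delay in seconds: 1, 2, 4, ... capped at `cap`."""
    return min(cap, base * (2 ** attempt))


def request_with_retry(session, method, url, max_attempts=4, **kwargs):
    """Retry a request while the server signals rate limiting (assumed HTTP 429)."""
    resp = None
    for attempt in range(max_attempts):
        resp = session.request(method, url, **kwargs)
        if resp.status_code != 429:
            return resp
        time.sleep(backoff_delay(attempt))
    return resp
```

`request_with_retry` takes a `requests.Session`, so the same helper works for any of the endpoints listed below.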
## Running Locally
<Tabs>
<Tab title="Direct Python Execution ✅ Current Implementation">
Our Phase 2 implementation provides a ready-to-use FastAPI server.
1. Ensure you have the required dependencies installed:
```bash
pip install fastapi uvicorn "mem0ai[all]" posthog qdrant-client vecs ollama
```
2. Configure your environment (optional; sensible defaults are provided):
```bash
export API_HOST=localhost
export API_PORT=8080
export API_KEYS=mem0_dev_key_123456789,mem0_custom_key_123
export ADMIN_API_KEYS=mem0_admin_key_111222333
export RATE_LIMIT_REQUESTS=100
export RATE_LIMIT_WINDOW_MINUTES=1
```
3. Start the server:
```bash
python start_api.py
```
4. Access the API at **http://localhost:8080**
5. View documentation at **http://localhost:8080/docs**
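
The sketch below shows how settings like those above could be read with stdlib `os.environ` lookups. This is a hypothetical illustration of the variables' expected shapes (comma-separated key lists, integer limits), not the project's actual config module:

```python
import os


def load_api_config(env=None):
    """Hypothetical reader for the environment variables above; the real
    start_api.py / config.py may parse them differently."""
    env = os.environ if env is None else env
    return {
        "host": env.get("API_HOST", "localhost"),
        "port": int(env.get("API_PORT", "8080")),
        # API_KEYS and ADMIN_API_KEYS are comma-separated lists
        "api_keys": [k for k in env.get("API_KEYS", "").split(",") if k],
        "admin_api_keys": [k for k in env.get("ADMIN_API_KEYS", "").split(",") if k],
        "rate_limit_requests": int(env.get("RATE_LIMIT_REQUESTS", "100")),
        "rate_limit_window_minutes": int(env.get("RATE_LIMIT_WINDOW_MINUTES", "1")),
    }
```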
</Tab>
<Tab title="Direct Uvicorn">
For development with automatic reloading:
```bash
uvicorn api.main:app --host localhost --port 8080 --reload
```
</Tab>
<Tab title="With Docker ✅ Recommended for External Access">
For external access and production deployment:
```bash
# Using Docker Compose (recommended)
docker-compose -f docker-compose.api.yml up -d
```
Or build and run manually:
```bash
# Build the image
docker build -t mem0-api-server .
# Run with external access
docker run -d \
--name mem0-api \
-p 8080:8080 \
-e API_HOST=0.0.0.0 \
-e API_PORT=8080 \
mem0-api-server
```
**Access:** http://YOUR_SERVER_IP:8080 (accessible from external networks)
<Note>
The Docker deployment automatically configures external access on `0.0.0.0:8080`.
</Note>
</Tab>
<Tab title="Docker Network Integration ✅ For N8N & Container Services">
For integration with N8N workflows or other containerized services:
```bash
# Deploy to existing Docker network (e.g., localai)
docker-compose -f docker-compose.api-localai.yml up -d
# Find the container IP address
docker inspect mem0-api-localai --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```
**Usage in N8N HTTP Request Node:**
- **URL**: `http://172.21.0.17:8080/v1/memories` (use actual container IP)
- **Method**: POST
- **Headers**: `Authorization: Bearer mem0_dev_key_123456789`
- **Body**: JSON object with `messages`, `user_id`, and `metadata`
<Note>
**Perfect for Docker ecosystems!** Automatically handles Ollama and Supabase connections within the same network. Use container IP addresses for reliable service-to-service communication.
</Note>
</Tab>
</Tabs>
## API Endpoints
Our Phase 2 implementation includes the following endpoints:
### Memory Operations
- `POST /v1/memories` - Add new memories
- `GET /v1/memories/search` - Search memories by content
- `GET /v1/memories/{memory_id}` - Get specific memory
- `DELETE /v1/memories/{memory_id}` - Delete specific memory
- `GET /v1/memories/user/{user_id}` - Get all user memories
- `DELETE /v1/users/{user_id}/memories` - Delete all user memories
### User Management
- `GET /v1/users/{user_id}/stats` - Get user statistics
### Health & Monitoring
- `GET /health` - Basic health check (no auth required)
- `GET /status` - Detailed system status (auth required)
- `GET /v1/metrics` - API metrics (admin only)
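
Since `/health` is the only unauthenticated endpoint, a quick reachability check can probe it first and then hit `/status` with a key. A small sketch (assuming a server on `localhost:8080` and the dev key from the configuration above):

```python
def auth_headers(api_key):
    """Bearer-token header required by every endpoint except /health."""
    return {"Authorization": f"Bearer {api_key}"}


if __name__ == "__main__":
    # Imported here so the helper above stays dependency-free.
    import requests

    base = "http://localhost:8080"
    # /health needs no authentication
    print(requests.get(f"{base}/health", timeout=5).json())
    # /status requires a valid API key
    print(requests.get(f"{base}/status",
                       headers=auth_headers("mem0_dev_key_123456789"),
                       timeout=5).json())
```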
### Authentication
All endpoints (except `/health`) require authentication via Bearer token:
```http
Authorization: Bearer mem0_dev_key_123456789
```
## Usage Examples
<CodeGroup>
```bash cURL
# Add a memory
curl -X POST "http://localhost:8080/v1/memories" \
-H "Authorization: Bearer mem0_dev_key_123456789" \
-H "Content-Type: application/json" \
-d '{
"messages": [{"role": "user", "content": "I love Python programming"}],
"user_id": "user123",
"metadata": {"source": "api_test"}
}'
```
```python Python
import requests
headers = {"Authorization": "Bearer mem0_dev_key_123456789"}
# Add memory
response = requests.post(
"http://localhost:8080/v1/memories",
headers=headers,
json={
"messages": [{"role": "user", "content": "I love Python programming"}],
"user_id": "user123",
"metadata": {"source": "python_client"}
}
)
# Search memories
response = requests.get(
"http://localhost:8080/v1/memories/search",
headers=headers,
params={"query": "Python", "user_id": "user123", "limit": 10}
)
```
```javascript JavaScript
const headers = {
'Authorization': 'Bearer mem0_dev_key_123456789',
'Content-Type': 'application/json'
};
// Add memory
const response = await fetch('http://localhost:8080/v1/memories', {
method: 'POST',
headers: headers,
body: JSON.stringify({
messages: [{role: 'user', content: 'I love Python programming'}],
user_id: 'user123',
metadata: {source: 'js_client'}
})
});
```
</CodeGroup>
## Testing
We provide two test suites:
```bash
# Run comprehensive tests
python test_api.py
# Run quick validation tests
python test_api_simple.py
```
## Interactive Documentation
When the server is running, access the interactive OpenAPI documentation at:
- **Swagger UI:** [http://localhost:8080/docs](http://localhost:8080/docs)
- **ReDoc:** [http://localhost:8080/redoc](http://localhost:8080/redoc)