# T6 Mem0 v2 - Memory System for LLM Applications

Comprehensive memory system based on mem0.ai featuring MCP server integration, REST API, hybrid storage architecture, and AI-powered memory management.

## Features

- **MCP Server**: HTTP/SSE and stdio transports for universal AI integration
  - ✅ n8n AI Agent workflows
  - ✅ Claude Code integration
  - ✅ 7 memory tools (add, search, get, update, delete)
- **REST API**: Full HTTP API for memory operations (CRUD)
- **Hybrid Storage**: Supabase (pgvector) + Neo4j (graph relationships)
- **Synchronized Operations**: Automatic sync across vector and graph stores
- **Flexible LLM Support**:
  - ✅ OpenAI (GPT-4, GPT-3.5)
  - ✅ Ollama (Llama 3.1, Mistral, local models)
  - ✅ Switchable via environment variables
- **Multi-Agent Support**: User- and agent-specific memory isolation (see the sketch below)
- **Graph Visualization**: Neo4j Browser for relationship exploration
- **Docker-Native**: Fully containerized with Docker Compose

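Memory isolation works by scoping every read and write with identifiers. A minimal sketch against the REST API (the endpoint shape mirrors the curl examples under Usage; the `agent_id` field is an assumption based on mem0's user/agent scoping):

```python
# Hedged sketch: the memory API, scoped per user and per agent.
# `agent_id` is an assumption based on mem0's user/agent scoping;
# the endpoint shape mirrors the curl examples under Usage.
import requests

API = "http://localhost:8080/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Writes scoped to (alice, support-bot) stay invisible to other
# user/agent pairs — this is the isolation the feature list refers to.
requests.post(
    f"{API}/memories/",
    headers=HEADERS,
    json={
        "messages": [{"role": "user", "content": "Prefers email follow-ups"}],
        "user_id": "alice",
        "agent_id": "support-bot",  # assumption: agent scoping field
    },
)
```
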
## Architecture

```
Clients (n8n, Claude Code, Custom Apps)
                  ↓
┌─────────────────┬───────────────────┐
│   MCP Server    │     REST API      │
│   Port 8765     │     Port 8080     │
│ HTTP/SSE+stdio  │      FastAPI      │
└─────────────────┴───────────────────┘
                  ↓
       Mem0 Core Library (v0.1.118)
                  ↓
┌─────────────────┬───────────────────┬───────────────────┐
│    Supabase     │       Neo4j       │   LLM Provider    │
│  Vector Store   │    Graph Store    │  Embeddings+LLM   │
│    pgvector     │  Cypher Queries   │  OpenAI / Ollama  │
└─────────────────┴───────────────────┴───────────────────┘
```
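
Relationships written to the graph store can be explored visually in Neo4j Browser, or programmatically with the official `neo4j` Python driver. A minimal, label-agnostic sketch (it assumes the `NEO4J_*` values from `.env`; mem0's node labels and relationship types may vary between versions, so nothing schema-specific is hard-coded):

```python
# Sketch: peek at the relationships mem0 has written to Neo4j.
import os

from neo4j import GraphDatabase

uri = os.getenv("NEO4J_URI", "neo4j://localhost:7687")
auth = (os.getenv("NEO4J_USER", "neo4j"), os.getenv("NEO4J_PASSWORD", ""))

driver = GraphDatabase.driver(uri, auth=auth)
with driver.session() as session:
    # Label-agnostic on purpose: list any 25 relationships.
    for record in session.run(
        "MATCH (a)-[r]->(b) RETURN a, type(r) AS rel, b LIMIT 25"
    ):
        print(record["a"], "-[", record["rel"], "]->", record["b"])
driver.close()
```
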
## Quick Start

### Prerequisites

- Docker and Docker Compose
- Existing Supabase instance (PostgreSQL with pgvector)
- **Choose one:**
  - OpenAI API key (for cloud LLM)
  - Ollama installed (for local LLM) - [Setup Guide](docs/setup/ollama.mdx)
- Python 3.11+ (for development)

### Installation

```bash
# Clone repository
git clone https://git.colsys.tech/klas/t6_mem0_v2
cd t6_mem0_v2

# Configure environment
cp .env.example .env
# Edit .env with your credentials

# Start services
docker compose up -d

# Verify health
curl http://localhost:8080/v1/health
curl http://localhost:8765/health
```


### Configuration

Create a `.env` file:

**Option 1: OpenAI (Default)**

```bash
# LLM Configuration
LLM_PROVIDER=openai
EMBEDDER_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here

# Supabase
SUPABASE_CONNECTION_STRING=postgresql://user:pass@172.21.0.12:5432/postgres

# Neo4j
NEO4J_URI=neo4j://neo4j:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-password

# REST API
API_KEY=your-secure-api-key

# MCP Server
MCP_HOST=0.0.0.0
MCP_PORT=8765

# Mem0 Configuration
MEM0_COLLECTION_NAME=t6_memories
MEM0_EMBEDDING_DIMS=1536  # OpenAI embeddings
MEM0_VERSION=v1.1
```

**Option 2: Ollama (Local LLM)**

```bash
# LLM Configuration
LLM_PROVIDER=ollama
EMBEDDER_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_LLM_MODEL=llama3.1:8b
OLLAMA_EMBEDDING_MODEL=nomic-embed-text

# Supabase
SUPABASE_CONNECTION_STRING=postgresql://user:pass@172.21.0.12:5432/postgres

# Neo4j
NEO4J_URI=neo4j://neo4j:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-password

# REST API
API_KEY=your-secure-api-key

# MCP Server
MCP_HOST=0.0.0.0
MCP_PORT=8765

# Mem0 Configuration
MEM0_COLLECTION_NAME=t6_memories
MEM0_EMBEDDING_DIMS=768  # Ollama nomic-embed-text
MEM0_VERSION=v1.1
```

See the [Ollama Setup Guide](docs/setup/ollama.mdx) for detailed configuration and for other supported models (LLMs such as `llama3.1:70b`, `mistral:7b`, `codellama:7b`, and `phi3:3.8b`; embedders such as `mxbai-embed-large` and `all-minilm`). `MEM0_EMBEDDING_DIMS` must always match the selected embedding model (1536 for OpenAI, 768 for `nomic-embed-text`), so switching embedding providers requires clearing existing embeddings whose dimensions no longer match.

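Provider selection is resolved at startup in `config.py`, which assembles a mem0 config dict from these variables. A hedged sketch of what that switching logic can look like (the dict keys follow mem0's documented provider examples; the real `get_mem0_config()` may differ):

```python
# Hedged sketch of provider switching in config.py; key names follow
# mem0's documented examples and may differ from the real implementation.
import os


def get_mem0_config() -> dict:
    ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")

    if os.getenv("LLM_PROVIDER", "openai") == "ollama":
        llm = {"provider": "ollama", "config": {
            "model": os.getenv("OLLAMA_LLM_MODEL", "llama3.1:8b"),
            "ollama_base_url": ollama_url,
        }}
    else:
        llm = {"provider": "openai", "config": {"model": "gpt-4o-mini"}}

    if os.getenv("EMBEDDER_PROVIDER", "openai") == "ollama":
        embedder = {"provider": "ollama", "config": {
            "model": os.getenv("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text"),
            "ollama_base_url": ollama_url,
        }}
    else:
        embedder = {"provider": "openai", "config": {"model": "text-embedding-3-small"}}

    return {"llm": llm, "embedder": embedder,
            "version": os.getenv("MEM0_VERSION", "v1.1")}
```

Because the defaults resolve to OpenAI, existing deployments keep working without any `.env` changes.
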
## Usage

### REST API

```bash
# Add memory
curl -X POST http://localhost:8080/v1/memories/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"I love pizza"}],"user_id":"alice"}'

# Search memories
curl -X GET "http://localhost:8080/v1/memories/search?query=food&user_id=alice" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
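
The API covers full CRUD; get, update, and delete follow the same pattern. A sketch of the remaining operations in Python (the paths are assumptions modeled on the add/search endpoints above; check the API routes for the authoritative shapes):

```python
# Hedged sketch of the remaining CRUD calls; paths and body shapes
# are assumptions modeled on the documented add/search endpoints.
import requests

API = "http://localhost:8080/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
memory_id = "REPLACE_WITH_MEMORY_ID"

# Get a single memory (assumed path)
print(requests.get(f"{API}/memories/{memory_id}", headers=HEADERS).json())

# Update a memory (assumed path and body shape)
requests.put(
    f"{API}/memories/{memory_id}",
    headers=HEADERS,
    json={"text": "I love Neapolitan pizza"},
)

# Delete a memory; deletes are synchronized across Supabase and Neo4j
requests.delete(f"{API}/memories/{memory_id}", headers=HEADERS)
```
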

### MCP Server

**HTTP/SSE Transport (for n8n, web clients):**

```bash
# MCP endpoint
http://localhost:8765/mcp

# Test tools/list
curl -X POST http://localhost:8765/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```
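
Individual tools are invoked with the standard JSON-RPC `tools/call` method. A sketch in Python (`add_memory` and its argument names are assumptions; take the real tool names and schemas from the `tools/list` response above):

```python
# Hedged sketch: call one of the MCP memory tools over HTTP.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "add_memory",  # assumption: check tools/list for the real name
        "arguments": {
            "messages": [{"role": "user", "content": "I love pizza"}],
            "user_id": "alice",
        },
    },
}

resp = requests.post("http://localhost:8765/mcp", json=payload)
print(resp.text)  # JSON-RPC result (or an SSE stream, depending on transport)
```
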

**stdio Transport (for Claude Code, local tools):**

Add to `~/.config/claude/mcp.json`:

```json
{
  "mcpServers": {
    "t6-mem0": {
      "command": "python",
      "args": ["-m", "mcp_server.main"],
      "cwd": "/path/to/t6_mem0_v2",
      "env": {
        "OPENAI_API_KEY": "${OPENAI_API_KEY}",
        "SUPABASE_CONNECTION_STRING": "${SUPABASE_CONNECTION_STRING}",
        "NEO4J_URI": "neo4j://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "${NEO4J_PASSWORD}"
      }
    }
  }
}
```

**n8n Integration:**

Use the MCP Client Tool node in n8n AI Agent workflows:

```javascript
{
  "endpointUrl": "http://172.21.0.14:8765/mcp", // Use Docker network IP
  "serverTransport": "httpStreamable",
  "authentication": "none",
  "include": "all"
}
```

See the [n8n integration guide](docs/examples/n8n.mdx) for complete workflow examples.

## Documentation

Full documentation is available in `docs/` (Mintlify).

### MCP Server

- [MCP Server Introduction](docs/mcp/introduction.mdx)
- [MCP Installation Guide](docs/mcp/installation.mdx)
- [MCP Tool Reference](docs/mcp/tools.mdx)

### Integration Guides

- [n8n Integration Guide](docs/examples/n8n.mdx)
- [Claude Code Integration](docs/examples/claude-code.mdx)

### Setup

- [Ollama Setup (Local LLM)](docs/setup/ollama.mdx)

### Architecture

- [Architecture Overview](ARCHITECTURE.md)
- [Project Requirements](PROJECT_REQUIREMENTS.md)

## Project Structure

```
t6_mem0_v2/
├── api/                    # REST API (FastAPI)
│   ├── main.py             # API entry point
│   ├── memory_service.py   # Memory operations
│   └── routes.py           # API endpoints
├── mcp_server/             # MCP server implementation
│   ├── main.py             # stdio transport (Claude Code)
│   ├── http_server.py      # HTTP/SSE transport (n8n, web)
│   ├── tools.py            # MCP tool definitions
│   └── server.py           # Core MCP server logic
├── docker/                 # Docker configurations
│   ├── Dockerfile.api      # REST API container
│   └── Dockerfile.mcp      # MCP server container
├── docs/                   # Mintlify documentation
│   ├── mcp/                # MCP server docs
│   └── examples/           # Integration examples
├── tests/                  # Test suites
├── config.py               # Configuration management
├── requirements.txt        # Python dependencies
└── docker-compose.yml      # Service orchestration
```


## Technology Stack

- **Core**: mem0ai library (v0.1.118+)
- **Vector DB**: Supabase with pgvector
- **Graph DB**: Neo4j 5.x
- **LLM Options**:
  - OpenAI API (GPT-4o-mini, text-embedding-3-small)
  - Ollama (Llama 3.1, Mistral, nomic-embed-text)
- **REST API**: FastAPI
- **MCP**: Python MCP SDK
- **Container**: Docker & Docker Compose


## Roadmap

### Phase 1: Foundation ✅ COMPLETED

- ✅ Architecture design
- ✅ REST API implementation (FastAPI with Bearer auth)
- ✅ MCP server implementation (HTTP/SSE + stdio transports)
- ✅ Supabase integration (pgvector for embeddings)
- ✅ Neo4j integration (graph relationships)
- ✅ Documentation site (Mintlify)
- ✅ n8n AI Agent integration
- ✅ Claude Code integration
- ✅ Docker deployment with health checks

### Phase 2: Local LLM ✅ COMPLETED

- ✅ Local Ollama integration
- ✅ Model switching capabilities (OpenAI ↔ Ollama)
- ✅ Embedding model selection
- ✅ Environment-based provider configuration

### Phase 3: Advanced Features

- ⏳ Memory versioning and history
- ⏳ Advanced graph queries and analytics
- ⏳ Multi-modal memory support (images, audio)
- ⏳ Analytics dashboard
- ⏳ Memory export/import
- ⏳ Custom embedding models


## Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run tests
pytest tests/

# Format and lint code
black .
ruff check .

# Run locally (development)
python -m api.main           # REST API
python -m mcp_server.main    # MCP server (stdio)
```


## Contributing

This is a private project. For issues or suggestions, contact the maintainer.

## License

Proprietary - All rights reserved

## Support

- Repository: https://git.colsys.tech/klas/t6_mem0_v2
- Documentation: See `docs/` directory
- Issues: Contact maintainer

---

**Status**: Phase 2 Complete - Production Ready with Ollama Support
**Version**: 1.1.0
**Last Updated**: 2025-10-15

## Recent Updates

- **2025-10-15**: ✅ Ollama integration complete - local LLM support
- **2025-10-15**: ✅ Flexible provider switching (OpenAI ↔ Ollama)
- **2025-10-15**: ✅ Support for multiple embedding models
- **2025-10-15**: MCP HTTP/SSE server implementation complete
- **2025-10-15**: n8n AI Agent integration tested and documented
- **2025-10-15**: Complete Mintlify documentation site
- **2025-10-15**: Synchronized delete operations across stores
- **2025-10-13**: Initial project setup and architecture
|