Complete implementation: REST API, MCP server, and documentation

Implementation Summary:
- REST API with FastAPI (complete CRUD operations)
- MCP Server with Python MCP SDK (7 tools)
- Supabase migrations (pgvector setup)
- Docker Compose orchestration
- Mintlify documentation site
- Environment configuration
- Shared config module

REST API Features:
- POST /v1/memories/ - Add memory
- GET /v1/memories/search - Semantic search
- GET /v1/memories/{id} - Get memory
- GET /v1/memories/user/{user_id} - User memories
- PATCH /v1/memories/{id} - Update memory
- DELETE /v1/memories/{id} - Delete memory
- GET /v1/health - Health check
- GET /v1/stats - Statistics
- Bearer token authentication
- OpenAPI documentation

MCP Server Tools:
- add_memory - Add from messages
- search_memories - Semantic search
- get_memory - Retrieve by ID
- get_all_memories - List all
- update_memory - Update content
- delete_memory - Delete by ID
- delete_all_memories - Bulk delete

Infrastructure:
- Neo4j 5.26 with APOC/GDS
- Supabase pgvector integration
- Docker network: localai
- Health checks and monitoring
- Structured logging

Documentation:
- Introduction page
- Quickstart guide
- Architecture deep dive
- Mintlify configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Committed by Claude Code on 2025-10-14 08:44:16 +02:00
commit 61a4050a8e (parent cfa7abd23d)
26 changed files with 3248 additions and 0 deletions

docs/architecture.mdx (new file, 313 lines)
---
title: 'System Architecture'
description: 'Technical architecture and design decisions for T6 Mem0 v2'
---
## Architecture Overview
T6 Mem0 v2 implements a **hybrid storage architecture** combining vector search, graph relationships, and structured data storage for optimal memory management.
```
┌─────────────────────────────────────────────────────────────┐
│ Client Layer │
├──────────────────┬──────────────────┬──────────────────────┤
│ Claude Code (MCP)│ N8N Workflows │ External Apps │
└──────────────────┴──────────────────┴──────────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────────┐
│ Interface Layer │
├──────────────────────────────┬──────────────────────────────┤
│ MCP Server (Port 8765) │ REST API (Port 8080) │
│ - SSE Connections │ - FastAPI │
│ - MCP Protocol │ - OpenAPI Spec │
│ - Tool Registration │ - Auth Middleware │
└──────────────────────────────┴──────────────────────────────┘
│ │
└────────┬───────────┘
┌─────────────────────────────────────────────────────────────┐
│ Core Layer │
│ Mem0 Core Library │
│ - Memory Management - Embedding Generation │
│ - Semantic Search - Relationship Extraction │
│ - Multi-Agent Support - Deduplication │
└─────────────────────────────────────────────────────────────┘
┌───────────────────┼───────────────────┐
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Vector Store │ │ Graph Store │ │ External LLM │
│ Supabase │ │ Neo4j │ │ OpenAI │
│ (pgvector) │ │ (Cypher) │ │ (Embeddings) │
│ 172.21.0.12 │ │ 172.21.0.x │ │ API Cloud │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## Design Decisions
### 1. Hybrid Storage Architecture ✅
**Why Multiple Storage Systems?**
Each store is optimized for specific query patterns:
<AccordionGroup>
<Accordion title="Vector Store (Supabase + pgvector)">
**Purpose**: Semantic similarity search
- Stores 1536-dimensional OpenAI embeddings
- HNSW indexing for fast approximate nearest neighbor search
- O(log n) query performance
- Cosine distance for similarity measurement
</Accordion>
<Accordion title="Graph Store (Neo4j)">
**Purpose**: Relationship modeling
- Entity extraction and connection mapping
- Relationship traversal and pathfinding
- Visual exploration in Neo4j Browser
- Dynamic knowledge graph evolution
</Accordion>
<Accordion title="Key-Value Store (PostgreSQL JSONB)">
**Purpose**: Flexible metadata
- Schema-less metadata storage
- Fast JSON queries with GIN indexes
- Eliminates need for separate Redis
- Simplifies infrastructure
</Accordion>
</AccordionGroup>
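To make the division of labor concrete, here is a hypothetical in-memory sketch of how a single memory spans the three stores. Field names and shapes are illustrative only, not the actual mem0 schema:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    """Illustrative view of one memory across the three stores."""
    memory_id: str
    embedding: list[float]            # -> vector store (1536-dim in production)
    relationships: list[tuple]        # -> graph store edges: (subject, predicate, object)
    metadata: dict = field(default_factory=dict)  # -> JSONB key-value store

record = MemoryRecord(
    memory_id="mem_abc123",
    embedding=[0.01, -0.02, 0.03],    # truncated for readability
    relationships=[("alice", "LIKES", "pizza")],
    metadata={"user_id": "alice", "source": "chat"},
)
print(record.metadata["user_id"])
```

Each query pattern then hits only the store built for it: similarity search scans embeddings, traversal walks the relationship tuples, and metadata filters use the JSONB fields.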
### 2. MCP Server Implementation
**Custom vs. Pre-built**
<Info>
We built a custom MCP server instead of using OpenMemory MCP because:
- OpenMemory uses Qdrant (we need Supabase)
- Full control over Supabase + Neo4j integration
- Exact match to our storage stack
</Info>
### 3. Docker Networking Strategy
**localai Network Integration**
All services run on the `localai` Docker network (172.21.0.0/16):
```yaml
services:
neo4j: 172.21.0.x:7687
api: 172.21.0.x:8080
mcp-server: 172.21.0.x:8765
supabase: 172.21.0.12:5432 (existing)
```
**Benefits:**
- Container-to-container communication
- Service discovery via Docker DNS
- No host networking complications
- Persistent IPs via Docker Compose
## Data Flow
### Adding a Memory
<Steps>
<Step title="Client Request">
Client sends conversation messages via MCP or REST API
</Step>
<Step title="Mem0 Processing">
- LLM extracts key facts from messages
- Generates embedding vector (1536-dim)
- Identifies entities and relationships
</Step>
<Step title="Vector Storage">
Stores embedding + metadata in Supabase (pgvector)
</Step>
<Step title="Graph Storage">
Creates nodes and relationships in Neo4j
</Step>
<Step title="Response">
Returns memory ID and confirmation to client
</Step>
</Steps>
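The steps above can be sketched as a pipeline of stubbed stages. The extraction and embedding functions below are placeholders standing in for the LLM calls mem0 makes in production; the store interfaces are simplified to plain Python containers:

```python
def extract_facts(messages):
    # Stub for the LLM extraction step; mem0 calls an LLM here.
    return [m["content"] for m in messages if m["role"] == "user"]

def embed(text):
    # Stub embedder; production uses 1536-dim OpenAI embeddings.
    return [float(ord(c) % 7) for c in text[:8]]

def add_memory(messages, user_id, vector_store, graph_store):
    """Run the add-memory pipeline: extract, embed, store, link."""
    ids = []
    for fact in extract_facts(messages):
        memory_id = f"mem_{len(vector_store)}"
        vector_store[memory_id] = {"embedding": embed(fact), "user_id": user_id, "text": fact}
        graph_store.append((user_id, "REMEMBERS", memory_id))
        ids.append(memory_id)
    return ids

vectors, graph = {}, []
ids = add_memory([{"role": "user", "content": "I love pizza"}], "alice", vectors, graph)
print(ids)  # ['mem_0']
```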
### Searching Memories
<Steps>
<Step title="Query Embedding">
Convert search query to vector using OpenAI
</Step>
<Step title="Vector Search">
Find similar memories in Supabase (cosine similarity)
</Step>
<Step title="Graph Enrichment">
Fetch related context from Neo4j graph
</Step>
<Step title="Ranked Results">
Return memories sorted by relevance score
</Step>
</Steps>
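A pure-Python sketch of the vector-search step helps show what the ranking does. In production this runs inside pgvector with an HNSW index; here it is brute-force cosine similarity over toy 2-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, memories, top_k=3):
    # memories: {memory_id: {"embedding": [...], "text": ...}}
    scored = [
        (cosine_similarity(query_vec, m["embedding"]), mid)
        for mid, m in memories.items()
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [mid for _, mid in scored[:top_k]]

memories = {
    "mem_1": {"embedding": [1.0, 0.0], "text": "likes pizza"},
    "mem_2": {"embedding": [0.0, 1.0], "text": "lives in Prague"},
}
print(search([0.9, 0.1], memories))  # ['mem_1', 'mem_2']
```

pgvector's HNSW index replaces the exhaustive loop with an approximate nearest-neighbor walk, which is what gives the roughly logarithmic query time mentioned above.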
## Performance Characteristics
Based on mem0.ai research:
<CardGroup cols={3}>
<Card title="26% Accuracy Boost" icon="chart-line">
Higher accuracy vs baseline OpenAI
</Card>
<Card title="91% Lower Latency" icon="bolt">
Compared to full-context approaches
</Card>
<Card title="90% Token Savings" icon="dollar-sign">
Through selective memory retrieval
</Card>
</CardGroup>
## Security Architecture
### Authentication
- **REST API**: Bearer token authentication
- **MCP Server**: Client-specific SSE endpoints
- **Tokens**: Stored securely in environment variables
### Data Privacy
<Check>
All data stored locally - no cloud sync or external storage
</Check>
- Supabase instance is local (172.21.0.12)
- Neo4j runs in Docker container
- User isolation via `user_id` filtering
### Network Security
- Services on private Docker network
- No public exposure (use reverse proxy if needed)
- Internal communication only
## Scalability
### Horizontal Scaling
<Tabs>
<Tab title="REST API">
Deploy multiple API containers behind load balancer
</Tab>
<Tab title="MCP Server">
Dedicated instances per client group
</Tab>
<Tab title="Mem0 Core">
Stateless design scales with containers
</Tab>
</Tabs>
### Vertical Scaling
- **Supabase**: PostgreSQL connection pooling
- **Neo4j**: Memory configuration tuning
- **Vector Indexing**: HNSW for performance
## Technology Choices
| Component | Technology | Why? |
|-----------|-----------|------|
| Core Library | mem0ai | Production-ready, 26% accuracy boost |
| Vector DB | Supabase (pgvector) | Existing infrastructure, PostgreSQL |
| Graph DB | Neo4j | Best-in-class graph database |
| LLM | OpenAI | High-quality embeddings, GPT-4o |
| REST API | FastAPI | Fast, modern, auto-docs |
| MCP Protocol | Python MCP SDK | Official MCP implementation |
| Containers | Docker Compose | Simple orchestration |
## Phase 2: Ollama Integration
**Configuration-driven provider switching:**
```python
# Phase 1 (OpenAI)
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},
    }
}

# Phase 2 (Ollama)
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:8b",
            "ollama_base_url": "http://172.21.0.1:11434",
        },
    }
}
```
**No code changes required** - just environment variables!
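One way to drive the switch from the environment is a small config factory along these lines. The variable names (`LLM_PROVIDER`, `LLM_MODEL`, `OLLAMA_BASE_URL`) are illustrative, not names the project necessarily uses:

```python
import os

def build_llm_config():
    """Return a mem0-style LLM config chosen by the LLM_PROVIDER env var."""
    if os.getenv("LLM_PROVIDER", "openai") == "ollama":
        return {
            "provider": "ollama",
            "config": {
                "model": os.getenv("LLM_MODEL", "llama3.1:8b"),
                "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://172.21.0.1:11434"),
            },
        }
    return {"provider": "openai", "config": {"model": os.getenv("LLM_MODEL", "gpt-4o-mini")}}

os.environ["LLM_PROVIDER"] = "ollama"
print(build_llm_config()["provider"])  # ollama
```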
## Monitoring & Observability
### Metrics to Track
- Memory operations per second
- Average response time
- Vector search latency
- Graph query complexity
- OpenAI token usage
### Logging
- Structured JSON logs
- Request/response tracking
- Error aggregation
- Performance profiling
<Tip>
Use Prometheus + Grafana for production monitoring
</Tip>
## Deep Dive Resources
For complete architectural details, see:
- [ARCHITECTURE.md](https://git.colsys.tech/klas/t6_mem0_v2/blob/main/ARCHITECTURE.md)
- [PROJECT_REQUIREMENTS.md](https://git.colsys.tech/klas/t6_mem0_v2/blob/main/PROJECT_REQUIREMENTS.md)
## Next Steps
<CardGroup cols={2}>
<Card
title="Setup Supabase"
icon="database"
href="/setup/supabase"
>
Configure vector store
</Card>
<Card
title="Setup Neo4j"
icon="diagram-project"
href="/setup/neo4j"
>
Configure graph database
</Card>
<Card
title="API Reference"
icon="code"
href="/api-reference/introduction"
>
Explore endpoints
</Card>
<Card
title="MCP Integration"
icon="plug"
href="/mcp/introduction"
>
Connect with Claude Code
</Card>
</CardGroup>

docs/introduction.mdx (new file, 168 lines)
---
title: Introduction
description: 'Welcome to T6 Mem0 v2 - Memory System for LLM Applications'
---
<img
className="block dark:hidden"
src="/images/hero-light.svg"
alt="Hero Light"
/>
<img
className="hidden dark:block"
src="/images/hero-dark.svg"
alt="Hero Dark"
/>
## What is T6 Mem0 v2?
T6 Mem0 v2 is a comprehensive memory system for LLM applications built on **mem0.ai**, featuring:
- 🔌 **MCP Server Integration** - Native Model Context Protocol support for Claude Code and AI tools
- 🌐 **REST API** - Full HTTP API for memory operations
- 🗄️ **Hybrid Storage** - Supabase (vector) + Neo4j (graph) for optimal performance
- 🤖 **AI-Powered** - OpenAI embeddings with 26% accuracy improvement
- 📊 **Graph Visualization** - Explore memory relationships in Neo4j Browser
- 🐳 **Docker-Native** - Fully containerized deployment
## Key Features
<CardGroup cols={2}>
<Card
title="Semantic Memory Search"
icon="magnifying-glass"
href="/api-reference/memories/search"
>
Find relevant memories using AI-powered semantic similarity
</Card>
<Card
title="MCP Integration"
icon="plug"
href="/mcp/introduction"
>
Use as MCP server with Claude Code, Cursor, and other AI tools
</Card>
<Card
title="Graph Relationships"
icon="diagram-project"
href="/setup/neo4j"
>
Visualize and explore memory connections with Neo4j
</Card>
<Card
title="Multi-Agent Support"
icon="users"
href="/api-reference/introduction"
>
Isolate memories by user, agent, or run identifiers
</Card>
</CardGroup>
## Architecture
T6 Mem0 v2 uses a **hybrid storage architecture** for optimal performance:
```
┌──────────────────────────────────┐
│ Clients (Claude, N8N, Apps) │
└──────────────┬───────────────────┘
┌──────────────┴───────────────────┐
│ MCP Server (8765) + REST (8080) │
└──────────────┬───────────────────┘
┌──────────────┴───────────────────┐
│ Mem0 Core Library │
└──────────────┬───────────────────┘
┌──────────┴──────────┐
│ │
┌───┴──────┐ ┌──────┴─────┐
│ Supabase │ │ Neo4j │
│ (Vector) │ │ (Graph) │
└──────────┘ └────────────┘
```
### Storage Layers
- **Vector Store (Supabase)**: Semantic similarity search with pgvector
- **Graph Store (Neo4j)**: Relationship modeling between memories
- **Key-Value Store (PostgreSQL JSONB)**: Flexible metadata storage
## Performance
Based on mem0.ai research:
- **26% higher accuracy** compared to baseline OpenAI
- **91% lower latency** than full-context approaches
- **90% token cost savings** through selective retrieval
## Use Cases
<AccordionGroup>
<Accordion icon="comment" title="Conversational AI">
Maintain context across conversations, remember user preferences, and provide personalized responses
</Accordion>
<Accordion icon="robot" title="AI Agents">
Give agents long-term memory, enable learning from past interactions, and improve decision-making
</Accordion>
<Accordion icon="headset" title="Customer Support">
Remember customer history, track issues across sessions, and provide consistent support
</Accordion>
<Accordion icon="graduation-cap" title="Educational Tools">
Track learning progress, adapt to user knowledge level, and personalize content delivery
</Accordion>
</AccordionGroup>
## Quick Links
<CardGroup cols={2}>
<Card
title="Quickstart"
icon="rocket"
href="/quickstart"
>
Get up and running in 5 minutes
</Card>
<Card
title="Architecture Deep Dive"
icon="sitemap"
href="/architecture"
>
Understand the system design
</Card>
<Card
title="API Reference"
icon="code"
href="/api-reference/introduction"
>
Explore the REST API
</Card>
<Card
title="MCP Integration"
icon="link"
href="/mcp/introduction"
>
Connect with Claude Code
</Card>
</CardGroup>
## Technology Stack
- **Core**: mem0ai library
- **Vector DB**: Supabase with pgvector
- **Graph DB**: Neo4j 5.x
- **LLM**: OpenAI API (Phase 1), Ollama (Phase 2)
- **REST API**: FastAPI + Pydantic
- **MCP**: Python MCP SDK
- **Container**: Docker & Docker Compose
## Support & Community
- **Repository**: [git.colsys.tech/klas/t6_mem0_v2](https://git.colsys.tech/klas/t6_mem0_v2)
- **mem0.ai**: [Official mem0 website](https://mem0.ai)
- **Issues**: Contact maintainer
---
Ready to get started? Continue to the [Quickstart Guide](/quickstart).

docs/mint.json (new file, 111 lines)
{
"name": "T6 Mem0 v2",
"logo": {
"dark": "/logo/dark.svg",
"light": "/logo/light.svg"
},
"favicon": "/favicon.svg",
"colors": {
"primary": "#0D9373",
"light": "#07C983",
"dark": "#0D9373",
"anchors": {
"from": "#0D9373",
"to": "#07C983"
}
},
"topbarLinks": [
{
"name": "Support",
"url": "mailto:support@example.com"
}
],
"topbarCtaButton": {
"name": "Dashboard",
"url": "https://git.colsys.tech/klas/t6_mem0_v2"
},
"tabs": [
{
"name": "API Reference",
"url": "api-reference"
},
{
"name": "MCP Integration",
"url": "mcp"
}
],
"anchors": [
{
"name": "GitHub",
"icon": "github",
"url": "https://git.colsys.tech/klas/t6_mem0_v2"
},
{
"name": "mem0.ai",
"icon": "link",
"url": "https://mem0.ai"
}
],
"navigation": [
{
"group": "Get Started",
"pages": [
"introduction",
"quickstart",
"architecture"
]
},
{
"group": "Setup",
"pages": [
"setup/installation",
"setup/configuration",
"setup/supabase",
"setup/neo4j"
]
},
{
"group": "API Documentation",
"pages": [
"api-reference/introduction",
"api-reference/authentication"
]
},
{
"group": "Memory Operations",
"pages": [
"api-reference/memories/add",
"api-reference/memories/search",
"api-reference/memories/get",
"api-reference/memories/update",
"api-reference/memories/delete"
]
},
{
"group": "System",
"pages": [
"api-reference/health",
"api-reference/stats"
]
},
{
"group": "MCP Server",
"pages": [
"mcp/introduction",
"mcp/installation",
"mcp/tools"
]
},
{
"group": "Examples",
"pages": [
"examples/claude-code",
"examples/n8n",
"examples/python"
]
}
],
"footerSocials": {
"github": "https://git.colsys.tech/klas/t6_mem0_v2"
}
}

docs/quickstart.mdx (new file, 259 lines)
---
title: 'Quickstart'
description: 'Get T6 Mem0 v2 running in 5 minutes'
---
## Prerequisites
Before you begin, ensure you have:
- Docker and Docker Compose installed
- Existing Supabase instance (PostgreSQL with pgvector)
- OpenAI API key
- Git access to the repository
<Check>
**Ready to go?** Let's set up your memory system!
</Check>
## Step 1: Clone Repository
```bash
git clone https://git.colsys.tech/klas/t6_mem0_v2
cd t6_mem0_v2
```
## Step 2: Configure Environment
Create `.env` file from template:
```bash
cp .env.example .env
```
Edit `.env` with your credentials:
```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
# Supabase Configuration (your existing instance)
SUPABASE_CONNECTION_STRING=postgresql://supabase_admin:password@172.21.0.12:5432/postgres
# Neo4j Configuration
NEO4J_PASSWORD=your-secure-neo4j-password
# API Configuration
API_KEY=your-secure-api-key-here
```
<Warning>
**Important**: Replace all placeholder values with your actual credentials. Never commit the `.env` file to version control!
</Warning>
## Step 3: Apply Database Migrations
Run the Supabase migration to set up the vector store:
### Option A: Using Supabase SQL Editor (Recommended)
1. Open your Supabase dashboard
2. Navigate to **SQL Editor**
3. Copy contents from `migrations/supabase/001_init_vector_store.sql`
4. Paste and execute
### Option B: Using psql
```bash
psql "$SUPABASE_CONNECTION_STRING" -f migrations/supabase/001_init_vector_store.sql
```
<Tip>
The migration creates tables, indexes, and functions needed for vector similarity search. See [Supabase Setup](/setup/supabase) for details.
</Tip>
## Step 4: Start Services
Launch all services with Docker Compose:
```bash
docker compose up -d
```
This starts:
- **Neo4j** (ports 7474, 7687)
- **REST API** (port 8080)
- **MCP Server** (port 8765)
## Step 5: Verify Installation
Check service health:
```bash
# Check API health
curl http://localhost:8080/v1/health
# Expected response:
# {
# "status": "healthy",
# "version": "0.1.0",
# "dependencies": {
# "mem0": "healthy"
# }
# }
```
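The same check can be scripted. Here is a minimal stdlib-only sketch that validates the health payload; the JSON shape is taken from the example above, and the helper name is our own:

```python
import json

def check_health(payload: str) -> bool:
    """True when the health JSON reports an overall healthy status
    and every listed dependency is healthy too."""
    data = json.loads(payload)
    deps = data.get("dependencies", {})
    return data.get("status") == "healthy" and all(v == "healthy" for v in deps.values())

# Against a running stack, fetch the payload first, e.g.:
#   from urllib.request import urlopen
#   payload = urlopen("http://localhost:8080/v1/health").read().decode()
sample = '{"status": "healthy", "version": "0.1.0", "dependencies": {"mem0": "healthy"}}'
print(check_health(sample))  # True
```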
<Check>
**Success!** All services are running. Let's try using the memory system.
</Check>
## Step 6: Add Your First Memory
### Using REST API
```bash
curl -X POST http://localhost:8080/v1/memories/ \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "I love pizza with mushrooms and olives"}
],
"user_id": "alice"
}'
```
### Response
```json
{
"status": "success",
"memories": [
{
"id": "mem_abc123",
"memory": "User loves pizza with mushrooms and olives",
"user_id": "alice",
"created_at": "2025-10-13T12:00:00Z"
}
],
"message": "Successfully added 1 memory(ies)"
}
```
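The same request can be issued from Python with nothing but the standard library. A sketch that builds (but does not send) the POST request, mirroring the curl example above:

```python
import json
from urllib import request

def add_memory_request(api_key, user_id, content, base_url="http://localhost:8080"):
    """Build the POST request for /v1/memories/ without sending it."""
    payload = {"messages": [{"role": "user", "content": content}], "user_id": user_id}
    return request.Request(
        f"{base_url}/v1/memories/",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = add_memory_request("YOUR_API_KEY", "alice", "I love pizza with mushrooms and olives")
print(req.get_method(), req.full_url)  # POST http://localhost:8080/v1/memories/
# Against a running stack: request.urlopen(req)
```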
## Step 7: Search Memories
```bash
curl -G "http://localhost:8080/v1/memories/search" \
  --data-urlencode "query=What food does Alice like?" \
  --data-urlencode "user_id=alice" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
### Response
```json
{
"status": "success",
"memories": [
{
"id": "mem_abc123",
"memory": "User loves pizza with mushrooms and olives",
"user_id": "alice",
"score": 0.95,
"created_at": "2025-10-13T12:00:00Z"
}
],
"count": 1
}
```
<Check>
**Congratulations!** Your memory system is working. The AI remembered Alice's food preferences and retrieved them semantically.
</Check>
## Step 8: Explore Neo4j (Optional)
View memory relationships in Neo4j Browser:
1. Open http://localhost:7474 in your browser
2. Login with:
- **Username**: `neo4j`
- **Password**: (from your `.env` NEO4J_PASSWORD)
3. Run query:
```cypher
MATCH (n) RETURN n LIMIT 25
```
<Tip>
Neo4j visualizes relationships between memories, entities, and concepts extracted by mem0.
</Tip>
## Next Steps
<CardGroup cols={2}>
<Card
title="Configure MCP Server"
icon="plug"
href="/mcp/installation"
>
Connect with Claude Code for AI assistant integration
</Card>
<Card
title="API Reference"
icon="book"
href="/api-reference/introduction"
>
Explore all available endpoints
</Card>
<Card
title="Architecture"
icon="sitemap"
href="/architecture"
>
Understand the system design
</Card>
<Card
title="Examples"
icon="code"
href="/examples/n8n"
>
See integration examples
</Card>
</CardGroup>
## Common Issues
<AccordionGroup>
<Accordion title="Connection to Supabase fails">
- Verify `SUPABASE_CONNECTION_STRING` is correct
- Ensure Supabase is accessible from Docker network
- Check if pgvector extension is enabled
- Run migration script if tables don't exist
</Accordion>
<Accordion title="Neo4j won't start">
- Check if ports 7474 and 7687 are available
- Verify `NEO4J_PASSWORD` is set in `.env`
- Check Docker logs: `docker logs t6-mem0-neo4j`
</Accordion>
<Accordion title="API returns authentication error">
- Verify `API_KEY` in `.env` matches request header
- Ensure Authorization header format: `Bearer YOUR_API_KEY`
</Accordion>
<Accordion title="OpenAI errors">
- Check `OPENAI_API_KEY` is valid
- Verify API key has sufficient credits
- Check internet connectivity from containers
</Accordion>
</AccordionGroup>
## Getting Help
- Review [Architecture documentation](/architecture)
- Check [API Reference](/api-reference/introduction)
- See [Setup guides](/setup/installation)
---
**Ready for production?** Continue to [Configuration](/setup/configuration) for advanced settings.