Complete implementation: REST API, MCP server, and documentation

Implementation Summary:
- REST API with FastAPI (complete CRUD operations)
- MCP Server with Python MCP SDK (7 tools)
- Supabase migrations (pgvector setup)
- Docker Compose orchestration
- Mintlify documentation site
- Environment configuration
- Shared config module

REST API Features:
- POST /v1/memories/ - Add memory
- GET /v1/memories/search - Semantic search
- GET /v1/memories/{id} - Get memory
- GET /v1/memories/user/{user_id} - User memories
- PATCH /v1/memories/{id} - Update memory
- DELETE /v1/memories/{id} - Delete memory
- GET /v1/health - Health check
- GET /v1/stats - Statistics
- Bearer token authentication
- OpenAPI documentation
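The add-memory endpoint above can be smoke-tested with the standard library alone. The base URL, token, and payload field names below are illustrative assumptions, not values confirmed by this commit:

```python
import json
import urllib.request

# Hypothetical values -- adjust to your deployment.
BASE_URL = "http://localhost:8000"
TOKEN = "YOUR_API_KEY"

payload = {
    "messages": [{"role": "user", "content": "I prefer dark roast coffee"}],
    "user_id": "alice",
}

request = urllib.request.Request(
    f"{BASE_URL}/v1/memories/",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it once the API is running.
print(request.get_method(), request.full_url)
```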

MCP Server Tools:
- add_memory - Add from messages
- search_memories - Semantic search
- get_memory - Retrieve by ID
- get_all_memories - List all
- update_memory - Update content
- delete_memory - Delete by ID
- delete_all_memories - Bulk delete

Infrastructure:
- Neo4j 5.26 with APOC/GDS
- Supabase pgvector integration
- Docker network: localai
- Health checks and monitoring
- Structured logging

Documentation:
- Introduction page
- Quickstart guide
- Architecture deep dive
- Mintlify configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Claude Code
Date: 2025-10-14 08:44:16 +02:00
Parent: cfa7abd23d
Commit: 61a4050a8e
26 changed files with 3248 additions and 0 deletions


@@ -0,0 +1,172 @@
-- T6 Mem0 v2 - Initial Vector Store Setup
-- This migration creates the necessary tables and functions for Mem0 vector storage
-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;
-- Create memories table
CREATE TABLE IF NOT EXISTS t6_memories (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
embedding vector(1536), -- OpenAI text-embedding-3-small dimension
metadata JSONB NOT NULL DEFAULT '{}'::JSONB,
user_id TEXT,
agent_id TEXT,
run_id TEXT,
memory_text TEXT NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
hash TEXT, -- For deduplication
-- Constraints
CONSTRAINT t6_memories_hash_unique UNIQUE (hash)
);
-- Create index on embedding using HNSW for fast similarity search
CREATE INDEX IF NOT EXISTS t6_memories_embedding_idx
ON t6_memories
USING hnsw (embedding vector_cosine_ops);
-- Create indexes on metadata fields for filtering
CREATE INDEX IF NOT EXISTS t6_memories_user_id_idx
ON t6_memories (user_id);
CREATE INDEX IF NOT EXISTS t6_memories_agent_id_idx
ON t6_memories (agent_id);
CREATE INDEX IF NOT EXISTS t6_memories_run_id_idx
ON t6_memories (run_id);
CREATE INDEX IF NOT EXISTS t6_memories_created_at_idx
ON t6_memories (created_at DESC);
-- Create GIN index on metadata for JSON queries
CREATE INDEX IF NOT EXISTS t6_memories_metadata_idx
ON t6_memories
USING GIN (metadata);
-- Create full-text search index on memory_text
CREATE INDEX IF NOT EXISTS t6_memories_text_search_idx
ON t6_memories
USING GIN (to_tsvector('english', memory_text));
-- Function to update the updated_at timestamp
CREATE OR REPLACE FUNCTION update_t6_memories_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger to automatically update updated_at
CREATE TRIGGER t6_memories_updated_at_trigger
BEFORE UPDATE ON t6_memories
FOR EACH ROW
EXECUTE FUNCTION update_t6_memories_updated_at();
-- Function for vector similarity search with filters
CREATE OR REPLACE FUNCTION match_t6_memories(
query_embedding vector(1536),
match_count INT DEFAULT 10,
filter_user_id TEXT DEFAULT NULL,
filter_agent_id TEXT DEFAULT NULL,
filter_run_id TEXT DEFAULT NULL
)
RETURNS TABLE (
id UUID,
memory_text TEXT,
metadata JSONB,
user_id TEXT,
agent_id TEXT,
run_id TEXT,
similarity FLOAT,
created_at TIMESTAMP WITH TIME ZONE
)
LANGUAGE plpgsql
AS $$
BEGIN
RETURN QUERY
SELECT
t6_memories.id,
t6_memories.memory_text,
t6_memories.metadata,
t6_memories.user_id,
t6_memories.agent_id,
t6_memories.run_id,
1 - (t6_memories.embedding <=> query_embedding) AS similarity,
t6_memories.created_at
FROM t6_memories
WHERE
(filter_user_id IS NULL OR t6_memories.user_id = filter_user_id) AND
(filter_agent_id IS NULL OR t6_memories.agent_id = filter_agent_id) AND
(filter_run_id IS NULL OR t6_memories.run_id = filter_run_id)
ORDER BY t6_memories.embedding <=> query_embedding
LIMIT match_count;
END;
$$;
-- Function to get memory statistics
CREATE OR REPLACE FUNCTION get_t6_memory_stats()
RETURNS TABLE (
total_memories BIGINT,
total_users BIGINT,
total_agents BIGINT,
avg_memories_per_user NUMERIC,
oldest_memory TIMESTAMP WITH TIME ZONE,
newest_memory TIMESTAMP WITH TIME ZONE
)
LANGUAGE plpgsql
AS $$
BEGIN
RETURN QUERY
SELECT
COUNT(*)::BIGINT AS total_memories,
COUNT(DISTINCT user_id)::BIGINT AS total_users,
COUNT(DISTINCT agent_id)::BIGINT AS total_agents,
CASE
WHEN COUNT(DISTINCT user_id) > 0
THEN ROUND(COUNT(*)::NUMERIC / COUNT(DISTINCT user_id), 2)
ELSE 0
END AS avg_memories_per_user,
MIN(created_at) AS oldest_memory,
MAX(created_at) AS newest_memory
FROM t6_memories;
END;
$$;
-- Create a view for recent memories
CREATE OR REPLACE VIEW t6_recent_memories AS
SELECT
id,
user_id,
agent_id,
run_id,
memory_text,
metadata,
created_at,
updated_at
FROM t6_memories
ORDER BY created_at DESC
LIMIT 100;
-- Grant necessary permissions (adjust as needed for your setup)
-- GRANT SELECT, INSERT, UPDATE, DELETE ON t6_memories TO authenticated;
-- GRANT EXECUTE ON FUNCTION match_t6_memories TO authenticated;
-- GRANT EXECUTE ON FUNCTION get_t6_memory_stats TO authenticated;
-- Comments for documentation
COMMENT ON TABLE t6_memories IS 'Storage for T6 Mem0 v2 memory vectors and metadata';
COMMENT ON COLUMN t6_memories.embedding IS 'OpenAI text-embedding-3-small vector (1536 dimensions)';
COMMENT ON COLUMN t6_memories.metadata IS 'Flexible JSON metadata for additional memory properties';
COMMENT ON COLUMN t6_memories.hash IS 'Hash for deduplication of identical memories';
COMMENT ON FUNCTION match_t6_memories IS 'Performs cosine similarity search on memory embeddings with optional filters';
COMMENT ON FUNCTION get_t6_memory_stats IS 'Returns statistics about stored memories';
-- Success message
DO $$
BEGIN
RAISE NOTICE 'T6 Mem0 v2 vector store initialized successfully!';
RAISE NOTICE 'Table: t6_memories';
RAISE NOTICE 'Functions: match_t6_memories, get_t6_memory_stats';
RAISE NOTICE 'View: t6_recent_memories';
END $$;
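The similarity returned by `match_t6_memories` is `1 - (embedding <=> query_embedding)`, where `<=>` is pgvector's cosine-distance operator. A minimal pure-Python sketch of that scoring (pgvector does this natively in C):

```python
import math

def cosine_distance(a, b):
    # Equivalent of pgvector's <=> operator: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

query = [1.0, 0.0]
memory = [1.0, 0.0]
similarity = 1.0 - cosine_distance(query, memory)
print(similarity)  # → 1.0 for identical vectors
```

Rows are ordered by ascending distance (descending similarity), so `LIMIT match_count` keeps the closest matches.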


@@ -0,0 +1,153 @@
# Supabase Migrations for T6 Mem0 v2
## Overview
This directory contains SQL migrations for setting up the Supabase vector store used by T6 Mem0 v2.
## Migrations
### 001_init_vector_store.sql
Initial setup migration that creates:
- **pgvector extension**: Enables vector similarity search
- **t6_memories table**: Main storage for memory vectors and metadata
- **Indexes**: HNSW for vectors, B-tree for filters, GIN for JSONB
- **Functions**:
- `match_t6_memories()`: Vector similarity search with filters
- `get_t6_memory_stats()`: Memory statistics
- `update_t6_memories_updated_at()`: Auto-update timestamp
- **View**: `t6_recent_memories` for quick access to recent entries
## Applying Migrations
### Method 1: Supabase SQL Editor (Recommended)
1. Open your Supabase project dashboard
2. Navigate to SQL Editor
3. Create a new query
4. Copy and paste the contents of `001_init_vector_store.sql`
5. Click "Run" to execute
### Method 2: psql Command Line
```bash
# Connect to your Supabase database
psql "postgresql://supabase_admin:PASSWORD@172.21.0.12:5432/postgres"
# Run the migration
\i migrations/supabase/001_init_vector_store.sql
```
### Method 3: Programmatic Application
```python
import psycopg2

# Connect to Supabase
conn = psycopg2.connect(
    "postgresql://supabase_admin:PASSWORD@172.21.0.12:5432/postgres"
)

try:
    # Read and execute the migration as a single script
    with open('migrations/supabase/001_init_vector_store.sql', 'r') as f:
        migration_sql = f.read()

    with conn.cursor() as cur:
        cur.execute(migration_sql)
    conn.commit()
finally:
    conn.close()
```
## Verification
After applying the migration, verify the setup:
```sql
-- Check if pgvector extension is enabled
SELECT * FROM pg_extension WHERE extname = 'vector';
-- Check if table exists
\d t6_memories
-- Verify indexes
\di t6_memories*
-- Test the similarity search function
SELECT * FROM match_t6_memories(
'[0.1, 0.2, ...]'::vector(1536), -- Sample embedding
10, -- Match count
'test_user', -- User ID filter
NULL, -- Agent ID filter
NULL -- Run ID filter
);
-- Get memory statistics
SELECT * FROM get_t6_memory_stats();
```
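The sample embedding above is elided for brevity; a full 1536-dimension literal for smoke-testing can be generated with a short script. The values are arbitrary random numbers, not a real embedding, so results are only useful for verifying that the function executes:

```python
import random

random.seed(42)  # reproducible dummy values; NOT a real embedding
dims = 1536
vector_literal = "[" + ", ".join(
    f"{random.uniform(-1, 1):.6f}" for _ in range(dims)
) + "]"

# Build the verification query with the generated literal.
sql = (
    f"SELECT * FROM match_t6_memories("
    f"'{vector_literal}'::vector({dims}), 10, 'test_user', NULL, NULL);"
)
print(sql[:80] + "...")
```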
## Rollback
If you need to rollback the migration:
```sql
-- Drop view
DROP VIEW IF EXISTS t6_recent_memories;
-- Drop trigger first (it depends on update_t6_memories_updated_at)
DROP TRIGGER IF EXISTS t6_memories_updated_at_trigger ON t6_memories;
-- Drop functions
DROP FUNCTION IF EXISTS get_t6_memory_stats();
DROP FUNCTION IF EXISTS match_t6_memories(vector, INT, TEXT, TEXT, TEXT);
DROP FUNCTION IF EXISTS update_t6_memories_updated_at();
-- Drop table (WARNING: This will delete all data!)
DROP TABLE IF EXISTS t6_memories CASCADE;
-- Optionally remove extension (only if not used elsewhere)
-- DROP EXTENSION IF EXISTS vector CASCADE;
```
## Schema
### t6_memories Table
| Column | Type | Description |
|--------|------|-------------|
| id | UUID | Primary key |
| embedding | vector(1536) | OpenAI embedding vector |
| metadata | JSONB | Flexible metadata |
| user_id | TEXT | User identifier |
| agent_id | TEXT | Agent identifier |
| run_id | TEXT | Run identifier |
| memory_text | TEXT | Original memory text |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
| hash | TEXT | Deduplication hash (unique) |
### Indexes
- **t6_memories_embedding_idx**: HNSW index for fast vector search
- **t6_memories_user_id_idx**: B-tree for user filtering
- **t6_memories_agent_id_idx**: B-tree for agent filtering
- **t6_memories_run_id_idx**: B-tree for run filtering
- **t6_memories_created_at_idx**: B-tree for time-based queries
- **t6_memories_metadata_idx**: GIN for JSON queries
- **t6_memories_text_search_idx**: GIN for full-text search
## Notes
- The HNSW index provides O(log n) approximate nearest neighbor search
- Similarity is reported as `1 - cosine distance` (the `<=>` operator returns cosine distance)
- All timestamps are stored in UTC
- The hash column ensures deduplication of identical memories
- Metadata is stored as JSONB for flexible schema evolution
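The migration does not define how the `hash` column is computed. One plausible scheme, shown here as an assumption rather than what Mem0 actually does, is a SHA-256 digest of the normalized memory text:

```python
import hashlib

def memory_hash(memory_text: str) -> str:
    # Normalize case and whitespace so trivially different copies collide,
    # then hash; the UNIQUE constraint on t6_memories.hash rejects duplicates.
    normalized = " ".join(memory_text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

h1 = memory_hash("User prefers dark roast coffee")
h2 = memory_hash("  user prefers   dark roast coffee ")
print(h1 == h2)  # → True: both normalize to the same text
```

With this scheme an `INSERT ... ON CONFLICT (hash) DO NOTHING` silently skips duplicate memories.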
## Support
For issues or questions about migrations, refer to:
- [Supabase Vector Documentation](https://supabase.com/docs/guides/database/extensions/pgvector)
- [pgvector Documentation](https://github.com/pgvector/pgvector)
- Project Architecture: `../../ARCHITECTURE.md`