---
title: 'Architecture Overview'
description: 'Understanding the Mem0 Memory System architecture and components'
---

## System Architecture

The Mem0 Memory System follows a modular, local-first architecture designed for maximum privacy, performance, and control.

```mermaid
graph TB
    A[AI Applications] --> B[MCP Server - Port 8765]
    B --> C[Memory API - Port 8080]
    C --> D[Mem0 Core v0.1.115]
    D --> E[Vector Store - Supabase]
    D --> F[Graph Store - Neo4j]
    D --> G[LLM Provider]
    G --> H[Ollama - Port 11434]
    G --> I[OpenAI/Remote APIs]
    E --> J[Supabase - Port 8000/5435]
    F --> K[Neo4j - Port 7687]
```

## Core Components

### Memory Layer (Mem0 Core)

- **Version**: 0.1.115
- **Purpose**: Central memory management and coordination
- **Features**: Memory operations, provider abstraction, configuration management

### Vector Storage (Supabase)

- **Ports**: 8000 (API), 5435 (PostgreSQL)
- **Purpose**: High-performance vector search and relational storage with pgvector
- **Features**: PostgreSQL with pgvector, semantic search, embedding storage, relational data

### Graph Storage (Neo4j)

- **Ports**: 7474 (HTTP), 7687 (Bolt)
- **Version**: 5.23.0
- **Purpose**: Entity relationships and contextual memory connections
- **Features**: Knowledge graph, relationship mapping, graph queries

### LLM Providers

#### Ollama (Local)

- **Port**: 11434
- **Models Available**: 21+ including Llama, Qwen, and embedding models
- **Benefits**: Privacy, cost control, offline operation

#### OpenAI (Remote)

- **API**: External service
- **Models**: GPT-4, embeddings
- **Benefits**: State-of-the-art performance, reliability

## Data Flow

### Memory Addition

1. **Input**: User messages or content
2. **Processing**: LLM extracts facts and relationships
3. **Storage**:
   - Facts stored as vectors in Supabase (pgvector)
   - Relationships stored as a graph in Neo4j
4. **Indexing**: Content indexed for fast retrieval

### Memory Retrieval

1. **Query**: Semantic search query
2. **Vector Search**: Supabase finds similar memories using pgvector
3. **Graph Traversal**: Neo4j provides contextual relationships
4. **Ranking**: Combined scoring and relevance
5. **Response**: Structured memory results

## Configuration Architecture

### Environment Management

```bash
# Core Services
NEO4J_URI=bolt://localhost:7687
SUPABASE_URL=http://localhost:8000
OLLAMA_BASE_URL=http://localhost:11434

# Provider Selection
LLM_PROVIDER=ollama  # or openai
VECTOR_STORE=supabase
GRAPH_STORE=neo4j
```

### Provider Abstraction

The system supports multiple providers through a unified interface:

- **LLM Providers**: OpenAI, Ollama, Anthropic, etc.
- **Vector Stores**: Supabase (pgvector), Qdrant, Pinecone, Weaviate, etc.
- **Graph Stores**: Neo4j, Amazon Neptune, etc.

## Security Architecture

### Local-First Design

- All data stored locally
- No external dependencies required
- Full control over data processing

### Authentication Layers

- API key management
- Rate limiting
- Access control per user/application

### Network Security

- Services bound to localhost by default
- Configurable network policies
- TLS support for remote connections

## Scalability Considerations

### Horizontal Scaling

- Supabase horizontal scaling support
- Neo4j clustering capabilities
- Load balancing for the API layer

### Performance Optimization

- Vector search optimization
- Graph query optimization
- Caching strategies
- Connection pooling

## Deployment Patterns

### Development

- Docker Compose for local services
- Python virtual environment
- File-based configuration

### Production

- Container orchestration
- Service mesh integration
- Monitoring and logging
- Backup and recovery

## Integration Points

### MCP Protocol

- Standardized AI tool integration
- Claude Code compatibility
- Protocol-based communication

### API Layer

- RESTful endpoints
- OpenAPI specification
- SDK support for multiple languages

### Webhook Support

- Event-driven updates
- Real-time notifications
- Integration with external systems
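
## Example: Combined Ranking

The memory retrieval flow described above (vector search, then graph traversal, then combined ranking) can be sketched in a few lines of Python. This is a simplified illustration, not the Mem0 implementation: the function name `rank_memories`, the result shapes, and the `graph_boost` weight are all hypothetical, chosen only to show how pgvector similarity scores might be merged with graph context.

```python
# Hypothetical sketch of the "Ranking" step in memory retrieval:
# merge vector-search similarity scores with a boost for memories
# whose entities also appear in the graph-traversal results.

def rank_memories(vector_hits, graph_entities, graph_boost=0.2):
    """Combine vector similarity with graph context.

    vector_hits: list of dicts like {"id": str, "text": str,
                 "score": float, "entities": set[str]}
    graph_entities: set of entity names surfaced by graph traversal
    """
    ranked = []
    for hit in vector_hits:
        # Count how many of this memory's entities the graph also surfaced.
        overlap = len(hit["entities"] & graph_entities)
        combined = hit["score"] + graph_boost * overlap
        ranked.append({**hit, "combined_score": combined})
    # Highest combined score first.
    return sorted(ranked, key=lambda h: h["combined_score"], reverse=True)


hits = [
    {"id": "m1", "text": "Alice works at Acme", "score": 0.82,
     "entities": {"Alice", "Acme"}},
    {"id": "m2", "text": "Bob likes hiking", "score": 0.85,
     "entities": {"Bob"}},
]
# Graph traversal related Alice and Acme to the query, so m1 is
# boosted (0.82 + 2 * 0.2 = 1.22) above m2's raw score of 0.85.
results = rank_memories(hits, {"Alice", "Acme"})
print([r["id"] for r in results])  # ['m1', 'm2']
```

The point of the sketch is the design choice it mirrors: graph traversal does not replace vector search, it re-weights its results so that memories connected to the query's entities outrank purely textual matches.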