- Bump version to 1.1.1
- Update last updated date to 2025-10-16
- Add changelog entries for the timezone fix and the ollama Python library
- Fix the mem0 library's hardcoded US/Pacific timezone in the Docker build
- Add TZ=Europe/Prague environment variable to containers
- Add missing ollama Python library to requirements.txt
- Add Ollama environment variables to MCP container
- Include test scripts for Ollama configuration validation
This resolves timestamp issues where memories were created with the
incorrect Pacific offset (-07:00) instead of the local time offset (+02:00).
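In the spirit of the included test scripts, a quick check along these lines can confirm both fixes from inside a container. This is a sketch, not a shipped script; the use of `ollama.Client` and the endpoint default are assumptions to adjust to the actual setup:

```python
# Sanity check for the TZ fix and the newly added ollama library.
import os
from datetime import datetime

import ollama  # the library added to requirements.txt

# With TZ=Europe/Prague set, the local offset should be +0100/+0200,
# not the -0700 the hardcoded US/Pacific timezone produced.
offset = datetime.now().astimezone().strftime("%z")
print(f"Local UTC offset: {offset}")
assert not offset.startswith("-07"), "container is still on US/Pacific"

# Confirm the Ollama endpoint is reachable (base URL assumed to come
# from the same env var the containers use).
client = ollama.Client(host=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"))
print(client.list())  # raises if the endpoint is unreachable
```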
Major Changes:
- Added Ollama as an alternative LLM provider alongside OpenAI
- Implemented flexible provider switching via environment variables
- Support for multiple embedding models (OpenAI and Ollama)
- Created comprehensive Ollama setup guide
Configuration Changes (config.py):
- Added LLM_PROVIDER and EMBEDDER_PROVIDER settings
- Added Ollama configuration: base URL, LLM model, embedding model
- Modified get_mem0_config() to switch providers dynamically (see the sketch after this list)
- OpenAI API key now optional when using Ollama
- Added validation to ensure required keys based on provider
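A minimal sketch of the resulting provider switch, assuming mem0's dict-based config schema; the default model names and the `_require` helper are illustrative, not the actual config.py:

```python
import os

def _require(name: str) -> str:
    """Fail fast when the selected provider needs a key that is not set."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is required for the selected provider")
    return value

def get_mem0_config() -> dict:
    """Build a mem0 config dict, switching providers via environment variables."""
    base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")

    if os.getenv("LLM_PROVIDER", "openai") == "ollama":
        llm = {"provider": "ollama", "config": {
            "model": os.getenv("OLLAMA_LLM_MODEL", "llama3.1:8b"),
            "ollama_base_url": base_url}}
    else:  # default keeps full backward compatibility with OpenAI
        llm = {"provider": "openai", "config": {
            "api_key": _require("OPENAI_API_KEY")}}

    if os.getenv("EMBEDDER_PROVIDER", "openai") == "ollama":
        embedder = {"provider": "ollama", "config": {
            "model": os.getenv("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text"),
            "ollama_base_url": base_url}}
    else:
        embedder = {"provider": "openai", "config": {
            "api_key": _require("OPENAI_API_KEY")}}

    return {"llm": llm, "embedder": embedder}
```

Because each provider is resolved independently, the hybrid combinations below fall out of the same two environment variables.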
Supported Configurations:
1. Full OpenAI (default):
- LLM_PROVIDER=openai
- EMBEDDER_PROVIDER=openai
2. Full Ollama (local):
- LLM_PROVIDER=ollama
- EMBEDDER_PROVIDER=ollama
3. Hybrid configurations:
- Ollama LLM + OpenAI embeddings
- OpenAI LLM + Ollama embeddings
Ollama Models Supported:
- LLM: llama3.1:8b, llama3.1:70b, mistral:7b, codellama:7b, phi3:3.8b
- Embeddings: nomic-embed-text, mxbai-embed-large, all-minilm
Documentation:
- Created docs/setup/ollama.mdx - Complete Ollama setup guide
- Installation methods (host and Docker)
- Model selection and comparison
- Docker Compose configuration
- Performance tuning and GPU acceleration
- Migration guide from OpenAI
- Troubleshooting section
- Updated README.md with Ollama features
- Updated .env.example with provider selection
- Marked Phase 2 as complete in roadmap
Environment Variables:
- LLM_PROVIDER: Select LLM provider (openai/ollama)
- EMBEDDER_PROVIDER: Select embedding provider (openai/ollama)
- OLLAMA_BASE_URL: Ollama API endpoint (default: http://localhost:11434)
- OLLAMA_LLM_MODEL: Ollama model for text generation
- OLLAMA_EMBEDDING_MODEL: Ollama model for embeddings
- MEM0_EMBEDDING_DIMS: Must match embedding model dimensions
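Since a dims mismatch only surfaces later as broken searches, a startup guard along these lines is cheap insurance. The dimension table reflects common values for the models listed above; treat it as an assumption to verify against each model's documentation:

```python
import os

# Typical output dimensions for the embedding models above (verify per model).
EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,  # OpenAI default
    "nomic-embed-text": 768,
    "mxbai-embed-large": 1024,
    "all-minilm": 384,
}

model = os.getenv("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text")
dims = int(os.getenv("MEM0_EMBEDDING_DIMS", "1536"))
expected = EMBEDDING_DIMS.get(model)
if expected is not None and dims != expected:
    raise RuntimeError(
        f"MEM0_EMBEDDING_DIMS={dims}, but {model} produces {expected}-dim vectors")
```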
Breaking Changes:
- None - defaults to OpenAI for backward compatibility
Migration Notes:
- When switching from OpenAI to Ollama embeddings, existing embeddings
  must be cleared because the dimensions change (1536 → 768 for
  nomic-embed-text); see the sketch below
- Update MEM0_EMBEDDING_DIMS to match chosen embedding model
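A hedged sketch of that migration step, assuming the Memory instance is built from get_mem0_config() as above (the import path is an assumption). mem0's reset() wipes the store, so export anything worth keeping first:

```python
from mem0 import Memory

from config import get_mem0_config  # module path assumed

# Config now points at Ollama embeddings (768 dims for nomic-embed-text).
memory = Memory.from_config(get_mem0_config())

# Clear the old 1536-dim OpenAI vectors; they cannot coexist with the
# new dimensionality. This is destructive -- back up first.
memory.reset()
```

Memories added after the reset are embedded at the new dimensionality, so MEM0_EMBEDDING_DIMS must already be updated before this runs.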
Benefits:
✅ Cost savings - no API costs with local models
✅ Privacy - all data stays local
✅ Offline capability - works without internet
✅ Model variety - access to many open-source models
✅ Flexibility - easy switching between providers
Version: 1.1.0
Status: Phase 2 Complete - Production Ready with Ollama Support
Major Changes:
- Implemented MCP HTTP/SSE transport server for n8n and web clients
- Created mcp_server/http_server.py with FastAPI for JSON-RPC 2.0 over HTTP
- Added health check endpoint (/health) for container monitoring
- Refactored mcp-server/ to mcp_server/ (Python module structure)
- Updated Dockerfile.mcp to run HTTP server with health checks
MCP Server Features:
- 7 memory tools exposed via MCP, including add, search, get, update, and delete
- HTTP/SSE transport on port 8765 for n8n integration
- stdio transport for Claude Code integration
- JSON-RPC 2.0 protocol implementation (minimal shape sketched after this list)
- CORS support for web clients
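The overall shape of the HTTP transport, as a sketch rather than the full mcp_server/http_server.py; the /mcp path and the dispatch() stub are assumptions:

```python
import os

import uvicorn
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
# CORS so browser-based clients (e.g. n8n's editor) can call the server.
app.add_middleware(CORSMiddleware, allow_origins=["*"],
                   allow_methods=["*"], allow_headers=["*"])

async def dispatch(method: str, params: dict) -> dict:
    """Stand-in for the real router that maps JSON-RPC methods
    (tools/list, tools/call, ...) to the memory tool handlers."""
    if method == "tools/list":
        return {"tools": []}  # the real server lists all 7 memory tools
    raise ValueError(f"unknown method: {method}")

@app.get("/health")
async def health() -> dict:
    """Probed by the Docker health check."""
    return {"status": "ok"}

@app.post("/mcp")
async def mcp_endpoint(request: Request) -> dict:
    """JSON-RPC 2.0 over HTTP: wrap tool dispatch in the response envelope."""
    req = await request.json()
    try:
        result = await dispatch(req["method"], req.get("params", {}))
        return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}
    except Exception as exc:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32603, "message": str(exc)}}

if __name__ == "__main__":
    uvicorn.run(app, host=os.getenv("MCP_HOST", "0.0.0.0"),
                port=int(os.getenv("MCP_PORT", "8765")))
```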
n8n Integration:
- Successfully tested with AI Agent workflows
- MCP Client Tool configuration documented
- Working webhook endpoint tested and verified
- System prompt optimized for automatic user_id usage
Documentation:
- Created comprehensive Mintlify documentation site
- Added docs/mcp/introduction.mdx - MCP server overview
- Added docs/mcp/installation.mdx - Installation guide
- Added docs/mcp/tools.mdx - Complete tool reference
- Added docs/examples/n8n.mdx - n8n integration guide
- Added docs/examples/claude-code.mdx - Claude Code setup
- Updated README.md with MCP HTTP server info
- Updated roadmap to mark Phase 1 as complete
Bug Fixes:
- Fixed delete operations so they stay synchronized across Supabase and Neo4j (see the sketch after this list)
- Updated memory_service.py with proper error handling
- Fixed Neo4j connection issues in delete operations
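A sketch of the synchronized delete, with the error handling split so a failure on one side is reported rather than swallowed. The table name, node label, and env var names are assumptions:

```python
import os

from neo4j import GraphDatabase
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])
driver = GraphDatabase.driver(
    os.environ["NEO4J_URI"],
    auth=(os.environ["NEO4J_USER"], os.environ["NEO4J_PASSWORD"]))

def delete_memory(memory_id: str) -> None:
    """Delete a memory from both stores, failing loudly on either side."""
    supabase.table("memories").delete().eq("id", memory_id).execute()
    try:
        with driver.session() as session:
            session.run("MATCH (m:Memory {id: $id}) DETACH DELETE m",
                        id=memory_id)
    except Exception as exc:
        # The Supabase row is already gone; surface the failure so the
        # graph side can be repaired instead of silently drifting.
        raise RuntimeError(f"Neo4j delete failed for {memory_id}") from exc
```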
Configuration:
- Added MCP_HOST and MCP_PORT environment variables
- Updated .env.example with MCP server configuration
- Updated docker-compose.yml with MCP container health checks
Testing:
- Added test scripts for MCP HTTP endpoint verification (example after this list)
- Created test workflows in n8n
- Verified all 7 memory tools working correctly
- Tested synchronized operations across both stores
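A minimal end-to-end check in the same vein as those test scripts; the /mcp path is an assumption, while tools/list is the standard MCP method for enumerating tools:

```python
import requests

BASE = "http://localhost:8765"  # MCP_HOST/MCP_PORT defaults assumed

# Container health endpoint.
assert requests.get(f"{BASE}/health", timeout=5).status_code == 200

# Enumerate the exposed tools over JSON-RPC 2.0.
resp = requests.post(
    f"{BASE}/mcp",
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}},
    timeout=10,
).json()
tools = resp["result"]["tools"]
print([t["name"] for t in tools])
assert len(tools) == 7, "expected all 7 memory tools"
```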
Version: 1.0.0
Status: Phase 1 Complete - Production Ready