Major Changes:
- Added Ollama as an alternative LLM provider alongside OpenAI
- Implemented flexible provider switching via environment variables
- Support for multiple embedding models (OpenAI and Ollama)
- Created comprehensive Ollama setup guide
Configuration Changes (config.py):
- Added LLM_PROVIDER and EMBEDDER_PROVIDER settings
- Added Ollama configuration: base URL, LLM model, embedding model
- Modified get_mem0_config() to dynamically switch providers
- OpenAI API key now optional when using Ollama
- Added validation to ensure the required API keys are present for the selected provider
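The provider switch described above can be sketched as follows. This is an illustrative reconstruction, not the actual `config.py`: the settings names follow the environment variables listed below, but the exact mem0 config schema and defaults are assumptions.

```python
import os

def get_mem0_config() -> dict:
    """Build a mem0-style config dict, switching providers via env vars (sketch)."""
    llm_provider = os.getenv("LLM_PROVIDER", "openai")
    embedder_provider = os.getenv("EMBEDDER_PROVIDER", "openai")

    if llm_provider == "ollama":
        llm = {
            "provider": "ollama",
            "config": {
                "model": os.getenv("OLLAMA_LLM_MODEL", "llama3.1:8b"),
                "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
            },
        }
    else:
        llm = {
            "provider": "openai",
            "config": {"api_key": os.getenv("OPENAI_API_KEY")},
        }

    if embedder_provider == "ollama":
        embedder = {
            "provider": "ollama",
            "config": {
                "model": os.getenv("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text"),
                "embedding_dims": int(os.getenv("MEM0_EMBEDDING_DIMS", "768")),
            },
        }
    else:
        embedder = {
            "provider": "openai",
            "config": {"embedding_dims": int(os.getenv("MEM0_EMBEDDING_DIMS", "1536"))},
        }

    # The OpenAI key is only required when an OpenAI-backed provider is selected
    if "openai" in (llm_provider, embedder_provider) and not os.getenv("OPENAI_API_KEY"):
        raise ValueError("OPENAI_API_KEY is required when an OpenAI provider is selected")

    return {"llm": llm, "embedder": embedder}
```

The key point is that the OpenAI key check runs only when either provider resolves to `openai`, which is what makes the key optional in a full-Ollama setup.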
Supported Configurations:
1. Full OpenAI (default):
- LLM_PROVIDER=openai
- EMBEDDER_PROVIDER=openai
2. Full Ollama (local):
- LLM_PROVIDER=ollama
- EMBEDDER_PROVIDER=ollama
3. Hybrid configurations:
- Ollama LLM + OpenAI embeddings
- OpenAI LLM + Ollama embeddings
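For illustration, a full-Ollama (local) setup might look like this in `.env`; the model names are examples drawn from the supported list below:

```env
LLM_PROVIDER=ollama
EMBEDDER_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_LLM_MODEL=llama3.1:8b
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
MEM0_EMBEDDING_DIMS=768
```

A hybrid setup changes only `LLM_PROVIDER` or `EMBEDDER_PROVIDER`; whichever side stays on `openai` still requires `OPENAI_API_KEY`.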
Ollama Models Supported:
- LLM: llama3.1:8b, llama3.1:70b, mistral:7b, codellama:7b, phi3:3.8b
- Embeddings: nomic-embed-text, mxbai-embed-large, all-minilm
Documentation:
- Created docs/setup/ollama.mdx - Complete Ollama setup guide:
  - Installation methods (host and Docker)
  - Model selection and comparison
  - Docker Compose configuration
  - Performance tuning and GPU acceleration
  - Migration guide from OpenAI
  - Troubleshooting section
- Updated README.md with Ollama features
- Updated .env.example with provider selection
- Marked Phase 2 as complete in roadmap
Environment Variables:
- LLM_PROVIDER: Select LLM provider (openai/ollama)
- EMBEDDER_PROVIDER: Select embedding provider (openai/ollama)
- OLLAMA_BASE_URL: Ollama API endpoint (default: http://localhost:11434)
- OLLAMA_LLM_MODEL: Ollama model for text generation
- OLLAMA_EMBEDDING_MODEL: Ollama model for embeddings
- MEM0_EMBEDDING_DIMS: Must match embedding model dimensions
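A sketch of how the dimension requirement can be enforced at startup. The helper name `check_embedding_dims` is hypothetical; the dimension values are the published output sizes of these models (OpenAI's text-embedding-3-small is 1536-dim for comparison):

```python
import os

# Output dimensions of the supported Ollama embedding models
EMBEDDING_DIMS = {
    "nomic-embed-text": 768,
    "mxbai-embed-large": 1024,
    "all-minilm": 384,
}

def check_embedding_dims() -> int:
    """Fail fast if MEM0_EMBEDDING_DIMS disagrees with the chosen model (hypothetical helper)."""
    model = os.getenv("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text")
    expected = EMBEDDING_DIMS.get(model)
    dims = int(os.getenv("MEM0_EMBEDDING_DIMS", str(expected or 768)))
    if expected is not None and dims != expected:
        raise ValueError(
            f"MEM0_EMBEDDING_DIMS={dims}, but {model} produces {expected}-dim vectors"
        )
    return dims
```

Failing fast here is cheaper than discovering the mismatch later as a vector-store insert or search error.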
Breaking Changes:
- None - defaults to OpenAI for backward compatibility
Migration Notes:
- When switching from OpenAI to Ollama embeddings, existing embeddings
must be cleared due to dimension changes (1536 → 768 for nomic-embed-text)
- Update MEM0_EMBEDDING_DIMS to match chosen embedding model
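As an example, the switch above amounts to the following `.env` change (values shown for nomic-embed-text; adjust for your chosen model):

```env
# before (OpenAI embeddings)
EMBEDDER_PROVIDER=openai
MEM0_EMBEDDING_DIMS=1536

# after (Ollama + nomic-embed-text) - clear the vector store first, since
# stored 1536-dim vectors cannot be searched against 768-dim queries
EMBEDDER_PROVIDER=ollama
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
MEM0_EMBEDDING_DIMS=768
```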
Benefits:
✅ Cost savings - no API costs with local models
✅ Privacy - all data stays local
✅ Offline capability - works without internet
✅ Model variety - access to many open-source models
✅ Flexibility - easy switching between providers
Version: 1.1.0
Status: Phase 2 Complete - Production Ready with Ollama Support
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>