Update documentation: Replace Qdrant with Supabase references
- Updated vector store provider references throughout documentation
- Changed default vector store from Qdrant to Supabase (pgvector)
- Updated configuration examples to use Supabase connection strings
- Modified navigation structure to remove qdrant-specific references
- Updated examples in mem0-with-ollama and llama-index integration
- Corrected API reference and architecture documentation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
```diff
@@ -26,11 +26,10 @@ from mem0 import Memory
 config = {
     "vector_store": {
-        "provider": "qdrant",
+        "provider": "supabase",
         "config": {
-            "collection_name": "test",
-            "host": "localhost",
-            "port": 6333,
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
             "embedding_model_dims": 768,  # Change this according to your local model's dimensions
         },
     },
```
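The new `connection_string` follows the standard PostgreSQL URL form (`postgresql://user:password@host:port/database`). As a quick sanity check before wiring it into the config, the parts can be pulled apart with Python's standard-library `urllib.parse` — a minimal sketch using the placeholder credentials and port from the example above:

```python
from urllib.parse import urlparse

# Placeholder connection string from the example config above.
conn = "postgresql://supabase_admin:your_password@localhost:5435/postgres"

parts = urlparse(conn)
print(parts.scheme)            # postgresql
print(parts.username)          # supabase_admin
print(parts.hostname)          # localhost
print(parts.port)              # 5435
print(parts.path.lstrip("/"))  # database name: postgres
```

Checking the port in particular is useful here, since the example targets a local Supabase instance on 5435 rather than PostgreSQL's default 5432.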
```diff
@@ -66,7 +65,7 @@ memories = m.get_all(user_id="john")
 ### Key Points

 - **Configuration**: The setup involves configuring the vector store, language model, and embedding model to use local resources.
-- **Vector Store**: Qdrant is used as the vector store, running on localhost.
+- **Vector Store**: Supabase with pgvector is used as the vector store, running on localhost.
 - **Language Model**: Ollama is used as the LLM provider, with the "llama3.1:latest" model.
 - **Embedding Model**: Ollama is also used for embeddings, with the "nomic-embed-text:latest" model.
```
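Putting the key points together, a fully local setup might look like the sketch below. The `vector_store` section is taken from the diff above; the `llm` and `embedder` sections follow mem0's usual provider/config layout with the Ollama model names mentioned in the key points, so treat them as an assumption to adjust for your environment:

```python
# Sketch of a fully local mem0 configuration: Supabase (pgvector) for
# vector storage, Ollama for both the LLM and the embeddings.
config = {
    "vector_store": {
        "provider": "supabase",
        "config": {
            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
            "collection_name": "memories",
            "embedding_model_dims": 768,  # match your local embedding model
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3.1:latest"},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text:latest"},
    },
}

# With mem0 installed and the local services running, this would be used as:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I like tennis", user_id="john")
# memories = m.get_all(user_id="john")

print(sorted(config))  # ['embedder', 'llm', 'vector_store']
```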