Update documentation: Replace Qdrant with Supabase references
- Updated vector store provider references throughout documentation
- Changed default vector store from Qdrant to Supabase (pgvector)
- Updated configuration examples to use Supabase connection strings
- Modified navigation structure to remove qdrant-specific references
- Updated examples in mem0-with-ollama and llama-index integration
- Corrected API reference and architecture documentation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -172,7 +172,7 @@ X-RateLimit-Reset: 1627849200
 ### Completed ✅

 - Core mem0 integration
-- Database connections (Neo4j, Qdrant)
+- Database connections (Neo4j, Supabase)
 - LLM provider support (Ollama, OpenAI)
 - Configuration management
@@ -10,7 +10,7 @@ iconType: "solid"
 The `config` is defined as an object with two main keys:

 - `vector_store`: Specifies the vector database provider and its configuration
-  - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "upstash_vector", "azure_ai_search", "vertex_ai_vector_search")
+  - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "supabase", "milvus", "upstash_vector", "azure_ai_search", "vertex_ai_vector_search")
   - `config`: A nested dictionary containing provider-specific settings
@@ -13,7 +13,7 @@ Mem0 includes built-in support for various popular databases. Memory can utilize
 See the list of supported vector databases below.

 <Note>
-The following vector databases are supported in the Python implementation. The TypeScript implementation currently only supports Qdrant, Redis,Vectorize and in-memory vector database.
+The following vector databases are supported in the Python implementation. The TypeScript implementation currently only supports Qdrant, Redis, Vectorize and in-memory vector database.
 </Note>

 <CardGroup cols={3}>
@@ -37,7 +37,7 @@ See the list of supported vector databases below.
 ## Usage

-To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Qdrant` will be used as the vector database.
+To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Supabase` (with pgvector) will be used as the vector database.

 For a comprehensive list of available parameters for vector database configuration, please refer to [Config](./config).
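To make the new default concrete, a minimal Supabase configuration might look like the sketch below; the connection string and collection name are placeholders drawn from the examples elsewhere in this commit, not values usable verbatim:

```python
# Minimal sketch of a Supabase (pgvector) vector store config for Mem0.
# The credentials below are placeholders -- substitute your own.
config = {
    "vector_store": {
        "provider": "supabase",
        "config": {
            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
            "collection_name": "memories",
        },
    },
}

# With mem0 installed and Supabase reachable, this would initialize memory:
# from mem0 import Memory
# m = Memory.from_config(config)
```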
@@ -14,7 +14,7 @@ description: 'Complete development environment setup and workflow'
 ├── test_basic.py            # Basic functionality tests
 ├── test_openai.py           # OpenAI integration test
 ├── test_all_connections.py  # Comprehensive connection tests
-├── docker-compose.yml       # Neo4j & Qdrant containers
+├── docker-compose.yml       # Neo4j container (Supabase is external)
 ├── .env                     # Environment variables
 └── docs/                    # Documentation (Mintlify)
 ```
@@ -24,7 +24,7 @@ description: 'Complete development environment setup and workflow'
 | Component | Status | Port | Description |
 |-----------|--------|------|-------------|
 | Neo4j | ✅ READY | 7474/7687 | Graph memory storage |
-| Qdrant | ✅ READY | 6333/6334 | Vector memory storage |
+| Supabase | ✅ READY | 8000/5435 | Vector & database storage (self-hosted) |
 | Ollama | ✅ READY | 11434 | Local LLM processing |
 | Mem0 Core | ✅ READY | - | Memory management system v0.1.115 |
@@ -12,12 +12,12 @@ graph TB
     A[AI Applications] --> B[MCP Server - Port 8765]
     B --> C[Memory API - Port 8080]
     C --> D[Mem0 Core v0.1.115]
-    D --> E[Vector Store - Qdrant]
+    D --> E[Vector Store - Supabase]
     D --> F[Graph Store - Neo4j]
     D --> G[LLM Provider]
     G --> H[Ollama - Port 11434]
     G --> I[OpenAI/Remote APIs]
-    E --> J[Qdrant - Port 6333]
+    E --> J[Supabase - Port 8000/5435]
     F --> K[Neo4j - Port 7687]
 ```
@@ -28,10 +28,10 @@ graph TB
 - **Purpose**: Central memory management and coordination
 - **Features**: Memory operations, provider abstraction, configuration management

-### Vector Storage (Qdrant)
-- **Port**: 6333 (REST), 6334 (gRPC)
-- **Purpose**: High-performance vector search and similarity matching
-- **Features**: Collections management, semantic search, embeddings storage
+### Vector Storage (Supabase)
+- **Port**: 8000 (API), 5435 (PostgreSQL)
+- **Purpose**: High-performance vector search with pgvector and database storage
+- **Features**: PostgreSQL with pgvector, semantic search, embeddings storage, relational data

 ### Graph Storage (Neo4j)
 - **Port**: 7474 (HTTP), 7687 (Bolt)
@@ -57,13 +57,13 @@ graph TB
 1. **Input**: User messages or content
 2. **Processing**: LLM extracts facts and relationships
 3. **Storage**:
-   - Facts stored as vectors in Qdrant
+   - Facts stored as vectors in Supabase (pgvector)
    - Relationships stored as graph in Neo4j
 4. **Indexing**: Content indexed for fast retrieval

 ### Memory Retrieval
 1. **Query**: Semantic search query
-2. **Vector Search**: Qdrant finds similar memories
+2. **Vector Search**: Supabase finds similar memories using pgvector
 3. **Graph Traversal**: Neo4j provides contextual relationships
 4. **Ranking**: Combined scoring and relevance
 5. **Response**: Structured memory results
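The vector-search step above ultimately maps to a pgvector distance query. As a rough sketch of the kind of SQL involved (table and column names here are illustrative, not Mem0's actual schema):

```python
# Illustrative only: builds the kind of pgvector similarity query step 2 implies.
# Table and column names are hypothetical, not Mem0's real schema.
def build_similarity_query(table: str = "memories", top_k: int = 5) -> str:
    # `<=>` is pgvector's cosine-distance operator; smaller means more similar.
    return (
        f"SELECT id, payload, embedding <=> %(query_vec)s::vector AS distance "
        f"FROM {table} ORDER BY distance LIMIT {top_k}"
    )

print(build_similarity_query())
```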
@@ -74,12 +74,12 @@ graph TB
 ```bash
 # Core Services
 NEO4J_URI=bolt://localhost:7687
-QDRANT_URL=http://localhost:6333
+SUPABASE_URL=http://localhost:8000
 OLLAMA_BASE_URL=http://localhost:11434

 # Provider Selection
 LLM_PROVIDER=ollama  # or openai
-VECTOR_STORE=qdrant
+VECTOR_STORE=supabase
 GRAPH_STORE=neo4j
 ```
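A service consuming these variables might read them along these lines — a sketch with a hypothetical helper name, mirroring the defaults above:

```python
import os

# Hypothetical loader mirroring the environment variables above,
# falling back to the same defaults used in this setup.
def load_service_endpoints() -> dict:
    return {
        "neo4j_uri": os.environ.get("NEO4J_URI", "bolt://localhost:7687"),
        "supabase_url": os.environ.get("SUPABASE_URL", "http://localhost:8000"),
        "ollama_base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "llm_provider": os.environ.get("LLM_PROVIDER", "ollama"),
        "vector_store": os.environ.get("VECTOR_STORE", "supabase"),
        "graph_store": os.environ.get("GRAPH_STORE", "neo4j"),
    }

endpoints = load_service_endpoints()
```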
@@ -87,7 +87,7 @@ GRAPH_STORE=neo4j
 The system supports multiple providers through a unified interface:

 - **LLM Providers**: OpenAI, Ollama, Anthropic, etc.
-- **Vector Stores**: Qdrant, Pinecone, Weaviate, etc.
+- **Vector Stores**: Supabase (pgvector), Qdrant, Pinecone, Weaviate, etc.
 - **Graph Stores**: Neo4j, Amazon Neptune, etc.
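The unified interface could be modeled roughly as below — a sketch using a Python `Protocol`; the method names are illustrative, not Mem0's actual provider API:

```python
from typing import Protocol

class VectorStore(Protocol):
    # Illustrative interface; Mem0's real provider API may differ.
    def insert(self, vectors: list, payloads: list) -> None: ...
    def search(self, query: list, limit: int) -> list: ...

class InMemoryStore:
    """Toy implementation satisfying the protocol, for illustration only."""

    def __init__(self) -> None:
        self._items: list[tuple[list, dict]] = []

    def insert(self, vectors: list, payloads: list) -> None:
        self._items.extend(zip(vectors, payloads))

    def search(self, query: list, limit: int) -> list:
        # Rank stored vectors by squared L2 distance to the query vector.
        scored = sorted(
            self._items,
            key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
        )
        return [payload for _, payload in scored[:limit]]

store: VectorStore = InMemoryStore()
store.insert([[0.0, 1.0], [1.0, 0.0]], [{"id": "a"}, {"id": "b"}])
print(store.search([0.0, 0.9], limit=1))  # closest payload first
```

Any backend (Supabase, Qdrant, Pinecone, …) that satisfies the same shape can be swapped in without touching calling code.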
 ## Security Architecture
@@ -110,7 +110,7 @@ The system supports multiple providers through a unified interface:
 ## Scalability Considerations

 ### Horizontal Scaling
-- Qdrant cluster support
+- Supabase horizontal scaling support
 - Neo4j clustering capabilities
 - Load balancing for API layer
@@ -26,11 +26,10 @@ from mem0 import Memory

 config = {
     "vector_store": {
-        "provider": "qdrant",
+        "provider": "supabase",
         "config": {
-            "collection_name": "test",
-            "host": "localhost",
-            "port": 6333,
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
+            "embedding_model_dims": 768,  # Change this according to your local model's dimensions
         },
     },
@@ -66,7 +65,7 @@ memories = m.get_all(user_id="john")
 ### Key Points

 - **Configuration**: The setup involves configuring the vector store, language model, and embedding model to use local resources.
-- **Vector Store**: Qdrant is used as the vector store, running on localhost.
+- **Vector Store**: Supabase with pgvector is used as the vector store, running on localhost.
 - **Language Model**: Ollama is used as the LLM provider, with the "llama3.1:latest" model.
 - **Embedding Model**: Ollama is also used for embeddings, with the "nomic-embed-text:latest" model.
@@ -69,11 +69,10 @@ Set your Mem0 OSS by providing configuration details:
 ```python
 config = {
     "vector_store": {
-        "provider": "qdrant",
+        "provider": "supabase",
         "config": {
-            "collection_name": "test_9",
-            "host": "localhost",
-            "port": 6333,
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
+            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
         },
     },
@@ -72,7 +72,6 @@
       "group": "Database Integration",
       "pages": [
         "database/neo4j",
-        "database/qdrant",
         "database/supabase"
       ]
     },
@@ -45,17 +45,16 @@ m = AsyncMemory()
 <Tab title="Advanced">
 If you want to run Mem0 in production, initialize using the following method:

-Run Qdrant first:
+Run Supabase first:

 ```bash
-docker pull qdrant/qdrant
-
-docker run -p 6333:6333 -p 6334:6334 \
-    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
-    qdrant/qdrant
+# Ensure you have Supabase running locally
+# See https://supabase.com/docs/guides/self-hosting/docker for setup
+
+docker compose up -d
 ```

-Then, instantiate memory with qdrant server:
+Then, instantiate memory with Supabase server:

 ```python
 import os
@@ -65,10 +64,10 @@ os.environ["OPENAI_API_KEY"] = "your-api-key"

 config = {
     "vector_store": {
-        "provider": "qdrant",
+        "provider": "supabase",
         "config": {
-            "host": "localhost",
-            "port": 6333,
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
         }
     },
 }
@@ -7,7 +7,7 @@ description: 'Get your Mem0 Memory System running in under 5 minutes'

 <CardGroup cols={2}>
   <Card title="Docker & Docker Compose" icon="docker">
-    Required for Neo4j and Qdrant containers
+    Required for Neo4j container (Supabase already running)
   </Card>
   <Card title="Python 3.10+" icon="python">
     For the mem0 core system and API
@@ -19,9 +19,13 @@ description: 'Get your Mem0 Memory System running in under 5 minutes'
 ### Step 1: Start Database Services

 ```bash
-docker compose up -d neo4j qdrant
+docker compose up -d neo4j
 ```

+<Note>
+Supabase is already running as part of your existing infrastructure on the localai network.
+</Note>
+
 ### Step 2: Test Your Installation

 ```bash