diff --git a/docs/api-reference/introduction.mdx b/docs/api-reference/introduction.mdx
index 8897c1d9..c471f638 100644
--- a/docs/api-reference/introduction.mdx
+++ b/docs/api-reference/introduction.mdx
@@ -172,7 +172,7 @@ X-RateLimit-Reset: 1627849200
### Completed ✅
- Core mem0 integration
-- Database connections (Neo4j, Qdrant)
+- Database connections (Neo4j, Supabase)
- LLM provider support (Ollama, OpenAI)
- Configuration management
diff --git a/docs/components/vectordbs/config.mdx b/docs/components/vectordbs/config.mdx
index ac4272ca..ff8e823c 100644
--- a/docs/components/vectordbs/config.mdx
+++ b/docs/components/vectordbs/config.mdx
@@ -10,7 +10,7 @@ iconType: "solid"
The `config` is defined as an object with two main keys:
- `vector_store`: Specifies the vector database provider and its configuration
- - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "upstash_vector", "azure_ai_search", "vertex_ai_vector_search")
+ - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "supabase", "milvus", "upstash_vector", "azure_ai_search", "vertex_ai_vector_search")
- `config`: A nested dictionary containing provider-specific settings
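+
+As a minimal sketch of this shape (the Supabase values mirror the examples elsewhere in these docs; the password is a placeholder):
+
+```python
+config = {
+    "vector_store": {
+        "provider": "supabase",
+        "config": {
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
+        },
+    },
+}
+```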
diff --git a/docs/components/vectordbs/overview.mdx b/docs/components/vectordbs/overview.mdx
index 1c0ac10d..19bed622 100644
--- a/docs/components/vectordbs/overview.mdx
+++ b/docs/components/vectordbs/overview.mdx
@@ -13,7 +13,7 @@ Mem0 includes built-in support for various popular databases. Memory can utilize
See the list of supported vector databases below.
- The following vector databases are supported in the Python implementation. The TypeScript implementation currently only supports Qdrant, Redis,Vectorize and in-memory vector database.
+ The following vector databases are supported in the Python implementation. The TypeScript implementation currently supports only Qdrant, Redis, Vectorize, and an in-memory vector database.
@@ -37,7 +37,7 @@ See the list of supported vector databases below.
## Usage
-To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Qdrant` will be used as the vector database.
+To use a vector database, provide a configuration that customizes its behavior. If no configuration is supplied, a default configuration is applied and `Supabase` (with pgvector) is used as the vector database.
For a comprehensive list of available parameters for vector database configuration, please refer to [Config](./config).
diff --git a/docs/development.mdx b/docs/development.mdx
index 2e7924ac..28beb92e 100644
--- a/docs/development.mdx
+++ b/docs/development.mdx
@@ -14,7 +14,7 @@ description: 'Complete development environment setup and workflow'
├── test_basic.py # Basic functionality tests
├── test_openai.py # OpenAI integration test
├── test_all_connections.py # Comprehensive connection tests
-├── docker-compose.yml # Neo4j & Qdrant containers
+├── docker-compose.yml # Neo4j container (Supabase is external)
├── .env # Environment variables
└── docs/ # Documentation (Mintlify)
```
@@ -24,7 +24,7 @@ description: 'Complete development environment setup and workflow'
| Component | Status | Port | Description |
|-----------|--------|------|-------------|
| Neo4j | ✅ READY | 7474/7687 | Graph memory storage |
-| Qdrant | ✅ READY | 6333/6334 | Vector memory storage |
+| Supabase | ✅ READY | 8000/5435 | Vector & database storage (self-hosted) |
| Ollama | ✅ READY | 11434 | Local LLM processing |
| Mem0 Core | ✅ READY | - | Memory management system v0.1.115 |
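+
+A quick way to sanity-check the service ports above (a minimal sketch in the spirit of `test_all_connections.py`; the host and ports are assumptions taken from the table):
+
+```python
+import socket
+
+# Ports from the component status table above
+services = {
+    "Neo4j (Bolt)": 7687,
+    "Supabase (API)": 8000,
+    "Supabase (Postgres)": 5435,
+    "Ollama": 11434,
+}
+
+for name, port in services.items():
+    try:
+        with socket.create_connection(("localhost", port), timeout=2):
+            print(f"{name}: reachable on port {port}")
+    except OSError:
+        print(f"{name}: NOT reachable on port {port}")
+```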
diff --git a/docs/essentials/architecture.mdx b/docs/essentials/architecture.mdx
index c86f714e..a460e1bc 100644
--- a/docs/essentials/architecture.mdx
+++ b/docs/essentials/architecture.mdx
@@ -12,12 +12,12 @@ graph TB
A[AI Applications] --> B[MCP Server - Port 8765]
B --> C[Memory API - Port 8080]
C --> D[Mem0 Core v0.1.115]
- D --> E[Vector Store - Qdrant]
+ D --> E[Vector Store - Supabase]
D --> F[Graph Store - Neo4j]
D --> G[LLM Provider]
G --> H[Ollama - Port 11434]
G --> I[OpenAI/Remote APIs]
- E --> J[Qdrant - Port 6333]
+ E --> J[Supabase - Port 8000/5435]
F --> K[Neo4j - Port 7687]
```
@@ -28,10 +28,10 @@ graph TB
- **Purpose**: Central memory management and coordination
- **Features**: Memory operations, provider abstraction, configuration management
-### Vector Storage (Qdrant)
-- **Port**: 6333 (REST), 6334 (gRPC)
-- **Purpose**: High-performance vector search and similarity matching
-- **Features**: Collections management, semantic search, embeddings storage
+### Vector Storage (Supabase)
+- **Port**: 8000 (API), 5435 (PostgreSQL)
+- **Purpose**: High-performance vector search with pgvector and database storage
+- **Features**: PostgreSQL with pgvector, semantic search, embeddings storage, relational data
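+
+As an illustration of a pgvector lookup, here is a sketch using the `vecs` Python client (the access layer is an assumption; collection name, dimension, and query vector are illustrative):
+
+```python
+import vecs
+
+# Connection string follows the examples elsewhere in these docs
+vx = vecs.create_client(
+    "postgresql://supabase_admin:your_password@localhost:5435/postgres"
+)
+memories = vx.get_or_create_collection(name="memories", dimension=768)
+
+# Nearest-neighbour search over stored embeddings (query vector is illustrative)
+hits = memories.query(data=[0.1] * 768, limit=5)
+```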
### Graph Storage (Neo4j)
- **Port**: 7474 (HTTP), 7687 (Bolt)
@@ -57,13 +57,13 @@ graph TB
1. **Input**: User messages or content
2. **Processing**: LLM extracts facts and relationships
3. **Storage**:
- - Facts stored as vectors in Qdrant
+ - Facts stored as vectors in Supabase (pgvector)
- Relationships stored as graph in Neo4j
4. **Indexing**: Content indexed for fast retrieval
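+
+A minimal sketch of this flow with the Python SDK (`config` as defined in the quickstart):
+
+```python
+from mem0 import Memory
+
+m = Memory.from_config(config)
+
+# The LLM extracts facts, which are embedded into Supabase (pgvector);
+# relationships go to Neo4j when a graph store is configured.
+m.add(
+    [{"role": "user", "content": "I'm vegetarian and allergic to nuts."}],
+    user_id="john",
+)
+```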
### Memory Retrieval
1. **Query**: Semantic search query
-2. **Vector Search**: Qdrant finds similar memories
+2. **Vector Search**: Supabase finds similar memories using pgvector
3. **Graph Traversal**: Neo4j provides contextual relationships
4. **Ranking**: Combined scoring and relevance
5. **Response**: Structured memory results
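+
+Continuing the sketch above, retrieval is a single search call (the result shape may vary between Mem0 versions):
+
+```python
+results = m.search("What dietary restrictions does John have?", user_id="john")
+for hit in results["results"]:
+    print(hit["memory"], hit.get("score"))
+```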
@@ -74,12 +74,12 @@ graph TB
```bash
# Core Services
NEO4J_URI=bolt://localhost:7687
-QDRANT_URL=http://localhost:6333
+SUPABASE_URL=http://localhost:8000
OLLAMA_BASE_URL=http://localhost:11434
# Provider Selection
LLM_PROVIDER=ollama # or openai
-VECTOR_STORE=qdrant
+VECTOR_STORE=supabase
GRAPH_STORE=neo4j
```
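+
+As a sketch, a service might consume these variables when building its Mem0 config (variable names are the ones above; the `connection_string` form follows the quickstart examples and is an assumption here):
+
+```python
+import os
+
+config = {
+    "vector_store": {
+        "provider": os.environ.get("VECTOR_STORE", "supabase"),
+        "config": {
+            # Assumption: connection string as used in the quickstart examples
+            "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+            "collection_name": "memories",
+        },
+    },
+}
+```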
@@ -87,7 +87,7 @@ GRAPH_STORE=neo4j
The system supports multiple providers through a unified interface:
- **LLM Providers**: OpenAI, Ollama, Anthropic, etc.
-- **Vector Stores**: Qdrant, Pinecone, Weaviate, etc.
+- **Vector Stores**: Supabase (pgvector), Qdrant, Pinecone, Weaviate, etc.
- **Graph Stores**: Neo4j, Amazon Neptune, etc.
## Security Architecture
@@ -110,7 +110,7 @@ The system supports multiple providers through a unified interface:
## Scalability Considerations
### Horizontal Scaling
-- Qdrant cluster support
+- Supabase scaling via Postgres read replicas and connection pooling
- Neo4j clustering capabilities
- Load balancing for API layer
diff --git a/docs/examples/mem0-with-ollama.mdx b/docs/examples/mem0-with-ollama.mdx
index a4257ae2..001a21f1 100644
--- a/docs/examples/mem0-with-ollama.mdx
+++ b/docs/examples/mem0-with-ollama.mdx
@@ -26,11 +26,10 @@ from mem0 import Memory
config = {
"vector_store": {
- "provider": "qdrant",
+ "provider": "supabase",
"config": {
- "collection_name": "test",
- "host": "localhost",
- "port": 6333,
+ "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+ "collection_name": "memories",
"embedding_model_dims": 768, # Change this according to your local model's dimensions
},
},
@@ -66,7 +65,7 @@ memories = m.get_all(user_id="john")
### Key Points
- **Configuration**: The setup involves configuring the vector store, language model, and embedding model to use local resources.
-- **Vector Store**: Qdrant is used as the vector store, running on localhost.
+- **Vector Store**: Supabase with pgvector is used as the vector store, running on localhost.
- **Language Model**: Ollama is used as the LLM provider, with the "llama3.1:latest" model.
- **Embedding Model**: Ollama is also used for embeddings, with the "nomic-embed-text:latest" model.
diff --git a/docs/integrations/llama-index.mdx b/docs/integrations/llama-index.mdx
index e3d37ccd..c797218e 100644
--- a/docs/integrations/llama-index.mdx
+++ b/docs/integrations/llama-index.mdx
@@ -69,11 +69,10 @@ Set your Mem0 OSS by providing configuration details:
```python
config = {
"vector_store": {
- "provider": "qdrant",
+ "provider": "supabase",
"config": {
- "collection_name": "test_9",
- "host": "localhost",
- "port": 6333,
+ "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+ "collection_name": "memories",
"embedding_model_dims": 1536, # Change this according to your local model's dimensions
},
},
diff --git a/docs/mint.json b/docs/mint.json
index 6ace4fab..5b3df279 100644
--- a/docs/mint.json
+++ b/docs/mint.json
@@ -72,7 +72,6 @@
"group": "Database Integration",
"pages": [
"database/neo4j",
- "database/qdrant",
"database/supabase"
]
},
diff --git a/docs/open-source/python-quickstart.mdx b/docs/open-source/python-quickstart.mdx
index 00f54703..edb8363a 100644
--- a/docs/open-source/python-quickstart.mdx
+++ b/docs/open-source/python-quickstart.mdx
@@ -45,17 +45,16 @@ m = AsyncMemory()
If you want to run Mem0 in production, initialize using the following method:
-Run Qdrant first:
+Run Supabase first:
```bash
-docker pull qdrant/qdrant
-docker run -p 6333:6333 -p 6334:6334 \
- -v $(pwd)/qdrant_storage:/qdrant/storage:z \
- qdrant/qdrant
+# Ensure you have Supabase running locally
+# See https://supabase.com/docs/guides/self-hosting/docker for setup
+docker compose up -d
```
-Then, instantiate memory with qdrant server:
+Then, instantiate memory with the Supabase server:
```python
import os
@@ -65,10 +64,10 @@ os.environ["OPENAI_API_KEY"] = "your-api-key"
config = {
"vector_store": {
- "provider": "qdrant",
+ "provider": "supabase",
"config": {
- "host": "localhost",
- "port": 6333,
+ "connection_string": "postgresql://supabase_admin:your_password@localhost:5435/postgres",
+ "collection_name": "memories",
}
},
}
diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index ff684740..5e0253aa 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -7,7 +7,7 @@ description: 'Get your Mem0 Memory System running in under 5 minutes'
- Required for Neo4j and Qdrant containers
+ Required for the Neo4j container (Supabase is already running)
For the mem0 core system and API
@@ -19,9 +19,13 @@ description: 'Get your Mem0 Memory System running in under 5 minutes'
### Step 1: Start Database Services
```bash
-docker compose up -d neo4j qdrant
+docker compose up -d neo4j
```
+
+Supabase is already running as part of your existing infrastructure on the localai network.
+
### Step 2: Test Your Installation
```bash