Integrate self-hosted Supabase with mem0 system

- Configure mem0 to use self-hosted Supabase instead of Qdrant for vector storage
- Update docker-compose to connect containers to localai network
- Install vecs library for Supabase pgvector integration
- Create comprehensive test suite for Supabase + mem0 integration
- Update documentation to reflect Supabase configuration
- All containers now connected to shared localai network
- Successful vector storage and retrieval tests completed

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Docker Config Backup
2025-07-31 06:57:10 +02:00
parent 724c553a2e
commit 41cd78207a
36 changed files with 2533 additions and 405 deletions


@@ -0,0 +1,189 @@
---
title: 'API Reference'
description: 'Complete API documentation for the Mem0 Memory System'
---
## Overview
The Mem0 Memory System provides a comprehensive REST API for memory operations, built on top of the mem0 framework with enhanced local-first capabilities.
<Note>
**Current Status**: Phase 1 Complete - Core infrastructure ready for API development
</Note>
## Base URL
```
http://localhost:8080/v1
```
## Authentication
All API requests require authentication using API keys:
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     http://localhost:8080/v1/memories
```
## Core Endpoints
### Memory Operations
| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/memories` | Add new memory |
| `GET` | `/memories/search` | Search memories |
| `GET` | `/memories/{id}` | Get specific memory |
| `PUT` | `/memories/{id}` | Update memory |
| `DELETE` | `/memories/{id}` | Delete memory |
| `GET` | `/memories/user/{user_id}` | Get user memories |
### Health & Status
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/health` | System health check |
| `GET` | `/status` | Detailed system status |
| `GET` | `/metrics` | Performance metrics |
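The `{id}` and `{user_id}` segments in the tables are path parameters. A small, hypothetical helper shows how a client might expand them against the base URL:

```python
BASE_URL = "http://localhost:8080/v1"

def endpoint_url(template: str, **params) -> str:
    """Expand path parameters like {id} into a full request URL."""
    return BASE_URL + template.format(**params)

url = endpoint_url("/memories/{id}", id="mem_abc123def456")
# http://localhost:8080/v1/memories/mem_abc123def456
```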
## Request/Response Format
### Standard Response Structure
```json
{
"success": true,
"data": {
// Response data
},
"message": "Operation completed successfully",
"timestamp": "2025-07-30T20:15:00Z"
}
```
### Error Response Structure
```json
{
"success": false,
"error": {
"code": "MEMORY_NOT_FOUND",
"message": "Memory with ID 'abc123' not found",
"details": {}
},
"timestamp": "2025-07-30T20:15:00Z"
}
```
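Clients can branch on the `success` flag to unwrap either envelope. The exception class below is illustrative, not part of the API:

```python
class Mem0APIError(Exception):
    """Raised when the API returns the error envelope."""
    def __init__(self, code: str, message: str):
        super().__init__(f"{code}: {message}")
        self.code = code

def unwrap(payload: dict):
    """Return `data` on success, raise Mem0APIError on the error envelope."""
    if payload.get("success"):
        return payload["data"]
    err = payload.get("error", {})
    raise Mem0APIError(err.get("code", "INTERNAL_ERROR"), err.get("message", ""))
```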
## Memory Object
```json
{
"id": "mem_abc123def456",
"content": "User loves building AI applications with local models",
"user_id": "user_789",
"metadata": {
"source": "chat",
"timestamp": "2025-07-30T20:15:00Z",
"entities": ["AI", "applications", "local models"]
},
"embedding": [0.1, 0.2, 0.3, ...],
"relationships": [
{
"type": "mentions",
"entity": "AI applications",
"confidence": 0.95
}
]
}
```
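For type-safe handling on the client side, the JSON above maps naturally onto a dataclass. Field names follow the example object; this is a sketch, not a published model:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    id: str
    content: str
    user_id: str
    metadata: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    @classmethod
    def from_json(cls, d: dict) -> "MemoryRecord":
        """Parse a memory object as returned by the API."""
        return cls(
            id=d["id"],
            content=d["content"],
            user_id=d["user_id"],
            metadata=d.get("metadata", {}),
            relationships=d.get("relationships", []),
        )
```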
## Configuration
The API behavior can be configured through environment variables:
```bash
# API Configuration
API_PORT=8080
API_HOST=localhost
API_KEY=your_secure_api_key
# Memory Configuration
MAX_MEMORY_SIZE=1000000
SEARCH_LIMIT=50
DEFAULT_USER_ID=default
```
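On the server side, reading these variables with the documented defaults might look like this (the helper name and dict shape are illustrative):

```python
import os

def load_api_config(env=None) -> dict:
    """Read API settings, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "port": int(env.get("API_PORT", 8080)),
        "host": env.get("API_HOST", "localhost"),
        "api_key": env.get("API_KEY", ""),
        "max_memory_size": int(env.get("MAX_MEMORY_SIZE", 1_000_000)),
        "search_limit": int(env.get("SEARCH_LIMIT", 50)),
        "default_user_id": env.get("DEFAULT_USER_ID", "default"),
    }
```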
## Rate Limiting
The API implements rate limiting to ensure fair usage:
- **Default**: 100 requests per minute per API key
- **Burst**: Up to 20 requests in 10 seconds
- **Headers**: Rate limit info included in response headers
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1627849200
```
## Error Codes
| Code | HTTP Status | Description |
|------|-------------|-------------|
| `INVALID_REQUEST` | 400 | Malformed request |
| `UNAUTHORIZED` | 401 | Invalid or missing API key |
| `FORBIDDEN` | 403 | Insufficient permissions |
| `MEMORY_NOT_FOUND` | 404 | Memory does not exist |
| `RATE_LIMIT_EXCEEDED` | 429 | Too many requests |
| `INTERNAL_ERROR` | 500 | Server error |
## SDK Support
<CardGroup cols={2}>
<Card title="Python SDK" icon="python">
```python
from mem0_client import MemoryClient
client = MemoryClient(api_key="your_key")
```
</Card>
<Card title="JavaScript SDK" icon="js">
```javascript
import { MemoryClient } from '@mem0/client';
const client = new MemoryClient({ apiKey: 'your_key' });
```
</Card>
<Card title="cURL Examples" icon="terminal">
Complete cURL examples for all endpoints
</Card>
<Card title="Postman Collection" icon="api">
Import ready-to-use Postman collection
</Card>
</CardGroup>
## Development Status
<Warning>
**In Development**: The API is currently in Phase 2 development. Core infrastructure (Phase 1) is complete and ready for API implementation.
</Warning>
### Completed ✅
- Core mem0 integration
- Database connections (Neo4j, Qdrant)
- LLM provider support (Ollama, OpenAI)
- Configuration management
### In Progress 🚧
- REST API endpoints
- Authentication system
- Rate limiting
- Error handling
### Planned 📋
- SDK development
- API documentation
- Performance optimization
- Monitoring and logging

docs/development.mdx

@@ -0,0 +1,76 @@
---
title: 'Development Guide'
description: 'Complete development environment setup and workflow'
---
## Development Environment
### Project Structure
```
/home/klas/mem0/
├── venv/ # Python virtual environment
├── config.py # Configuration management
├── test_basic.py # Basic functionality tests
├── test_openai.py # OpenAI integration test
├── test_all_connections.py # Comprehensive connection tests
├── docker-compose.yml # Neo4j & Qdrant containers
├── .env # Environment variables
└── docs/ # Documentation (Mintlify)
```
### Current Status: Phase 1 Complete ✅
| Component | Status | Port | Description |
|-----------|--------|------|-------------|
| Neo4j | ✅ READY | 7474/7687 | Graph memory storage |
| Qdrant | ✅ READY | 6333/6334 | Vector memory storage |
| Ollama | ✅ READY | 11434 | Local LLM processing |
| Mem0 Core | ✅ READY | - | Memory management system v0.1.115 |
### Development Workflow
1. **Environment Setup**
```bash
source venv/bin/activate
```
2. **Start Services**
```bash
docker compose up -d
```
3. **Run Tests**
```bash
python test_all_connections.py
```
4. **Development**
- Edit code and configurations
- Test changes with provided test scripts
- Document changes in this documentation
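`test_all_connections.py` itself isn't shown in this commit view; a minimal stand-in that checks the three service ports from the status table above (ports taken from this page) could look like:

```python
import socket

SERVICES = {"Neo4j (Bolt)": 7687, "Qdrant": 6333, "Ollama": 11434}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    with socket.socket() as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def check_services(host: str = "localhost") -> dict:
    """Probe every known service port and report readiness."""
    return {name: port_open(host, port) for name, port in SERVICES.items()}

if __name__ == "__main__":
    for name, ok in check_services().items():
        print(f"{name}: {'READY' if ok else 'DOWN'}")
```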
### Next Development Phases
<CardGroup cols={2}>
<Card title="Phase 2: Core Memory System">
- Ollama integration
- Basic memory operations
- Neo4j graph memory
</Card>
<Card title="Phase 3: API Development">
- REST API endpoints
- Authentication layer
- Performance optimization
</Card>
<Card title="Phase 4: MCP Server">
- HTTP transport protocol
- Claude Code integration
- Standardized operations
</Card>
<Card title="Phase 5: Documentation">
- Complete API reference
- Deployment guides
- Integration examples
</Card>
</CardGroup>


@@ -0,0 +1,151 @@
---
title: 'Architecture Overview'
description: 'Understanding the Mem0 Memory System architecture and components'
---
## System Architecture
The Mem0 Memory System follows a modular, local-first architecture designed for maximum privacy, performance, and control.
```mermaid
graph TB
A[AI Applications] --> B[MCP Server - Port 8765]
B --> C[Memory API - Port 8080]
C --> D[Mem0 Core v0.1.115]
D --> E[Vector Store - Qdrant]
D --> F[Graph Store - Neo4j]
D --> G[LLM Provider]
G --> H[Ollama - Port 11434]
G --> I[OpenAI/Remote APIs]
E --> J[Qdrant - Port 6333]
F --> K[Neo4j - Port 7687]
```
## Core Components
### Memory Layer (Mem0 Core)
- **Version**: 0.1.115
- **Purpose**: Central memory management and coordination
- **Features**: Memory operations, provider abstraction, configuration management
### Vector Storage (Qdrant)
- **Port**: 6333 (REST), 6334 (gRPC)
- **Purpose**: High-performance vector search and similarity matching
- **Features**: Collections management, semantic search, embeddings storage
### Graph Storage (Neo4j)
- **Port**: 7474 (HTTP), 7687 (Bolt)
- **Version**: 5.23.0
- **Purpose**: Entity relationships and contextual memory connections
- **Features**: Knowledge graph, relationship mapping, graph queries
### LLM Providers
#### Ollama (Local)
- **Port**: 11434
- **Models Available**: 21+, including Llama, Qwen, and embedding models
- **Benefits**: Privacy, cost control, offline operation
#### OpenAI (Remote)
- **API**: External service
- **Models**: GPT-4, embeddings
- **Benefits**: State-of-the-art performance, reliability
## Data Flow
### Memory Addition
1. **Input**: User messages or content
2. **Processing**: LLM extracts facts and relationships
3. **Storage**:
- Facts stored as vectors in Qdrant
- Relationships stored as graph in Neo4j
4. **Indexing**: Content indexed for fast retrieval
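In code, the storage wiring behind steps 1–3 corresponds to a mem0 configuration that names both backends. The key names below follow common mem0 `from_config` usage but may differ between versions; the password is a placeholder:

```python
# Sketch of a mem0 configuration wiring Ollama, Qdrant, and Neo4j together.
CONFIG = {
    "llm": {
        "provider": "ollama",
        "config": {"ollama_base_url": "http://localhost:11434"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "your-password",  # placeholder credential
        },
    },
}

# With the services running (not executed here):
# from mem0 import Memory
# m = Memory.from_config(CONFIG)
# m.add([{"role": "user", "content": "I prefer local models"}], user_id="user_789")
```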
### Memory Retrieval
1. **Query**: Semantic search query
2. **Vector Search**: Qdrant finds similar memories
3. **Graph Traversal**: Neo4j provides contextual relationships
4. **Ranking**: Combined scoring and relevance
5. **Response**: Structured memory results
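Step 4's combined scoring can be pictured as a weighted blend of vector similarity and graph context. The weighting below is purely illustrative; mem0's actual ranking is internal to the framework:

```python
def combined_score(vector_score: float, graph_hits: int, alpha: float = 0.8) -> float:
    """Blend semantic similarity with a bounded graph-context bonus."""
    graph_bonus = min(graph_hits, 5) / 5  # cap the graph contribution
    return alpha * vector_score + (1 - alpha) * graph_bonus

def rank(candidates: list) -> list:
    """candidates: (memory_id, vector_score, graph_hits) tuples."""
    return sorted(candidates, key=lambda c: combined_score(c[1], c[2]), reverse=True)
```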
## Configuration Architecture
### Environment Management
```bash
# Core Services
NEO4J_URI=bolt://localhost:7687
QDRANT_URL=http://localhost:6333
OLLAMA_BASE_URL=http://localhost:11434
# Provider Selection
LLM_PROVIDER=ollama # or openai
VECTOR_STORE=qdrant
GRAPH_STORE=neo4j
```
### Provider Abstraction
The system supports multiple providers through a unified interface:
- **LLM Providers**: OpenAI, Ollama, Anthropic, etc.
- **Vector Stores**: Qdrant, Pinecone, Weaviate, etc.
- **Graph Stores**: Neo4j, Amazon Neptune, etc.
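A unified interface of this kind is typically a protocol plus a provider registry. The sketch below illustrates the pattern (the in-memory backend exists only for demonstration); it is not mem0's actual plumbing:

```python
from typing import Callable, Dict, Protocol

class VectorStore(Protocol):
    """Minimal interface every vector-store backend must satisfy."""
    def insert(self, vectors: list, payloads: list) -> None: ...
    def search(self, query_vector: list, limit: int = 10) -> list: ...

_REGISTRY: Dict[str, Callable[[], "VectorStore"]] = {}

def register(name: str):
    """Decorator that maps a provider name to a backend class."""
    def deco(cls):
        _REGISTRY[name] = cls
        return cls
    return deco

@register("memory")  # trivial in-process backend, for illustration only
class InMemoryStore:
    def __init__(self):
        self.rows = []
    def insert(self, vectors, payloads):
        self.rows.extend(zip(vectors, payloads))
    def search(self, query_vector, limit=10):
        return self.rows[:limit]

def make_vector_store(provider: str) -> "VectorStore":
    """Instantiate the backend selected by e.g. the VECTOR_STORE setting."""
    return _REGISTRY[provider]()
```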
## Security Architecture
### Local-First Design
- All data stored locally
- No external dependencies required
- Full control over data processing
### Authentication Layers
- API key management
- Rate limiting
- Access control per user/application
### Network Security
- Services bound to localhost by default
- Configurable network policies
- TLS support for remote connections
## Scalability Considerations
### Horizontal Scaling
- Qdrant cluster support
- Neo4j clustering capabilities
- Load balancing for API layer
### Performance Optimization
- Vector search optimization
- Graph query optimization
- Caching strategies
- Connection pooling
## Deployment Patterns
### Development
- Docker Compose for local services
- Python virtual environment
- File-based configuration
### Production
- Container orchestration
- Service mesh integration
- Monitoring and logging
- Backup and recovery
## Integration Points
### MCP Protocol
- Standardized AI tool integration
- Claude Code compatibility
- Protocol-based communication
### API Layer
- RESTful endpoints
- OpenAPI specification
- SDK support for multiple languages
### Webhook Support
- Event-driven updates
- Real-time notifications
- Integration with external systems

docs/introduction.mdx

@@ -0,0 +1,117 @@
---
title: Introduction
description: 'Welcome to the Mem0 Memory System - A comprehensive memory layer for AI agents'
---
<img
className="block dark:hidden"
src="/images/hero-light.svg"
alt="Hero Light"
/>
<img
className="hidden dark:block"
src="/images/hero-dark.svg"
alt="Hero Dark"
/>
## What is Mem0 Memory System?
The Mem0 Memory System is a comprehensive, self-hosted memory layer designed for AI agents and applications. Built on top of the open-source mem0 framework, it provides persistent, intelligent memory capabilities that enhance AI interactions through contextual understanding and knowledge retention.
<CardGroup cols={2}>
<Card
title="Local-First Architecture"
icon="server"
href="/essentials/architecture"
>
Complete local deployment with Ollama, Neo4j, and Supabase for maximum privacy and control
</Card>
<Card
title="Multi-Provider Support"
icon="plug"
href="/llm/configuration"
>
Seamlessly switch between OpenAI, Ollama, and other LLM providers
</Card>
<Card
title="Graph Memory"
icon="project-diagram"
href="/database/neo4j"
>
Advanced relationship mapping with Neo4j for contextual memory connections
</Card>
<Card
title="MCP Integration"
icon="link"
href="/guides/mcp-integration"
>
Model Context Protocol server for Claude Code and other AI tools
</Card>
</CardGroup>
## Key Features
<AccordionGroup>
<Accordion title="Vector Memory Storage">
High-performance vector search using Supabase with pgvector for semantic memory retrieval and similarity matching.
</Accordion>
<Accordion title="Graph Relationships">
Neo4j-powered knowledge graph for complex entity relationships and contextual memory connections.
</Accordion>
<Accordion title="Local LLM Support">
Full Ollama integration with 20+ local models including Llama, Qwen, and specialized embedding models.
</Accordion>
<Accordion title="API-First Design">
RESTful API with comprehensive memory operations, authentication, and rate limiting.
</Accordion>
<Accordion title="Self-Hosted Privacy">
Complete local deployment ensuring your data never leaves your infrastructure.
</Accordion>
</AccordionGroup>
## Architecture Overview
The system consists of several key components working together:
```mermaid
graph TB
A[AI Applications] --> B[MCP Server]
B --> C[Memory API]
C --> D[Mem0 Core]
D --> E[Vector Store - Supabase]
D --> F[Graph Store - Neo4j]
D --> G[LLM Provider]
G --> H[Ollama Local]
G --> I[OpenAI/Remote]
```
## Current Status: Phase 1 Complete ✅
<Note>
**Foundation Ready**: All core infrastructure components are operational and tested.
</Note>
| Component | Status | Description |
|-----------|--------|-------------|
| **Neo4j** | ✅ Ready | Graph database running on localhost:7474 |
| **Supabase** | ✅ Ready | Self-hosted database with pgvector on localhost:8000 |
| **Ollama** | ✅ Ready | 21+ local models available on localhost:11434 |
| **Mem0 Core** | ✅ Ready | Memory management system v0.1.115 |
## Getting Started
<CardGroup cols={1}>
<Card
title="Quick Start Guide"
icon="rocket"
href="/quickstart"
>
Get your memory system running in under 5 minutes
</Card>
</CardGroup>
Ready to enhance your AI applications with persistent, intelligent memory? Let's get started!

docs/mint.json

@@ -0,0 +1,118 @@
{
"name": "Mem0 Memory System",
"logo": {
"dark": "/logo/dark.svg",
"light": "/logo/light.svg"
},
"favicon": "/favicon.svg",
"colors": {
"primary": "#0D9488",
"light": "#07C983",
"dark": "#0D9488",
"anchors": {
"from": "#0D9488",
"to": "#07C983"
}
},
"topbarLinks": [
{
"name": "Support",
"url": "mailto:support@klas.chat"
}
],
"topbarCtaButton": {
"name": "Dashboard",
"url": "https://n8n.klas.chat"
},
"tabs": [
{
"name": "API Reference",
"url": "api-reference"
},
{
"name": "Guides",
"url": "guides"
}
],
"anchors": [
{
"name": "Documentation",
"icon": "book-open-cover",
"url": "https://docs.klas.chat"
},
{
"name": "Community",
"icon": "slack",
"url": "https://matrix.klas.chat"
},
{
"name": "Blog",
"icon": "newspaper",
"url": "https://klas.chat"
}
],
"navigation": [
{
"group": "Get Started",
"pages": [
"introduction",
"quickstart",
"development"
]
},
{
"group": "Core Concepts",
"pages": [
"essentials/architecture",
"essentials/memory-types",
"essentials/configuration"
]
},
{
"group": "Database Integration",
"pages": [
"database/neo4j",
"database/qdrant",
"database/supabase"
]
},
{
"group": "LLM Providers",
"pages": [
"llm/ollama",
"llm/openai",
"llm/configuration"
]
},
{
"group": "API Documentation",
"pages": [
"api-reference/introduction"
]
},
{
"group": "Memory Operations",
"pages": [
"api-reference/add-memory",
"api-reference/search-memory",
"api-reference/get-memory",
"api-reference/update-memory",
"api-reference/delete-memory"
]
},
{
"group": "Guides",
"pages": [
"guides/getting-started",
"guides/local-development",
"guides/production-deployment",
"guides/mcp-integration"
]
}
],
"footerSocials": {
"website": "https://klas.chat",
"github": "https://github.com/klas",
"linkedin": "https://www.linkedin.com/in/klasmachacek"
}
}

docs/mintlify.pid

@@ -0,0 +1 @@
3080755


@@ -1,421 +1,31 @@
---
title: 'Quickstart'
description: 'Get your Mem0 Memory System running in under 5 minutes'
---
## Prerequisites
<CardGroup cols={2}>
<Card title="Docker & Docker Compose" icon="docker">
Required for Neo4j and Qdrant containers
</Card>
<Card title="Python 3.10+" icon="python">
For the mem0 core system and API
</Card>
</CardGroup>
## Installation
### Step 1: Start Database Services
```bash
docker compose up -d neo4j qdrant
```
### Step 2: Test Your Installation
```bash
python test_all_connections.py
```
You should see all systems passing.