Platform feature docs revamp (#3007)

This commit is contained in:
Antaripa Saha
2025-06-25 13:27:08 +05:30
committed by GitHub
parent 8139b5887f
commit aaf879322c
16 changed files with 937 additions and 208 deletions

View File

@@ -0,0 +1,152 @@
---
title: Add Memory
description: Add memory into the Mem0 platform by storing user-assistant interactions and facts for later retrieval.
icon: "plus"
iconType: "solid"
---
## Overview
The `add` operation is how you store memory into Mem0. Whether you're working with a chatbot, a voice assistant, or a multi-agent system, this is the entry point to create long-term memory.
Memories typically come from a **user-assistant interaction**, and Mem0 handles the extraction, transformation, and storage for you.
Mem0 offers two implementation flows:
- **Mem0 Platform** (Managed, scalable, with dashboard + API)
- **Mem0 Open Source** (Lightweight, fully local, flexible SDKs)
Each supports the same core memory operations, but with slightly different setup. Below, we walk through examples for both.
## Architecture
<Frame caption="Architecture diagram illustrating the process of adding memories.">
<img src="../../images/add_architecture.png" />
</Frame>
When you call `add`, Mem0 performs the following steps under the hood:
1. **Information Extraction**
The input messages are passed through an LLM that extracts key facts, decisions, preferences, or events worth remembering.
2. **Conflict Resolution**
Mem0 compares the new memory against existing ones to detect duplication or contradiction and handles updates accordingly.
3. **Memory Storage**
The result is stored in a vector database (for semantic search) and optionally in a graph structure (for relationship mapping).
You don't need to handle any of this manually; Mem0 takes care of it with a single API call or SDK method.
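As a mental model, the three steps can be sketched as a toy pipeline (purely illustrative; in Mem0 the extraction and conflict resolution are LLM-driven and the store is a vector database):

```python
# Toy sketch of the add pipeline: extract -> resolve conflicts -> store.
# Purely illustrative; Mem0 performs these steps internally.

def extract_facts(messages):
    # Stand-in for LLM extraction: keep user statements as "facts".
    return [m["content"] for m in messages if m["role"] == "user"]

def resolve_and_store(store, facts):
    # Stand-in for conflict resolution: a new fact about the same topic
    # (here, crudely, the first word) replaces the old one instead of duplicating it.
    for fact in facts:
        topic = fact.split()[0].lower()
        store[topic] = fact  # update-or-insert
    return store

store = {}
facts = extract_facts([
    {"role": "user", "content": "I'm planning a trip to Tokyo next month."},
    {"role": "assistant", "content": "Noted!"},
])
resolve_and_store(store, facts)
print(store)
```

Calling the pipeline again with a newer "I'm staying home next month." would overwrite the earlier travel fact rather than store both, which is the behavior conflict resolution aims for.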
---
## Example: Mem0 Platform
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
messages = [
{"role": "user", "content": "I'm planning a trip to Tokyo next month."},
{"role": "assistant", "content": "Great! I'll remember that for future suggestions."}
]
client.add(
messages=messages,
user_id="alice",
version="v2"
)
```
```javascript JavaScript
import { MemoryClient } from "mem0ai";
const client = new MemoryClient({apiKey: "your-api-key"});
const messages = [
{ role: "user", content: "I'm planning a trip to Tokyo next month." },
{ role: "assistant", content: "Great! I'll remember that for future suggestions." }
];
await client.add({
messages,
user_id: "alice",
version: "v2"
});
```
</CodeGroup>
---
## Example: Mem0 Open Source
<CodeGroup>
```python Python
import os
from mem0 import Memory
os.environ["OPENAI_API_KEY"] = "your-api-key"
m = Memory()
messages = [
{"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
{"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
{"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
{"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
# Store inferred memories (default behavior)
result = m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"})
# Optionally store raw messages without inference
result = m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"}, infer=False)
```
```javascript JavaScript
import { Memory } from 'mem0ai/oss';
const memory = new Memory();
const messages = [
{
role: "user",
content: "I like to drink coffee in the morning and go for a walk"
}
];
const result = await memory.add(messages, {
userId: "alice",
metadata: { category: "preferences" }
});
```
</CodeGroup>
---
## When Should You Add Memory?
Add memory whenever your agent learns something useful:
- A new user preference is shared
- A decision or suggestion is made
- A goal or task is completed
- A new entity is introduced
- A user gives feedback or clarification
Storing this context allows the agent to reason better in future interactions.
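For example, a new preference can be captured and stored as soon as it is shared. The helper below is ours, purely for illustration; the `client.add` call is the documented entry point:

```python
def build_preference_messages(user_text, assistant_text):
    # Package one user/assistant exchange in the format `add` expects.
    return [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]

messages = build_preference_messages(
    "I prefer window seats on long flights.",
    "Got it, I'll remember that for future bookings.",
)

# With a configured MemoryClient:
# client.add(messages, user_id="alice")
print(len(messages))
```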
### More Details
For a full list of supported fields, required formats, and advanced options, see the
[Add Memory API Reference](/api-reference/memory/add-memories).
---
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>

View File

@@ -0,0 +1,141 @@
---
title: Delete Memory
description: Remove memories from Mem0 either individually, in bulk, or via filters.
icon: "trash"
iconType: "solid"
---
## Overview
Memories can become outdated, irrelevant, or need to be removed for privacy or compliance reasons. Mem0 offers flexible ways to delete memory:
1. **Delete a Single Memory** using a specific memory ID
2. **Batch Delete**: delete multiple known memory IDs (up to 1000)
3. **Filtered Delete**: delete memories matching a filter (e.g., `user_id`, `metadata`, `run_id`)
This page walks through code examples for each method.
## Use Cases
- Forget a user's past preferences by request
- Remove outdated or incorrect memory entries
- Clean up memory after session expiration
- Comply with data deletion requests (e.g., GDPR)
---
## 1. Delete a Single Memory by ID
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
memory_id = "your_memory_id"
client.delete(memory_id=memory_id)
```
```javascript JavaScript
import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: "your-api-key" });
client.delete("your_memory_id")
.then(result => console.log(result))
.catch(error => console.error(error));
```
</CodeGroup>
---
## 2. Batch Delete Multiple Memories
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
delete_memories = [
{"memory_id": "id1"},
{"memory_id": "id2"}
]
response = client.batch_delete(delete_memories)
print(response)
```
```javascript JavaScript
import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: "your-api-key" });
const deleteMemories = [
{ memory_id: "id1" },
{ memory_id: "id2" }
];
client.batchDelete(deleteMemories)
.then(response => console.log('Batch delete response:', response))
.catch(error => console.error(error));
```
</CodeGroup>
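Since `batch_delete` accepts up to 1000 memory IDs per call, larger cleanups can be chunked. A sketch (the chunking helper and the generated ID list are ours):

```python
def chunk(items, size=1000):
    # Yield successive slices of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]

memory_ids = [f"id{i}" for i in range(2500)]
payloads = [[{"memory_id": mid} for mid in batch] for batch in chunk(memory_ids)]

# With a configured MemoryClient:
# for payload in payloads:
#     client.batch_delete(payload)
print(len(payloads))  # 2500 IDs -> batches of 1000, 1000, 500
```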
---
## 3. Delete Memories by Filter (e.g., user_id)
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
# Delete all memories for a specific user
client.delete_all(user_id="alice")
```
```javascript JavaScript
import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: "your-api-key" });
client.deleteAll({ user_id: "alice" })
.then(result => console.log(result))
.catch(error => console.error(error));
```
</CodeGroup>
You can also filter by other parameters such as:
- `agent_id`
- `run_id`
- `metadata` (as JSON string)
---
## Key Differences
| Method | Use When | IDs Needed | Filters |
|----------------------|-------------------------------------------|------------|----------|
| `delete(memory_id)` | You know exactly which memory to remove | ✔ | ✘ |
| `batch_delete([...])`| You have a known list of memory IDs | ✔ | ✘ |
| `delete_all(...)` | You want to delete by user/agent/run/etc | ✘ | ✔ |
### More Details
For request/response schema and additional filtering options, see:
- [Delete Memory API Reference](/api-reference/memory/delete-memory)
- [Batch Delete API Reference](/api-reference/memory/batch-delete)
- [Delete Memories by Filter Reference](/api-reference/memory/delete-memories)
You've now seen how to add, search, update, and delete memories in Mem0.
---
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>

View File

@@ -0,0 +1,124 @@
---
title: Search Memory
description: Retrieve relevant memories from Mem0 using powerful semantic and filtered search capabilities.
icon: "magnifying-glass"
iconType: "solid"
---
## Overview
The `search` operation allows you to retrieve relevant memories based on a natural language query and optional filters like user ID, agent ID, categories, and more. This is the foundation of giving your agents memory-aware behavior.
Mem0 supports:
- Semantic similarity search
- Metadata filtering (with advanced logic)
- Reranking and thresholds
- Cross-agent, multi-session context resolution
This applies to both:
- **Mem0 Platform** (hosted API with full-scale features)
- **Mem0 Open Source** (local-first with LLM inference and local vector DB)
## Architecture
<Frame caption="Architecture diagram illustrating the memory search process.">
<img src="../../images/search_architecture.png" />
</Frame>
The search flow follows these steps:
1. **Query Processing**
An LLM refines and optimizes your natural language query.
2. **Vector Search**
Semantic embeddings are used to find the most relevant memories using cosine similarity.
3. **Filtering & Ranking**
Logical and comparison-based filters are applied. Memories are scored, filtered, and optionally reranked.
4. **Results Delivery**
Relevant memories are returned with associated metadata and timestamps.
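The vector-search step can be illustrated with a toy cosine-similarity ranking (illustrative only; Mem0 uses real embedding models and a vector database, and the 3-d vectors below are made up):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for stored memories (values are invented).
memories = {
    "Planning a trip to Japan next month": [0.9, 0.1, 0.2],
    "Traveled to France last year": [0.5, 0.4, 0.1],
    "Likes espresso in the morning": [0.1, 0.9, 0.3],
}
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of "What are my travel plans?"

ranked = sorted(memories, key=lambda m: cosine(memories[m], query_vec), reverse=True)
print(ranked[0])  # the most semantically similar memory comes first
```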
---
## Example: Mem0 Platform
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
query = "What do you know about me?"
filters = {
"OR": [
{"user_id": "alice"},
{"agent_id": {"in": ["travel-assistant", "customer-support"]}}
]
}
results = client.search(query, version="v2", filters=filters)
```
```javascript JavaScript
import { MemoryClient } from "mem0ai";
const client = new MemoryClient({apiKey: "your-api-key"});
const query = "I'm craving some pizza. Any recommendations?";
const filters = {
AND: [
{ user_id: "alice" }
]
};
const results = await client.search(query, {
version: "v2",
filters
});
```
</CodeGroup>
---
## Example: Mem0 Open Source
<CodeGroup>
```python Python
from mem0 import Memory
m = Memory()
related_memories = m.search("Should I drink coffee or tea?", user_id="alice")
```
```javascript JavaScript
import { Memory } from 'mem0ai/oss';
const memory = new Memory();
const relatedMemories = await memory.search("Should I drink coffee or tea?", { userId: "alice" });
```
</CodeGroup>
---
## Tips for Better Search
- Use descriptive natural queries (Mem0 can interpret intent)
- Apply filters for scoped, faster lookup
- Use `version: "v2"` for enhanced results
- Consider wildcard filters (e.g., `run_id: "*"`) for broader matches
- Tune with `top_k`, `threshold`, or `rerank` if needed
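Several of these knobs can be combined in one call. A sketch (the parameter values are illustrative examples, not recommendations):

```python
# Illustrative tuning knobs for client.search; values are examples only.
search_params = {
    "user_id": "alice",
    "top_k": 5,        # return at most 5 memories
    "threshold": 0.3,  # drop matches below this similarity score
    "rerank": True,    # reorder results with the reranker
}

# With a configured MemoryClient:
# results = client.search("What do you know about me?", version="v2", **search_params)
print(search_params["top_k"])
```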
### More Details
For the full list of filter logic, comparison operators, and optional search parameters, see the
[Search Memory API Reference](/api-reference/memory/v2-search-memories).
---
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>

View File

@@ -0,0 +1,117 @@
---
title: Update Memory
description: Modify an existing memory by updating its content or metadata.
icon: "pencil"
iconType: "solid"
---
## Overview
User preferences, interests, and behaviors often evolve over time. The `update` operation lets you revise a stored memory, whether it's updating facts and memories, rephrasing a message, or enriching metadata.
Mem0 supports both:
- **Single Memory Update** for one specific memory using its ID
- **Batch Update** for updating many memories at once (up to 1000)
This guide includes usage for both single and batch updates of memories through the **Mem0 Platform**.
## Use Cases
- Refine a vague or incorrect memory after a correction
- Add or edit memory with new metadata (e.g., categories, tags)
- Evolve factual knowledge as the user's profile changes
- A user profile evolves: “I love spicy food” → later says “Actually, I can't handle spicy food.”
Updating memory ensures your agents remain accurate, adaptive, and personalized.
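The spicy-food example above can play out as a search-then-update flow. This sketch comments out the live calls (the query, memory text, and result shape are hypothetical) and keeps a tiny stand-in for the revision step:

```python
# With a configured MemoryClient (calls commented out; result shape is hypothetical):
# hits = client.search("spicy food preference", user_id="alice")
# client.update(memory_id=hits[0]["id"], text="Cannot handle spicy food")

def revise(old_text, correction):
    # Stand-in for the update step: a non-empty correction replaces the old fact.
    return correction if correction else old_text

print(revise("Loves spicy food", "Cannot handle spicy food"))
```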
---
## Update Memory
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
memory_id = "your_memory_id"
client.update(
memory_id=memory_id,
text="Updated memory content about the user",
metadata={"category": "profile-update"}
)
```
```javascript JavaScript
import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: "your-api-key" });
const memory_id = "your_memory_id";
client.update(memory_id, {
text: "Updated memory content about the user",
metadata: { category: "profile-update" }
})
.then(result => console.log(result))
.catch(error => console.error(error));
```
</CodeGroup>
---
## Batch Update
Update up to 1000 memories in one call.
<CodeGroup>
```python Python
from mem0 import MemoryClient
client = MemoryClient(api_key="your-api-key")
update_memories = [
{"memory_id": "id1", "text": "Watches football"},
{"memory_id": "id2", "text": "Likes to travel"}
]
response = client.batch_update(update_memories)
print(response)
```
```javascript JavaScript
import MemoryClient from 'mem0ai';
const client = new MemoryClient({ apiKey: "your-api-key" });
const updateMemories = [
{ memoryId: "id1", text: "Watches football" },
{ memoryId: "id2", text: "Likes to travel" }
];
client.batchUpdate(updateMemories)
.then(response => console.log('Batch update response:', response))
.catch(error => console.error(error));
```
</CodeGroup>
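A handy pattern (the helper below is ours) is building the batch payload from a dict of corrections:

```python
def to_batch_payload(corrections):
    # corrections: {memory_id: new_text} -> list of update records for batch_update.
    return [{"memory_id": mid, "text": text} for mid, text in corrections.items()]

payload = to_batch_payload({
    "id1": "Watches football",
    "id2": "Likes to travel",
})

# With a configured MemoryClient:
# client.batch_update(payload)
print(len(payload))
```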
---
## Tips
- You can update both `text` and `metadata` in the same call.
- Use `batchUpdate` when you're applying similar corrections at scale.
- If memory is marked `immutable`, it must first be deleted and re-added.
- Combine this with feedback mechanisms (e.g., user thumbs-up/down) to self-improve memory.
### More Details
Refer to the full [Update Memory API Reference](/api-reference/memory/update-memory) and [Batch Update Reference](/api-reference/memory/batch-update) for schema and advanced fields.
---
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>

View File

@@ -19,10 +19,10 @@
"tab": "Documentation",
"groups": [
{
"group": "Get Started",
"group": "Getting Started",
"icon": "rocket",
"pages": [
"overview",
"what-is-mem0",
"quickstart",
"faqs"
]
@@ -32,7 +32,16 @@
"icon": "brain",
"pages": [
"core-concepts/memory-types",
"core-concepts/memory-operations"
{
"group": "Memory Operations",
"icon": "gear",
"pages": [
"core-concepts/memory-operations/add",
"core-concepts/memory-operations/search",
"core-concepts/memory-operations/update",
"core-concepts/memory-operations/delete"
]
}
]
},
{
@@ -46,21 +55,19 @@
"icon": "star",
"pages": [
"platform/features/platform-overview",
"platform/features/contextual-add",
"platform/features/async-client",
"platform/features/advanced-retrieval",
"platform/features/criteria-retrieval",
"platform/features/contextual-add",
"platform/features/multimodal-support",
"platform/features/timestamp",
"platform/features/selective-memory",
"platform/features/custom-categories",
"platform/features/custom-instructions",
"platform/features/direct-import",
"platform/features/async-client",
"platform/features/memory-export",
"platform/features/timestamp",
"platform/features/expiration-date",
"platform/features/webhooks",
"platform/features/graph-memory",
"platform/features/feedback-mechanism",
"platform/features/expiration-date"
"platform/features/feedback-mechanism"
]
}
]
@@ -69,12 +76,12 @@
"group": "Open Source",
"icon": "code-branch",
"pages": [
"open-source/quickstart",
"open-source/overview",
"open-source/python-quickstart",
"open-source/node-quickstart",
{
"group": "Features",
"icon": "wrench",
"icon": "star",
"pages": [
"open-source/features/async-memory",
"open-source/features/openai_compatibility",
@@ -351,31 +358,6 @@
]
}
]
},
{
"anchor": "Your Dashboard",
"href": "https://app.mem0.ai",
"icon": "chart-simple"
},
{
"anchor": "Demo",
"href": "https://mem0.dev/demo",
"icon": "play"
},
{
"anchor": "Discord",
"href": "https://mem0.dev/DiD",
"icon": "discord"
},
{
"anchor": "GitHub",
"href": "https://github.com/mem0ai/mem0",
"icon": "github"
},
{
"anchor": "Support",
"href": "mailto:founders@mem0.ai",
"icon": "envelope"
}
]
},


View File

@@ -1,7 +1,7 @@
---
title: Overview
description: 'Enhance your memory system with graph-based knowledge representation and retrieval'
icon: "database"
icon: "info"
iconType: "solid"
---

View File

@@ -1,5 +1,5 @@
---
title: Node SDK
title: Node SDK Quickstart
description: 'Get started with Mem0 quickly!'
icon: "node"
iconType: "solid"

View File

@@ -1,6 +1,6 @@
---
title: Overview
icon: "info"
icon: "eye"
iconType: "solid"
---

View File

@@ -1,5 +1,5 @@
---
title: Python SDK
title: Python SDK Quickstart
description: 'Get started with Mem0 quickly!'
icon: "python"
iconType: "solid"

View File

@@ -6,100 +6,171 @@ iconType: "solid"
<Snippet file="paper-release.mdx" />
Mem0's **Advanced Retrieval** provides additional control over how memories are selected and ranked during search. While the default search uses embedding-based semantic similarity, Advanced Retrieval introduces specialized options to improve recall, ranking accuracy, or filtering for specific use cases.
You can enable any of the following modes independently or together:
- Keyword Search
- Reranking
- Filtering
Each enhancement can be toggled independently via the `search()` API call. These flags are off by default, and they are useful when building agents that require fine-grained retrieval control.
---
## Keyword Search
Keyword search expands the result set by including memories that contain lexically similar terms and important keywords from the query, even if they're not semantically similar.
### When to use
- You are searching for specific entities, names, or technical terms
- You need comprehensive coverage of a topic
- You want broader recall at the cost of slight noise
### API Usage
```python
results = client.search(
    query="What are my food preferences?",
    keyword_search=True,
    user_id="alex"
)
```
### Example
**Without keyword_search:**
- "Vegetarian. Allergic to nuts."
- "Prefers spicy food and enjoys Thai cuisine"
**With keyword_search=True:**
- "Vegetarian. Allergic to nuts."
- "Prefers spicy food and enjoys Thai cuisine"
- "Mentioned disliking seafood during restaurant discussion"
### Trade-offs
- Increases recall
- May slightly reduce precision
- Adds ~10ms latency
---
## Reranking
Reranking reorders the retrieved results using a deep semantic relevance model that improves the position of the most relevant matches.
### When to use
- You rely on top-1 or top-N precision
- Result order is critical for your application
- You want consistent result quality across sessions
### API Usage
```python
results = client.search(
    query="What are my travel plans?",
    rerank=True,
    user_id="alex"
)
```
### Example
**Without rerank:**
1. "Traveled to France last year"
2. "Planning a trip to Japan next month"
3. "Interested in visiting Tokyo restaurants"
**With rerank=True:**
1. "Planning a trip to Japan next month"
2. "Interested in visiting Tokyo restaurants"
3. "Traveled to France last year"
### Trade-offs
- Significantly improves result ordering accuracy
- Ensures the most relevant memories appear first
- Adds ~150-200ms latency
- Higher computational cost
---
## Filtering
Filtering narrows down search results by applying specific criteria to the set of retrieved memories.
### When to use
- You require highly specific results
- You are working with large amounts of data where noise is problematic
- You prioritize quality over quantity
### API Usage
```python
results = client.search(
    query="What are my dietary restrictions?",
    filter_memories=True,
    user_id="alex"
)
```
### Example
**Without filtering:**
- "Vegetarian. Allergic to nuts."
- "I enjoy cooking Italian food on weekends"
- "Mentioned disliking seafood during restaurant discussion"
- "Prefers to eat dinner at 7pm"
**With filter_memories=True:**
- "Vegetarian. Allergic to nuts."
- "Mentioned disliking seafood during restaurant discussion"
### Trade-offs
- Maximizes precision (highly relevant results only)
- May reduce recall (filters out some relevant memories)
- Adds ~200-300ms latency
- Best for focused, specific queries
---
## Combining Modes
You can combine all three retrieval modes as needed:
```python
results = client.search(
    query="What are my travel plans?",
    keyword_search=True,
    rerank=True,
    filter_memories=True,
    user_id="alex"
)
```
This configuration broadens the candidate pool with keywords, improves ordering via rerank, and finally cuts noise with filtering.
<Note> Combining all modes may add up to ~450ms latency per query. </Note>
---
## Performance Benchmarks
| **Mode** | **Approximate Latency** |
|------------------|-------------------------|
| `keyword_search` | &lt;10ms |
| `rerank` | 150-200ms |
| `filter_memories`| 200-300ms |
---
## Best Practices & Limitations
- Use `keyword_search` for broader recall when query context is limited
- Use `rerank` to prioritize the top-most relevant results
- Use `filter_memories` in production-facing or safety-critical agents
- Combine filtering and reranking for maximum accuracy
- Filters may eliminate all results, so always handle the empty set gracefully
- Filtering uses LLM evaluation and may be rate-limited depending on your plan
<Note> You can enable or disable these search modes by passing the respective parameters to the `search` method. There is no required sequence for these modes, and any combination can be used based on your needs. </Note>
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />

View File

@@ -6,22 +6,57 @@ iconType: "solid"
<Snippet file="paper-release.mdx" />
Mem0's **Criteria Retrieval** feature allows you to retrieve memories based on your defined criteria. It goes beyond generic semantic relevance and ranks memories based on what matters to your application: emotional tone, intent, behavioral signals, or other custom traits.
Instead of just asking "how similar is a memory to this query?", you can define what *relevance* really means for your project. For example:
- Prioritize joyful memories when building a wellness assistant
- Downrank negative memories in a productivity-focused agent
- Highlight curiosity in a tutoring agent
You define **criteria**: custom attributes like "joy", "negativity", "confidence", or "urgency", and assign weights to control how they influence scoring. When you `search`, Mem0 uses these to re-rank memories that are semantically relevant, favoring those that better match your intent.
This gives you nuanced, intent-aware memory search that adapts to your use case.
---
## When to Use Criteria Retrieval
Use Criteria Retrieval if:
- You're building an agent that should react to **emotions** or **behavioral signals**
- You want to guide memory selection based on **context**, not just content
- You have domain-specific signals like "risk", "positivity", or "confidence" that shape recall
---
## Setting Up Criteria Retrieval
Let's walk through how to configure and use Criteria Retrieval step by step.
### Initialize the Client
Before defining any criteria, initialize the `MemoryClient` with your credentials and project ID:
```python
from mem0 import MemoryClient
client = MemoryClient(
    api_key="your_mem0_api_key",
    org_id="your_organization_id",
    project_id="your_project_id"
)
```
### Define Your Criteria
Each criterion includes:
- A `name` (used in scoring)
- A `description` (interpreted by the LLM)
- A `weight` (how much it influences the final score)
```python
retrieval_criteria = [
    {
        "name": "joy",
@@ -39,19 +74,26 @@ retrieval_criteria = [
        "weight": 1
    }
]
```
### Apply Criteria to Your Project
Once defined, register the criteria to your project:
```python
client.update_project(retrieval_criteria=retrieval_criteria)
```
Criteria apply project-wide. Once set, they affect all searches using `version="v2"`.
## Example Walkthrough
### Add Memories
```python
messages = [
    {"role": "user", "content": "What a beautiful sunny day! I feel so refreshed and ready to take on anything!"},
    {"role": "user", "content": "I've always wondered how storms form—what triggers them in the atmosphere?"},
@@ -60,125 +102,112 @@ messages = [
]
client.add(messages, user_id="alice")
```
# Search with criteria-based filtering
### Run Standard vs. Criteria-Based Search
```python
# With criteria
filters = {
"AND": [
{"user_id": "alice"}
]
}
results_with_criteria = client.search(
query="Why I am feeling happy today?",
filters=filters,
query="Why I am feeling happy today?",
filters=filters,
version="v2"
)
# Standard search without criteria filtering
# Without criteria
results_without_criteria = client.search(
query="Why I am feeling happy today?",
query="Why I am feeling happy today?",
user_id="alice"
)
```
## Search Results Comparison
Let's compare the results from criteria-based retrieval versus standard retrieval to see how the emotional criteria affects ranking:
### Compare Results
### Search Results (with Criteria)
```python
[
{
"memory": "User feels refreshed and ready to take on anything on a beautiful sunny day",
"score": 0.666,
...
},
{
"memory": "User finally has time to draw something after a long time",
"score": 0.616,
...
},
{
"memory": "User is happy today",
"score": 0.500,
...
},
{
"memory": "User is curious about how storms form and what triggers them in the atmosphere.",
"score": 0.400,
...
},
{
"memory": "It has been raining for days, making everything feel heavier.",
"score": 0.116,
...
}
{"memory": "User feels refreshed and ready to take on anything on a beautiful sunny day", "score": 0.666, ...},
{"memory": "User finally has time to draw something after a long time", "score": 0.616, ...},
{"memory": "User is happy today", "score": 0.500, ...},
{"memory": "User is curious about how storms form and what triggers them in the atmosphere.", "score": 0.400, ...},
{"memory": "It has been raining for days, making everything feel heavier.", "score": 0.116, ...}
]
```
### Search Results (without Criteria)
```python
[
{
"memory": "User is happy today",
"score": 0.607,
...
},
{
"memory": "User feels refreshed and ready to take on anything on a beautiful sunny day",
"score": 0.512,
...
},
{
"memory": "It has been raining for days, making everything feel heavier.",
"score": 0.4617,
...
},
{
"memory": "User is curious about how storms form and what triggers them in the atmosphere.",
"score": 0.340,
...
},
{
"memory": "User finally has time to draw something after a long time",
"score": 0.336,
...
}
{"memory": "User is happy today", "score": 0.607, ...},
{"memory": "User feels refreshed and ready to take on anything on a beautiful sunny day", "score": 0.512, ...},
{"memory": "It has been raining for days, making everything feel heavier.", "score": 0.4617, ...},
{"memory": "User is curious about how storms form and what triggers them in the atmosphere.", "score": 0.340, ...},
{"memory": "User finally has time to draw something after a long time", "score": 0.336, ...},
]
```
## Search Results Comparison
Looking at the example results above, we can see how criteria-based filtering affects the output:
1. **Memory Ordering**: With criteria, memories with high joy scores (feeling refreshed, having time to draw) are ranked higher; without criteria, the most relevant memory ("User is happy today") comes first.
2. **Score Distribution**: With criteria, scores are more spread out (0.116 to 0.666) and reflect the criteria weights; without criteria, scores are more clustered (0.336 to 0.607) and based purely on relevance.
3. **Negative Content**: With criteria, the rainy-day memory scores much lower (0.116) because its heavy, negative tone is penalized under the emotion criterion; without criteria it keeps a relatively high score (0.4617) on relevance alone.
4. **Curiosity Content**: The storm-related memory gets a moderate score (0.400) with criteria because the curiosity weighting recognizes it; without criteria it ranks lower (0.340) since it is less relevant to the happiness query.
## Key Differences vs. Standard Search
1. **Scoring**: With criteria, normalized scores (0-1) are used based on custom criteria weights, while without criteria, standard relevance scoring is used
2. **Ordering**: With criteria, memories are first retrieved by relevance, then criteria-based filtering and prioritization is applied, while without criteria, ordering is solely by relevance
3. **Filtering**: With criteria, post-retrieval filtering based on custom criteria (joy, curiosity, etc.) is available, which isn't available without criteria
| Aspect | Standard Search | Criteria Retrieval |
|-------------------------|--------------------------------------|-------------------------------------------------|
| Ranking Logic | Semantic similarity only | Semantic + LLM-based criteria scoring |
| Control Over Relevance | None | Fully customizable with weighted criteria |
| Memory Reordering | Static based on similarity | Dynamically re-ranked by intent alignment |
| Emotional Sensitivity | No tone or trait awareness | Incorporates emotion, tone, or custom behaviors |
| Version Required | Defaults | `search(version="v2")` |
<Note>
If no custom criteria are defined for a project, `version="v2"` falls back to standard relevance-based search: results are ranked solely by their relevance to the query, with no criteria-based filtering or prioritization.
</Note>
---
## Best Practices
- Choose **3–5 criteria** that reflect your application's intent
- Make descriptions **clear and distinct**, since they are interpreted by an LLM
- Use **stronger weights** to amplify impact of important traits
- Avoid redundant or ambiguous criteria (e.g. “positivity” + “joy”)
- Always handle empty result sets in your application logic
---
## How It Works
1. **Criteria Definition**: Define custom criteria with a name, description, and weight. These describe what matters in a memory (e.g., joy, urgency, empathy).
2. **Project Configuration**: Register these criteria using `update_project()`. They apply at the project level and influence all searches using `version="v2"`.
3. **Memory Retrieval**: When you perform a search with `version="v2"`, Mem0 first retrieves memories relevant to the query.
4. **Weighted Scoring**: Each retrieved memory is evaluated and scored against the defined criteria and weights.
This lets you prioritize memories that align with your agent's goals, not just those that look similar to the query.
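The flow above can be sketched in a few lines. The `update_project()` and `search()` calls follow the platform client API; the specific criterion names, descriptions, and weights are illustrative examples, not required values:

```python
# Sketch of the criteria-retrieval flow. The criteria below are
# illustrative examples; tune the names, descriptions, and weights
# to your own application.

retrieval_criteria = [
    {"name": "joy", "description": "Positive, joyful tone in the memory", "weight": 3},
    {"name": "curiosity", "description": "Interest in exploring or learning new things", "weight": 2},
    {"name": "emotion", "description": "Overall emotional weight; penalize heavy, negative tone", "weight": 1},
]

def search_with_criteria(client, query, user_id):
    # Steps 1-2: define the criteria and register them at the project level.
    client.update_project(retrieval_criteria=retrieval_criteria)
    # Steps 3-4: a v2 search retrieves relevant memories, then applies
    # weighted criteria scoring to produce the final ranking.
    return client.search(query=query, version="v2", filters={"AND": [{"user_id": user_id}]})
```

Here `client` is an initialized `MemoryClient`. Registering criteria is a one-time project setup; subsequent v2 searches pick them up automatically.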
<Note>
Criteria retrieval is currently supported only in search v2. Make sure to use `version="v2"` when performing searches with custom criteria.
</Note>
---
## Summary
- Define what “relevant” means using criteria
- Apply them per project via `update_project()`
- Use `version="v2"` to activate criteria-aware search
- Build agents that reason not just with relevance, but **contextual importance**
---
Need help designing or tuning your criteria?
<Snippet file="get-help.mdx" />

---
title: What is Mem0?
icon: "brain"
iconType: "solid"
---
Mem0 is a memory layer designed for modern AI agents. It gives them a persistent store they can use to:
- Recall relevant past interactions
- Store important user preferences and factual context
- Learn from successes and failures
It gives AI agents memory so they can remember, learn, and evolve across interactions. Mem0 integrates easily into your agent stack and scales from prototypes to production systems.
## Stateless vs. Stateful Agents
Most current agents are stateless: they process a query, generate a response, and forget everything. Even with huge context windows, everything resets the next session.
Stateful agents, powered by Mem0, are different. They retain context, recall what matters, and behave more intelligently over time.
<Frame caption="Stateless vs Stateful Agent">
<img src="../images/stateless-vs-stateful-agent.png" />
</Frame>
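In code, the stateful pattern is a simple loop: recall relevant memories, answer with them in the prompt, then write the new exchange back. A minimal sketch, where `search()` and `add()` mirror the Mem0 client API and `llm` is a placeholder for your model call:

```python
# Minimal stateful-agent turn. search()/add() mirror the Mem0 client
# API; `llm` is a placeholder for your LLM call.

def handle_turn(client, llm, user_id, user_message):
    # Recall: fetch memories relevant to this message.
    memories = client.search(query=user_message, user_id=user_id)
    context = "\n".join(m["memory"] for m in memories)

    # Respond: the LLM sees the recalled context, not the whole history.
    reply = llm(f"Relevant memories:\n{context}\n\nUser: {user_message}")

    # Remember: persist the exchange so future sessions can build on it.
    client.add(
        [{"role": "user", "content": user_message},
         {"role": "assistant", "content": reply}],
        user_id=user_id,
    )
    return reply
```

A stateless agent is the same loop without the first and last steps: nothing is recalled and nothing survives the session.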
## Where Memory Fits in the Agent Stack
Mem0 sits alongside your retriever, planner, and LLM. Unlike retrieval-based systems (like RAG), Mem0 tracks past interactions, stores long-term knowledge, and evolves the agents behavior.
<Frame caption="Memory in Agent Architecture">
<img src="../images/memory-agent-stack.png" />
</Frame>
Memory is not about pushing more tokens into a prompt but about intelligently remembering the context that matters. The difference shows up across several dimensions:
| Capability | Context Window | Mem0 Memory |
|------------------|------------------------|-----------------------------|
| Retention | Temporary | Persistent |
| Cost | Grows with input size | Optimized (only what matters) |
| Recall | Token proximity | Relevance + intent-based |
| Personalization | None | Deep, evolving profile |
| Behavior | Reactive | Adaptive |
## Memory vs. RAG: Complementary Tools
RAG (Retrieval-Augmented Generation) is great for fetching facts from documents. But its stateless. It doesnt know who the user is, what theyve asked before, or what failed last time.
Mem0 provides continuity. It stores decisions, preferences, and context—not just knowledge.
| Aspect | RAG | Mem0 Memory |
|--------------------|-------------------------------|-------------------------------|
| Statefulness | Stateless | Stateful |
| Recall Type | Document lookup | Evolving user context |
| Use Case | Ground answers in data | Guide behavior across time |
Together, theyre stronger: RAG informs the LLM; Mem0 shapes its memory.
## Types of Memory in Mem0
Mem0 supports different kinds of memory to mimic how humans store information:
- **Working Memory**: short-term session awareness
- **Factual Memory**: long-term structured knowledge (e.g., preferences, settings)
- **Episodic Memory**: records specific past conversations
- **Semantic Memory**: builds general knowledge over time
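One way to make these types concrete in your own code is to tag each write with a type label via metadata. This is a hypothetical convention for illustration: the `metadata` parameter mirrors the client's `add()` API, but the `memory_type` key is an example of your own choosing, not a built-in field:

```python
# Hypothetical convention: route writes into memory types via metadata.
# The "memory_type" key is an illustrative choice, not a built-in field.

MEMORY_TYPES = {"working", "factual", "episodic", "semantic"}

def remember(client, messages, user_id, memory_type):
    """Store messages tagged with one of the memory types above."""
    if memory_type not in MEMORY_TYPES:
        raise ValueError(f"unknown memory type: {memory_type}")
    return client.add(messages, user_id=user_id, metadata={"memory_type": memory_type})
```

The tag can later be used as a search filter, e.g. to recall only factual memories when personalizing a response.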
## Why Developers Choose Mem0
Mem0 isnt a wrapper around a vector store. Its a full memory engine with:
- **LLM-based extraction**: Intelligently decides what to remember
- **Filtering & decay**: Avoids memory bloat, forgets irrelevant info
- **Cost reduction**: Cut compute costs by injecting only the relevant memories into the prompt
- **Dashboards & APIs**: Observability, fine-grained control
- **Cloud and OSS**: Use our platform version or our open-source SDK version
You plug Mem0 into your agent framework; it doesn't replace your LLM or workflows. Instead, it adds a smart memory layer on top.
## Core Capabilities
- **Reduced token usage and faster responses**: sub-50 ms lookups
- **Multiple memory types**: procedural, episodic, semantic, and factual support
- **Multimodal support**: handle both text and images
- **Graph memory**: connect insights and entities across sessions
- **Host your way**: either a managed service or a self-hosted version
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
Integrate Mem0 in a few lines of code
</Card>
<Card title="Playground" icon="play" href="https://app.mem0.ai/playground">
Mem0 in action
</Card>
<Card title="Examples" icon="lightbulb" href="/examples">
See what you can build with Mem0
</Card>
</CardGroup>
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>