---
title: Graph Memory
icon: "circle-nodes"
iconType: "solid"
description: "Enable graph-based memory retrieval for more contextually relevant results"
---
## Overview
Graph Memory enhances the memory pipeline by creating relationships between entities in your data. It builds a network of interconnected information, enabling more contextually relevant search results.
This feature allows your AI applications to understand connections between entities, providing richer context for responses. It's ideal for applications needing relationship tracking and nuanced information retrieval across related memories.
## How Graph Memory Works
The Graph Memory feature analyzes how the entities in your data connect and relate to one another. When enabled:
1. Mem0 automatically builds a graph representation of entities
2. Retrieval considers graph relationships between entities
3. Results include entities that may be contextually important even if they're not direct semantic matches (a conceptual sketch of such a graph follows this list)
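Conceptually, the result is a set of typed entity nodes connected by labeled relationships. The snippet below is a simplified, illustrative sketch of the kind of graph derived from the conversation used later on this page; it is not Mem0's internal data model, which is managed for you and not exposed in this form.
```python Python
# Illustrative only: a simplified picture of the kind of entity graph
# Graph Memory derives from a conversation. The real representation is
# managed internally by Mem0 and is not exposed in this form.
graph = {
    "nodes": [
        {"id": "joseph", "type": "person"},
        {"id": "seattle", "type": "city"},
        {"id": "software engineer", "type": "job"},
    ],
    "edges": [
        {"source": "joseph", "relationship": "city", "target": "seattle"},
        {"source": "joseph", "relationship": "job", "target": "software engineer"},
    ],
}

# A question like "where does Joseph live?" can follow the "city" edge,
# surfacing "seattle" even when the query text is not a close semantic
# match for the stored memory.
related = [edge["target"] for edge in graph["edges"] if edge["source"] == "joseph"]
print(related)  # ['seattle', 'software engineer']
```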
## Using Graph Memory
To use Graph Memory, you need to enable it in your API calls by setting the `enable_graph=True` parameter. You'll also need to specify `output_format="v1.1"` to receive the enriched response format.
### Adding Memories with Graph Memory
When adding new memories, enable Graph Memory to automatically build relationships with existing memories:
```python Python
from mem0 import MemoryClient

client = MemoryClient(
    api_key="your-api-key",
    org_id="your-org-id",
    project_id="your-project-id"
)

messages = [
    {"role": "user", "content": "My name is Joseph"},
    {"role": "assistant", "content": "Hello Joseph, it's nice to meet you!"},
    {"role": "user", "content": "I'm from Seattle and I work as a software engineer"}
]

# Enable graph memory when adding
client.add(
    messages,
    user_id="joseph",
    version="v1",
    enable_graph=True,
    output_format="v1.1"
)
```
```javascript JavaScript
import { MemoryClient } from "mem0";

const client = new MemoryClient({
  apiKey: "your-api-key",
  org_id: "your-org-id",
  project_id: "your-project-id"
});

const messages = [
  { role: "user", content: "My name is Joseph" },
  { role: "assistant", content: "Hello Joseph, it's nice to meet you!" },
  { role: "user", content: "I'm from Seattle and I work as a software engineer" }
];

// Enable graph memory when adding
await client.add({
  messages,
  user_id: "joseph",
  version: "v1",
  enable_graph: true,
  output_format: "v1.1"
});
```
```json Output
{
  "results": [
    {
      "memory": "Name is Joseph",
      "event": "ADD",
      "id": "4a5a417a-fa10-43b5-8c53-a77c45e80438"
    },
    {
      "memory": "Is from Seattle",
      "event": "ADD",
      "id": "8d268d0f-5452-4714-b27d-ae46f676a49d"
    },
    {
      "memory": "Is a software engineer",
      "event": "ADD",
      "id": "5f0a184e-ddea-4fe6-9b92-692d6a901df8"
    }
  ]
}
```
The resulting graph memory creates a network of relationships between the extracted entities, enabling more contextual retrieval.
The graph relationships created by an `add` call are not returned directly in its response. Because building graph memories involves heavy processing, it runs asynchronously; once processing completes, use the `get_all()` endpoint to retrieve memories along with their graph metadata.
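Since the graph is built in the background, one simple pattern is to poll `get_all()` until the `relations` key is populated. This is a minimal sketch, assuming the v1.1 response is a dictionary shaped like the examples further down this page; the retry count and delay are arbitrary illustrative values:
```python Python
import time

# Minimal sketch: poll until the asynchronously built graph relations appear.
relations = []
for _ in range(10):
    memories = client.get_all(
        user_id="joseph",
        enable_graph=True,
        output_format="v1.1"
    )
    relations = memories.get("relations", [])
    if relations:
        break
    time.sleep(2)  # arbitrary delay before checking again

print(relations)
```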
### Searching with Graph Memory
When searching memories, Graph Memory helps retrieve entities that are contextually important even if they're not direct semantic matches.
```python Python
# Search with graph memory enabled
results = client.search(
    "what is my name?",
    user_id="joseph",
    enable_graph=True,
    output_format="v1.1"
)

print(results)
```
```javascript JavaScript
// Search with graph memory enabled
const results = await client.search({
  query: "what is my name?",
  user_id: "joseph",
  enable_graph: true,
  output_format: "v1.1"
});

console.log(results);
```
```json Output
{
  "results": [
    {
      "id": "4a5a417a-fa10-43b5-8c53-a77c45e80438",
      "memory": "Name is Joseph",
      "user_id": "joseph",
      "metadata": null,
      "categories": ["personal_details"],
      "immutable": false,
      "created_at": "2025-03-19T09:09:00.146390-07:00",
      "updated_at": "2025-03-19T09:09:00.146404-07:00",
      "score": 0.3621795393335552
    },
    {
      "id": "8d268d0f-5452-4714-b27d-ae46f676a49d",
      "memory": "Is from Seattle",
      "user_id": "joseph",
      "metadata": null,
      "categories": ["personal_details"],
      "immutable": false,
      "created_at": "2025-03-19T09:09:00.170680-07:00",
      "updated_at": "2025-03-19T09:09:00.170692-07:00",
      "score": 0.31212713194651254
    }
  ],
  "relations": [
    {
      "source": "joseph",
      "source_type": "person",
      "relationship": "name",
      "target": "joseph",
      "target_type": "person",
      "score": 0.39
    }
  ]
}
```
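You can then combine the semantically ranked `results` with the graph `relations` when assembling context for a prompt. The snippet below is one illustrative way to flatten both into plain text; the formatting is an arbitrary choice, not part of the SDK:
```python Python
# Illustrative sketch: flatten search results and graph relations into prompt context.
context_lines = []

for item in results["results"]:
    context_lines.append(f"- {item['memory']} (score: {item['score']:.2f})")

for rel in results.get("relations", []):
    context_lines.append(f"- {rel['source']} --{rel['relationship']}--> {rel['target']}")

print("\n".join(context_lines))
```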
### Retrieving All Memories with Graph Memory
When retrieving all memories, Graph Memory provides additional relationship context:
```python Python
# Get all memories with graph context
memories = client.get_all(
    user_id="joseph",
    enable_graph=True,
    output_format="v1.1"
)

print(memories)
```
```javascript JavaScript
// Get all memories with graph context
const memories = await client.getAll({
  user_id: "joseph",
  enable_graph: true,
  output_format: "v1.1"
});

console.log(memories);
```
```json Output
{
  "results": [
    {
      "id": "5f0a184e-ddea-4fe6-9b92-692d6a901df8",
      "memory": "Is a software engineer",
      "user_id": "joseph",
      "metadata": null,
      "categories": ["professional_details"],
      "immutable": false,
      "created_at": "2025-03-19T09:09:00.194116-07:00",
      "updated_at": "2025-03-19T09:09:00.194128-07:00"
    },
    {
      "id": "8d268d0f-5452-4714-b27d-ae46f676a49d",
      "memory": "Is from Seattle",
      "user_id": "joseph",
      "metadata": null,
      "categories": ["personal_details"],
      "immutable": false,
      "created_at": "2025-03-19T09:09:00.170680-07:00",
      "updated_at": "2025-03-19T09:09:00.170692-07:00"
    },
    {
      "id": "4a5a417a-fa10-43b5-8c53-a77c45e80438",
      "memory": "Name is Joseph",
      "user_id": "joseph",
      "metadata": null,
      "categories": ["personal_details"],
      "immutable": false,
      "created_at": "2025-03-19T09:09:00.146390-07:00",
      "updated_at": "2025-03-19T09:09:00.146404-07:00"
    }
  ],
  "relations": [
    {
      "source": "joseph",
      "source_type": "person",
      "relationship": "name",
      "target": "joseph",
      "target_type": "person"
    },
    {
      "source": "joseph",
      "source_type": "person",
      "relationship": "city",
      "target": "seattle",
      "target_type": "city"
    },
    {
      "source": "joseph",
      "source_type": "person",
      "relationship": "job",
      "target": "software engineer",
      "target_type": "job"
    }
  ]
}
```
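The `relations` list can also be grouped into a simple adjacency structure for downstream lookups. The helper below is an illustrative sketch, not part of the SDK:
```python Python
from collections import defaultdict

# Illustrative sketch: group relations by source entity for quick lookups.
adjacency = defaultdict(list)
for rel in memories.get("relations", []):
    adjacency[rel["source"]].append((rel["relationship"], rel["target"]))

# For the output above, adjacency["joseph"] would contain
# [("name", "joseph"), ("city", "seattle"), ("job", "software engineer")].
for source, edges in adjacency.items():
    for relationship, target in edges:
        print(f"{source} --{relationship}--> {target}")
```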
## Best Practices
- Enable Graph Memory for applications where understanding context and relationships between memories is important
- Graph Memory works best with a rich history of related conversations
- Consider Graph Memory for long-running assistants that need to track evolving information
## Performance Considerations
Graph Memory requires additional processing and may increase response times slightly for very large memory stores. However, for most use cases, the improved retrieval quality outweighs the minimal performance impact.
If you have any questions, please feel free to reach out to us.