updated docs for mem0 architecture diagram (#2185)

This commit is contained in:
Prateek Chhikara
2025-02-01 11:43:17 -08:00
committed by GitHub
parent 75a4a253f7
commit a8fafb9368
6 changed files with 143 additions and 79 deletions


@@ -8,8 +8,57 @@ title: Overview
[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences and traits and continuously updates over time, making it ideal for applications like customer support chatbots and AI assistants.
## Understanding Mem0
Mem0, described as "_The Memory Layer for your AI Agents_," leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual details, smartly updates memories over time by resolving contradictions, and supports AI agents that evolve with user interactions. When needed, Mem0 employs a smart search system that ranks memories by relevance, importance, and recency, ensuring only the most useful information is presented.
Mem0 provides multiple endpoints through which users can interact with their memories. The two main endpoints are `add` and `search`. The `add` endpoint lets users ingest their conversations into Mem0, storing them as memories. The `search` endpoint handles retrieval, allowing users to query their set of stored memories.
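The two-endpoint pattern can be sketched with a minimal in-memory stand-in. This is purely illustrative of the `add`/`search` call shape, not the real Mem0 client, and the naive word-overlap retrieval here stands in for Mem0's actual semantic and graph search:

```python
class MemoryStore:
    """Hypothetical stand-in for the two main endpoints described above."""

    def __init__(self):
        self._memories = []  # list of (user_id, text) pairs

    def add(self, text: str, user_id: str) -> None:
        """Ingest a conversation snippet and store it as a memory."""
        self._memories.append((user_id, text))

    def search(self, query: str, user_id: str) -> list:
        """Naive retrieval: return this user's memories sharing a word
        with the query (the real system ranks by semantic relevance)."""
        words = set(query.lower().split())
        return [text for uid, text in self._memories
                if uid == user_id and words & set(text.lower().split())]


store = MemoryStore()
store.add("I prefer vegetarian food", user_id="alice")
results = store.search("what food does she like", user_id="alice")
```

Here `results` contains the stored preference, because the query and the memory share the word "food"; searches for a different `user_id` return nothing, mirroring Mem0's per-user memory scoping.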
### ADD Memories
<Frame caption="Architecture diagram illustrating the process of adding memories.">
<img src="images/add_architecture.png" />
</Frame>
When a user has a conversation, Mem0 uses an LLM to understand and extract important information. This model is designed to capture detailed information while maintaining the full context of the conversation.
Here's how the process works:
1. First, the LLM extracts two key elements:
* Relevant memories
* Important entities and their relationships
2. The system then compares this new information with existing data to identify any contradictions.
3. A second LLM evaluates the new information and decides whether to:
* Add it as new data
* Update existing information
* Delete outdated information
4. These changes are automatically made to two databases:
* A vector database (for storing memories)
* A graph database (for storing relationships)
This entire process happens continuously with each user interaction, ensuring that the system always maintains an up-to-date understanding of the user's information.
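The add pipeline above can be sketched end to end. In this hypothetical sketch the two LLM steps are replaced with trivial rules so the control flow is runnable; in Mem0 itself, both fact extraction and the ADD/UPDATE/DELETE decision are made by LLM calls, and writes also go to a graph store:

```python
def extract_facts(conversation: str) -> list:
    # Stand-in for LLM extraction: treat each sentence as one "fact".
    return [s.strip() for s in conversation.split(".") if s.strip()]


def decide(new_fact: str, existing: list):
    # Stand-in for the second LLM: compare the new fact against stored
    # facts and pick an operation. Same subject + predicate => UPDATE.
    for old in existing:
        if old == new_fact:
            return ("NOOP", old)          # already known, nothing to do
        if old.split()[:2] == new_fact.split()[:2]:
            return ("UPDATE", old)        # contradicts/refines old fact
    return ("ADD", None)                  # genuinely new information


vector_store = ["alice prefers tea"]      # existing memory
for fact in extract_facts("alice prefers coffee. alice lives in Paris"):
    op, target = decide(fact, vector_store)
    if op == "ADD":
        vector_store.append(fact)
    elif op == "UPDATE":
        vector_store[vector_store.index(target)] = fact
```

After the loop, "alice prefers tea" has been replaced by "alice prefers coffee" (contradiction resolved via UPDATE) and "alice lives in Paris" has been appended (ADD), matching steps 2-4 above.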
### SEARCH Memories
<Frame caption="Architecture diagram illustrating the memory search process.">
<img src="images/search_architecture.png" />
</Frame>
When a user asks Mem0 a question, the system uses smart memory lookup to find relevant information. Here's how it works:
1. The user submits a question to Mem0
2. The LLM processes this question in two ways:
    * It rewrites the question into a form better suited to searching the vector database
* It identifies important entities and their relationships from the question
3. The system then performs two parallel searches:
* It searches the vector database using the rewritten question and semantic search
    * It runs graph queries against the graph database using the identified entities and relationships
4. Finally, Mem0 combines the results from both databases to provide a complete answer to the user's question
This approach ensures that Mem0 can find and return all relevant information, whether it's stored as memories in the vector database or as relationships in the graph database.
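The search flow above can be sketched as follows. The function names (`rewrite_query`, `vector_search`, `graph_search`) are illustrative, not Mem0 APIs, and the LLM rewrite and both databases are mocked with simple data structures:

```python
def rewrite_query(question: str) -> str:
    # Stand-in for the LLM rewrite step: strip filler words so the
    # remaining terms match stored memories better.
    stop = {"what", "does", "do", "is", "the", "a"}
    return " ".join(w for w in question.lower().split() if w not in stop)


def vector_search(query: str, memories: list) -> list:
    # Stand-in for semantic search: word overlap against stored memories.
    words = set(query.split())
    return [m for m in memories if words & set(m.lower().split())]


def graph_search(entities: list, edges: dict) -> list:
    # Stand-in for graph queries: look up relationships for each entity.
    return [f"{e} -> {edges[e]}" for e in entities if e in edges]


memories = ["alice prefers coffee", "alice lives in Paris"]
edges = {"alice": "works_at Acme"}         # toy graph: entity -> relation

question = "what does alice prefer"
rewritten = rewrite_query(question)        # step 2a: query rewriting
entities = ["alice"]                       # step 2b: entity extraction
# Steps 3-4: run both searches and combine the results.
hits = vector_search(rewritten, memories) + graph_search(entities, edges)
```

The combined `hits` list contains matches from both stores, which is the merge step that produces the final answer context.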
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
@@ -22,47 +71,9 @@ Mem0 offers two powerful ways to leverage our technology: our [managed platform]
See what you can build with Mem0
</Card>
</CardGroup>
## Key Features
- OpenAI-compatible API: Easily switch between OpenAI and Mem0
- Advanced memory management: Save costs by efficiently handling long-term context
- Flexible deployment: Choose between managed platform or self-hosted solution
<Card title="All Mem0 Features" icon="list" href="/features">
</Card>
# Memory Classification in Mem0
Mem0 uses a sophisticated classification system to determine which parts of text should be extracted as memories. Not all text content will generate memories, as the system is designed to identify specific types of memorable information.
### When Memories Are Not Generated
There are several scenarios where Mem0 may return an empty list of memories:
- When users input definitional questions (e.g., "What is backpropagation?")
- For general concept explanations that don't contain personal or experiential information
- Technical definitions and theoretical explanations
- General knowledge statements without personal context
- Abstract or theoretical content
### Example Scenarios
```
Input: "What is machine learning?"
No memories extracted - Content is definitional and does not meet memory classification criteria.
Input: "Yesterday I learned about machine learning in class"
Memory extracted - Contains personal experience and temporal context.
```
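The classification criteria above can be approximated with a simple heuristic. This is purely illustrative: Mem0's actual classification is done by an LLM, but the signals below (definitional openings, personal pronouns, temporal markers) mirror the criteria just described:

```python
PERSONAL = {"i", "my", "me", "we", "our"}
TEMPORAL = {"yesterday", "today", "last", "ago", "when"}


def looks_memorable(text: str) -> bool:
    """Rough heuristic mirroring the classification criteria:
    reject definitional questions, accept personal/temporal content."""
    words = set(text.lower().replace("?", "").split())
    definitional = text.strip().lower().startswith(("what is", "define"))
    return not definitional and bool(words & (PERSONAL | TEMPORAL))


looks_memorable("What is machine learning?")                   # False
looks_memorable("Yesterday I learned about machine learning")  # True
```

The first input is rejected as definitional; the second passes because it carries both a temporal marker ("Yesterday") and personal context ("I"), matching the example scenarios above.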
### Best Practices
To ensure successful memory extraction:
- Include temporal markers (when events occurred)
- Add personal context or experiences
- Frame information in terms of real-world applications or experiences
- Include specific examples or cases rather than general definitions
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>