Docs updates (#2229)

Lennex Zinyando
2025-02-19 23:48:48 +02:00
committed by GitHub
parent d4df9f6dfe
commit db512950b9
4 changed files with 131 additions and 44 deletions


@@ -8,55 +8,27 @@ iconType: "solid"
🔔 New Feature: [Webhooks](/features/webhook) are now available! Configure real-time notifications for memory events in your Mem0 project.
</Note>
# Introduction
## Understanding Mem0
[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants by giving them persistent, contextual memory. AI systems using Mem0 actively learn from and adapt to user interactions over time.
Mem0, described as "_The Memory Layer for your AI Agents_," leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual details, intelligently updates memories over time by resolving contradictions, and supports AI agents that evolve with user interactions. When memories are needed, Mem0 employs a smart search system that ranks them by relevance, importance, and recency, ensuring only the most useful information is presented.
Mem0's memory layer combines LLMs with vector-based storage. LLMs extract and process key information from conversations, while the vector store enables efficient semantic search and retrieval of memories. This architecture helps AI agents connect past interactions with current context to produce more relevant responses.
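Both components are pluggable in the open-source SDK. A minimal configuration sketch (the provider names and keys below mirror the open-source configuration format, but treat the exact keys as version-dependent assumptions):

```python
from mem0 import Memory

# Hypothetical provider choices; any supported LLM and vector store work
config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
    "vector_store": {"provider": "qdrant", "config": {"collection_name": "mem0"}},
}

m = Memory.from_config(config)
```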
Mem0 provides multiple endpoints through which users can interact with their memories. The two main endpoints are `add` and `search`. The `add` endpoint lets users ingest their conversations into Mem0, storing them as memories. The `search` endpoint handles retrieval, allowing users to query their set of stored memories.
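For example, a minimal sketch with the open-source Python SDK (the method names follow the public `Memory` API; return shapes vary by version):

```python
from mem0 import Memory

m = Memory()

# `add` ingests a conversation; Mem0 extracts and stores the key facts
m.add(
    [
        {"role": "user", "content": "I'm vegetarian and I'm allergic to nuts."},
        {"role": "assistant", "content": "Got it, I'll remember that."},
    ],
    user_id="alice",
)

# `search` retrieves the stored memories most relevant to a query
results = m.search("What can the user eat?", user_id="alice")
print(results)
```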
## Key Features
- **Memory Processing**: Uses LLMs to automatically extract and store important information from conversations while maintaining full context
- **Memory Management**: Continuously updates and resolves contradictions in stored information to maintain accuracy
- **Dual Storage Architecture**: Combines a vector database for memory storage with a graph database for relationship tracking
- **Smart Retrieval System**: Employs semantic search and graph queries to find relevant memories based on importance and recency
- **Simple API Integration**: Provides easy-to-use endpoints for adding (`add`) and retrieving (`search`) memories
<Frame caption="Architecture diagram illustrating the process of adding memories.">
<img src="images/add_architecture.png" />
</Frame>
When a user has a conversation, Mem0 uses an LLM to understand and extract important information. This model is designed to capture detailed information while maintaining the full context of the conversation.
Here's how the process works:
1. First, the LLM extracts two key elements:
* Relevant memories
* Important entities and their relationships
2. The system then compares this new information with existing data to identify contradictions, if present.
3. A second LLM evaluates the new information and decides whether to:
* Add it as new data
* Update existing information
* Delete outdated information
4. These changes are automatically made to two databases:
* A vector database (for storing memories)
* A graph database (for storing relationships)
This entire process happens continuously with each user interaction, ensuring that the system always maintains an up-to-date understanding of the user's information.
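The effect of this pipeline is visible through the public API: adding information that contradicts an earlier memory revises it instead of storing a duplicate. A sketch, assuming the open-source `Memory` client (the exact return shape of `get_all` varies by SDK version):

```python
from mem0 import Memory

m = Memory()

# First interaction: a new fact is extracted and stored
m.add([{"role": "user", "content": "I live in Berlin."}], user_id="alice")

# A later interaction contradicts it; the update step should resolve the
# conflict and revise the existing memory rather than add a duplicate
m.add(
    [{"role": "user", "content": "I just moved from Berlin to Lisbon."}],
    user_id="alice",
)

# Inspect the stored memories: expect one up-to-date location fact
print(m.get_all(user_id="alice"))
```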
### SEARCH Memories
<Frame caption="Architecture diagram illustrating the memory search process.">
<img src="images/search_architecture.png" />
</Frame>
When a user asks Mem0 a question, the system uses smart memory lookup to find relevant information. Here's how it works:
1. The user submits a question to Mem0
2. The LLM processes this question in two ways:
* It rewrites the question into a query better suited to semantic search over the vector database
* It identifies important entities and their relationships from the question
3. The system then performs two parallel searches:
* It searches the vector database using the rewritten question and semantic search
* It runs graph queries against the graph database using the identified entities and relationships
4. Finally, Mem0 combines the results from both databases to provide a complete answer to the user's question
This approach ensures that Mem0 can find and return all relevant information, whether it's stored as memories in the vector database or as relationships in the graph database.
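Enabling the graph side takes one extra configuration block; once configured, a single `search` call fans out to both stores and merges the results. A sketch, assuming a local Neo4j instance (connection details are placeholders, and the exact configuration keys may differ across versions):

```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",  # placeholder connection details
            "username": "neo4j",
            "password": "password",
        },
    },
}

m = Memory.from_config(config)

# One call runs both searches: semantic retrieval over the vector store
# and graph queries over extracted entities, with results combined
results = m.search("Who are Alice's favorite teammates?", user_id="alice")
print(results)
```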
## Use Cases
- **Customer Support Chatbots**: Create support agents that remember customer history, preferences, and past interactions to provide personalized assistance
- **Personal AI Tutors**: Build educational assistants that track student progress, adapt to learning patterns, and provide contextual help
- **Healthcare Applications**: Develop healthcare assistants that maintain patient history and provide personalized care recommendations
- **Enterprise Knowledge Management**: Power systems that learn from organizational interactions and maintain institutional knowledge
- **Personalized AI Assistants**: Create assistants that learn user preferences and adapt their responses over time
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
@@ -74,7 +46,6 @@ Mem0 offers two powerful ways to leverage our technology: our [managed platform]
</Card>
</CardGroup>
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods: