[Mem0] Fix issues and update docs (#1477)

This commit is contained in:
Deshraj Yadav
2024-07-14 22:21:07 -07:00
committed by GitHub
parent f842a92e25
commit 4ec51f2dd6
21 changed files with 1257 additions and 1010 deletions


@@ -0,0 +1,11 @@
<CardGroup cols={3}>
<Card title="Talk to founders" icon="calendar" href="https://cal.com/taranjeetio/meet">
Schedule a call with the founders
</Card>
<Card title="Slack" icon="slack" href="https://embedchain.ai/slack" color="#4A154B">
Join our Slack community
</Card>
<Card title="Discord" icon="discord" href="https://discord.gg/6PzXDgEjG5" color="#7289DA">
Join our Discord community
</Card>
</CardGroup>


@@ -0,0 +1,106 @@
---
title: Customer Support AI Agent
---
You can create a personalized Customer Support AI Agent using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
## Overview
The Customer Support AI Agent leverages Mem0 to retain information across interactions, enabling a personalized and efficient support experience.
## Setup
Install the necessary packages using pip:
```bash
pip install openai mem0ai
```
## Full Code Example
Below is the simplified code to create and interact with a Customer Support AI Agent using Mem0:
```python
from openai import OpenAI
from mem0 import Memory


class CustomerSupportAIAgent:
    def __init__(self):
        """
        Initialize the CustomerSupportAIAgent with memory configuration and OpenAI client.
        """
        config = {
            "vector_store": {
                "provider": "qdrant",
                "config": {
                    "host": "localhost",
                    "port": 6333,
                }
            },
        }
        self.memory = Memory.from_config(config)
        self.client = OpenAI()
        self.app_id = "customer-support"

    def handle_query(self, query, user_id=None):
        """
        Handle a customer query and store the relevant information in memory.

        :param query: The customer query to handle.
        :param user_id: Optional user ID to associate with the memory.
        """
        # Start a streaming chat completion request to the AI
        stream = self.client.chat.completions.create(
            model="gpt-4",
            stream=True,
            messages=[
                {"role": "system", "content": "You are a customer support AI agent."},
                {"role": "user", "content": query}
            ]
        )
        # Store the query in memory
        self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id})

        # Print the response from the AI in real-time
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="")

    def get_memories(self, user_id=None):
        """
        Retrieve all memories associated with the given customer ID.

        :param user_id: Optional user ID to filter memories.
        :return: List of memories.
        """
        return self.memory.get_all(user_id=user_id)


# Instantiate the CustomerSupportAIAgent
support_agent = CustomerSupportAIAgent()

# Define a customer ID
customer_id = "jane_doe"

# Handle a customer query
support_agent.handle_query("I need help with my recent order. It hasn't arrived yet.", user_id=customer_id)
```
### Fetching Memories
You can fetch all the memories at any point in time using the following code:
```python
memories = support_agent.get_memories(user_id=customer_id)
for m in memories:
    print(m['text'])
```
### Key Points
- **Initialization**: The `CustomerSupportAIAgent` class is initialized with the necessary memory configuration and OpenAI client setup.
- **Handling Queries**: The `handle_query` method sends a query to the AI and stores the relevant information in memory.
- **Retrieving Memories**: The `get_memories` method fetches all stored memories associated with a customer.
### Conclusion
As the conversation progresses, Mem0's memory automatically updates based on the interactions, providing a continuously improving personalized support experience.
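As a next step, the stored memories can be fed back into the prompt so the agent actually uses them when responding. Below is a minimal, illustrative sketch — the `build_support_messages` helper and its prompt format are not part of the Mem0 API; the memory dicts are assumed to carry the `text` key shown in this guide:

```python
def build_support_messages(query, memories):
    """Compose a chat message list grounded in previously stored memories."""
    # Turn each remembered fact into a bullet for the system prompt
    context = "\n".join(f"- {m['text']}" for m in memories)
    system = "You are a customer support AI agent."
    if context:
        system += f"\nKnown facts about this customer:\n{context}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": query},
    ]

# Example usage with a hand-written memory dict
messages = build_support_messages(
    "Where is my order?",
    [{"text": "Order #123 placed on 2024-07-01"}],
)
```

These messages could then be passed to the chat completion call in `handle_query` in place of the hard-coded message list.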


@@ -0,0 +1,28 @@
---
title: Overview
description: How to use Mem0 in your existing applications
---
With Mem0, you can create stateful LLM-based applications such as chatbots, virtual assistants, or AI agents. Mem0 enhances your applications by providing a memory layer that makes responses:
- More personalized
- More reliable
- More cost-effective, by reducing the number of LLM interactions
- More engaging
- Capable of long-term memory
Here are some examples of how Mem0 can be integrated into various applications:
## Example Use Cases
<CardGroup cols={1}>
<Card title="Personalized AI Tutor" icon="square-1" href="/examples/personal-ai-tutor">
<img width="100%" src="/images/ai-tutor.png" />
Build a Personalized AI Tutor that adapts to student progress and learning preferences. This tutor can offer tailored lessons, remember past interactions, and provide a more effective and engaging educational experience.
</Card>
<Card title="Customer Support Agent" icon="square-2" href="/examples/customer-support-agent">
<img width="100%" src="/images/customer-support-agent.png" />
Develop a Customer Support Agent that remembers customer preferences, past interactions, and conversation context to provide personalized and efficient assistance. Retaining this history helps the agent resolve issues faster, improving customer satisfaction and reducing resolution times.
</Card>
</CardGroup>


@@ -0,0 +1,108 @@
---
title: Personalized AI Tutor
---
You can create a personalized AI Tutor using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
## Overview
The Personalized AI Tutor leverages Mem0 to retain information across interactions, enabling a tailored learning experience. By integrating with OpenAI's GPT-4 model, the tutor can provide detailed and context-aware responses to user queries.
## Setup
Before you begin, ensure you have the required dependencies installed. You can install the necessary packages using pip:
```bash
pip install openai mem0ai
```
## Full Code Example
Below is the complete code to create and interact with a Personalized AI Tutor using Mem0:
```python
from openai import OpenAI
from mem0 import Memory

# Initialize the OpenAI client
client = OpenAI()


class PersonalAITutor:
    def __init__(self):
        """
        Initialize the PersonalAITutor with memory configuration and OpenAI client.
        """
        config = {
            "vector_store": {
                "provider": "qdrant",
                "config": {
                    "host": "localhost",
                    "port": 6333,
                }
            },
        }
        self.memory = Memory.from_config(config)
        self.client = client
        self.app_id = "app-1"

    def ask(self, question, user_id=None):
        """
        Ask a question to the AI and store the relevant facts in memory.

        :param question: The question to ask the AI.
        :param user_id: Optional user ID to associate with the memory.
        """
        # Start a streaming chat completion request to the AI
        stream = self.client.chat.completions.create(
            model="gpt-4",
            stream=True,
            messages=[
                {"role": "system", "content": "You are a personal AI Tutor."},
                {"role": "user", "content": question}
            ]
        )
        # Store the question in memory
        self.memory.add(question, user_id=user_id, metadata={"app_id": self.app_id})

        # Print the response from the AI in real-time
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                print(chunk.choices[0].delta.content, end="")

    def get_memories(self, user_id=None):
        """
        Retrieve all memories associated with the given user ID.

        :param user_id: Optional user ID to filter memories.
        :return: List of memories.
        """
        return self.memory.get_all(user_id=user_id)


# Instantiate the PersonalAITutor
ai_tutor = PersonalAITutor()

# Define a user ID
user_id = "john_doe"

# Ask a question
ai_tutor.ask("I am learning introduction to CS. What is a queue? Briefly explain.", user_id=user_id)
```
### Fetching Memories
You can fetch all the memories at any point in time using the following code:
```python
memories = ai_tutor.get_memories(user_id=user_id)
for m in memories:
    print(m['text'])
```
### Key Points
- **Initialization**: The `PersonalAITutor` class is initialized with the necessary memory configuration and OpenAI client setup.
- **Asking Questions**: The `ask` method sends a question to the AI and stores the relevant information in memory.
- **Retrieving Memories**: The `get_memories` method fetches all stored memories associated with a user.
### Conclusion
As the conversation progresses, Mem0's memory automatically updates based on the interactions, providing a continuously improving personalized learning experience. This setup ensures that the AI Tutor can offer contextually relevant and accurate responses, enhancing the overall educational process.
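Because `ask` tags every stored memory with an `app_id`, a user's memories can later be sliced per application. The `memories_for_app` helper below is a small illustrative sketch (not part of the Mem0 API) that filters a `get_all()` result by that metadata:

```python
def memories_for_app(memories, app_id):
    """Keep only the memories whose metadata carries the given app_id."""
    return [
        m for m in memories
        if m.get("metadata", {}).get("app_id") == app_id
    ]

# Example with hand-written memory dicts in the shape shown in this guide
sample = [
    {"text": "Asked about queues", "metadata": {"app_id": "app-1"}},
    {"text": "Unrelated note", "metadata": {"app_id": "other-app"}},
]
app_memories = memories_for_app(sample, "app-1")
```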

Binary image files added (not shown):
- new image, 2.8 MiB
- docs/images/ai-tutor.png, 843 KiB
- new image, 4.6 MiB


@@ -1,191 +0,0 @@
---
title: Introduction
description: 'Welcome to the Mem0 documentation'
---
Mem0 is the long-term memory for AI Agents.
## Installation
```bash
pip install mem0ai
```
## Usage
### Instantiate
```python
from mem0 import Memory
m = Memory()
```
Mem0 uses Qdrant by default for storing semantic memories. If you want to use Qdrant in server mode, use the following method to instantiate.

Run Qdrant first:
```bash
docker pull qdrant/qdrant
docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
```
Then, instantiate the memory with the Qdrant server:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
}

m = Memory.from_config(config)
```
### Store a Memory
```python
m.add("Likes to play cricket over weekend", user_id="deshraj", metadata={"foo": "bar"})

# Output:
# [
#     {
#         'id': 'm1',
#         'event': 'add',
#         'data': 'Likes to play cricket over weekend'
#     }
# ]

# Similarly, you can store a memory for an agent
m.add("Agent X is best travel agent in Paris", agent_id="agent-x", metadata={"type": "long-term"})
```
### Retrieve all memories
#### 1. Get all memories
```python
m.get_all()

# Output:
# [
#     {
#         'id': 'm1',
#         'text': 'Likes to play cricket over weekend',
#         'metadata': {
#             'data': 'Likes to play cricket over weekend'
#         }
#     },
#     {
#         'id': 'm2',
#         'text': 'Agent X is best travel agent in Paris',
#         'metadata': {
#             'data': 'Agent X is best travel agent in Paris'
#         }
#     }
# ]
```
#### 2. Get memories for specific user
```python
m.get_all(user_id="deshraj")
```
#### 3. Get memories for specific agent
```python
m.get_all(agent_id="agent-x")
```
#### 4. Get memories for a user during an agent run
```python
m.get_all(agent_id="agent-x", user_id="deshraj")
```
### Retrieve a Memory
```python
memory_id = "m1"
m.get(memory_id)

# Output:
# {
#     'id': 'm1',
#     'text': 'Likes to play cricket over weekend',
#     'metadata': {
#         'data': 'Likes to play cricket over weekend'
#     }
# }
```
### Search for related memories
```python
m.search(query="What is my name", user_id="deshraj")
```
### Update a Memory
```python
m.update(memory_id="m1", data="Likes to play tennis")
```
### Get history of a Memory
```python
m.history(memory_id="m1")

# Output:
# [
#     {
#         'id': 'h1',
#         'memory_id': 'm1',
#         'prev_value': None,
#         'new_value': 'Likes to play cricket over weekend',
#         'event': 'add',
#         'timestamp': '2024-06-12 21:00:54.466687',
#         'is_deleted': 0
#     },
#     {
#         'id': 'h2',
#         'memory_id': 'm1',
#         'prev_value': 'Likes to play cricket over weekend',
#         'new_value': 'Likes to play tennis',
#         'event': 'update',
#         'timestamp': '2024-06-12 21:01:17.230943',
#         'is_deleted': 0
#     }
# ]
```
### Delete a Memory
```python
m.delete(memory_id="m1")
```
### Delete memories of a user or agent
```python
m.delete_all(user_id="deshraj")
m.delete_all(agent_id="agent-x")
```
### Delete all Memories
```python
m.reset()
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.
## License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)

Binary file added (not shown): docs/logo/favicon.png, 66 KiB


@@ -1,7 +1,7 @@
{
"$schema": "https://mintlify.com/schema.json",
"name": "Mem0.ai",
"favicon": "/logo/light.svg",
"favicon": "/logo/favicon.png",
"colors": {
"primary": "#3B2FC9",
"light": "#6673FF",
@@ -16,6 +16,12 @@
"light": "/logo/light.svg",
"href": "https://github.com/embedchain/embedchain"
},
"tabs": [
{
"name": "💡 Examples",
"url": "examples"
}
],
"topbarLinks": [
{
"name": "Support",
@@ -32,13 +38,27 @@
"name": "Discord",
"icon": "discord",
"url": "https://mem0.ai/discord/"
},
{
"name": "Talk to founders",
"icon": "calendar",
"url": "https://cal.com/taranjeetio/meet"
}
],
"navigation": [
{
"group": "Get Started",
"pages": [
"introduction"
"overview",
"quickstart"
]
},
{
"group": "💡 Examples",
"pages": [
"examples/overview",
"examples/personal-ai-tutor",
"examples/customer-support-agent"
]
}
],

docs/overview.mdx (new file, 59 lines)

@@ -0,0 +1,59 @@
---
title: 📚 Overview
description: 'Welcome to the Mem0 docs!'
---
> Mem0 provides a smart, self-improving memory layer for Large Language Models, enabling personalized AI experiences across applications.
## Core features
- **User, Session, and AI Agent Memory**: Retains information across user sessions, interactions, and AI agents, ensuring continuity and context.
- **Adaptive Personalization**: Continuously improves personalization based on user interactions and feedback.
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
If you want to get started quickly, jump to one of the following sections:
<CardGroup cols={2}>
<Card title="Quickstart" icon="square-1" href="/quickstart/">
Jump to the quickstart section to get started
</Card>
<Card title="Examples" icon="square-2" href="/examples/overview/">
Check out curated examples
</Card>
</CardGroup>
## Common Use Cases
- **Personalized Learning Assistants**: Long-term memory allows learning assistants to remember user preferences, past interactions, and progress, providing a more tailored and effective learning experience.
- **Customer Support AI Agents**: By retaining information from previous interactions, customer support bots can offer more accurate and context-aware assistance, improving customer satisfaction and reducing resolution times.
- **Healthcare Assistants**: Long-term memory enables healthcare assistants to keep track of patient history, medication schedules, and treatment plans, ensuring personalized and consistent care.
- **Virtual Companions**: Virtual companions can use long-term memory to build deeper relationships with users by remembering personal details, preferences, and past conversations, making interactions more meaningful.
- **Productivity Tools**: Long-term memory helps productivity tools remember user habits, frequently used documents, and task history, streamlining workflows and enhancing efficiency.
- **Gaming AI**: In gaming, AI with long-term memory can create more immersive experiences by remembering player choices, strategies, and progress, adapting the game environment accordingly.
## How is Mem0 different from RAG?
Mem0's memory implementation for Large Language Models (LLMs) offers several advantages over Retrieval-Augmented Generation (RAG):
- **Entity Relationships**: Mem0 can understand and relate entities across different interactions, unlike RAG which retrieves information from static documents. This leads to a deeper understanding of context and relationships.
- **Recency, Relevancy, and Decay**: Mem0 prioritizes recent interactions and gradually forgets outdated information, ensuring the memory remains relevant and up-to-date for more accurate responses.
- **Contextual Continuity**: Mem0 retains information across sessions, maintaining continuity in conversations and interactions, which is essential for long-term engagement applications like virtual companions or personalized learning assistants.
- **Adaptive Learning**: Mem0 improves its personalization based on user interactions and feedback, making the memory more accurate and tailored to individual users over time.
- **Dynamic Updates**: Mem0 can dynamically update its memory with new information and interactions, unlike RAG which relies on static data. This allows for real-time adjustments and improvements, enhancing the user experience.
These advanced memory capabilities make Mem0 a powerful tool for developers aiming to create personalized and context-aware AI applications.
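Mem0's actual ranking internals are not documented here, but the idea behind "Recency, Relevancy, and Decay" can be illustrated with a simple exponential weighting: a memory's similarity score is halved every fixed number of days since it was written. The `decayed_score` helper below is purely a sketch of the concept, not Mem0's formula:

```python
import time

def decayed_score(similarity, written_at, half_life_days=30.0, now=None):
    """Illustrative recency weighting: halve the effective score every
    `half_life_days` since the memory was written (not Mem0's internals)."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - written_at) / 86400.0)
    return similarity * 0.5 ** (age_days / half_life_days)

# A 30-day-old memory with raw similarity 0.8 scores half as much
now = time.time()
score = decayed_score(0.8, now - 30 * 86400, half_life_days=30.0, now=now)
```

Under such a scheme, fresh interactions dominate retrieval while stale facts fade out gradually instead of abruptly.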
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />

docs/quickstart.mdx (new file, 195 lines)

@@ -0,0 +1,195 @@
---
title: 🚀 Quickstart
description: 'Get started with Mem0 quickly!'
---
> Welcome to the Mem0 quickstart guide. This guide will help you get up and running with Mem0 in no time.
## Installation
To install Mem0, you can use pip. Run the following command in your terminal:
```bash
pip install mem0ai
```
## Basic Usage
### Initialize Mem0
<Tabs>
<Tab title="Basic">
```python
from mem0 import Memory
m = Memory()
```
</Tab>
<Tab title="Advanced">
If you want to run Mem0 in production, initialize using the following method:
Run Qdrant first:
```bash
docker pull qdrant/qdrant
docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
```
Then, instantiate the memory with the Qdrant server:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
}

m = Memory.from_config(config)
```
</Tab>
</Tabs>
### Store a Memory
```python
# For a user
result = m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
print(result)
```
Output:
```python
[
    {
        'id': 'm1',
        'event': 'add',
        'data': 'Likes to play cricket on weekends'
    }
]
```
### Retrieve Memories
```python
# Get all memories
all_memories = m.get_all()
print(all_memories)
```
Output:
```python
[
    {
        'id': 'm1',
        'text': 'Likes to play cricket on weekends',
        'metadata': {
            'data': 'Likes to play cricket on weekends',
            'category': 'hobbies'
        }
    },
    # ... other memories ...
]
```
```python
# Get a single memory by ID
specific_memory = m.get("m1")
print(specific_memory)
```
Output:
```python
{
    'id': 'm1',
    'text': 'Likes to play cricket on weekends',
    'metadata': {
        'data': 'Likes to play cricket on weekends',
        'category': 'hobbies'
    }
}
```
### Search Memories
```python
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")
print(related_memories)
```
Output:
```python
[
    {
        'id': 'm1',
        'text': 'Likes to play cricket on weekends',
        'metadata': {
            'data': 'Likes to play cricket on weekends',
            'category': 'hobbies'
        },
        'score': 0.85  # similarity score
    },
    # ... other related memories ...
]
```
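The `score` field makes it easy to drop weak matches before building a prompt. Below is a small illustrative filter — the threshold value is arbitrary and not a Mem0 default:

```python
def confident_matches(results, min_score=0.5):
    """Keep only search results at or above the similarity threshold."""
    return [r for r in results if r.get("score", 0.0) >= min_score]

# Example with results in the shape shown above
results = [
    {"id": "m1", "text": "Likes to play cricket on weekends", "score": 0.85},
    {"id": "m9", "text": "Unrelated memory", "score": 0.12},
]
strong = confident_matches(results)
```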
### Update a Memory
```python
result = m.update(memory_id="m1", data="Likes to play tennis on weekends")
print(result)
```
Output:
```python
{
    'id': 'm1',
    'event': 'update',
    'data': 'Likes to play tennis on weekends'
}
```
### Memory History
```python
history = m.history(memory_id="m1")
print(history)
```
Output:
```python
[
    {
        'id': 'h1',
        'memory_id': 'm1',
        'prev_value': None,
        'new_value': 'Likes to play cricket on weekends',
        'event': 'add',
        'timestamp': '2024-07-14 10:00:54.466687',
        'is_deleted': 0
    },
    {
        'id': 'h2',
        'memory_id': 'm1',
        'prev_value': 'Likes to play cricket on weekends',
        'new_value': 'Likes to play tennis on weekends',
        'event': 'update',
        'timestamp': '2024-07-14 10:15:17.230943',
        'is_deleted': 0
    }
]
```
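Since each history entry records a transition and the list is returned oldest-first, a memory's latest value can be recovered by replaying the events. The `current_value` helper below is an illustrative sketch, not part of the Mem0 API (entries flagged `is_deleted` are simply skipped here):

```python
def current_value(history):
    """Replay a memory's history (oldest-first, as shown above) and
    return its latest value, or None if it was never set."""
    value = None
    for event in history:
        if not event.get("is_deleted"):
            value = event["new_value"]
    return value

# Example with the two events shown above (timestamps omitted)
history = [
    {"new_value": "Likes to play cricket on weekends", "event": "add", "is_deleted": 0},
    {"new_value": "Likes to play tennis on weekends", "event": "update", "is_deleted": 0},
]
latest = current_value(history)
```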
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />