[Mem0] Update docs and improve readability (#1727)

This commit is contained in:
Deshraj Yadav
2024-08-21 00:18:43 -07:00
committed by GitHub
parent 2d66c23116
commit 7de35b4a68
21 changed files with 365 additions and 320 deletions

View File

@@ -1,3 +1,7 @@
---
title: Azure OpenAI
---
To use Azure OpenAI embedding models, set the `AZURE_OPENAI_API_KEY` environment variable. You can obtain the Azure OpenAI API key from the Azure portal.
### Usage
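A minimal sketch of how this might look (the config keys and model name below are assumptions; check the embedder config reference for the exact schema):

```python
import os
from mem0 import Memory

os.environ["AZURE_OPENAI_API_KEY"] = "your-api-key"

# Provider key and model/deployment name are assumptions; adjust to your deployment
config = {
    "embedder": {
        "provider": "azure_openai",
        "config": {"model": "text-embedding-ada-002"}
    }
}
m = Memory.from_config(config)
```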

View File

@@ -1,3 +1,7 @@
---
title: Hugging Face
---
You can use embedding models from Hugging Face to run Mem0 locally.
### Usage
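For example, a sketch along these lines (the provider key and model name are assumptions; any Hugging Face embedding model should be substitutable):

```python
from mem0 import Memory

# Provider key and model name are assumptions; swap in any Hugging Face embedding model
config = {
    "embedder": {
        "provider": "huggingface",
        "config": {"model": "multi-qa-MiniLM-L6-cos-v1"}
    }
}
m = Memory.from_config(config)
```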

View File

@@ -1,3 +1,7 @@
---
title: OpenAI
---
To use OpenAI embedding models, set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
### Usage
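A minimal sketch (the model name is an assumption; OpenAI is also the default embedder when no config is given):

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"

# Model name is an assumption; OpenAI is the default embedder if this block is omitted
config = {
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"}
    }
}
m = Memory.from_config(config)
```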

View File

@@ -4,12 +4,19 @@ title: Overview
Mem0 offers support for various embedding models, allowing users to choose the one that best suits their needs.
## Supported Embedders
See the list of supported embedders below.
<CardGroup cols={4}>
<Card title="OpenAI" href="/components/embedders/models/openai"></Card>
<Card title="Azure OpenAI" href="/components/embedders/models/azure_openai"></Card>
<Card title="Ollama" href="/components/embedders/models/ollama"></Card>
<Card title="Hugging Face" href="/components/embedders/models/huggingface"></Card>
</CardGroup>
## Usage
To utilize an embedder, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `OpenAI` will be used as the embedder.
For a comprehensive list of available parameters for embedder configuration, please refer to [Config](./config).
To view all supported embedders, visit the [Supported Embedders](./models) page.
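As a rough illustration of the configuration shape (the exact keys are assumptions; see [Config](./config) for the authoritative schema):

```python
from mem0 import Memory

# If no embedder config is supplied, OpenAI is used by default
config = {
    "embedder": {
        "provider": "openai",  # or "azure_openai", "ollama", "huggingface"
        "config": {"model": "text-embedding-3-small"}  # model name is an assumption
    }
}
m = Memory.from_config(config)
m.add("I prefer vegetarian food", user_id="alice")
```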

View File

@@ -1,3 +1,7 @@
---
title: AWS Bedrock
---
### Setup
- Before using the AWS Bedrock LLM, make sure you have the appropriate model access from [Bedrock Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
- You will also need to authenticate the `boto3` client using one of the methods described in the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials) (for example, via environment variables, as sketched below).
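For instance, one of those methods is environment variables (the values below are placeholders):

```python
import os

# One supported way to authenticate boto3: credentials via environment variables
# (see the AWS credentials guide linked above for the alternatives)
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-key"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
```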

View File

@@ -1,3 +1,7 @@
---
title: Azure OpenAI
---
To use Azure OpenAI models, set the `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, and `OPENAI_API_VERSION` environment variables. You can obtain the Azure OpenAI API key from [Azure](https://azure.microsoft.com/).
## Usage
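A minimal sketch (the provider key, model name, and API version are assumptions; adjust them to your Azure deployment):

```python
import os
from mem0 import Memory

os.environ["AZURE_OPENAI_API_KEY"] = "your-api-key"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://your-resource.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2024-02-01"  # placeholder API version

# Provider key and model/deployment name are assumptions
config = {
    "llm": {
        "provider": "azure_openai",
        "config": {"model": "gpt-4o"}
    }
}
m = Memory.from_config(config)
```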

View File

@@ -1,3 +1,7 @@
---
title: Google AI
---
To use Google AI models, set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from [Google Maker Suite](https://makersuite.google.com/app/apikey).
## Usage
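A sketch of one possible configuration, routed through litellm (the provider key and model string are assumptions; check the config reference for the supported values):

```python
import os
from mem0 import Memory

os.environ["GOOGLE_API_KEY"] = "your-api-key"

# Provider key and model string are assumptions
config = {
    "llm": {
        "provider": "litellm",
        "config": {"model": "gemini/gemini-1.5-flash-latest"}
    }
}
m = Memory.from_config(config)
```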

View File

@@ -1,3 +1,7 @@
---
title: Mistral AI
---
To use Mistral's models, obtain the Mistral AI API key from their [console](https://console.mistral.ai/), then set the `MISTRAL_API_KEY` environment variable as shown in the example below.
## Usage
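A sketch of one possible configuration, routed through litellm (the provider key and model string are assumptions):

```python
import os
from mem0 import Memory

os.environ["MISTRAL_API_KEY"] = "your-api-key"

# Provider key and model string are assumptions
config = {
    "llm": {
        "provider": "litellm",
        "config": {"model": "mistral/mistral-small-latest"}
    }
}
m = Memory.from_config(config)
```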

View File

@@ -1,3 +1,7 @@
---
title: OpenAI
---
To use OpenAI models, set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
## Usage
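A minimal sketch (the model name and temperature are assumptions; OpenAI is also the default LLM when no config is given):

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"

# Model and temperature are assumptions; OpenAI is the default LLM if omitted
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o", "temperature": 0.1}
    }
}
m = Memory.from_config(config)
```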

View File

@@ -11,3 +11,16 @@ To use an LLM, you must provide a configuration to customize its usage. If no con
For a comprehensive list of available parameters for LLM configuration, please refer to [Config](./config).
To view all supported LLMs, visit the [Supported LLMs](./models).
<CardGroup cols={4}>
<Card title="OpenAI" href="/components/llms/models/openai"></Card>
<Card title="Ollama" href="/components/llms/models/ollama"></Card>
<Card title="Azure OpenAI" href="/components/llms/models/azure_openai"></Card>
<Card title="Anthropic" href="/components/llms/models/anthropic"></Card>
<Card title="Together" href="/components/llms/models/together"></Card>
<Card title="Groq" href="/components/llms/models/groq"></Card>
<Card title="Litellm" href="/components/llms/models/litellm"></Card>
<Card title="Mistral AI" href="/components/llms/models/mistral_ai"></Card>
<Card title="Google AI" href="/components/llms/models/google_ai"></Card>
<Card title="AWS Bedrock" href="/components/llms/models/aws_bedrock"></Card>
</CardGroup>
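As a rough illustration of the configuration shape described above (each provider documents its exact keys on its own page; this sketch assumes the Groq provider key and model name):

```python
from mem0 import Memory

# Swap the provider string for any of the cards above; keys are assumptions
config = {
    "llm": {
        "provider": "groq",
        "config": {"model": "llama3-70b-8192"}
    }
}
m = Memory.from_config(config)
```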

View File

@@ -4,14 +4,22 @@ title: Overview
Mem0 includes built-in support for various popular vector databases. Memory can use the database provided by the user, ensuring efficient storage for specific needs.
## Supported Vector Databases
See the list of supported vector databases below.
<CardGroup cols={3}>
<Card title="Qdrant" href="/components/vectordbs/dbs/qdrant"></Card>
<Card title="Chroma" href="/components/vectordbs/dbs/chroma"></Card>
<Card title="Pgvector" href="/components/vectordbs/dbs/pgvector"></Card>
</CardGroup>
## Usage
To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Qdrant` will be used as the vector database.
For a comprehensive list of available parameters for vector database configuration, please refer to [Config](./config).
To view all supported vector databases, visit the [Supported Vector Databases](./dbs).
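A minimal sketch of such a configuration (the Qdrant keys shown are assumptions; see [Config](./config) for the full list):

```python
from mem0 import Memory

# Qdrant is the default vector store; keys below are assumptions
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0",
            "host": "localhost",
            "port": 6333,
        }
    }
}
m = Memory.from_config(config)
```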
## Common issues
### Using a model with different dimensions
@@ -22,3 +30,4 @@ for example 768, you may encounter the error below:
`ValueError: shapes (0,1536) and (768,) not aligned: 1536 (dim 1) != 768 (dim 0)`
You can add `"embedding_model_dims": 768` to the `vector_store` config to overcome this issue.
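For example, a sketch (other keys as in your existing config; values are placeholders):

```python
# Sketch: match the vector store's dimensionality to your embedding model's output
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0",
            "embedding_model_dims": 768,  # e.g. for a 768-dimensional embedder
        }
    }
}
```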

View File

@@ -1,153 +0,0 @@
---
title: Autogen with Mem0
---
This guide demonstrates how to integrate AutoGen with Mem0 to create a conversational AI system with memory capabilities. The system includes a customer service bot and a manager agent, both leveraging Mem0 for context-aware interactions.
## Installation
First, install the required libraries:
```bash
pip install pyautogen mem0ai
```
## Setup
Import the necessary modules and set up your API keys:
```python
import os
from autogen import ConversableAgent
from mem0 import MemoryClient
os.environ["OPENAI_API_KEY"] = "your_openai_api_key_here"
os.environ["MEM0_API_KEY"] = "your_mem0_api_key_here"
```
## Initialize Agents and Memory
Create the conversational agent and Mem0 client:
```python
agent = ConversableAgent(
"chatbot",
llm_config={"config_list": [{"model": "gpt-4o", "api_key": os.environ.get("OPENAI_API_KEY")}]},
code_execution_config=False,
function_map=None,
human_input_mode="NEVER",
)
memory = MemoryClient(api_key=os.environ.get("MEM0_API_KEY"))
```
## Storing Conversations in Memory
You can store conversations in Mem0 for future reference:
<Accordion title="Conversation">
```python
conversation = [
{
"role": "assistant",
"content": "Hi, I'm Best Buy's chatbot!\n\nThanks for being a My Best Buy Total™ member.\n\nWhat can I help you with?"
},
{
"role": "user",
"content": "Seeing horizontal lines on our tv. TV model: Sony - 77\" Class BRAVIA XR A80K OLED 4K UHD Smart Google TV"
},
{
"role": "assistant",
"content": "Thanks for being a My Best Buy Total™ member. I can connect you to an expert immediately - just one perk of your membership!\n\nSelect the button below when you're ready to chat."
},
{
"role": "assistant",
"content": "Good evening, thank you for choosing Best Buy, Fnu. My name is Lovely. I hope you are doing well. I'm sorry to hear that you're seeing horizontal lines on your TV.\n\nI'm absolutely committed to exploring all possible ways to assist you to fix this issue.\n\nTo ensure that we are on the right account, may I please have your email address registered with your Best Buy account?"
},
{
"role": "user",
"content": "dd@gmail.com"
},
{
"role": "assistant",
"content": "Perfect! Thank you for providing all the details, surely you have made my job easier by doing this. I really appreciate it.\n\nI also want to take a moment to express our heartfelt appreciation for your trust and loyalty. Thank you for being an amazing customer of BestBuy Total.\n\nCould you please help me with the order number or product's details to check it quickly?\n\nSamsung - 49\" Odyssey OLED G9 (G95SC) DQHD 240Hz 0.03ms G-Sync Compatible Curved Smart Gaming Monitor - Silver - just to confirm this is the item, right?"
},
{
"role": "user",
"content": "Order number: 112217629"
},
{
"role": "assistant",
"content": "Superb! Thank you for confirmation.\n\nThank you for your patience. After exploring all possible solutions, I can help you to arrange a home repair appointment for your device. Our Geek Squad experts will visit your home to inspect and fix your device.\n\nIt's great that you have a protection plan - rest assured, we've got your back! As a valued Total member, you can avail this service at a minimal service fee. This fee, applicable to all repairs, covers the cost of diagnosing the issue and any small parts needed for the repair. It's part of our 24-month free protection plan.\n\nPlease click here to review the service fee and plan coverage details -\n\nhttps://www.bestbuy.com/site/best-buy-membership/best-buy-protection/pcmcat1608643232014.c?id=pcmcat1608643232014#jl-servicefees\n\nFnu - just to confirm shall I proceed to schedule the appointment?"
},
{
"role": "user",
"content": "Yes please"
}
]
```
</Accordion>
```python
memory.add(messages=conversation, user_id="customer_service_bot")
```
## Retrieving and Using Memory
When you need to answer a question, retrieve relevant memories and use them for context:
```python
data = "Which TV am I using?"
relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories])
prompt = f"""Answer the user question considering the memories.
Memories:
{flatten_relevant_memories}
Question: {data}
"""
reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
print(reply)
```
## Multi-Agent Conversation
You can create multiple agents for more complex interactions:
```python
manager = ConversableAgent(
"manager",
system_message="You are a manager who helps in resolving customer issues.",
llm_config={"config_list": [{"model": "gpt-4", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}]},
human_input_mode="NEVER"
)
customer_bot = ConversableAgent(
"customer_bot",
system_message="You are a customer service bot who gathers information on issues customers are facing.",
llm_config={"config_list": [{"model": "gpt-4", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}]},
human_input_mode="NEVER"
)
data = "What appointment is booked?"
relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories])
prompt = f"""
Context:
{flatten_relevant_memories}
Question: {data}
"""
result = manager.send(prompt, customer_bot, request_reply=True)
```
This setup allows for a manager agent to interact with a customer service bot, both having access to the shared memory from Mem0.
## Conclusion
By integrating AutoGen with Mem0, you can create sophisticated conversational AI systems that maintain context across interactions. This approach is particularly useful for customer service applications, where understanding user history and preferences is crucial for providing personalized assistance.

View File

@@ -1,123 +0,0 @@
---
title: LangGraph with Mem0
---
This guide demonstrates how to create a personalized Customer Support AI Agent using LangGraph and Mem0. The agent retains information across interactions, enabling a personalized and efficient support experience.
## Overview
The Customer Support AI Agent leverages LangGraph for conversational flow and Mem0 for memory retention, creating a more context-aware and personalized support experience.
## Setup
Install the necessary packages using pip:
```bash
pip install langgraph langchain-openai mem0ai
```
## Full Code Example
Below is the complete code to create and interact with a Customer Support AI Agent using LangGraph and Mem0:
```python
from typing import Annotated, TypedDict, List
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from mem0 import Memory
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
llm = ChatOpenAI(model="gpt-4o")
mem0 = Memory()
# Define the State
class State(TypedDict):
messages: Annotated[List[HumanMessage | AIMessage], add_messages]
mem0_user_id: str
graph = StateGraph(State)
def chatbot(state: State):
messages = state["messages"]
user_id = state["mem0_user_id"]
# Retrieve relevant memories
memories = mem0.search(messages[-1].content, user_id=user_id)
context = "Relevant information from previous conversations:\n"
for memory in memories:
context += f"- {memory['memory']}\n"
system_message = SystemMessage(content=f"""You are a helpful customer support assistant. Use the provided context to personalize your responses and remember user preferences and past interactions.
{context}""")
full_messages = [system_message] + messages
response = llm.invoke(full_messages)
# Store the interaction in Mem0
mem0.add(f"User: {messages[-1].content}\nAssistant: {response.content}", user_id=user_id)
return {"messages": [response]}
# Add nodes to the graph
graph.add_node("chatbot", chatbot)
# Add edge from START to chatbot
graph.add_edge(START, "chatbot")
# Add edge from chatbot back to itself
graph.add_edge("chatbot", "chatbot")
compiled_graph = graph.compile()
def run_conversation(user_input: str, mem0_user_id: str):
config = {"configurable": {"thread_id": mem0_user_id}}
state = {"messages": [HumanMessage(content=user_input)], "mem0_user_id": mem0_user_id}
for event in compiled_graph.stream(state, config):
for value in event.values():
if value.get("messages"):
print("Customer Support:", value["messages"][-1].content)
return # Exit after printing the response
if __name__ == "__main__":
print("Welcome to Customer Support! How can I assist you today?")
mem0_user_id = "test123"
while True:
user_input = input("You: ")
if user_input.lower() in ['quit', 'exit', 'bye']:
print("Customer Support: Thank you for contacting us. Have a great day!")
break
run_conversation(user_input, mem0_user_id)
```
## Key Components
1. **State Definition**: The `State` class defines the structure of the conversation state, including messages and user ID.
2. **Chatbot Node**: The `chatbot` function handles the core logic, including:
- Retrieving relevant memories
- Preparing context and system message
- Generating responses
- Storing interactions in Mem0
3. **Graph Setup**: The code sets up a `StateGraph` with the chatbot node and necessary edges.
4. **Conversation Runner**: The `run_conversation` function manages the flow of the conversation, processing user input and displaying responses.
## Usage
To use the Customer Support AI Agent:
1. Run the script.
2. Enter your queries when prompted.
3. Type 'quit', 'exit', or 'bye' to end the conversation.
## Key Points
- **Memory Integration**: Mem0 is used to store and retrieve relevant information from past interactions.
- **Personalization**: The agent uses past interactions to provide more contextual and personalized responses.
- **Flexible Architecture**: The LangGraph structure allows for easy expansion and modification of the conversation flow.
## Conclusion
This Customer Support AI Agent demonstrates the power of combining LangGraph for conversation management and Mem0 for memory retention. As the conversation progresses, the agent's responses become increasingly personalized, providing an improved support experience.

View File

@@ -14,19 +14,19 @@ With Mem0, you can create stateful LLM-based applications such as chatbots, virt
Here are some examples of how Mem0 can be integrated into various applications:
## Examples
<CardGroup cols={2}>
<Card title="Mem0 with Ollama" icon="square-1" href="/examples/mem0-with-ollama">
Run Mem0 locally with Ollama.
</Card>
<Card title="Personal AI Tutor" icon="square-2" href="/examples/personal-ai-tutor">
Create a Personalized AI Tutor that adapts to student progress and learning preferences.
</Card>
<Card title="Personal Travel Assistant" icon="square-3" href="/examples/personal-travel-assistant">
Build a Personalized AI Travel Assistant that understands your travel preferences and past itineraries.
</Card>
<Card title="Customer Support Agent" icon="square-4" href="/examples/customer-support-agent">
Develop a Personal AI Assistant that remembers user preferences, past interactions, and context to provide personalized and efficient assistance.
</Card>
</CardGroup>

View File

@@ -0,0 +1,130 @@
---
title: AutoGen
---
Build conversational AI agents with memory capabilities. This integration combines AutoGen for creating AI agents with Mem0 for memory management, enabling context-aware and personalized interactions.
## Overview
In this guide, we'll explore an example of creating a conversational AI system with memory:
- A customer service bot that can recall previous interactions and provide personalized responses.
## Setup and Configuration
Install necessary libraries:
```bash
pip install pyautogen mem0ai openai
```
First, we'll import the necessary libraries and set up our configurations.
<Note>Remember to get the Mem0 API key from [Mem0 Platform](https://app.mem0.ai).</Note>
```python
import os
from autogen import ConversableAgent
from mem0 import MemoryClient
# Configuration
OPENAI_API_KEY = 'sk-xxx' # Replace with your actual OpenAI API key
MEM0_API_KEY = 'your-mem0-key' # Replace with your actual Mem0 API key from https://app.mem0.ai
USER_ID = "customer_service_bot"
# Set up OpenAI API key
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
# Initialize Mem0 and AutoGen agents
memory_client = MemoryClient(api_key=MEM0_API_KEY)
agent = ConversableAgent(
"chatbot",
llm_config={"config_list": [{"model": "gpt-4", "api_key": OPENAI_API_KEY}]},
code_execution_config=False,
human_input_mode="NEVER",
)
```
## Storing Conversations in Memory
Add conversation history to Mem0 for future reference:
```python
conversation = [
{"role": "assistant", "content": "Hi, I'm Best Buy's chatbot! How can I help you?"},
{"role": "user", "content": "I'm seeing horizontal lines on my TV."},
{"role": "assistant", "content": "I'm sorry to hear that. Can you provide your TV model?"},
{"role": "user", "content": "It's a Sony - 77\" Class BRAVIA XR A80K OLED 4K UHD Smart Google TV"},
{"role": "assistant", "content": "Thank you for the information. Let's troubleshoot this issue..."}
]
memory_client.add(messages=conversation, user_id=USER_ID)
print("Conversation added to memory.")
```
## Retrieving and Using Memory
Create a function to get context-aware responses based on the user's question and previous interactions:
```python
def get_context_aware_response(question):
relevant_memories = memory_client.search(question, user_id=USER_ID)
context = "\n".join([m["memory"] for m in relevant_memories])
prompt = f"""Answer the user question considering the previous interactions:
Previous interactions:
{context}
Question: {question}
"""
reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
return reply
# Example usage
question = "What was the issue with my TV?"
answer = get_context_aware_response(question)
print("Context-aware answer:", answer)
```
## Multi-Agent Conversation
For more complex scenarios, you can create multiple agents:
```python
manager = ConversableAgent(
"manager",
system_message="You are a manager who helps in resolving complex customer issues.",
llm_config={"config_list": [{"model": "gpt-4", "api_key": OPENAI_API_KEY}]},
human_input_mode="NEVER"
)
def escalate_to_manager(question):
relevant_memories = memory_client.search(question, user_id=USER_ID)
context = "\n".join([m["memory"] for m in relevant_memories])
prompt = f"""
Context from previous interactions:
{context}
Customer question: {question}
As a manager, how would you address this issue?
"""
manager_response = manager.generate_reply(messages=[{"content": prompt, "role": "user"}])
return manager_response
# Example usage
complex_question = "I'm not satisfied with the troubleshooting steps. What else can be done?"
manager_answer = escalate_to_manager(complex_question)
print("Manager's response:", manager_answer)
```
## Conclusion
By integrating AutoGen with Mem0, you've created a conversational AI system with memory capabilities. This example demonstrates a customer service bot that can recall previous interactions and provide context-aware responses, with the ability to escalate complex issues to a manager agent.
This integration enables the creation of more intelligent and personalized AI agents for various applications, such as customer support, virtual assistants, and interactive chatbots.
## Help
In case of any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />

View File

@@ -0,0 +1,144 @@
---
title: LangGraph
---
Build a personalized Customer Support AI Agent using LangGraph for conversation flow and Mem0 for memory retention. This integration enables context-aware and efficient support experiences.
## Overview
In this guide, we'll create a Customer Support AI Agent that:
1. Uses LangGraph to manage conversation flow
2. Leverages Mem0 to store and retrieve relevant information from past interactions
3. Provides personalized responses based on user history
## Setup and Configuration
Install necessary libraries:
```bash
pip install langgraph langchain-openai mem0ai
```
Import required modules and set up configurations:
<Note>Remember to get the Mem0 API key from [Mem0 Platform](https://app.mem0.ai).</Note>
```python
from typing import Annotated, TypedDict, List
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from mem0 import MemoryClient
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
# Configuration
OPENAI_API_KEY = 'sk-xxx' # Replace with your actual OpenAI API key
MEM0_API_KEY = 'your-mem0-key' # Replace with your actual Mem0 API key
# Initialize LangChain and Mem0
llm = ChatOpenAI(model="gpt-4", api_key=OPENAI_API_KEY)
mem0 = MemoryClient(api_key=MEM0_API_KEY)  # hosted Mem0 Platform client
```
## Define State and Graph
Set up the conversation state and LangGraph structure:
```python
class State(TypedDict):
messages: Annotated[List[HumanMessage | AIMessage], add_messages]
mem0_user_id: str
graph = StateGraph(State)
```
## Create Chatbot Function
Define the core logic for the Customer Support AI Agent:
```python
def chatbot(state: State):
messages = state["messages"]
user_id = state["mem0_user_id"]
# Retrieve relevant memories
memories = mem0.search(messages[-1].content, user_id=user_id)
context = "Relevant information from previous conversations:\n"
for memory in memories:
context += f"- {memory['memory']}\n"
system_message = SystemMessage(content=f"""You are a helpful customer support assistant. Use the provided context to personalize your responses and remember user preferences and past interactions.
{context}""")
full_messages = [system_message] + messages
response = llm.invoke(full_messages)
# Store the interaction in Mem0
mem0.add(f"User: {messages[-1].content}\nAssistant: {response.content}", user_id=user_id)
return {"messages": [response]}
```
## Set Up Graph Structure
Configure the LangGraph with appropriate nodes and edges:
```python
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", "chatbot")
compiled_graph = graph.compile()
```
## Create Conversation Runner
Implement a function to manage the conversation flow:
```python
def run_conversation(user_input: str, mem0_user_id: str):
config = {"configurable": {"thread_id": mem0_user_id}}
state = {"messages": [HumanMessage(content=user_input)], "mem0_user_id": mem0_user_id}
for event in compiled_graph.stream(state, config):
for value in event.values():
if value.get("messages"):
print("Customer Support:", value["messages"][-1].content)
return
```
## Main Interaction Loop
Set up the main program loop for user interaction:
```python
if __name__ == "__main__":
print("Welcome to Customer Support! How can I assist you today?")
mem0_user_id = "customer_123" # You can generate or retrieve this based on your user management system
while True:
user_input = input("You: ")
if user_input.lower() in ['quit', 'exit', 'bye']:
print("Customer Support: Thank you for contacting us. Have a great day!")
break
run_conversation(user_input, mem0_user_id)
```
## Key Features
1. **Memory Integration**: Uses Mem0 to store and retrieve relevant information from past interactions.
2. **Personalization**: Provides context-aware responses based on user history.
3. **Flexible Architecture**: LangGraph structure allows for easy expansion of the conversation flow.
4. **Continuous Learning**: Each interaction is stored, improving future responses.
## Conclusion
By integrating LangGraph with Mem0, you can build a personalized Customer Support AI Agent that can maintain context across interactions and provide personalized assistance.
## Help
- For more details on LangGraph, visit the [LangChain documentation](https://python.langchain.com/docs/langgraph).
- For Mem0 documentation, refer to the [Mem0 Platform](https://app.mem0.ai/).
- If you need further assistance, please feel free to reach out to us through one of the following methods:
<Snippet file="get-help.mdx" />

View File

@@ -3,9 +3,9 @@
"name": "Mem0.ai",
"favicon": "/logo/favicon.png",
"colors": {
"primary": "#6c60f0",
"light": "#E6FFA2",
"dark": "#a3df02",
"background": {
"dark": "#0f1117",
"light": "#fff"
@@ -121,6 +121,14 @@
}
]
},
{
"group": "Integrations",
"pages": [
"integrations/multion",
"integrations/autogen",
"integrations/langgraph"
]
},
{
"group": "💡 Examples",
"pages": [
@@ -128,24 +136,9 @@
"examples/mem0-with-ollama",
"examples/personal-ai-tutor",
"examples/customer-support-agent",
"examples/personal-travel-assistant"
]
}
],
"footerSocials": {
"discord": "https://mem0.ai/discord",

View File

@@ -4,14 +4,14 @@ title: Overview
[Mem0](https://mem0.ai) (pronounced "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences, adapts to individual needs, and continuously improves over time, making it ideal for customer support chatbots, AI assistants, and autonomous systems.
Mem0 offers two powerful ways to leverage our technology: [our managed Platform](#mem0-platform-managed-solution) and [our open source solution](#mem0-open-source).
<CardGroup cols={2}>
<Card title="Mem0 Platform" icon="chart-simple" href="#mem0-platform-managed-solution">
Better, faster, fully managed, and hassle-free solution.
</Card>
<Card title="Mem0 Open Source" icon="code-branch" href="#mem0-open-source">
Self-hosted, fully customizable, and open source.
</Card>
</CardGroup>
@@ -314,8 +314,6 @@ all_memories = m.get_all()
- Advanced memory management: Efficient handling of long-term context
- Flexible deployment: Choose between managed platform or self-hosted solution
Discover all features →
## Need help?
<Snippet file="get-help.mdx"/>

View File

@@ -30,16 +30,11 @@ Mem0 Platform is a managed service that revolutionizes the way AI applications h
- Gaming AI
## Getting Started
Ready to supercharge your AI application with Mem0?
Check out our [Quickstart Guide](/platform/quickstart) to start using Mem0 quickly.
## Next Steps
- Explore our API Reference for detailed endpoint documentation.
- Join our [Discord](https://mem0.ai/discord) or [Slack](https://mem0.ai/slack) to connect with other developers and get support.
We're excited to see what you'll build with Mem0 Platform. Let's create smarter, more personalized AI experiences together!