Docs Update (#2591)

Prateek Chhikara
2025-04-29 08:15:25 -07:00
committed by GitHub
parent 6d13e83001
commit 393a4fd5a6
111 changed files with 2296 additions and 99 deletions

README.md
View File

@@ -1,24 +1,20 @@
 <p align="center">
 <a href="https://github.com/mem0ai/mem0">
 <img src="docs/images/banner-sm.png" width="800px" alt="Mem0 - The Memory Layer for Personalized AI">
 </a>
 </p>
 <p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
-<a href="https://trendshift.io/repositories/11194" target="_blank">
-<img src="https://trendshift.io/api/badge/repositories/11194" alt="mem0ai%2Fmem0 | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
+<a href="https://trendshift.io/repositories/11194" target="blank">
+<img src="https://trendshift.io/api/badge/repositories/11194" alt="mem0ai%2Fmem0 | Trendshift" width="250" height="55"/>
 </a>
-<a href="https://www.ycombinator.com/launches/LpA-mem0-open-source-memory-layer-for-ai-apps" target="_blank">
-<img alt="Launch YC: Mem0 - Open Source Memory Layer for AI Apps" src="https://www.ycombinator.com/launches/LpA-mem0-open-source-memory-layer-for-ai-apps/upvote_embed.svg"/>
-</a>
 </p>
 <p align="center">
 <a href="https://mem0.ai">Learn more</a>
 ·
 <a href="https://mem0.dev/DiG">Join Discord</a>
 ·
 <a href="https://mem0.dev/demo">Demo</a>
 </p>
<p align="center"> <p align="center">
@@ -26,55 +22,71 @@
<img src="https://dcbadge.vercel.app/api/server/6PzXDgEjG5?style=flat" alt="Mem0 Discord"> <img src="https://dcbadge.vercel.app/api/server/6PzXDgEjG5?style=flat" alt="Mem0 Discord">
</a> </a>
<a href="https://pepy.tech/project/mem0ai"> <a href="https://pepy.tech/project/mem0ai">
<img src="https://img.shields.io/pypi/dm/mem0ai" alt="Mem0 PyPI - Downloads" > <img src="https://img.shields.io/pypi/dm/mem0ai" alt="Mem0 PyPI - Downloads">
</a> </a>
<a href="https://github.com/mem0ai/mem0"> <a href="https://github.com/mem0ai/mem0">
<img src="https://img.shields.io/github/commit-activity/m/mem0ai/mem0?style=flat-square" alt="GitHub commit activity"> <img src="https://img.shields.io/github/commit-activity/m/mem0ai/mem0?style=flat-square" alt="GitHub commit activity">
</a> </a>
<a href="https://pypi.org/project/mem0ai" target="_blank"> <a href="https://pypi.org/project/mem0ai" target="blank">
<img src="https://img.shields.io/pypi/v/mem0ai?color=%2334D058&label=pypi%20package" alt="Package version"> <img src="https://img.shields.io/pypi/v/mem0ai?color=%2334D058&label=pypi%20package" alt="Package version">
</a> </a>
<a href="https://www.npmjs.com/package/mem0ai" target="_blank"> <a href="https://www.npmjs.com/package/mem0ai" target="blank">
<img src="https://img.shields.io/npm/v/mem0ai" alt="Npm package"> <img src="https://img.shields.io/npm/v/mem0ai" alt="Npm package">
</a> </a>
<a href="https://www.ycombinator.com/companies/mem0"> <a href="https://www.ycombinator.com/companies/mem0">
<img src="https://img.shields.io/badge/Y%20Combinator-S24-orange?style=flat-square" alt="Y Combinator S24"> <img src="https://img.shields.io/badge/Y%20Combinator-S24-orange?style=flat-square" alt="Y Combinator S24">
</a> </a>
</p> </p>
<p align="center">
<a href="https://mem0.ai/research"><strong>📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →</strong></a>
</p>
<p align="center">
<strong>⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens</strong>
</p>
## 🔥 Research Highlights
- **+26% Accuracy** over OpenAI Memory on the LOCOMO benchmark
- **91% Faster Responses** than full-context, ensuring low-latency at scale
- **90% Lower Token Usage** than full-context, cutting costs without compromise
- [Read the full paper](https://mem0.ai/research)
# Introduction # Introduction
[Mem0](https://mem0.ai) (pronounced as "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences, adapts to individual needs, and continuously improves over time, making it ideal for customer support chatbots, AI assistants, and autonomous systems. [Mem0](https://mem0.ai) ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over timeideal for customer support chatbots, AI assistants, and autonomous systems.
### Features & Use Cases ### Key Features & Use Cases
Core Capabilities: **Core Capabilities:**
- **Multi-Level Memory**: User, Session, and AI Agent memory retention with adaptive personalization - **Multi-Level Memory**: Seamlessly retains User, Session, and Agent state with adaptive personalization
- **Developer-Friendly**: Simple API integration, cross-platform consistency, and hassle-free managed service - **Developer-Friendly**: Intuitive API, cross-platform SDKs, and a fully managed service option
Applications: **Applications:**
- **AI Assistants**: Seamless conversations with context and personalization - **AI Assistants**: Consistent, context-rich conversations
- **Learning & Support**: Tailored content recommendations and context-aware customer assistance - **Customer Support**: Recall past tickets and user history for tailored help
- **Healthcare & Companions**: Patient history tracking and deeper relationship building - **Healthcare**: Track patient preferences and history for personalized care
- **Productivity & Gaming**: Streamlined workflows and adaptive environments based on user behavior - **Productivity & Gaming**: Adaptive workflows and environments based on user behavior
## Get Started ## 🚀 Quickstart Guide <a name="quickstart"></a>
Get started quickly with [Mem0 Platform](https://app.mem0.ai) - our fully managed solution that provides automatic updates, advanced analytics, enterprise security, and dedicated support. [Create a free account](https://app.mem0.ai) to begin. Choose between our hosted platform or self-hosted package:
For complete control, you can self-host Mem0 using our open-source package. See the [Quickstart guide](#quickstart) below to set up your own instance. ### Hosted Platform
## Quickstart Guide <a name="quickstart"></a> Get up and running in minutes with automatic updates, analytics, and enterprise security.
Install the Mem0 package via pip: 1. Sign up on [Mem0 Platform](https://app.mem0.ai)
2. Embed the memory layer via SDK or API keys
### Self-Hosted (Open Source)
Install the sdk via pip:
```bash ```bash
pip install mem0ai pip install mem0ai
``` ```
Install the Mem0 package via npm: Install sdk via npm:
```bash ```bash
npm install mem0ai npm install mem0ai
``` ```
@@ -96,7 +108,7 @@ def chat_with_memories(message: str, user_id: str = "default_user") -> str:
     # Retrieve relevant memories
     relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
     memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

     # Generate Assistant response
     system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
     messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
@@ -122,68 +134,21 @@ if __name__ == "__main__":
     main()
 ```
-See the example for [Node.js](https://docs.mem0.ai/examples/ai_companion_js).
-For more advanced usage and API documentation, visit our [documentation](https://docs.mem0.ai).
-> [!TIP]
-> For a hassle-free experience, try our [hosted platform](https://app.mem0.ai) with automatic updates and enterprise features.
-## Demos
-- Mem0 - ChatGPT with Memory: A personalized AI chat app powered by Mem0 that remembers your preferences, facts, and memories.
-[Mem0 - ChatGPT with Memory](https://github.com/user-attachments/assets/cebc4f8e-bdb9-4837-868d-13c5ab7bb433)
-Try live [demo](https://mem0.dev/demo/)
-<br/><br/>
-- AI Companion: Experience personalized conversations with an AI that remembers your preferences and past interactions
-[AI Companion Demo](https://github.com/user-attachments/assets/3fc72023-a72c-4593-8be0-3cee3ba744da)
-<br/><br/>
-- Enhance your AI interactions by storing memories across ChatGPT, Perplexity, and Claude using our browser extension. Get [chrome extension](https://chromewebstore.google.com/detail/mem0/onihkkbipkfeijkadecaafbgagkhglop?hl=en).
-[Chrome Extension Demo](https://github.com/user-attachments/assets/ca92e40b-c453-4ff6-b25e-739fb18a8650)
-<br/><br/>
-- Customer support bot using <strong>Langgraph and Mem0</strong>. Get the complete code from [here](https://docs.mem0.ai/integrations/langgraph)
-[Langgraph: Customer Bot](https://github.com/user-attachments/assets/ca6b482e-7f46-42c8-aa08-f88d1d93a5f4)
-<br/><br/>
-- Use Mem0 with CrewAI to get personalized results. Full example [here](https://docs.mem0.ai/integrations/crewai)
-[CrewAI Demo](https://github.com/user-attachments/assets/69172a79-ccb9-4340-91f1-caa7d2dd4213)
-## Documentation
-For detailed usage instructions and API reference, visit our [documentation](https://docs.mem0.ai). You'll find:
-- Complete API reference
-- Integration guides
-- Advanced configuration options
-- Best practices and examples
-- More details about:
-  - Open-source version
-  - [Hosted Mem0 Platform](https://app.mem0.ai)
-## Support
-Join our community for support and discussions. If you have any questions, feel free to reach out to us using one of the following methods:
-- [Join our Discord](https://mem0.dev/DiG)
-- [Follow us on Twitter](https://x.com/mem0ai)
-- [Email founders](mailto:founders@mem0.ai)
-## License
-This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
+For detailed integration steps, see the [Quickstart](https://docs.mem0.ai/quickstart) and [API Reference](https://docs.mem0.ai).
+## 🔗 Integrations & Demos
+- **ChatGPT with Memory**: Personalized chat powered by Mem0 ([Live Demo](https://mem0.dev/demo))
+- **Browser Extension**: Store memories across ChatGPT, Perplexity, and Claude ([Chrome Extension](https://chromewebstore.google.com/detail/mem0))
+- **Langgraph Support**: Build a customer bot with Langgraph + Mem0 ([Guide](https://docs.mem0.ai/integrations/langgraph))
+- **CrewAI Integration**: Tailor CrewAI outputs with Mem0 ([Example](https://docs.mem0.ai/integrations/crewai))
+## 📚 Documentation & Support
+- Full docs: https://docs.mem0.ai
+- Community: [Discord](https://mem0.dev/DiG) · [Twitter](https://x.com/mem0ai)
+- Contact: founders@mem0.ai
+## ⚖️ License
+Apache 2.0 — see the [LICENSE](LICENSE) file for details.
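Pieced together from the hunks above, the updated quickstart reduces to roughly the following flow. This is a sketch, not the verbatim README: the surrounding imports and the final call are reconstructed, `gpt-4o-mini` is assumed as the model, and `Memory()` itself needs `OPENAI_API_KEY` set for its default LLM and embedder.

```python
from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
memory = Memory()         # default config also uses OpenAI under the hood

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve the few memories most relevant to this turn
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Ground the assistant response in the retrieved memories
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Write the exchange back so future turns can recall it
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)
    return assistant_response

print(chat_with_memories("I'm vegetarian and allergic to nuts."))
```

Each turn searches stored memories first, answers with that context, then stores the new exchange, which is the whole loop the README's longer example builds on.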

View File

@@ -0,0 +1,3 @@
+<Note type="info">
+📢 Announcing our research paper: Mem0 achieves <strong>26%</strong> higher accuracy than OpenAI Memory, <strong>91%</strong> lower latency, and <strong>90%</strong> token savings! [Read the paper](https://mem0.ai/research) to learn how we're revolutionizing AI agent memory.
+</Note>

View File

@@ -4,6 +4,8 @@ icon: "info"
iconType: "solid" iconType: "solid"
--- ---
<Snippet file="paper-release.mdx" />
Mem0 provides a powerful set of APIs that allow you to integrate advanced memory management capabilities into your applications. Our APIs are designed to be intuitive, efficient, and scalable, enabling you to create, retrieve, update, and delete memories across various entities such as users, agents, apps, and runs. Mem0 provides a powerful set of APIs that allow you to integrate advanced memory management capabilities into your applications. Our APIs are designed to be intuitive, efficient, and scalable, enabling you to create, retrieve, update, and delete memories across various entities such as users, agents, apps, and runs.
## Key Features ## Key Features

View File

@@ -3,6 +3,8 @@ title: "Product Updates"
mode: "wide" mode: "wide"
--- ---
<Snippet file="paper-release.mdx" />
<Tabs> <Tabs>
<Tab title="Python"> <Tab title="Python">

View File

@@ -4,6 +4,8 @@ icon: "gear"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Config in mem0 is a dictionary that specifies the settings for your embedding models. It allows you to customize the behavior and connection details of your chosen embedder.
 ## How to define configurations?

View File

@@ -4,6 +4,8 @@ icon: "info"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 offers support for various embedding models, allowing users to choose the one that best suits their needs.
 ## Supported Embedders

View File

@@ -4,6 +4,8 @@ icon: "gear"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## How to define configurations?
 <Tabs>

View File

@@ -2,6 +2,8 @@
 title: Anthropic
 ---
+<Snippet file="paper-release.mdx" />
 To use Anthropic's models, please set the `ANTHROPIC_API_KEY`, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: AWS Bedrock
 ---
+<Snippet file="paper-release.mdx" />
 ### Setup
 - Before using the AWS Bedrock LLM, make sure you have the appropriate model access from the [Bedrock Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
 - You will also need to authenticate the `boto3` client using one of the methods in the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).

View File

@@ -2,6 +2,8 @@
 title: Azure OpenAI
 ---
+<Snippet file="paper-release.mdx" />
 <Note> Mem0 Now Supports Azure OpenAI Models in TypeScript SDK </Note>
 To use Azure OpenAI models, you have to set the `LLM_AZURE_OPENAI_API_KEY`, `LLM_AZURE_ENDPOINT`, `LLM_AZURE_DEPLOYMENT` and `LLM_AZURE_API_VERSION` environment variables. You can obtain the Azure API key from [Azure](https://azure.microsoft.com/).

View File

@@ -2,6 +2,8 @@
 title: DeepSeek
 ---
+<Snippet file="paper-release.mdx" />
 To use DeepSeek LLM models, you have to set the `DEEPSEEK_API_KEY` environment variable. You can also optionally set `DEEPSEEK_API_BASE` if you need to use a different API endpoint (defaults to "https://api.deepseek.com").
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: Gemini
 ---
+<Snippet file="paper-release.mdx" />
 To use Gemini models, you have to set the `GEMINI_API_KEY` environment variable. You can obtain the Gemini API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: Google AI
 ---
+<Snippet file="paper-release.mdx" />
 To use Google AI models, you have to set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from [Google Maker Suite](https://makersuite.google.com/app/apikey).
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: Groq
 ---
+<Snippet file="paper-release.mdx" />
 [Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
 In order to use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key. Set it as the `GROQ_API_KEY` environment variable to use the model as shown in the example below.

View File

@@ -2,6 +2,8 @@
 title: LangChain
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 supports LangChain as a provider to access a wide range of LLM models. LangChain is a framework for developing applications powered by language models, making it easy to integrate various LLM providers through a consistent interface.
 For a complete list of available chat models supported by LangChain, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).

View File

@@ -1,3 +1,5 @@
+<Snippet file="paper-release.mdx" />
 [Litellm](https://litellm.vercel.app/docs/) is compatible with over 100 large language models (LLMs), all using a standardized input/output format. You can explore the [available models](https://litellm.vercel.app/docs/providers) to use with Litellm. Ensure you set the `API_KEY` for the model you choose to use.
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: LM Studio
 ---
+<Snippet file="paper-release.mdx" />
 To use LM Studio with Mem0, you'll need to have LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs with an OpenAI-compatible API.
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: Mistral AI
 ---
+<Snippet file="paper-release.mdx" />
 To use Mistral's models, please obtain the Mistral AI API key from their [console](https://console.mistral.ai/). Set the `MISTRAL_API_KEY` environment variable to use the model as shown in the example below.
 ## Usage

View File

@@ -1,3 +1,5 @@
+<Snippet file="paper-release.mdx" />
 You can use LLMs from Ollama to run Mem0 locally. These [models](https://ollama.com/search?c=tools) support tool calling.
 ## Usage
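The Usage section itself is elided by this diff; as a rough sketch of the fully local setup it covers, something like the following should work (the provider keys follow the OSS config schema, and both model names are placeholders you would swap for whatever you have pulled in Ollama):

```python
from mem0 import Memory

# Fully local: Ollama serves both the LLM and the embedder.
config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3.1:8b", "temperature": 0.1},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
}

m = Memory.from_config(config)
m.add("I prefer window seats on long flights", user_id="alice")
print(m.search("seating preference", user_id="alice"))
```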

View File

@@ -2,6 +2,8 @@
 title: OpenAI
 ---
+<Snippet file="paper-release.mdx" />
 To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
 ## Usage
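For reference, the elided Usage section boils down to something like this sketch (model name and temperature are illustrative, and the key is a placeholder):

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # placeholder

config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini", "temperature": 0.2},
    }
}

m = Memory.from_config(config)
m.add("Alice enjoys hiking on weekends", user_id="alice")
```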

View File

@@ -1,3 +1,5 @@
+<Snippet file="paper-release.mdx" />
 To use TogetherAI LLM models, you have to set the `TOGETHER_API_KEY` environment variable. You can obtain the TogetherAI API key from their [Account settings page](https://api.together.xyz/settings/api-keys).
 ## Usage

View File

@@ -2,6 +2,8 @@
 title: xAI
 ---
+<Snippet file="paper-release.mdx" />
 [xAI](https://x.ai/) is a new AI company founded by Elon Musk that develops large language models, including Grok. Grok is trained on real-time data from X (formerly Twitter) and aims to provide accurate, up-to-date responses with a touch of wit and humor.
 In order to use LLMs from xAI, go to their [platform](https://console.x.ai) and get the API key. Set it as the `XAI_API_KEY` environment variable to use the model as shown in the example below.

View File

@@ -4,6 +4,8 @@ icon: "info"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 includes built-in support for various popular large language models. Memory can utilize the LLM provided by the user, ensuring efficient use for specific needs.
 ## Usage

View File

@@ -4,6 +4,8 @@ icon: "gear"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## How to define configurations?
 The `config` is defined as an object with two main keys:

View File

@@ -4,6 +4,8 @@ icon: "info"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 includes built-in support for various popular databases. Memory can utilize the database provided by the user, ensuring efficient use for specific needs.
 ## Supported Vector Databases

View File

@@ -3,6 +3,8 @@ title: Development
icon: "code" icon: "code"
--- ---
<Snippet file="paper-release.mdx" />
# Development Contributions # Development Contributions
We strive to make contributions **easy, collaborative, and enjoyable**. Follow the steps below to ensure a smooth contribution process. We strive to make contributions **easy, collaborative, and enjoyable**. Follow the steps below to ensure a smooth contribution process.

View File

@@ -3,6 +3,8 @@ title: Documentation
icon: "book" icon: "book"
--- ---
<Snippet file="paper-release.mdx" />
# Documentation Contributions # Documentation Contributions
## 📌 Prerequisites ## 📌 Prerequisites

View File

@@ -5,6 +5,8 @@ icon: "gear"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 provides two core operations for managing memories in AI applications: adding new memories and searching existing ones. This guide covers how these operations work and how to use them effectively in your application.

View File

@@ -4,6 +4,9 @@ description: Understanding different types of memory in AI Applications
icon: "memory" icon: "memory"
iconType: "solid" iconType: "solid"
--- ---
<Snippet file="paper-release.mdx" />
To build useful AI applications, we need to understand how different memory systems work together. This guide explores the fundamental types of memory in AI systems and shows how Mem0 implements these concepts. To build useful AI applications, we need to understand how different memory systems work together. This guide explores the fundamental types of memory in AI systems and shows how Mem0 implements these concepts.
## Why Memory Matters ## Why Memory Matters

View File

@@ -3,6 +3,8 @@ title: Overview
 description: How to use mem0 in your existing applications?
 ---
+<Snippet file="paper-release.mdx" />
 With Mem0, you can create stateful LLM-based applications such as chatbots, virtual assistants, or AI agents. Mem0 enhances your applications by providing a memory layer that makes responses:

View File

@@ -2,6 +2,8 @@
 title: AI Companion
 ---
+<Snippet file="paper-release.mdx" />
 You can create a personalised AI Companion using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: AI Companion in Node.js
 ---
+<Snippet file="paper-release.mdx" />
 You can create a personalised AI Companion using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
 ## Overview

View File

@@ -1,5 +1,7 @@
 # Mem0 Chrome Extension
+<Snippet file="paper-release.mdx" />
 Enhance your AI interactions with **Mem0**, a Chrome extension that introduces a universal memory layer across platforms like `ChatGPT`, `Claude`, and `Perplexity`. Mem0 ensures seamless context sharing, making your AI experiences more personalized and efficient.
 <Note>

View File

@@ -2,6 +2,8 @@
 title: Customer Support AI Agent
 ---
+<Snippet file="paper-release.mdx" />
 You can create a personalized Customer Support AI Agent using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
 ## Overview

View File

@@ -1,6 +1,7 @@
 ---
 title: Document Editing with Mem0
 ---
+<Snippet file="paper-release.mdx" />
 This guide demonstrates how to leverage **Mem0** to edit documents efficiently, ensuring they align with your unique writing style and preferences.

View File

@@ -2,6 +2,8 @@
 title: Email Processing with Mem0
 ---
+<Snippet file="paper-release.mdx" />
 This guide demonstrates how to build an intelligent email processing system using Mem0's memory capabilities. You'll learn how to store, categorize, retrieve, and analyze emails to create a smart email management solution.
 ## Overview

View File

@@ -1,6 +1,8 @@
 ---
 title: LlamaIndex ReAct Agent
 ---
+<Snippet file="paper-release.mdx" />
 Create a ReAct Agent with LlamaIndex which uses Mem0 as the memory store.
 ### Overview

View File

@@ -2,6 +2,8 @@
 title: Mem0 as an Agentic Tool
 ---
+<Snippet file="paper-release.mdx" />
 Integrate Mem0's memory capabilities with OpenAI's Agents SDK to create AI agents with persistent memory.
 You can create agents that remember past conversations and use that context to provide better responses.

View File

@@ -2,6 +2,9 @@
 title: Mem0 Demo
 ---
+<Snippet file="paper-release.mdx" />
 You can create a personalized AI Companion using Mem0. This guide will walk you through the necessary steps and provide the complete setup instructions to get you started.
 <video

View File

@@ -2,6 +2,8 @@
 title: Mem0 with Mastra
 ---
+<Snippet file="paper-release.mdx" />
 In this example you'll learn how to use Mem0 to add long-term memory capabilities to [Mastra's agent](https://mastra.ai/) via tool-use.
 This memory integration can work alongside Mastra's [agent memory features](https://mastra.ai/docs/agents/01-agent-memory).

View File

@@ -3,6 +3,8 @@ title: 'Mem0 with OpenAI Agents SDK for Voice'
 description: 'Integrate memory capabilities into your voice agents using Mem0 and OpenAI Agents SDK'
 ---
+<Snippet file="paper-release.mdx" />
 # Building Voice Agents with Memory using Mem0 and OpenAI Agents SDK
 This guide demonstrates how to combine OpenAI's Agents SDK for voice applications with Mem0's memory capabilities to create a voice assistant that remembers user preferences and past interactions.

View File

@@ -2,6 +2,8 @@
 title: Mem0 with Ollama
 ---
+<Snippet file="paper-release.mdx" />
 ## Running Mem0 Locally with Ollama
 Mem0 can be utilized entirely locally by leveraging Ollama for both the embedding model and the language model (LLM). This guide will walk you through the necessary steps and provide the complete code to get you started.

View File

@@ -2,6 +2,8 @@
 title: Multimodal Demo with Mem0
 ---
+<Snippet file="paper-release.mdx" />
 Enhance your AI interactions with **Mem0**'s multimodal capabilities. Mem0 now supports image understanding, allowing for richer context and more natural interactions across supported AI platforms.
 > 🎉 Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)

View File

@@ -2,6 +2,8 @@
 title: OpenAI Inbuilt Tools
 ---
+<Snippet file="paper-release.mdx" />
 Integrate Mem0's memory capabilities with OpenAI's Inbuilt Tools to create AI agents with persistent memory.
 ## Getting Started

View File

@@ -2,6 +2,8 @@
 title: Personalized AI Tutor
 ---
+<Snippet file="paper-release.mdx" />
 You can create a personalized AI Tutor using Mem0. This guide will walk you through the necessary steps and provide the complete code to get you started.
 ## Overview

View File

@@ -1,6 +1,9 @@
 ---
 title: Personal AI Travel Assistant
 ---
+<Snippet file="paper-release.mdx" />
 Create a personalized AI Travel Assistant using Mem0. This guide provides step-by-step instructions and the complete code to get you started.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: Personalized Deep Research
 ---
+<Snippet file="paper-release.mdx" />
 Deep Research is an intelligent agent that synthesizes large amounts of online data and completes complex research tasks, customized to your unique preferences and insights. Built on Mem0's technology, it enhances AI-driven online exploration with personalized memories.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: YouTube Assistant Extension
 ---
+<Snippet file="paper-release.mdx" />
 Enhance your YouTube experience with Mem0's **YouTube Assistant**, a Chrome extension that brings AI-powered chat directly to your YouTube videos. Get instant, personalized answers about video content while leveraging your own knowledge and memories - all without leaving the page.
 ## Features

View File

@@ -4,6 +4,7 @@ icon: "question"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 <AccordionGroup>
 <Accordion title="How does Mem0 work?">

View File

@@ -4,6 +4,8 @@ icon: "wrench"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Core features
 - **User, Session, and AI Agent Memory**: Retains information across sessions and interactions for users and AI agents, ensuring continuity and context.

View File

@@ -4,6 +4,8 @@ icon: "magnifying-glass"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0's **Advanced Retrieval** feature delivers superior search results by leveraging state-of-the-art search algorithms. Beyond the default search functionality, Mem0 offers the following advanced retrieval modes:
 1. **Keyword Search**
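These modes are toggled per search call. A hedged sketch follows; the flag names `keyword_search` and `rerank` are assumptions about the platform client's parameters, not something this diff confirms:

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

# Assumed flags: each one opts this search call into an advanced retrieval mode.
results = client.search(
    "what are my dietary restrictions?",
    user_id="alice",
    keyword_search=True,  # lexical matching on top of semantic search
    rerank=True,          # re-order candidates with a stronger relevance model
)
print(results)
```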

View File

@@ -5,6 +5,7 @@ icon: "bolt"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 The `AsyncMemoryClient` is an asynchronous client for interacting with the Mem0 API. It provides similar functionality to the synchronous `MemoryClient` but allows for non-blocking operations, which can be beneficial in applications that require high concurrency.
 ## Initialization
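A minimal sketch of the non-blocking flow, assuming `AsyncMemoryClient` mirrors the synchronous client's `add`/`search` signatures:

```python
import asyncio
from mem0 import AsyncMemoryClient

async def main():
    client = AsyncMemoryClient(api_key="your-api-key")  # placeholder key
    # Both calls yield to the event loop instead of blocking the thread.
    await client.add([{"role": "user", "content": "I love tennis"}], user_id="alice")
    print(await client.search("what sports do I like?", user_id="alice"))

asyncio.run(main())
```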

View File

@@ -4,6 +4,8 @@ icon: "square-plus"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 now supports a contextual add version (v2). To use it, set `version="v2"` during the add call. The default version, v1, is now deprecated; we recommend migrating to `v2` for new applications.
 ## Key Differences Between v1 and v2
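In practice the opt-in is a single argument on the add call, roughly like this sketch (the message content is illustrative):

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

messages = [
    {"role": "user", "content": "I moved to Berlin last month"},
    {"role": "assistant", "content": "Noted! How are you settling in?"},
]

# version="v2" selects the contextual add; v1 remains the (deprecated) default.
client.add(messages, user_id="alice", version="v2")
```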

View File

@@ -5,6 +5,8 @@ icon: "tags"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## How to set custom categories?
 You can now create custom categories tailored to your specific needs, instead of using the default categories such as travel, sports, music, and more (see [default categories](#default-categories) below). **When custom categories are provided, they will override the default categories.**
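A hedged sketch of what this looks like, assuming a project-settings call such as `update_project` (the method name and category entries are assumptions for illustration):

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

# Assumed call: once set, these categories replace the defaults entirely.
client.update_project(
    custom_categories=[
        {"diet": "Food preferences, allergies, and dietary restrictions"},
        {"fitness": "Workout routines, sports, and fitness goals"},
    ]
)
```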

View File

@@ -5,6 +5,8 @@ icon: "pencil"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Introduction to Custom Fact Extraction Prompt
 A custom fact extraction prompt allows you to tailor the behavior of your Mem0 instance to specific use cases or domains.

View File

@@ -5,6 +5,8 @@ icon: "pencil"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Introduction to Custom Instructions
 Custom instructions allow you to define specific guidelines for your project. This feature helps ensure consistency and provides clear direction for handling project-specific requirements.

View File

@@ -3,9 +3,12 @@ title: Custom Update Memory Prompt
icon: "pencil" icon: "pencil"
iconType: "solid" iconType: "solid"
--- ---
<Snippet file="paper-release.mdx" />
Update memory prompt is a prompt used to determine the action to be performed on the memory. Update memory prompt is a prompt used to determine the action to be performed on the memory.
By customizing this prompt, you can control how the memory is updated. By customizing this prompt, you can control how the memory is updated.
## Introduction ## Introduction
Mem0 memory system compares the newly retrieved facts with the existing memory and determines the action to be performed on the memory. Mem0 memory system compares the newly retrieved facts with the existing memory and determines the action to be performed on the memory.

View File

@@ -5,6 +5,8 @@ icon: "arrow-right"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## How to use Direct Import?
 The Direct Import feature allows users to skip the memory deduction phase and directly input pre-defined memories into the system for storage and retrieval.
 To enable this feature, you need to set the `infer` parameter to `False` in the `add` method.
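So a direct import is just an ordinary add with inference turned off; a minimal sketch:

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

# infer=False skips fact extraction; the content is stored verbatim.
client.add(
    [{"role": "user", "content": "Alice is vegetarian and allergic to nuts"}],
    user_id="alice",
    infer=False,
)
```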

View File

@@ -5,6 +5,8 @@ icon: "clock"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Benefits of Memory Expiration
 Setting expiration dates for memories offers several advantages:
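A hedged sketch of setting an expiration, assuming an `expiration_date` argument on `add` (this diff does not show the parameter, so treat the name and date format as assumptions):

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

# Assumed argument: the memory stops surfacing after the given date.
client.add(
    [{"role": "user", "content": "My gym membership runs until the end of the year"}],
    user_id="alice",
    expiration_date="2025-12-31",
)
```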

View File

@@ -4,6 +4,8 @@ icon: "thumbs-up"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0's **Feedback Mechanism** allows you to provide feedback on the memories generated by your application. This feedback is used to improve the accuracy of the memories and the search results.
 ## How it works

View File

@@ -5,6 +5,8 @@ iconType: "solid"
description: "Enable graph-based memory retrieval for more contextually relevant results" description: "Enable graph-based memory retrieval for more contextually relevant results"
--- ---
<Snippet file="paper-release.mdx" />
## Overview ## Overview
Graph Memory enhances memory pipeline by creating relationships between entities in your data. It builds a network of interconnected information for more contextually relevant search results. Graph Memory enhances memory pipeline by creating relationships between entities in your data. It builds a network of interconnected information for more contextually relevant search results.

View File

@@ -5,6 +5,8 @@ icon: "file-export"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Overview
 The Memory Export feature allows you to create structured exports of memories using customizable Pydantic schemas. This process enables you to transform your stored memories into specific data formats that match your needs. You can apply various filters to narrow down which memories to export and define exactly how the data should be structured.

View File

@@ -5,6 +5,8 @@ icon: "image"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 extends its capabilities beyond text by supporting multimodal data, including images and documents. With this feature, users can seamlessly integrate visual and document content into their interactions—allowing Mem0 to extract relevant information from various media types and enrich the memory system.
 ## How It Works

View File

@@ -4,6 +4,8 @@ icon: "code"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 can be easily integrated into chat applications to enhance conversational agents with structured memory. Mem0's APIs are designed to be compatible with OpenAI's, with the goal of making it easy to leverage Mem0 in applications you may have already built.
 If you have a `Mem0 API key`, you can use it to initialize the client. Alternatively, you can initialize Mem0 without an API key if you're using it locally.
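The two initialization paths mentioned here look roughly like this sketch (class names follow the Python SDK; the page itself may wrap them differently):

```python
from mem0 import Memory, MemoryClient

# Hosted: authenticate with a Mem0 API key.
client = MemoryClient(api_key="your-mem0-api-key")  # placeholder key

# Local: the open-source package needs no API key.
memory = Memory()
```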

View File

@@ -4,6 +4,8 @@ icon: "info"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 Learn about the key features and capabilities that make Mem0 a powerful platform for memory management and retrieval.
 ## Core Features

View File

@@ -5,6 +5,8 @@ icon: "filter"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Benefits of Memory Customization
 Memory customization offers several key benefits:

View File

@@ -5,6 +5,8 @@ icon: "clock"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Overview
 The Memory Timestamps feature allows you to specify when a memory was created, regardless of when it's actually added to the system. This powerful capability enables you to:
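A hedged sketch, assuming a Unix-seconds `timestamp` argument on `add` (the parameter name and unit are assumptions; the elided body of this page presumably documents the exact form):

```python
import time
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")  # placeholder key

# Assumed argument: record the memory as created one week ago.
one_week_ago = int(time.time()) - 7 * 24 * 60 * 60
client.add(
    [{"role": "user", "content": "Visited the dentist"}],
    user_id="alice",
    timestamp=one_week_ago,
)
```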

View File

@@ -5,6 +5,8 @@ icon: "webhook"
 iconType: "solid"
 ---
+<Snippet file="paper-release.mdx" />
 ## Overview
 Webhooks enable real-time notifications for memory events in your Mem0 project. Webhooks are configured at the project level, meaning each webhook is tied to a specific project and receives events solely from that project. You can configure webhooks to send HTTP POST requests to your specified URLs whenever memories are created, updated, or deleted.

View File

@@ -3,6 +3,8 @@ title: Overview
 description: How to integrate Mem0 into other frameworks
 ---
+<Snippet file="paper-release.mdx" />
 Mem0 seamlessly integrates with popular AI frameworks and tools to enhance your LLM-based applications with persistent memory capabilities. By integrating Mem0, your applications benefit from:
 - Enhanced context management across multiple frameworks

View File

@@ -1,6 +1,7 @@
 ---
 title: Agno
 ---
+<Snippet file="paper-release.mdx" />
 Integrate [**Mem0**](https://github.com/mem0ai/mem0) with [Agno](https://github.com/agno-agi/agno), a Python framework for building autonomous agents. This integration enables Agno agents to access persistent memory across conversations, enhancing context retention and personalization.

View File

@@ -1,5 +1,7 @@
 Build conversational AI agents with memory capabilities. This integration combines AutoGen for creating AI agents with Mem0 for memory management, enabling context-aware and personalized interactions.
+<Snippet file="paper-release.mdx" />
 ## Overview
 In this guide, we'll explore an example of creating a conversational AI system with memory:

View File

@@ -2,6 +2,8 @@
 title: CrewAI
 ---
+<Snippet file="paper-release.mdx" />
 Build an AI system that combines CrewAI's agent-based architecture with Mem0's memory capabilities. This integration enables persistent memory across agent interactions and personalized task execution based on user history.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: Dify
 ---
+<Snippet file="paper-release.mdx" />
 # Integrating Mem0 with Dify AI
 Mem0 brings a robust memory layer to Dify AI, empowering your AI agents with persistent conversation storage and retrieval capabilities. With Mem0, your Dify applications gain the ability to recall past interactions and maintain context, ensuring more natural and insightful conversations.

View File

@@ -2,6 +2,8 @@
 title: ElevenLabs
 ---
+<Snippet file="paper-release.mdx" />
 Create voice-based conversational AI agents with memory capabilities by integrating ElevenLabs and Mem0. This integration enables persistent, context-aware voice interactions that remember past conversations.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: Flowise
 ---
+<Snippet file="paper-release.mdx" />
 The [**Mem0 Memory**](https://github.com/mem0ai/mem0) integration with [Flowise](https://github.com/FlowiseAI/Flowise) enables persistent memory capabilities for your AI chatflows. [Flowise](https://flowiseai.com/) is an open-source low-code tool for developers to build customized LLM orchestration flows & AI agents using a drag & drop interface.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: Keywords AI
 ---
+<Snippet file="paper-release.mdx" />
 Build AI applications with persistent memory and comprehensive LLM observability by integrating Mem0 with Keywords AI.
 ## Overview

View File

@@ -3,6 +3,8 @@ title: Langchain Tools
 description: 'Integrate Mem0 with LangChain tools to enable AI agents to store, search, and manage memories through structured interfaces'
 ---
+<Snippet file="paper-release.mdx" />
 ## Overview
 Mem0 provides a suite of tools for storing, searching, and retrieving memories, enabling agents to maintain context and learn from past interactions. The tools are built as Langchain tools, making them easily integrable with any AI agent implementation.

View File

@@ -2,6 +2,8 @@
 title: Langchain
 ---
+<Snippet file="paper-release.mdx" />
 Build a personalized Travel Agent AI using LangChain for conversation flow and Mem0 for memory retention. This integration enables context-aware and efficient travel planning experiences.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: LangGraph
 ---
+<Snippet file="paper-release.mdx" />
 Build a personalized Customer Support AI Agent using LangGraph for conversation flow and Mem0 for memory retention. This integration enables context-aware and efficient support experiences.
 ## Overview

View File

@@ -2,6 +2,8 @@
 title: Livekit
 ---
+<Snippet file="paper-release.mdx" />
 This guide demonstrates how to create a memory-enabled voice assistant using LiveKit, Deepgram, OpenAI, and Mem0, focusing on creating an intelligent, context-aware travel planning agent.
 ## Prerequisites


@@ -2,6 +2,8 @@
title: LlamaIndex
---
<Snippet file="paper-release.mdx" />
LlamaIndex supports Mem0 as a [memory store](https://llamahub.ai/l/memory/llama-index-memory-mem0). In this guide, we'll show you how to use it.
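As a quick orientation, a minimal sketch of wiring Mem0 into LlamaIndex might look like the following; it assumes the `llama-index-memory-mem0` package, and the exact constructor signature is an assumption based on the integration's typical usage.

```python
from llama_index.memory.mem0 import Mem0Memory

# Scope memories to a particular user via the context dict (illustrative values)
memory = Mem0Memory.from_client(
    context={"user_id": "alice"},
    api_key="your-mem0-api-key",
)
# `memory` can then be supplied to a LlamaIndex agent or chat engine
# through its `memory=` parameter.
```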
<Note type="info">


@@ -2,6 +2,8 @@
title: MCP Server
---
<Snippet file="paper-release.mdx" />
## Integrating mem0 as an MCP Server in Cursor
[mem0](https://github.com/mem0ai/mem0-mcp) is a powerful tool designed to enhance AI-driven workflows, particularly in code generation and contextual memory. In this guide, we'll walk through integrating mem0 as an **MCP (Model Context Protocol) server** within [Cursor](https://cursor.sh/), an AI-powered coding editor.


@@ -2,6 +2,8 @@
title: MultiOn
---
<Snippet file="paper-release.mdx" />
Build a personal browser agent that remembers user preferences and automates web tasks. It integrates Mem0 for memory management with MultiOn for executing browser actions, enabling personalized and efficient web interactions.
## Overview


@@ -3,6 +3,8 @@ title: 'Pipecat'
description: 'Integrate Mem0 with Pipecat for conversational memory in AI agents'
---
<Snippet file="paper-release.mdx" />
# Pipecat Integration
Mem0 seamlessly integrates with [Pipecat](https://pipecat.ai), providing long-term memory capabilities for conversational AI agents. This integration allows your Pipecat-powered applications to remember past conversations and provide personalized responses based on user history.


@@ -2,6 +2,8 @@
title: Vercel AI SDK
---
<Snippet file="paper-release.mdx" />
The [**Mem0 AI SDK Provider**](https://www.npmjs.com/package/@mem0/vercel-ai-provider) is a library developed by **Mem0** to integrate with the Vercel AI SDK. This library brings enhanced AI interaction capabilities to your applications by introducing persistent memory functionality.
<Note type="info">


@@ -5,6 +5,8 @@ icon: "list-check"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
Graph Memory is a powerful feature that allows users to create and utilize complex relationships between pieces of information.
## Graph Memory supports the following features:


@@ -5,6 +5,8 @@ icon: "database"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
Mem0 now supports **Graph Memory**.
With Graph Memory, users can now create and utilize complex relationships between pieces of information, allowing for more nuanced and context-aware responses.
This integration enables users to leverage the strengths of both vector-based and graph-based approaches, resulting in more accurate and comprehensive information retrieval and generation.
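A minimal sketch of enabling Graph Memory in the open-source SDK follows; it assumes a running Neo4j instance, and the connection details are placeholders.

```python
from mem0 import Memory

# Vector store defaults stay in place; adding a graph_store enables Graph Memory
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance-url",
            "username": "neo4j",
            "password": "your-password",
        },
    },
}

m = Memory.from_config(config_dict=config)
m.add("Alice's sister Priya lives in Seattle.", user_id="alice")
```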


@@ -4,6 +4,8 @@ icon: "image"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
Mem0 extends its capabilities beyond text by supporting multimodal data, including images. Users can seamlessly integrate images into their interactions, allowing Mem0 to extract pertinent information from visual content and enrich the memory system.
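For example, an image can be passed alongside text in the messages list. The sketch below is illustrative only: the message schema is an assumption modeled on OpenAI-style image messages, and the URL is a placeholder.

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-mem0-api-key")

messages = [
    {"role": "user", "content": "Here is a photo of my favorite dish."},
    {
        "role": "user",
        "content": {
            "type": "image_url",
            "image_url": {"url": "https://example.com/dish.jpg"},
        },
    },
]
client.add(messages, user_id="alice")  # memories are extracted from the image too
```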
## How It Works


@@ -5,6 +5,8 @@ icon: "node"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
> Welcome to the Mem0 quickstart guide. This guide will help you get up and running with Mem0 in no time.
## Installation


@@ -5,6 +5,8 @@ icon: "python"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
> Welcome to the Mem0 quickstart guide. This guide will help you get up and running with Mem0 in no time.
## Installation


@@ -4,6 +4,8 @@ icon: "info"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
Welcome to Mem0 Open Source - a powerful, self-hosted memory management solution for AI agents and assistants. With Mem0 OSS, you get full control over your infrastructure while maintaining complete customization flexibility.
We offer two SDKs for Python and Node.js.
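For a feel of the Python SDK, here is a minimal sketch using the default configuration; the return shape of `search` can vary between versions, so treat the result handling as an assumption.

```python
from mem0 import Memory

m = Memory()  # default vector store, LLM, and embedder

m.add("I am vegetarian and allergic to nuts.", user_id="alice")
hits = m.search("What can Alice eat?", user_id="alice")

# Depending on the SDK version, `search` may return a list or a
# {"results": [...]} dict; handle both defensively here.
for h in (hits.get("results", hits) if isinstance(hits, dict) else hits):
    print(h["memory"])
```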


@@ -4,6 +4,8 @@ icon: "info"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
<Note type="info">
🎉 We now support [Grok 3](components/llms/models/xAI)! Enhance your AI assistants with the latest and most capable language model from xAI.
</Note>


@@ -5,6 +5,8 @@ icon: "eye"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
## Welcome to Mem0 Platform
The Mem0 Platform is a managed service and the easiest way to add our powerful memory layer to your applications.


@@ -5,6 +5,8 @@ icon: "book"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
<Note type="info">
🎉 Looking for TypeScript support? Mem0 has you covered! Check out an example [here](/platform/quickstart/#4-11-working-with-mem0-in-typescript).
</Note>


@@ -3,6 +3,10 @@ title: Quickstart
icon: "bolt" icon: "bolt"
iconType: "solid" iconType: "solid"
--- ---
<Snippet file="paper-release.mdx" />
Mem0 offers two powerful ways to leverage our technology: [our managed platform](#mem0-platform-managed-solution) and [our open source solution](#mem0-open-source). Mem0 offers two powerful ways to leverage our technology: [our managed platform](#mem0-platform-managed-solution) and [our open source solution](#mem0-open-source).
Check out our [Playground](https://mem0.dev/pd-pg) to see Mem0 in action. Check out our [Playground](https://mem0.dev/pd-pg) to see Mem0 in action.

evaluation/Makefile (new file)

@@ -0,0 +1,31 @@
# Run the experiments
run-mem0-add:
python run_experiments.py --technique_type mem0 --method add
run-mem0-search:
python run_experiments.py --technique_type mem0 --method search --output_folder results/ --top_k 30
run-mem0-plus-add:
python run_experiments.py --technique_type mem0 --method add --is_graph
run-mem0-plus-search:
python run_experiments.py --technique_type mem0 --method search --is_graph --output_folder results/ --top_k 30
run-rag:
python run_experiments.py --technique_type rag --chunk_size 500 --num_chunks 1 --output_folder results/
run-full-context:
python run_experiments.py --technique_type rag --chunk_size -1 --num_chunks 1 --output_folder results/
run-langmem:
python run_experiments.py --technique_type langmem --output_folder results/
run-zep-add:
python run_experiments.py --technique_type zep --method add --output_folder results/
run-zep-search:
python run_experiments.py --technique_type zep --method search --output_folder results/
run-openai:
python run_experiments.py --technique_type openai --output_folder results/

evaluation/README.md (new file)

@@ -0,0 +1,192 @@
# Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
[![arXiv](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/XXXX.XXXXX)
[![Website](https://img.shields.io/badge/Website-Project-blue)](https://mem0.ai/research)
This repository contains the code and dataset for our paper: **Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory**.
## 📋 Overview
This project evaluates Mem0 and compares it with different memory and retrieval techniques for AI systems:
1. **Established LOCOMO Benchmarks**: We evaluate against five established approaches from the literature: LoCoMo, ReadAgent, MemoryBank, MemGPT, and A-Mem.
2. **Open-Source Memory Solutions**: We test promising open-source memory architectures including LangMem, which provides flexible memory management capabilities.
3. **RAG Systems**: We implement Retrieval-Augmented Generation with various configurations, testing different chunk sizes and retrieval counts to optimize performance.
4. **Full-Context Processing**: We examine the effectiveness of passing the entire conversation history within the context window of the LLM as a baseline approach.
5. **Proprietary Memory Systems**: We evaluate OpenAI's built-in memory feature available in their ChatGPT interface to compare against commercial solutions.
6. **Third-Party Memory Providers**: We incorporate Zep, a specialized memory management platform designed for AI agents, to assess the performance of dedicated memory infrastructure.
We test these techniques on the LOCOMO dataset, which contains conversational data with various question types to evaluate memory recall and understanding.
## 🔍 Dataset
The dataset is located in the `dataset/` directory:
- `locomo10.json`: Original dataset
- `locomo10_rag.json`: Dataset formatted for RAG experiments
## 📁 Project Structure
```
.
├── src/ # Source code for different memory techniques
│ ├── mem0/ # Implementation of the Mem0 technique
│ ├── openai/ # Implementation of the OpenAI memory
│ ├── zep/ # Implementation of the Zep memory
│ ├── rag.py # Implementation of the RAG technique
│ └── langmem.py # Implementation of the Language-based memory
├── metrics/ # Code for evaluation metrics
├── results/ # Results of experiments
├── dataset/ # Dataset files
├── evals.py # Evaluation script
├── run_experiments.py # Script to run experiments
├── generate_scores.py # Script to generate scores from results
└── prompts.py # Prompts used for the models
```
## 🚀 Getting Started
### Prerequisites
Create a `.env` file with your API keys and configurations. The following keys are required:
```
# OpenAI API key for GPT models and embeddings
OPENAI_API_KEY="your-openai-api-key"
# Mem0 API keys (for Mem0 and Mem0+ techniques)
MEM0_API_KEY="your-mem0-api-key"
MEM0_PROJECT_ID="your-mem0-project-id"
MEM0_ORGANIZATION_ID="your-mem0-organization-id"
# Model configuration
MODEL="gpt-4o-mini" # or your preferred model
EMBEDDING_MODEL="text-embedding-3-small" # or your preferred embedding model
# Zep API key (for Zep technique)
ZEP_API_KEY="api-key-from-zep"
```
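The experiment scripts presumably pick these settings up from the environment; a minimal sketch of loading them in Python, assuming the `python-dotenv` package, looks like this:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # pulls the .env file in the current directory into os.environ

openai_key = os.getenv("OPENAI_API_KEY")
model = os.getenv("MODEL", "gpt-4o-mini")  # falls back to the default model
```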
### Running Experiments
You can run experiments using the provided Makefile commands:
#### Memory Techniques
```bash
# Run Mem0 experiments
make run-mem0-add # Add memories using Mem0
make run-mem0-search # Search memories using Mem0
# Run Mem0+ experiments (with graph-based search)
make run-mem0-plus-add # Add memories using Mem0+
make run-mem0-plus-search # Search memories using Mem0+
# Run RAG experiments
make run-rag # Run RAG with chunk size 500
make run-full-context # Run RAG with full context
# Run LangMem experiments
make run-langmem # Run LangMem
# Run Zep experiments
make run-zep-add # Add memories using Zep
make run-zep-search # Search memories using Zep
# Run OpenAI experiments
make run-openai # Run OpenAI experiments
```
Alternatively, you can run experiments directly with custom parameters:
```bash
python run_experiments.py --technique_type [mem0|rag|langmem|zep|openai] [additional parameters]
```
#### Command-line Parameters:
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--technique_type` | Memory technique to use (mem0, rag, langmem, zep, openai) | mem0 |
| `--method` | Method to use (add, search) | add |
| `--chunk_size` | Chunk size for processing | 1000 |
| `--top_k` | Number of top memories to retrieve | 30 |
| `--filter_memories` | Whether to filter memories | False |
| `--is_graph` | Whether to use graph-based search | False |
| `--num_chunks` | Number of chunks to process for RAG | 1 |
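For instance, a custom RAG run that combines several of these flags might look like the following (values are illustrative):

```bash
# RAG with 500-token chunks, retrieving the top 2 chunks per question
python run_experiments.py --technique_type rag --chunk_size 500 --num_chunks 2 --output_folder results/
```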
### 📊 Evaluation
To evaluate results, run:
```bash
python evals.py --input_file [path_to_results] --output_file [output_path]
```
This script:
1. Processes each question-answer pair
2. Calculates BLEU and F1 scores automatically
3. Uses an LLM judge to evaluate answer correctness
4. Saves the combined results to the output file
### 📈 Generating Scores
Generate final scores with:
```bash
python generate_scores.py
```
This script:
1. Loads the evaluation metrics data
2. Calculates mean scores for each category (BLEU, F1, LLM)
3. Reports the number of questions per category
4. Calculates overall mean scores across all categories
Example output:
```
Mean Scores Per Category:
bleu_score f1_score llm_score count
category
1 0.xxxx 0.xxxx 0.xxxx xx
2 0.xxxx 0.xxxx 0.xxxx xx
3 0.xxxx 0.xxxx 0.xxxx xx
Overall Mean Scores:
bleu_score 0.xxxx
f1_score 0.xxxx
llm_score 0.xxxx
```
## 📏 Evaluation Metrics
We use several metrics to evaluate the performance of different memory techniques:
1. **BLEU Score**: Measures the similarity between the model's response and the ground truth
2. **F1 Score**: Measures the harmonic mean of precision and recall (see the token-level sketch after this list)
3. **LLM Score**: A binary score (0 or 1) determined by an LLM judge evaluating the correctness of responses
4. **Token Consumption**: Number of tokens required to generate the final answer.
5. **Latency**: Time taken to search memories and generate the response.
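To make the F1 metric concrete, here is a toy sketch of a SQuAD-style token-level F1, which is one common way such a metric is computed; the repo's actual `metrics/utils.py` implementation may differ.

```python
from collections import Counter

def token_f1(prediction: str, ground_truth: str) -> float:
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gt_tokens)  # multiset overlap
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a shell necklace from Hawaii", "A shell necklace"))  # 0.75
```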
## 📚 Citation
If you use this code or dataset in your research, please cite our paper:
```bibtex
@article{mem0,
title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
author={---},
journal={arXiv preprint},
year={2025}
}
```
## 📄 License
[MIT License](LICENSE)
## 👥 Contributors
- [Prateek Chhikara](https://github.com/prateekchhikara)
- [Dev Khant](https://github.com/Dev-Khant)
- [Saket Aryan](https://github.com/whysosaket)
- [Taranjeet Singh](https://github.com/taranjeet)
- [Deshraj Yadav](https://github.com/deshraj)

evaluation/evals.py (new file)

@@ -0,0 +1,81 @@
import json
import argparse
from metrics.utils import calculate_metrics, calculate_bleu_scores
from metrics.llm_judge import evaluate_llm_judge
from collections import defaultdict
from tqdm import tqdm
import concurrent.futures
import threading
def process_item(item_data):
k, v = item_data
local_results = defaultdict(list)
for item in v:
gt_answer = str(item['answer'])
pred_answer = str(item['response'])
category = str(item['category'])
question = str(item['question'])
# Skip category 5
if category == '5':
continue
metrics = calculate_metrics(pred_answer, gt_answer)
bleu_scores = calculate_bleu_scores(pred_answer, gt_answer)
llm_score = evaluate_llm_judge(question, gt_answer, pred_answer)
local_results[k].append({
"question": question,
"answer": gt_answer,
"response": pred_answer,
"category": category,
"bleu_score": bleu_scores["bleu1"],
"f1_score": metrics["f1"],
"llm_score": llm_score
})
return local_results
def main():
parser = argparse.ArgumentParser(description='Evaluate RAG results')
parser.add_argument('--input_file', type=str,
default="results/rag_results_500_k1.json",
help='Path to the input dataset file')
parser.add_argument('--output_file', type=str,
default="evaluation_metrics.json",
help='Path to save the evaluation results')
parser.add_argument('--max_workers', type=int, default=10,
help='Maximum number of worker threads')
args = parser.parse_args()
with open(args.input_file, 'r') as f:
data = json.load(f)
results = defaultdict(list)
results_lock = threading.Lock()
# Use ThreadPoolExecutor with specified workers
with concurrent.futures.ThreadPoolExecutor(max_workers=args.max_workers) as executor:
futures = [executor.submit(process_item, item_data)
for item_data in data.items()]
for future in tqdm(concurrent.futures.as_completed(futures),
total=len(futures)):
local_results = future.result()
with results_lock:
for k, items in local_results.items():
results[k].extend(items)
# Save results to JSON file
with open(args.output_file, 'w') as f:
json.dump(results, f, indent=4)
print(f"Results saved to {args.output_file}")
if __name__ == "__main__":
main()

evaluation/generate_scores.py (new file)

@@ -0,0 +1,41 @@
import pandas as pd
import json
# Load the evaluation metrics data
with open('evaluation_metrics.json', 'r') as f:
data = json.load(f)
# Flatten the data into a list of question items
all_items = []
for key in data:
all_items.extend(data[key])
# Convert to DataFrame
df = pd.DataFrame(all_items)
# Convert category to numeric type
df['category'] = pd.to_numeric(df['category'])
# Calculate mean scores by category
result = df.groupby('category').agg({
'bleu_score': 'mean',
'f1_score': 'mean',
'llm_score': 'mean'
}).round(4)
# Add count of questions per category
result['count'] = df.groupby('category').size()
# Print the results
print("Mean Scores Per Category:")
print(result)
# Calculate overall means
overall_means = df.agg({
'bleu_score': 'mean',
'f1_score': 'mean',
'llm_score': 'mean'
}).round(4)
print("\nOverall Mean Scores:")
print(overall_means)

evaluation/metrics/llm_judge.py (new file)

@@ -0,0 +1,127 @@
from openai import OpenAI
import json
from collections import defaultdict
import numpy as np
import argparse
client = OpenAI()
ACCURACY_PROMPT = """
Your task is to label an answer to a question as CORRECT or WRONG. You will be given the following data:
(1) a question (posed by one user to another user),
(2) a gold (ground truth) answer,
(3) a generated answer
which you will score as CORRECT/WRONG.
The point of the question is to ask about something one user should know about the other user based on their prior conversations.
The gold answer will usually be a concise and short answer that includes the referenced topic, for example:
Question: Do you remember what I got the last time I went to Hawaii?
Gold answer: A shell necklace
The generated answer might be much longer, but you should be generous with your grading - as long as it touches on the same topic as the gold answer, it should be counted as CORRECT.
For time-related questions, the gold answer will be a specific date, month, year, etc. The generated answer might be much longer or use relative time references (like "last Tuesday" or "next month"), but you should be generous with your grading - as long as it refers to the same date or time period as the gold answer, it should be counted as CORRECT. Even if the format differs (e.g., "May 7th" vs "7 May"), consider it CORRECT if it's the same date.
Now it's time for the real question:
Question: {question}
Gold answer: {gold_answer}
Generated answer: {generated_answer}
First, provide a short (one sentence) explanation of your reasoning, then finish with CORRECT or WRONG.
Do NOT include both CORRECT and WRONG in your response, or it will break the evaluation script.
Just return the label CORRECT or WRONG in a json format with the key as "label".
"""
def evaluate_llm_judge(question, gold_answer, generated_answer):
"""Evaluate the generated answer against the gold answer using an LLM judge."""
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{
"role": "user",
"content": ACCURACY_PROMPT.format(
question=question,
gold_answer=gold_answer,
generated_answer=generated_answer
)
}],
response_format={"type": "json_object"},
temperature=0.0
)
label = json.loads(response.choices[0].message.content)['label']
return 1 if label == "CORRECT" else 0
def main():
"""Main function to evaluate RAG results using LLM judge."""
parser = argparse.ArgumentParser(
description='Evaluate RAG results using LLM judge'
)
parser.add_argument(
'--input_file',
type=str,
default="results/default_run_v4_k30_new_graph.json",
help='Path to the input dataset file'
)
args = parser.parse_args()
dataset_path = args.input_file
output_path = f"results/llm_judge_{dataset_path.split('/')[-1]}"
with open(dataset_path, "r") as f:
data = json.load(f)
LLM_JUDGE = defaultdict(list)
RESULTS = defaultdict(list)
index = 0
for k, v in data.items():
for x in v:
question = x['question']
gold_answer = x['answer']
generated_answer = x['response']
category = x['category']
# Skip category 5
if int(category) == 5:
continue
# Evaluate the answer
label = evaluate_llm_judge(question, gold_answer, generated_answer)
LLM_JUDGE[category].append(label)
# Store the results
RESULTS[index].append({
"question": question,
"gt_answer": gold_answer,
"response": generated_answer,
"category": category,
"llm_label": label
})
# Save intermediate results
with open(output_path, "w") as f:
json.dump(RESULTS, f, indent=4)
# Print current accuracy for all categories
print("All categories accuracy:")
for cat, results in LLM_JUDGE.items():
if results: # Only print if there are results for this category
print(f" Category {cat}: {np.mean(results):.4f} "
f"({sum(results)}/{len(results)})")
print("------------------------------------------")
index += 1
# Save final results
with open(output_path, "w") as f:
json.dump(RESULTS, f, indent=4)
# Print final summary
print("PATH: ", dataset_path)
print("------------------------------------------")
for k, v in LLM_JUDGE.items():
print(k, np.mean(v))
if __name__ == "__main__":
main()

Some files were not shown because too many files have changed in this diff.