Integrate self-hosted Supabase with mem0 system

- Configure mem0 to use self-hosted Supabase instead of Qdrant for vector storage
- Update docker-compose to connect containers to localai network
- Install vecs library for Supabase pgvector integration
- Create comprehensive test suite for Supabase + mem0 integration
- Update documentation to reflect Supabase configuration
- All containers now connected to shared localai network
- Successful vector storage and retrieval tests completed

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Docker Config Backup
Date: 2025-07-31 06:57:10 +02:00
Commit: 41cd78207a (parent: 724c553a2e)

36 changed files with 2533 additions and 405 deletions
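As context for the diff below, here is a minimal sketch of the kind of configuration this commit describes: pointing mem0's vector store at a self-hosted Supabase (pgvector, via the vecs client) instead of Qdrant. The connection string and collection name are illustrative assumptions, not values taken from the diff:

```python
from mem0 import Memory

# Hypothetical self-hosted Supabase instance; mem0's Supabase provider
# uses the vecs library (pip install vecs) to talk to pgvector.
config = {
    "vector_store": {
        "provider": "supabase",
        "config": {
            "connection_string": "postgresql://postgres:postgres@localhost:54322/postgres",
            "collection_name": "mem0_memories",
        },
    },
}

m = Memory.from_config(config)
```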

docs/backup/README.md (new file, +32 lines)

# Mintlify Starter Kit
Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including
- Guide pages
- Navigation
- Customizations
- API Reference pages
- Use of popular components
### Development
Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command:
```
npm i -g mintlify
```
Run the following command at the root of your documentation (where `mint.json` is located):
```
mintlify dev
```
### Publishing Changes
Install our GitHub App to automatically propagate changes from your repo to your deployment. Changes are deployed to production automatically after you push to the default branch. Find the link to install on your dashboard.
#### Troubleshooting
- Mintlify dev isn't running: run `mintlify install` to re-install dependencies.
- Page loads as a 404: make sure you are running in a folder that contains `mint.json`.

New file (+193 lines)

---
title: Overview
icon: "info"
iconType: "solid"
---
<Snippet file="async-memory-add.mdx" />
Mem0 provides a powerful set of APIs that allow you to integrate advanced memory management capabilities into your applications. Our APIs are designed to be intuitive, efficient, and scalable, enabling you to create, retrieve, update, and delete memories across various entities such as users, agents, apps, and runs.
## Key Features
- **Memory Management**: Add, retrieve, update, and delete memories with ease.
- **Entity-based Operations**: Perform operations on memories associated with specific users, agents, apps, or runs.
- **Advanced Search**: Utilize our search API to find relevant memories based on various criteria.
- **History Tracking**: Access the history of memory interactions for comprehensive analysis.
- **User Management**: Manage user entities and their associated memories.
## API Structure
Our API is organized into several main categories:
1. **Memory APIs**: Core operations for managing individual memories and collections.
2. **Entities APIs**: Manage different entity types (users, agents, etc.) and their associated memories.
3. **Search API**: Advanced search functionality to retrieve relevant memories.
4. **History API**: Track and retrieve the history of memory interactions.
## Authentication
All API requests require authentication using HTTP Basic Auth. Ensure you include your API key in the Authorization header of each request.
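For example, a minimal sketch using the Python SDK, which attaches your API key to the Authorization header of every request for you (assumes `MEM0_API_KEY` is set in your environment):

```python
import os
from mem0 import MemoryClient

# The client adds the Authorization header to each request it sends.
client = MemoryClient(api_key=os.environ["MEM0_API_KEY"])
```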
## Organizations and projects (optional)
Organizations and projects provide the following capabilities:
- **Multi-org/project Support**: Specify organization and project when initializing the Mem0 client to attribute API usage appropriately
- **Member Management**: Control access to data through organization and project membership
- **Access Control**: Only members can access memories and data within their organization/project scope
- **Team Isolation**: Maintain data separation between different teams and projects for secure collaboration
Example with the mem0 Python package:
<Tabs>
<Tab title="Python">
```python
from mem0 import MemoryClient
client = MemoryClient(org_id='YOUR_ORG_ID', project_id='YOUR_PROJECT_ID')
```
</Tab>
<Tab title="Node.js">
```javascript
import { MemoryClient } from "mem0ai";
const client = new MemoryClient({organizationId: "YOUR_ORG_ID", projectId: "YOUR_PROJECT_ID"});
```
</Tab>
</Tabs>
### Project Management Methods
The Mem0 client provides comprehensive project management capabilities through the `client.project` interface:
#### Get Project Details
Retrieve information about the current project:
```python
# Get all project details
project_info = client.project.get()
# Get specific fields only
project_info = client.project.get(fields=["name", "description", "custom_categories"])
```
#### Create a New Project
Create a new project within your organization:
```python
# Create a project with name and description
new_project = client.project.create(
    name="My New Project",
    description="A project for managing customer support memories"
)
```
#### Update Project Settings
Modify project configuration including custom instructions, categories, and graph settings:
```python
# Update project with custom categories
client.project.update(
    custom_categories=[
        {"customer_preferences": "Customer likes, dislikes, and preferences"},
        {"support_history": "Previous support interactions and resolutions"}
    ]
)
# Update project with custom instructions
client.project.update(
    custom_instructions="..."
)
# Enable graph memory for the project
client.project.update(enable_graph=True)
# Update multiple settings at once
client.project.update(
    custom_instructions="...",
    custom_categories=[
        {"personal_info": "User personal information and preferences"},
        {"work_context": "Professional context and work-related information"}
    ],
    enable_graph=True
)
```
#### Delete Project
<Note>
This action will remove all memories, messages, and other related data in the project. This operation is irreversible.
</Note>
Remove a project and all its associated data:
```python
# Delete the current project (irreversible)
result = client.project.delete()
```
#### Member Management
Manage project members and their access levels:
```python
# Get all project members
members = client.project.get_members()
# Add a new member as a reader
client.project.add_member(
    email="colleague@company.com",
    role="READER"  # or "OWNER"
)
# Update a member's role
client.project.update_member(
    email="colleague@company.com",
    role="OWNER"
)
# Remove a member from the project
client.project.remove_member(email="colleague@company.com")
```
#### Member Roles
- **READER**: Can view and search memories, but cannot modify project settings or manage members
- **OWNER**: Full access including project modification, member management, and all reader permissions
#### Async Support
All project methods are also available in async mode:
```python
from mem0 import AsyncMemoryClient
async def manage_project():
    client = AsyncMemoryClient(org_id='YOUR_ORG_ID', project_id='YOUR_PROJECT_ID')
    # All methods support async/await
    project_info = await client.project.get()
    await client.project.update(enable_graph=True)
    members = await client.project.get_members()

# To call the async function properly
import asyncio
asyncio.run(manage_project())
```
## Getting Started
To begin using the Mem0 API, you'll need to:
1. Sign up for a [Mem0 account](https://app.mem0.ai) and obtain your API key.
2. Familiarize yourself with the API endpoints and their functionalities.
3. Make your first API call to add or retrieve a memory.
Explore the detailed documentation for each API endpoint to learn more about request/response formats, parameters, and example usage.
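As a first-call sketch (assumes `MEM0_API_KEY` is set; the user ID and message are illustrative):

```python
from mem0 import MemoryClient

client = MemoryClient()  # reads MEM0_API_KEY from the environment

# Add a memory for a user
client.add(
    [{"role": "user", "content": "I'm vegetarian and allergic to nuts."}],
    user_id="alex",
)

# Retrieve relevant memories
print(client.search("What are Alex's dietary restrictions?", user_id="alex"))
```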

docs/backup/changelog.mdx (new file, +1018 lines; diff suppressed because it is too large)

docs/backup/docs.json (new file, +404 lines)

{
"$schema": "https://mintlify.com/docs.json",
"theme": "maple",
"name": "Mem0",
"description": "Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that save costs and delight users.",
"colors": {
"primary": "#6c60f0",
"light": "#E6FFA2",
"dark": "#a3df02"
},
"favicon": "/logo/favicon.png",
"navigation": {
"anchors": [
{
"anchor": "Documentation",
"icon": "book-open",
"tabs": [
{
"tab": "Documentation",
"groups": [
{
"group": "Getting Started",
"icon": "rocket",
"pages": [
"what-is-mem0",
"quickstart",
"faqs"
]
},
{
"group": "Core Concepts",
"icon": "brain",
"pages": [
"core-concepts/memory-types",
{
"group": "Memory Operations",
"icon": "gear",
"pages": [
"core-concepts/memory-operations/add",
"core-concepts/memory-operations/search",
"core-concepts/memory-operations/update",
"core-concepts/memory-operations/delete"
]
}
]
},
{
"group": "Platform",
"icon": "cogs",
"pages": [
"platform/overview",
"platform/quickstart",
{
"group": "Features",
"icon": "star",
"pages": [
"platform/features/platform-overview",
"platform/features/contextual-add",
"platform/features/async-client",
"platform/features/advanced-retrieval",
"platform/features/criteria-retrieval",
"platform/features/multimodal-support",
"platform/features/selective-memory",
"platform/features/custom-categories",
"platform/features/custom-instructions",
"platform/features/direct-import",
"platform/features/memory-export",
"platform/features/timestamp",
"platform/features/expiration-date",
"platform/features/webhooks",
"platform/features/feedback-mechanism",
"platform/features/group-chat"
]
}
]
},
{
"group": "Open Source",
"icon": "code-branch",
"pages": [
"open-source/overview",
"open-source/python-quickstart",
"open-source/node-quickstart",
{
"group": "Features",
"icon": "star",
"pages": [
"open-source/features/async-memory",
"open-source/features/openai_compatibility",
"open-source/features/custom-fact-extraction-prompt",
"open-source/features/custom-update-memory-prompt",
"open-source/features/multimodal-support",
"open-source/features/rest-api"
]
},
{
"group": "Graph Memory",
"icon": "spider-web",
"pages": [
"open-source/graph_memory/overview",
"open-source/graph_memory/features"
]
},
{
"group": "LLMs",
"icon": "brain",
"pages": [
"components/llms/overview",
"components/llms/config",
{
"group": "Supported LLMs",
"icon": "list",
"pages": [
"components/llms/models/openai",
"components/llms/models/anthropic",
"components/llms/models/azure_openai",
"components/llms/models/ollama",
"components/llms/models/together",
"components/llms/models/groq",
"components/llms/models/litellm",
"components/llms/models/mistral_AI",
"components/llms/models/google_AI",
"components/llms/models/aws_bedrock",
"components/llms/models/gemini",
"components/llms/models/deepseek",
"components/llms/models/xAI",
"components/llms/models/sarvam",
"components/llms/models/lmstudio",
"components/llms/models/langchain",
"components/llms/models/vllm"
]
}
]
},
{
"group": "Vector Databases",
"icon": "database",
"pages": [
"components/vectordbs/overview",
"components/vectordbs/config",
{
"group": "Supported Vector Databases",
"icon": "server",
"pages": [
"components/vectordbs/dbs/qdrant",
"components/vectordbs/dbs/chroma",
"components/vectordbs/dbs/pgvector",
"components/vectordbs/dbs/milvus",
"components/vectordbs/dbs/pinecone",
"components/vectordbs/dbs/mongodb",
"components/vectordbs/dbs/azure",
"components/vectordbs/dbs/redis",
"components/vectordbs/dbs/elasticsearch",
"components/vectordbs/dbs/opensearch",
"components/vectordbs/dbs/supabase",
"components/vectordbs/dbs/vertex_ai",
"components/vectordbs/dbs/weaviate",
"components/vectordbs/dbs/faiss",
"components/vectordbs/dbs/langchain",
"components/vectordbs/dbs/baidu"
]
}
]
},
{
"group": "Embedding Models",
"icon": "layer-group",
"pages": [
"components/embedders/overview",
"components/embedders/config",
{
"group": "Supported Embedding Models",
"icon": "list",
"pages": [
"components/embedders/models/openai",
"components/embedders/models/azure_openai",
"components/embedders/models/ollama",
"components/embedders/models/huggingface",
"components/embedders/models/vertexai",
"components/embedders/models/gemini",
"components/embedders/models/lmstudio",
"components/embedders/models/together",
"components/embedders/models/langchain",
"components/embedders/models/aws_bedrock"
]
}
]
}
]
},
{
"group": "Contribution",
"icon": "handshake",
"pages": [
"contributing/development",
"contributing/documentation"
]
}
]
},
{
"tab": "OpenMemory",
"icon": "square-terminal",
"pages": [
"openmemory/overview",
"openmemory/quickstart",
"openmemory/integrations"
]
},
{
"tab": "Examples",
"groups": [
{
"group": "💡 Examples",
"icon": "lightbulb",
"pages": [
"examples",
"examples/aws_example",
"examples/mem0-demo",
"examples/ai_companion_js",
"examples/collaborative-task-agent",
"examples/llamaindex-multiagent-learning-system",
"examples/eliza_os",
"examples/mem0-mastra",
"examples/mem0-with-ollama",
"examples/personal-ai-tutor",
"examples/customer-support-agent",
"examples/personal-travel-assistant",
"examples/llama-index-mem0",
"examples/chrome-extension",
"examples/memory-guided-content-writing",
"examples/multimodal-demo",
"examples/personalized-deep-research",
"examples/mem0-agentic-tool",
"examples/openai-inbuilt-tools",
"examples/mem0-openai-voice-demo",
"examples/mem0-google-adk-healthcare-assistant",
"examples/email_processing",
"examples/youtube-assistant"
]
}
]
},
{
"tab": "Integrations",
"groups": [
{
"group": "Integrations",
"icon": "plug",
"pages": [
"integrations",
"integrations/langchain",
"integrations/langgraph",
"integrations/llama-index",
"integrations/agno",
"integrations/autogen",
"integrations/crewai",
"integrations/openai-agents-sdk",
"integrations/google-ai-adk",
"integrations/mastra",
"integrations/vercel-ai-sdk",
"integrations/livekit",
"integrations/pipecat",
"integrations/elevenlabs",
"integrations/aws-bedrock",
"integrations/flowise",
"integrations/langchain-tools",
"integrations/agentops",
"integrations/keywords",
"integrations/dify",
"integrations/raycast"
]
}
]
},
{
"tab": "API Reference",
"icon": "square-terminal",
"groups": [
{
"group": "API Reference",
"icon": "terminal",
"pages": [
"api-reference",
{
"group": "Memory APIs",
"icon": "microchip",
"pages": [
"api-reference/memory/add-memories",
"api-reference/memory/v2-search-memories",
"api-reference/memory/v1-search-memories",
"api-reference/memory/v2-get-memories",
"api-reference/memory/v1-get-memories",
"api-reference/memory/history-memory",
"api-reference/memory/get-memory",
"api-reference/memory/update-memory",
"api-reference/memory/batch-update",
"api-reference/memory/delete-memory",
"api-reference/memory/batch-delete",
"api-reference/memory/delete-memories",
"api-reference/memory/create-memory-export",
"api-reference/memory/get-memory-export",
"api-reference/memory/feedback"
]
},
{
"group": "Entities APIs",
"icon": "users",
"pages": [
"api-reference/entities/get-users",
"api-reference/entities/delete-user"
]
},
{
"group": "Organizations APIs",
"icon": "building",
"pages": [
"api-reference/organization/create-org",
"api-reference/organization/get-orgs",
"api-reference/organization/get-org",
"api-reference/organization/get-org-members",
"api-reference/organization/add-org-member",
"api-reference/organization/delete-org"
]
},
{
"group": "Project APIs",
"icon": "folder",
"pages": [
"api-reference/project/create-project",
"api-reference/project/get-projects",
"api-reference/project/get-project",
"api-reference/project/get-project-members",
"api-reference/project/add-project-member",
"api-reference/project/delete-project"
]
},
{
"group": "Webhook APIs",
"icon": "webhook",
"pages": [
"api-reference/webhook/create-webhook",
"api-reference/webhook/get-webhook",
"api-reference/webhook/update-webhook",
"api-reference/webhook/delete-webhook"
]
}
]
}
]
},
{
"tab": "Changelog",
"icon": "clock",
"groups": [
{
"group": "Product Updates",
"icon": "rocket",
"pages": [
"changelog"
]
}
]
}
]
}
]
},
"logo": {
"light": "/logo/light.svg",
"dark": "/logo/dark.svg",
"href": "https://github.com/mem0ai/mem0"
},
"background": {
"color": {
"light": "#fff",
"dark": "#0f1117"
}
},
"navbar": {
"primary": {
"type": "button",
"label": "Your Dashboard",
"href": "https://app.mem0.ai"
}
},
"footer": {
"socials": {
"discord": "https://mem0.dev/DiD",
"x": "https://x.com/mem0ai",
"github": "https://github.com/mem0ai",
"linkedin": "https://www.linkedin.com/company/mem0/"
}
},
"integrations": {
"posthog": {
"apiKey": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX",
"apiHost": "https://mango.mem0.ai"
},
"intercom": {
"appId": "jjv2r0tt"
}
}
}

docs/backup/examples.mdx (new file, +87 lines)

---
title: Overview
description: How to use mem0 in your existing applications?
---
<Snippet file="blank-notif.mdx" />
With Mem0, you can create stateful LLM-based applications such as chatbots, virtual assistants, or AI agents. Mem0 enhances your applications by providing a memory layer that makes responses:
- More personalized
- More reliable
- Cost-effective by reducing the number of LLM interactions
- More engaging
- Grounded in long-term memory
Here are some examples of how Mem0 can be integrated into various applications:
## Examples
Explore how **Mem0** can power real-world applications and bring personalized, intelligent experiences to life:
<CardGroup cols={2}>
<Card title="AI Companion in Node.js" icon="node" href="/examples/ai_companion_js">
Build a personalized AI Companion in **Node.js** that remembers conversations and adapts over time using Mem0.
</Card>
<Card title="Mem0 with Ollama" icon="server" href="/examples/mem0-with-ollama">
Run **Mem0 locally** with **Ollama** to create private, stateful AI experiences without relying on cloud APIs.
</Card>
<Card title="Personal AI Tutor" icon="graduation-cap" href="/examples/personal-ai-tutor">
Create an **AI Tutor** that adapts to student progress, learning style, and history — for a truly customized learning experience.
</Card>
<Card title="Personal Travel Assistant" icon="plane" href="/examples/personal-travel-assistant">
Develop a **Personal Travel Assistant** that remembers your preferences, past trips, and helps plan future adventures.
</Card>
<Card title="Customer Support Agent" icon="headset" href="/examples/customer-support-agent">
Build a **Customer Support AI** that recalls user preferences, past chats, and provides context-aware, efficient help.
</Card>
<Card title="LlamaIndex + Mem0" icon="book-open" href="/examples/llama-index-mem0">
Combine **LlamaIndex** and Mem0 to create a powerful **ReAct Agent** with persistent memory for smarter interactions.
</Card>
<Card title="Chrome Extension" icon="puzzle-piece" href="/examples/chrome-extension">
Add **long-term memory** to ChatGPT, Claude, or Perplexity via the **Mem0 Chrome Extension** — personalize your AI chats anywhere.
</Card>
<Card title="YouTube Assistant" icon="puzzle-piece" href="/examples/youtube-assistant">
Integrate **Mem0** into **YouTube's** native UI, providing personalized responses with video context.
</Card>
<Card title="Document Writing Assistant" icon="pen" href="/examples/document-writing">
Create a **Writing Assistant** that understands and adapts to your unique style, improving consistency and productivity.
</Card>
<Card title="Multimodal AI Demo" icon="image" href="/examples/multimodal-demo">
Supercharge AI with **Mem0's multimodal memory** — blend text, images, and more for richer, context-aware interactions.
</Card>
<Card title="Personalized Research Agent" icon="magnifying-glass" href="/examples/personalized-deep-research">
Build a **Deep Research AI** that remembers your research goals and compiles insights from vast information sources.
</Card>
<Card title="Mem0 as an Agentic Tool" icon="robot" href="/examples/mem0-agentic-tool">
Integrate Mem0's memory capabilities with OpenAI's Agents SDK to create AI agents with persistent memory.
</Card>
<Card title="OpenAI Inbuilt Tools" icon="robot" href="/examples/openai-inbuilt-tools">
Use Mem0's memory capabilities with OpenAI's Inbuilt Tools to create AI agents with persistent memory.
</Card>
<Card title="Mem0 OpenAI Voice Demo" icon="microphone" href="/examples/mem0-openai-voice-demo">
Combine Mem0's memory capabilities with OpenAI's voice capabilities to build a voice assistant with persistent memory.
</Card>
<Card title="Healthcare Assistant Google ADK" icon="microphone" href="/examples/mem0-google-adk-healthcare-assistant">
Build a personalized healthcare assistant with persistent memory using Google's ADK and Mem0.
</Card>
<Card title="Email Processing" icon="envelope" href="/examples/email_processing">
Use Mem0's memory capabilities to process emails and create AI agents with persistent memory.
</Card>
</CardGroup>

docs/backup/faqs.mdx (new file, +149 lines)

---
title: FAQs
icon: "question"
iconType: "solid"
---
<Snippet file="async-memory-add.mdx" />
<AccordionGroup>
<Accordion title="How does Mem0 work?">
Mem0 utilizes a sophisticated hybrid database system to efficiently manage and retrieve memories for AI agents and assistants. Each memory is linked to a unique identifier, such as a user ID or agent ID, enabling Mem0 to organize and access memories tailored to specific individuals or contexts.
When a message is added to Mem0 via the `add` method, the system extracts pertinent facts and preferences, distributing them across various data stores: a vector database and a graph database. This hybrid strategy ensures that diverse types of information are stored optimally, facilitating swift and effective searches.
When an AI agent or LLM needs to access memories, it employs the `search` method. Mem0 conducts a comprehensive search across these data stores, retrieving relevant information from each.
The retrieved memories can be seamlessly integrated into the system prompt as required, enhancing the personalization and relevance of responses.
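A minimal sketch of that `add`/`search` flow with the open-source client (assumes `OPENAI_API_KEY` is set; the exact return shape may vary by version):
```python
from mem0 import Memory

m = Memory()

# `add` extracts facts from the text and stores them for this user
m.add("I live in Berlin and work as a data engineer.", user_id="sam")

# `search` retrieves the memories most relevant to a new query
results = m.search("Where does the user live?", user_id="sam")
print(results)
```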
</Accordion>
<Accordion title="What are the key features of Mem0?">
- **User, Session, and AI Agent Memory**: Retains information across sessions and interactions for users and AI agents, ensuring continuity and context.
- **Adaptive Personalization**: Continuously updates memories based on user interactions and feedback.
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to the context window
</Accordion>
<Accordion title="How Mem0 is different from traditional RAG?">
Mem0's memory implementation for Large Language Models (LLMs) offers several advantages over Retrieval-Augmented Generation (RAG):
- **Entity Relationships**: Mem0 can understand and relate entities across different interactions, unlike RAG which retrieves information from static documents. This leads to a deeper understanding of context and relationships.
- **Contextual Continuity**: Mem0 retains information across sessions, maintaining continuity in conversations and interactions, which is essential for long-term engagement applications like virtual companions or personalized learning assistants.
- **Adaptive Learning**: Mem0 improves its personalization based on user interactions and feedback, making the memory more accurate and tailored to individual users over time.
- **Dynamic Updates**: Mem0 can dynamically update its memory with new information and interactions, unlike RAG which relies on static data. This allows for real-time adjustments and improvements, enhancing the user experience.
These advanced memory capabilities make Mem0 a powerful tool for developers aiming to create personalized and context-aware AI applications.
</Accordion>
<Accordion title="What are the common use-cases of Mem0?">
- **Personalized Learning Assistants**: Long-term memory allows learning assistants to remember user preferences, strengths and weaknesses, and progress, providing a more tailored and effective learning experience.
- **Customer Support AI Agents**: By retaining information from previous interactions, customer support bots can offer more accurate and context-aware assistance, improving customer satisfaction and reducing resolution times.
- **Healthcare Assistants**: Long-term memory enables healthcare assistants to keep track of patient history, medication schedules, and treatment plans, ensuring personalized and consistent care.
- **Virtual Companions**: Virtual companions can use long-term memory to build deeper relationships with users by remembering personal details, preferences, and past conversations, making interactions more delightful.
- **Productivity Tools**: Long-term memory helps productivity tools remember user habits, frequently used documents, and task history, streamlining workflows and enhancing efficiency.
- **Gaming AI**: In gaming, AI with long-term memory can create more immersive experiences by remembering player choices, strategies, and progress, adapting the game environment accordingly.
</Accordion>
<Accordion title="Why aren't my memories being created?">
Mem0 uses a sophisticated classification system to determine which parts of text should be extracted as memories. Not all text content will generate memories, as the system is designed to identify specific types of memorable information.
There are several scenarios where mem0 may return an empty list of memories:
- When users input definitional questions (e.g., "What is backpropagation?")
- For general concept explanations that don't contain personal or experiential information
- Technical definitions and theoretical explanations
- General knowledge statements without personal context
- Abstract or theoretical content
**Example Scenarios**
```
Input: "What is machine learning?"
No memories extracted - content is definitional and does not meet memory classification criteria.

Input: "Yesterday I learned about machine learning in class"
Memory extracted - contains personal experience and temporal context.
```
**Best Practices**
To ensure successful memory extraction:
- Include temporal markers (when events occurred)
- Add personal context or experiences
- Frame information in terms of real-world applications or experiences
- Include specific examples or cases rather than general definitions
</Accordion>
<Accordion title="How do I configure Mem0 for AWS Lambda?">
When deploying Mem0 on AWS Lambda, you'll need to modify the storage directory configuration due to Lambda's file system restrictions. By default, Lambda only allows writing to the `/tmp` directory.
To configure Mem0 for AWS Lambda, set the `MEM0_DIR` environment variable to point to a writable directory in `/tmp`:
```bash
MEM0_DIR=/tmp/.mem0
```
If you're not using environment variables, you'll need to modify the storage path in your code:
```python
# Change from
home_dir = os.path.expanduser("~")
mem0_dir = os.environ.get("MEM0_DIR") or os.path.join(home_dir, ".mem0")
# To
mem0_dir = os.environ.get("MEM0_DIR", "/tmp/.mem0")
```
Note that the `/tmp` directory in Lambda has a size limit of 512MB and its contents are not persistent between function invocations.
</Accordion>
<Accordion title="How can I use metadata with Mem0?">
Metadata is the recommended approach for incorporating additional information with Mem0. You can store any type of structured data as metadata during the `add` method, such as location, timestamp, weather conditions, user state, or application context. This enriches your memories with valuable contextual information that can be used for more precise retrieval and filtering.
During retrieval, you have two main approaches for using metadata:
1. **Pre-filtering**: Include metadata parameters in your initial search query to narrow down the memory pool
2. **Post-processing**: Retrieve a broader set of memories based on query, then apply metadata filters to refine the results
Examples of useful metadata you might store:
- **Contextual information**: Location, time, device type, application state
- **User attributes**: Preferences, skill levels, demographic information
- **Interaction details**: Conversation topics, sentiment, urgency levels
- **Custom tags**: Any domain-specific categorization relevant to your application
This flexibility allows you to create highly contextually aware AI applications that can adapt to specific user needs and situations. Metadata provides an additional dimension for memory retrieval, enabling more precise and relevant responses.
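A sketch of both approaches (the values are illustrative; the platform client's `add` accepts a `metadata` dict):
```python
from mem0 import MemoryClient

client = MemoryClient()  # assumes MEM0_API_KEY is set

# Store a memory enriched with contextual metadata
client.add(
    [{"role": "user", "content": "I prefer window seats on long flights"}],
    user_id="alice",
    metadata={"category": "travel", "source": "chat"},
)

# Post-processing: retrieve broadly, then filter by metadata client-side
results = client.search("seating preferences", user_id="alice")
travel = [m for m in results if (m.get("metadata") or {}).get("category") == "travel"]
```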
</Accordion>
<Accordion title="How do I disable telemetry in Mem0?">
To disable telemetry in Mem0, you can set the `MEM0_TELEMETRY` environment variable to `False`:
```bash
MEM0_TELEMETRY=False
```
You can also disable telemetry programmatically in your code:
```python
import os
os.environ["MEM0_TELEMETRY"] = "False"
```
Setting this environment variable will prevent Mem0 from collecting and sending any usage data, ensuring complete privacy for your application.
</Accordion>
</AccordionGroup>

docs/backup/features.mdx (new file, +23 lines)

---
title: Features
icon: "wrench"
iconType: "solid"
---
<Snippet file="blank-notif.mdx" />
## Core features
- **User, Session, and AI Agent Memory**: Retains information across sessions and interactions for users and AI agents, ensuring continuity and context.
- **Adaptive Personalization**: Continuously updates memories based on user interactions and feedback.
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to the context window
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />

New file (diff suppressed because one or more lines are too long)

docs/backup/llms.txt (new file, +122 lines)

# Mem0
## High Level
[What is Mem0?](https://docs.mem0.ai/overview): Consider using Mem0 when building conversational AI agents with memory. The page discusses the main reasons to use Mem0: Context Management, Smart Retrieval System, Simple API Integration, and Dual Storage Architecture (vector and graph databases).
[How is Mem0 different from traditional RAG?](https://docs.mem0.ai/faqs#how-mem0-is-different-from-traditional-rag): Mem0's memory for LLMs offers superior entity relationship understanding, contextual continuity, and adaptive learning compared to RAG. It retains information across sessions, dynamically updates with new data, and personalizes interactions, making it ideal for context-aware AI applications.
## Concepts
[Memory Types](https://docs.mem0.ai/core-concepts/memory-types): Mem0 implements both short-term memory (for conversation history and immediate context) and long-term memory (for persistent storage of factual, episodic, and semantic information) to maintain context and personalization across interactions.
[Memory Operations](https://docs.mem0.ai/core-concepts/memory-operations): Two core operations power Mem0's functionality: `add` Processes and stores conversations through information extraction, conflict resolution, and dual storage and `search` Retrieves relevant memories using semantic search with query processing and result ranking
[Information Processing](https://docs.mem0.ai/core-concepts/memory-operations): When adding memories, Mem0 uses LLMs to extract relevant information, identify entities and relationships, and resolve conflicts with existing data.
[Storage Architecture](https://docs.mem0.ai/core-concepts/memory-types): Mem0 combines vector embeddings for semantic information storage with efficient retrieval mechanisms, enabling fast access to relevant past interactions while maintaining user-specific context across sessions.
## How-to guides
### Installation
[Mem0 Installation](https://docs.mem0.ai/quickstart): Mem0 offers two installation options: Mem0 Platform (Managed Solution) and Mem0 OSS.
[How to: install Mem0 Platform (Managed Solution)](https://docs.mem0.ai/quickstart#mem0-platform-managed-solution): The easiest way to get started with Mem0 is to use the managed platform. This approach eliminates the need to set up and maintain your own infrastructure.
[How to: install Mem0 OSS](https://docs.mem0.ai/quickstart#mem0-open-source): If you prefer to manage your own infrastructure, you can install Mem0 OSS. This option requires setting up your own infrastructure and managing your own vector database.
## Platform
### Usage
[Initialize Client](https://docs.mem0.ai/platform/quickstart#3-instantiate-client): Mem0 gives two ways to initialize the client: Synchronous -> MemoryClient and Asynchronous -> AsyncMemoryClient.
[How to: add memories](https://docs.mem0.ai/platform/quickstart#4-1-create-memories): Add memories to Mem0 using messages or a simple string. The `add` method allows adding memories to a specific memory by providing `user_id`, `agent_id`, `run_id`, or `app_id`.
[How to: search memories](https://docs.mem0.ai/platform/quickstart#4-2-search-memories): Search memories in Mem0 by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/platform/quickstart#4-4-get-all-memories): Get all the memories by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Also custom filters can be passed to filter the memories.
[How to: get memory history](https://docs.mem0.ai/platform/quickstart#4-5-get-memory-history): Get the history of a specific memory by passing `memory_id` after adding memories using the `add` method.
[How to: update memory](https://docs.mem0.ai/platform/quickstart#4-6-update-memory): Update a memory by passing `memory_id` after adding memories using the `add` method.
[How to: delete memory](https://docs.mem0.ai/platform/quickstart#4-7-delete-memories): Delete memories by passing `memory_id` after adding memories using the `add` method.
[How to: reset client](https://docs.mem0.ai/platform/quickstart#4-8-reset-client): Reset the client where all the memories, users, agents, sessions and runs are deleted.
[How to: batch update memories](https://docs.mem0.ai/platform/quickstart#4-9-batch-update-memories): Batch update memories by passing a list of memories to the `batch_update` method.
[How to: batch delete memories](https://docs.mem0.ai/platform/quickstart#4-10-batch-delete-memories): Batch delete memories by passing a list of memories to the `batch_delete` method.
## Features
[Graph Memory](https://docs.mem0.ai/platform/features/graph-memory): Mem0's graph memory system builds relationships between entities in your data, enabling contextually relevant retrieval by analyzing connections between information points - activate it with `enable_graph=True` to enhance search results beyond direct semantic matches, ideal for applications tracking evolving relationships.
[Advanced Retrieval](https://docs.mem0.ai/platform/features/advanced-retrieval): Mem0 offers enhanced search capabilities through three advanced retrieval modes: keyword search (improves recall by matching specific terms), reranking (ensures most relevant results appear first using neural networks), and filtering (narrows results by specific criteria) - each can be enabled independently or in combination to optimize search precision and relevance.
[Multimodal Support](https://docs.mem0.ai/platform/features/multimodal-support): Mem0 extends beyond text by supporting images and documents (JPG, PNG, MDX, TXT, PDF), allowing users to integrate visual and document content through direct URLs or Base64 encoding, enhancing the memory system's ability to understand and recall information from various media types.
[Memory Customization](https://docs.mem0.ai/platform/features/selective-memory): Mem0 enables selective memory storage through inclusion and exclusion rules, allowing users to focus on relevant information (like specific topics) while omitting irrelevant data (such as food preferences), resulting in more efficient, accurate, and privacy-conscious AI interactions.
[Custom Categories](https://docs.mem0.ai/platform/features/custom-categories): Mem0 allows setting custom categories at both project level and during individual API calls, overriding default categories (like personal_details, family, sports) with more specific ones to improve memory categorization accuracy - simply provide a list of category dictionaries with descriptive definitions when adding memories.
[Async Client](https://docs.mem0.ai/platform/features/async-client): Mem0 provides an AsyncMemoryClient for non-blocking operations, offering the same functionality as the synchronous client (add, search, get_all, delete, etc.) but with async/await support, making it ideal for high-concurrency applications that need to perform memory operations without blocking execution.
[Memory Export](https://docs.mem0.ai/platform/features/memory-export): Mem0 enables exporting memories in structured formats using customizable Pydantic schemas, allowing you to transform stored memories into specific data structures by defining schemas, submitting export jobs with optional processing instructions, and retrieving the formatted data with various filtering options.
## OSS
### Usage
#### Python
[Initialize python client](https://docs.mem0.ai/open-source/python-quickstart#installation): Install Mem0 with `pip install mem0ai`, then initialize the client with `from mem0 import Memory; m = Memory()` (requires OpenAI API key). For advanced usage, configure with custom parameters or enable graph memory with `Memory(enable_graph=True)`.
[Configuration Parameters](https://docs.mem0.ai/open-source/python-quickstart#configuration-parameters): Mem0 offers extensive configuration options for vector stores (provider, host, port), LLMs (provider, model, temperature, API keys), embedders (provider, model), graph stores (provider, URL, credentials), and general settings (history path, API version, custom prompts) - all customizable through a comprehensive configuration dictionary.
[How to: add memories](https://docs.mem0.ai/open-source/python-quickstart#store-a-memory): Add memories to Mem0 using the Python OSS client's `add` method with messages or a simple string. This method allows adding memories to a specific memory by providing `user_id`, `agent_id`, `run_id`, or `app_id`.
[How to: search memories](https://docs.mem0.ai/open-source/python-quickstart#search-memories): Search memories in Mem0 using the Python OSS client by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/open-source/python-quickstart#retrieve-memories): Get all the memories using the Python OSS client by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Also custom filters can be passed to filter the memories.
[How to: get memory history](https://docs.mem0.ai/open-source/python-quickstart#memory-history): Get the history of a specific memory in the Python OSS client by passing `memory_id` after adding memories using the `add` method.
[How to: update memory](https://docs.mem0.ai/open-source/python-quickstart#update-a-memory): Update a memory in the Python OSS client by passing `memory_id` after adding memories using the `add` method.
[How to: delete memory](https://docs.mem0.ai/open-source/python-quickstart#delete-memory): Delete memories in the Python OSS client by passing `memory_id` after adding memories using the `add` method.
#### Node.js
[Initialize node client](https://docs.mem0.ai/open-source/node-quickstart#installation): Install Mem0 with `npm install mem0ai`, then initialize the client with `import { Memory } from 'mem0ai'; const m = new Memory();` (requires OpenAI API key). For advanced usage, configure with custom parameters or enable graph memory with `new Memory({ enableGraph: true })`.
[How to: add memories](https://docs.mem0.ai/open-source/node-quickstart#store-a-memory): Add memories to Mem0 using the Node.js OSS client's `add` method with messages or a simple string. This method allows adding memories to a specific memory by providing `user_id`, `agent_id`, `run_id`, or `app_id`.
[How to: search memories](https://docs.mem0.ai/open-source/node-quickstart#search-memories): Search memories in Mem0 using the Node.js OSS client by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/open-source/node-quickstart#retrieve-memories): Get all the memories using the Node.js OSS client by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Also custom filters can be passed to filter the memories.
[How to: get memory history](https://docs.mem0.ai/open-source/node-quickstart#memory-history): Get the history of a specific memory in the Node.js OSS client by passing `memory_id` after adding memories using the `add` method.
[How to: update memory](https://docs.mem0.ai/open-source/node-quickstart#update-a-memory): Update a memory in the Node.js OSS client by passing `memory_id` after adding memories using the `add` method.
[How to: delete memory](https://docs.mem0.ai/open-source/node-quickstart#delete-memory): Delete memories in the Node.js OSS client by passing `memory_id` after adding memories using the `add` method.
### Features
[OpenAI Compatibility](https://docs.mem0.ai/open-source/features/openai_compatibility): Mem0 offers seamless integration with OpenAI-compatible APIs, allowing developers to enhance conversational agents with structured memory by initializing with a Mem0 API key (or locally without one), supporting various LLM providers, and enabling personalized responses through user context persistence across interactions with parameters like user_id, agent_id, and custom filters.
[Custom Fact Extraction Prompt](https://docs.mem0.ai/open-source/features/custom-fact-extraction-prompt): Mem0 enables custom fact extraction prompts to tailor information extraction for specific use cases by defining domain-specific examples and formats, allowing precise control over what information is extracted from messages - simply provide a custom prompt with few-shot examples in the config when initializing the Memory client.
[Custom Update Memory Prompt](https://docs.mem0.ai/open-source/features/custom-update-memory-prompt): Mem0 enables customizing the update memory prompt to control how memories are modified by comparing newly retrieved facts with existing memories and determining appropriate actions (add, update, delete, or no change) based on custom logic and examples provided in the prompt configuration.
[REST API Server](https://docs.mem0.ai/open-source/features/rest-api): Mem0 provides a FastAPI-based REST API server that supports core operations (create/retrieve/search/update/delete memories) with OpenAPI documentation at /docs, easily deployable via Docker Compose with pre-configured databases (postgres pgvector, neo4j) - just set OPENAI_API_KEY to get started.
[Graph Memory](https://docs.mem0.ai/open-source/graph_memory/overview): Mem0's open-source graph memory system enables building and querying relationships between entities by installing with `pip install "mem0ai[graph]"` and configuring a graph store provider (like Neo4j) - this allows for more contextual memory retrieval by combining vector and graph-based approaches to track evolving relationships between information points.
### Components
#### LLMs
[OpenAI](https://docs.mem0.ai/components/llms/models/openai): Integrate OpenAI LLM models by setting OPENAI_API_KEY and configuring the Memory client with provider settings - supports both standard models (like gpt-4) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Anthropic](https://docs.mem0.ai/components/llms/models/anthropic): Integrate Anthropic LLM models by setting ANTHROPIC_API_KEY from your Account Settings Page and configuring the Memory client with provider settings - supports models like claude-3-7-sonnet-latest with customizable temperature and max_tokens parameters.
[Google AI](https://docs.mem0.ai/components/llms/models/google_AI): Integrate Gemini LLM models by setting GEMINI_API_KEY from Google Maker Suite and configuring the Memory client with litellm provider - supports models like gemini-pro with customizable temperature and max_tokens parameters.
[Groq](https://docs.mem0.ai/components/llms/models/groq): Integrate Groq's Language Processing Unit (LPU) optimized models by setting GROQ_API_KEY and configuring the Memory client with provider settings - supports models like mixtral-8x7b-32768 with customizable temperature and max_tokens parameters for high-performance AI inference.
[Together](https://docs.mem0.ai/components/llms/models/together): Integrate Together LLM models by setting TOGETHER_API_KEY and configuring the Memory client with provider settings - supports both standard models (like together-llama-3-8b-instant) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Deepseek](https://docs.mem0.ai/components/llms/models/deepseek): Integrate Deepseek LLM models by setting DEEPSEEK_API_KEY and configuring the Memory client with provider settings - supports both standard models (like deepseek-chat) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[xAI](https://docs.mem0.ai/components/llms/models/xai): Integrate xAI LLM models by setting XAI_API_KEY and configuring the Memory client with provider settings - supports both standard models (like xai-chat) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[LM Studio](https://docs.mem0.ai/components/llms/models/lmstudio): Run Mem0 locally with LM Studio by configuring the Memory client with provider settings and a local LM Studio server - supports using LM Studio for both LLM inference and embeddings, requiring no external API keys when running fully locally with appropriate models loaded.
[Ollama](https://docs.mem0.ai/components/llms/models/ollama): Run Mem0 locally with Ollama LLM models by configuring the Memory client with provider settings like model (e.g. mixtral:8x7b), temperature and max_tokens - supports tool calling and requires only OpenAI API key for embeddings.
[AWS Bedrock](https://docs.mem0.ai/components/llms/models/aws_bedrock): Integrate AWS Bedrock LLM models by setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and configuring the Memory client with provider settings - supports both standard models (like bedrock-anthropic-claude-3-5-sonnet) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Azure OpenAI](https://docs.mem0.ai/components/llms/models/azure_openai): Integrate Azure OpenAI LLM models by setting LLM_AZURE_OPENAI_API_KEY, LLM_AZURE_ENDPOINT, LLM_AZURE_DEPLOYMENT and LLM_AZURE_API_VERSION environment variables and configuring the Memory client with provider settings - supports both standard and structured-output models with customizable deployment, API version, endpoint and headers (note: some features like parallel tool calling and temperature are currently unsupported).
[LiteLLM](https://docs.mem0.ai/components/llms/models/litellm): Integrate LiteLLM LLM models by setting LITELLM_API_KEY and configuring the Memory client with provider settings - supports both standard models (like llama3.1:8b) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Mistral](https://docs.mem0.ai/components/llms/models/mistral): Integrate Mistral LLM models by setting MISTRAL_API_KEY and configuring the Memory client with provider settings - supports both standard models (like mistral-large-latest) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
#### Embedders
[OpenAI](https://docs.mem0.ai/components/embedders/models/openai): Integrate OpenAI embedding models by setting OPENAI_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-3-small (default) and text-embedding-3-large with customizable dimensions.
[Azure OpenAI](https://docs.mem0.ai/components/embedders/models/azure_openai): Integrate Azure OpenAI embedding models by setting EMBEDDING_AZURE_OPENAI_API_KEY, EMBEDDING_AZURE_ENDPOINT, EMBEDDING_AZURE_DEPLOYMENT and EMBEDDING_AZURE_API_VERSION environment variables and configuring the Memory client with provider settings - supports models like text-embedding-3-large with customizable dimensions and Azure-specific configurations through azure_kwargs.
[Vertex AI](https://docs.mem0.ai/components/embedders/models/google_ai): Integrate Google Cloud's Vertex AI embedding models by setting GOOGLE_APPLICATION_CREDENTIALS environment variable to your service account credentials JSON file and configuring the Memory client with provider settings - supports models like text-embedding-004 with customizable embedding types (RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY, etc.) and dimensions.
[Groq](https://docs.mem0.ai/components/embedders/models/groq): Integrate Groq embedding models by setting GROQ_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-3-small (default) and text-embedding-3-large with customizable dimensions.
[Hugging Face](https://docs.mem0.ai/components/embedders/models/hugging_face): Run Mem0 locally with Hugging Face embedding models by configuring the Memory client with provider settings like model (e.g. multi-qa-MiniLM-L6-cos-v1), embedding dimensions and model_kwargs - requires only OpenAI API key for LLM functionality.
[Ollama](https://docs.mem0.ai/components/embedders/models/ollama): Run Mem0 locally with Ollama embedding models by configuring the Memory client with provider settings like model (e.g. nomic-embed-text), embedding dimensions (default 512) and custom base URL - requires only OpenAI API key for LLM functionality.
[Gemini](https://docs.mem0.ai/components/embedders/models/gemini): Integrate Gemini embedding models by setting GOOGLE_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-004 with customizable dimensions (default 768) and requires OpenAI API key for LLM functionality.
#### Vector Stores
[Qdrant](https://docs.mem0.ai/components/vectordbs/dbs/qdrant): Integrate Qdrant vector database by configuring the Memory client with provider settings like collection_name, host, port, and other parameters - supports both local and remote deployments with options for persistent storage and custom client configurations.
[Pinecone](https://docs.mem0.ai/components/vectordbs/dbs/pinecone): Integrate Pinecone's managed vector database by configuring the Memory client with serverless or pod deployment options, supporting high-performance vector search with customizable embedding dimensions, distance metrics, and cloud providers (AWS/GCP/Azure) - requires PINECONE_API_KEY and matching embedding model dimensions (e.g. 1536 for OpenAI).
[Milvus](https://docs.mem0.ai/components/vectordbs/dbs/milvus): Integrate Milvus open-source vector database by configuring the Memory client with provider settings like url (default localhost:19530), token (for Zilliz cloud), collection_name, embedding_model_dims (default 1536) and metric_type - supports both local and cloud deployments for AI applications of any scale.
[Weaviate](https://docs.mem0.ai/components/vectordbs/dbs/weaviate): Integrate Weaviate open-source vector search engine by configuring the Memory client with provider settings like collection_name (default: mem0), cluster_url, auth_client_secret and embedding_model_dims (default: 1536) - enables efficient storage and retrieval of high-dimensional vector embeddings.
[Chroma](https://docs.mem0.ai/components/vectordbs/dbs/chroma): Integrate Chroma AI-native open-source vector database by configuring the Memory client with provider settings like collection_name (default: mem0), path (default: db), host, port, and client - enables simple storage and search of embeddings with focus on speed and ease of use.
[Faiss](https://docs.mem0.ai/components/vectordbs/dbs/faiss): Integrate Faiss, a high-performance library for similarity search and clustering of dense vectors, by configuring the Memory client with settings like collection_name, path, and distance_strategy (euclidean/cosine/inner_product) - supports efficient local vector search with in-memory or persistent storage options and is optimized for large-scale production use.
[PGVector](https://docs.mem0.ai/components/vectordbs/dbs/pgvector): Integrate Postgres vector similarity search by configuring the Memory client with database connection settings (user, password, host, port), collection name, embedding dimensions and indexing options (diskann/hnsw) - requires creating vector extension in Postgres and supports both local and cloud deployments.
[Elasticsearch](https://docs.mem0.ai/components/vectordbs/dbs/elasticsearch): Integrate Elasticsearch vector database by configuring host, port, collection name and authentication settings - supports k-NN vector search, cloud/local deployments, custom search queries, and requires `pip install elasticsearch>=8.0.0`.
[Redis](https://docs.mem0.ai/components/vectordbs/dbs/redis): Integrate Redis vector database for real-time vector storage and search by configuring collection name, embedding dimensions (default 1536), and Redis URL - supports both local Docker deployment and remote Redis Stack instances.
[Supabase](https://docs.mem0.ai/components/vectordbs/dbs/supabase): Integrate Supabase's PostgreSQL database with pgvector extension by configuring connection string, collection name, and optional index settings (hnsw/ivfflat) - enables efficient vector similarity search with support for different distance measures (cosine/l2/l1) and requires SQL migrations to enable vector functionality.
[Azure AI Search](https://docs.mem0.ai/components/vectordbs/dbs/azure): Integrate Azure AI Search (formerly Azure Cognitive Search) by configuring service_name, api_key and collection_name - supports vector compression (none/scalar/binary), hybrid search modes, and customizable vector dimensions with automatic extraction of filterable fields like user_id.
[Vertex AI Search](https://docs.mem0.ai/components/vectordbs/dbs/vertex_ai): Integrate Google Cloud's Vertex AI Vector Search by configuring endpoint_id, index_id, deployment_index_id, project details and optional region/credentials - enables efficient vector similarity search through Google Cloud's managed service with support for both get and search operations.

docs/backup/openapi.json (new file, +5321 lines; diff suppressed because it is too large)

docs/backup/overview.mdx (new file, +51 lines)

---
title: Overview
icon: "info"
iconType: "solid"
---
<Snippet file="blank-notif.mdx" />
# Introduction
[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants by giving them persistent, contextual memory. AI systems using Mem0 actively learn from and adapt to user interactions over time.
Mem0's memory layer combines LLMs with vector-based storage. LLMs extract and process key information from conversations, while the vector storage enables efficient semantic search and retrieval of memories. This architecture helps AI agents connect past interactions with current context for more relevant responses.
## Key Features
- **Memory Processing**: Uses LLMs to automatically extract and store important information from conversations while maintaining full context
- **Memory Management**: Continuously updates and resolves contradictions in stored information to maintain accuracy
- **Dual Storage Architecture**: Combines vector database for memory storage and graph database for relationship tracking
- **Smart Retrieval System**: Employs semantic search and graph queries to find relevant memories based on importance and recency
- **Simple API Integration**: Provides easy-to-use endpoints for adding (`add`) and retrieving (`search`) memories
## Use Cases
- **Customer Support Chatbots**: Create support agents that remember customer history, preferences, and past interactions to provide personalized assistance
- **Personal AI Tutors**: Build educational assistants that track student progress, adapt to learning patterns, and provide contextual help
- **Healthcare Applications**: Develop healthcare assistants that maintain patient history and provide personalized care recommendations
- **Enterprise Knowledge Management**: Power systems that learn from organizational interactions and maintain institutional knowledge
- **Personalized AI Assistants**: Create assistants that learn user preferences and adapt their responses over time
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
Integrate Mem0 in a few lines of code
</Card>
<Card title="Playground" icon="play" href="https://app.mem0.ai/playground">
Mem0 in action
</Card>
<Card title="Examples" icon="lightbulb" href="/examples">
See what you can build with Mem0
</Card>
</CardGroup>
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>

New file (+114 lines)

---
title: What is Mem0?
icon: "brain"
iconType: "solid"
---
<Snippet file="async-memory-add.mdx" />
Mem0 is a memory layer designed for modern AI agents. It acts as a persistent memory layer that agents can use to:
- Recall relevant past interactions
- Store important user preferences and factual context
- Learn from successes and failures
It gives AI agents memory so they can remember, learn, and evolve across interactions. Mem0 integrates easily into your agent stack and scales from prototypes to production systems.
## Stateless vs. Stateful Agents
Most current agents are stateless: they process a query, generate a response, and forget everything. Even with huge context windows, everything resets the next session.
Stateful agents, powered by Mem0, are different. They retain context, recall what matters, and behave more intelligently over time.
<Frame>
<img src="../images/stateless-vs-stateful-agent.png" />
</Frame>
## Where Memory Fits in the Agent Stack
Mem0 sits alongside your retriever, planner, and LLM. Unlike retrieval-based systems (like RAG), Mem0 tracks past interactions, stores long-term knowledge, and evolves the agent's behavior.
<Frame>
<img src="../images/memory-agent-stack.png" />
</Frame>
Memory is not about pushing more tokens into a prompt; it is about intelligently remembering the context that matters:
| Capability | Context Window | Mem0 Memory |
|------------------|------------------------|-----------------------------|
| Retention | Temporary | Persistent |
| Cost | Grows with input size | Optimized (only what matters) |
| Recall | Token proximity | Relevance + intent-based |
| Personalization | None | Deep, evolving profile |
| Behavior | Reactive | Adaptive |
## Memory vs. RAG: Complementary Tools
RAG (Retrieval-Augmented Generation) is great for fetching facts from documents. But it's stateless. It doesn't know who the user is, what they've asked before, or what failed last time.
Mem0 provides continuity. It stores decisions, preferences, and context—not just knowledge.
| Aspect | RAG | Mem0 Memory |
|--------------------|-------------------------------|-------------------------------|
| Statefulness | Stateless | Stateful |
| Recall Type | Document lookup | Evolving user context |
| Use Case | Ground answers in data | Guide behavior across time |
Together, they're stronger: RAG informs the LLM; Mem0 shapes its memory.
## Types of Memory in Mem0
Mem0 supports different kinds of memory to mimic how humans store information:
- **Working Memory**: short-term session awareness
- **Factual Memory**: long-term structured knowledge (e.g., preferences, settings)
- **Episodic Memory**: records specific past conversations
- **Semantic Memory**: builds general knowledge over time
## Why Developers Choose Mem0
Mem0 isn't a wrapper around a vector store. It's a full memory engine with:
- **LLM-based extraction**: Intelligently decides what to remember
- **Filtering & decay**: Avoids memory bloat, forgets irrelevant info
- **Cost Reduction**: Saves compute costs by injecting only the relevant memories into the prompt
- **Dashboards & APIs**: Observability, fine-grained control
- **Cloud and OSS**: Use our platform version or our open-source SDK version
You plug Mem0 into your agent framework; it doesn't replace your LLM or workflows. Instead, it adds a smart memory layer on top.
## Core Capabilities
- **Reduced token usage and faster responses**: sub-50 ms lookups
- **Semantic memory**: procedural, episodic, and factual support
- **Multimodal support**: handle both text and images
- **Graph memory**: connect insights and entities across sessions
- **Host your way**: either a managed service or a self-hosted version
## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/overview).
<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
Integrate Mem0 in a few lines of code
</Card>
<Card title="Playground" icon="play" href="https://app.mem0.ai/playground">
Mem0 in action
</Card>
<Card title="Examples" icon="lightbulb" href="/examples">
See what you can build with Mem0
</Card>
</CardGroup>
## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx"/>