# Mem0
## High Level
[What is Mem0?](https://docs.mem0.ai/overview): An overview of when to use Mem0 for building conversational AI agents with memory. The page discusses the main reasons to use Mem0: Context Management, Smart Retrieval System, Simple API Integration, and Dual Storage Architecture (vector and graph database).
[How is Mem0 different from traditional RAG?](https://docs.mem0.ai/faqs#how-mem0-is-different-from-traditional-rag): Mem0's memory for LLMs offers superior entity relationship understanding, contextual continuity, and adaptive learning compared to RAG. It retains information across sessions, dynamically updates with new data, and personalizes interactions, making it ideal for context-aware AI applications.
## Concepts
[Memory Types](https://docs.mem0.ai/core-concepts/memory-types): Mem0 implements both short-term memory (for conversation history and immediate context) and long-term memory (for persistent storage of factual, episodic, and semantic information) to maintain context and personalization across interactions.
[Memory Operations](https://docs.mem0.ai/core-concepts/memory-operations): Two core operations power Mem0's functionality: `add`, which processes and stores conversations through information extraction, conflict resolution, and dual storage; and `search`, which retrieves relevant memories using semantic search with query processing and result ranking. A short sketch of both operations appears at the end of this section.
[Information Processing](https://docs.mem0.ai/core-concepts/memory-operations): When adding memories, Mem0 uses LLMs to extract relevant information, identify entities and relationships, and resolve conflicts with existing data.
[Storage Architecture](https://docs.mem0.ai/core-concepts/memory-types): Mem0 combines vector embeddings for semantic information storage with efficient retrieval mechanisms, enabling fast access to relevant past interactions while maintaining user-specific context across sessions.
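To make the `add` and `search` operations above concrete, here is a minimal sketch using the open-source Python client. The conversation content and `user_id` are illustrative, and an OpenAI API key is assumed for the default LLM and embedder.

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; default LLM and embedder use OpenAI

m = Memory()

# add: extract facts from a conversation, resolve conflicts, and store them for this user
messages = [
    {"role": "user", "content": "I'm vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Noted! I'll keep that in mind for recipes."},
]
m.add(messages, user_id="alice")

# search: retrieve the memories most relevant to a query, ranked by relevance
results = m.search("What can I cook for dinner?", user_id="alice")
hits = results["results"] if isinstance(results, dict) else results  # return shape varies by version
for hit in hits:
    print(hit["memory"])
```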
## How-to guides
### Installation
[Mem0 Installation](https://docs.mem0.ai/quickstart): Mem0 offers two installation options: Mem0 Platform (Managed Solution) and Mem0 OSS.
[How to: install Mem0 Platform (Managed Solution)](https://docs.mem0.ai/quickstart#mem0-platform-managed-solution): The easiest way to get started with Mem0 is to use the managed platform. This approach eliminates the need to set up and maintain your own infrastructure.
[How to: install Mem0 OSS](https://docs.mem0.ai/quickstart#mem0-open-source): If you prefer to manage your own infrastructure, install Mem0 OSS. This option requires you to set up and manage your own vector database. A short sketch of both setup paths appears below.
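Both paths use the same `pip install mem0ai` package; the sketch below shows the two initializations side by side. The API key values are placeholders.

```python
import os
from mem0 import Memory, MemoryClient

# Mem0 Platform (managed): use the hosted API with a key from the Mem0 dashboard
client = MemoryClient(api_key="your-mem0-api-key")  # placeholder key

# Mem0 OSS: same package, but storage and models run on infrastructure you manage
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; defaults use OpenAI
m = Memory()
```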
## Platform
### Usage
[Initialize Client](https://docs.mem0.ai/platform/quickstart#3-instantiate-client): Mem0 provides two ways to initialize the client: synchronous (`MemoryClient`) and asynchronous (`AsyncMemoryClient`). A short usage sketch follows this section.
[How to: add memories](https://docs.mem0.ai/platform/quickstart#4-1-create-memories): Add memories to Mem0 using messages or a simple string. The `add` method scopes memories to a specific user, agent, run, or app by providing `user_id`, `agent_id`, `run_id`, or `app_id`.
[How to: search memories](https://docs.mem0.ai/platform/quickstart#4-2-search-memories): Search memories in Mem0 by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/platform/quickstart#4-4-get-all-memories): Get all memories by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Custom filters can also be passed to narrow the results.
[How to: get memory history](https://docs.mem0.ai/platform/quickstart#4-5-get-memory-history): Get the history of a specific memory by passing the `memory_id` returned by the `add` method.
[How to: update memory](https://docs.mem0.ai/platform/quickstart#4-6-update-memory): Update a memory by passing its `memory_id` along with the new content.
[How to: delete memory](https://docs.mem0.ai/platform/quickstart#4-7-delete-memories): Delete a memory by passing its `memory_id`.
[How to: reset client](https://docs.mem0.ai/platform/quickstart#4-8-reset-client): Reset the client, deleting all memories, users, agents, sessions, and runs.
[How to: batch update memories](https://docs.mem0.ai/platform/quickstart#4-9-batch-update-memories): Batch update memories by passing a list of memories to the `batch_update` method.
[How to: batch delete memories](https://docs.mem0.ai/platform/quickstart#4-10-batch-delete-memories): Batch delete memories by passing a list of memories to the `batch_delete` method.
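A minimal sketch of the platform operations listed above, using the synchronous client. The key, messages, and ID values are placeholders, and the exact keyword for new content in `update` may differ between client versions.

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-mem0-api-key")  # placeholder

messages = [
    {"role": "user", "content": "My favorite color is teal."},
    {"role": "assistant", "content": "Got it, teal it is."},
]

# add, search, and get_all, scoped to one user
client.add(messages, user_id="alice")
hits = client.search("What color does the user like?", user_id="alice")
memories = client.get_all(user_id="alice")

# per-memory operations use the memory_id returned by add/get_all
items = memories["results"] if isinstance(memories, dict) else memories  # shape varies by version
if items:
    memory_id = items[0]["id"]
    client.history(memory_id)                                       # change history for one memory
    client.update(memory_id, text="Favorite color is now green.")   # keyword name assumed
    client.delete(memory_id)

# client.reset() would wipe all memories, users, agents, sessions, and runs
```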
## Features
[Graph Memory](https://docs.mem0.ai/platform/features/graph-memory): Mem0's graph memory system builds relationships between entities in your data, enabling contextually relevant retrieval by analyzing connections between information points - activate it with `enable_graph=True` to enhance search results beyond direct semantic matches, ideal for applications tracking evolving relationships.
[Advanced Retrieval](https://docs.mem0.ai/platform/features/advanced-retrieval): Mem0 offers enhanced search capabilities through three advanced retrieval modes: keyword search (improves recall by matching specific terms), reranking (ensures most relevant results appear first using neural networks), and filtering (narrows results by specific criteria) - each can be enabled independently or in combination to optimize search precision and relevance.
[Multimodal Support](https://docs.mem0.ai/platform/features/multimodal-support): Mem0 extends beyond text by supporting images and documents (JPG, PNG, MDX, TXT, PDF), allowing users to integrate visual and document content through direct URLs or Base64 encoding, enhancing the memory system's ability to understand and recall information from various media types.
[Memory Customization](https://docs.mem0.ai/platform/features/selective-memory): Mem0 enables selective memory storage through inclusion and exclusion rules, allowing users to focus on relevant information (like specific topics) while omitting irrelevant data (such as food preferences), resulting in more efficient, accurate, and privacy-conscious AI interactions.
[Custom Categories](https://docs.mem0.ai/platform/features/custom-categories): Mem0 allows setting custom categories at both project level and during individual API calls, overriding default categories (like personal_details, family, sports) with more specific ones to improve memory categorization accuracy - simply provide a list of category dictionaries with descriptive definitions when adding memories.
[Async Client](https://docs.mem0.ai/platform/features/async-client): Mem0 provides an AsyncMemoryClient for non-blocking operations, offering the same functionality as the synchronous client (add, search, get_all, delete, etc.) but with async/await support, making it ideal for high-concurrency applications that need to perform memory operations without blocking execution.
[Memory Export](https://docs.mem0.ai/platform/features/memory-export): Mem0 enables exporting memories in structured formats using customizable Pydantic schemas, allowing you to transform stored memories into specific data structures by defining schemas, submitting export jobs with optional processing instructions, and retrieving the formatted data with various filtering options.
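The platform feature flags above can be combined per request. The sketch below uses `enable_graph` as documented in the Graph Memory entry; the advanced-retrieval keyword names (`keyword_search`, `rerank`) follow the Advanced Retrieval page but should be checked against the current API.

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-mem0-api-key")  # placeholder

# Graph Memory: also build entity relationships for this add
client.add(
    [{"role": "user", "content": "Alice is Bob's manager."}],
    user_id="alice",
    enable_graph=True,
)

# Advanced Retrieval: opt-in keyword matching and neural reranking on search
results = client.search(
    "Who manages Bob?",
    user_id="alice",
    keyword_search=True,  # improve recall via exact term matching (name assumed)
    rerank=True,          # push the most relevant results to the top (name assumed)
)
```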
## OSS
### Usage
#### Python
[Initialize python client](https://docs.mem0.ai/open-source/python-quickstart#installation): Install Mem0 with `pip install mem0ai`, then initialize the client with `from mem0 import Memory; m = Memory()` (requires OpenAI API key). For advanced usage, configure with custom parameters or enable graph memory with `Memory(enable_graph=True)`.
[Configuration Parameters](https://docs.mem0.ai/open-source/python-quickstart#configuration-parameters): Mem0 offers extensive configuration options for vector stores (provider, host, port), LLMs (provider, model, temperature, API keys), embedders (provider, model), graph stores (provider, URL, credentials), and general settings (history path, API version, custom prompts) - all customizable through a comprehensive configuration dictionary.
[How to: add memories](https://docs.mem0.ai/open-source/python-quickstart#store-a-memory): Add memories to Mem0 using the Python OSS client's `add` method with messages or a simple string. The method scopes memories to a specific user, agent, run, or app by providing `user_id`, `agent_id`, `run_id`, or `app_id`. A usage sketch covering these operations appears at the end of this section.
[How to: search memories](https://docs.mem0.ai/open-source/python-quickstart#search-memories): Search memories in Mem0 using the Python OSS client by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/open-source/python-quickstart#retrieve-memories): Get all memories using the Python OSS client by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Custom filters can also be passed to narrow the results.
[How to: get memory history](https://docs.mem0.ai/open-source/python-quickstart#memory-history): Get the history of a specific memory in the Python OSS client by passing the `memory_id` returned by the `add` method.
[How to: update memory](https://docs.mem0.ai/open-source/python-quickstart#update-a-memory): Update a memory in the Python OSS client by passing its `memory_id` along with the new data.
[How to: delete memory](https://docs.mem0.ai/open-source/python-quickstart#delete-memory): Delete a memory in the Python OSS client by passing its `memory_id`.
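A sketch of the Python OSS flow above, combining a small configuration dictionary with the add/search/get/update/delete calls. The model name and local Qdrant settings are illustrative.

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder

# a minimal configuration dictionary (see "Configuration Parameters" above)
config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini", "temperature": 0.1}},
    "vector_store": {"provider": "qdrant", "config": {"collection_name": "mem0", "host": "localhost", "port": 6333}},
}
m = Memory.from_config(config)

m.add("I prefer window seats on long flights.", user_id="alice")
results = m.search("What seat does the user prefer?", user_id="alice")

all_memories = m.get_all(user_id="alice")
items = all_memories["results"] if isinstance(all_memories, dict) else all_memories  # shape varies by version
if items:
    mem_id = items[0]["id"]
    m.history(mem_id)                                  # audit trail for one memory
    m.update(mem_id, data="Prefers aisle seats now.")
    m.delete(mem_id)
```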
#### Node.js
[Initialize node client](https://docs.mem0.ai/open-source/node-quickstart#installation): Install Mem0 with `npm install mem0ai`, then initialize the client with `import { Memory } from 'mem0ai'; const m = new Memory();` (requires OpenAI API key). For advanced usage, configure with custom parameters or enable graph memory with `new Memory({ enableGraph: true })`.
[How to: add memories](https://docs.mem0.ai/open-source/node-quickstart#store-a-memory): Add memories to Mem0 using the Node.js OSS client's `add` method with messages or a simple string. The method scopes memories to a specific user, agent, run, or app by providing `user_id`, `agent_id`, `run_id`, or `app_id`.
[How to: search memories](https://docs.mem0.ai/open-source/node-quickstart#search-memories): Search memories in Mem0 using the Node.js OSS client by passing a query string and optional parameters. The `search` method returns a list of memories sorted by relevance. Custom filters can also be passed to make the search more specific.
[How to: get all memories](https://docs.mem0.ai/open-source/node-quickstart#retrieve-memories): Get all memories using the Node.js OSS client by passing a `user_id`, `agent_id`, `run_id`, or `app_id`. Custom filters can also be passed to narrow the results.
[How to: get memory history](https://docs.mem0.ai/open-source/node-quickstart#memory-history): Get the history of a specific memory in the Node.js OSS client by passing the `memory_id` returned by the `add` method.
[How to: update memory](https://docs.mem0.ai/open-source/node-quickstart#update-a-memory): Update a memory in the Node.js OSS client by passing its `memory_id` along with the new data.
[How to: delete memory](https://docs.mem0.ai/open-source/node-quickstart#delete-memory): Delete a memory in the Node.js OSS client by passing its `memory_id`.
### Features
[OpenAI Compatibility](https://docs.mem0.ai/open-source/features/openai_compatibility): Mem0 offers seamless integration with OpenAI-compatible APIs, allowing developers to enhance conversational agents with structured memory by initializing with a Mem0 API key (or locally without one), supporting various LLM providers, and enabling personalized responses through user context persistence across interactions with parameters like user_id, agent_id, and custom filters.
[Custom Fact Extraction Prompt](https://docs.mem0.ai/open-source/features/custom-fact-extraction-prompt): Mem0 enables custom fact extraction prompts to tailor information extraction for specific use cases by defining domain-specific examples and formats, allowing precise control over what information is extracted from messages - simply provide a custom prompt with few-shot examples in the config when initializing the Memory client.
[Custom Update Memory Prompt](https://docs.mem0.ai/open-source/features/custom-update-memory-prompt): Mem0 enables customizing the update memory prompt to control how memories are modified by comparing newly retrieved facts with existing memories and determining appropriate actions (add, update, delete, or no change) based on custom logic and examples provided in the prompt configuration.
[REST API Server](https://docs.mem0.ai/open-source/features/rest-api): Mem0 provides a FastAPI-based REST API server that supports core operations (create/retrieve/search/update/delete memories) with OpenAPI documentation at /docs, easily deployable via Docker Compose with pre-configured databases (postgres pgvector, neo4j) - just set OPENAI_API_KEY to get started.
[Graph Memory](https://docs.mem0.ai/open-source/graph_memory/overview): Mem0's open-source graph memory system enables building and querying relationships between entities by installing with `pip install "mem0ai[graph]"` and configuring a graph store provider (like Neo4j) - this allows for more contextual memory retrieval by combining vector and graph-based approaches to track evolving relationships between information points.
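A sketch of the OSS graph memory and custom fact extraction features above. The Neo4j URL and credentials are placeholders, and the prompt config key is an assumption (older releases used `custom_prompt`); check the linked pages for the exact names.

```python
from mem0 import Memory

# requires: pip install "mem0ai[graph]" and a reachable Neo4j instance
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance.databases.neo4j.io",  # placeholder
            "username": "neo4j",
            "password": "your-password",
        },
    },
    # custom fact extraction prompt with domain-specific few-shot examples (key name assumed)
    "custom_fact_extraction_prompt": "Extract only work-related facts. Example: ...",
}

m = Memory.from_config(config)
m.add("Alice works with Bob on the search team.", user_id="alice")
```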
### Components
#### LLMs
[OpenAI](https://docs.mem0.ai/components/llms/models/openai): Integrate OpenAI LLM models by setting OPENAI_API_KEY and configuring the Memory client with provider settings - supports both standard models (like gpt-4) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Anthropic](https://docs.mem0.ai/components/llms/models/anthropic): Integrate Anthropic LLM models by setting ANTHROPIC_API_KEY from your Account Settings Page and configuring the Memory client with provider settings - supports models like claude-3-7-sonnet-latest with customizable temperature and max_tokens parameters.
[Google AI](https://docs.mem0.ai/components/llms/models/google_AI): Integrate Gemini LLM models by setting GEMINI_API_KEY from Google Maker Suite and configuring the Memory client with litellm provider - supports models like gemini-pro with customizable temperature and max_tokens parameters.
[Groq](https://docs.mem0.ai/components/llms/models/groq): Integrate Groq's Language Processing Unit (LPU) optimized models by setting GROQ_API_KEY and configuring the Memory client with provider settings - supports models like mixtral-8x7b-32768 with customizable temperature and max_tokens parameters for high-performance AI inference.
[Together](https://docs.mem0.ai/components/llms/models/together): Integrate Together LLM models by setting TOGETHER_API_KEY and configuring the Memory client with provider settings - supports both standard models (like together-llama-3-8b-instant) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Deepseek](https://docs.mem0.ai/components/llms/models/deepseek): Integrate Deepseek LLM models by setting DEEPSEEK_API_KEY and configuring the Memory client with provider settings - supports both standard models (like deepseek-chat) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[xAI](https://docs.mem0.ai/components/llms/models/xai): Integrate xAI LLM models by setting XAI_API_KEY and configuring the Memory client with provider settings - supports both standard models (like xai-chat) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[LM Studio](https://docs.mem0.ai/components/llms/models/lmstudio): Run Mem0 locally with LM Studio by configuring the Memory client with provider settings and a local LM Studio server - supports using LM Studio for both LLM inference and embeddings, requiring no external API keys when running fully locally with appropriate models loaded.
[Ollama](https://docs.mem0.ai/components/llms/models/ollama): Run Mem0 locally with Ollama LLM models by configuring the Memory client with provider settings like model (e.g. mixtral:8x7b), temperature and max_tokens - supports tool calling and requires only OpenAI API key for embeddings.
[AWS Bedrock](https://docs.mem0.ai/components/llms/models/aws_bedrock): Integrate AWS Bedrock LLM models by setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and configuring the Memory client with provider settings - supports both standard models (like bedrock-anthropic-claude-3-5-sonnet) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Azure OpenAI](https://docs.mem0.ai/components/llms/models/azure_openai): Integrate Azure OpenAI LLM models by setting LLM_AZURE_OPENAI_API_KEY, LLM_AZURE_ENDPOINT, LLM_AZURE_DEPLOYMENT and LLM_AZURE_API_VERSION environment variables and configuring the Memory client with provider settings - supports both standard and structured-output models with customizable deployment, API version, endpoint and headers (note: some features like parallel tool calling and temperature are currently unsupported).
[LiteLLM](https://docs.mem0.ai/components/llms/models/litellm): Integrate LiteLLM LLM models by setting LITELLM_API_KEY and configuring the Memory client with provider settings - supports both standard models (like llama3.1:8b) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
[Mistral](https://docs.mem0.ai/components/llms/models/mistral): Integrate Mistral LLM models by setting MISTRAL_API_KEY and configuring the Memory client with provider settings - supports both standard models (like mistral-large-latest) and structured-outputs, with options for temperature, max tokens and Openrouter integration.
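A sketch of swapping the LLM component. The Anthropic and Ollama model names come from the entries above; keys and parameter values are placeholders.

```python
import os
from mem0 import Memory

# hosted provider (Anthropic)
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"  # placeholder
m_claude = Memory.from_config({
    "llm": {
        "provider": "anthropic",
        "config": {"model": "claude-3-7-sonnet-latest", "temperature": 0.1, "max_tokens": 2000},
    }
})

# local provider (Ollama); embeddings still default to OpenAI unless configured separately
m_local = Memory.from_config({
    "llm": {
        "provider": "ollama",
        "config": {"model": "mixtral:8x7b", "temperature": 0.0, "max_tokens": 2000},
    }
})
```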
#### Embedders
[OpenAI](https://docs.mem0.ai/components/embedders/models/openai): Integrate OpenAI embedding models by setting OPENAI_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-3-small (default) and text-embedding-3-large with customizable dimensions.
[Azure OpenAI](https://docs.mem0.ai/components/embedders/models/azure_openai): Integrate Azure OpenAI embedding models by setting EMBEDDING_AZURE_OPENAI_API_KEY, EMBEDDING_AZURE_ENDPOINT, EMBEDDING_AZURE_DEPLOYMENT and EMBEDDING_AZURE_API_VERSION environment variables and configuring the Memory client with provider settings - supports models like text-embedding-3-large with customizable dimensions and Azure-specific configurations through azure_kwargs.
[Vertex AI](https://docs.mem0.ai/components/embedders/models/google_ai): Integrate Google Cloud's Vertex AI embedding models by setting GOOGLE_APPLICATION_CREDENTIALS environment variable to your service account credentials JSON file and configuring the Memory client with provider settings - supports models like text-embedding-004 with customizable embedding types (RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY, etc.) and dimensions.
[Groq](https://docs.mem0.ai/components/embedders/models/groq): Integrate Groq embedding models by setting GROQ_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-3-small (default) and text-embedding-3-large with customizable dimensions.
[Hugging Face](https://docs.mem0.ai/components/embedders/models/hugging_face): Run Mem0 locally with Hugging Face embedding models by configuring the Memory client with provider settings like model (e.g. multi-qa-MiniLM-L6-cos-v1), embedding dimensions and model_kwargs - requires only OpenAI API key for LLM functionality.
[Ollama](https://docs.mem0.ai/components/embedders/models/ollama): Run Mem0 locally with Ollama embedding models by configuring the Memory client with provider settings like model (e.g. nomic-embed-text), embedding dimensions (default 512) and custom base URL - requires only OpenAI API key for LLM functionality.
[Gemini](https://docs.mem0.ai/components/embedders/models/gemini): Integrate Gemini embedding models by setting GOOGLE_API_KEY and configuring the Memory client with provider settings - supports models like text-embedding-004 with customizable dimensions (default 768) and requires OpenAI API key for LLM functionality.
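A sketch of swapping the embedder component, using the Ollama model and default dimensions named above; the dimension key follows the docs but should be checked against the current config schema.

```python
from mem0 import Memory

# local embeddings via Ollama; the LLM still defaults to OpenAI unless configured separately
m = Memory.from_config({
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text", "embedding_dims": 512},  # key name assumed
    }
})
```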
#### Vector Stores
[Qdrant](https://docs.mem0.ai/components/vectordbs/dbs/qdrant): Integrate Qdrant vector database by configuring the Memory client with provider settings like collection_name, host, port, and other parameters - supports both local and remote deployments with options for persistent storage and custom client configurations.
[Pinecone](https://docs.mem0.ai/components/vectordbs/dbs/pinecone): Integrate Pinecone's managed vector database by configuring the Memory client with serverless or pod deployment options, supporting high-performance vector search with customizable embedding dimensions, distance metrics, and cloud providers (AWS/GCP/Azure) - requires PINECONE_API_KEY and matching embedding model dimensions (e.g. 1536 for OpenAI).
[Milvus](https://docs.mem0.ai/components/vectordbs/dbs/milvus): Integrate Milvus open-source vector database by configuring the Memory client with provider settings like url (default localhost:19530), token (for Zilliz cloud), collection_name, embedding_model_dims (default 1536) and metric_type - supports both local and cloud deployments for AI applications of any scale.
[Weaviate](https://docs.mem0.ai/components/vectordbs/dbs/weaviate): Integrate Weaviate open-source vector search engine by configuring the Memory client with provider settings like collection_name (default: mem0), cluster_url, auth_client_secret and embedding_model_dims (default: 1536) - enables efficient storage and retrieval of high-dimensional vector embeddings.
[Chroma](https://docs.mem0.ai/components/vectordbs/dbs/chroma): Integrate Chroma AI-native open-source vector database by configuring the Memory client with provider settings like collection_name (default: mem0), path (default: db), host, port, and client - enables simple storage and search of embeddings with focus on speed and ease of use.
[Faiss](https://docs.mem0.ai/components/vectordbs/dbs/faiss): Integrate Faiss, a high-performance library for similarity search and clustering of dense vectors, by configuring the Memory client with settings like collection_name, path, and distance_strategy (euclidean/cosine/inner_product) - supports efficient local vector search with in-memory or persistent storage options and is optimized for large-scale production use.
[PGVector](https://docs.mem0.ai/components/vectordbs/dbs/pgvector): Integrate Postgres vector similarity search by configuring the Memory client with database connection settings (user, password, host, port), collection name, embedding dimensions and indexing options (diskann/hnsw) - requires creating vector extension in Postgres and supports both local and cloud deployments.
[Elasticsearch](https://docs.mem0.ai/components/vectordbs/dbs/elasticsearch): Integrate Elasticsearch vector database by configuring host, port, collection name and authentication settings - supports k-NN vector search, cloud/local deployments, custom search queries, and requires `pip install elasticsearch>=8.0.0`.
[Redis](https://docs.mem0.ai/components/vectordbs/dbs/redis): Integrate Redis vector database for real-time vector storage and search by configuring collection name, embedding dimensions (default 1536), and Redis URL - supports both local Docker deployment and remote Redis Stack instances.
[Supabase](https://docs.mem0.ai/components/vectordbs/dbs/supabase): Integrate Supabase's PostgreSQL database with pgvector extension by configuring connection string, collection name, and optional index settings (hnsw/ivfflat) - enables efficient vector similarity search with support for different distance measures (cosine/l2/l1) and requires SQL migrations to enable vector functionality.
[Azure AI Search](https://docs.mem0.ai/components/vectordbs/dbs/azure): Integrate Azure AI Search (formerly Azure Cognitive Search) by configuring service_name, api_key and collection_name - supports vector compression (none/scalar/binary), hybrid search modes, and customizable vector dimensions with automatic extraction of filterable fields like user_id.
[Vertex AI Search](https://docs.mem0.ai/components/vectordbs/dbs/vertex_ai): Integrate Google Cloud's Vertex AI Vector Search by configuring endpoint_id, index_id, deployment_index_id, project details and optional region/credentials - enables efficient vector similarity search through Google Cloud's managed service with support for both get and search operations.
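A sketch of configuring a vector store, using the Qdrant and PGVector settings named above; hosts, ports, and credentials are placeholders.

```python
from mem0 import Memory

# Qdrant running locally
m_qdrant = Memory.from_config({
    "vector_store": {
        "provider": "qdrant",
        "config": {"collection_name": "mem0", "host": "localhost", "port": 6333},
    }
})

# Postgres with the pgvector extension (requires CREATE EXTENSION vector)
m_pg = Memory.from_config({
    "vector_store": {
        "provider": "pgvector",
        "config": {
            "user": "postgres",        # placeholder credentials
            "password": "postgres",
            "host": "localhost",
            "port": 5432,
            "collection_name": "mem0",
        },
    }
})
```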