diff --git a/docs/components/llms/overview.mdx b/docs/components/llms/overview.mdx
index 3f65d6ad..41b5fd8c 100644
--- a/docs/components/llms/overview.mdx
+++ b/docs/components/llms/overview.mdx
@@ -14,7 +14,9 @@ To use a llm, you must provide a configuration to customize its usage. If no con
 
 For a comprehensive list of available parameters for llm configuration, please refer to [Config](./config).
 
-To view all supported llms, visit the [Supported LLMs](./models).
+## Supported LLMs
+
+See the list of supported LLMs below. All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.
diff --git a/docs/core-concepts/memory-operations/delete.mdx b/docs/core-concepts/memory-operations/delete.mdx
index a803450d..bdfd3563 100644
--- a/docs/core-concepts/memory-operations/delete.mdx
+++ b/docs/core-concepts/memory-operations/delete.mdx
@@ -9,9 +9,9 @@ iconType: "solid"
 
 Memories can become outdated, irrelevant, or need to be removed for privacy or compliance reasons. Mem0 offers flexible ways to delete memory:
 
-1. **Delete a Single Memory** – using a specific memory ID
-2. **Batch Delete** – delete multiple known memory IDs (up to 1000)
-3. **Filtered Delete** – delete memories matching a filter (e.g., `user_id`, `metadata`, `run_id`)
+1. **Delete a Single Memory**: Using a specific memory ID
+2. **Batch Delete**: Delete multiple known memory IDs (up to 1000)
+3. **Filtered Delete**: Delete memories matching a filter (e.g., `user_id`, `metadata`, `run_id`)
 
 This page walks through code example for each method.
diff --git a/docs/core-concepts/memory-operations/update.mdx b/docs/core-concepts/memory-operations/update.mdx
index dde13d15..94d22c3a 100644
--- a/docs/core-concepts/memory-operations/update.mdx
+++ b/docs/core-concepts/memory-operations/update.mdx
@@ -10,8 +10,8 @@ iconType: "solid"
 
 User preferences, interests, and behaviors often evolve over time. The `update` operation lets you revise a stored memory, whether it's updating facts and memories, rephrasing a message, or enriching metadata. Mem0 supports both:
 
-- **Single Memory Update** – for one specific memory using its ID
-- **Batch Update** – for updating many memories at once (up to 1000)
+- **Single Memory Update** for one specific memory using its ID
+- **Batch Update** for updating many memories at once (up to 1000)
 
 This guide includes usage for both single update and batch update of memories through **Mem0 Platform**
diff --git a/docs/examples/llama-index-mem0.mdx b/docs/examples/llama-index-mem0.mdx
index 73743292..7712ad45 100644
--- a/docs/examples/llama-index-mem0.mdx
+++ b/docs/examples/llama-index-mem0.mdx
@@ -22,7 +22,7 @@ os.environ["OPENAI_API_KEY"] = ""
 llm = OpenAI(model="gpt-4o")
 ```
 
-Initialize the Mem0 client. You can find your API key [here](https://app.mem0.ai/dashboard/). Read about Mem0 [Open Source](https://docs.mem0.ai/open-source/quickstart).
+Initialize the Mem0 client. You can find your API key [here](https://app.mem0.ai/dashboard/api-keys). Read about Mem0 [Open Source](https://docs.mem0.ai/open-source/overview).
 
 ```python
 os.environ["MEM0_API_KEY"] = ""
diff --git a/docs/integrations/llama-index.mdx b/docs/integrations/llama-index.mdx
index 1e3aabdf..725b91f9 100644
--- a/docs/integrations/llama-index.mdx
+++ b/docs/integrations/llama-index.mdx
@@ -63,7 +63,7 @@ context = {
 
 Set your Mem0 OSS by providing configuration details:
 
- To know more about Mem0 OSS, read [Mem0 OSS Quickstart](https://docs.mem0.ai/open-source/quickstart).
+ To know more about Mem0 OSS, read [Mem0 OSS Quickstart](https://docs.mem0.ai/open-source/overview).
 
 ```python
diff --git a/docs/open-source/python-quickstart.mdx b/docs/open-source/python-quickstart.mdx
index edc7e4c7..a53907c5 100644
--- a/docs/open-source/python-quickstart.mdx
+++ b/docs/open-source/python-quickstart.mdx
@@ -513,7 +513,7 @@ chat_completion = client.chat.completions.create(
 
 ## APIs
 
-Get started with using Mem0 APIs in your applications. For more details, refer to the [Platform](/platform/quickstart.mdx).
+Get started with using Mem0 APIs in your applications. For more details, refer to the [Platform](../platform/quickstart).
 
 Here is an example of how to use Mem0 APIs:
diff --git a/docs/openmemory/overview.mdx b/docs/openmemory/overview.mdx
index 31fc7b10..a5f50cca 100644
--- a/docs/openmemory/overview.mdx
+++ b/docs/openmemory/overview.mdx
@@ -24,7 +24,7 @@ Add shared, persistent, low-friction memory to your MCP-compatible clients in se
 
 Example installation: `npx @openmemory/install --client claude --env OPENMEMORY_API_KEY=your-key`
 
-OpenMemory is a local memory infrastructure powered by Mem0 that lets you carry your memory accross any AI app. It provides a unified memory layer that stays with you, enabling agents and assistants to remember what matters across applications.
+OpenMemory is a local memory infrastructure powered by Mem0 that lets you carry your memory across any AI app. It provides a unified memory layer that stays with you, enabling agents and assistants to remember what matters across applications.
 
 OpenMemory UI
@@ -59,7 +59,7 @@ curl -sL https://raw.githubusercontent.com/mem0ai/mem0/main/openmemory/run.sh |
 ```
 This will start the OpenMemory server and the OpenMemory UI. Deleting the container will lead to the deletion of the memory store.
-We suggest you follow the instructions [here](/openmemory/quickstart#setting-up-openmemory) to set up OpenMemory on your local machine, with more persistant memory store.
+We suggest you follow the instructions [here](/openmemory/quickstart#setting-up-openmemory) to set up OpenMemory on your local machine, with more persistent memory store.
 
 ## How the OpenMemory MCP Server Works
diff --git a/docs/platform/features/advanced-retrieval.mdx b/docs/platform/features/advanced-retrieval.mdx
index e25ce1a7..1e01ae4b 100644
--- a/docs/platform/features/advanced-retrieval.mdx
+++ b/docs/platform/features/advanced-retrieval.mdx
@@ -16,8 +16,6 @@ You can enable any of the following modes independently or together:
 
 Each enhancement can be toggled independently via the `search()` API call. These flags are off by default. These are useful when building agents that require fine-grained retrieval control
 
----
-
 ## Keyword Search
 
 Keyword search expands the result set by including memories that contain lexically similar terms and important keywords from the query, even if they're not semantically similar.
@@ -53,7 +51,7 @@ results = client.search(
 - May slightly reduce precision
 - Adds ~10ms latency
 
----
+
 ## Reranking
@@ -91,7 +89,7 @@ results = client.search(
 - Adds ~150–200ms latency
 - Higher computational cost
 
----
+
 ## Filtering
@@ -129,7 +127,7 @@ results = client.search(
 - Adds ~200-300ms latency
 - Best for focused, specific queries
 
----
+
 ## Combining Modes
@@ -148,7 +146,7 @@ results = client.search(
 
 This configuration broadens the candidate pool with keywords, improves ordering via rerank, and finally cuts noise with filtering.
 
 Combining all modes may add up to ~450ms latency per query.
----
+
 ## Performance Benchmarks
@@ -158,7 +156,7 @@ This configuration broadens the candidate pool with keywords, improves ordering
 | `rerank` | 150–200ms |
 | `filter_memories`| 200–300ms |
 
----
+
 ## Best Practices & Limitations
@@ -171,6 +169,7 @@ This configuration broadens the candidate pool with keywords, improves ordering
 You can enable or disable these search modes by passing the respective parameters to the `search` method. There is no required sequence for these modes, and any combination can be used based on your needs.
 
+---
 
 If you have any questions, please feel free to reach out to us using one of the following methods:
diff --git a/docs/platform/features/criteria-retrieval.mdx b/docs/platform/features/criteria-retrieval.mdx
index a6d495a3..8ac02801 100644
--- a/docs/platform/features/criteria-retrieval.mdx
+++ b/docs/platform/features/criteria-retrieval.mdx
@@ -19,7 +19,7 @@ You define **criteria** - custom attributes like "joy", "negativity", "confidenc
 
 This gives you nuanced, intent-aware memory search that adapts to your use case.
 
----
+
 ## When to Use Criteria Retrieval
@@ -29,7 +29,7 @@ Use Criteria Retrieval if:
 - You want to guide memory selection based on **context**, not just content
 - You have domain-specific signals like "risk", "positivity", "confidence", etc. that shape recall
 
----
+
 ## Setting Up Criteria Retrieval
@@ -156,7 +156,7 @@ results_without_criteria = client.search(
 2. **Score Distribution**: With criteria, scores are more spread out (0.116 to 0.666) and reflect the criteria weights, while without criteria, scores are more clustered (0.336 to 0.607) and based purely on relevance.
 3. **Trait Sensitivity**: “Rainy day” content is penalized due to negative tone. “Storm curiosity” is recognized and scored accordingly.
 
----
+
 ## Key Differences vs. Standard Search
@@ -172,7 +172,7 @@ results_without_criteria = client.search(
 
 If no criteria are defined for a project, `version="v2"` behaves like normal search.
 
----
+
 ## Best Practices
@@ -182,7 +182,7 @@ If no criteria are defined for a project, `version="v2"` behaves like normal sea
 - Avoid redundant or ambiguous criteria (e.g. “positivity” + “joy”)
 - Always handle empty result sets in your application logic
 
----
+
 ## How It Works
@@ -197,7 +197,7 @@ This lets you prioritize memories that align with your agent’s goals and not j
 
 Criteria retrieval is currently supported only in search v2. Make sure to use `version="v2"` when performing searches with custom criteria.
 
----
+
 ## Summary
diff --git a/docs/platform/features/platform-overview.mdx b/docs/platform/features/platform-overview.mdx
index 9e28b313..401e2d0d 100644
--- a/docs/platform/features/platform-overview.mdx
+++ b/docs/platform/features/platform-overview.mdx
@@ -11,34 +11,34 @@ Learn about the key features and capabilities that make Mem0 a powerful platform
 
 ## Core Features
 
-
+
 Superior search results using state-of-the-art algorithms, including keyword search, reranking, and filtering capabilities.
-
+
 Only send your latest conversation history - we automatically retrieve the rest and generate properly contextualized memories.
-
+
 Process and analyze various types of content including images.
-
+
 Customize and curate stored memories to focus on relevant information while excluding unnecessary data, enabling improved accuracy, privacy control, and resource efficiency.
-
+
 Create and manage custom categories to organize memories based on your specific needs and requirements.
-
+
 Define specific guidelines for your project to ensure consistent handling of information and requirements.
-
+
 Tailor the behavior of your Mem0 instance with custom prompts for specific use cases or domains.
-
+
 Asynchronous client for non-blocking operations and high concurrency applications.
-
+
 Export memories in structured formats using customizable Pydantic schemas.
-
+
 Add memories in the form of nodes and edges in a graph database and search for related memories.
diff --git a/docs/platform/overview.mdx b/docs/platform/overview.mdx
index 267cd153..37b6d278 100644
--- a/docs/platform/overview.mdx
+++ b/docs/platform/overview.mdx
@@ -26,11 +26,11 @@ Mem0 Platform offers a powerful, user-centric solution for AI memory management
 
 ## Getting Started
 
-Check out our [Platform Guide](/platform/guide) to start using Mem0 platform quickly.
+Check out our [Platform Guide](/platform/quickstart) to start using Mem0 platform quickly.
 
 ## Next Steps
 
 - Sign up to the [Mem0 Platform](https://mem0.dev/pd)
-- Join our [Discord](https://mem0.dev/Did) or [Slack](https://mem0.dev/slack) with other developers and get support.
+- Join our [Discord](https://mem0.dev/Did) with other developers and get support.
 
 We're excited to see what you'll build with Mem0 Platform. Let's create smarter, more personalized AI experiences together!
diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index 3f3d99fb..fc5e1e39 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -5,9 +5,6 @@ iconType: "solid"
 ---
 
-
-🎉 We're excited to announce that Claude 4 is now available with Mem0! Check it out [here](components/llms/models/anthropic).
-
 
 Mem0 offers two powerful ways to leverage our technology: [our managed platform](#mem0-platform-managed-solution) and [our open source solution](#mem0-open-source).
@@ -418,7 +415,7 @@ const relatedMemories = memory.search("Should I drink coffee or tea?", { userId:
 Learn more about Mem0 OSS Python SDK
-
+
 Learn more about Mem0 OSS Node.js SDK
\ No newline at end of file
diff --git a/docs/what-is-mem0.mdx b/docs/what-is-mem0.mdx
index b69e934e..f98c848d 100644
--- a/docs/what-is-mem0.mdx
+++ b/docs/what-is-mem0.mdx
@@ -92,7 +92,7 @@ You plug Mem0 into your agent framework, it doesn’t replace your LLM or workfl
 
 ## Getting Started
 
-Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
+Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/overview).
diff --git a/mem0-ts/README.md b/mem0-ts/README.md
index e0b7c157..8abb07a9 100644
--- a/mem0-ts/README.md
+++ b/mem0-ts/README.md
@@ -2,8 +2,8 @@
 Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that save costs and delight users. We offer both cloud and open-source solutions to cater to different needs.
 
-See the complete [OSS Docs](https://docs.mem0.ai/open-source-typescript/quickstart).
-See the complete [Platform API Reference](https://docs.mem0.ai/api-reference/overview).
+See the complete [OSS Docs](https://docs.mem0.ai/open-source/node-quickstart).
+See the complete [Platform API Reference](https://docs.mem0.ai/api-reference).
 
 ## 1. Installation
@@ -61,5 +61,4 @@ If you have any questions or need assistance, please reach out to us:
 
 - Email: founders@mem0.ai
 - [Join our discord community](https://mem0.ai/discord)
-- [Join our slack community](https://mem0.ai/slack)
-- GitHub Issues: [Report bugs or request features](https://github.com/mem0ai/mem0ai-node/issues)
+- GitHub Issues: [Report bugs or request features](https://github.com/mem0ai/mem0/issues)