doc: Broken links fixed in docs (#3034)

This commit is contained in:
Antaripa Saha
2025-06-25 17:18:29 +05:30
committed by GitHub
parent aaf879322c
commit a98842422b
14 changed files with 42 additions and 45 deletions


@@ -14,7 +14,9 @@ To use an LLM, you must provide a configuration to customize its usage. If no con
For a comprehensive list of available parameters for llm configuration, please refer to [Config](./config).
To view all supported llms, visit the [Supported LLMs](./models).
## Supported LLMs
See the list of supported LLMs below.
<Note>
All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.


@@ -9,9 +9,9 @@ iconType: "solid"
Memories can become outdated, irrelevant, or need to be removed for privacy or compliance reasons. Mem0 offers flexible ways to delete memory:
-1. **Delete a Single Memory** using a specific memory ID
-2. **Batch Delete** delete multiple known memory IDs (up to 1000)
-3. **Filtered Delete** delete memories matching a filter (e.g., `user_id`, `metadata`, `run_id`)
+1. **Delete a Single Memory**: Using a specific memory ID
+2. **Batch Delete**: Delete multiple known memory IDs (up to 1000)
+3. **Filtered Delete**: Delete memories matching a filter (e.g., `user_id`, `metadata`, `run_id`)
This page walks through code examples for each method.
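The three deletion paths above can be pictured as request-payload builders. This is a minimal, stdlib-only sketch; the helper names and field names are illustrative assumptions, not the official SDK surface (only the 1000-ID batch cap comes from the doc):

```python
def build_batch_delete_payload(memory_ids):
    """Shape a batch-delete body; the platform caps batches at 1000 IDs."""
    if len(memory_ids) > 1000:
        raise ValueError("Batch delete accepts at most 1000 memory IDs")
    return [{"memory_id": mid} for mid in memory_ids]

def build_filtered_delete_params(user_id=None, run_id=None, metadata=None):
    """Shape filter parameters for a filtered delete; unset filters are dropped."""
    params = {"user_id": user_id, "run_id": run_id, "metadata": metadata}
    return {k: v for k, v in params.items() if v is not None}

payload = build_batch_delete_payload(["mem_1", "mem_2"])
filters = build_filtered_delete_params(user_id="alice")
```

A single-memory delete would simply pass one known memory ID instead of a batch or a filter.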


@@ -10,8 +10,8 @@ iconType: "solid"
User preferences, interests, and behaviors often evolve over time. The `update` operation lets you revise a stored memory, whether it's updating facts and memories, rephrasing a message, or enriching metadata.
Mem0 supports both:
- **Single Memory Update** for one specific memory using its ID
- **Batch Update** for updating many memories at once (up to 1000)
This guide covers both single and batch updates of memories through the **Mem0 Platform**.
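The batch path described above can be sketched as a payload builder (stdlib-only; the helper name and the `memory_id`/`text` field names are illustrative assumptions, with only the 1000-memory cap taken from the doc):

```python
def build_batch_update_payload(updates):
    """updates: mapping of memory_id -> replacement text; capped at 1000 per batch."""
    if len(updates) > 1000:
        raise ValueError("Batch update accepts at most 1000 memories")
    return [{"memory_id": mid, "text": text} for mid, text in updates.items()]

payload = build_batch_update_payload({"mem_1": "Prefers tea over coffee"})
```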


@@ -22,7 +22,7 @@ os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
llm = OpenAI(model="gpt-4o")
```
-Initialize the Mem0 client. You can find your API key [here](https://app.mem0.ai/dashboard/). Read about Mem0 [Open Source](https://docs.mem0.ai/open-source/quickstart).
+Initialize the Mem0 client. You can find your API key [here](https://app.mem0.ai/dashboard/api-keys). Read about Mem0 [Open Source](https://docs.mem0.ai/open-source/overview).
```python
os.environ["MEM0_API_KEY"] = "<your-mem0-api-key>"


@@ -63,7 +63,7 @@ context = {
Set up your Mem0 OSS instance by providing configuration details:
<Note type="info">
-To know more about Mem0 OSS, read [Mem0 OSS Quickstart](https://docs.mem0.ai/open-source/quickstart).
+To know more about Mem0 OSS, read [Mem0 OSS Quickstart](https://docs.mem0.ai/open-source/overview).
</Note>
```python


@@ -513,7 +513,7 @@ chat_completion = client.chat.completions.create(
## APIs
-Get started with using Mem0 APIs in your applications. For more details, refer to the [Platform](/platform/quickstart.mdx).
+Get started with using Mem0 APIs in your applications. For more details, refer to the [Platform](../platform/quickstart).
Here is an example of how to use Mem0 APIs:


@@ -24,7 +24,7 @@ Add shared, persistent, low-friction memory to your MCP-compatible clients in se
Example installation: `npx @openmemory/install --client claude --env OPENMEMORY_API_KEY=your-key`
-OpenMemory is a local memory infrastructure powered by Mem0 that lets you carry your memory accross any AI app. It provides a unified memory layer that stays with you, enabling agents and assistants to remember what matters across applications.
+OpenMemory is a local memory infrastructure powered by Mem0 that lets you carry your memory across any AI app. It provides a unified memory layer that stays with you, enabling agents and assistants to remember what matters across applications.
<img src="https://github.com/user-attachments/assets/3c701757-ad82-4afa-bfbe-e049c2b4320b" alt="OpenMemory UI" />
@@ -59,7 +59,7 @@ curl -sL https://raw.githubusercontent.com/mem0ai/mem0/main/openmemory/run.sh |
```
This will start the OpenMemory server and the OpenMemory UI. Deleting the container will lead to the deletion of the memory store.
-We suggest you follow the instructions [here](/openmemory/quickstart#setting-up-openmemory) to set up OpenMemory on your local machine, with more persistant memory store.
+We suggest you follow the instructions [here](/openmemory/quickstart#setting-up-openmemory) to set up OpenMemory on your local machine, with a more persistent memory store.
## How the OpenMemory MCP Server Works


@@ -16,8 +16,6 @@ You can enable any of the following modes independently or together:
Each enhancement can be toggled independently via the `search()` API call. These flags are off by default and are useful when building agents that require fine-grained retrieval control.
---
## Keyword Search
Keyword search expands the result set by including memories that contain lexically similar terms and important keywords from the query, even if they're not semantically similar.
@@ -53,7 +51,7 @@ results = client.search(
- May slightly reduce precision
- Adds ~10ms latency
---
## Reranking
@@ -91,7 +89,7 @@ results = client.search(
- Adds ~150–200ms latency
- Higher computational cost
---
## Filtering
@@ -129,7 +127,7 @@ results = client.search(
- Adds ~200-300ms latency
- Best for focused, specific queries
---
## Combining Modes
@@ -148,7 +146,7 @@ results = client.search(
This configuration broadens the candidate pool with keywords, improves ordering via rerank, and finally cuts noise with filtering.
<Note> Combining all modes may add up to ~450ms latency per query. </Note>
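Assembling a combined call can be sketched as follows (stdlib-only; the doc names `rerank` and `filter_memories`, while `keyword_search` as the keyword-mode parameter name is an assumption, as is the helper itself):

```python
def build_search_params(query, user_id,
                        keyword_search=False, rerank=False, filter_memories=False):
    """Assemble search() keyword arguments; all three flags are off by default."""
    params = {"query": query, "user_id": user_id}
    # Include only the flags that are explicitly enabled, mirroring opt-in modes.
    if keyword_search:
        params["keyword_search"] = True
    if rerank:
        params["rerank"] = True
    if filter_memories:
        params["filter_memories"] = True
    return params

params = build_search_params("coffee preferences", "alice",
                             keyword_search=True, rerank=True, filter_memories=True)
```

Any subset of the flags can be passed; omitting all of them yields a plain semantic search.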
---
## Performance Benchmarks
@@ -158,7 +156,7 @@ This configuration broadens the candidate pool with keywords, improves ordering
| `rerank` | 150–200ms |
| `filter_memories` | 200–300ms |
---
## Best Practices & Limitations
@@ -171,6 +169,7 @@ This configuration broadens the candidate pool with keywords, improves ordering
<Note> You can enable or disable these search modes by passing the respective parameters to the `search` method. There is no required sequence for these modes, and any combination can be used based on your needs. </Note>
---
If you have any questions, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />


@@ -19,7 +19,7 @@ You define **criteria** - custom attributes like "joy", "negativity", "confidenc
This gives you nuanced, intent-aware memory search that adapts to your use case.
---
## When to Use Criteria Retrieval
@@ -29,7 +29,7 @@ Use Criteria Retrieval if:
- You want to guide memory selection based on **context**, not just content
- You have domain-specific signals like "risk", "positivity", "confidence", etc. that shape recall
---
## Setting Up Criteria Retrieval
@@ -156,7 +156,7 @@ results_without_criteria = client.search(
2. **Score Distribution**: With criteria, scores are more spread out (0.116 to 0.666) and reflect the criteria weights, while without criteria, scores are more clustered (0.336 to 0.607) and based purely on relevance.
3. **Trait Sensitivity**: “Rainy day” content is penalized due to negative tone. “Storm curiosity” is recognized and scored accordingly.
---
## Key Differences vs. Standard Search
@@ -172,7 +172,7 @@ results_without_criteria = client.search(
If no criteria are defined for a project, `version="v2"` behaves like normal search.
</Note>
---
## Best Practices
@@ -182,7 +182,7 @@ If no criteria are defined for a project, `version="v2"` behaves like normal sea
- Avoid redundant or ambiguous criteria (e.g. “positivity” + “joy”)
- Always handle empty result sets in your application logic
---
## How It Works
@@ -197,7 +197,7 @@ This lets you prioritize memories that align with your agent's goals and not just relevance.
Criteria retrieval is currently supported only in search v2. Make sure to use `version="v2"` when performing searches with custom criteria.
</Note>
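One way to picture the weighting step is a blend of base relevance with per-criterion trait scores. This is an illustrative formula only; the doc does not specify Mem0's actual scoring, so every name and the blending rule here are assumptions:

```python
def criteria_score(relevance, trait_scores, weights):
    """Blend base relevance with weighted criteria traits (illustrative only)."""
    total = sum(weights.values())
    if not total:
        # No criteria defined: fall back to plain relevance, as with normal search.
        return relevance
    weighted = sum(w * trait_scores.get(name, 0.0) for name, w in weights.items())
    return relevance * (weighted / total)

# A memory matching a high-weight "curiosity" trait outranks a neutral one.
score = criteria_score(0.6, {"curiosity": 1.0}, {"curiosity": 1.0})
```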
---
## Summary


@@ -11,34 +11,34 @@ Learn about the key features and capabilities that make Mem0 a powerful platform
## Core Features
<CardGroup>
-<Card title="Advanced Retrieval" icon="magnifying-glass" href="/features/advanced-retrieval">
+<Card title="Advanced Retrieval" icon="magnifying-glass" href="advanced-retrieval">
Superior search results using state-of-the-art algorithms, including keyword search, reranking, and filtering capabilities.
</Card>
-<Card title="Contextual Add" icon="square-plus" href="/features/contextual-add">
+<Card title="Contextual Add" icon="square-plus" href="contextual-add">
Only send your latest conversation history - we automatically retrieve the rest and generate properly contextualized memories.
</Card>
-<Card title="Multimodal Support" icon="photo-film" href="/features/multimodal-support">
+<Card title="Multimodal Support" icon="photo-film" href="multimodal-support">
Process and analyze various types of content including images.
</Card>
-<Card title="Memory Customization" icon="filter" href="/features/selective-memory">
+<Card title="Memory Customization" icon="filter" href="selective-memory">
Customize and curate stored memories to focus on relevant information while excluding unnecessary data, enabling improved accuracy, privacy control, and resource efficiency.
</Card>
-<Card title="Custom Categories" icon="tags" href="/features/custom-categories">
+<Card title="Custom Categories" icon="tags" href="custom-categories">
Create and manage custom categories to organize memories based on your specific needs and requirements.
</Card>
-<Card title="Custom Instructions" icon="list-check" href="/features/custom-instructions">
+<Card title="Custom Instructions" icon="list-check" href="custom-instructions">
Define specific guidelines for your project to ensure consistent handling of information and requirements.
</Card>
-<Card title="Direct Import" icon="message-bot" href="/features/direct-import">
+<Card title="Direct Import" icon="message-bot" href="direct-import">
Tailor the behavior of your Mem0 instance with custom prompts for specific use cases or domains.
</Card>
-<Card title="Async Client" icon="bolt" href="/features/async-client">
+<Card title="Async Client" icon="bolt" href="async-client">
Asynchronous client for non-blocking operations and high concurrency applications.
</Card>
-<Card title="Memory Export" icon="file-export" href="/features/memory-export">
+<Card title="Memory Export" icon="file-export" href="memory-export">
Export memories in structured formats using customizable Pydantic schemas.
</Card>
-<Card title="Graph Memory" icon="graph" href="/features/graph-memory">
+<Card title="Graph Memory" icon="graph" href="graph-memory">
Add memories in the form of nodes and edges in a graph database and search for related memories.
</Card>
Add memories in the form of nodes and edges in a graph database and search for related memories.
</Card>
</CardGroup>


@@ -26,11 +26,11 @@ Mem0 Platform offers a powerful, user-centric solution for AI memory management
## Getting Started
-Check out our [Platform Guide](/platform/guide) to start using Mem0 platform quickly.
+Check out our [Platform Guide](/platform/quickstart) to start using Mem0 platform quickly.
## Next Steps
- Sign up for the [Mem0 Platform](https://mem0.dev/pd)
-- Join our [Discord](https://mem0.dev/Did) or [Slack](https://mem0.dev/slack) with other developers and get support.
+- Join our [Discord](https://mem0.dev/Did) to connect with other developers and get support.
We're excited to see what you'll build with Mem0 Platform. Let's create smarter, more personalized AI experiences together!


@@ -5,9 +5,6 @@ iconType: "solid"
---
<Snippet file="paper-release.mdx" />
-<Note type="info">
-🎉 We're excited to announce that Claude 4 is now available with Mem0! Check it out [here](components/llms/models/anthropic).
-</Note>
Mem0 offers two powerful ways to leverage our technology: [our managed platform](#mem0-platform-managed-solution) and [our open source solution](#mem0-open-source).
@@ -418,7 +415,7 @@ const relatedMemories = memory.search("Should I drink coffee or tea?", { userId:
<Card title="Mem0 OSS Python SDK" icon="python" href="/open-source/python-quickstart">
Learn more about Mem0 OSS Python SDK
</Card>
-<Card title="Mem0 OSS Node.js SDK" icon="node" href="/open-source-typescript/quickstart">
+<Card title="Mem0 OSS Node.js SDK" icon="node" href="/open-source/node-quickstart">
Learn more about Mem0 OSS Node.js SDK
</Card>
</CardGroup>


@@ -92,7 +92,7 @@ You plug Mem0 into your agent framework; it doesn't replace your LLM or workflow.
## Getting Started
-Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
+Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/overview).
<CardGroup cols={3}>


@@ -2,8 +2,8 @@
Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that save costs and delight users. We offer both cloud and open-source solutions to cater to different needs.
-See the complete [OSS Docs](https://docs.mem0.ai/open-source-typescript/quickstart).
-See the complete [Platform API Reference](https://docs.mem0.ai/api-reference/overview).
+See the complete [OSS Docs](https://docs.mem0.ai/open-source/node-quickstart).
+See the complete [Platform API Reference](https://docs.mem0.ai/api-reference).
## 1. Installation
@@ -61,5 +61,4 @@ If you have any questions or need assistance, please reach out to us:
- Email: founders@mem0.ai
- [Join our discord community](https://mem0.ai/discord)
-- [Join our slack community](https://mem0.ai/slack)
-- GitHub Issues: [Report bugs or request features](https://github.com/mem0ai/mem0ai-node/issues)
+- GitHub Issues: [Report bugs or request features](https://github.com/mem0ai/mem0/issues)