Update Docs (#2277)

This commit is contained in:
Saket Aryan
2025-03-01 06:07:05 +05:30
committed by GitHub
parent c1aba35884
commit 5606c3ffb8
30 changed files with 437 additions and 877 deletions

View File

@@ -8,16 +8,18 @@ Config in mem0 is a dictionary that specifies the settings for your embedding mo
## How to define configurations?
The config is defined as an object (or dictionary) with two main keys:
- `embedder`: Specifies the embedder provider and its configuration
- `provider`: The name of the embedder (e.g., "openai", "ollama")
- `config`: A nested object or dictionary containing provider-specific settings
## How to use configurations?
Here's a general example of how to use the config with Mem0:
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -36,6 +38,25 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
embedder: {
provider: 'openai',
config: {
apiKey: process.env.OPENAI_API_KEY || '',
model: 'text-embedding-3-small',
// Provider-specific settings go here
},
},
};
const memory = new Memory(config);
await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
```
</CodeGroup>
## Why is Config Needed?
Config is essential for:
@@ -47,6 +68,8 @@ Config is essential for:
Here's a comprehensive list of all parameters that can be used across different embedders:
<Tabs>
<Tab title="Python">
| Parameter | Description | Provider |
|-----------|-------------|----------|
| `model` | Embedding model to use | All |
@@ -61,7 +84,15 @@ Here's a comprehensive list of all parameters that can be used across different
| `memory_add_embedding_type` | The type of embedding to use for the add memory action | VertexAI |
| `memory_update_embedding_type` | The type of embedding to use for the update memory action | VertexAI |
| `memory_search_embedding_type` | The type of embedding to use for the search memory action | VertexAI |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Provider |
|-----------|-------------|----------|
| `model` | Embedding model to use | All |
| `apiKey` | API key of the provider | All |
| `embeddingDims` | Dimensions of the embedding model | All |
</Tab>
</Tabs>
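To make the mapping concrete, here is a minimal Python sketch (with placeholder values) showing how a few of the universal parameters above fit into the `embedder` config; any provider-specific parameters from the table go in the same nested `config`:
```python
from mem0 import Memory

config = {
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small",  # `model`: supported by all providers
            "embedding_dims": 1536,             # `embedding_dims`: supported by all providers
            "api_key": "your-api-key",          # `api_key`: supported by all providers (placeholder)
        },
    }
}

m = Memory.from_config(config)
```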
## Supported Embedding Models

View File

@@ -6,7 +6,8 @@ To use OpenAI embedding models, set the `OPENAI_API_KEY` environment variable. Y
### Usage
```python
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -25,12 +26,41 @@ m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
embedder: {
provider: 'openai',
config: {
apiKey: 'your-openai-api-key',
model: 'text-embedding-3-large',
},
},
};
const memory = new Memory(config);
await memory.add("I'm visiting Paris", { userId: "john" });
```
</CodeGroup>
### Config
Here are the parameters available for configuring the OpenAI embedder:
<Tabs>
<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `model` | The name of the embedding model to use | `text-embedding-3-small` |
| `embedding_dims` | Dimensions of the embedding model | `1536` |
| `api_key` | The OpenAI API key | `None` |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `model` | The name of the embedding model to use | `text-embedding-3-small` |
| `embeddingDims` | Dimensions of the embedding model | `1536` |
| `apiKey` | The OpenAI API key | `undefined` |
</Tab>
</Tabs>
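As an illustration, here is a Python sketch that overrides both defaults; `text-embedding-3-large` produces 3072-dimensional vectors, so your vector store must be sized to match:
```python
from mem0 import Memory

config = {
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-large",
            "embedding_dims": 3072,  # text-embedding-3-large emits 3072-dim vectors
        },
    }
}

m = Memory.from_config(config)
```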

View File

@@ -10,6 +10,10 @@ Mem0 offers support for various embedding models, allowing users to choose the o
See the list of supported embedders below.
<Note>
The following embedders are supported in the Python implementation. The TypeScript implementation currently only supports OpenAI.
</Note>
<CardGroup cols={4}>
<Card title="OpenAI" href="/components/embedders/models/openai"></Card>
<Card title="Azure OpenAI" href="/components/embedders/models/azure_openai"></Card>

View File

@@ -6,26 +6,40 @@ iconType: "solid"
## How to define configurations?
<Tabs>
<Tab title="Python">
The `config` is defined as a Python dictionary with two main keys:
- `llm`: Specifies the llm provider and its configuration
- `provider`: The name of the llm (e.g., "openai", "groq")
- `config`: A nested dictionary containing provider-specific settings
</Tab>
<Tab title="TypeScript">
The `config` is defined as a TypeScript object with these keys:
- `llm`: Specifies the LLM provider and its configuration (required)
- `provider`: The name of the LLM (e.g., "openai", "groq")
- `config`: A nested object containing provider-specific settings
- `embedder`: Specifies the embedder provider and its configuration (optional)
- `vectorStore`: Specifies the vector store provider and its configuration (optional)
- `historyDbPath`: Path to the history database file (optional)
</Tab>
</Tabs>
### Config Values Precedence
Config values are applied in the following order of precedence (from highest to lowest):
1. Values explicitly set in the `config` object/dictionary
2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
3. Default values defined in the LLM implementation
This means that values specified in the `config` will override corresponding environment variables, which in turn override default values.
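For example, in the following Python sketch (key values are placeholders), the explicit `api_key` wins over the `OPENAI_API_KEY` environment variable, while the omitted `temperature` falls back to the implementation default:
```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "env-key"  # precedence level 2

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "api_key": "config-key",  # precedence level 1: overrides the env var
            # "temperature" omitted -> precedence level 3: implementation default
        },
    }
}

m = Memory.from_config(config)  # the LLM client is initialized with "config-key"
```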
## How to Use Config
Here's a general example of how to use the config with Mem0:
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -44,39 +58,70 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
// Minimal configuration with just the LLM settings
const config = {
llm: {
provider: 'your_chosen_provider',
config: {
// Provider-specific settings go here
}
}
};
const memory = new Memory(config);
await memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
```
</CodeGroup>
## Why is Config Needed?
Config is essential for:
1. Specifying which LLM to use.
2. Providing necessary connection details (e.g., model, api_key, temperature).
3. Ensuring proper initialization and connection to your chosen LLM.
## Master List of All Params in Config
Here's a comprehensive list of all parameters that can be used across different LLMs:
<Tabs>
<Tab title="Python">
| Parameter | Description | Provider |
|----------------------|-----------------------------------------------|-------------------|
| `model` | Language model to use | All |
| `temperature` | Temperature of the model | All |
| `api_key` | API key to use | All |
| `max_tokens` | Tokens to generate | All |
| `top_p` | Probability threshold for nucleus sampling | All |
| `top_k` | Number of highest probability tokens to keep | All |
| `http_client_proxies`| Allow proxy server settings | AzureOpenAI |
| `models` | List of models | Openrouter |
| `route` | Routing strategy | Openrouter |
| `openrouter_base_url`| Base URL for Openrouter API | Openrouter |
| `site_url` | Site URL | Openrouter |
| `app_name` | Application name | Openrouter |
| `ollama_base_url` | Base URL for Ollama API | Ollama |
| `openai_base_url` | Base URL for OpenAI API | OpenAI |
| `azure_kwargs` | Azure LLM args for initialization | AzureOpenAI |
| `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
| `xai_base_url` | Base URL for XAI API | XAI |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Provider |
|----------------------|-----------------------------------------------|-------------------|
| `model` | Language model to use | All |
| `temperature` | Temperature of the model | All |
| `apiKey` | API key to use | All |
| `maxTokens` | Tokens to generate | All |
| `topP` | Probability threshold for nucleus sampling | All |
| `topK` | Number of highest probability tokens to keep | All |
| `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
</Tab>
</Tabs>
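As a concrete reference, a Python config combining several of the universal parameters above might look like this sketch (values are placeholders):
```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4-turbo-preview",
            "temperature": 0.2,        # `temperature`: supported by all providers
            "max_tokens": 1500,        # `max_tokens`: supported by all providers
            "top_p": 1.0,              # `top_p`: supported by all providers
            "api_key": "your-api-key", # placeholder
        },
    }
}

m = Memory.from_config(config)
```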
## Supported LLMs
For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.

View File

@@ -1,8 +1,13 @@
---
title: Anthropic
---
To use Anthropic's models, set the `ANTHROPIC_API_KEY` environment variable, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -24,6 +29,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
llm: {
provider: 'anthropic',
config: {
apiKey: process.env.ANTHROPIC_API_KEY || '',
model: 'claude-3-7-sonnet-latest',
temperature: 0.1,
maxTokens: 2000,
},
},
};
const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
## Config
All available parameters for the `anthropic` config are present in [Master List of All Params in Config](../config).

View File

@@ -1,10 +1,15 @@
---
title: Groq
---
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), delivering exceptional speed for AI workloads running on its LPU Inference Engine.
To use LLMs from Groq, get an API key from their [platform](https://console.groq.com/keys) and set it as the `GROQ_API_KEY` environment variable, as shown in the example below.
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -26,6 +31,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
llm: {
provider: 'groq',
config: {
apiKey: process.env.GROQ_API_KEY || '',
model: 'mixtral-8x7b-32768',
temperature: 0.1,
maxTokens: 1000,
},
},
};
const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
## Config
All available parameters for the `groq` config are present in [Master List of All Params in Config](../config).

View File

@@ -6,7 +6,8 @@ To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment varia
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -38,6 +39,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
llm: {
provider: 'openai',
config: {
apiKey: process.env.OPENAI_API_KEY || '',
model: 'gpt-4-turbo-preview',
temperature: 0.2,
maxTokens: 1500,
},
},
};
const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
We also support the new [OpenAI structured-outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction) feature.
```python
@@ -59,7 +80,9 @@ config = {
m = Memory.from_config(config)
```
<Note>
OpenAI structured-outputs is currently only available in the Python implementation.
</Note>
## Config

View File

@@ -14,20 +14,24 @@ For a comprehensive list of available parameters for llm configuration, please r
To view all supported LLMs, visit the [Supported LLMs](./models) page.
<Note>
All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.
</Note>
<CardGroup cols={4}>
<Card title="OpenAI" href="/components/llms/models/openai"></Card>
<Card title="Ollama" href="/components/llms/models/ollama"></Card>
<Card title="Azure OpenAI" href="/components/llms/models/azure_openai"></Card>
<Card title="Anthropic" href="/components/llms/models/anthropic"></Card>
<Card title="Together" href="/components/llms/models/together"></Card>
<Card title="Groq" href="/components/llms/models/groq"></Card>
<Card title="Litellm" href="/components/llms/models/litellm"></Card>
<Card title="Mistral AI" href="/components/llms/models/mistral_ai"></Card>
<Card title="Google AI" href="/components/llms/models/google_ai"></Card>
<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock"></Card>
<Card title="Gemini" href="/components/llms/models/gemini"></Card>
<Card title="DeepSeek" href="/components/llms/models/deepseek"></Card>
<Card title="xAI" href="/components/llms/models/xAI"></Card>
<Card title="OpenAI" href="/components/llms/models/openai" />
<Card title="Ollama" href="/components/llms/models/ollama" />
<Card title="Azure OpenAI" href="/components/llms/models/azure_openai" />
<Card title="Anthropic" href="/components/llms/models/anthropic" />
<Card title="Together" href="/components/llms/models/together" />
<Card title="Groq" href="/components/llms/models/groq" />
<Card title="Litellm" href="/components/llms/models/litellm" />
<Card title="Mistral AI" href="/components/llms/models/mistral_ai" />
<Card title="Google AI" href="/components/llms/models/google_ai" />
<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock" />
<Card title="Gemini" href="/components/llms/models/gemini" />
<Card title="DeepSeek" href="/components/llms/models/deepseek" />
<Card title="xAI" href="/components/llms/models/xAI" />
</CardGroup>
## Structured vs Unstructured Outputs

View File

@@ -6,16 +6,17 @@ iconType: "solid"
## How to define configurations?
The `config` is defined as an object with two main keys:
- `vector_store`: Specifies the vector database provider and its configuration
- `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "azure_ai_search")
- `config`: A nested object containing provider-specific settings
## How to Use Config
Here's a general example of how to use the config with Mem0:
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -34,6 +35,29 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
```typescript TypeScript
// Example for the in-memory vector database (only supported in TypeScript)
import { Memory } from 'mem0ai/oss';
const configMemory = {
vectorStore: {
provider: 'memory',
config: {
collectionName: 'memories',
dimension: 1536,
},
},
};
const memory = new Memory(configMemory);
await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
```
</CodeGroup>
<Note>
The in-memory vector database is only supported in the TypeScript implementation.
</Note>
## Why is Config Needed?
Config is essential for:
@@ -46,6 +70,8 @@ Config is essential for:
Here's a comprehensive list of all parameters that can be used across different vector databases:
<Tabs>
<Tab title="Python">
| Parameter | Description |
|-----------|-------------|
| `collection_name` | Name of the collection |
@@ -60,6 +86,24 @@ Here's a comprehensive list of all parameters that can be used across different
| `url` | Full URL for the server |
| `api_key` | API key for the server |
| `on_disk` | Enable persistent storage |
</Tab>
<Tab title="TypeScript">
| Parameter | Description |
|-----------|-------------|
| `collectionName` | Name of the collection |
| `embeddingModelDims` | Dimensions of the embedding model |
| `dimension` | Dimensions of the embedding model (for memory provider) |
| `host` | Host where the server is running |
| `port` | Port where the server is running |
| `url` | URL for the server |
| `apiKey` | API key for the server |
| `path` | Path for the database |
| `onDisk` | Enable persistent storage |
| `redisUrl` | URL for the Redis server |
| `username` | Username for database connection |
| `password` | Password for database connection |
</Tab>
</Tabs>
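For reference, here is a Python sketch (placeholder values) showing how these parameters slot into the `vector_store` config for a server-based provider:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "memories",
            "embedding_model_dims": 1536,  # must match your embedder's output size
            "host": "localhost",
            "port": 6333,
        },
    }
}

m = Memory.from_config(config)
```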
## Customizing Config

View File

@@ -2,7 +2,8 @@
### Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -23,10 +24,32 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
vectorStore: {
provider: 'qdrant',
config: {
collectionName: 'memories',
embeddingModelDims: 1536,
host: 'localhost',
port: 6333,
},
},
};
const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
### Config
Let's see the available parameters for the `qdrant` config:
<Tabs>
<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collection_name` | The name of the collection to store the vectors | `mem0` |
@@ -37,4 +60,18 @@ Let's see the available parameters for the `qdrant` config:
| `path` | Path for the qdrant database | `/tmp/qdrant` |
| `url` | Full URL for the qdrant server | `None` |
| `api_key` | API key for the qdrant server | `None` |
| `on_disk` | For enabling persistent storage | `False` |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collectionName` | The name of the collection to store the vectors | `mem0` |
| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
| `host` | The host where the Qdrant server is running | `undefined` |
| `port` | The port where the Qdrant server is running | `undefined` |
| `path` | Path for the Qdrant database | `/tmp/qdrant` |
| `url` | Full URL for the Qdrant server | `undefined` |
| `apiKey` | API key for the Qdrant server | `undefined` |
| `onDisk` | For enabling persistent storage | `false` |
</Tab>
</Tabs>
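For a remote or managed Qdrant instance, a sketch using `url` and `api_key` instead of `host`/`port` might look like this (the endpoint and key are placeholders):
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "memories",
            "url": "https://your-qdrant-instance:6333",  # placeholder endpoint
            "api_key": "your-qdrant-api-key",            # placeholder key
        },
    }
}

m = Memory.from_config(config)
```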

View File

@@ -12,7 +12,8 @@ docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:lat
### Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -34,12 +35,46 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';
const config = {
vectorStore: {
provider: 'redis',
config: {
collectionName: 'memories',
embeddingModelDims: 1536,
redisUrl: 'redis://localhost:6379',
username: 'your-redis-username',
password: 'your-redis-password',
},
},
};
const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
### Config
Let's see the available parameters for the `redis` config:
<Tabs>
<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collection_name` | The name of the collection to store the vectors | `mem0` |
| `embedding_model_dims` | Dimensions of the embedding model | `1536` |
| `redis_url` | The URL of the Redis server | `None` |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collectionName` | The name of the collection to store the vectors | `mem0` |
| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
| `redisUrl` | The URL of the Redis server | `undefined` |
| `username` | Username for Redis connection | `undefined` |
| `password` | Password for Redis connection | `undefined` |
</Tab>
</Tabs>
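The Python config has no separate `username`/`password` fields, so one option is embedding credentials in the standard Redis URL scheme, as in this sketch (credentials are placeholders):
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "redis",
        "config": {
            "collection_name": "memories",
            "embedding_model_dims": 1536,
            # Standard Redis URL syntax allows inline credentials:
            "redis_url": "redis://your-username:your-password@localhost:6379",
        },
    }
}

m = Memory.from_config(config)
```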

View File

@@ -10,6 +10,10 @@ Mem0 includes built-in support for various popular databases. Memory can utilize
See the list of supported vector databases below.
<Note>
The following vector databases are supported in the Python implementation. The TypeScript implementation currently only supports Qdrant, Redis, and the in-memory vector database.
</Note>
<CardGroup cols={3}>
<Card title="Qdrant" href="/components/vectordbs/dbs/qdrant"></Card>
<Card title="Chroma" href="/components/vectordbs/dbs/chroma"></Card>