Update Docs (#2277)
@@ -8,16 +8,18 @@ Config in mem0 is a dictionary that specifies the settings for your embedding mo
 
 ## How to define configurations?
 
-The config is defined as a Python dictionary with two main keys:
+The config is defined as an object (or dictionary) with two main keys:
 - `embedder`: Specifies the embedder provider and its configuration
   - `provider`: The name of the embedder (e.g., "openai", "ollama")
-  - `config`: A nested dictionary containing provider-specific settings
+  - `config`: A nested object or dictionary containing provider-specific settings
 
 
 ## How to use configurations?
 
 Here's a general example of how to use the config with mem0:
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -36,6 +38,25 @@ m = Memory.from_config(config)
 m.add("Your text here", user_id="user", metadata={"category": "example"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  embedder: {
+    provider: 'openai',
+    config: {
+      apiKey: process.env.OPENAI_API_KEY || '',
+      model: 'text-embedding-3-small',
+      // Provider-specific settings go here
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
+```
+</CodeGroup>
 
 ## Why is Config Needed?
 
 Config is essential for:
@@ -47,6 +68,8 @@ Config is essential for:
 
 Here's a comprehensive list of all parameters that can be used across different embedders:
 
+<Tabs>
+<Tab title="Python">
 | Parameter | Description | Provider |
 |-----------|-------------|----------|
 | `model` | Embedding model to use | All |
@@ -61,7 +84,15 @@ Here's a comprehensive list of all parameters that can be used across different
 | `memory_add_embedding_type` | The type of embedding to use for the add memory action | VertexAI |
 | `memory_update_embedding_type` | The type of embedding to use for the update memory action | VertexAI |
 | `memory_search_embedding_type` | The type of embedding to use for the search memory action | VertexAI |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Provider |
+|-----------|-------------|----------|
+| `model` | Embedding model to use | All |
+| `apiKey` | API key of the provider | All |
+| `embeddingDims` | Dimensions of the embedding model | All |
+</Tab>
+</Tabs>
 
 ## Supported Embedding Models
 
@@ -6,7 +6,8 @@ To use OpenAI embedding models, set the `OPENAI_API_KEY` environment variable. Y
 
 ### Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -25,12 +26,41 @@ m = Memory.from_config(config)
 m.add("I'm visiting Paris", user_id="john")
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  embedder: {
+    provider: 'openai',
+    config: {
+      apiKey: 'your-openai-api-key',
+      model: 'text-embedding-3-large',
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("I'm visiting Paris", { userId: "john" });
+```
+</CodeGroup>
 
 ### Config
 
 Here are the parameters available for configuring OpenAI embedder:
 
+<Tabs>
+<Tab title="Python">
 | Parameter | Description | Default Value |
 | --- | --- | --- |
 | `model` | The name of the embedding model to use | `text-embedding-3-small` |
 | `embedding_dims` | Dimensions of the embedding model | `1536` |
 | `api_key` | The OpenAI API key | `None` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `model` | The name of the embedding model to use | `text-embedding-3-small` |
+| `embeddingDims` | Dimensions of the embedding model | `1536` |
+| `apiKey` | The OpenAI API key | `None` |
+</Tab>
+</Tabs>
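The TypeScript table above documents an `embeddingDims` parameter that the usage example doesn't exercise. A minimal sketch of how it would plug in, assuming the value must match the model's output size (3072 for `text-embedding-3-large`):

```typescript
import { Memory } from 'mem0ai/oss';

const memory = new Memory({
  embedder: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'text-embedding-3-large',
      embeddingDims: 3072, // assumed: must match the model's embedding size
    },
  },
});
```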
@@ -10,6 +10,10 @@ Mem0 offers support for various embedding models, allowing users to choose the o
 
 See the list of supported embedders below.
 
+<Note>
+The following embedders are supported in the Python implementation. The TypeScript implementation currently only supports OpenAI.
+</Note>
+
 <CardGroup cols={4}>
 <Card title="OpenAI" href="/components/embedders/models/openai"></Card>
 <Card title="Azure OpenAI" href="/components/embedders/models/azure_openai"></Card>
@@ -6,26 +6,40 @@ iconType: "solid"
 
 ## How to define configurations?
 
-The `config` is defined as a Python dictionary with two main keys:
-- `llm`: Specifies the llm provider and its configuration
-  - `provider`: The name of the llm (e.g., "openai", "groq")
-  - `config`: A nested dictionary containing provider-specific settings
+<Tabs>
+<Tab title="Python">
+The `config` is defined as a Python dictionary with two main keys:
+- `llm`: Specifies the llm provider and its configuration
+  - `provider`: The name of the llm (e.g., "openai", "groq")
+  - `config`: A nested dictionary containing provider-specific settings
+</Tab>
+<Tab title="TypeScript">
+The `config` is defined as a TypeScript object with these keys:
+- `llm`: Specifies the LLM provider and its configuration (required)
+  - `provider`: The name of the LLM (e.g., "openai", "groq")
+  - `config`: A nested object containing provider-specific settings
+- `embedder`: Specifies the embedder provider and its configuration (optional)
+- `vectorStore`: Specifies the vector store provider and its configuration (optional)
+- `historyDbPath`: Path to the history database file (optional)
+</Tab>
+</Tabs>
 
 ### Config Values Precedence
 
 Config values are applied in the following order of precedence (from highest to lowest):
 
-1. Values explicitly set in the `config` dictionary
+1. Values explicitly set in the `config` object/dictionary
 2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
 3. Default values defined in the LLM implementation
 
-This means that values specified in the `config` dictionary will override corresponding environment variables, which in turn override default values.
+This means that values specified in the `config` will override corresponding environment variables, which in turn override default values.
 
 ## How to Use Config
 
-Here's a general example of how to use the config with mem0:
+Here's a general example of how to use the config with Mem0:
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
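The precedence rules in the hunk above are easy to demonstrate: an explicit config value should win over an environment variable, and omitted values fall back to defaults. A minimal TypeScript sketch, assuming `OPENAI_API_KEY` is set in the environment and that the provider falls back to it when no `apiKey` is given:

```typescript
import { Memory } from 'mem0ai/oss';

// OPENAI_API_KEY is set in the environment.
// The explicit apiKey below takes precedence over it (rule 1 beats rule 2);
// model is omitted, so the provider's default applies (rule 3).
const memory = new Memory({
  llm: {
    provider: 'openai',
    config: {
      apiKey: 'sk-project-scoped-key', // hypothetical key, overrides the env var
    },
  },
});
```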
@@ -44,39 +58,70 @@ m = Memory.from_config(config)
 m.add("Your text here", user_id="user", metadata={"category": "example"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+// Minimal configuration with just the LLM settings
+const config = {
+  llm: {
+    provider: 'your_chosen_provider',
+    config: {
+      // Provider-specific settings go here
+    }
+  }
+};
+
+const memory = new Memory(config);
+await memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
+```
+</CodeGroup>
 
 ## Why is Config Needed?
 
 Config is essential for:
-1. Specifying which llm to use.
+1. Specifying which LLM to use.
 2. Providing necessary connection details (e.g., model, api_key, temperature).
-3. Ensuring proper initialization and connection to your chosen llm.
+3. Ensuring proper initialization and connection to your chosen LLM.
 
 ## Master List of All Params in Config
 
-Here's a comprehensive list of all parameters that can be used across different llms:
+Here's a comprehensive list of all parameters that can be used across different LLMs:
 
-Here's the table based on the provided parameters:
+<Tabs>
+<Tab title="Python">
 | Parameter | Description | Provider |
 |----------------------|-----------------------------------------------|-------------------|
 | `model` | Embedding model to use | All |
 | `temperature` | Temperature of the model | All |
 | `api_key` | API key to use | All |
 | `max_tokens` | Tokens to generate | All |
 | `top_p` | Probability threshold for nucleus sampling | All |
 | `top_k` | Number of highest probability tokens to keep | All |
 | `http_client_proxies`| Allow proxy server settings | AzureOpenAI |
 | `models` | List of models | Openrouter |
 | `route` | Routing strategy | Openrouter |
 | `openrouter_base_url`| Base URL for Openrouter API | Openrouter |
 | `site_url` | Site URL | Openrouter |
 | `app_name` | Application name | Openrouter |
 | `ollama_base_url` | Base URL for Ollama API | Ollama |
 | `openai_base_url` | Base URL for OpenAI API | OpenAI |
 | `azure_kwargs` | Azure LLM args for initialization | AzureOpenAI |
 | `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
 | `xai_base_url` | Base URL for XAI API | XAI |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Provider |
+|----------------------|-----------------------------------------------|-------------------|
+| `model` | Embedding model to use | All |
+| `temperature` | Temperature of the model | All |
+| `apiKey` | API key to use | All |
+| `maxTokens` | Tokens to generate | All |
+| `topP` | Probability threshold for nucleus sampling | All |
+| `topK` | Number of highest probability tokens to keep | All |
+| `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
+</Tab>
+</Tabs>
 
 ## Supported LLMs
 
-For detailed information on configuring specific llms, please visit the [LLMs](./models) section. There you'll find information for each supported llm with provider-specific usage examples and configuration details.
+For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.
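The master list marks `temperature`, `maxTokens`, and `topP` as available for all providers, but no example combines them. A hedged TypeScript sketch (the parameter values are illustrative only):

```typescript
import { Memory } from 'mem0ai/oss';

const memory = new Memory({
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'gpt-4-turbo-preview',
      temperature: 0.7, // higher value = more varied output
      topP: 0.9,        // nucleus sampling threshold
      maxTokens: 500,   // cap on generated tokens
    },
  },
});
```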
@@ -1,8 +1,13 @@
+---
+title: Anthropic
+---
+
 To use anthropic's models, please set the `ANTHROPIC_API_KEY` which you find on their [Account Settings Page](https://console.anthropic.com/account/keys).
 
 ## Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -24,6 +29,26 @@ m = Memory.from_config(config)
 m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  llm: {
+    provider: 'anthropic',
+    config: {
+      apiKey: process.env.ANTHROPIC_API_KEY || '',
+      model: 'claude-3-7-sonnet-latest',
+      temperature: 0.1,
+      maxTokens: 2000,
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
 
 ## Config
 
 All available parameters for the `anthropic` config are present in [Master List of All Params in Config](../config).

@@ -1,10 +1,15 @@
+---
+title: Groq
+---
+
 [Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
 
 In order to use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key. Set the API key as `GROQ_API_KEY` environment variable to use the model as given below in the example.
 
 ## Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -26,6 +31,26 @@ m = Memory.from_config(config)
 m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  llm: {
+    provider: 'groq',
+    config: {
+      apiKey: process.env.GROQ_API_KEY || '',
+      model: 'mixtral-8x7b-32768',
+      temperature: 0.1,
+      maxTokens: 1000,
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
 
 ## Config
 
 All available parameters for the `groq` config are present in [Master List of All Params in Config](../config).
@@ -6,7 +6,8 @@ To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment varia
 
 ## Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -38,6 +39,26 @@ m = Memory.from_config(config)
 m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  llm: {
+    provider: 'openai',
+    config: {
+      apiKey: process.env.OPENAI_API_KEY || '',
+      model: 'gpt-4-turbo-preview',
+      temperature: 0.2,
+      maxTokens: 1500,
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
 
 We also support the new [OpenAI structured-outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction) model.
 
 ```python
@@ -59,7 +80,9 @@ config = {
 m = Memory.from_config(config)
 ```
 
+<Note>
+OpenAI structured-outputs is currently only available in the Python implementation.
+</Note>
 
 ## Config
 
@@ -14,20 +14,24 @@ For a comprehensive list of available parameters for llm configuration, please r
 
 To view all supported llms, visit the [Supported LLMs](./models).
 
+<Note>
+All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.
+</Note>
+
 <CardGroup cols={4}>
-<Card title="OpenAI" href="/components/llms/models/openai"></Card>
-<Card title="Ollama" href="/components/llms/models/ollama"></Card>
-<Card title="Azure OpenAI" href="/components/llms/models/azure_openai"></Card>
-<Card title="Anthropic" href="/components/llms/models/anthropic"></Card>
-<Card title="Together" href="/components/llms/models/together"></Card>
-<Card title="Groq" href="/components/llms/models/groq"></Card>
-<Card title="Litellm" href="/components/llms/models/litellm"></Card>
-<Card title="Mistral AI" href="/components/llms/models/mistral_ai"></Card>
-<Card title="Google AI" href="/components/llms/models/google_ai"></Card>
-<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock"></Card>
-<Card title="Gemini" href="/components/llms/models/gemini"></Card>
-<Card title="DeepSeek" href="/components/llms/models/deepseek"></Card>
-<Card title="xAI" href="/components/llms/models/xAI"></Card>
+<Card title="OpenAI" href="/components/llms/models/openai" />
+<Card title="Ollama" href="/components/llms/models/ollama" />
+<Card title="Azure OpenAI" href="/components/llms/models/azure_openai" />
+<Card title="Anthropic" href="/components/llms/models/anthropic" />
+<Card title="Together" href="/components/llms/models/together" />
+<Card title="Groq" href="/components/llms/models/groq" />
+<Card title="Litellm" href="/components/llms/models/litellm" />
+<Card title="Mistral AI" href="/components/llms/models/mistral_ai" />
+<Card title="Google AI" href="/components/llms/models/google_ai" />
+<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock" />
+<Card title="Gemini" href="/components/llms/models/gemini" />
+<Card title="DeepSeek" href="/components/llms/models/deepseek" />
+<Card title="xAI" href="/components/llms/models/xAI" />
 </CardGroup>
 
 ## Structured vs Unstructured Outputs
@@ -6,16 +6,17 @@ iconType: "solid"
 
 ## How to define configurations?
 
-The `config` is defined as a Python dictionary with two main keys:
+The `config` is defined as an object with two main keys:
 - `vector_store`: Specifies the vector database provider and its configuration
-  - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus","azure_ai_search")
-  - `config`: A nested dictionary containing provider-specific settings
+  - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "azure_ai_search")
+  - `config`: A nested object containing provider-specific settings
 
 ## How to Use Config
 
 Here's a general example of how to use the config with mem0:
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -34,6 +35,29 @@ m = Memory.from_config(config)
 m.add("Your text here", user_id="user", metadata={"category": "example"})
 ```
 
+```typescript TypeScript
+// Example for in-memory vector database (Only supported in TypeScript)
+import { Memory } from 'mem0ai/oss';
+
+const configMemory = {
+  vectorStore: {
+    provider: 'memory',
+    config: {
+      collectionName: 'memories',
+      dimension: 1536,
+    },
+  },
+};
+
+const memory = new Memory(configMemory);
+await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
+```
+</CodeGroup>
+
+<Note>
+The in-memory vector database is only supported in the TypeScript implementation.
+</Note>
+
 ## Why is Config Needed?
 
 Config is essential for:
@@ -46,6 +70,8 @@ Config is essential for:
 
 Here's a comprehensive list of all parameters that can be used across different vector databases:
 
+<Tabs>
+<Tab title="Python">
 | Parameter | Description |
 |-----------|-------------|
 | `collection_name` | Name of the collection |
@@ -60,6 +86,24 @@ Here's a comprehensive list of all parameters that can be used across different
 | `url` | Full URL for the server |
 | `api_key` | API key for the server |
 | `on_disk` | Enable persistent storage |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description |
+|-----------|-------------|
+| `collectionName` | Name of the collection |
+| `embeddingModelDims` | Dimensions of the embedding model |
+| `dimension` | Dimensions of the embedding model (for memory provider) |
+| `host` | Host where the server is running |
+| `port` | Port where the server is running |
+| `url` | URL for the server |
+| `apiKey` | API key for the server |
+| `path` | Path for the database |
+| `onDisk` | Enable persistent storage |
+| `redisUrl` | URL for the Redis server |
+| `username` | Username for database connection |
+| `password` | Password for database connection |
+</Tab>
+</Tabs>
 
 ## Customizing Config
 
@@ -2,7 +2,8 @@
 
 ### Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -23,10 +24,32 @@ m = Memory.from_config(config)
 m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  vectorStore: {
+    provider: 'qdrant',
+    config: {
+      collectionName: 'memories',
+      embeddingModelDims: 1536,
+      host: 'localhost',
+      port: 6333,
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
 
 ### Config
 
 Let's see the available parameters for the `qdrant` config:
 
+<Tabs>
+<Tab title="Python">
 | Parameter | Description | Default Value |
 | --- | --- | --- |
 | `collection_name` | The name of the collection to store the vectors | `mem0` |
@@ -37,4 +60,18 @@ Let's see the available parameters for the `qdrant` config:
 | `path` | Path for the qdrant database | `/tmp/qdrant` |
 | `url` | Full URL for the qdrant server | `None` |
 | `api_key` | API key for the qdrant server | `None` |
 | `on_disk` | For enabling persistent storage | `False` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `collectionName` | The name of the collection to store the vectors | `mem0` |
+| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
+| `host` | The host where the Qdrant server is running | `None` |
+| `port` | The port where the Qdrant server is running | `None` |
+| `path` | Path for the Qdrant database | `/tmp/qdrant` |
+| `url` | Full URL for the Qdrant server | `None` |
+| `apiKey` | API key for the Qdrant server | `None` |
+| `onDisk` | For enabling persistent storage | `False` |
+</Tab>
+</Tabs>
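The TypeScript table lists `url` and `apiKey`, which the usage example (local `host`/`port`) doesn't show. For a hosted Qdrant instance, a sketch along these lines should apply (the endpoint and key are placeholders):

```typescript
import { Memory } from 'mem0ai/oss';

const memory = new Memory({
  vectorStore: {
    provider: 'qdrant',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 1536,
      url: 'https://your-cluster.qdrant.example.com', // placeholder endpoint
      apiKey: process.env.QDRANT_API_KEY || '',       // replaces host/port for hosted instances
    },
  },
});
```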
@@ -12,7 +12,8 @@ docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:lat
 
 ### Usage
 
-```python
+<CodeGroup>
+```python Python
 import os
 from mem0 import Memory
 
@@ -34,12 +35,46 @@ m = Memory.from_config(config)
 m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  vectorStore: {
+    provider: 'redis',
+    config: {
+      collectionName: 'memories',
+      embeddingModelDims: 1536,
+      redisUrl: 'redis://localhost:6379',
+      username: 'your-redis-username',
+      password: 'your-redis-password',
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
 
 ### Config
 
 Let's see the available parameters for the `redis` config:
 
+<Tabs>
+<Tab title="Python">
 | Parameter | Description | Default Value |
 | --- | --- | --- |
 | `collection_name` | The name of the collection to store the vectors | `mem0` |
 | `embedding_model_dims` | Dimensions of the embedding model | `1536` |
 | `redis_url` | The URL of the Redis server | `None` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `collectionName` | The name of the collection to store the vectors | `mem0` |
+| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
+| `redisUrl` | The URL of the Redis server | `None` |
+| `username` | Username for Redis connection | `None` |
+| `password` | Password for Redis connection | `None` |
+</Tab>
+</Tabs>
@@ -10,6 +10,10 @@ Mem0 includes built-in support for various popular databases. Memory can utilize
 
 See the list of supported vector databases below.
 
+<Note>
+The following vector databases are supported in the Python implementation. The TypeScript implementation currently only supports Qdrant, Redis, and the in-memory vector database.
+</Note>
+
 <CardGroup cols={3}>
 <Card title="Qdrant" href="/components/vectordbs/dbs/qdrant"></Card>
 <Card title="Chroma" href="/components/vectordbs/dbs/chroma"></Card>
@@ -65,6 +65,7 @@
 "pages": [
 "open-source/quickstart",
 "open-source/python-quickstart",
+"open-source-typescript/quickstart",
 {
 "group": "Features",
 "icon": "wrench",
@@ -151,67 +152,6 @@
 ]
 }
 ]
-},
-{
-"group": "Node.js",
-"icon": "js",
-"pages": [
-"open-source-typescript/quickstart",
-{
-"group": "Features",
-"icon": "wrench",
-"pages": [
-"open-source-typescript/features/custom-prompts"
-]
-},
-{
-"group": "LLMs",
-"icon": "brain",
-"pages": [
-"open-source-typescript/components/llms/overview",
-"open-source-typescript/components/llms/config",
-{
-"group": "Supported LLMs",
-"icon": "list",
-"pages": [
-"open-source-typescript/components/llms/models/openai",
-"open-source-typescript/components/llms/models/anthropic",
-"open-source-typescript/components/llms/models/groq"
-]
-}
-]
-},{
-"group": "Vector Databases",
-"icon": "database",
-"pages": [
-"open-source-typescript/components/vectordbs/overview",
-"open-source-typescript/components/vectordbs/config",
-{
-"group": "Supported Vector Databases",
-"icon": "server",
-"pages": [
-"open-source-typescript/components/vectordbs/dbs/qdrant",
-"open-source-typescript/components/vectordbs/dbs/redis"
-]
-}
-]
-},
-{
-"group": "Embedding Models",
-"icon": "layer-group",
-"pages": [
-"open-source-typescript/components/embedders/overview",
-"open-source-typescript/components/embedders/config",
-{
-"group": "Supported Embedding Models",
-"icon": "list",
-"pages": [
-"open-source-typescript/components/embedders/models/openai"
-]
-}
-]
-}
-]
 }
 ]
 }
@@ -17,7 +17,8 @@ To create an effective custom prompt:
 
 Example of a custom prompt:
 
-```python
+<CodeGroup>
+```python Python
 custom_prompt = """
 Please only extract entities containing customer support information, order details, and user information.
 Here are some few shot examples:
@@ -39,12 +40,37 @@ Output: {{"facts" : ["Ordered red shirt, size medium", "Received blue shirt inst
 
 Return the facts and customer information in a json format as shown above.
 """
 
 ```
 
-Here we initialize the custom prompt in the config.
+```typescript TypeScript
+const customPrompt = `
+Please only extract entities containing customer support information, order details, and user information.
+Here are some few shot examples:
+
-```python
+Input: Hi.
+Output: {"facts" : []}
+
+Input: The weather is nice today.
+Output: {"facts" : []}
+
+Input: My order #12345 hasn't arrived yet.
+Output: {"facts" : ["Order #12345 not received"]}
+
+Input: I am John Doe, and I would like to return the shoes I bought last week.
+Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}
+
+Input: I ordered a red shirt, size medium, but received a blue one instead.
+Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}
+
+Return the facts and customer information in a json format as shown above.
+`;
+```
+</CodeGroup>
+
+Here we initialize the custom prompt in the config:
+
+<CodeGroup>
+```python Python
 from mem0 import Memory
 
 config = {
@@ -63,15 +89,40 @@ config = {
 m = Memory.from_config(config_dict=config, user_id="alice")
 ```
 
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  version: 'v1.1',
+  llm: {
+    provider: 'openai',
+    config: {
+      apiKey: process.env.OPENAI_API_KEY || '',
+      model: 'gpt-4-turbo-preview',
+      temperature: 0.2,
+      maxTokens: 1500,
+    },
+  },
+  customPrompt: customPrompt
+};
+
+const memory = new Memory(config);
+```
+</CodeGroup>
 
 ### Example 1
 
 In this example, we are adding a memory of a user ordering a laptop. As seen in the output, the custom prompt is used to extract the relevant information from the user's message.
 
 <CodeGroup>
-```python Code
+```python Python
 m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
 ```
 
+```typescript TypeScript
+await memory.add('Yesterday, I ordered a laptop, the order id is 12345', { userId: "user123" });
+```
+
 ```json Output
 {
 "results": [
@@ -97,11 +148,16 @@ m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
 
 In this example, we are adding a memory of a user liking to go on hikes. This add message is not specific to the use-case mentioned in the custom prompt.
 Hence, the memory is not added.
 
 <CodeGroup>
-```python Code
+```python Python
 m.add("I like going to hikes", user_id="alice")
 ```
 
+```typescript TypeScript
+await memory.add('I like going to hikes', { userId: "user123" });
+```
+
 ```json Output
 {
 "results": [],
@@ -109,3 +165,5 @@ m.add("I like going to hikes", user_id="alice")
 }
 ```
 </CodeGroup>
+
+The custom prompt will process both the user and assistant messages to extract relevant information according to the defined format.
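Since the closing note says the prompt processes both user and assistant messages, a whole exchange can be passed to `add`. A sketch, assuming the TypeScript `add` accepts an array of `{ role, content }` messages as the Python SDK does:

```typescript
// Hypothetical support exchange; both turns go through the custom prompt.
const messages = [
  { role: 'user', content: 'My order #98765 arrived damaged.' },
  { role: 'assistant', content: 'Sorry about that! A replacement for order #98765 has been requested.' },
];

await memory.add(messages, { userId: 'user123' });
```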
@@ -1,56 +0,0 @@
----
-title: Configurations
-icon: "gear"
-iconType: "solid"
----
-
-Config in Mem0 is a dictionary that specifies the settings for your embedding models. It allows you to customize the behavior and connection details of your chosen embedder.
-
-## How to define configurations?
-
-The config is defined as a TypeScript object with two main keys:
-- `embedder`: Specifies the embedder provider and its configuration
-  - `provider`: The name of the embedder (e.g., "openai", "ollama")
-  - `config`: A nested object containing provider-specific settings
-
-## How to use configurations?
-
-Here's a general example of how to use the config with Mem0:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  embedder: {
-    provider: 'openai',
-    config: {
-      apiKey: 'your-openai-api-key',
-      model: 'text-embedding-3-small',
-    },
-  },
-};
-
-const memory = new Memory(config);
-await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
-```
-
-## Why is Config Needed?
-
-Config is essential for:
-1. Specifying which embedding model to use.
-2. Providing necessary connection details (e.g., model, api_key, embedding_dims).
-3. Ensuring proper initialization and connection to your chosen embedder.
-
-## Master List of All Params in Config
-
-Here's a comprehensive list of all parameters that can be used across different embedders:
-
-| Parameter | Description |
-|------------------------|--------------------------------------------------|
-| `model` | Embedding model to use |
-| `apiKey` | API key of the provider |
-| `embeddingDims` | Dimensions of the embedding model |
-
-## Supported Embedding Models
-
-For detailed information on configuring specific embedders, please visit the [Embedding Models](./models) section. There you'll find information for each supported embedder with provider-specific usage examples and configuration details.
@@ -1,36 +0,0 @@
----
-title: OpenAI
----
-
-To use OpenAI embedding models, you need to provide the API key directly in your configuration. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
-
-### Usage
-
-Here's how to configure OpenAI embedding models in your application:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  embedder: {
-    provider: 'openai',
-    config: {
-      apiKey: 'your-openai-api-key',
-      model: 'text-embedding-3-large',
-    },
-  },
-};
-
-const memory = new Memory(config);
-await memory.add("I'm visiting Paris", { userId: "john" });
-```
-
-### Config
-
-Here are the parameters available for configuring the OpenAI embedder:
-
-| Parameter | Description | Default Value |
-|------------------------|--------------------------------------------------|---------------|
-| `model` | The name of the embedding model to use | `text-embedding-3-small` |
-| `embedding_dims` | Dimensions of the embedding model | `1536` |
-| `api_key` | The OpenAI API key | `None` |
@@ -1,21 +0,0 @@
----
-title: Overview
-icon: "info"
-iconType: "solid"
----
-
-Mem0 offers support for various embedding models, allowing users to choose the one that best suits their needs.
-
-## Supported Embedders
-
-See the list of supported embedders below.
-
-<CardGroup cols={1}>
-<Card title="OpenAI" href="/components/embedders/models/openai"></Card>
-</CardGroup>
-
-## Usage
-
-To utilize a embedder, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `OpenAI` will be used as the embedder.
-
-For a comprehensive list of available parameters for embedder configuration, please refer to [Config](./config).
@@ -1,85 +0,0 @@
----
-title: Configurations
-icon: "gear"
-iconType: "solid"
----
-
-## How to define configurations?
-
-The `config` is defined as a TypeScript object with two main keys:
-- `llm`: Specifies the LLM provider and its configuration
-  - `provider`: The name of the LLM (e.g., "openai", "groq")
-  - `config`: A nested object containing provider-specific settings
-
-### Config Values Precedence
-
-Config values are applied in the following order of precedence (from highest to lowest):
-
-1. Values explicitly set in the `config` object
-2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
-3. Default values defined in the LLM implementation
-
-This means that values specified in the `config` object will override corresponding environment variables, which in turn override default values.
-
-## How to Use Config
-
-Here's a general example of how to use the config with Mem0:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  llm: {
-    provider: 'openai',
-    config: {
-      apiKey: process.env.OPENAI_API_KEY || '',
-      model: 'gpt-4-turbo-preview',
-      temperature: 0.2,
-      maxTokens: 1500,
-    },
-  },
-  embedder: {
-    provider: 'openai',
-    config: {
-      apiKey: process.env.OPENAI_API_KEY || '',
-      model: 'text-embedding-3-small',
-    },
-  },
-  vectorStore: {
-    provider: 'memory',
-    config: {
-      collectionName: 'memories',
-      dimension: 1536,
-    },
-  },
-  historyDbPath: 'memory.db',
-};
-
-const memory = new Memory(config);
-memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
-```
-
-## Why is Config Needed?
-
-Config is essential for:
-1. Specifying which LLM to use.
-2. Providing necessary connection details (e.g., model, api_key, temperature).
-3. Ensuring proper initialization and connection to your chosen LLM.
-
-## Master List of All Params in Config
-
-Here's a comprehensive list of all parameters that can be used across different LLMs:
-
-| Parameter | Description | Provider |
-|----------------------|-----------------------------------------------|-------------------|
-| `model` | Embedding model to use | All |
-| `temperature` | Temperature of the model | All |
-| `apiKey` | API key to use | All |
-| `maxTokens` | Tokens to generate | All |
-| `topP` | Probability threshold for nucleus sampling | All |
-| `topK` | Number of highest probability tokens to keep | All |
-| `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
-
-## Supported LLMs
-
-For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.
@@ -1,30 +0,0 @@
----
-title: Anthropic
----
-
-To use Anthropic's models, please set the `ANTHROPIC_API_KEY`, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  llm: {
-    provider: 'anthropic',
-    config: {
-      apiKey: process.env.ANTHROPIC_API_KEY || '',
-      model: 'claude-3-7-sonnet-latest',
-      temperature: 0.1,
-      maxTokens: 2000,
-    },
-  },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `anthropic` config are present in the [Master List of All Params in Config](../config).
@@ -1,32 +0,0 @@
----
-title: Groq
----
-
-[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
-
-In order to use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key. Set the API key as `GROQ_API_KEY` environment variable to use the model as given below in the example.
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  llm: {
-    provider: 'groq',
-    config: {
-      apiKey: process.env.GROQ_API_KEY || '',
-      model: 'mixtral-8x7b-32768',
-      temperature: 0.1,
-      maxTokens: 1000,
-    },
-  },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `groq` config are present in the [Master List of All Params in Config](../config).
@@ -1,30 +0,0 @@
----
-title: OpenAI
----
-
-To use OpenAI LLM models, you need to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
-  llm: {
-    provider: 'openai',
-    config: {
-      apiKey: process.env.OPENAI_API_KEY || '',
-      model: 'gpt-4-turbo-preview',
-      temperature: 0.2,
-      maxTokens: 1500,
-    },
-  },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `openai` config are present in the [Master List of All Params in Config](../config).
@@ -1,45 +0,0 @@
----
-title: Overview
-icon: "info"
-iconType: "solid"
----
-
-Mem0 includes built-in support for various popular large language models. Memory can utilize the LLM provided by the user, ensuring efficient use for specific needs.
-
-## Usage
-
-To use a llm, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `OpenAI` will be used as the llm.
-
-For a comprehensive list of available parameters for llm configuration, please refer to [Config](./config).
-
-To view all supported llms, visit the [Supported LLMs](./models).
-
-<CardGroup cols={3}>
-<Card title="OpenAI" href="/open-source-typescript/components/llms/models/openai"></Card>
-<Card title="Anthropic" href="/open-source-typescript/components/llms/models/anthropic"></Card>
-<Card title="Groq" href="/open-source-typescript/components/llms/models/groq"></Card>
-</CardGroup>
-
-## Structured vs Unstructured Outputs
-
-Mem0 supports two types of OpenAI LLM formats, each with its own strengths and use cases:
-
-### Structured Outputs
-
-Structured outputs are LLMs that align with OpenAI's structured outputs model:
-
-- **Optimized for:** Returning structured responses (e.g., JSON objects)
-- **Benefits:** Precise, easily parseable data
-- **Ideal for:** Data extraction, form filling, API responses
-- **Learn more:** [OpenAI Structured Outputs Guide](https://platform.openai.com/docs/guides/structured-outputs/introduction)
-
-### Unstructured Outputs
-
-Unstructured outputs correspond to OpenAI's standard, free-form text model:
-
-- **Flexibility:** Returns open-ended, natural language responses
-- **Customization:** Use the `response_format` parameter to guide output
-- **Trade-off:** Less efficient than structured outputs for specific data needs
-- **Best for:** Creative writing, explanations, general conversation
-
-Choose the format that best suits your application's requirements for optimal performance and usability.
@@ -1,100 +0,0 @@
---
title: Configurations
icon: "gear"
iconType: "solid"
---

## How to define configurations?

The `config` is defined as a TypeScript object with two main keys:
- `vectorStore`: Specifies the vector database provider and its configuration
  - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "azure_ai_search", "redis", "memory")
  - `config`: A nested object containing provider-specific settings

## In-Memory Storage Option

We also support an in-memory storage option for the vector store, which is useful for reduced overhead and faster access times. Here's how to configure it:

### Example for In-Memory Storage

```typescript
import { Memory } from 'mem0ai/oss';

const configMemory = {
  vectorStore: {
    provider: 'memory',
    config: {
      collectionName: 'memories',
      dimension: 1536, // must match your embedding model's output dimension
    },
  },
};

const memory = new Memory(configMemory);
await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
```

## How to Use Config

Here's a general example of how to use the config with Mem0:

### Example for Qdrant

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  vectorStore: {
    provider: 'qdrant',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 1536,
      // Use host/port for a local server, or url/apiKey for a hosted instance.
      host: 'localhost',
      port: 6333,
      url: 'https://your-qdrant-url.com',
      apiKey: 'your-qdrant-api-key',
      onDisk: true, // enable persistent storage
    },
  },
};

const memory = new Memory(config);
await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
```

## Why is Config Needed?

Config is essential for:
1. Specifying which vector database to use.
2. Providing necessary connection details (e.g., host, port, credentials).
3. Customizing database-specific settings (e.g., collection name, path).
4. Ensuring proper initialization and connection to your chosen vector store.

## Master List of All Params in Config

Here's a comprehensive list of all parameters that can be used across different vector databases:

| Parameter | Description |
|------------------------|--------------------------------------|
| `collectionName` | Name of the collection |
| `dimension` | Dimensions of the embedding model (in-memory store) |
| `host` | Host where the server is running |
| `port` | Port where the server is running |
| `embeddingModelDims` | Dimensions of the embedding model |
| `url` | URL for the Qdrant server |
| `apiKey` | API key for the Qdrant server |
| `path` | Path for the Qdrant database |
| `onDisk` | Enable persistent storage (for Qdrant) |
| `redisUrl` | URL for the Redis server |
| `username` | Username for Redis connection |
| `password` | Password for Redis connection |

## Customizing Config

Each vector database has its own specific configuration requirements. To customize the config for your chosen vector store:

1. Identify the vector database you want to use from the [supported vector databases](./dbs).
2. Refer to the `Config` section in the respective vector database's documentation.
3. Include only the parameters relevant to your chosen database in the `config` object.

## Supported Vector Databases

For detailed information on configuring specific vector databases, please visit the [Supported Vector Databases](./dbs) section. There you'll find individual pages for each supported vector store with provider-specific usage examples and configuration details.
@@ -1,44 +0,0 @@
[pgvector](https://github.com/pgvector/pgvector) is open-source vector similarity search for Postgres. After connecting to Postgres, run `CREATE EXTENSION IF NOT EXISTS vector;` to create the vector extension.
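
For example, with a local Postgres instance and the `psql` client, enabling the extension might look like this (a sketch; the database name and credentials are placeholders):

```bash
psql -h localhost -U postgres -d vectordb -c "CREATE EXTENSION IF NOT EXISTS vector;"
```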

### Usage

Here's how to configure pgvector in your application:

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  vectorStore: {
    provider: 'pgvector',
    config: {
      collectionName: 'memories',
      dimension: 1536,
      dbname: 'vectordb',
      user: 'postgres',
      password: 'postgres',
      host: 'localhost',
      port: 5432,
      embeddingModelDims: 1536,
      hnsw: true, // enable HNSW indexing for faster approximate search
    },
  },
};

const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```

### Config

Here are the parameters available for configuring pgvector:

| Parameter | Description | Default Value |
|------------------------|--------------------------------------------------|---------------|
| `dbname` | The name of the database | `postgres` |
| `collectionName` | The name of the collection | `mem0` |
| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
| `user` | Username to connect to the database | `None` |
| `password` | Password to connect to the database | `None` |
| `host` | The host where the Postgres server is running | `None` |
| `port` | The port where the Postgres server is running | `None` |
| `hnsw` | Enable HNSW indexing | `False` |
@@ -1,42 +0,0 @@
[Qdrant](https://qdrant.tech/) is an open-source vector search engine. It is designed to work with large-scale datasets and provides a high-performance search engine for vector data.

### Usage

Here's how to configure Qdrant in your application:

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  vectorStore: {
    provider: 'qdrant',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 1536,
      // Use host/port for a local server, or url/apiKey for a hosted instance.
      host: 'localhost',
      port: 6333,
      url: 'https://your-qdrant-url.com',
      apiKey: 'your-qdrant-api-key',
      onDisk: true, // enable persistent storage
    },
  },
};

const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```

### Config

Let's see the available parameters for the `qdrant` config:

| Parameter | Description | Default Value |
|------------------------|--------------------------------------------------|---------------|
| `collectionName` | The name of the collection to store the vectors | `mem0` |
| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
| `host` | The host where the Qdrant server is running | `None` |
| `port` | The port where the Qdrant server is running | `None` |
| `path` | Path for the Qdrant database | `/tmp/qdrant` |
| `url` | Full URL for the Qdrant server | `None` |
| `apiKey` | API key for the Qdrant server | `None` |
| `onDisk` | For enabling persistent storage | `False` |
@@ -1,47 +0,0 @@
[Redis](https://redis.io/) is a scalable, real-time database that can store, search, and analyze vector data.

### Installation

```bash
pip install redis redisvl
```

Run Redis Stack using Docker:

```bash
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```

### Usage

Here's how to configure Redis in your application:

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  vectorStore: {
    provider: 'redis',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 1536,
      redisUrl: 'redis://localhost:6379',
      username: 'your-redis-username',
      password: 'your-redis-password',
    },
  },
};

const memoryRedis = new Memory(config);
await memoryRedis.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```

### Config

Let's see the available parameters for the `redis` config:

| Parameter | Description | Default Value |
|------------------------|--------------------------------------------------|---------------|
| `collectionName` | The name of the collection to store the vectors | `mem0` |
| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
| `redisUrl` | The URL of the Redis server | `None` |
| `username` | Username for Redis connection | `None` |
| `password` | Password for Redis connection | `None` |
@@ -1,36 +0,0 @@
---
title: Overview
icon: "info"
iconType: "solid"
---

Mem0 includes built-in support for various popular vector databases. Memory can utilize the database provided by the user, ensuring efficient use for specific needs.

## Supported Vector Databases

See the list of supported vector databases below.

<CardGroup cols={2}>
  <Card title="Memory" href="/components/vectordbs/dbs/memory"></Card>
  <Card title="Qdrant" href="/components/vectordbs/dbs/qdrant"></Card>
  <Card title="Redis" href="/components/vectordbs/dbs/redis"></Card>
  <Card title="Pgvector" href="/components/vectordbs/dbs/pgvector"></Card>
</CardGroup>

## Usage

To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Memory` (the in-memory store) will be used as the vector database.

For a comprehensive list of available parameters for vector database configuration, please refer to [Config](./config).

## Common issues

### Using a model with different dimensions

If you are using a custom model whose dimensions differ from 1536 (for example, 768), you may encounter the following error:

`ValueError: shapes (0,1536) and (768,) not aligned: 1536 (dim 1) != 768 (dim 0)`

You can add `"embedding_model_dims": 768,` to the `vector_store` config to resolve this issue.
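
For example, a Qdrant config for a 768-dimension model might look like this (a minimal sketch; note that the TypeScript OSS config uses the camelCase key `embeddingModelDims`, as listed in the provider tables above, while `embedding_model_dims` is the snake_case equivalent):

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  vectorStore: {
    provider: 'qdrant',
    config: {
      collectionName: 'memories',
      embeddingModelDims: 768, // must match your embedding model's output dimension
      host: 'localhost',
      port: 6333,
    },
  },
};

const memory = new Memory(config);
```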
@@ -1,141 +0,0 @@
---
title: Custom Prompts
description: 'Enhance your product experience by adding custom prompts tailored to your needs'
icon: "pencil"
iconType: "solid"
---

## Introduction to Custom Prompts

Custom prompts allow you to tailor the behavior of your Mem0 instance to specific use cases or domains.
By defining a custom prompt, you can control how information is extracted, processed, and stored in your memory system.

To create an effective custom prompt:
1. Be specific about the information to extract.
2. Provide few-shot examples to guide the LLM.
3. Ensure the examples follow the format shown below.

Example of a custom prompt:

```typescript
const customPrompt = `
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:

Input: Hi.
Output: {"facts" : []}

Input: The weather is nice today.
Output: {"facts" : []}

Input: My order #12345 hasn't arrived yet.
Output: {"facts" : ["Order #12345 not received"]}

Input: I'm John Doe, and I'd like to return the shoes I bought last week.
Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}

Input: I ordered a red shirt, size medium, but received a blue one instead.
Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}

Return the facts and customer information in a json format as shown above.
`;
```

Here we initialize the custom prompt in the config:

```typescript
import { Memory } from 'mem0ai/oss';

const config = {
  version: 'v1.1',
  embedder: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'text-embedding-3-small',
    },
  },
  vectorStore: {
    provider: 'memory',
    config: {
      collectionName: 'memories',
      dimension: 1536,
    },
  },
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'gpt-4-turbo-preview',
      temperature: 0.2,
      maxTokens: 1500,
    },
  },
  customPrompt: customPrompt, // the prompt defined above
  historyDbPath: 'memory.db',
};

const memory = new Memory(config);
```

### Example 1

In this example, we are adding a memory of a user ordering a laptop. As seen in the output, the custom prompt is used to extract the relevant information from the user's message.

<CodeGroup>
```typescript Code
await memory.add('Yesterday, I ordered a laptop, the order id is 12345', { userId: "user123" });
```

```json Output
{
  "results": [
    {
      "id": "c03c9045-df76-4949-bbc5-d5dc1932aa5c",
      "memory": "Ordered a laptop",
      "metadata": {}
    },
    {
      "id": "cbb1fe73-0bf1-4067-8c1f-63aa53e7b1a4",
      "memory": "Order ID: 12345",
      "metadata": {}
    },
    {
      "id": "e5f2a012-3b45-4c67-9d8e-123456789abc",
      "memory": "Order placed yesterday",
      "metadata": {}
    }
  ]
}
```
</CodeGroup>

### Example 2

In this example, we add a memory of a user who likes to go on hikes. Because this message is not relevant to the use case defined in the custom prompt, no memory is added.

<CodeGroup>
```typescript Code
await memory.add('I like going to hikes', { userId: "user123" });
```

```json Output
{
  "results": []
}
```
</CodeGroup>

You can also use custom prompts with chat messages:

```typescript
const messages = [
  { role: 'user', content: 'Hi, I ordered item #54321 last week but haven\'t received it yet.' },
  { role: 'assistant', content: 'I understand you\'re concerned about your order #54321. Let me help track that for you.' }
];

await memory.add(messages, { userId: "user123" });
```

The custom prompt will process both the user and assistant messages to extract relevant information according to the defined format.
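
For the conversation above, the extracted memories might look like the following (illustrative output only, not from a real run):

```json
{
  "results": [
    { "memory": "Ordered item #54321 last week", "metadata": {} },
    { "memory": "Item #54321 not received", "metadata": {} }
  ]
}
```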
@@ -1,5 +1,5 @@
 ---
-title: Node.js Guide
+title: Node SDK
 description: 'Get started with Mem0 quickly!'
 icon: "node"
 iconType: "solid"
@@ -1,5 +1,5 @@
 ---
-title: Python Guide
+title: Python SDK
 description: 'Get started with Mem0 quickly!'
 icon: "python"
 iconType: "solid"