Update Docs (#2277)

This commit is contained in:
Saket Aryan
2025-03-01 06:07:05 +05:30
committed by GitHub
parent c1aba35884
commit 5606c3ffb8
30 changed files with 437 additions and 877 deletions

View File

@@ -6,26 +6,40 @@ iconType: "solid"
## How to define configurations?
<Tabs>
<Tab title="Python">
The `config` is defined as a Python dictionary with two main keys:
- `llm`: Specifies the LLM provider and its configuration
  - `provider`: The name of the LLM (e.g., "openai", "groq")
  - `config`: A nested dictionary containing provider-specific settings
</Tab>
<Tab title="TypeScript">
The `config` is defined as a TypeScript object with these keys:
- `llm`: Specifies the LLM provider and its configuration (required)
  - `provider`: The name of the LLM (e.g., "openai", "groq")
  - `config`: A nested object containing provider-specific settings
- `embedder`: Specifies the embedder provider and its configuration (optional)
- `vectorStore`: Specifies the vector store provider and its configuration (optional)
- `historyDbPath`: Path to the history database file (optional)
</Tab>
</Tabs>
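To make these keys concrete, here is a minimal Python sketch of a fuller configuration. The provider names and settings are placeholders, and the optional `embedder`, `vector_store`, and `history_db_path` keys are assumed to mirror the optional TypeScript keys described above; treat this as an illustration rather than a canonical setup.
```python
# A sketch of a fuller config. All provider names and settings below are
# placeholders; swap in the providers and models you actually use.
config = {
    "llm": {                                  # required: which LLM to use
        "provider": "openai",
        "config": {"model": "gpt-4o-mini", "temperature": 0.1},
    },
    "embedder": {                             # optional: embedding model settings
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {                         # optional: where embeddings are stored
        "provider": "qdrant",
        "config": {"collection_name": "mem0"},
    },
    "history_db_path": "./mem0_history.db",   # optional: history database location
}
```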
### Config Values Precedence
Config values are applied in the following order of precedence (from highest to lowest):
1. Values explicitly set in the `config` object/dictionary
2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
3. Default values defined in the LLM implementation
This means that values specified in the `config` will override corresponding environment variables, which in turn override default values.
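As a quick illustration of this precedence (the key names follow the standard OpenAI settings; the resolution itself happens inside the LLM implementation):
```python
import os

# An environment variable is set...
os.environ["OPENAI_API_KEY"] = "sk-from-environment"

config = {
    "llm": {
        "provider": "openai",
        "config": {
            # 1. Explicit config value: overrides OPENAI_API_KEY above.
            "api_key": "sk-from-config",
            # 2. If "api_key" were omitted, OPENAI_API_KEY would be used instead.
            # 3. If "temperature" is omitted, the provider's default value applies.
        },
    },
}
```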
## How to Use Config
Here's a general example of how to use the config with Mem0:
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -44,39 +58,70 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';

// Minimal configuration with just the LLM settings
const config = {
  llm: {
    provider: 'your_chosen_provider',
    config: {
      // Provider-specific settings go here
    }
  }
};

const memory = new Memory(config);
await memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
```
</CodeGroup>
## Why is Config Needed?
Config is essential for:
1. Specifying which LLM to use.
2. Providing necessary connection details (e.g., model, api_key, temperature).
3. Ensuring proper initialization and connection to your chosen LLM.
## Master List of All Params in Config
Here's a comprehensive list of all parameters that can be used across different LLMs:
<Tabs>
<Tab title="Python">
| Parameter | Description | Provider |
|----------------------|-----------------------------------------------|-------------------|
| `model`               | Language model to use                          | All               |
| `temperature`         | Temperature of the model                       | All               |
| `api_key`             | API key to use                                 | All               |
| `max_tokens`          | Maximum number of tokens to generate           | All               |
| `top_p`               | Probability threshold for nucleus sampling     | All               |
| `top_k`               | Number of highest probability tokens to keep   | All               |
| `http_client_proxies` | Allow proxy server settings                    | AzureOpenAI       |
| `models`              | List of models                                 | OpenRouter        |
| `route`               | Routing strategy                               | OpenRouter        |
| `openrouter_base_url` | Base URL for OpenRouter API                    | OpenRouter        |
| `site_url`            | Site URL                                       | OpenRouter        |
| `app_name`            | Application name                               | OpenRouter        |
| `ollama_base_url`     | Base URL for Ollama API                        | Ollama            |
| `openai_base_url`     | Base URL for OpenAI API                        | OpenAI            |
| `azure_kwargs`        | Azure LLM args for initialization              | AzureOpenAI       |
| `deepseek_base_url`   | Base URL for DeepSeek API                      | DeepSeek          |
| `xai_base_url`        | Base URL for xAI API                           | xAI               |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Provider |
|----------------------|-----------------------------------------------|-------------------|
| `model`               | Language model to use                          | All               |
| `temperature`         | Temperature of the model                       | All               |
| `apiKey`              | API key to use                                 | All               |
| `maxTokens`           | Maximum number of tokens to generate           | All               |
| `topP` | Probability threshold for nucleus sampling | All |
| `topK` | Number of highest probability tokens to keep | All |
| `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
</Tab>
</Tabs>
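To make the table concrete, here is a small Python sketch combining several of the common parameters; the values are illustrative, and not every provider accepts every parameter.
```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",      # language model to use
            "api_key": "your-api-key",   # or rely on the OPENAI_API_KEY env var
            "temperature": 0.2,          # sampling temperature
            "max_tokens": 1500,          # upper bound on generated tokens
            "top_p": 0.9,                # nucleus-sampling threshold
        },
    },
}

m = Memory.from_config(config)
```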
## Supported LLMs
For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.

View File

@@ -1,8 +1,13 @@
---
title: Anthropic
---
To use Anthropic's models, please set the `ANTHROPIC_API_KEY` environment variable, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -24,6 +29,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';

const config = {
  llm: {
    provider: 'anthropic',
    config: {
      apiKey: process.env.ANTHROPIC_API_KEY || '',
      model: 'claude-3-7-sonnet-latest',
      temperature: 0.1,
      maxTokens: 2000,
    },
  },
};

const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
## Config
All available parameters for the `anthropic` config are present in [Master List of All Params in Config](../config).

View File

@@ -1,10 +1,15 @@
---
title: Groq
---
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
To use LLMs from Groq, get an API key from their [platform](https://console.groq.com/keys) and set it as the `GROQ_API_KEY` environment variable, as shown in the example below.
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -26,6 +31,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';

const config = {
  llm: {
    provider: 'groq',
    config: {
      apiKey: process.env.GROQ_API_KEY || '',
      model: 'mixtral-8x7b-32768',
      temperature: 0.1,
      maxTokens: 1000,
    },
  },
};

const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
## Config
All available parameters for the `groq` config are present in [Master List of All Params in Config](../config).

View File

@@ -6,7 +6,8 @@ To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment varia
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
@@ -38,6 +39,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
```typescript TypeScript
import { Memory } from 'mem0ai/oss';

const config = {
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY || '',
      model: 'gpt-4-turbo-preview',
      temperature: 0.2,
      maxTokens: 1500,
    },
  },
};

const memory = new Memory(config);
await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
```
</CodeGroup>
We also support the new [OpenAI structured outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction) feature.
```python
@@ -59,7 +80,9 @@ config = {
m = Memory.from_config(config)
```
<Note>
OpenAI structured outputs are currently only available in the Python implementation.
</Note>
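Since the structured-outputs snippet above is truncated by the diff, here is a minimal sketch of what such a config might look like. The `openai_structured` provider name and the model choice are assumptions based on Mem0's provider naming; check the current docs or source if they differ in your version.
```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"

config = {
    "llm": {
        # "openai_structured" is an assumed provider name for structured outputs.
        "provider": "openai_structured",
        "config": {
            "model": "gpt-4o-2024-08-06",  # a model that supports structured outputs
            "temperature": 0.0,
        },
    },
}

m = Memory.from_config(config)
```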
## Config

View File

@@ -14,20 +14,24 @@ For a comprehensive list of available parameters for llm configuration, please r
To view all supported LLMs, visit the [Supported LLMs](./models) page.
<Note>
All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.
</Note>
<CardGroup cols={4}>
<Card title="OpenAI" href="/components/llms/models/openai"></Card>
<Card title="Ollama" href="/components/llms/models/ollama"></Card>
<Card title="Azure OpenAI" href="/components/llms/models/azure_openai"></Card>
<Card title="Anthropic" href="/components/llms/models/anthropic"></Card>
<Card title="Together" href="/components/llms/models/together"></Card>
<Card title="Groq" href="/components/llms/models/groq"></Card>
<Card title="Litellm" href="/components/llms/models/litellm"></Card>
<Card title="Mistral AI" href="/components/llms/models/mistral_ai"></Card>
<Card title="Google AI" href="/components/llms/models/google_ai"></Card>
<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock"></Card>
<Card title="Gemini" href="/components/llms/models/gemini"></Card>
<Card title="DeepSeek" href="/components/llms/models/deepseek"></Card>
<Card title="xAI" href="/components/llms/models/xAI"></Card>
<Card title="OpenAI" href="/components/llms/models/openai" />
<Card title="Ollama" href="/components/llms/models/ollama" />
<Card title="Azure OpenAI" href="/components/llms/models/azure_openai" />
<Card title="Anthropic" href="/components/llms/models/anthropic" />
<Card title="Together" href="/components/llms/models/together" />
<Card title="Groq" href="/components/llms/models/groq" />
<Card title="Litellm" href="/components/llms/models/litellm" />
<Card title="Mistral AI" href="/components/llms/models/mistral_ai" />
<Card title="Google AI" href="/components/llms/models/google_ai" />
<Card title="AWS bedrock" href="/components/llms/models/aws_bedrock" />
<Card title="Gemini" href="/components/llms/models/gemini" />
<Card title="DeepSeek" href="/components/llms/models/deepseek" />
<Card title="xAI" href="/components/llms/models/xAI" />
</CardGroup>
## Structured vs Unstructured Outputs