diff --git a/docs/components/embedders/config.mdx b/docs/components/embedders/config.mdx
index ad8ffbab..60047daa 100644
--- a/docs/components/embedders/config.mdx
+++ b/docs/components/embedders/config.mdx
@@ -8,16 +8,18 @@ Config in mem0 is a dictionary that specifies the settings for your embedding mo
## How to define configurations?
-The config is defined as a Python dictionary with two main keys:
+The config is defined as an object (or dictionary) with two main keys:
- `embedder`: Specifies the embedder provider and its configuration
- `provider`: The name of the embedder (e.g., "openai", "ollama")
- - `config`: A nested dictionary containing provider-specific settings
+ - `config`: A nested object or dictionary containing provider-specific settings
+
## How to use configurations?
Here's a general example of how to use the config with mem0:
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -36,6 +38,25 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ embedder: {
+ provider: 'openai',
+ config: {
+ apiKey: process.env.OPENAI_API_KEY || '',
+ model: 'text-embedding-3-small',
+ // Provider-specific settings go here
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
+```
+</CodeGroup>
+
## Why is Config Needed?
Config is essential for:
@@ -47,6 +68,8 @@ Config is essential for:
Here's a comprehensive list of all parameters that can be used across different embedders:
+<Tabs>
+<Tab title="Python">
| Parameter | Description | Provider |
|-----------|-------------|----------|
| `model` | Embedding model to use | All |
@@ -61,7 +84,15 @@ Here's a comprehensive list of all parameters that can be used across different
| `memory_add_embedding_type` | The type of embedding to use for the add memory action | VertexAI |
| `memory_update_embedding_type` | The type of embedding to use for the update memory action | VertexAI |
| `memory_search_embedding_type` | The type of embedding to use for the search memory action | VertexAI |
-
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Provider |
+|-----------|-------------|----------|
+| `model` | Embedding model to use | All |
+| `apiKey` | API key of the provider | All |
+| `embeddingDims` | Dimensions of the embedding model | All |
+</Tab>
+</Tabs>
## Supported Embedding Models
diff --git a/docs/components/embedders/models/openai.mdx b/docs/components/embedders/models/openai.mdx
index 4946b94c..47a0e611 100644
--- a/docs/components/embedders/models/openai.mdx
+++ b/docs/components/embedders/models/openai.mdx
@@ -6,7 +6,8 @@ To use OpenAI embedding models, set the `OPENAI_API_KEY` environment variable. Y
### Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -25,12 +26,41 @@ m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ embedder: {
+ provider: 'openai',
+ config: {
+ apiKey: 'your-openai-api-key',
+ model: 'text-embedding-3-large',
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("I'm visiting Paris", { userId: "john" });
+```
+</CodeGroup>
+
### Config
Here are the parameters available for configuring OpenAI embedder:
+<Tabs>
+<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `model` | The name of the embedding model to use | `text-embedding-3-small` |
| `embedding_dims` | Dimensions of the embedding model | `1536` |
| `api_key` | The OpenAI API key | `None` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `model` | The name of the embedding model to use | `text-embedding-3-small` |
+| `embeddingDims` | Dimensions of the embedding model | `1536` |
+| `apiKey` | The OpenAI API key | `None` |
+</Tab>
+</Tabs>
diff --git a/docs/components/embedders/overview.mdx b/docs/components/embedders/overview.mdx
index 86a14cc7..b5c57ffb 100644
--- a/docs/components/embedders/overview.mdx
+++ b/docs/components/embedders/overview.mdx
@@ -10,6 +10,10 @@ Mem0 offers support for various embedding models, allowing users to choose the o
See the list of supported embedders below.
+<Note>
+  The following embedders are supported in the Python implementation. The TypeScript implementation currently only supports OpenAI.
+</Note>
+
diff --git a/docs/components/llms/config.mdx b/docs/components/llms/config.mdx
index f3caa654..c829d29a 100644
--- a/docs/components/llms/config.mdx
+++ b/docs/components/llms/config.mdx
@@ -6,26 +6,40 @@ iconType: "solid"
## How to define configurations?
-The `config` is defined as a Python dictionary with two main keys:
-- `llm`: Specifies the llm provider and its configuration
- - `provider`: The name of the llm (e.g., "openai", "groq")
- - `config`: A nested dictionary containing provider-specific settings
+<Tabs>
+<Tab title="Python">
+ The `config` is defined as a Python dictionary with two main keys:
+ - `llm`: Specifies the llm provider and its configuration
+ - `provider`: The name of the llm (e.g., "openai", "groq")
+ - `config`: A nested dictionary containing provider-specific settings
+</Tab>
+<Tab title="TypeScript">
+ The `config` is defined as a TypeScript object with these keys:
+ - `llm`: Specifies the LLM provider and its configuration (required)
+ - `provider`: The name of the LLM (e.g., "openai", "groq")
+ - `config`: A nested object containing provider-specific settings
+ - `embedder`: Specifies the embedder provider and its configuration (optional)
+ - `vectorStore`: Specifies the vector store provider and its configuration (optional)
+ - `historyDbPath`: Path to the history database file (optional)
+</Tab>
+</Tabs>
### Config Values Precedence
Config values are applied in the following order of precedence (from highest to lowest):
-1. Values explicitly set in the `config` dictionary
+1. Values explicitly set in the `config` object/dictionary
2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
3. Default values defined in the LLM implementation
-This means that values specified in the `config` dictionary will override corresponding environment variables, which in turn override default values.
+This means that values specified in the `config` will override corresponding environment variables, which in turn override default values.
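+
+For instance, if `OPENAI_API_KEY` is set in the environment and the config also sets `apiKey`, the config value wins. A minimal sketch of the idea (key values are placeholders):
+
+```typescript TypeScript
+// Assumes OPENAI_API_KEY is set in the environment (e.g. 'sk-from-env').
+const config = {
+  llm: {
+    provider: 'openai',
+    config: {
+      // Explicitly set, so it overrides the OPENAI_API_KEY environment variable
+      apiKey: 'sk-from-config',
+      // `model` is omitted, so the default from the LLM implementation applies
+    },
+  },
+};
+```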
## How to Use Config
-Here's a general example of how to use the config with mem0:
+Here's a general example of how to use the config with Mem0:
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -44,39 +58,70 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+// Minimal configuration with just the LLM settings
+const config = {
+ llm: {
+ provider: 'your_chosen_provider',
+ config: {
+ // Provider-specific settings go here
+ }
+ }
+};
+
+const memory = new Memory(config);
+await memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
+```
+</CodeGroup>
+
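+The optional `embedder`, `vectorStore`, and `historyDbPath` keys can be combined in the same object. Here's a fuller TypeScript sketch (all values are illustrative):
+
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  llm: {
+    provider: 'openai',
+    config: {
+      apiKey: process.env.OPENAI_API_KEY || '',
+      model: 'gpt-4-turbo-preview',
+      temperature: 0.2,
+      maxTokens: 1500,
+    },
+  },
+  embedder: {
+    provider: 'openai',
+    config: {
+      apiKey: process.env.OPENAI_API_KEY || '',
+      model: 'text-embedding-3-small',
+    },
+  },
+  vectorStore: {
+    provider: 'memory',
+    config: {
+      collectionName: 'memories',
+      dimension: 1536,
+    },
+  },
+  historyDbPath: 'memory.db',
+};
+
+const memory = new Memory(config);
+```
+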
## Why is Config Needed?
Config is essential for:
-1. Specifying which llm to use.
+1. Specifying which LLM to use.
2. Providing necessary connection details (e.g., model, api_key, temperature).
-3. Ensuring proper initialization and connection to your chosen llm.
+3. Ensuring proper initialization and connection to your chosen LLM.
## Master List of All Params in Config
-Here's a comprehensive list of all parameters that can be used across different llms:
+Here's a comprehensive list of all parameters that can be used across different LLMs:
-Here's the table based on the provided parameters:
-
-| Parameter | Description | Provider |
-|----------------------|-----------------------------------------------|-------------------|
-| `model` | Embedding model to use | All |
-| `temperature` | Temperature of the model | All |
-| `api_key` | API key to use | All |
-| `max_tokens` | Tokens to generate | All |
-| `top_p` | Probability threshold for nucleus sampling | All |
-| `top_k` | Number of highest probability tokens to keep | All |
-| `http_client_proxies`| Allow proxy server settings | AzureOpenAI |
-| `models` | List of models | Openrouter |
-| `route` | Routing strategy | Openrouter |
-| `openrouter_base_url`| Base URL for Openrouter API | Openrouter |
-| `site_url` | Site URL | Openrouter |
-| `app_name` | Application name | Openrouter |
-| `ollama_base_url` | Base URL for Ollama API | Ollama |
-| `openai_base_url` | Base URL for OpenAI API | OpenAI |
-| `azure_kwargs` | Azure LLM args for initialization | AzureOpenAI |
-| `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
-| `xai_base_url` | Base URL for XAI API | XAI |
+<Tabs>
+<Tab title="Python">
+ | Parameter | Description | Provider |
+ |----------------------|-----------------------------------------------|-------------------|
+ | `model`              | Language model to use                         | All               |
+ | `temperature` | Temperature of the model | All |
+ | `api_key` | API key to use | All |
+ | `max_tokens` | Tokens to generate | All |
+ | `top_p` | Probability threshold for nucleus sampling | All |
+ | `top_k` | Number of highest probability tokens to keep | All |
+ | `http_client_proxies`| Allow proxy server settings | AzureOpenAI |
+ | `models` | List of models | Openrouter |
+ | `route` | Routing strategy | Openrouter |
+ | `openrouter_base_url`| Base URL for Openrouter API | Openrouter |
+ | `site_url` | Site URL | Openrouter |
+ | `app_name` | Application name | Openrouter |
+ | `ollama_base_url` | Base URL for Ollama API | Ollama |
+ | `openai_base_url` | Base URL for OpenAI API | OpenAI |
+ | `azure_kwargs` | Azure LLM args for initialization | AzureOpenAI |
+ | `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
+ | `xai_base_url` | Base URL for XAI API | XAI |
+</Tab>
+<Tab title="TypeScript">
+ | Parameter | Description | Provider |
+ |----------------------|-----------------------------------------------|-------------------|
+ | `model`              | Language model to use                         | All               |
+ | `temperature` | Temperature of the model | All |
+ | `apiKey` | API key to use | All |
+ | `maxTokens` | Tokens to generate | All |
+ | `topP` | Probability threshold for nucleus sampling | All |
+ | `topK` | Number of highest probability tokens to keep | All |
+ | `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
+</Tab>
+</Tabs>
## Supported LLMs
-For detailed information on configuring specific llms, please visit the [LLMs](./models) section. There you'll find information for each supported llm with provider-specific usage examples and configuration details.
+For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.
diff --git a/docs/components/llms/models/anthropic.mdx b/docs/components/llms/models/anthropic.mdx
index a981bc89..abfdbbd5 100644
--- a/docs/components/llms/models/anthropic.mdx
+++ b/docs/components/llms/models/anthropic.mdx
@@ -1,8 +1,13 @@
+---
+title: Anthropic
+---
+
To use anthropic's models, please set the `ANTHROPIC_API_KEY` which you find on their [Account Settings Page](https://console.anthropic.com/account/keys).
## Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -24,6 +29,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ llm: {
+ provider: 'anthropic',
+ config: {
+ apiKey: process.env.ANTHROPIC_API_KEY || '',
+ model: 'claude-3-7-sonnet-latest',
+ temperature: 0.1,
+ maxTokens: 2000,
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
+
## Config
All available parameters for the `anthropic` config are present in [Master List of All Params in Config](../config).
\ No newline at end of file
diff --git a/docs/components/llms/models/groq.mdx b/docs/components/llms/models/groq.mdx
index b0b7531d..5f9ab866 100644
--- a/docs/components/llms/models/groq.mdx
+++ b/docs/components/llms/models/groq.mdx
@@ -1,10 +1,15 @@
+---
+title: Groq
+---
+
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
In order to use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key. Set the API key as `GROQ_API_KEY` environment variable to use the model as given below in the example.
## Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -26,6 +31,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ llm: {
+ provider: 'groq',
+ config: {
+ apiKey: process.env.GROQ_API_KEY || '',
+ model: 'mixtral-8x7b-32768',
+ temperature: 0.1,
+ maxTokens: 1000,
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
+
## Config
All available parameters for the `groq` config are present in [Master List of All Params in Config](../config).
\ No newline at end of file
diff --git a/docs/components/llms/models/openai.mdx b/docs/components/llms/models/openai.mdx
index da3419c2..c1e07504 100644
--- a/docs/components/llms/models/openai.mdx
+++ b/docs/components/llms/models/openai.mdx
@@ -6,7 +6,8 @@ To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment varia
## Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -38,6 +39,26 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ llm: {
+ provider: 'openai',
+ config: {
+ apiKey: process.env.OPENAI_API_KEY || '',
+ model: 'gpt-4-turbo-preview',
+ temperature: 0.2,
+ maxTokens: 1500,
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
+
We also support the new [OpenAI structured-outputs](https://platform.openai.com/docs/guides/structured-outputs/introduction) model.
```python
@@ -59,7 +80,9 @@ config = {
m = Memory.from_config(config)
```
-
+<Note>
+  OpenAI structured-outputs is currently only available in the Python implementation.
+</Note>
+
## Config
diff --git a/docs/components/llms/overview.mdx b/docs/components/llms/overview.mdx
index d17b162a..f48a22b1 100644
--- a/docs/components/llms/overview.mdx
+++ b/docs/components/llms/overview.mdx
@@ -14,20 +14,24 @@ For a comprehensive list of available parameters for llm configuration, please r
To view all supported llms, visit the [Supported LLMs](./models).
+<Note>
+  All LLMs are supported in Python. The following LLMs are also supported in TypeScript: **OpenAI**, **Anthropic**, and **Groq**.
+</Note>
+
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
+
+
## Structured vs Unstructured Outputs
diff --git a/docs/components/vectordbs/config.mdx b/docs/components/vectordbs/config.mdx
index 40537916..61275a21 100644
--- a/docs/components/vectordbs/config.mdx
+++ b/docs/components/vectordbs/config.mdx
@@ -6,16 +6,17 @@ iconType: "solid"
## How to define configurations?
-The `config` is defined as a Python dictionary with two main keys:
+The `config` is defined as an object with two main keys:
- `vector_store`: Specifies the vector database provider and its configuration
- - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus","azure_ai_search")
- - `config`: A nested dictionary containing provider-specific settings
+ - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "azure_ai_search")
+ - `config`: A nested object containing provider-specific settings
## How to Use Config
Here's a general example of how to use the config with mem0:
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -34,6 +35,29 @@ m = Memory.from_config(config)
m.add("Your text here", user_id="user", metadata={"category": "example"})
```
+```typescript TypeScript
+// Example for the in-memory vector database (only supported in TypeScript)
+import { Memory } from 'mem0ai/oss';
+
+const configMemory = {
+ vectorStore: {
+ provider: 'memory',
+ config: {
+ collectionName: 'memories',
+ dimension: 1536,
+ },
+ },
+};
+
+const memory = new Memory(configMemory);
+await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
+```
+</CodeGroup>
+
+<Note>
+  The in-memory vector database is only supported in the TypeScript implementation.
+</Note>
+
## Why is Config Needed?
Config is essential for:
@@ -46,6 +70,8 @@ Config is essential for:
Here's a comprehensive list of all parameters that can be used across different vector databases:
+<Tabs>
+<Tab title="Python">
| Parameter | Description |
|-----------|-------------|
| `collection_name` | Name of the collection |
@@ -60,6 +86,24 @@ Here's a comprehensive list of all parameters that can be used across different
| `url` | Full URL for the server |
| `api_key` | API key for the server |
| `on_disk` | Enable persistent storage |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description |
+|-----------|-------------|
+| `collectionName` | Name of the collection |
+| `embeddingModelDims` | Dimensions of the embedding model |
+| `dimension` | Dimensions of the embedding model (for memory provider) |
+| `host` | Host where the server is running |
+| `port` | Port where the server is running |
+| `url` | URL for the server |
+| `apiKey` | API key for the server |
+| `path` | Path for the database |
+| `onDisk` | Enable persistent storage |
+| `redisUrl` | URL for the Redis server |
+| `username` | Username for database connection |
+| `password` | Password for database connection |
+</Tab>
+</Tabs>
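+
+As a concrete TypeScript example, here's how several of these parameters map onto a Qdrant configuration (the URL and API key are placeholders):
+
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+  vectorStore: {
+    provider: 'qdrant',
+    config: {
+      collectionName: 'memories',
+      embeddingModelDims: 1536,
+      host: 'localhost',
+      port: 6333,
+      url: 'https://your-qdrant-url.com',
+      apiKey: 'your-qdrant-api-key',
+      onDisk: true,
+    },
+  },
+};
+
+const memory = new Memory(config);
+await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
+```
+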
## Customizing Config
diff --git a/docs/components/vectordbs/dbs/qdrant.mdx b/docs/components/vectordbs/dbs/qdrant.mdx
index 6097588b..1c6f0076 100644
--- a/docs/components/vectordbs/dbs/qdrant.mdx
+++ b/docs/components/vectordbs/dbs/qdrant.mdx
@@ -2,7 +2,8 @@
### Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -23,10 +24,32 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ vectorStore: {
+ provider: 'qdrant',
+ config: {
+ collectionName: 'memories',
+ embeddingModelDims: 1536,
+ host: 'localhost',
+ port: 6333,
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
+
### Config
Let's see the available parameters for the `qdrant` config:
+<Tabs>
+<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collection_name` | The name of the collection to store the vectors | `mem0` |
@@ -37,4 +60,18 @@ Let's see the available parameters for the `qdrant` config:
| `path` | Path for the qdrant database | `/tmp/qdrant` |
| `url` | Full URL for the qdrant server | `None` |
| `api_key` | API key for the qdrant server | `None` |
-| `on_disk` | For enabling persistent storage | `False` |
\ No newline at end of file
+| `on_disk` | For enabling persistent storage | `False` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `collectionName` | The name of the collection to store the vectors | `mem0` |
+| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
+| `host` | The host where the Qdrant server is running | `None` |
+| `port` | The port where the Qdrant server is running | `None` |
+| `path` | Path for the Qdrant database | `/tmp/qdrant` |
+| `url` | Full URL for the Qdrant server | `None` |
+| `apiKey` | API key for the Qdrant server | `None` |
+| `onDisk` | For enabling persistent storage | `False` |
+</Tab>
+</Tabs>
\ No newline at end of file
diff --git a/docs/components/vectordbs/dbs/redis.mdx b/docs/components/vectordbs/dbs/redis.mdx
index 58030c03..039a1aff 100644
--- a/docs/components/vectordbs/dbs/redis.mdx
+++ b/docs/components/vectordbs/dbs/redis.mdx
@@ -12,7 +12,8 @@ docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:lat
### Usage
-```python
+<CodeGroup>
+```python Python
import os
from mem0 import Memory
@@ -34,12 +35,46 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ vectorStore: {
+ provider: 'redis',
+ config: {
+ collectionName: 'memories',
+ embeddingModelDims: 1536,
+ redisUrl: 'redis://localhost:6379',
+ username: 'your-redis-username',
+ password: 'your-redis-password',
+ },
+ },
+};
+
+const memory = new Memory(config);
+await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
+```
+</CodeGroup>
+
### Config
Let's see the available parameters for the `redis` config:
+<Tabs>
+<Tab title="Python">
| Parameter | Description | Default Value |
| --- | --- | --- |
| `collection_name` | The name of the collection to store the vectors | `mem0` |
| `embedding_model_dims` | Dimensions of the embedding model | `1536` |
-| `redis_url` | The URL of the Redis server | `None` |
\ No newline at end of file
+| `redis_url` | The URL of the Redis server | `None` |
+</Tab>
+<Tab title="TypeScript">
+| Parameter | Description | Default Value |
+| --- | --- | --- |
+| `collectionName` | The name of the collection to store the vectors | `mem0` |
+| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
+| `redisUrl` | The URL of the Redis server | `None` |
+| `username` | Username for Redis connection | `None` |
+| `password` | Password for Redis connection | `None` |
+</Tab>
+</Tabs>
\ No newline at end of file
diff --git a/docs/components/vectordbs/overview.mdx b/docs/components/vectordbs/overview.mdx
index b19395ac..773ffb97 100644
--- a/docs/components/vectordbs/overview.mdx
+++ b/docs/components/vectordbs/overview.mdx
@@ -10,6 +10,10 @@ Mem0 includes built-in support for various popular databases. Memory can utilize
See the list of supported vector databases below.
+<Note>
+  The following vector databases are supported in the Python implementation. The TypeScript implementation currently supports only Qdrant, Redis, and the in-memory vector database.
+</Note>
+
diff --git a/docs/docs.json b/docs/docs.json
index f582ccbf..c88082f2 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -65,6 +65,7 @@
"pages": [
"open-source/quickstart",
"open-source/python-quickstart",
+ "open-source-typescript/quickstart",
{
"group": "Features",
"icon": "wrench",
@@ -151,67 +152,6 @@
]
}
]
- },
- {
- "group": "Node.js",
- "icon": "js",
- "pages": [
- "open-source-typescript/quickstart",
- {
- "group": "Features",
- "icon": "wrench",
- "pages": [
- "open-source-typescript/features/custom-prompts"
- ]
- },
- {
- "group": "LLMs",
- "icon": "brain",
- "pages": [
- "open-source-typescript/components/llms/overview",
- "open-source-typescript/components/llms/config",
- {
- "group": "Supported LLMs",
- "icon": "list",
- "pages": [
- "open-source-typescript/components/llms/models/openai",
- "open-source-typescript/components/llms/models/anthropic",
- "open-source-typescript/components/llms/models/groq"
- ]
- }
- ]
- },{
- "group": "Vector Databases",
- "icon": "database",
- "pages": [
- "open-source-typescript/components/vectordbs/overview",
- "open-source-typescript/components/vectordbs/config",
- {
- "group": "Supported Vector Databases",
- "icon": "server",
- "pages": [
- "open-source-typescript/components/vectordbs/dbs/qdrant",
- "open-source-typescript/components/vectordbs/dbs/redis"
- ]
- }
- ]
- },
- {
- "group": "Embedding Models",
- "icon": "layer-group",
- "pages": [
- "open-source-typescript/components/embedders/overview",
- "open-source-typescript/components/embedders/config",
- {
- "group": "Supported Embedding Models",
- "icon": "list",
- "pages": [
- "open-source-typescript/components/embedders/models/openai"
- ]
- }
- ]
- }
- ]
}
]
}
diff --git a/docs/features/custom-prompts.mdx b/docs/features/custom-prompts.mdx
index b33de779..8d871c69 100644
--- a/docs/features/custom-prompts.mdx
+++ b/docs/features/custom-prompts.mdx
@@ -17,7 +17,8 @@ To create an effective custom prompt:
Example of a custom prompt:
-```python
+<CodeGroup>
+```python Python
custom_prompt = """
Please only extract entities containing customer support information, order details, and user information.
Here are some few shot examples:
@@ -39,12 +40,37 @@ Output: {{"facts" : ["Ordered red shirt, size medium", "Received blue shirt inst
Return the facts and customer information in a json format as shown above.
"""
-
```
-Here we initialize the custom prompt in the config.
+```typescript TypeScript
+const customPrompt = `
+Please only extract entities containing customer support information, order details, and user information.
+Here are some few shot examples:
-```python
+Input: Hi.
+Output: {"facts" : []}
+
+Input: The weather is nice today.
+Output: {"facts" : []}
+
+Input: My order #12345 hasn't arrived yet.
+Output: {"facts" : ["Order #12345 not received"]}
+
+Input: I am John Doe, and I would like to return the shoes I bought last week.
+Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}
+
+Input: I ordered a red shirt, size medium, but received a blue one instead.
+Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}
+
+Return the facts and customer information in a json format as shown above.
+`;
+```
+</CodeGroup>
+
+Here we initialize the custom prompt in the config:
+
+<CodeGroup>
+```python Python
from mem0 import Memory
config = {
@@ -63,15 +89,40 @@ config = {
m = Memory.from_config(config_dict=config, user_id="alice")
```
+```typescript TypeScript
+import { Memory } from 'mem0ai/oss';
+
+const config = {
+ version: 'v1.1',
+ llm: {
+ provider: 'openai',
+ config: {
+ apiKey: process.env.OPENAI_API_KEY || '',
+ model: 'gpt-4-turbo-preview',
+ temperature: 0.2,
+ maxTokens: 1500,
+ },
+ },
+ customPrompt: customPrompt
+};
+
+const memory = new Memory(config);
+```
+</CodeGroup>
+
### Example 1
In this example, we are adding a memory of a user ordering a laptop. As seen in the output, the custom prompt is used to extract the relevant information from the user's message.
-```python Code
+```python Python
m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
```
+```typescript TypeScript
+await memory.add('Yesterday, I ordered a laptop, the order id is 12345', { userId: "alice" });
+```
+
```json Output
{
"results": [
@@ -97,11 +148,16 @@ m.add("Yesterday, I ordered a laptop, the order id is 12345", user_id="alice")
In this example, we are adding a memory of a user liking to go on hikes. This add message is not specific to the use-case mentioned in the custom prompt.
Hence, the memory is not added.
+
-```python Code
+```python Python
m.add("I like going to hikes", user_id="alice")
```
+```typescript TypeScript
+await memory.add('I like going to hikes', { userId: "alice" });
+```
+
```json Output
{
"results": [],
@@ -109,3 +165,5 @@ m.add("I like going to hikes", user_id="alice")
}
```
+
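+You can also use custom prompts with chat messages:
+
+```typescript TypeScript
+const messages = [
+  { role: 'user', content: 'Hi, I ordered item #54321 last week but haven\'t received it yet.' },
+  { role: 'assistant', content: 'I understand you\'re concerned about your order #54321. Let me help track that for you.' }
+];
+
+await memory.add(messages, { userId: "alice" });
+```
+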
+The custom prompt will process both the user and assistant messages to extract relevant information according to the defined format.
diff --git a/docs/open-source-typescript/components/embedders/config.mdx b/docs/open-source-typescript/components/embedders/config.mdx
deleted file mode 100644
index f088dc2f..00000000
--- a/docs/open-source-typescript/components/embedders/config.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: Configurations
-icon: "gear"
-iconType: "solid"
----
-
-Config in Mem0 is a dictionary that specifies the settings for your embedding models. It allows you to customize the behavior and connection details of your chosen embedder.
-
-## How to define configurations?
-
-The config is defined as a TypeScript object with two main keys:
-- `embedder`: Specifies the embedder provider and its configuration
- - `provider`: The name of the embedder (e.g., "openai", "ollama")
- - `config`: A nested object containing provider-specific settings
-
-## How to use configurations?
-
-Here's a general example of how to use the config with Mem0:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- embedder: {
- provider: 'openai',
- config: {
- apiKey: 'your-openai-api-key',
- model: 'text-embedding-3-small',
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
-```
-
-## Why is Config Needed?
-
-Config is essential for:
-1. Specifying which embedding model to use.
-2. Providing necessary connection details (e.g., model, api_key, embedding_dims).
-3. Ensuring proper initialization and connection to your chosen embedder.
-
-## Master List of All Params in Config
-
-Here's a comprehensive list of all parameters that can be used across different embedders:
-
-| Parameter | Description |
-|------------------------|--------------------------------------------------|
-| `model` | Embedding model to use |
-| `apiKey` | API key of the provider |
-| `embeddingDims` | Dimensions of the embedding model |
-
-## Supported Embedding Models
-
-For detailed information on configuring specific embedders, please visit the [Embedding Models](./models) section. There you'll find information for each supported embedder with provider-specific usage examples and configuration details.
diff --git a/docs/open-source-typescript/components/embedders/models/openai.mdx b/docs/open-source-typescript/components/embedders/models/openai.mdx
deleted file mode 100644
index 879890dc..00000000
--- a/docs/open-source-typescript/components/embedders/models/openai.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: OpenAI
----
-
-To use OpenAI embedding models, you need to provide the API key directly in your configuration. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
-
-### Usage
-
-Here's how to configure OpenAI embedding models in your application:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- embedder: {
- provider: 'openai',
- config: {
- apiKey: 'your-openai-api-key',
- model: 'text-embedding-3-large',
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("I'm visiting Paris", { userId: "john" });
-```
-
-### Config
-
-Here are the parameters available for configuring the OpenAI embedder:
-
-| Parameter | Description | Default Value |
-|------------------------|--------------------------------------------------|---------------|
-| `model` | The name of the embedding model to use | `text-embedding-3-small` |
-| `embedding_dims` | Dimensions of the embedding model | `1536` |
-| `api_key` | The OpenAI API key | `None` |
diff --git a/docs/open-source-typescript/components/embedders/overview.mdx b/docs/open-source-typescript/components/embedders/overview.mdx
deleted file mode 100644
index 3106bbd4..00000000
--- a/docs/open-source-typescript/components/embedders/overview.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Overview
-icon: "info"
-iconType: "solid"
----
-
-Mem0 offers support for various embedding models, allowing users to choose the one that best suits their needs.
-
-## Supported Embedders
-
-See the list of supported embedders below.
-
-
-
-
-
-## Usage
-
-To utilize a embedder, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `OpenAI` will be used as the embedder.
-
-For a comprehensive list of available parameters for embedder configuration, please refer to [Config](./config).
diff --git a/docs/open-source-typescript/components/llms/config.mdx b/docs/open-source-typescript/components/llms/config.mdx
deleted file mode 100644
index 7834ac76..00000000
--- a/docs/open-source-typescript/components/llms/config.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: Configurations
-icon: "gear"
-iconType: "solid"
----
-
-## How to define configurations?
-
-The `config` is defined as a TypeScript object with two main keys:
-- `llm`: Specifies the LLM provider and its configuration
- - `provider`: The name of the LLM (e.g., "openai", "groq")
- - `config`: A nested object containing provider-specific settings
-
-### Config Values Precedence
-
-Config values are applied in the following order of precedence (from highest to lowest):
-
-1. Values explicitly set in the `config` object
-2. Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_API_BASE`)
-3. Default values defined in the LLM implementation
-
-This means that values specified in the `config` object will override corresponding environment variables, which in turn override default values.
-
-## How to Use Config
-
-Here's a general example of how to use the config with Mem0:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- llm: {
- provider: 'openai',
- config: {
- apiKey: process.env.OPENAI_API_KEY || '',
- model: 'gpt-4-turbo-preview',
- temperature: 0.2,
- maxTokens: 1500,
- },
- },
- embedder: {
- provider: 'openai',
- config: {
- apiKey: process.env.OPENAI_API_KEY || '',
- model: 'text-embedding-3-small',
- },
- },
- vectorStore: {
- provider: 'memory',
- config: {
- collectionName: 'memories',
- dimension: 1536,
- },
- },
- historyDbPath: 'memory.db',
-};
-
-const memory = new Memory(config);
-memory.add("Your text here", { userId: "user123", metadata: { category: "example" } });
-```
-
-## Why is Config Needed?
-
-Config is essential for:
-1. Specifying which LLM to use.
-2. Providing necessary connection details (e.g., model, api_key, temperature).
-3. Ensuring proper initialization and connection to your chosen LLM.
-
-## Master List of All Params in Config
-
-Here's a comprehensive list of all parameters that can be used across different LLMs:
-
-| Parameter | Description | Provider |
-|----------------------|-----------------------------------------------|-------------------|
-| `model` | Embedding model to use | All |
-| `temperature` | Temperature of the model | All |
-| `apiKey` | API key to use | All |
-| `maxTokens` | Tokens to generate | All |
-| `topP` | Probability threshold for nucleus sampling | All |
-| `topK` | Number of highest probability tokens to keep | All |
-| `openaiBaseUrl` | Base URL for OpenAI API | OpenAI |
-
-## Supported LLMs
-
-For detailed information on configuring specific LLMs, please visit the [LLMs](./models) section. There you'll find information for each supported LLM with provider-specific usage examples and configuration details.
diff --git a/docs/open-source-typescript/components/llms/models/anthropic.mdx b/docs/open-source-typescript/components/llms/models/anthropic.mdx
deleted file mode 100644
index 2f2e8f74..00000000
--- a/docs/open-source-typescript/components/llms/models/anthropic.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Anthropic
----
-
-To use Anthropic's models, please set the `ANTHROPIC_API_KEY`, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- llm: {
- provider: 'anthropic',
- config: {
- apiKey: process.env.ANTHROPIC_API_KEY || '',
- model: 'claude-3-7-sonnet-latest',
- temperature: 0.1,
- maxTokens: 2000,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `anthropic` config are present in the [Master List of All Params in Config](../config).
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/llms/models/groq.mdx b/docs/open-source-typescript/components/llms/models/groq.mdx
deleted file mode 100644
index abb96eb9..00000000
--- a/docs/open-source-typescript/components/llms/models/groq.mdx
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Groq
----
-
-[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.
-
-In order to use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key. Set the API key as `GROQ_API_KEY` environment variable to use the model as given below in the example.
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- llm: {
- provider: 'groq',
- config: {
- apiKey: process.env.GROQ_API_KEY || '',
- model: 'mixtral-8x7b-32768',
- temperature: 0.1,
- maxTokens: 1000,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `groq` config are present in the [Master List of All Params in Config](../config).
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/llms/models/openai.mdx b/docs/open-source-typescript/components/llms/models/openai.mdx
deleted file mode 100644
index fb9b12f0..00000000
--- a/docs/open-source-typescript/components/llms/models/openai.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: OpenAI
----
-
-To use OpenAI LLM models, you need to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
-
-## Usage
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- llm: {
- provider: 'openai',
- config: {
- apiKey: process.env.OPENAI_API_KEY || '',
- model: 'gpt-4-turbo-preview',
- temperature: 0.2,
- maxTokens: 1500,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-## Config
-
-All available parameters for the `openai` config are present in the [Master List of All Params in Config](../config).
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/llms/overview.mdx b/docs/open-source-typescript/components/llms/overview.mdx
deleted file mode 100644
index f2a471ff..00000000
--- a/docs/open-source-typescript/components/llms/overview.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Overview
-icon: "info"
-iconType: "solid"
----
-
-Mem0 includes built-in support for various popular large language models. Memory can utilize the LLM provided by the user, ensuring efficient use for specific needs.
-
-## Usage
-
-To use a llm, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `OpenAI` will be used as the llm.
-
-For a comprehensive list of available parameters for llm configuration, please refer to [Config](./config).
-
-To view all supported llms, visit the [Supported LLMs](./models).
-
-
-
-
-
-
-
-## Structured vs Unstructured Outputs
-
-Mem0 supports two types of OpenAI LLM formats, each with its own strengths and use cases:
-
-### Structured Outputs
-
-Structured outputs are LLMs that align with OpenAI's structured outputs model:
-
-- **Optimized for:** Returning structured responses (e.g., JSON objects)
-- **Benefits:** Precise, easily parseable data
-- **Ideal for:** Data extraction, form filling, API responses
-- **Learn more:** [OpenAI Structured Outputs Guide](https://platform.openai.com/docs/guides/structured-outputs/introduction)
-
-### Unstructured Outputs
-
-Unstructured outputs correspond to OpenAI's standard, free-form text model:
-
-- **Flexibility:** Returns open-ended, natural language responses
-- **Customization:** Use the `response_format` parameter to guide output
-- **Trade-off:** Less efficient than structured outputs for specific data needs
-- **Best for:** Creative writing, explanations, general conversation
-
-Choose the format that best suits your application's requirements for optimal performance and usability.
diff --git a/docs/open-source-typescript/components/vectordbs/config.mdx b/docs/open-source-typescript/components/vectordbs/config.mdx
deleted file mode 100644
index eea14c85..00000000
--- a/docs/open-source-typescript/components/vectordbs/config.mdx
+++ /dev/null
@@ -1,100 +0,0 @@
----
-title: Configurations
-icon: "gear"
-iconType: "solid"
----
-
-## How to define configurations?
-
-The `config` is defined as a TypeScript object with two main keys:
-- `vectorStore`: Specifies the vector database provider and its configuration
- - `provider`: The name of the vector database (e.g., "chroma", "pgvector", "qdrant", "milvus", "azure_ai_search", "redis", "memory")
- - `config`: A nested object containing provider-specific settings
-
-## In-Memory Storage Option
-
-We also support an in-memory storage option for the vector store, which is useful for reduced overhead and faster access times. Here's how to configure it:
-
-### Example for In-Memory Storage
-
-```typescript
-const configMemory = {
- vector_store: {
- provider: 'memory',
- config: {
- collectionName: 'memories',
- dimension: 1536,
- },
- },
-};
-
-const memory = new Memory(configMemory);
-await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
-```
-
-## How to Use Config
-
-Here's a general example of how to use the config with Mem0:
-
-### Example for qdrant
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- vector_store: {
- provider: 'qdrant',
- config: {
- collectionName: 'memories',
- embeddingModelDims: 1536,
- host: 'localhost',
- port: 6333,
- url: 'https://your-qdrant-url.com',
- apiKey: 'your-qdrant-api-key',
- onDisk: true,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Your text here", { userId: "user", metadata: { category: "example" } });
-```
-
-## Why is Config Needed?
-
-Config is essential for:
-1. Specifying which vector database to use.
-2. Providing necessary connection details (e.g., host, port, credentials).
-3. Customizing database-specific settings (e.g., collection name, path).
-4. Ensuring proper initialization and connection to your chosen vector store.
-
-## Master List of All Params in Config
-
-Here's a comprehensive list of all parameters that can be used across different vector databases:
-
-| Parameter | Description |
-|------------------------|--------------------------------------|
-| `collectionName` | Name of the collection |
-| `dimension` | Dimensions of the embedding model |
-| `host` | Host where the server is running |
-| `port` | Port where the server is running |
-| `embeddingModelDims` | Dimensions of the embedding model |
-| `url` | URL for the Qdrant server |
-| `apiKey` | API key for the Qdrant server |
-| `path` | Path for the Qdrant server |
-| `onDisk` | Enable persistent storage (for Qdrant) |
-| `redisUrl` | URL for the Redis server |
-| `username` | Username for Redis connection |
-| `password` | Password for Redis connection |
-
-## Customizing Config
-
-Each vector database has its own specific configuration requirements. To customize the config for your chosen vector store:
-
-1. Identify the vector database you want to use from [supported vector databases](./dbs).
-2. Refer to the `Config` section in the respective vector database's documentation.
-3. Include only the relevant parameters for your chosen database in the `config` object.
-
-## Supported Vector Databases
-
-For detailed information on configuring specific vector databases, please visit the [Supported Vector Databases](./dbs) section. There you'll find individual pages for each supported vector store with provider-specific usage examples and configuration details.
diff --git a/docs/open-source-typescript/components/vectordbs/dbs/pgvector.mdx b/docs/open-source-typescript/components/vectordbs/dbs/pgvector.mdx
deleted file mode 100644
index e770ce5a..00000000
--- a/docs/open-source-typescript/components/vectordbs/dbs/pgvector.mdx
+++ /dev/null
@@ -1,44 +0,0 @@
-[pgvector](https://github.com/pgvector/pgvector) is open-source vector similarity search for Postgres. After connecting with Postgres, run `CREATE EXTENSION IF NOT EXISTS vector;` to create the vector extension.
-
-### Usage
-
-Here's how to configure pgvector in your application:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- vector_store: {
- provider: 'pgvector',
- config: {
- collectionName: 'memories',
- dimension: 1536,
- dbname: 'vectordb',
- user: 'postgres',
- password: 'postgres',
- host: 'localhost',
- port: 5432,
- embeddingModelDims: 1536,
- hnsw: true,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-### Config
-
-Here's the parameters available for configuring pgvector:
-
-| Parameter | Description | Default Value |
-|------------------------|--------------------------------------------------|---------------|
-| `dbname` | The name of the database | `postgres` |
-| `collectionName` | The name of the collection | `mem0` |
-| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
-| `user` | Username to connect to the database | `None` |
-| `password` | Password to connect to the database | `None` |
-| `host` | The host where the Postgres server is running | `None` |
-| `port` | The port where the Postgres server is running | `None` |
-| `hnsw` | Enable HNSW indexing | `False` |
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/vectordbs/dbs/qdrant.mdx b/docs/open-source-typescript/components/vectordbs/dbs/qdrant.mdx
deleted file mode 100644
index 3992ec7f..00000000
--- a/docs/open-source-typescript/components/vectordbs/dbs/qdrant.mdx
+++ /dev/null
@@ -1,42 +0,0 @@
-[Qdrant](https://qdrant.tech/) is an open-source vector search engine. It is designed to work with large-scale datasets and provides a high-performance search engine for vector data.
-
-### Usage
-
-Here's how to configure Qdrant in your application:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- vector_store: {
- provider: 'qdrant',
- config: {
- collectionName: 'memories',
- embeddingModelDims: 1536,
- host: 'localhost',
- port: 6333,
- url: 'https://your-qdrant-url.com',
- apiKey: 'your-qdrant-api-key',
- onDisk: true,
- },
- },
-};
-
-const memory = new Memory(config);
-await memory.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-### Config
-
-Let's see the available parameters for the `qdrant` config:
-
-| Parameter | Description | Default Value |
-|------------------------|--------------------------------------------------|---------------|
-| `collectionName` | The name of the collection to store the vectors | `mem0` |
-| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
-| `host` | The host where the Qdrant server is running | `None` |
-| `port` | The port where the Qdrant server is running | `None` |
-| `path` | Path for the Qdrant database | `/tmp/qdrant` |
-| `url` | Full URL for the Qdrant server | `None` |
-| `apiKey` | API key for the Qdrant server | `None` |
-| `onDisk` | For enabling persistent storage | `False` |
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/vectordbs/dbs/redis.mdx b/docs/open-source-typescript/components/vectordbs/dbs/redis.mdx
deleted file mode 100644
index 670dfae7..00000000
--- a/docs/open-source-typescript/components/vectordbs/dbs/redis.mdx
+++ /dev/null
@@ -1,47 +0,0 @@
-[Redis](https://redis.io/) is a scalable, real-time database that can store, search, and analyze vector data.
-
-### Installation
-```bash
-pip install redis redisvl
-```
-
-Redis Stack using Docker:
-```bash
-docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
-```
-
-### Usage
-
-Here's how to configure Redis in your application:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- vector_store: {
- provider: 'redis',
- config: {
- collectionName: 'memories',
- embeddingModelDims: 1536,
- redisUrl: 'redis://localhost:6379',
- username: 'your-redis-username',
- password: 'your-redis-password',
- },
- },
-};
-
-const memoryRedis = new Memory(config);
-await memoryRedis.add("Likes to play cricket on weekends", { userId: "alice", metadata: { category: "hobbies" } });
-```
-
-### Config
-
-Let's see the available parameters for the `redis` config:
-
-| Parameter | Description | Default Value |
-|------------------------|--------------------------------------------------|---------------|
-| `collectionName` | The name of the collection to store the vectors | `mem0` |
-| `embeddingModelDims` | Dimensions of the embedding model | `1536` |
-| `redisUrl` | The URL of the Redis server | `None` |
-| `username` | Username for Redis connection | `None` |
-| `password` | Password for Redis connection | `None` |
\ No newline at end of file
diff --git a/docs/open-source-typescript/components/vectordbs/overview.mdx b/docs/open-source-typescript/components/vectordbs/overview.mdx
deleted file mode 100644
index 62cbdd17..00000000
--- a/docs/open-source-typescript/components/vectordbs/overview.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Overview
-icon: "info"
-iconType: "solid"
----
-
-Mem0 includes built-in support for various popular databases. Memory can utilize the database provided by the user, ensuring efficient use for specific needs.
-
-## Supported Vector Databases
-
-See the list of supported vector databases below.
-
-
-
-
-
-
-
-
-## Usage
-
-To utilize a vector database, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration will be applied, and `Memory` will be used as the vector database.
-
-For a comprehensive list of available parameters for vector database configuration, please refer to [Config](./config).
-
-## Common issues
-
-### Using model with different dimensions
-
-If you are using customized model, which is having different dimensions other than 1536
-for example 768, you may encounter below error:
-
-`ValueError: shapes (0,1536) and (768,) not aligned: 1536 (dim 1) != 768 (dim 0)`
-
-you could add `"embedding_model_dims": 768,` to the config of the vector_store to overcome this issue.
-
diff --git a/docs/open-source-typescript/features/custom-prompts.mdx b/docs/open-source-typescript/features/custom-prompts.mdx
deleted file mode 100644
index 83613131..00000000
--- a/docs/open-source-typescript/features/custom-prompts.mdx
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: Custom Prompts
-description: 'Enhance your product experience by adding custom prompts tailored to your needs'
-icon: "pencil"
-iconType: "solid"
----
-
-## Introduction to Custom Prompts
-
-Custom prompts allow you to tailor the behavior of your Mem0 instance to specific use cases or domains.
-By defining a custom prompt, you can control how information is extracted, processed, and stored in your memory system.
-
-To create an effective custom prompt:
-1. Be specific about the information to extract.
-2. Provide few-shot examples to guide the LLM.
-3. Ensure examples follow the format shown below.
-
-Example of a custom prompt:
-
-```typescript
-const customPrompt = `
-Please only extract entities containing customer support information, order details, and user information.
-Here are some few shot examples:
-
-Input: Hi.
-Output: {"facts" : []}
-
-Input: The weather is nice today.
-Output: {"facts" : []}
-
-Input: My order #12345 hasn't arrived yet.
-Output: {"facts" : ["Order #12345 not received"]}
-
-Input: I'm John Doe, and I'd like to return the shoes I bought last week.
-Output: {"facts" : ["Customer name: John Doe", "Wants to return shoes", "Purchase made last week"]}
-
-Input: I ordered a red shirt, size medium, but received a blue one instead.
-Output: {"facts" : ["Ordered red shirt, size medium", "Received blue shirt instead"]}
-
-Return the facts and customer information in a json format as shown above.
-`;
-```
-
-Here we initialize the custom prompt in the config:
-
-```typescript
-import { Memory } from 'mem0ai/oss';
-
-const config = {
- version: 'v1.1',
- embedder: {
- provider: 'openai',
- config: {
- apiKey: process.env.OPENAI_API_KEY || '',
- model: 'text-embedding-3-small',
- },
- },
- vectorStore: {
- provider: 'memory',
- config: {
- collectionName: 'memories',
- dimension: 1536,
- },
- },
- llm: {
- provider: 'openai',
- config: {
- apiKey: process.env.OPENAI_API_KEY || '',
- model: 'gpt-4-turbo-preview',
- temperature: 0.2,
- maxTokens: 1500,
- },
- },
- customPrompt: customPrompt,
- historyDbPath: 'memory.db',
-};
-
-const memory = new Memory(config);
-```
-
-### Example 1
-
-In this example, we are adding a memory of a user ordering a laptop. As seen in the output, the custom prompt is used to extract the relevant information from the user's message.
-
-
-```typescript Code
-await memory.add('Yesterday, I ordered a laptop, the order id is 12345', { userId: "user123" });
-```
-
-```json Output
-{
- "results": [
- {
- "id": "c03c9045-df76-4949-bbc5-d5dc1932aa5c",
- "memory": "Ordered a laptop",
- "metadata": {}
- },
- {
- "id": "cbb1fe73-0bf1-4067-8c1f-63aa53e7b1a4",
- "memory": "Order ID: 12345",
- "metadata": {}
- },
- {
- "id": "e5f2a012-3b45-4c67-9d8e-123456789abc",
- "memory": "Order placed yesterday",
- "metadata": {}
- }
- ]
-}
-```
-
-
-### Example 2
-
-In this example, we are adding a memory of a user liking to go on hikes. This add message is not specific to the use-case mentioned in the custom prompt.
-Hence, the memory is not added.
-
-
-```typescript Code
-await memory.add('I like going to hikes', { userId: "user123" });
-```
-
-```json Output
-{
- "results": []
-}
-```
-
-
-You can also use custom prompts with chat messages:
-
-```typescript
-const messages = [
- { role: 'user', content: 'Hi, I ordered item #54321 last week but haven\'t received it yet.' },
- { role: 'assistant', content: 'I understand you\'re concerned about your order #54321. Let me help track that for you.' }
-];
-
-await memory.add(messages, { userId: "user123" });
-```
-
-The custom prompt will process both the user and assistant messages to extract relevant information according to the defined format.
diff --git a/docs/open-source-typescript/quickstart.mdx b/docs/open-source-typescript/quickstart.mdx
index 756458cb..0dc4cf62 100644
--- a/docs/open-source-typescript/quickstart.mdx
+++ b/docs/open-source-typescript/quickstart.mdx
@@ -1,5 +1,5 @@
---
-title: Node.js Guide
+title: Node SDK
description: 'Get started with Mem0 quickly!'
icon: "node"
iconType: "solid"
diff --git a/docs/open-source/python-quickstart.mdx b/docs/open-source/python-quickstart.mdx
index 5b423cba..8467419f 100644
--- a/docs/open-source/python-quickstart.mdx
+++ b/docs/open-source/python-quickstart.mdx
@@ -1,5 +1,5 @@
---
-title: Python Guide
+title: Python SDK
description: 'Get started with Mem0 quickly!'
icon: "python"
iconType: "solid"