Docs Update (#2591)

This commit is contained in:
Prateek Chhikara
2025-04-29 08:15:25 -07:00
committed by GitHub
parent 6d13e83001
commit 393a4fd5a6
111 changed files with 2296 additions and 99 deletions


@@ -4,6 +4,8 @@ icon: "gear"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
## How to define configurations?
<Tabs>


@@ -2,6 +2,8 @@
title: Anthropic
---
<Snippet file="paper-release.mdx" />
To use Anthropic's models, set the `ANTHROPIC_API_KEY` environment variable, which you can find on their [Account Settings page](https://console.anthropic.com/account/keys).
## Usage
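A minimal sketch of a matching config, assuming Mem0's usual config-dict shape (the `anthropic` provider string and the model name are assumptions to adapt as needed):

```python
import os

# Placeholder key; in practice, export ANTHROPIC_API_KEY in your shell.
os.environ.setdefault("ANTHROPIC_API_KEY", "your-anthropic-api-key")

config = {
    "llm": {
        "provider": "anthropic",  # assumed provider string
        "config": {
            "model": "claude-3-5-sonnet-20240620",  # assumed model name
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

# With mem0ai installed, the config would then be consumed like:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I prefer morning meetings", user_id="alice")
```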


@@ -2,6 +2,8 @@
title: AWS Bedrock
---
<Snippet file="paper-release.mdx" />
### Setup
- Before using the AWS Bedrock LLM, make sure you have the appropriate model access from the [Bedrock console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
- You will also need to authenticate the `boto3` client using one of the methods described in the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
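The two setup steps above can be sketched as follows; the `aws_bedrock` provider string and the model ID are assumptions, and `boto3` will equally pick up credentials from `~/.aws/credentials` or an IAM role:

```python
import os

# Placeholder credentials; boto3 also honors ~/.aws/credentials and IAM roles.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "your-access-key-id")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "your-secret-access-key")
os.environ.setdefault("AWS_REGION", "us-east-1")

config = {
    "llm": {
        "provider": "aws_bedrock",  # assumed provider string
        "config": {
            # Assumed Bedrock model ID; it must be enabled in the Bedrock console.
            "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    }
}
```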


@@ -2,6 +2,8 @@
title: Azure OpenAI
---
<Snippet file="paper-release.mdx" />
<Note> Mem0 now supports Azure OpenAI models in the TypeScript SDK </Note>
To use Azure OpenAI models, set the `LLM_AZURE_OPENAI_API_KEY`, `LLM_AZURE_ENDPOINT`, `LLM_AZURE_DEPLOYMENT`, and `LLM_AZURE_API_VERSION` environment variables. You can obtain the API key from the [Azure portal](https://azure.microsoft.com/).
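Putting the four variables together, a configuration might look like this (the `azure_openai` provider string and the `azure_kwargs` layout are assumptions):

```python
import os

# Placeholders for the four variables named above.
os.environ.setdefault("LLM_AZURE_OPENAI_API_KEY", "your-azure-openai-key")
os.environ.setdefault("LLM_AZURE_ENDPOINT", "https://your-resource.openai.azure.com/")
os.environ.setdefault("LLM_AZURE_DEPLOYMENT", "your-deployment-name")
os.environ.setdefault("LLM_AZURE_API_VERSION", "2024-02-01")

config = {
    "llm": {
        "provider": "azure_openai",  # assumed provider string
        "config": {
            "model": "gpt-4o",  # assumed model name
            "azure_kwargs": {  # assumed field layout
                "api_key": os.environ["LLM_AZURE_OPENAI_API_KEY"],
                "azure_endpoint": os.environ["LLM_AZURE_ENDPOINT"],
                "azure_deployment": os.environ["LLM_AZURE_DEPLOYMENT"],
                "api_version": os.environ["LLM_AZURE_API_VERSION"],
            },
        },
    }
}
```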


@@ -2,6 +2,8 @@
title: DeepSeek
---
<Snippet file="paper-release.mdx" />
To use DeepSeek models, set the `DEEPSEEK_API_KEY` environment variable. You can optionally set `DEEPSEEK_API_BASE` if you need a different API endpoint (it defaults to `https://api.deepseek.com`).
## Usage
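A sketch of how the two variables feed a config (the `deepseek` provider string and model name are assumptions):

```python
import os

os.environ.setdefault("DEEPSEEK_API_KEY", "your-deepseek-api-key")  # placeholder
# Optional: only needed when targeting a non-default endpoint.
os.environ.setdefault("DEEPSEEK_API_BASE", "https://api.deepseek.com")

config = {
    "llm": {
        "provider": "deepseek",  # assumed provider string
        "config": {"model": "deepseek-chat", "temperature": 0.2},  # assumed model name
    }
}
```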


@@ -2,6 +2,8 @@
title: Gemini
---
<Snippet file="paper-release.mdx" />
To use Gemini models, set the `GEMINI_API_KEY` environment variable. You can obtain the Gemini API key from [Google AI Studio](https://aistudio.google.com/app/apikey).
## Usage
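A minimal config sketch (the `gemini` provider string and model name are assumptions):

```python
import os

os.environ.setdefault("GEMINI_API_KEY", "your-gemini-api-key")  # placeholder

config = {
    "llm": {
        "provider": "gemini",  # assumed provider string
        "config": {"model": "gemini-1.5-flash", "temperature": 0.2},  # assumed model name
    }
}
```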


@@ -2,6 +2,8 @@
title: Google AI
---
<Snippet file="paper-release.mdx" />
To use Google AI models, set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from [Google Maker Suite](https://makersuite.google.com/app/apikey).
## Usage
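A minimal config sketch; Google AI models may route through the same `gemini` provider string as the Gemini integration, which, along with the model name, is an assumption here:

```python
import os

os.environ.setdefault("GOOGLE_API_KEY", "your-google-api-key")  # placeholder

config = {
    "llm": {
        "provider": "gemini",  # assumed provider string; may be shared with the Gemini page
        "config": {"model": "gemini-1.5-pro"},  # assumed model name
    }
}
```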


@@ -2,6 +2,8 @@
title: Groq
---
<Snippet file="paper-release.mdx" />
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), which delivers exceptional speed for AI workloads running on its LPU Inference Engine.
To use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get an API key. Set it as the `GROQ_API_KEY` environment variable to use the model as shown in the example below.
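A minimal config sketch (the `groq` provider string and model name are assumptions):

```python
import os

os.environ.setdefault("GROQ_API_KEY", "your-groq-api-key")  # placeholder

config = {
    "llm": {
        "provider": "groq",  # assumed provider string
        "config": {"model": "llama-3.1-8b-instant"},  # assumed model name
    }
}
```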


@@ -2,6 +2,8 @@
title: LangChain
---
<Snippet file="paper-release.mdx" />
Mem0 supports LangChain as a provider to access a wide range of LLM models. LangChain is a framework for developing applications powered by language models, making it easy to integrate various LLM providers through a consistent interface.
For a complete list of available chat models supported by LangChain, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
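One plausible wiring, assuming Mem0 accepts an instantiated LangChain chat model under the `model` key (both the `langchain` provider string and that convention are assumptions):

```python
import os

os.environ.setdefault("OPENAI_API_KEY", "your-openai-api-key")  # key for whichever provider you pick

# With langchain installed, you would instantiate any LangChain chat model, e.g.:
# from langchain_openai import ChatOpenAI
# chat_model = ChatOpenAI(model="gpt-4o-mini")
chat_model = None  # stands in for the instance above

config = {
    "llm": {
        "provider": "langchain",  # assumed provider string
        "config": {"model": chat_model},  # assumed: pass the instantiated model here
    }
}
```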


@@ -1,3 +1,5 @@
<Snippet file="paper-release.mdx" />
[LiteLLM](https://litellm.vercel.app/docs/) is compatible with over 100 large language models (LLMs), all through a standardized input/output format. You can explore the [available models](https://litellm.vercel.app/docs/providers) to use with LiteLLM. Ensure you set the API key environment variable for the model you choose to use.
## Usage
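A minimal config sketch (the `litellm` provider string and model name are assumptions; LiteLLM reads the key belonging to whichever underlying provider you choose):

```python
import os

# LiteLLM reads the provider-specific key, e.g. OPENAI_API_KEY for OpenAI models.
os.environ.setdefault("OPENAI_API_KEY", "your-openai-api-key")

config = {
    "llm": {
        "provider": "litellm",  # assumed provider string
        "config": {"model": "gpt-4o-mini", "temperature": 0.2},  # any LiteLLM-supported model
    }
}
```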


@@ -2,6 +2,8 @@
title: LM Studio
---
<Snippet file="paper-release.mdx" />
To use LM Studio with Mem0, you'll need to have LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs with an OpenAI-compatible API.
## Usage
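A minimal config sketch; the `lmstudio` provider string and the base-URL key name are assumptions, while the address is LM Studio's default server port:

```python
config = {
    "llm": {
        "provider": "lmstudio",  # assumed provider string
        "config": {
            # Key name is an assumption; LM Studio's server defaults to this address.
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    }
}
```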


@@ -2,6 +2,8 @@
title: Mistral AI
---
<Snippet file="paper-release.mdx" />
To use Mistral AI's models, obtain an API key from their [console](https://console.mistral.ai/). Set the `MISTRAL_API_KEY` environment variable to use the model as shown in the example below.
## Usage
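A minimal config sketch; both the provider string and the model name are assumptions, so check the Usage example for the exact values:

```python
import os

os.environ.setdefault("MISTRAL_API_KEY", "your-mistral-api-key")  # placeholder

config = {
    "llm": {
        "provider": "mistralai",  # assumed provider string
        "config": {"model": "mistral-large-latest"},  # assumed model name
    }
}
```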


@@ -1,3 +1,5 @@
<Snippet file="paper-release.mdx" />
You can use LLMs from Ollama to run Mem0 locally. These [models](https://ollama.com/search?c=tools) support tool calling.
## Usage
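A minimal config sketch; the `ollama` provider string and the base-URL key name are assumptions, and the address is Ollama's default local server:

```python
config = {
    "llm": {
        "provider": "ollama",  # assumed provider string
        "config": {
            "model": "llama3.1:8b",  # any locally pulled, tool-capable model
            # Key name is an assumption; this is Ollama's default server address.
            "ollama_base_url": "http://localhost:11434",
        },
    }
}
```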


@@ -2,6 +2,8 @@
title: OpenAI
---
<Snippet file="paper-release.mdx" />
To use OpenAI's models, set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
## Usage
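A minimal config sketch following Mem0's config-dict shape (the model name is an assumption):

```python
import os

os.environ.setdefault("OPENAI_API_KEY", "your-openai-api-key")  # placeholder

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",  # assumed model name
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}
```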


@@ -1,3 +1,5 @@
<Snippet file="paper-release.mdx" />
To use TogetherAI's models, set the `TOGETHER_API_KEY` environment variable. You can obtain the TogetherAI API key from their [Account settings page](https://api.together.xyz/settings/api-keys).
## Usage
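A minimal config sketch (the `together` provider string and model name are assumptions):

```python
import os

os.environ.setdefault("TOGETHER_API_KEY", "your-together-api-key")  # placeholder

config = {
    "llm": {
        "provider": "together",  # assumed provider string
        "config": {"model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"},  # assumed model name
    }
}
```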


@@ -2,6 +2,8 @@
title: xAI
---
<Snippet file="paper-release.mdx" />
[xAI](https://x.ai/) is a new AI company founded by Elon Musk that develops large language models, including Grok. Grok is trained on real-time data from X (formerly Twitter) and aims to provide accurate, up-to-date responses with a touch of wit and humor.
To use LLMs from xAI, go to their [platform](https://console.x.ai) and get an API key. Set it as the `XAI_API_KEY` environment variable to use the model as shown in the example below.
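A minimal config sketch (the `xai` provider string and model name are assumptions):

```python
import os

os.environ.setdefault("XAI_API_KEY", "your-xai-api-key")  # placeholder

config = {
    "llm": {
        "provider": "xai",  # assumed provider string
        "config": {"model": "grok-2-latest"},  # assumed model name
    }
}
```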


@@ -4,6 +4,8 @@ icon: "info"
iconType: "solid"
---
<Snippet file="paper-release.mdx" />
Mem0 includes built-in support for various popular large language models. Memory can use the LLM you configure, so memory extraction and retrieval are tailored to your specific needs.
## Usage
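Whatever the provider, the LLM block of a Mem0 config follows one shape: a provider string plus provider-specific settings. A generic sketch (names are illustrative):

```python
config = {
    "llm": {
        "provider": "openai",        # any supported provider string
        "config": {
            "model": "gpt-4o-mini",  # provider-specific model name (illustrative)
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}
```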