Add langchain embedding, update langchain LLM and version bump -> 0.1.84 (#2510)

This commit is contained in:
Dev Khant
2025-04-07 15:27:26 +05:30
committed by GitHub
parent 5509066925
commit 9dfa9b4412
14 changed files with 266 additions and 253 deletions


@@ -6,6 +6,15 @@ mode: "wide"
<Tabs>
<Tab title="Python">
<Update label="2025-04-07" description="v0.1.84">
**New Features:**
- **Langchain Embedder:** Added Langchain embedder integration
**Improvements:**
- **Langchain LLM:** Updated the LangChain LLM integration to accept an initialized LangChain LLM object directly
</Update>
<Update label="2025-04-07" description="v0.1.83">
**Bug Fixes:**


@@ -0,0 +1,120 @@
---
title: LangChain
---
Mem0 supports LangChain as a provider to access a wide range of embedding models. LangChain is a framework for developing applications powered by language models, making it easy to integrate various embedding providers through a consistent interface.
For a complete list of available embedding models supported by LangChain, refer to the [LangChain Text Embedding documentation](https://python.langchain.com/docs/integrations/text_embedding/).
## Usage
<CodeGroup>
```python Python
import os
from mem0 import Memory
from langchain_openai import OpenAIEmbeddings

# Set necessary environment variables for your chosen LangChain provider
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Initialize a LangChain embeddings model directly
openai_embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    dimensions=1536
)

# Pass the initialized model to the config
config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": openai_embeddings
        }
    }
}

m = Memory.from_config(config)

messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]

m.add(messages, user_id="alice", metadata={"category": "movies"})
```
</CodeGroup>
## Supported LangChain Embedding Providers
LangChain supports a wide range of embedding providers, including:
- OpenAI (`OpenAIEmbeddings`)
- Cohere (`CohereEmbeddings`)
- Google (`VertexAIEmbeddings`)
- Hugging Face (`HuggingFaceEmbeddings`)
- Sentence Transformers (`HuggingFaceEmbeddings`)
- Azure OpenAI (`AzureOpenAIEmbeddings`)
- Ollama (`OllamaEmbeddings`)
- Together (`TogetherEmbeddings`)
- And many more
You can use any of these model instances directly in your configuration. For a complete and up-to-date list of available embedding providers, refer to the [LangChain Text Embedding documentation](https://python.langchain.com/docs/integrations/text_embedding/).
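This flexibility works because Mem0's `langchain` embedder presumably relies only on the standard LangChain `Embeddings` interface, i.e. the `embed_documents` and `embed_query` methods. A minimal stand-in class (hypothetical, for illustration only — real usage passes an actual LangChain model) shows the duck-typed contract:

```python
class FakeEmbeddings:
    """Stand-in exposing the two methods LangChain's Embeddings interface defines."""

    def __init__(self, dimensions: int = 4):
        self.dimensions = dimensions

    def embed_documents(self, texts):
        # One fixed-length vector per input text (toy values, length-based)
        return [[float(len(t))] * self.dimensions for t in texts]

    def embed_query(self, text):
        return [float(len(text))] * self.dimensions


fake = FakeEmbeddings(dimensions=4)
doc_vectors = fake.embed_documents(["hello", "world!!"])
query_vector = fake.embed_query("hello")
print(len(doc_vectors), len(query_vector))  # → 2 4
```

Any real LangChain embedding class (e.g. `OpenAIEmbeddings`, `HuggingFaceEmbeddings`) satisfies the same two-method contract, which is why it can be dropped into the config unchanged.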
## Provider-Specific Configuration
When using LangChain as an embedder provider, you'll need to:
1. Set the appropriate environment variables for your chosen embedding provider
2. Import and initialize the specific model class you want to use
3. Pass the initialized model instance to the config
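The three steps can be sketched end to end; the placeholder object below stands in for a real LangChain model so the config shape is visible without provider credentials:

```python
import os

# Step 1: credentials for the chosen provider (the variable name is
# provider-specific; OPENAI_API_KEY is shown as an example)
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Step 2: initialize the model class -- a placeholder stands in here
# for e.g. OpenAIEmbeddings(model="text-embedding-3-small")
embeddings_model = object()

# Step 3: pass the instance (not a string) into the embedder config
config = {
    "embedder": {
        "provider": "langchain",
        "config": {"model": embeddings_model},
    }
}

print(config["embedder"]["provider"])  # → langchain
```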
### Examples with Different Providers
#### HuggingFace Embeddings
```python
from langchain_huggingface import HuggingFaceEmbeddings

# Initialize a HuggingFace embeddings model
hf_embeddings = HuggingFaceEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    encode_kwargs={"normalize_embeddings": True}
)

config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": hf_embeddings
        }
    }
}
```
#### Ollama Embeddings
```python
from langchain_ollama import OllamaEmbeddings

# Initialize an Ollama embeddings model
ollama_embeddings = OllamaEmbeddings(
    model="nomic-embed-text"
)

config = {
    "embedder": {
        "provider": "langchain",
        "config": {
            "model": ollama_embeddings
        }
    }
}
```
<Note>
Make sure to install the necessary LangChain packages and any provider-specific dependencies.
</Note>
## Config
All available parameters for the `langchain` embedder config are listed in the [Master List of All Params in Config](../config).
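As a shape reference, the `langchain` embedder block composes with the rest of a Mem0 config like any other embedder. The `vector_store` values below are assumptions for illustration — check the master param list for the real options:

```python
embeddings_model = object()  # stand-in for an initialized LangChain model

config = {
    "embedder": {
        "provider": "langchain",
        # The langchain embedder needs only the model instance itself
        "config": {"model": embeddings_model},
    },
    "vector_store": {
        "provider": "qdrant",  # assumed example provider
        "config": {
            "collection_name": "mem0",     # assumed name
            "embedding_model_dims": 1536,  # should match the embedder's output size
        },
    },
}

print(sorted(config["embedder"]["config"]))  # → ['model']
```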


@@ -23,6 +23,7 @@ See the list of supported embedders below.
<Card title="Vertex AI" href="/components/embedders/models/vertexai"></Card>
<Card title="Together" href="/components/embedders/models/together"></Card>
<Card title="LM Studio" href="/components/embedders/models/lmstudio"></Card>
<Card title="Langchain" href="/components/embedders/models/langchain"></Card>
</CardGroup>
## Usage


@@ -109,7 +109,6 @@ Here's a comprehensive list of all parameters that can be used across different
| `deepseek_base_url` | Base URL for DeepSeek API | DeepSeek |
| `xai_base_url` | Base URL for XAI API | XAI |
| `lmstudio_base_url` | Base URL for LM Studio API | LM Studio |
-| `langchain_provider` | Provider for Langchain | Langchain |
</Tab>
<Tab title="TypeScript">
| Parameter | Description | Provider |


@@ -12,19 +12,24 @@ For a complete list of available chat models supported by LangChain, refer to th
```python Python
import os
from mem0 import Memory
+from langchain_openai import ChatOpenAI

# Set necessary environment variables for your chosen LangChain provider
# For example, if using OpenAI through LangChain:
os.environ["OPENAI_API_KEY"] = "your-api-key"

+# Initialize a LangChain model directly
+openai_model = ChatOpenAI(
+    model="gpt-4o",
+    temperature=0.2,
+    max_tokens=2000
+)

+# Pass the initialized model to the config
config = {
    "llm": {
        "provider": "langchain",
        "config": {
-            "langchain_provider": "OpenAI",
-            "model": "gpt-4o",
-            "temperature": 0.2,
-            "max_tokens": 2000,
+            "model": openai_model
        }
    }
}
@@ -53,15 +58,15 @@ LangChain supports a wide range of LLM providers, including:
- HuggingFace (`ChatHuggingFace`)
- And many more
-You can specify any supported provider in the `langchain_provider` parameter. For a complete and up-to-date list of available providers, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
+You can use any of these model instances directly in your configuration. For a complete and up-to-date list of available providers, refer to the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat).
## Provider-Specific Configuration
When using LangChain as a provider, you'll need to:
1. Set the appropriate environment variables for your chosen LLM provider
-2. Specify the LangChain provider class name in the `langchain_provider` parameter
-3. Include any additional configuration parameters required by the specific provider
+2. Import and initialize the specific model class you want to use
+3. Pass the initialized model instance to the config
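In the same spirit as the embedder change, the updated integration presumably needs only an object exposing the LangChain chat-model calling convention: an `invoke` method taking a list of messages and returning a response whose text lives on `.content`. A hypothetical stand-in illustrates that contract:

```python
class FakeResponse:
    """Mimics the shape of a LangChain AIMessage (text on .content)."""

    def __init__(self, content):
        self.content = content


class FakeChatModel:
    """Stand-in mimicking the LangChain chat-model interface."""

    def __init__(self, canned_reply):
        self.canned_reply = canned_reply

    def invoke(self, messages):
        # Real LangChain chat models accept a list of messages and
        # return a message object; this toy version ignores the input
        return FakeResponse(self.canned_reply)


model = FakeChatModel("sci-fi it is")
reply = model.invoke([{"role": "user", "content": "Recommend a genre"}])
print(reply.content)  # → sci-fi it is
```

A real `ChatOpenAI` (or any other LangChain chat model) satisfies the same interface, which is what allows it to be passed into the config as an object.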
<Note>
Make sure to install the necessary LangChain packages and any provider-specific dependencies.


@@ -161,7 +161,8 @@
"components/embedders/models/vertexai",
"components/embedders/models/gemini",
"components/embedders/models/lmstudio",
"components/embedders/models/together"
"components/embedders/models/together",
"components/embedders/models/langchain"
]
}
]