improvement(OSS): Fix AOSS and AWS BedRock LLM (#2697)
Co-authored-by: Prateek Chhikara <prateekchhikara24@gmail.com>
Co-authored-by: Deshraj Yadav <deshrajdry@gmail.com>
@@ -25,7 +25,7 @@ from mem0 import Memory
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# AWS credentials
-os.environ["AWS_REGION"] = "us-east-1"
+os.environ["AWS_REGION"] = "us-west-2"
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-key"
@@ -33,7 +33,7 @@ config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {
-            "model": "amazon.titan-embed-text-v1"
+            "model": "amazon.titan-embed-text-v2:0"
        }
    }
}
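Note that Titan Text Embeddings V2 produces 1024-dimensional vectors by default, so the vector store's `embedding_model_dims` has to match the new model. The following is only an illustrative sketch of a paired config, not taken from this diff; the host value is a placeholder:

```python
# Sketch only: pair the Bedrock embedder with a vector store whose
# dimensionality matches titan-embed-text-v2:0's default 1024-dim output.
config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {"model": "amazon.titan-embed-text-v2:0"},
    },
    "vector_store": {
        "provider": "opensearch",
        "config": {
            "collection_name": "mem0",
            "host": "your-domain.us-west-2.aoss.amazonaws.com",  # placeholder endpoint
            "port": 443,
            "embedding_model_dims": 1024,
        },
    },
}
```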
@@ -15,16 +15,15 @@ title: AWS Bedrock
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # used for embedding model
-os.environ['AWS_REGION'] = 'us-east-1'
-os.environ["AWS_ACCESS_KEY"] = "xx"
+os.environ['AWS_REGION'] = 'us-west-2'
+os.environ["AWS_ACCESS_KEY_ID"] = "xx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xx"

config = {
    "llm": {
        "provider": "aws_bedrock",
        "config": {
-            "model": "arn:aws:bedrock:us-east-1:123456789012:model/your-model-name",
+            "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
            "temperature": 0.2,
            "max_tokens": 2000,
        }
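The hard-coded model ARN is replaced with a plain Bedrock model ID. As an aside that is not part of this change, the model IDs offered in a region can be listed with the Bedrock control-plane API; this sketch assumes the credentials above carry `bedrock:ListFoundationModels` permission:

```python
import boto3

# Sketch: list the Anthropic model IDs available in the configured region,
# e.g. to confirm anthropic.claude-3-5-haiku-20241022-v1:0 is offered there.
bedrock = boto3.client("bedrock", region_name="us-west-2")
models = bedrock.list_foundation_models(byProvider="anthropic")
for summary in models["modelSummaries"]:
    print(summary["modelId"])
```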
@@ -1,59 +1,75 @@
-[OpenSearch](https://opensearch.org/) is an open-source, enterprise-grade search and observability suite that brings order to unstructured data at scale. OpenSearch supports k-NN (k-Nearest Neighbors) and allows you to store and retrieve high-dimensional vector embeddings efficiently.
+[OpenSearch](https://opensearch.org/) is an enterprise-grade search and observability suite that brings order to unstructured data at scale. OpenSearch supports k-NN (k-Nearest Neighbors) and allows you to store and retrieve high-dimensional vector embeddings efficiently.

### Installation

OpenSearch support requires additional dependencies. Install them with:

```bash
-pip install opensearch>=2.8.0
+pip install opensearch-py
```
### Prerequisites

Before using OpenSearch with Mem0, you need to set up a collection in AWS OpenSearch Service.

#### AWS OpenSearch Service

You can create a collection through the AWS Console (a programmatic equivalent is sketched after this list):
- Navigate to the [OpenSearch Service Console](https://console.aws.amazon.com/aos/home)
- Click "Create collection"
- Select "Serverless collection" and then enable "Vector search" capabilities
- Once created, note the endpoint URL (host) for your configuration
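For readers who prefer the AWS SDK over the console, roughly the same setup can be scripted. This is an illustrative sketch, not part of this change, and it assumes the encryption, network, and data-access policies for the collection already exist:

```python
import boto3

# Sketch: create a serverless vector-search collection named "mem0".
# The collection endpoint (host) becomes available once the collection is
# ACTIVE, e.g. via batch_get_collection.
aoss = boto3.client("opensearchserverless", region_name="us-west-2")
response = aoss.create_collection(name="mem0", type="VECTORSEARCH")
print(response["createCollectionDetail"]["status"])
```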
### Usage

```python
import os
from mem0 import Memory
+import boto3
+from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

os.environ["OPENAI_API_KEY"] = "sk-xx"

+# For AWS OpenSearch Service with IAM authentication
+region = 'us-west-2'
+service = 'aoss'
+credentials = boto3.Session().get_credentials()
+auth = AWSV4SignerAuth(credentials, region, service)

config = {
    "vector_store": {
        "provider": "opensearch",
        "config": {
            "collection_name": "mem0",
-            "host": "localhost",
-            "port": 9200,
-            "embedding_model_dims": 1536
+            "host": "your-domain.us-west-2.aoss.amazonaws.com",
+            "port": 443,
+            "http_auth": auth,
+            "embedding_model_dims": 1024,
+            "connection_class": RequestsHttpConnection,
+            "pool_maxsize": 20,
+            "use_ssl": True,
+            "verify_certs": True
        }
    }
}
```
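Before wiring the collection into Mem0, it can help to confirm that the SigV4 credentials actually reach the endpoint. The following connectivity check is a sketch rather than part of this change; it reuses the `auth` object from the snippet above, and the host is a placeholder:

```python
from opensearchpy import OpenSearch, RequestsHttpConnection

# Sketch: talk to the collection directly with opensearch-py to verify auth.
client = OpenSearch(
    hosts=[{"host": "your-domain.us-west-2.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,  # the AWSV4SignerAuth object defined above
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)
print(client.indices.exists(index="mem0"))  # False until Mem0 creates the index
```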
### Add Memories

```python
m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
-    {"role": "user", "content": "I’m not a big fan of thriller movies but I love sci-fi movies."},
+    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```
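Not shown on this page, but useful when checking what was actually extracted and stored: Mem0 can also list everything saved for a user without a search query. A one-line sketch, assuming the same `m` instance:

```python
# Sketch: list all memories stored for Alice.
all_memories = m.get_all(user_id="alice")
```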
-### Config
+### Search Memories

-Let's see the available parameters for the `opensearch` config:
-
-| Parameter              | Description                                     | Default Value |
-| ---------------------- | ----------------------------------------------- | ------------- |
-| `collection_name`      | The name of the index to store the vectors      | `mem0`        |
-| `embedding_model_dims` | Dimensions of the embedding model               | `1536`        |
-| `host`                 | The host where the OpenSearch server is running | `localhost`   |
-| `port`                 | The port where the OpenSearch server is running | `9200`        |
-| `api_key`              | API key for authentication                      | `None`        |
-| `user`                 | Username for basic authentication               | `None`        |
-| `password`             | Password for basic authentication               | `None`        |
-| `verify_certs`         | Whether to verify SSL certificates              | `False`       |
-| `auto_create_index`    | Whether to automatically create the index       | `True`        |
-| `use_ssl`              | Whether to use SSL for connection               | `False`       |

+```python
+results = m.search("What kind of movies does Alice like?", user_id="alice")
+```
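The shape of `results` depends on the Mem0 version in use: recent releases return a dict with a `results` key, older ones a plain list. A defensive sketch for printing the hits, assuming each hit carries `memory` and `score` fields:

```python
# Sketch: normalize the return shape before printing the matched memories.
hits = results["results"] if isinstance(results, dict) else results
for hit in hits:
    print(hit.get("memory"), hit.get("score"))
```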
### Features