improvement(OSS): Fix AOSS and AWS BedRock LLM (#2697)

Co-authored-by: Prateek Chhikara <prateekchhikara24@gmail.com>
Co-authored-by: Deshraj Yadav <deshrajdry@gmail.com>
Committed by Saket Aryan via GitHub on 2025-05-16 04:49:29 +05:30
parent 267e5b13ea
commit 5c67a5e6bc
14 changed files with 502 additions and 127 deletions


```diff
@@ -25,7 +25,7 @@ from mem0 import Memory
 os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
 # AWS credentials
-os.environ["AWS_REGION"] = "us-east-1"
+os.environ["AWS_REGION"] = "us-west-2"
 os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key"
 os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-key"
@@ -33,7 +33,7 @@ config = {
     "embedder": {
         "provider": "aws_bedrock",
         "config": {
-            "model": "amazon.titan-embed-text-v1"
+            "model": "amazon.titan-embed-text-v2:0"
        }
    }
}
```
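The embedder bump above is not cosmetic: Titan Text Embeddings V2 emits 1024-dimensional vectors by default (256, 512, and 1024 are selectable), while V1 always returned 1536, which is why the vector store's `embedding_model_dims` changes elsewhere in this commit. A minimal sketch of the corresponding `bedrock-runtime` request, assuming the V2 payload keys (`inputText`, `dimensions`, `normalize`); the live call is left commented out since it needs credentials:

```python
import json

def titan_v2_request(text: str, dims: int = 1024) -> dict:
    """Build invoke_model kwargs for Titan Text Embeddings V2.

    V2 defaults to 1024-dimensional output (256/512/1024 supported),
    unlike V1's fixed 1536 -- keep embedding_model_dims in sync.
    """
    assert dims in (256, 512, 1024), "Titan V2 supports 256, 512, or 1024 dims"
    return {
        "modelId": "amazon.titan-embed-text-v2:0",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"inputText": text, "dimensions": dims, "normalize": True}),
    }

# With live credentials you would pass this straight to bedrock-runtime:
# client = boto3.client("bedrock-runtime", region_name="us-west-2")
# resp = client.invoke_model(**titan_v2_request("hello"))
# embedding = json.loads(resp["body"].read())["embedding"]
```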


```diff
@@ -15,16 +15,15 @@ title: AWS Bedrock
 import os
 from mem0 import Memory
 os.environ["OPENAI_API_KEY"] = "your-api-key"  # used for embedding model
-os.environ['AWS_REGION'] = 'us-east-1'
-os.environ["AWS_ACCESS_KEY"] = "xx"
+os.environ['AWS_REGION'] = 'us-west-2'
+os.environ["AWS_ACCESS_KEY_ID"] = "xx"
 os.environ["AWS_SECRET_ACCESS_KEY"] = "xx"
 config = {
     "llm": {
         "provider": "aws_bedrock",
         "config": {
-            "model": "arn:aws:bedrock:us-east-1:123456789012:model/your-model-name",
+            "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
             "temperature": 0.2,
             "max_tokens": 2000,
         }
```
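The LLM fix above replaces a placeholder model ARN with a plain Bedrock model ID. Bedrock's runtime APIs accept either form for `modelId`, but the value must name a model that is actually enabled in the configured region. A hypothetical helper (the regexes and function name are illustrative, not from any SDK) that distinguishes the two shapes:

```python
import re

# Plain model IDs look like "provider.model-name-version:rev";
# ARNs start with "arn:aws:bedrock:". Both are accepted as modelId.
MODEL_ID_RE = re.compile(r"^[a-z0-9-]+\.[a-z0-9.-]+(:[0-9]+)?$")
ARN_RE = re.compile(r"^arn:aws:bedrock:")

def model_kind(model: str) -> str:
    if ARN_RE.match(model):
        return "arn"
    if MODEL_ID_RE.match(model):
        return "model-id"
    return "unknown"
```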


````diff
@@ -1,59 +1,75 @@
-[OpenSearch](https://opensearch.org/) is an open-source, enterprise-grade search and observability suite that brings order to unstructured data at scale. OpenSearch supports k-NN (k-Nearest Neighbors) and allows you to store and retrieve high-dimensional vector embeddings efficiently.
+[OpenSearch](https://opensearch.org/) is an enterprise-grade search and observability suite that brings order to unstructured data at scale. OpenSearch supports k-NN (k-Nearest Neighbors) and allows you to store and retrieve high-dimensional vector embeddings efficiently.
 ### Installation
 OpenSearch support requires additional dependencies. Install them with:
 ```bash
-pip install opensearch>=2.8.0
+pip install opensearch-py
 ```
 ### Prerequisites
 Before using OpenSearch with Mem0, you need to set up a collection in AWS OpenSearch Service.
 #### AWS OpenSearch Service
 You can create a collection through the AWS Console:
 - Navigate to [OpenSearch Service Console](https://console.aws.amazon.com/aos/home)
 - Click "Create collection"
 - Select "Serverless collection" and then enable "Vector search" capabilities
 - Once created, note the endpoint URL (host) for your configuration
 ### Usage
 ```python
 import os
 from mem0 import Memory
+import boto3
+from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
 os.environ["OPENAI_API_KEY"] = "sk-xx"
+# For AWS OpenSearch Service with IAM authentication
+region = 'us-west-2'
+service = 'aoss'
+credentials = boto3.Session().get_credentials()
+auth = AWSV4SignerAuth(credentials, region, service)
 config = {
     "vector_store": {
         "provider": "opensearch",
         "config": {
             "collection_name": "mem0",
-            "host": "localhost",
-            "port": 9200,
-            "embedding_model_dims": 1536
+            "host": "your-domain.us-west-2.aoss.amazonaws.com",
+            "port": 443,
+            "http_auth": auth,
+            "embedding_model_dims": 1024,
+            "connection_class": RequestsHttpConnection,
+            "pool_maxsize": 20,
+            "use_ssl": True,
+            "verify_certs": True
         }
     }
 }
 ```
 ### Add Memories
 ```python
 m = Memory.from_config(config)
 messages = [
     {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
     {"role": "assistant", "content": "How about a thriller movies? They can be quite engaging."},
-    {"role": "user", "content": "Im not a big fan of thriller movies but I love sci-fi movies."},
+    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
     {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
 ]
 m.add(messages, user_id="alice", metadata={"category": "movies"})
 ```
-### Config
-Let's see the available parameters for the `opensearch` config:
-| Parameter | Description | Default Value |
-| ---------------------- | -------------------------------------------------- | ------------- |
-| `collection_name` | The name of the index to store the vectors | `mem0` |
-| `embedding_model_dims` | Dimensions of the embedding model | `1536` |
-| `host` | The host where the OpenSearch server is running | `localhost` |
-| `port` | The port where the OpenSearch server is running | `9200` |
-| `api_key` | API key for authentication | `None` |
-| `user` | Username for basic authentication | `None` |
-| `password` | Password for basic authentication | `None` |
-| `verify_certs` | Whether to verify SSL certificates | `False` |
-| `auto_create_index` | Whether to automatically create the index | `True` |
-| `use_ssl` | Whether to use SSL for connection | `False` |
+### Search Memories
+```python
+results = m.search("What kind of movies does Alice like?", user_id="alice")
+```
 ### Features
````
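The console steps in the Prerequisites section of the hunk above can also be scripted against the OpenSearch Serverless API. A sketch around the `opensearchserverless` client's `create_collection` operation; it assumes the required encryption and network security policies already exist, and the helper name is ours:

```python
# Sketch only: scripting the "Create collection" console steps.
# create_collection is a real opensearchserverless operation; the
# type "VECTORSEARCH" enables k-NN vector indexes on the collection.
def vector_collection_request(name: str) -> dict:
    return {
        "name": name,
        "type": "VECTORSEARCH",
        "description": "Vector store for mem0 memories",
    }

# With credentials in place:
# aoss = boto3.client("opensearchserverless", region_name="us-west-2")
# aoss.create_collection(**vector_collection_request("mem0"))
# The response includes the collection endpoint to use as "host".
```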


```diff
@@ -200,6 +200,7 @@
     "icon": "lightbulb",
     "pages": [
         "examples",
+        "examples/aws_example",
         "examples/mem0-demo",
         "examples/ai_companion_js",
         "examples/eliza_os",
```


@@ -0,0 +1,120 @@
---
title: AWS Bedrock and AOSS
---
<Snippet file="paper-release.mdx" />
This example demonstrates how to configure and use the `mem0ai` SDK with **AWS Bedrock** and **OpenSearch Service (AOSS)** for persistent memory capabilities in Python.
## Installation
Install the required dependencies:
```bash
pip install mem0ai boto3 opensearch-py
```
## Environment Setup
Set your AWS environment variables:
```python
import os
# Set these in your environment or notebook
os.environ['AWS_REGION'] = 'us-west-2'
os.environ['AWS_ACCESS_KEY_ID'] = 'AK00000000000000000'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'AS00000000000000000'
# Confirm the variables are set without echoing the secret key
print(os.environ['AWS_REGION'])
print('AWS_ACCESS_KEY_ID' in os.environ and 'AWS_SECRET_ACCESS_KEY' in os.environ)
```
## Configuration and Usage
This sets up Mem0 with AWS Bedrock for embeddings and LLM, and OpenSearch as the vector store.
```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
from mem0 import Memory

region = 'us-west-2'
service = 'aoss'
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, service)

config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
            "temperature": 0.1,
            "max_tokens": 2000
        }
    },
    "vector_store": {
        "provider": "opensearch",
        "config": {
            "collection_name": "mem0",
            "host": "your-opensearch-domain.us-west-2.aoss.amazonaws.com",
            "port": 443,
            "http_auth": auth,
            "embedding_model_dims": 1024,
            "connection_class": RequestsHttpConnection,
            "pool_maxsize": 20,
            "use_ssl": True,
            "verify_certs": True
        }
    }
}

# Initialize memory system
m = Memory.from_config(config)
```
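One easy way to break this setup is to leave `embedding_model_dims` out of sync with the embedder: Titan V2 emits 1024-dimensional vectors here, and a k-NN index will reject vectors of any other length. A hypothetical sanity check over a config dict shaped like the one above (the `KNOWN_DIMS` table and helper are our assumptions, not part of the mem0 API):

```python
# Hypothetical sanity check: the embedder's output size must equal the
# vector store's embedding_model_dims, or k-NN writes/queries will fail.
KNOWN_DIMS = {  # assumed default output sizes for two Bedrock embedders
    "amazon.titan-embed-text-v1": 1536,
    "amazon.titan-embed-text-v2:0": 1024,
}

def check_dims(config: dict) -> None:
    model = config["embedder"]["config"]["model"]
    declared = config["vector_store"]["config"]["embedding_model_dims"]
    expected = KNOWN_DIMS.get(model)
    if expected is not None and expected != declared:
        raise ValueError(
            f"{model} emits {expected}-dim vectors, config declares {declared}"
        )
```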
## Usage
#### Add a memory:
```python
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movie? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]

# Store inferred memories (default behavior)
result = m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"})
```
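At the time of writing, `Memory.add` returns a dict whose `results` list describes the memory events that were written; the exact shape may vary between mem0 versions. A sketch that walks such a payload (the sample data is fabricated for illustration):

```python
# Shape sketch (may vary by mem0 version): add() returns a dict with a
# "results" list describing what was written. Fabricated sample payload:
sample_result = {
    "results": [
        {"id": "mem-1", "memory": "Loves sci-fi movies", "event": "ADD"},
        {"id": "mem-2", "memory": "Dislikes thriller movies", "event": "ADD"},
    ]
}

def summarize(result: dict) -> list:
    return [f'{r["event"]}: {r["memory"]}' for r in result.get("results", [])]

print(summarize(sample_result))
```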
#### Search a memory:
```python
relevant_memories = m.search("What kind of movies does Alice like?", user_id="alice")
```
#### Get all memories:
```python
all_memories = m.get_all(user_id="alice")
```
#### Get a specific memory:
```python
memory = m.get(memory_id)  # memory_id comes from an earlier add() or search() result
```
---
## Conclusion
With Mem0 and AWS services like Bedrock and OpenSearch, you can build intelligent AI companions that remember, adapt, and personalize their responses over time. This makes them ideal for long-term assistants, tutors, or support bots with persistent memory and natural conversation abilities.