AWS Bedrock Integration and spell checks (#3124)
@@ -496,7 +496,7 @@ mode: "wide"
 <Update label="2025-04-28" description="v2.1.20">
 **Improvements:**
-- **Client:** Fixed `organizationId` and `projectId` being asssigned to default in `ping` method
+- **Client:** Fixed `organizationId` and `projectId` being assigned to default in `ping` method
 </Update>

 <Update label="2025-04-22" description="v2.1.19">
@@ -555,7 +555,7 @@ mode: "wide"
 <Update label="2025-03-29" description="v2.1.13">
 **Improvements:**
-- **Introuced `ping` method to check if API key is valid and populate org/project id**
+- **Introduced `ping` method to check if API key is valid and populate org/project id**
 </Update>

 <Update label="2025-03-29" description="AI SDK v1.0.0">
@@ -260,6 +260,7 @@
 "integrations/livekit",
 "integrations/pipecat",
 "integrations/elevenlabs",
+"integrations/aws-bedrock",
 "integrations/flowise",
 "integrations/langchain-tools",
 "integrations/agentops",
@@ -61,7 +61,7 @@ class Companion:
 check_prompt = f"""
 Analyze the given input and determine whether the user is primarily:
 1) Talking about themselves or asking for personal advice. They may use words like "I" for this.
-2) Inquiring about the AI companions's capabilities or characteristics They may use words like "you" for this.
+2) Inquiring about the AI companion's capabilities or characteristics They may use words like "you" for this.

 Respond with a single word:
 - 'user' if the input is focused on the user
@@ -80,7 +80,7 @@ agent = FunctionCallingAgent.from_tools(
 ```

 Start the chat.
-<Note> The agent will use the Mem0 to store the relavant memories from the chat. </Note>
+<Note> The agent will use the Mem0 to store the relevant memories from the chat. </Note>

 Input
 ```python
@@ -139,7 +139,7 @@ Added user message to memory: I am feeling hungry, order me something and send m
 === LLM Response ===
 Please let me know your name and the dish you'd like to order, and I'll take care of it for you!
 ```
-<Note> The agent is not able to remember the past prefernces that user shared in previous chats. </Note>
+<Note> The agent is not able to remember the past preferences that user shared in previous chats. </Note>

 ### Using the agent WITH memory
 Input
@@ -171,4 +171,4 @@ Emailing... David
 === LLM Response ===
 I've ordered a pizza for you, and the bill has been sent to your email. Enjoy your meal! If there's anything else you need, feel free to let me know.
 ```
-<Note> The agent is able to remember the past prefernces that user shared and use them to perform actions. </Note>
+<Note> The agent is able to remember the past preferences that user shared and use them to perform actions. </Note>
@@ -6,28 +6,28 @@ title: Multimodal Demo with Mem0

 Enhance your AI interactions with **Mem0**'s multimodal capabilities. Mem0 now supports image understanding, allowing for richer context and more natural interactions across supported AI platforms.

-> 🎉 Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)
+> Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)

-## 🚀 Features
+## Features

-- **🖼️ Image Understanding**: Share and discuss images with AI assistants while maintaining context.
-- **🔍 Smart Visual Context**: Automatically capture and reference visual elements in conversations.
-- **🔗 Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
-- **📌 Cross-Session Recall**: Reference previously discussed visual content across different conversations.
-- **⚡ Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.
+- **Image Understanding**: Share and discuss images with AI assistants while maintaining context.
+- **Smart Visual Context**: Automatically capture and reference visual elements in conversations.
+- **Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
+- **Cross-Session Recall**: Reference previously discussed visual content across different conversations.
+- **Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.

-## 📖 How It Works
+## How It Works

-1. **📂 Upload Visual Content**: Simply drag and drop or paste images into your conversations.
-2. **💬 Natural Interaction**: Discuss the visual content naturally with AI assistants.
-3. **📚 Memory Integration**: Visual context is automatically stored and linked with your conversation history.
-4. **🔄 Persistent Recall**: Retrieve and reference past visual content effortlessly.
+1. **Upload Visual Content**: Simply drag and drop or paste images into your conversations.
+2. **Natural Interaction**: Discuss the visual content naturally with AI assistants.
+3. **Memory Integration**: Visual context is automatically stored and linked with your conversation history.
+4. **Persistent Recall**: Retrieve and reference past visual content effortlessly.

 ## Demo Video

 <iframe width="700" height="400" src="https://www.youtube.com/embed/2Md5AEFVpmg?si=rXXupn6CiDUPJsi3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

-## 🔥 Try It Out
+## Try It Out

 Visit [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai) to experience Mem0's multimodal capabilities firsthand. Upload images and see how Mem0 understands and remembers visual context across your conversations.
@@ -14,7 +14,7 @@ iconType: "solid"

 When an AI agent or LLM needs to access memories, it employs the `search` method. Mem0 conducts a comprehensive search across these data stores, retrieving relevant information from each.

-The retrieved memories can be seamlessly integrated into the LLM's prompt as required, enhancing the personalization and relevance of responses.
+The retrieved memories can be seamlessly integrated into the system prompt as required, enhancing the personalization and relevance of responses.
 </Accordion>

 <Accordion title="What are the key features of Mem0?">
@@ -23,7 +23,7 @@ iconType: "solid"
 - **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
 - **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
 - **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
-- **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to context window
 </Accordion>

 <Accordion title="How Mem0 is different from traditional RAG?">
@@ -13,7 +13,7 @@ iconType: "solid"
 - **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
 - **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
 - **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
-- **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to context window
@@ -7,10 +7,10 @@ Integrate [**Mem0**](https://github.com/mem0ai/mem0) with [Agno](https://github.

 ## Overview

-1. 🧠 Store and retrieve memories from Mem0 within Agno agents
-2. 🖼️ Support for multimodal interactions (text and images)
-3. 🔍 Semantic search for relevant past conversations
-4. 🌐 Personalized responses based on user history
+1. Store and retrieve memories from Mem0 within Agno agents
+2. Support for multimodal interactions (text and images)
+3. Semantic search for relevant past conversations
+4. Personalized responses based on user history

 ## Prerequisites
docs/integrations/aws-bedrock.mdx (new file, 132 lines)
@@ -0,0 +1,132 @@
---
title: AWS Bedrock
---

<Snippet file="security-compliance.mdx" />

This integration demonstrates how to use **Mem0** with **AWS Bedrock** and **Amazon OpenSearch Service (AOSS)** to enable persistent, semantic memory in intelligent agents.

## Overview

In this guide, you'll:

1. Configure AWS credentials to enable Bedrock and OpenSearch access
2. Set up the Mem0 SDK to use Bedrock for embeddings and LLM
3. Store and retrieve memories using OpenSearch as a vector store
4. Build memory-aware applications with scalable cloud infrastructure

## Prerequisites

- AWS account with access to:
  - Bedrock foundation models (e.g., Titan, Claude)
  - OpenSearch Service with a configured domain
- Python 3.8+
- Valid AWS credentials (via environment or IAM role)

## Setup and Installation

Install required packages:

```bash
pip install mem0ai boto3 opensearch-py
```
Set environment variables:

Be sure to configure your AWS credentials using environment variables, IAM roles, or the AWS CLI.

```python
import os

os.environ['AWS_REGION'] = 'us-west-2'
os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA...'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'AS...'
```
## Initialize Mem0 Integration

Import necessary modules and configure Mem0:

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
from mem0.memory.main import Memory

region = 'us-west-2'
service = 'aoss'
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, service)

config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
            "temperature": 0.1,
            "max_tokens": 2000
        }
    },
    "vector_store": {
        "provider": "opensearch",
        "config": {
            "collection_name": "mem0",
            "host": "your-opensearch-domain.us-west-2.es.amazonaws.com",
            "port": 443,
            "http_auth": auth,
            "embedding_model_dims": 1024,
            "connection_class": RequestsHttpConnection,
            "pool_maxsize": 20,
            "use_ssl": True,
            "verify_certs": True
        }
    }
}

# Initialize memory system
m = Memory.from_config(config)
```
## Memory Operations

Use Mem0 with your Bedrock-powered LLM and OpenSearch storage backend:

```python
# Store conversational context
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller?"},
    {"role": "user", "content": "I prefer sci-fi."},
    {"role": "assistant", "content": "Noted! I'll suggest sci-fi movies next time."}
]

m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"})

# Search for memory
relevant = m.search("What kind of movies does Alice like?", user_id="alice")

# Retrieve all user memories
all_memories = m.get_all(user_id="alice")
```
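The search results can then be folded into a prompt for the Bedrock LLM. A minimal sketch with a hypothetical helper (`build_context` is not part of the Mem0 SDK; it assumes each result is a dict with a `memory` key, the shape used elsewhere in these docs):

```python
def build_context(memories):
    """Join retrieved memory texts into one system-prompt string."""
    texts = [mem["memory"] for mem in memories]
    return "Relevant information: " + " ".join(texts)

# Hypothetical search results, shaped like Mem0 memory records
sample = [
    {"memory": "Alice prefers sci-fi movies"},
    {"memory": "Alice asked for movie recommendations"},
]
system_prompt = build_context(sample)
print(system_prompt)
```

The resulting string can be passed as the system message of a chat request, so the model answers with Alice's stored preferences in scope.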

## Key Features

1. **Serverless Memory Embeddings**: Use Titan or other Bedrock models for fast, cloud-native embeddings
2. **Scalable Vector Search**: Store and retrieve vectorized memories via OpenSearch
3. **Seamless AWS Auth**: Uses AWS IAM or environment variables to securely authenticate
4. **User-specific Memory Spaces**: Memories are isolated per user ID
5. **Persistent Memory Context**: Maintain and recall history across sessions
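The per-user isolation in feature 4 can be pictured with a toy in-memory store (this is an illustration only, not the Mem0 implementation; `UserMemoryStore` is a made-up class):

```python
class UserMemoryStore:
    """Toy illustration: each user_id maps to its own isolated memory list."""
    def __init__(self):
        self._spaces = {}

    def add(self, text, user_id):
        # Memories land only in the space keyed by this user_id
        self._spaces.setdefault(user_id, []).append(text)

    def get_all(self, user_id):
        # A user never sees another user's memories
        return list(self._spaces.get(user_id, []))

store = UserMemoryStore()
store.add("prefers sci-fi movies", user_id="alice")
store.add("is allergic to peanuts", user_id="bob")
print(store.get_all("alice"))  # → ['prefers sci-fi movies']
```

Mem0 applies the same idea at the vector-store level: `add`, `search`, and `get_all` all scope their work to the `user_id` you pass.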

## Help

- [AWS Bedrock Documentation](https://docs.aws.amazon.com/bedrock/)
- [Amazon OpenSearch Service Docs](https://docs.aws.amazon.com/opensearch-service/)
- [Mem0 Platform](https://app.mem0.ai)

<Snippet file="get-help.mdx" />
@@ -64,11 +64,11 @@ Create functions to handle context retrieval, response generation, and addition
 def retrieve_context(query: str, user_id: str) -> List[Dict]:
     """Retrieve relevant context from Mem0"""
     memories = mem0.search(query, user_id=user_id)
-    seralized_memories = ' '.join([mem["memory"] for mem in memories])
+    serialized_memories = ' '.join([mem["memory"] for mem in memories])
     context = [
         {
             "role": "system",
-            "content": f"Relevant information: {seralized_memories}"
+            "content": f"Relevant information: {serialized_memories}"
         },
         {
             "role": "user",
@@ -103,7 +103,7 @@ memory_from_config = Mem0Memory.from_config(
 )
 ```

-Intilaize the LLM
+Initialize the LLM

 ```python
 import os