diff --git a/docs/changelog.mdx b/docs/changelog.mdx
index 68189b6d..01b7496e 100644
--- a/docs/changelog.mdx
+++ b/docs/changelog.mdx
@@ -496,7 +496,7 @@ mode: "wide"
**Improvements:**
-- **Client:** Fixed `organizationId` and `projectId` being asssigned to default in `ping` method
+- **Client:** Fixed `organizationId` and `projectId` being assigned default values in the `ping` method
@@ -555,7 +555,7 @@ mode: "wide"
**Improvements:**
-- **Introuced `ping` method to check if API key is valid and populate org/project id**
+- **Introduced `ping` method to check if API key is valid and populate org/project id**
diff --git a/docs/docs.json b/docs/docs.json
index 3f6bbd2c..e5c3d45a 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -260,6 +260,7 @@
"integrations/livekit",
"integrations/pipecat",
"integrations/elevenlabs",
+ "integrations/aws-bedrock",
"integrations/flowise",
"integrations/langchain-tools",
"integrations/agentops",
diff --git a/docs/examples/ai_companion.mdx b/docs/examples/ai_companion.mdx
index 945a6da4..56a786c4 100644
--- a/docs/examples/ai_companion.mdx
+++ b/docs/examples/ai_companion.mdx
@@ -61,7 +61,7 @@ class Companion:
check_prompt = f"""
Analyze the given input and determine whether the user is primarily:
1) Talking about themselves or asking for personal advice. They may use words like "I" for this.
- 2) Inquiring about the AI companions's capabilities or characteristics They may use words like "you" for this.
+ 2) Inquiring about the AI companion's capabilities or characteristics. They may use words like "you" for this.
Respond with a single word:
- 'user' if the input is focused on the user
diff --git a/docs/examples/llama-index-mem0.mdx b/docs/examples/llama-index-mem0.mdx
index 371b3e52..19c867e1 100644
--- a/docs/examples/llama-index-mem0.mdx
+++ b/docs/examples/llama-index-mem0.mdx
@@ -80,7 +80,7 @@ agent = FunctionCallingAgent.from_tools(
```
Start the chat.
- The agent will use the Mem0 to store the relavant memories from the chat.
+ The agent will use Mem0 to store the relevant memories from the chat.
Input
```python
@@ -139,7 +139,7 @@ Added user message to memory: I am feeling hungry, order me something and send m
=== LLM Response ===
Please let me know your name and the dish you'd like to order, and I'll take care of it for you!
```
- The agent is not able to remember the past prefernces that user shared in previous chats.
+ The agent is not able to remember the past preferences that the user shared in previous chats.
### Using the agent WITH memory
Input
@@ -171,4 +171,4 @@ Emailing... David
=== LLM Response ===
I've ordered a pizza for you, and the bill has been sent to your email. Enjoy your meal! If there's anything else you need, feel free to let me know.
```
- The agent is able to remember the past prefernces that user shared and use them to perform actions.
+ The agent is able to remember the past preferences that the user shared and use them to perform actions.
diff --git a/docs/examples/multimodal-demo.mdx b/docs/examples/multimodal-demo.mdx
index 935e616e..1f034e82 100644
--- a/docs/examples/multimodal-demo.mdx
+++ b/docs/examples/multimodal-demo.mdx
@@ -6,28 +6,28 @@ title: Multimodal Demo with Mem0
Enhance your AI interactions with **Mem0**'s multimodal capabilities. Mem0 now supports image understanding, allowing for richer context and more natural interactions across supported AI platforms.
-> 🎉 Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)
+> Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)
-## 🚀 Features
+## Features
-- **🖼️ Image Understanding**: Share and discuss images with AI assistants while maintaining context.
-- **🔍 Smart Visual Context**: Automatically capture and reference visual elements in conversations.
-- **🔗 Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
-- **📌 Cross-Session Recall**: Reference previously discussed visual content across different conversations.
-- **⚡ Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.
+- **Image Understanding**: Share and discuss images with AI assistants while maintaining context.
+- **Smart Visual Context**: Automatically capture and reference visual elements in conversations.
+- **Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
+- **Cross-Session Recall**: Reference previously discussed visual content across different conversations.
+- **Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.
-## 📖 How It Works
+## How It Works
-1. **📂 Upload Visual Content**: Simply drag and drop or paste images into your conversations.
-2. **💬 Natural Interaction**: Discuss the visual content naturally with AI assistants.
-3. **📚 Memory Integration**: Visual context is automatically stored and linked with your conversation history.
-4. **🔄 Persistent Recall**: Retrieve and reference past visual content effortlessly.
+1. **Upload Visual Content**: Simply drag and drop or paste images into your conversations.
+2. **Natural Interaction**: Discuss the visual content naturally with AI assistants.
+3. **Memory Integration**: Visual context is automatically stored and linked with your conversation history.
+4. **Persistent Recall**: Retrieve and reference past visual content effortlessly.
## Demo Video
-## 🔥 Try It Out
+## Try It Out
Visit [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai) to experience Mem0's multimodal capabilities firsthand. Upload images and see how Mem0 understands and remembers visual context across your conversations.
diff --git a/docs/faqs.mdx b/docs/faqs.mdx
index edf9bc36..c1efef9f 100644
--- a/docs/faqs.mdx
+++ b/docs/faqs.mdx
@@ -14,7 +14,7 @@ iconType: "solid"
When an AI agent or LLM needs to access memories, it employs the `search` method. Mem0 conducts a comprehensive search across these data stores, retrieving relevant information from each.
- The retrieved memories can be seamlessly integrated into the LLM's prompt as required, enhancing the personalization and relevance of responses.
+ The retrieved memories can be seamlessly integrated into the system prompt as required, enhancing the personalization and relevance of responses.
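+
+ For example, here is a minimal sketch of this flow, assuming the open-source `Memory` class with a configured LLM provider (the query and `user_id` are illustrative, and the return shape of `search` varies by version, so both shapes are handled):
+
+ ```python
+ from mem0 import Memory
+
+ m = Memory()
+
+ # Retrieve memories relevant to the current query
+ memories = m.search("What does the user like to eat?", user_id="alice")
+ results = memories["results"] if isinstance(memories, dict) else memories
+ context = " ".join(mem["memory"] for mem in results)
+
+ # Fold the retrieved memories into the system prompt for the next LLM call
+ system_prompt = f"You are a helpful assistant. Relevant user information: {context}"
+ ```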
@@ -23,7 +23,7 @@ iconType: "solid"
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
- - **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+ - **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to the context window.
diff --git a/docs/features.mdx b/docs/features.mdx
index 6d758f26..08114c1a 100644
--- a/docs/features.mdx
+++ b/docs/features.mdx
@@ -13,7 +13,7 @@ iconType: "solid"
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
-- **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to the context window.
diff --git a/docs/integrations/agno.mdx b/docs/integrations/agno.mdx
index 77ff9140..01832313 100644
--- a/docs/integrations/agno.mdx
+++ b/docs/integrations/agno.mdx
@@ -7,10 +7,10 @@ Integrate [**Mem0**](https://github.com/mem0ai/mem0) with [Agno](https://github.
## Overview
-1. 🧠 Store and retrieve memories from Mem0 within Agno agents
-2. 🖼️ Support for multimodal interactions (text and images)
-3. 🔍 Semantic search for relevant past conversations
-4. 🌐 Personalized responses based on user history
+1. Store and retrieve memories from Mem0 within Agno agents
+2. Support for multimodal interactions (text and images)
+3. Semantic search for relevant past conversations
+4. Personalized responses based on user history
## Prerequisites
diff --git a/docs/integrations/aws-bedrock.mdx b/docs/integrations/aws-bedrock.mdx
new file mode 100644
index 00000000..bc718fd0
--- /dev/null
+++ b/docs/integrations/aws-bedrock.mdx
@@ -0,0 +1,132 @@
+---
+title: AWS Bedrock
+---
+
+
+
+This integration demonstrates how to use **Mem0** with **AWS Bedrock** and **Amazon OpenSearch Service** to enable persistent, semantic memory in intelligent agents.
+
+## Overview
+
+In this guide, you'll:
+
+1. Configure AWS credentials to enable Bedrock and OpenSearch access
+2. Set up the Mem0 SDK to use Bedrock for embeddings and LLM
+3. Store and retrieve memories using OpenSearch as a vector store
+4. Build memory-aware applications with scalable cloud infrastructure
+
+## Prerequisites
+
+- AWS account with access to:
+ - Bedrock foundation models (e.g., Titan, Claude)
+ - OpenSearch Service with a configured domain
+- Python 3.8+
+- Valid AWS credentials (via environment or IAM role)
+
+## Setup and Installation
+
+Install required packages:
+
+```bash
+pip install mem0ai boto3 opensearch-py
+```
+
+Set environment variables:
+
+Configure your AWS credentials using environment variables (as shown below), an IAM role, or the AWS CLI.
+
+```python
+import os
+
+os.environ['AWS_REGION'] = 'us-west-2'
+os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA...'
+os.environ['AWS_SECRET_ACCESS_KEY'] = 'AS...'
+```
+
+## Initialize Mem0 Integration
+
+Import necessary modules and configure Mem0:
+
+```python
+import boto3
+from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
+from mem0.memory.main import Memory
+
+region = 'us-west-2'
+service = 'aoss'  # 'aoss' for OpenSearch Serverless; use 'es' for a managed OpenSearch domain
+credentials = boto3.Session().get_credentials()
+auth = AWSV4SignerAuth(credentials, region, service)
+
+config = {
+ "embedder": {
+ "provider": "aws_bedrock",
+ "config": {
+ "model": "amazon.titan-embed-text-v2:0"
+ }
+ },
+ "llm": {
+ "provider": "aws_bedrock",
+ "config": {
+ "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
+ "temperature": 0.1,
+ "max_tokens": 2000
+ }
+ },
+ "vector_store": {
+ "provider": "opensearch",
+ "config": {
+ "collection_name": "mem0",
+ "host": "your-opensearch-domain.us-west-2.es.amazonaws.com",
+ "port": 443,
+ "http_auth": auth,
+ "embedding_model_dims": 1024,
+ "connection_class": RequestsHttpConnection,
+ "pool_maxsize": 20,
+ "use_ssl": True,
+ "verify_certs": True
+ }
+ }
+}
+
+# Initialize memory system
+m = Memory.from_config(config)
+```
+
+## Memory Operations
+
+Use Mem0 with your Bedrock-powered LLM and OpenSearch storage backend:
+
+```python
+# Store conversational context
+messages = [
+ {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
+ {"role": "assistant", "content": "How about a thriller?"},
+ {"role": "user", "content": "I prefer sci-fi."},
+ {"role": "assistant", "content": "Noted! I'll suggest sci-fi movies next time."}
+]
+
+m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"})
+
+# Search for memory
+relevant = m.search("What kind of movies does Alice like?", user_id="alice")
+
+# Retrieve all user memories
+all_memories = m.get_all(user_id="alice")
+```
+
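+To use these memories at generation time, you can fold the search results into a Bedrock call. Here is a minimal sketch using the Bedrock Converse API with the same Claude model ID configured above (the prompt text is illustrative, and the return shape of `search` varies by Mem0 version, so both shapes are handled):
+
+```python
+import boto3
+
+bedrock = boto3.client("bedrock-runtime", region_name=region)
+
+# Serialize the retrieved memories (handle both list and {"results": [...]} shapes)
+results = relevant["results"] if isinstance(relevant, dict) else relevant
+memory_context = " ".join(mem["memory"] for mem in results)
+
+# Ask the Claude model configured above, grounded in the user's memories
+response = bedrock.converse(
+    modelId="anthropic.claude-3-5-haiku-20241022-v1:0",
+    system=[{"text": f"You are a helpful assistant. Known user preferences: {memory_context}"}],
+    messages=[{"role": "user", "content": [{"text": "Recommend a movie for tonight."}]}],
+    inferenceConfig={"maxTokens": 512, "temperature": 0.1},
+)
+print(response["output"]["message"]["content"][0]["text"])
+```
+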
+## Key Features
+
+1. **Serverless Memory Embeddings**: Use Titan or other Bedrock models for fast, cloud-native embeddings
+2. **Scalable Vector Search**: Store and retrieve vectorized memories via OpenSearch
+3. **Seamless AWS Auth**: Uses AWS IAM or environment variables to securely authenticate
+4. **User-specific Memory Spaces**: Memories are isolated per user ID (see the sketch below)
+5. **Persistent Memory Context**: Maintain and recall history across sessions
+
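+Per-user isolation means that searches scoped to one `user_id` never surface another user's memories. A quick sketch reusing the `m` instance from above (the second user `bob` is hypothetical):
+
+```python
+# Store a memory for a different user
+m.add([{"role": "user", "content": "I only watch documentaries."}], user_id="bob")
+
+# Each search is scoped to its own user's memory space
+print(m.search("What movies does this user like?", user_id="alice"))  # sci-fi preference
+print(m.search("What movies does this user like?", user_id="bob"))    # documentaries only
+```
+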
+## Help
+
+- [AWS Bedrock Documentation](https://docs.aws.amazon.com/bedrock/)
+- [Amazon OpenSearch Service Docs](https://docs.aws.amazon.com/opensearch-service/)
+- [Mem0 Platform](https://app.mem0.ai)
+
+
+
diff --git a/docs/integrations/langchain.mdx b/docs/integrations/langchain.mdx
index df249c26..1ac21518 100644
--- a/docs/integrations/langchain.mdx
+++ b/docs/integrations/langchain.mdx
@@ -64,11 +64,11 @@ Create functions to handle context retrieval, response generation, and addition
def retrieve_context(query: str, user_id: str) -> List[Dict]:
"""Retrieve relevant context from Mem0"""
memories = mem0.search(query, user_id=user_id)
- seralized_memories = ' '.join([mem["memory"] for mem in memories])
+ serialized_memories = ' '.join([mem["memory"] for mem in memories])
context = [
{
"role": "system",
- "content": f"Relevant information: {seralized_memories}"
+ "content": f"Relevant information: {serialized_memories}"
},
{
"role": "user",
diff --git a/docs/integrations/llama-index.mdx b/docs/integrations/llama-index.mdx
index cbf43377..77fe57ac 100644
--- a/docs/integrations/llama-index.mdx
+++ b/docs/integrations/llama-index.mdx
@@ -103,7 +103,7 @@ memory_from_config = Mem0Memory.from_config(
)
```
-Intilaize the LLM
+Initialize the LLM
```python
import os