Add ollama example (#1711)
docs/examples/mem0-with-ollama.mdx (new file, 73 lines)
@@ -0,0 +1,73 @@
---
title: Mem0 with Ollama
---

## Running Mem0 Locally with Ollama

Mem0 can be utilized entirely locally by leveraging Ollama for both the embedding model and the language model (LLM). This guide will walk you through the necessary steps and provide the complete code to get you started.

### Overview

By using Ollama, you can run Mem0 locally, which allows for greater control over your data and models. This setup uses Ollama for both the embedding model and the language model, providing a fully local solution.

### Setup

Before you begin, ensure you have Mem0 and Ollama installed and properly configured on your local machine, and that a local Qdrant instance is running, since the example below uses it as the vector store.
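
To confirm both services are reachable before running the example, a quick check like the following can help. This is a minimal sketch, not part of Mem0 itself; it assumes the default local ports used in the configuration below and the `requests` package.

```python
import requests

# List the models your local Ollama server has pulled (default port 11434)
models = requests.get("http://localhost:11434/api/tags").json()["models"]
print("Ollama models:", [model["name"] for model in models])

# Qdrant responds on its REST root (default port 6333) when it is up
print("Qdrant status:", requests.get("http://localhost:6333").status_code)
```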

### Full Code Example

Below is the complete code to set up and use Mem0 locally with Ollama:

```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add a memory
m.add("I'm visiting Paris", user_id="john")

# Retrieve memories
memories = m.get_all(user_id="john")
```
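
Once memories are stored, they can also be queried semantically with the same `Memory` instance; a short follow-up using Mem0's `search` API:

```python
# Semantic search over John's stored memories
related = m.search("Where is John traveling?", user_id="john")
print(related)
```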

### Key Points

- **Configuration**: The setup involves configuring the vector store, language model, and embedding model to use local resources.
- **Vector Store**: Qdrant is used as the vector store, running on localhost.
- **Language Model**: Ollama is used as the LLM provider, with the "llama3.1:latest" model.
- **Embedding Model**: Ollama is also used for embeddings, with the "nomic-embed-text:latest" model; its output dimension must match `embedding_model_dims`, which you can verify with the snippet below.
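
If you are unsure which dimension your local embedding model produces, you can ask Ollama directly and compare the result with `embedding_model_dims`. A minimal sketch, assuming Ollama's embeddings endpoint on the default port:

```python
import requests

# Embed a test string and inspect the vector length; this should match
# embedding_model_dims in the vector store config (768 for nomic-embed-text)
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text:latest", "prompt": "dimension check"},
)
print(len(resp.json()["embedding"]))
```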

### Conclusion

This local setup of Mem0 using Ollama provides a fully self-contained solution for memory management and AI interactions. It allows for greater control over your data and models while still leveraging the powerful capabilities of Mem0.

@@ -124,6 +124,7 @@
"group": "💡 Examples",
"pages": [
  "examples/overview",
  "examples/mem0-with-ollama",
  "examples/personal-ai-tutor",
  "examples/customer-support-agent",
  "examples/langgraph",

@@ -197,46 +197,7 @@ m.reset() # Reset all memories

## Run Mem0 Locally

Mem0 can be used entirely locally with Ollama serving as both the embedding model and the language model (LLM).

Here's an example of how it can be used:

```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # 768 for nomic-embed-text; change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # "model": "snowflake-arctic-embed:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```

Please refer to the example [Mem0 with Ollama](../examples/mem0-with-ollama) to run Mem0 locally.

## Chat Completion