Docs for using Ollama locally (#1668)

Dev Khant
2024-08-09 02:40:39 +05:30
committed by GitHub
parent 7a2fd70184
commit 38965ab6bf

@@ -195,6 +195,36 @@ m.delete_all(user_id="alice") # Delete all memories
m.reset() # Reset all memories
```

## Run Mem0 Locally

Mem0 can run entirely locally with Ollama serving both the language model (LLM) and the embedding model. Here's an example of how to set it up:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            # Must match the output dimension of your embedding model
            "embedding_model_dims": 768
        }
    },
    "llm": {
        "provider": "ollama"
    },
    "embedder": {
        "provider": "ollama"
    }
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
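With only the provider names set, both providers fall back to Mem0's default Ollama models. If you want to pin specific models or point at a non-default Ollama host, each provider takes a nested `config`; the `model` and `ollama_base_url` keys below are assumptions based on Mem0's provider configs, so verify them against the current docs. A minimal sketch:

```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"embedding_model_dims": 768}
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:8b",  # assumed key; any chat model pulled via `ollama pull`
            "ollama_base_url": "http://localhost:11434"  # assumed key; default Ollama endpoint
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text"  # assumed key; a 768-dim model, matching dims above
        }
    }
}

m = Memory.from_config(config)
m.add("I prefer window seats on flights", user_id="john")
print(m.search("What are John's travel preferences?", user_id="john"))
```

Whichever embedding model you choose, `embedding_model_dims` must agree with its output size, or Qdrant will reject the vectors.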
## Chat Completion

Mem0 can be easily integrated into chat applications to enhance conversational agents with structured memory. Mem0's APIs are designed to be compatible with OpenAI's, with the goal of making it easy to leverage Mem0 in applications you may have already built.
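As an illustration of that compatibility, the sketch below uses Mem0's OpenAI-style proxy client; the `mem0.proxy.main` import path, the placeholder API key, and the `user_id` parameter are assumptions, so check the current Mem0 docs for the exact interface:

```python
from mem0.proxy.main import Mem0  # assumed import path for the OpenAI-compatible client

client = Mem0(api_key="m0-...")  # placeholder; use your real Mem0 API key

# Looks like a standard OpenAI chat call, but Mem0 stores and injects
# per-user memories around it via the extra user_id parameter.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Suggest a restaurant for my trip."}],
    user_id="john",
)
print(response.choices[0].message.content)
```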