Add ollama example (#1711)
@@ -197,7 +197,46 @@ m.reset() # Reset all memories
## Run Mem0 Locally

Mem0 can be used entirely locally with Ollama, with both the embedding model and the language model (LLM) served by Ollama.

Here's an example of how it can be used:

```python
import os

from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # 768 for nomic-embed-text; change this to match your local embedding model's dimensions
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this matches your Ollama server
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # "model": "snowflake-arctic-embed:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
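
Once a memory is stored, it can be queried back through the same `Memory` instance. Below is a minimal sketch, assuming the standard `Memory` retrieval methods (`search` and `get_all`); it runs entirely against the local Ollama/Qdrant setup configured above:

```python
# Continuing from the example above: retrieve memories from the local setup.
# Assumes `m = Memory.from_config(config)` from the previous snippet.
related = m.search("Where is John going?", user_id="john")
print(related)  # memories ranked by relevance to the query

all_memories = m.get_all(user_id="john")
print(all_memories)  # every memory stored for user "john"
```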

Please refer to the example [Mem0 with Ollama](../examples/mem0-with-ollama) to run Mem0 locally.

## Chat Completion