You can use LLMs from Ollama to run Mem0 locally. Use one of the [models that support tool calling](https://ollama.com/search?c=tools).
## Usage
```python
import os

from mem0 import Memory

# OpenAI is still used for embeddings by default
os.environ["OPENAI_API_KEY"] = "your-api-key"

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
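Once memories are stored, you can query them from the same `Memory` instance. A minimal sketch using `search` (the exact shape of the returned results may vary between Mem0 versions):

```python
# Retrieve memories relevant to a natural-language query
related = m.search("What does Alice do on weekends?", user_id="alice")
print(related)
```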
## Config
All available parameters for the `ollama` config are listed in the [Master List of All Params in Config](../config).
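For example, if your Ollama server is not running at the default address, you can point Mem0 at it. A sketch assuming the `ollama_base_url` parameter from the master list:

```python
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            # assumes the `ollama_base_url` parameter from the master list;
            # this value is the standard local Ollama endpoint
            "ollama_base_url": "http://localhost:11434",
        }
    }
}
```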