Support Ollama models (#1596)

This commit is contained in:
Dev Khant
2024-08-02 23:45:45 +05:30
committed by GitHub
parent 3eff82082e
commit 44aa16a0f8
8 changed files with 188 additions and 30 deletions


@@ -8,6 +8,7 @@ Mem0 includes built-in support for various popular large language models. Memory
<CardGroup cols={4}>
<Card title="OpenAI" href="#openai"></Card>
<Card title="Ollama" href="#ollama"></Card>
<Card title="Groq" href="#groq"></Card>
<Card title="Together" href="#together"></Card>
<Card title="AWS Bedrock" href="#aws-bedrock"></Card>
@@ -45,6 +46,31 @@ m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
## Ollama
You can use LLMs from Ollama to run Mem0 locally. These [models](https://ollama.com/search?c=tools) support tool calling.
```python
import os
from mem0 import Memory
os.environ["OPENAI_API_KEY"] = "your-api-key" # the embedder still defaults to OpenAI
config = {
"llm": {
"provider": "ollama",
"config": {
"model": "mixtral:8x7b",
"temperature": 0.1,
"max_tokens": 2000,
}
}
}
m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
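Once memories are added, you can query them through the same `Memory` instance. A minimal sketch, assuming the `m` from the snippet above and mem0's standard `search`/`get_all` methods (return shapes may vary by version):
```python
# Retrieve memories relevant to a query for this user.
related = m.search("What does alice do on weekends?", user_id="alice")
print(related)

# List everything stored for this user.
all_memories = m.get_all(user_id="alice")
print(all_memories)
```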
## Groq
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), delivering exceptionally fast inference for AI workloads running on its LPU Inference Engine.