Added Support for Ollama for local model inference. (#1045)
Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
@@ -10,5 +10,5 @@ from embedchain import Pipeline as App
app = App()
app.add("https://docs.embedchain.ai/", data_type="docs_site")
app.query("What is Embedchain?")
# Answer: Embedchain is a platform that utilizes various components, including paid/proprietary ones, to provide what is believed to be the best configuration available. It uses LLM (Language Model) providers such as OpenAI, Anthropic, Vertex_AI, GPT4ALL, Azure_OpenAI, LLAMA2, JINA, and COHERE. Embedchain allows users to import and utilize these LLM providers for their applications.
# Answer: Embedchain is a platform that utilizes various components, including paid/proprietary ones, to provide what is believed to be the best configuration available. It uses LLM (Language Model) providers such as OpenAI, Anthropic, Vertex_AI, GPT4ALL, Azure_OpenAI, LLAMA2, JINA, Ollama, and COHERE. Embedchain allows users to import and utilize these LLM providers for their applications.
```
@@ -12,6 +12,7 @@ Embedchain comes with built-in support for various popular large language models
<Card title="Azure OpenAI" href="#azure-openai"></Card>
<Card title="Anthropic" href="#anthropic"></Card>
<Card title="Cohere" href="#cohere"></Card>
<Card title="Ollama" href="#ollama"></Card>
<Card title="GPT4All" href="#gpt4all"></Card>
<Card title="JinaChat" href="#jinachat"></Card>
<Card title="Hugging Face" href="#hugging-face"></Card>
@@ -329,6 +330,32 @@ llm:
</CodeGroup>

## Ollama

Set up Ollama by following the instructions at https://github.com/jmorganca/ollama
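Before loading the app, it can help to confirm that the local Ollama server is reachable. A minimal sketch, assuming Ollama is running on its default local endpoint (`http://localhost:11434`) and the model referenced in `config.yaml` has already been pulled:

```python check_ollama.py
import urllib.request

# Ping the Ollama server's root endpoint; it replies with a short status
# message when the server is up (the default port 11434 is assumed here).
try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print(resp.read().decode())  # e.g. "Ollama is running"
except OSError as exc:
    print(f"Ollama server is not reachable: {exc}")
```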
<CodeGroup>

```python main.py
import os
from embedchain import Pipeline as App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: ollama
  config:
    model: 'llama2'
    temperature: 0.5
    top_p: 1
    stream: true
```

</CodeGroup>
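With the configuration above in place, the app is used the same way as with any other provider. A minimal usage sketch (the data source and question below are illustrative):

```python main.py
from embedchain import Pipeline as App

# Load the Ollama-backed configuration shown above
app = App.from_config(config_path="config.yaml")

# Ingest a source and ask a question; answers are generated locally by the
# model configured above (llama2 in this example)
app.add("https://docs.embedchain.ai/", data_type="docs_site")
app.query("What is Embedchain?")
```

Because Ollama runs the model locally, no API key is required for this provider.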
## GPT4ALL

Install related dependencies using the following command:
@@ -44,6 +44,10 @@ Get started with Embedchain by trying out the examples below. You can run the ex
<td className="align-middle"><a target="_blank" href="https://colab.research.google.com/github/embedchain/embedchain/blob/main/notebooks/cohere.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" noZoom alt="Open In Colab"/></a></td>
<td className="align-middle"><a target="_blank" href="https://replit.com/@taranjeetio/cohere#main.py"><img src="https://replit.com/badge?caption=Try%20with%20Replit&variant=small" noZoom alt="Try with Replit Badge"/></a></td>
</tr>
<tr>
<td className="align-middle">Ollama</td>
<td className="align-middle"><a target="_blank" href="https://colab.research.google.com/github/embedchain/embedchain/blob/main/notebooks/ollama.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" noZoom alt="Open In Colab"/></a></td>
</tr>
<tr>
<td className="align-middle">Hugging Face</td>
<td className="align-middle"><a target="_blank" href="https://colab.research.google.com/github/embedchain/embedchain/blob/main/notebooks/hugging_face_hub.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" noZoom alt="Open In Colab"/></a></td>