Clarifai : Added Clarifai as LLM and embedding model provider. (#1311)
Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
@@ -16,6 +16,7 @@ Embedchain supports several embedding models from the following providers:
  <Card title="NVIDIA AI" href="#nvidia-ai"></Card>
  <Card title="Cohere" href="#cohere"></Card>
  <Card title="Ollama" href="#ollama"></Card>
  <Card title="Clarifai" href="#clarifai"></Card>
</CardGroup>

## OpenAI

@@ -385,4 +386,51 @@ embedder:
    model: 'all-minilm:latest'
```

</CodeGroup>

## Clarifai

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[clarifai]'
```

Set `CLARIFAI_PAT` as an environment variable; you can find the key on the [security page](https://clarifai.com/settings/security). Optionally, you can also pass the PAT key as a parameter to the LLM/Embedder class.

Now you are all set to explore Embedchain.

<CodeGroup>

```python main.py
import os

from embedchain import App

os.environ["CLARIFAI_PAT"] = "XXX"

# Load the LLM and embedder configuration from the config.yaml file.
app = App.from_config(config_path="config.yaml")

# Now let's add some data.
app.add("https://www.forbes.com/profile/elon-musk")

# Query the app.
response = app.query("what college degrees does elon musk have?")
```

Head to the [Clarifai Platform](https://clarifai.com/explore/models?page=1&perPage=24&filterData=%5B%7B%22field%22%3A%22output_fields%22%2C%22value%22%3A%5B%22embeddings%22%5D%7D%5D) to explore all of the state-of-the-art embedding models available to use.

To pass LLM model inference parameters, use the `model_kwargs` argument in the config file. You can also use the `api_key` argument to pass `CLARIFAI_PAT` in the config.

```yaml config.yaml
llm:
  provider: clarifai
  config:
    model: "https://clarifai.com/mistralai/completion/models/mistral-7B-Instruct"
    model_kwargs:
      temperature: 0.5
      max_tokens: 1000
embedder:
  provider: clarifai
  config:
    model: "https://clarifai.com/clarifai/main/models/BAAI-bge-base-en-v15"
```

</CodeGroup>

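The prose above mentions that `CLARIFAI_PAT` can also be supplied via the `api_key` argument in the config instead of an environment variable. A minimal sketch of the `embedder` section with that argument added (the placeholder value is illustrative, not a real key):

```yaml
embedder:
  provider: clarifai
  config:
    model: "https://clarifai.com/clarifai/main/models/BAAI-bge-base-en-v15"
    # Optional: pass the PAT here; otherwise it is read from the CLARIFAI_PAT env var.
    api_key: "YOUR_CLARIFAI_PAT"
```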
@@ -15,6 +15,7 @@ Embedchain comes with built-in support for various popular large language models
  <Card title="Together" href="#together"></Card>
  <Card title="Ollama" href="#ollama"></Card>
  <Card title="vLLM" href="#vllm"></Card>
  <Card title="Clarifai" href="#clarifai"></Card>
  <Card title="GPT4All" href="#gpt4all"></Card>
  <Card title="JinaChat" href="#jinachat"></Card>
  <Card title="Hugging Face" href="#hugging-face"></Card>
@@ -385,6 +386,54 @@ llm:

</CodeGroup>

## Clarifai

Install related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[clarifai]'
```

Set `CLARIFAI_PAT` as an environment variable; you can find the key on the [security page](https://clarifai.com/settings/security). Optionally, you can also pass the PAT key as a parameter to the LLM/Embedder class.

Now you are all set to explore Embedchain.

<CodeGroup>

```python main.py
import os

from embedchain import App

os.environ["CLARIFAI_PAT"] = "XXX"

# Load the LLM configuration from the config.yaml file.
app = App.from_config(config_path="config.yaml")

# Now let's add some data.
app.add("https://www.forbes.com/profile/elon-musk")

# Query the app.
response = app.query("what college degrees does elon musk have?")
```

Head to the [Clarifai Platform](https://clarifai.com/explore/models?page=1&perPage=24&filterData=%5B%7B%22field%22%3A%22use_cases%22%2C%22value%22%3A%5B%22llm%22%5D%7D%5D) to browse various state-of-the-art LLM models for your use case.

To pass model inference parameters, use the `model_kwargs` argument in the config file. You can also use the `api_key` argument to pass `CLARIFAI_PAT` in the config.

```yaml config.yaml
llm:
  provider: clarifai
  config:
    model: "https://clarifai.com/mistralai/completion/models/mistral-7B-Instruct"
    model_kwargs:
      temperature: 0.5
      max_tokens: 1000
embedder:
  provider: clarifai
  config:
    model: "https://clarifai.com/clarifai/main/models/BAAI-bge-base-en-v15"
```

</CodeGroup>

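As with the embedder, the `llm` section accepts the `api_key` argument mentioned above in place of the environment variable. A minimal sketch (the placeholder value is illustrative, not a real key):

```yaml
llm:
  provider: clarifai
  config:
    model: "https://clarifai.com/mistralai/completion/models/mistral-7B-Instruct"
    # Optional: pass the PAT here; otherwise it is read from the CLARIFAI_PAT env var.
    api_key: "YOUR_CLARIFAI_PAT"
```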

## GPT4ALL

Install related dependencies using the following command: