[Docs]: Clean up docs (#802)

This commit is contained in:
Deshraj Yadav
2023-10-14 19:14:24 -07:00
committed by GitHub
parent 4a8c50f886
commit 77c90a308e
14 changed files with 120 additions and 304 deletions


@@ -27,14 +27,14 @@ from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# load embedding model configuration from openai.yaml file
app = App.from_config(yaml_path="openai.yaml")
# load embedding model configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")
```
```yaml openai.yaml
```yaml config.yaml
embedder:
provider: openai
config:
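For reference, a filled-out embedder config might look like the following; the `model` value is an assumption (a commonly used OpenAI embedding model), not something taken from this diff:

```yaml config.yaml
embedder:
  provider: openai
  config:
    model: 'text-embedding-ada-002'
```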
@@ -52,11 +52,11 @@ GPT4All supports generating high quality embeddings of arbitrary length document
```python main.py
from embedchain import App
# load embedding model configuration from gpt4all.yaml file
app = App.from_config(yaml_path="gpt4all.yaml")
# load embedding model configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml gpt4all.yaml
```yaml config.yaml
llm:
provider: gpt4all
model: 'orca-mini-3b.ggmlv3.q4_0.bin'
@@ -83,11 +83,11 @@ Hugging Face supports generating embeddings of arbitrary length documents of tex
```python main.py
from embedchain import App
# load embedding model configuration from huggingface.yaml file
app = App.from_config(yaml_path="huggingface.yaml")
# load embedding model configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml huggingface.yaml
```yaml config.yaml
llm:
provider: huggingface
model: 'google/flan-t5-xxl'
@@ -114,11 +114,11 @@ Embedchain supports Google's VertexAI embeddings model through a simple interfac
```python main.py
from embedchain import App
# load embedding model configuration from vertexai.yaml file
app = App.from_config(yaml_path="vertexai.yaml")
# load embedding model configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml vertexai.yaml
```yaml config.yaml
llm:
provider: vertexai
model: 'chat-bison'


@@ -35,7 +35,7 @@ app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")
```
If you are looking to configure the different parameters of the LLM, you can do so by loading the app using a [yaml config](https://github.com/embedchain/embedchain/blob/main/embedchain/yaml/chroma.yaml) file.
If you are looking to configure the different parameters of the LLM, you can do so by loading the app using a [yaml config](https://github.com/embedchain/embedchain/blob/main/configs/chroma.yaml) file.
<CodeGroup>
@@ -45,11 +45,11 @@ from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# load llm configuration from openai.yaml file
app = App.from_config(yaml_path="openai.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml openai.yaml
```yaml config.yaml
llm:
provider: openai
model: 'gpt-3.5-turbo'
@@ -79,11 +79,11 @@ from embedchain import App
os.environ["ANTHROPIC_API_KEY"] = "xxx"
# load llm configuration from anthropic.yaml file
app = App.from_config(yaml_path="anthropic.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml anthropic.yaml
```yaml config.yaml
llm:
provider: anthropic
model: 'claude-instant-1'
@@ -96,15 +96,14 @@ llm:
</CodeGroup>
<br />
<Tip>
You may also have to set the `OPENAI_API_KEY` environment variable if you use OpenAI's embedding model.
</Tip>
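Concretely, when pairing the Anthropic LLM with OpenAI's embedding model, both keys can be exported up front before loading the app; the key values here are placeholders:

```python
import os

# Placeholder credentials; replace with real keys.
# ANTHROPIC_API_KEY drives the LLM, OPENAI_API_KEY the embedding model.
os.environ["ANTHROPIC_API_KEY"] = "xxx"
os.environ["OPENAI_API_KEY"] = "xxx"

# With both keys in place, the app loads as usual:
# from embedchain import App
# app = App.from_config(yaml_path="config.yaml")
```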
## Cohere
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[cohere]'
```
Set the `COHERE_API_KEY` environment variable, which you can find on their [Account settings page](https://dashboard.cohere.com/api-keys).
Once you have the API key, you are all set to use it with Embedchain.
@@ -117,11 +116,11 @@ from embedchain import App
os.environ["COHERE_API_KEY"] = "xxx"
# load llm configuration from cohere.yaml file
app = App.from_config(yaml_path="cohere.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml cohere.yaml
```yaml config.yaml
llm:
provider: cohere
model: large
@@ -135,6 +134,12 @@ llm:
## GPT4All
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[opensource]'
```
GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:
<CodeGroup>
@@ -142,11 +147,11 @@ GPT4all is a free-to-use, locally running, privacy-aware chatbot. No GPU or inte
```python main.py
from embedchain import App
# load llm configuration from gpt4all.yaml file
app = App.from_config(yaml_path="gpt4all.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml gpt4all.yaml
```yaml config.yaml
llm:
provider: gpt4all
model: 'orca-mini-3b.ggmlv3.q4_0.bin'
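The same file usually carries generation settings as well; the `temperature`, `max_tokens`, and `stream` option names below are assumptions modeled on the other providers on this page, so check them against your Embedchain version:

```yaml config.yaml
llm:
  provider: gpt4all
  model: 'orca-mini-3b.ggmlv3.q4_0.bin'
  config:
    temperature: 0.5
    max_tokens: 1000
    stream: false
```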
@@ -177,11 +182,11 @@ import os
from embedchain import App
os.environ["JINACHAT_API_KEY"] = "xxx"
# load llm configuration from jina.yaml file
app = App.from_config(yaml_path="jina.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml jina.yaml
```yaml config.yaml
llm:
provider: jina
config:
@@ -195,6 +200,13 @@ llm:
## Hugging Face
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[huggingface_hub]'
```
First, set the `HUGGINGFACE_ACCESS_TOKEN` environment variable, which you can obtain from [their platform](https://huggingface.co/settings/tokens).
Once you have the token, load the app using the config yaml file:
@@ -207,11 +219,11 @@ from embedchain import App
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"
# load llm configuration from huggingface.yaml file
app = App.from_config(yaml_path="huggingface.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml huggingface.yaml
```yaml config.yaml
llm:
provider: huggingface
model: 'google/flan-t5-xxl'
@@ -237,11 +249,11 @@ from embedchain import App
os.environ["REPLICATE_API_TOKEN"] = "xxx"
# load llm configuration from llama2.yaml file
app = App.from_config(yaml_path="llama2.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml llama2.yaml
```yaml config.yaml
llm:
provider: llama2
model: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
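Note that the `model` string pins a specific Replicate model version by hash. A fuller config might also set sampling options; the `temperature` and `top_p` names below are assumptions based on the other providers here, not taken from this diff:

```yaml config.yaml
llm:
  provider: llama2
  model: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
  config:
    temperature: 0.5
    top_p: 0.9
```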
@@ -262,11 +274,11 @@ Setup Google Cloud Platform application credentials by following the instruction
```python main.py
from embedchain import App
# load llm configuration from vertexai.yaml file
app = App.from_config(yaml_path="vertexai.yaml")
# load llm configuration from config.yaml file
app = App.from_config(yaml_path="config.yaml")
```
```yaml vertexai.yaml
```yaml config.yaml
llm:
provider: vertexai
model: 'chat-bison'


@@ -25,10 +25,10 @@ Utilizing a vector database alongside Embedchain is a seamless process. All you
from embedchain import App
# load chroma configuration from yaml file
app = App.from_config(yaml_path="chroma-config-1.yaml")
app = App.from_config(yaml_path="config1.yaml")
```
```yaml chroma-config-1.yaml
```yaml config1.yaml
vectordb:
provider: chroma
config:
@@ -37,7 +37,7 @@ vectordb:
allow_reset: true
```
```yaml chroma-config-2.yaml
```yaml config2.yaml
vectordb:
provider: chroma
config:
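A quick way to sanity-check the shape of such a file is to write it from Python before loading the app. This sketch materializes a Chroma config; the `collection_name` and `dir` field names are assumptions (only `allow_reset` appears in the snippet above), so verify them against your Embedchain version:

```python
# Sketch: write a Chroma config file, then hand it to Embedchain.
config_text = """\
vectordb:
  provider: chroma
  config:
    collection_name: my-collection
    dir: db
    allow_reset: true
"""

with open("config.yaml", "w") as f:
    f.write(config_text)

# With the file on disk, the app loads as usual:
# from embedchain import App
# app = App.from_config(yaml_path="config.yaml")
```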
@@ -52,16 +52,22 @@ vectordb:
## Elasticsearch
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[elasticsearch]'
```
<CodeGroup>
```python main.py
from embedchain import App
# load elasticsearch configuration from yaml file
app = App.from_config(yaml_path="elasticsearch.yaml")
app = App.from_config(yaml_path="config.yaml")
```
```yaml elasticsearch.yaml
```yaml config.yaml
vectordb:
provider: elasticsearch
config:
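A filled-out Elasticsearch config typically points at the cluster endpoint. The `es_url` and `collection_name` field names below are assumptions (the snippet above is truncated), so check them against your Embedchain version:

```yaml config.yaml
vectordb:
  provider: elasticsearch
  config:
    collection_name: 'my-collection'
    es_url: 'http://localhost:9200'
```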
@@ -74,16 +80,22 @@ vectordb:
## OpenSearch
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[opensearch]'
```
<CodeGroup>
```python main.py
from embedchain import App
# load opensearch configuration from yaml file
app = App.from_config(yaml_path="opensearch.yaml")
app = App.from_config(yaml_path="config.yaml")
```
```yaml opensearch.yaml
```yaml config.yaml
vectordb:
provider: opensearch
config:
@@ -101,16 +113,22 @@ vectordb:
## Zilliz
Install related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[milvus]'
```
<CodeGroup>
```python main.py
from embedchain import App
# load zilliz configuration from yaml file
app = App.from_config(yaml_path="zilliz.yaml")
app = App.from_config(yaml_path="config.yaml")
```
```yaml zilliz.yaml
```yaml config.yaml
vectordb:
provider: zilliz
config:
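Zilliz Cloud connections are usually credential-driven. The `uri` and `token` fields below are assumptions modeled on typical Milvus/Zilliz clients (the snippet above is truncated), and the values are placeholders you would take from your Zilliz Cloud console:

```yaml config.yaml
vectordb:
  provider: zilliz
  config:
    collection_name: 'my-collection'
    uri: 'https://xxx.zillizcloud.com'
    token: 'xxx'
```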