[Feature] Add support for AWS Bedrock LLM (#1189)
Co-authored-by: Deven Patel <deven298@yahoo.com>
@@ -200,9 +200,10 @@ Alright, let's dive into what each key means in the yaml config above:
- `stream` (Boolean): Controls if the response is streamed back to the user (set to false).
- `prompt` (String): A prompt for the model to follow when generating responses, requires `$context` and `$query` variables.
- `system_prompt` (String): A system prompt for the model to follow when generating responses, in this case, it's set to the style of William Shakespeare.
- `number_documents` (Integer): Number of documents to pull from the vectordb as context, defaults to 1.
- `api_key` (String): The API key for the language model.
- `model_kwargs` (Dict): Keyword arguments to pass to the language model. Used for the `aws_bedrock` provider, since it requires different arguments for each model.

3. `vectordb` Section:

- `provider` (String): The provider for the vector database, set to 'chroma'. You can find the full list of vector database providers in [our docs](/components/vector-databases).
- `config`:
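Put together, the `llm` keys described above map onto a config like the following minimal sketch; the model id and values are illustrative, not prescriptive:

```yaml config.yaml
llm:
  provider: aws_bedrock  # example provider; any supported provider works
  config:
    model: amazon.titan-text-express-v1  # illustrative model id
    stream: false
    number_documents: 1
    model_kwargs:  # only used by the aws_bedrock provider
      temperature: 0.5
```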
@@ -21,6 +21,7 @@ Embedchain comes with built-in support for various popular large language models
<Card title="Llama2" href="#llama2"></Card>
<Card title="Vertex AI" href="#vertex-ai"></Card>
<Card title="Mistral AI" href="#mistral-ai"></Card>
<Card title="AWS Bedrock" href="#aws-bedrock"></Card>
</CardGroup>

## OpenAI
@@ -627,11 +628,8 @@ llm:
Obtain the Mistral AI api key from their [console](https://console.mistral.ai/).

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["MISTRAL_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")
```
@@ -663,5 +661,45 @@ embedder:
```
</CodeGroup>

## AWS Bedrock

### Setup

- Before using the AWS Bedrock LLM, make sure you have the appropriate model access from [Bedrock Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
- You will also need `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to authenticate the API with AWS. You can find these in your [AWS Console](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/users).

### Usage

<CodeGroup>

```python main.py
import os
from embedchain import App

os.environ["AWS_ACCESS_KEY_ID"] = "xxx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1
    # check notes below for model_kwargs
    model_kwargs:
      temperature: 0.5
      topP: 1
      maxTokenCount: 1000
```
</CodeGroup>

<br />
<Note>
The model arguments are different for each provider. Please refer to the [AWS Bedrock Documentation](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/providers) to find the appropriate arguments for your model.
</Note>
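To illustrate how the required `model_kwargs` differ per model family, here is a hedged sketch: the Titan parameter names come from the config above, while the Anthropic names are assumptions based on Bedrock's text-completions parameters, not taken from this page — always verify against the Bedrock provider docs.

```python
# Illustrative model_kwargs for two Bedrock model families.
# Titan keys (topP, maxTokenCount) match the config.yaml above;
# the Anthropic keys are assumptions -- check the Bedrock docs.
titan_kwargs = {
    "temperature": 0.5,
    "topP": 1,
    "maxTokenCount": 1000,
}
anthropic_kwargs = {
    "temperature": 0.5,
    "top_p": 1,
    "max_tokens_to_sample": 1000,
}
```

Note that even semantically identical parameters (nucleus sampling, output length) are spelled differently between families, which is why Embedchain passes `model_kwargs` through untouched.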

<br />
<Snippet file="missing-llm-tip.mdx" />