Migrate from template to prompt arg while keeping backward compatibility (#1066)
@@ -26,7 +26,7 @@ llm:
     top_p: 1
     stream: false
     api_key: sk-xxx
-    template: |
+    prompt: |
       Use the following pieces of context to answer the query at the end.
       If you don't know the answer, just say that you don't know, don't try to make up an answer.
||||
@@ -73,7 +73,7 @@ chunker:
         "max_tokens": 1000,
         "top_p": 1,
         "stream": false,
-        "template": "Use the following pieces of context to answer the query at the end.\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n$context\n\nQuery: $query\n\nHelpful Answer:",
+        "prompt": "Use the following pieces of context to answer the query at the end.\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n$context\n\nQuery: $query\n\nHelpful Answer:",
         "system_prompt": "Act as William Shakespeare. Answer the following questions in the style of William Shakespeare.",
         "api_key": "sk-xxx"
     }
@@ -117,7 +117,7 @@ config = {
         'max_tokens': 1000,
         'top_p': 1,
         'stream': False,
-        'template': (
+        'prompt': (
             "Use the following pieces of context to answer the query at the end.\n"
             "If you don't know the answer, just say that you don't know, don't try to make up an answer.\n"
             "$context\n\nQuery: $query\n\nHelpful Answer:"
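The `$context` and `$query` placeholders in the config above follow Python's `string.Template` syntax. A minimal sketch of how such a prompt could be filled in at query time (the `build_prompt` helper is illustrative, not the project's actual code):

```python
from string import Template

# Same prompt string as in the config above, wrapped in string.Template.
PROMPT = Template(
    "Use the following pieces of context to answer the query at the end.\n"
    "If you don't know the answer, just say that you don't know, don't try to make up an answer.\n"
    "$context\n\nQuery: $query\n\nHelpful Answer:"
)

def build_prompt(context: str, query: str) -> str:
    # substitute() raises KeyError if a placeholder is missing a value,
    # which is why both $context and $query must appear in the prompt.
    return PROMPT.substitute(context=context, query=query)
```

Using `substitute()` rather than `safe_substitute()` makes a missing variable fail loudly instead of silently leaving `$context` or `$query` in the text sent to the model.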
@@ -170,7 +170,7 @@ Alright, let's dive into what each key means in the yaml config above:
 - `max_tokens` (Integer): Controls how many tokens are used in the response.
 - `top_p` (Float): Controls the diversity of word selection. A higher value (closer to 1) makes word selection more diverse.
 - `stream` (Boolean): Controls if the response is streamed back to the user (set to false).
-- `template` (String): A custom template for the prompt that the model uses to generate responses.
+- `prompt` (String): A prompt for the model to follow when generating responses; requires the `$context` and `$query` variables.
 - `system_prompt` (String): A system prompt for the model to follow when generating responses, in this case, it's set to the style of William Shakespeare.
 - `stream` (Boolean): Controls if the response is streamed back to the user (set to false).
 - `number_documents` (Integer): Number of documents to pull from the vectordb as context, defaults to 1
||||
@@ -37,7 +37,7 @@ llm:
     max_tokens: 1000
     top_p: 1
     stream: false
-    template: |
+    prompt: |
       Use the following pieces of context to answer the query at the end.
       If you don't know the answer, just say that you don't know, don't try to make up an answer.
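The commit title says the rename keeps backward compatibility. A minimal sketch of how such a shim could work, assuming the config is a plain dict (the `normalize_llm_config` helper is hypothetical, not the project's actual implementation):

```python
import warnings

def normalize_llm_config(config: dict) -> dict:
    """Accept the deprecated `template` key, storing its value under `prompt`."""
    config = dict(config)  # copy so the caller's dict is untouched
    if "template" in config:
        warnings.warn(
            "'template' is deprecated; use 'prompt' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        template = config.pop("template")
        # An explicitly supplied `prompt` wins over the deprecated key.
        config.setdefault("prompt", template)
    return config
```

With a shim like this, older configs that still pass `template` keep working while emitting a `DeprecationWarning`, and configs that supply both keys prefer the new `prompt` value.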