Docs: use LlmConfig instead of QueryConfig (#626)

Author: cachho
Date: 2023-09-24 20:48:03 +02:00
Committed by: GitHub
Parent: 1db3e43adf
Commit: 6c71a1020d
4 changed files with 14 additions and 28 deletions


@@ -43,11 +43,11 @@ Dry Run is an option in the `add`, `query` and `chat` methods that allows the us
 - You can add config to your query method to stream responses like ChatGPT does. You would require a downstream handler to render the chunks in your desired format. Supports both OpenAI model and OpenSourceApp. 📊
-- To use this, instantiate a `QueryConfig` or `ChatConfig` object with `stream=True`. Then pass it to the `.chat()` or `.query()` method. The following example iterates through the chunks and prints them as they appear.
+- To use this, instantiate a `LlmConfig` or `ChatConfig` object with `stream=True`. Then pass it to the `.chat()` or `.query()` method. The following example iterates through the chunks and prints them as they appear.
 ```python
 app = App()
-query_config = QueryConfig(stream = True)
+query_config = LlmConfig(stream = True)
 resp = app.query("What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?", query_config)
 for chunk in resp:
     print(chunk)
 ```
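
For comparison, here is a minimal sketch of the equivalent chat-mode streaming call described above. It assumes `ChatConfig` is importable from `embedchain.config` and that `.chat()` accepts a config object, as the surrounding docs state; the import path and the prompt are illustrative and not part of this commit.

```python
from embedchain import App
from embedchain.config import ChatConfig  # assumed import path, not shown in this diff

app = App()
# Mirror the LlmConfig(stream=True) pattern shown above, but for the chat endpoint.
chat_config = ChatConfig(stream=True)

resp = app.chat("What does Naval say about understanding explanations?", chat_config)
for chunk in resp:
    # Render each chunk as it arrives; a downstream handler could format these instead.
    print(chunk, end="", flush=True)
```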