feat: openai default model uses gpt-4o-mini (#1526)
@@ -20,7 +20,7 @@ app:
 llm:
   provider: openai
   config:
-    model: 'gpt-3.5-turbo'
+    model: 'gpt-4o-mini'
     temperature: 0.5
     max_tokens: 1000
     top_p: 1
@@ -82,7 +82,7 @@ cache:
   "llm": {
     "provider": "openai",
     "config": {
-      "model": "gpt-3.5-turbo",
+      "model": "gpt-4o-mini",
       "temperature": 0.5,
       "max_tokens": 1000,
       "top_p": 1,
@@ -140,7 +140,7 @@ config = {
   'llm': {
     'provider': 'openai',
     'config': {
-      'model': 'gpt-3.5-turbo',
+      'model': 'gpt-4o-mini',
       'temperature': 0.5,
       'max_tokens': 1000,
       'top_p': 1,
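To see the change end to end, here is a minimal sketch of loading the Python dict from this hunk with embedchain, assuming the `config=` keyword on `App.from_config` (the docs above use `config_path=`) and an `OPENAI_API_KEY` in the environment:

```python
from embedchain import App

# Dict form of the updated config: the default OpenAI model is now
# 'gpt-4o-mini' rather than 'gpt-3.5-turbo'.
config = {
    'llm': {
        'provider': 'openai',
        'config': {
            'model': 'gpt-4o-mini',
            'temperature': 0.5,
            'max_tokens': 1000,
            'top_p': 1,
        }
    }
}

# Assumption: App.from_config also accepts a dict via the `config=` keyword,
# alongside the config_path= form used elsewhere in these docs.
app = App.from_config(config=config)
```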
@@ -206,7 +206,7 @@ Alright, let's dive into what each key means in the yaml config above:
 2. `llm` Section:
    - `provider` (String): The provider for the language model, which is set to 'openai'. You can find the full list of llm providers in [our docs](/components/llms).
    - `config`:
-     - `model` (String): The specific model being used, 'gpt-3.5-turbo'.
+     - `model` (String): The specific model being used, 'gpt-4o-mini'.
      - `temperature` (Float): Controls the randomness of the model's output. A higher value (closer to 1) makes the output more random.
      - `max_tokens` (Integer): Controls how many tokens are used in the response.
      - `top_p` (Float): Controls the diversity of word selection. A higher value (closer to 1) makes word selection more diverse.
@@ -62,7 +62,7 @@ app = App.from_config(config_path="config.yaml")
 llm:
   provider: openai
   config:
-    model: 'gpt-3.5-turbo'
+    model: 'gpt-4o-mini'
     temperature: 0.5
     max_tokens: 1000
     top_p: 1
@@ -205,7 +205,7 @@ app = App.from_config(config_path="config.yaml")
 llm:
   provider: azure_openai
   config:
-    model: gpt-3.5-turbo
+    model: gpt-4o-mini
     deployment_name: your_llm_deployment_name
     temperature: 0.5
     max_tokens: 1000
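For the Azure variant, a sketch under the assumption that `your_llm_deployment_name` points at a real gpt-4o-mini deployment and the usual Azure OpenAI credentials are exported in the environment; the filename is hypothetical:

```python
from embedchain import App

# azure_config.yaml (hypothetical name) holds the azure_openai block from the
# hunk above; Azure endpoint/key/version are assumed to come from environment
# variables, as the azure_openai provider requires.
app = App.from_config(config_path="azure_config.yaml")
print(app.chat("What changed in this release?"))  # illustrative prompt
```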
@@ -887,7 +887,7 @@ response = app.chat("Which companies did Elon Musk found?")
 llm:
   provider: openai
   config:
-    model: gpt-3.5-turbo
+    model: gpt-4o-mini
     temperature: 0.5
     max_tokens: 1000
     token_usage: true
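Since this hunk also shows `token_usage: true`, a sketch of inspecting the result; the exact shape of the usage data in the response is version dependent, so treat the printout as exploratory:

```python
from embedchain import App

app = App.from_config(config_path="config.yaml")

# With token_usage: true in the llm config, the chat response is expected to
# include token accounting alongside the answer (assumption: the exact
# structure varies across embedchain versions, hence the plain print).
response = app.chat("Which companies did Elon Musk found?")
print(response)
```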
@@ -32,7 +32,7 @@ app:
 llm:
   provider: openai
   config:
-    model: "gpt-3.5-turbo"
+    model: "gpt-4o-mini"
     temperature: 0.5
     max_tokens: 1000
     top_p: 1
@@ -122,7 +122,7 @@ You can achieve this by setting `stream` to `true` in the config file.
 llm:
   provider: openai
   config:
-    model: 'gpt-3.5-turbo'
+    model: 'gpt-4o-mini'
     temperature: 0.5
     max_tokens: 1000
     top_p: 1
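Finally, for the streaming hunk, a sketch assuming `stream: true` is added under `config` as the surrounding sentence describes:

```python
from embedchain import App

# config.yaml is assumed to contain the llm block above plus `stream: true`
# under `config`, per the docs text this hunk sits in.
app = App.from_config(config_path="config.yaml")

# Assumption: with streaming enabled, embedchain emits tokens incrementally
# (typically to stdout) while the full answer is still returned at the end.
answer = app.chat("Give me a one-line summary.")
print(answer)
```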