Configuration options for model generation and outputs.
Not all parameters may be configurable for every model.
| Attributes | |
|---|---|
| `candidate_count` | `int`. Optional. Number of generated responses to return. This value must be between [1, 8], inclusive. If unset, it defaults to 1. |
| `stop_sequences` | `MutableSequence[str]`. Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API stops at the first appearance of a stop sequence. The stop sequence is not included as part of the response. |
| `max_output_tokens` | `int`. Optional. The maximum number of tokens to include in a candidate. If unset, this defaults to the `output_token_limit` specified in the `Model` specification. |
| `temperature` | `float`. Optional. Controls the randomness of the output. Note: the default value varies by model. Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model. |
| `top_p` | `float`. Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability. |
| `top_k` | `int`. Optional. The maximum number of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Top-k sampling considers the set of `top_k` most probable tokens. |
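As a usage sketch, the snippet below builds a generation config with these attributes and passes it to a text-generation request. It assumes the `google-generativeai` Python SDK (`import google.generativeai as genai`), an API key, and the model name `gemini-1.5-flash`; none of these is prescribed by this page.

```python
# Minimal sketch: passing generation options to a request.
# Assumes the google-generativeai Python SDK and a Gemini model name,
# which are not specified by this reference page.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumed authentication step

config = genai.GenerationConfig(
    candidate_count=1,         # must be between 1 and 8, inclusive
    stop_sequences=["\n\n"],   # up to 5 sequences; generation stops at the first match
    max_output_tokens=256,     # caps the number of tokens in each candidate
    temperature=0.7,           # closer to 0.0 = more straightforward, closer to 1.0 = more varied
    top_p=0.95,                # nucleus sampling: cumulative-probability cutoff
    top_k=40,                  # consider only the top_k most probable tokens
)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Write a haiku about the sea.",
    generation_config=config,
)
print(response.text)
```

In that SDK, a plain dict with the same keys is typically also accepted for `generation_config`; remember that not all parameters are configurable for every model.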