GenerationConfig

Configuration options for model generation and outputs. Not all parameters may be configurable for every model.

JSON representation
{
  "stopSequences": [
    string
  ],
  "responseMimeType": string,
  "candidateCount": integer,
  "maxOutputTokens": integer,
  "temperature": number,
  "topP": number,
  "topK": integer
}
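As a concrete illustration, a populated configuration might look like the following. The specific values are arbitrary examples chosen for this sketch, not recommended defaults:

```python
import json

# An example GenerationConfig payload. Every field is optional, and the
# values shown here are illustrative only.
generation_config = {
    "stopSequences": ["END"],          # up to 5 sequences
    "responseMimeType": "text/plain",  # or "application/json"
    "candidateCount": 1,               # currently must be 1
    "maxOutputTokens": 256,
    "temperature": 0.7,                # in [0.0, 2.0]
    "topP": 0.95,
    "topK": 40,
}

# Serialize to the JSON representation shown above.
payload = json.dumps(generation_config)
```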
Fields
stopSequences[]

string

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.
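The truncation behavior described above can be modeled client-side. This is an illustrative sketch of the semantics (stop at the first occurrence of any sequence, excluding the sequence itself), not the API's implementation:

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence.

    The stop sequence itself is not included in the result, mirroring
    the stopSequences behavior described in the field documentation.
    """
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)  # earliest match across all sequences wins
    return text[:cut]
```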

responseMimeType

string

Optional. Output MIME type of the generated candidate text. Supported MIME types: text/plain (default) for plain-text output, and application/json for a JSON response in the candidates.

candidateCount

integer

Optional. Number of generated responses to return.

Currently, this value can only be set to 1. If unset, this will default to 1.

maxOutputTokens

integer

Optional. The maximum number of tokens to include in a candidate.

Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.

temperature

number

Optional. Controls the randomness of the output.

Note: The default value varies by model, see the Model.temperature attribute of the Model returned from the getModel function.

Values can range over [0.0, 2.0].

topP

number

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability.

Note: The default value varies by model, see the Model.top_p attribute of the Model returned from the getModel function.
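The interaction between the two filters can be sketched as follows. This is a simplified illustration of combined Top-k and nucleus (Top-p) filtering over a toy distribution, not the model's actual sampler:

```python
def filter_top_k_top_p(probs: list[float], top_k: int, top_p: float) -> list[int]:
    """Return the token indices kept by combined Top-k and nucleus filtering.

    First keep at most `top_k` of the most probable tokens, then truncate
    to the smallest prefix whose cumulative probability reaches `top_p`.
    """
    # Sort token indices by descending probability.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    order = order[:top_k]  # Top-k: cap the number of candidates

    kept, cumulative = [], 0.0
    for i in order:        # nucleus: cap the cumulative probability
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept
```

With `probs = [0.5, 0.3, 0.1, 0.05, 0.05]`, `top_k = 4`, and `top_p = 0.9`, the nucleus cutoff is reached after the first three tokens, so only those are eligible for sampling.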

topK

integer

Optional. The maximum number of tokens to consider when sampling.

Models use nucleus sampling or combined Top-k and nucleus sampling. Top-k sampling considers the set of the topK most probable tokens. Models running with nucleus sampling don't allow a topK setting.

Note: The default value varies by model, see the Model.top_k attribute of the Model returned from the getModel function. An empty topK field in Model indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.
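In practice, this configuration travels alongside the request content, for example in a generateContent request body. The surrounding "contents" structure below is a sketch assumed from the public Gemini REST API and may differ by API version:

```python
import json

# Sketch of a generateContent request body carrying a generationConfig.
# The "contents"/"parts" shape is an assumption for illustration.
request_body = {
    "contents": [
        {"parts": [{"text": "Write a haiku about the sea."}]}
    ],
    "generationConfig": {
        "temperature": 0.9,
        "topP": 1.0,
        "maxOutputTokens": 128,
        "stopSequences": ["\n\n"],
    },
}

encoded = json.dumps(request_body)
```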