GenerateContentResponse

Response from the model supporting multiple candidates.

Note on safety ratings and content filtering: they are reported for the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finishReason and safetyRatings. The API contract is that:
- either all requested candidates are returned, or no candidates at all;
- no candidates are returned only if there was something wrong with the prompt (see promptFeedback);
- feedback on each candidate is reported in finishReason and safetyRatings.
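As an illustration of this contract, the sketch below checks a parsed generateContent response. It is a minimal sketch, not part of the API: the field names (candidates, promptFeedback, blockReason, finishReason, safetyRatings) come from this page, and the response is assumed to have already been decoded from JSON into a Python dict by whatever HTTP client issued the request.

# Minimal sketch of handling the response contract described above.
# Assumes `response` is the generateContent JSON body parsed into a dict.

def handle_response(response: dict) -> list:
    candidates = response.get("candidates", [])
    if not candidates:
        # No candidates at all: something was wrong with the prompt.
        feedback = response.get("promptFeedback", {})
        reason = feedback.get("blockReason", "BLOCK_REASON_UNSPECIFIED")
        raise ValueError(f"Prompt was blocked: {reason}")

    # Otherwise all requested candidates are present; per-candidate
    # feedback is reported in finishReason and safetyRatings.
    for candidate in candidates:
        print(candidate.get("finishReason"), candidate.get("safetyRatings"))
    return candidates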

JSON representation
{
  "candidates": [
    {
      object (Candidate)
    }
  ],
  "promptFeedback": {
    object (PromptFeedback)
  }
}
Fields
candidates[]

object (Candidate)

Candidate responses from the model.

promptFeedback

object (PromptFeedback)

Returns the prompt's feedback related to the content filters.

PromptFeedback

A set of feedback metadata for the prompt specified in GenerateContentRequest.content.

JSON representation
{
  "blockReason": enum (BlockReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ]
}
Fields
blockReason

enum (BlockReason)

Optional. If set, the prompt was blocked and no candidates are returned. Rephrase your prompt.

safetyRatings[]

object (SafetyRating)

Ratings for safety of the prompt. There is at most one rating per category.
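Because there is at most one rating per category, the prompt's ratings can be indexed by category for direct lookup. The following is a minimal sketch, assuming each SafetyRating object carries category and probability fields as defined elsewhere in this API:

# Index the prompt's safety ratings by category.
# Assumption: each rating dict exposes "category" and "probability",
# per the SafetyRating object defined elsewhere in this API.

def ratings_by_category(prompt_feedback: dict) -> dict:
    return {
        rating["category"]: rating.get("probability")
        for rating in prompt_feedback.get("safetyRatings", [])
    }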

BlockReason

Specifies the reason why the prompt was blocked.

Enums
BLOCK_REASON_UNSPECIFIED Default value. This value is unused.
SAFETY Prompt was blocked due to safety reasons. You can inspect safetyRatings to understand which safety category blocked it.
OTHER Prompt was blocked due to unknown reasons.
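As a usage note, a caller might branch on this enum when no candidates are returned. The sketch below is illustrative only; it assumes promptFeedback has already been parsed into a dict and uses only the enum values and fields listed on this page.

# Illustrative handling of BlockReason values when the prompt was blocked.

def explain_block(prompt_feedback: dict) -> str:
    reason = prompt_feedback.get("blockReason", "BLOCK_REASON_UNSPECIFIED")
    if reason == "SAFETY":
        # Per the SAFETY description above, inspect safetyRatings to
        # understand which safety category blocked the prompt.
        return f"Blocked for safety; ratings: {prompt_feedback.get('safetyRatings', [])}"
    if reason == "OTHER":
        return "Blocked for unknown reasons; try rephrasing the prompt."
    return "No block reason reported."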