google.ai.generativelanguage.HarmCategory

The category of a rating.

These categories cover various kinds of harms that developers may wish to adjust.

HARM_CATEGORY_UNSPECIFIED 0

Category is unspecified.

HARM_CATEGORY_DEROGATORY 1

Negative or harmful comments targeting identity and/or protected attributes.

HARM_CATEGORY_TOXICITY 2

Content that is rude, disrespectful, or profane.

HARM_CATEGORY_VIOLENCE 3

Describes scenarios depicting violence against an individual or group, or general descriptions of gore.

HARM_CATEGORY_SEXUAL 4

Contains references to sexual acts or other lewd content.

HARM_CATEGORY_MEDICAL 5

Promotes unchecked medical advice.

HARM_CATEGORY_DANGEROUS 6

Dangerous content that promotes, facilitates, or encourages harmful acts.

HARM_CATEGORY_HARASSMENT 7

Harassment content.

HARM_CATEGORY_HATE_SPEECH 8

Hate speech and hateful content.

HARM_CATEGORY_SEXUALLY_EXPLICIT 9

Sexually explicit content.

HARM_CATEGORY_DANGEROUS_CONTENT 10

Dangerous content.

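The values above behave like ordinary Python IntEnum members. The sketch below is a minimal illustration, assuming the google-ai-generativelanguage client is installed and imported as glm; the SafetySetting message and its nested HarmBlockThreshold enum come from the same API surface but are not documented in this section, so treat that pairing as an assumption.

# Sketch only: assumes the google-ai-generativelanguage package is installed.
from google.ai import generativelanguage as glm

# Enum members are IntEnum values, so name/value round-trips work.
category = glm.HarmCategory.HARM_CATEGORY_HARASSMENT
print(category.value)            # 7
print(glm.HarmCategory(7).name)  # 'HARM_CATEGORY_HARASSMENT'

# Assumed usage: pairing a category with a block threshold in a
# SafetySetting (SafetySetting and HarmBlockThreshold are not defined
# in this section).
setting = glm.SafetySetting(
    category=glm.HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
)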