Calls the API and returns a `types.ChatResponse` containing the response.
google.generativeai.chat(
    *,
    model: (model_types.AnyModelNameOptions | None) = 'models/chat-bison-001',
    context: (str | None) = None,
    examples: (discuss_types.ExamplesOptions | None) = None,
    messages: (discuss_types.MessagesOptions | None) = None,
    temperature: (float | None) = None,
    candidate_count: (int | None) = None,
    top_p: (float | None) = None,
    top_k: (float | None) = None,
    prompt: (discuss_types.MessagePromptOptions | None) = None,
    client: (glm.DiscussServiceClient | None) = None
) -> discuss_types.ChatResponse
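For orientation, here is a minimal sketch of a single-turn call. The API key and prompt string are placeholder assumptions, and `genai.configure` must be called with valid credentials before any request:

```python
import google.generativeai as genai

# Placeholder credential: substitute a real PaLM API key.
genai.configure(api_key="YOUR_API_KEY")

# `messages` may be a plain string for a single user turn.
response = genai.chat(
    model="models/chat-bison-001",
    messages="Hello, how are you today?",
)
print(response.last)  # Text of the model's latest reply.
```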
Args | |
---|---|
`model` | Which model to call, as a string or a `types.Model`.
`context` | Text that should be provided to the model first, to ground the response. If not empty, this `context` is given to the model before the `examples` and `messages`. This field can be a description of your prompt to the model to help provide context and guide the responses, for example, "Translate the phrase from English to French." Anything included in this field will take precedence over history in `messages` if the total input size exceeds the model's `input_token_limit`. (See the combined sketch after this table.)
`examples` | Examples of what the model should generate. This includes both the user input and the response that the model should emulate. These `examples` are treated identically to conversation messages, except that they take precedence over the history in `messages`: if the total input size exceeds the model's `input_token_limit`, items are dropped from `messages` before `examples`.
`messages` | A snapshot of the conversation history, sorted chronologically. Turns alternate between two authors. If the total input size exceeds the model's `input_token_limit`, the input will be truncated: the oldest items are dropped from `messages` first.
`temperature` | Controls the randomness of the output. Must be positive. Typical values are in the range `[0.0, 1.0]`; a temperature of zero is deterministic, and higher values produce more varied responses.
`candidate_count` | The maximum number of generated response messages to return. This value must be between `[1, 8]`, inclusive; if unset, it defaults to `1`. (See the sampling values in the sketch after this table.)
`top_k` | The API uses combined nucleus and top-k sampling. `top_k` sets the maximum number of tokens to sample from on each step.
`top_p` | The API uses combined nucleus and top-k sampling. `top_p` configures the nucleus sampling: tokens are sorted by probability, and sampling is restricted to the smallest set whose cumulative probability exceeds `top_p`. For example, if the sorted probabilities are `p1 > p2 > ... > pn`, a `top_p` of `0.8` samples only from the leading tokens whose probabilities sum to `0.8`. Typical values are in the `[0.9, 1.0]` range.
`prompt` | You may pass a `types.MessagePromptOptions` instead of setting `context`/`examples`/`messages`, but not both.
`client` | If you're not relying on the default client, you pass a `glm.DiscussServiceClient` instead.
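To make the interplay of these arguments concrete, here is a sketch combining `context`, `examples`, `messages`, and the sampling controls; all strings and parameter values are illustrative assumptions, not recommendations:

```python
import google.generativeai as genai

response = genai.chat(
    model="models/chat-bison-001",
    # Grounding text; takes precedence over `messages` history when the
    # input must be truncated.
    context="Answer as a concise cooking assistant.",
    # (input, output) pairs the model should emulate.
    examples=[
        ("How do I boil an egg?",
         "Cover the eggs with cold water, boil, then rest for 9 minutes."),
    ],
    # Chronological history; list items alternate between the two authors.
    messages=["What about poaching?"],
    temperature=0.5,    # moderate randomness; 0.0 would be deterministic
    candidate_count=4,  # ask for up to 4 alternative replies
    top_k=40,           # sample from at most 40 tokens per step
    top_p=0.95,         # nucleus-sampling cumulative-probability cutoff
)

# Each candidate is a message dict; the first one is also `response.last`.
for candidate in response.candidates:
    print(candidate["content"])
```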
Returns | |
---|---|
A `types.ChatResponse` containing the model's reply. |
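Because the returned `types.ChatResponse` carries the conversation state, a follow-up turn can be sent through it. A small sketch; the follow-up text is a placeholder:

```python
# `reply` appends your new message to the history, calls the API again,
# and returns a fresh ChatResponse.
response = response.reply("Can you say that more briefly?")
print(response.last)

# The full alternating history, including the latest reply:
for message in response.messages:
    print(message["author"], ":", message["content"])
```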