Calls the API asynchronously and returns a types.ChatResponse
containing the model's reply.
```python
google.generativeai.chat_async(
    *,
    model='models/chat-bison-001',
    context=None,
    examples=None,
    messages=None,
    temperature=None,
    candidate_count=None,
    top_p=None,
    top_k=None,
    prompt=None,
    client=None
)
```
Args | |
---|---|
model
|
Which model to call, as a string or a types.Model.
|
context
|
Text that should be provided to the model first, to ground the response.
If not empty, this context is given to the model before the examples and messages. The field can be a description of your prompt to the model, to help provide context and guide the responses.
Anything included in this field will take precedence over the history in messages if the total input size exceeds the model's limit. |
examples
|
Examples of what the model should generate.
This includes both the user input and the response that the model should emulate. These examples take precedence over the history in messages if the total input size exceeds the model's limit. |
messages
|
A snapshot of the conversation history, sorted chronologically.
Turns alternate between two authors. If the total input size exceeds the model's limit, the input will be truncated: the oldest messages are dropped first. |
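The truncation precedence described in these cells (context and examples are kept, the oldest messages are dropped first) can be sketched as follows. This is a hypothetical helper for illustration: it budgets in character counts, whereas the real API counts model tokens.

```python
def truncate_input(context, examples, messages, limit):
    """Keep context and examples fixed, then drop the oldest messages
    until the remaining input fits within `limit` units.
    Illustrative only; the real API counts tokens, not characters."""
    fixed = len(context) + sum(len(i) + len(o) for i, o in examples)
    kept = list(messages)
    while kept and fixed + sum(len(m) for m in kept) > limit:
        kept.pop(0)  # oldest turn is dropped first
    return kept

history = ["turn one", "turn two", "turn three"]
# "turn one" is the oldest message, so it is the first to go.
print(truncate_input("ctx", [], history, limit=25))  # → ['turn two', 'turn three']
```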
temperature
|
Controls the randomness of the output. Must be positive.
Typical values are in the range [0.0, 1.0]. |
candidate_count
|
The maximum number of generated response messages to return.
This value must be between 1 and 8, inclusive; if unset, it defaults to 1. |
top_k
|
The API uses combined nucleus and top-k sampling.
top_k sets the maximum number of tokens to sample from on each step.
|
top_p
|
The API uses combined nucleus and top-k sampling.
top_p configures nucleus sampling: it sets the maximum cumulative probability of tokens to sample from.
For example, if the sorted token probabilities are p1 ≥ p2 ≥ ... ≥ pn, the model samples only from the smallest prefix whose cumulative probability reaches top_p.
Typical values are in the 0.9-1.0 range. |
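The combined filtering described in the two cells above can be sketched as follows. This is an illustration of the general top-k + nucleus technique, not the API's actual implementation; the probabilities and parameter values are made up.

```python
def candidate_tokens(probs, top_k, top_p):
    """Return the token indices eligible for sampling under combined
    top-k and nucleus (top-p) filtering. Illustrative sketch only."""
    # Sort token indices by descending probability.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    order = order[:top_k]  # top-k: keep at most the k most-likely tokens
    kept, cumulative = [], 0.0
    for i in order:  # nucleus: keep the smallest prefix reaching top_p
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# With probabilities 0.5, 0.3, 0.15, 0.05 and top_p=0.8, the first two
# tokens already reach the threshold, so only they remain eligible.
print(candidate_tokens([0.5, 0.3, 0.15, 0.05], top_k=3, top_p=0.8))  # → [0, 1]
```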
prompt
|
You may pass a types.MessagePromptOptions instead of setting
context/examples/messages individually, but not both. |
|
client
|
If you're not relying on the default client, you may pass a
glm.DiscussServiceClient instead. |
|
Returns | |
---|---|
A types.ChatResponse containing the model's reply.
|