An API for using Generative Language Models (GLMs) in dialog applications.
```python
google.ai.generativelanguage.DiscussServiceClient(
    *,
    credentials: Optional[ga_credentials.Credentials] = None,
    transport: Optional[Union[str, DiscussServiceTransport]] = None,
    client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None,
    client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO
) -> None
```
Also known as large language models (LLMs), this API provides models that are trained for multi-turn dialog.
| Raises | |
|---|---|
| `google.auth.exceptions.MutualTLSChannelError` | If mutual TLS transport creation failed for any reason. |

| Attributes | |
|---|---|
| `transport` | Returns the transport used by the client instance. |
Methods
count_message_tokens
```python
count_message_tokens(
    request: Optional[Union[google.ai.generativelanguage.CountMessageTokensRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    prompt: Optional[google.ai.generativelanguage.MessagePrompt] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.CountMessageTokensResponse
```
Runs a model's tokenizer on a string and returns the token count.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_count_message_tokens():
    # Create a client
    client = generativelanguage_v1beta.DiscussServiceClient()

    # Initialize request argument(s)
    prompt = generativelanguage_v1beta.MessagePrompt()
    prompt.messages.content = "content_value"

    request = generativelanguage_v1beta.CountMessageTokensRequest(
        model="model_value",
        prompt=prompt,
    )

    # Make the request
    response = client.count_message_tokens(request=request)

    # Handle the response
    print(response)
```
| Args | |
|---|---|
| `request` | `Union[google.ai.generativelanguage.CountMessageTokensRequest, dict]`. The request object. Counts the number of tokens in the `prompt` sent to a model. Models may tokenize text differently, so each model may return a different `token_count`. |
| `model` | `str`. Required. The model's resource name. This serves as an ID for the Model to use. This name should match a model name returned by the `ListModels` method. Format: `models/{model}`. This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
| `prompt` | `google.ai.generativelanguage.MessagePrompt`. Required. The prompt, whose token count is to be returned. This corresponds to the `prompt` field on the `request` instance; if `request` is provided, this should not be set. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | |
|---|---|
| `google.ai.generativelanguage.CountMessageTokensResponse` | A response from `CountMessageTokens`. It returns the model's `token_count` for the `prompt`. |
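The real token count comes from the model's own tokenizer, which is why the count must be requested per model. The count-then-send pattern itself can be sketched with a toy whitespace tokenizer (`toy_token_count` and `fits_context` below are illustrative stand-ins, not part of this API):

```python
def toy_token_count(text: str) -> int:
    """Stand-in tokenizer: one token per whitespace-separated word.

    Real models tokenize differently, which is exactly why the API
    exposes count_message_tokens on a per-model basis.
    """
    return len(text.split())


def fits_context(messages, limit: int) -> bool:
    """Check whether the combined prompt stays within a token limit."""
    return sum(toy_token_count(m) for m in messages) <= limit


history = ["Hello there", "Hi! How can I help you today?"]  # 2 + 7 = 9 tokens
print(fits_context(history, limit=50))  # True
print(fits_context(history, limit=5))   # False
```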
from_service_account_file
```python
@classmethod
from_service_account_file(
    filename: str, *args, **kwargs
)
```
Creates an instance of this client using the provided credentials file.
| Args | |
|---|---|
| `filename` | `str`. The path to the service account private key json file. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | |
|---|---|
| `DiscussServiceClient` | The constructed client. |
from_service_account_info
```python
@classmethod
from_service_account_info(
    info: dict, *args, **kwargs
)
```
Creates an instance of this client using the provided credentials info.
| Args | |
|---|---|
| `info` | `dict`. The service account private key info. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | |
|---|---|
| `DiscussServiceClient` | The constructed client. |
from_service_account_json
```python
@classmethod
from_service_account_json(
    filename: str, *args, **kwargs
)
```
Creates an instance of this client using the provided credentials file.
| Args | |
|---|---|
| `filename` | `str`. The path to the service account private key json file. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | |
|---|---|
| `DiscussServiceClient` | The constructed client. |
generate_message
```python
generate_message(
    request: Optional[Union[google.ai.generativelanguage.GenerateMessageRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    prompt: Optional[google.ai.generativelanguage.MessagePrompt] = None,
    temperature: Optional[float] = None,
    candidate_count: Optional[int] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.GenerateMessageResponse
```
Generates a response from the model given an input MessagePrompt.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_generate_message():
    # Create a client
    client = generativelanguage_v1beta.DiscussServiceClient()

    # Initialize request argument(s)
    prompt = generativelanguage_v1beta.MessagePrompt()
    prompt.messages.content = "content_value"

    request = generativelanguage_v1beta.GenerateMessageRequest(
        model="model_value",
        prompt=prompt,
    )

    # Make the request
    response = client.generate_message(request=request)

    # Handle the response
    print(response)
```
| Args | |
|---|---|
| `request` | `Union[google.ai.generativelanguage.GenerateMessageRequest, dict]`. The request object. Request to generate a message response from the model. |
| `model` | `str`. Required. The name of the model to use. Format: `name=models/{model}`. This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
| `prompt` | `google.ai.generativelanguage.MessagePrompt`. Required. The structured textual input given to the model as a prompt. Given a prompt, the model will return what it predicts is the next message in the discussion. This corresponds to the `prompt` field on the `request` instance; if `request` is provided, this should not be set. |
| `temperature` | `float`. Optional. Controls the randomness of the output. Values can range over `[0.0, 1.0]`, inclusive; values closer to `1.0` produce more varied responses, while values closer to `0.0` produce less surprising responses from the model. This corresponds to the `temperature` field on the `request` instance; if `request` is provided, this should not be set. |
| `candidate_count` | `int`. Optional. The number of generated response messages to return. This value must be between `[1, 8]`, inclusive; if unset, it defaults to `1`. This corresponds to the `candidate_count` field on the `request` instance; if `request` is provided, this should not be set. |
| `top_p` | `float`. Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Nucleus sampling considers the smallest set of tokens whose probability sum is at least `top_p`. This corresponds to the `top_p` field on the `request` instance; if `request` is provided, this should not be set. |
| `top_k` | `int`. Optional. The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Top-k sampling considers the set of `top_k` most probable tokens. This corresponds to the `top_k` field on the `request` instance; if `request` is provided, this should not be set. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | |
|---|---|
| `google.ai.generativelanguage.GenerateMessageResponse` | The response from the model. This includes candidate messages and conversation history in the form of chronologically-ordered messages. |
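To make `temperature`, `top_k`, and `top_p` concrete, here is a minimal pure-Python sketch of temperature scaling followed by combined top-k and nucleus (top-p) filtering. It illustrates the sampling scheme the parameter descriptions refer to; it is not the service's actual implementation:

```python
import math
import random


def sample_next_token(logits, temperature=1.0, top_k=40, top_p=0.95, rng=None):
    """Toy sketch of temperature scaling plus combined top-k / nucleus sampling."""
    rng = rng or random.Random()
    # Temperature scaling: values below 1.0 sharpen the distribution,
    # values near 1.0 leave it more varied.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax over the scaled logits (shifted by the max for stability).
    peak = max(scaled.values())
    exps = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(exps.values())
    ranked = sorted(((tok, e / total) for tok, e in exps.items()),
                    key=lambda kv: kv[1], reverse=True)
    # Top-k: keep only the top_k most probable tokens.
    ranked = ranked[:top_k]
    # Nucleus (top-p): keep the smallest prefix whose probability sum
    # is at least top_p.
    kept, running = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        running += p
        if running >= top_p:
            break
    # Renormalize and draw one token from the survivors.
    mass = sum(p for _, p in kept)
    draw, acc = rng.random() * mass, 0.0
    for tok, p in kept:
        acc += p
        if acc >= draw:
            return tok
    return kept[-1][0]


# With top_k=1 only the most probable token survives, so the draw is deterministic.
print(sample_next_token({"yes": 2.0, "no": 1.0, "maybe": 0.1}, top_k=1))  # yes
```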
get_mtls_endpoint_and_cert_source
```python
@classmethod
get_mtls_endpoint_and_cert_source(
    client_options: Optional[client_options_lib.ClientOptions] = None
)
```
Return the API endpoint and client cert source for mutual TLS.

The client cert source is determined in the following order:
(1) if the `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the client cert source is None.
(2) if `client_options.client_cert_source` is provided, use the provided one; if the default client cert source exists, use the default one; otherwise the client cert source is None.

The API endpoint is determined in the following order:
(1) if `client_options.api_endpoint` is provided, use the provided one.
(2) if the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the default mTLS endpoint; if the environment variable is "never", use the default API endpoint; otherwise, if the client cert source exists, use the default mTLS endpoint; otherwise use the default API endpoint.

More details can be found at https://google.aip.dev/auth/4114.
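The endpoint-resolution order above can be sketched in plain Python. This is a simplified model for illustration, not the library's code; the endpoint constants are the class variables listed at the end of this page:

```python
import os

DEFAULT_ENDPOINT = "generativelanguage.googleapis.com"
DEFAULT_MTLS_ENDPOINT = "generativelanguage.mtls.googleapis.com"


def resolve_endpoint(api_endpoint=None, cert_source=None, env=None):
    """Simplified sketch of the endpoint-resolution order described above."""
    env = env if env is not None else os.environ
    # (1) An explicit client_options.api_endpoint always wins.
    if api_endpoint is not None:
        return api_endpoint
    # (2) Otherwise the GOOGLE_API_USE_MTLS_ENDPOINT variable decides.
    use_mtls = env.get("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
    if use_mtls == "always":
        return DEFAULT_MTLS_ENDPOINT
    if use_mtls == "never":
        return DEFAULT_ENDPOINT
    # "auto": use mTLS only when a client cert source is available.
    return DEFAULT_MTLS_ENDPOINT if cert_source else DEFAULT_ENDPOINT


print(resolve_endpoint(env={}))  # generativelanguage.googleapis.com
```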
| Args | |
|---|---|
| `client_options` | `google.api_core.client_options.ClientOptions`. Custom options for the client. Only the `api_endpoint` and `client_cert_source` properties may be used in this method. |

| Returns | |
|---|---|
| `Tuple[str, Callable[[], Tuple[bytes, bytes]]]` | Returns the API endpoint and the client cert source to use. |

| Raises | |
|---|---|
| `google.auth.exceptions.MutualTLSChannelError` | If any errors happen. |
__enter__
```python
__enter__() -> 'DiscussServiceClient'
```
__exit__
```python
__exit__(
    type, value, traceback
)
```
Releases underlying transport's resources.

**Warning:** ONLY use as a context manager if the transport is NOT shared with other clients! Exiting the `with` block will CLOSE the transport and may cause errors in other clients!
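The warning can be illustrated with minimal stand-in classes (hypothetical, not the real client or transport): exiting the `with` block closes the transport, so any other client sharing it would start failing.

```python
class FakeTransport:
    """Stand-in for a transport object with a close() method."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class FakeClient:
    """Stand-in client mirroring the __enter__/__exit__ pair on this page."""

    def __init__(self, transport):
        self.transport = transport

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        # Releases the underlying transport's resources.
        self.transport.close()


shared = FakeTransport()
with FakeClient(shared) as client:
    pass  # use the client here
print(shared.closed)  # True: the shared transport is now closed
```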
| Class Variables | |
|---|---|
| `DEFAULT_ENDPOINT` | `'generativelanguage.googleapis.com'` |
| `DEFAULT_MTLS_ENDPOINT` | `'generativelanguage.mtls.googleapis.com'` |