API for using Generative Language Models (GLMs) trained to generate text.
```python
google.ai.generativelanguage.TextServiceAsyncClient(
    *,
    credentials: Optional[ga_credentials.Credentials] = None,
    transport: Union[str, TextServiceTransport] = 'grpc_asyncio',
    client_options: Optional[ClientOptions] = None,
    client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO
) -> None
```
Also known as Large Language Models (LLMs), these models generate text given an input prompt from the user.
Raises | |
---|---|
google.auth.exceptions.MutualTLSChannelError | If mutual TLS transport creation failed for any reason. |

Attributes | |
---|---|
transport | Returns the transport used by the client instance. |
Methods
batch_embed_text

```python
batch_embed_text(
    request=None,
    *,
    model=None,
    texts=None,
    retry=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    timeout=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    metadata=()
)
```
Generates multiple embeddings from the model given input text in a synchronous call.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta

async def sample_batch_embed_text():
    # Create a client
    client = generativelanguage_v1beta.TextServiceAsyncClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.BatchEmbedTextRequest(
        model="model_value",
    )

    # Make the request
    response = await client.batch_embed_text(request=request)

    # Handle the response
    print(response)
```
Args | |
---|---|
request | Optional[Union[google.ai.generativelanguage.BatchEmbedTextRequest, dict]] — The request object. |
model | str — This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
texts | MutableSequence[str] — This corresponds to the `texts` field on the `request` instance; if `request` is provided, this should not be set. |
retry | google.api_core.retry_async.AsyncRetry — Designation of what errors, if any, should be retried. |
timeout | float — The timeout for this request. |
metadata | Sequence[Tuple[str, str]] — Strings which should be sent along with the request as metadata. |

Returns | |
---|---|
google.ai.generativelanguage.BatchEmbedTextResponse | The response to a BatchEmbedTextRequest. |
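`batch_embed_text` accepts many input texts in one call. If the service caps the number of texts per request (a limit assumed here purely for illustration), a small client-side helper can split a large list into batches before issuing multiple calls. `chunk_texts` and the batch size below are hypothetical, not part of the library:

```python
def chunk_texts(texts, batch_size=100):
    """Yield successive batches of at most `batch_size` texts.

    A client-side convenience for feeding a long list of inputs into
    batch_embed_text one batch at a time.
    """
    for start in range(0, len(texts), batch_size):
        yield texts[start:start + batch_size]
```

Each yielded batch could then populate the `texts` field of a separate `BatchEmbedTextRequest`.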
count_text_tokens

```python
count_text_tokens(
    request=None,
    *,
    model=None,
    prompt=None,
    retry=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    timeout=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    metadata=()
)
```
Runs a model's tokenizer on a text and returns the token count.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta

async def sample_count_text_tokens():
    # Create a client
    client = generativelanguage_v1beta.TextServiceAsyncClient()

    # Initialize request argument(s)
    prompt = generativelanguage_v1beta.TextPrompt()
    prompt.text = "text_value"

    request = generativelanguage_v1beta.CountTextTokensRequest(
        model="model_value",
        prompt=prompt,
    )

    # Make the request
    response = await client.count_text_tokens(request=request)

    # Handle the response
    print(response)
```
Args | |
---|---|
request | Optional[Union[google.ai.generativelanguage.CountTextTokensRequest, dict]] — The request object. |
model | str — This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
prompt | google.ai.generativelanguage.TextPrompt — This corresponds to the `prompt` field on the `request` instance; if `request` is provided, this should not be set. |
retry | google.api_core.retry_async.AsyncRetry — Designation of what errors, if any, should be retried. |
timeout | float — The timeout for this request. |
metadata | Sequence[Tuple[str, str]] — Strings which should be sent along with the request as metadata. |

Returns | |
---|---|
google.ai.generativelanguage.CountTextTokensResponse | A response from CountTextTokens. It returns the model's token_count for the prompt. |
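A common use of `count_text_tokens` is to check a prompt against a token budget before calling generation. The sketch below mimics that check offline: a naive whitespace split stands in for the model's real tokenizer (which only the service can apply), and `fits_budget` is a hypothetical helper, not a library function:

```python
def naive_token_count(text):
    """Rough offline stand-in for the model tokenizer: count whitespace-
    separated words. The real count comes from count_text_tokens."""
    return len(text.split())

def fits_budget(text, max_tokens):
    """Return True if the (approximate) token count is within budget."""
    return naive_token_count(text) <= max_tokens
```

In practice you would replace `naive_token_count` with the `token_count` field of the `CountTextTokensResponse`.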
embed_text

```python
embed_text(
    request=None,
    *,
    model=None,
    text=None,
    retry=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    timeout=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    metadata=()
)
```
Generates an embedding from the model given an input message.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta

async def sample_embed_text():
    # Create a client
    client = generativelanguage_v1beta.TextServiceAsyncClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.EmbedTextRequest(
        model="model_value",
    )

    # Make the request
    response = await client.embed_text(request=request)

    # Handle the response
    print(response)
```
Args | |
---|---|
request | Optional[Union[google.ai.generativelanguage.EmbedTextRequest, dict]] — The request object. |
model | str — This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
text | str — This corresponds to the `text` field on the `request` instance; if `request` is provided, this should not be set. |
retry | google.api_core.retry_async.AsyncRetry — Designation of what errors, if any, should be retried. |
timeout | float — The timeout for this request. |
metadata | Sequence[Tuple[str, str]] — Strings which should be sent along with the request as metadata. |

Returns | |
---|---|
google.ai.generativelanguage.EmbedTextResponse | The response to an EmbedTextRequest. |
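Embeddings returned by `embed_text` are typically compared across texts, most often with cosine similarity. A minimal, dependency-free version, assuming the embeddings have been extracted as plain lists of floats of equal length:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Values close to 1.0 indicate semantically similar texts; values near 0 indicate unrelated ones.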
from_service_account_file

```python
@classmethod
from_service_account_file(filename: str, *args, **kwargs)
```
Creates an instance of this client using the provided credentials file.
Args | |
---|---|
filename | str — The path to the service account private key JSON file. |
args | Additional arguments to pass to the constructor. |
kwargs | Additional arguments to pass to the constructor. |

Returns | |
---|---|
TextServiceAsyncClient | The constructed client. |
from_service_account_info

```python
@classmethod
from_service_account_info(info: dict, *args, **kwargs)
```
Creates an instance of this client using the provided credentials info.
Args | |
---|---|
info | dict — The service account private key info. |
args | Additional arguments to pass to the constructor. |
kwargs | Additional arguments to pass to the constructor. |

Returns | |
---|---|
TextServiceAsyncClient | The constructed client. |
from_service_account_json

```python
@classmethod
from_service_account_json(filename: str, *args, **kwargs)
```
Creates an instance of this client using the provided credentials file.
Args | |
---|---|
filename | str — The path to the service account private key JSON file. |
args | Additional arguments to pass to the constructor. |
kwargs | Additional arguments to pass to the constructor. |

Returns | |
---|---|
TextServiceAsyncClient | The constructed client. |
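The three `from_service_account_*` constructors follow the same alternate-constructor pattern: a classmethod that builds credentials from either a file path or an in-memory dict and forwards them to `__init__`. A toy illustration of that pattern (`ToyClient` is hypothetical and has nothing to do with real credential handling):

```python
import json

class ToyClient:
    """Minimal sketch of the alternate-constructor pattern used by
    from_service_account_file / from_service_account_info."""

    def __init__(self, project_id):
        self.project_id = project_id

    @classmethod
    def from_info(cls, info):
        # Build a client from an in-memory dict.
        return cls(project_id=info["project_id"])

    @classmethod
    def from_file(cls, filename):
        # Build a client from a JSON file by delegating to from_info.
        with open(filename) as f:
            return cls.from_info(json.load(f))
```

Delegating the file-based constructor to the dict-based one keeps the parsing logic in a single place, which is also how such pairs are typically structured.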
generate_text

```python
generate_text(
    request=None,
    *,
    model=None,
    prompt=None,
    temperature=None,
    candidate_count=None,
    max_output_tokens=None,
    top_p=None,
    top_k=None,
    retry=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    timeout=<_MethodDefault._DEFAULT_VALUE: <object object>>,
    metadata=()
)
```
Generates a response from the model given an input message.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta

async def sample_generate_text():
    # Create a client
    client = generativelanguage_v1beta.TextServiceAsyncClient()

    # Initialize request argument(s)
    prompt = generativelanguage_v1beta.TextPrompt()
    prompt.text = "text_value"

    request = generativelanguage_v1beta.GenerateTextRequest(
        model="model_value",
        prompt=prompt,
    )

    # Make the request
    response = await client.generate_text(request=request)

    # Handle the response
    print(response)
```
Args | |
---|---|
request | Optional[Union[google.ai.generativelanguage.GenerateTextRequest, dict]] — The request object. |
model | str — This corresponds to the `model` field on the `request` instance; if `request` is provided, this should not be set. |
prompt | google.ai.generativelanguage.TextPrompt — This corresponds to the `prompt` field on the `request` instance; if `request` is provided, this should not be set. |
temperature | float — This corresponds to the `temperature` field on the `request` instance; if `request` is provided, this should not be set. |
candidate_count | int — This corresponds to the `candidate_count` field on the `request` instance; if `request` is provided, this should not be set. |
max_output_tokens | int — This corresponds to the `max_output_tokens` field on the `request` instance; if `request` is provided, this should not be set. |
top_p | float — This corresponds to the `top_p` field on the `request` instance; if `request` is provided, this should not be set. |
top_k | int — This corresponds to the `top_k` field on the `request` instance; if `request` is provided, this should not be set. |
retry | google.api_core.retry_async.AsyncRetry — Designation of what errors, if any, should be retried. |
timeout | float — The timeout for this request. |
metadata | Sequence[Tuple[str, str]] — Strings which should be sent along with the request as metadata. |

Returns | |
---|---|
google.ai.generativelanguage.GenerateTextResponse | The response from the model, including candidate completions. |
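The `top_k`, `top_p`, and `temperature` parameters shape how the model samples candidate tokens. As a rough illustration of what top-k filtering does (a sketch of the general technique, not the model's actual sampler), keep only the k most probable tokens and renormalize:

```python
def top_k_filter(probs, k):
    """Keep the k highest-probability entries of a token->probability dict
    and renormalize so the kept probabilities sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:k])
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}
```

Top-p (nucleus) filtering is analogous but keeps the smallest set of tokens whose cumulative probability exceeds p; temperature rescales the distribution before either filter is applied.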
get_mtls_endpoint_and_cert_source

```python
@classmethod
get_mtls_endpoint_and_cert_source(client_options: Optional[ClientOptions] = None)
```
Return the API endpoint and client cert source for mutual TLS.

The client cert source is determined in the following order:
(1) if the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable is not "true", the client cert source is None.
(2) if client_options.client_cert_source is provided, use it; otherwise, if the default client cert source exists, use the default one; otherwise the client cert source is None.

The API endpoint is determined in the following order:
(1) if client_options.api_endpoint is provided, use it.
(2) if the GOOGLE_API_USE_MTLS_ENDPOINT environment variable is "always", use the default mTLS endpoint; if it is "never", use the default API endpoint; otherwise, if a client cert source exists, use the default mTLS endpoint; otherwise use the default API endpoint.

More details can be found at https://google.aip.dev/auth/4114
Args | |
---|---|
client_options | google.api_core.client_options.ClientOptions — Custom options for the client. Only the `api_endpoint` and `client_cert_source` properties may be used in this method. |

Returns | |
---|---|
Tuple[str, Callable[[], Tuple[bytes, bytes]]] | The API endpoint and the client cert source to use. |

Raises | |
---|---|
google.auth.exceptions.MutualTLSChannelError | If any errors happen. |
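The endpoint resolution order described above can be sketched as a pure function (illustrative only; the real method also consults the environment and certificate helpers rather than taking plain arguments):

```python
DEFAULT_ENDPOINT = "generativelanguage.googleapis.com"
DEFAULT_MTLS_ENDPOINT = "generativelanguage.mtls.googleapis.com"

def resolve_api_endpoint(use_mtls_endpoint="auto", api_endpoint=None,
                         has_cert_source=False):
    """Sketch of the documented resolution order.

    use_mtls_endpoint mimics GOOGLE_API_USE_MTLS_ENDPOINT
    ("always", "never", or "auto"); api_endpoint mimics
    client_options.api_endpoint; has_cert_source says whether a client
    certificate source was found.
    """
    # (1) An explicit api_endpoint always wins.
    if api_endpoint is not None:
        return api_endpoint
    # (2) "always"/"never" force the choice; otherwise fall back on
    # whether a client cert source exists.
    if use_mtls_endpoint == "always":
        return DEFAULT_MTLS_ENDPOINT
    if use_mtls_endpoint == "never":
        return DEFAULT_ENDPOINT
    return DEFAULT_MTLS_ENDPOINT if has_cert_source else DEFAULT_ENDPOINT
```

This makes the precedence easy to see: explicit endpoint, then the environment-variable override, then certificate availability.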
get_transport_class

```python
get_transport_class()
```

Returns an appropriate transport class. (In the generated code this attribute is a functools.partial wrapper around the synchronous client's get_transport_class, which is why the leaked `partial(func, *args, **keywords)` docstring may appear in some renderings.)
Class Variables | |
---|---|
DEFAULT_ENDPOINT | 'generativelanguage.googleapis.com' |
DEFAULT_MTLS_ENDPOINT | 'generativelanguage.mtls.googleapis.com' |