API for using Large Models that generate multimodal content and have additional capabilities beyond text generation.
```python
google.ai.generativelanguage.GenerativeServiceClient(
    *,
    credentials: Optional[ga_credentials.Credentials] = None,
    transport: Optional[Union[str, GenerativeServiceTransport]] = None,
    client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None,
    client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO
) -> None
```
| Raises | Description |
|---|---|
| `google.auth.exceptions.MutualTLSChannelError` | If mutual TLS transport creation failed for any reason. |

| Attributes | Description |
|---|---|
| `transport` | Returns the transport used by the client instance. |
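As a sketch of how the constructor options fit together, the helper below builds a client against an explicit endpoint via `client_options`. The helper name and default endpoint value are illustrative, not part of the library; the import is deferred so the sketch can be read without the package installed.

```python
def make_client(api_endpoint: str = "generativelanguage.googleapis.com"):
    """Build a GenerativeServiceClient against a given API endpoint.

    The default mirrors DEFAULT_ENDPOINT; pass a different value
    (e.g. the mTLS endpoint) if your setup requires one.
    """
    # Deferred imports: the google-ai-generativelanguage and
    # google-api-core packages are only needed when this is called.
    from google.ai import generativelanguage_v1beta
    from google.api_core.client_options import ClientOptions

    options = ClientOptions(api_endpoint=api_endpoint)
    return generativelanguage_v1beta.GenerativeServiceClient(client_options=options)
```

Credentials are resolved via Application Default Credentials unless an explicit `credentials` object is passed to the constructor.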
## Methods

### batch_embed_contents
```python
batch_embed_contents(
    request: Optional[Union[google.ai.generativelanguage.BatchEmbedContentsRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    requests: Optional[MutableSequence[generative_service.EmbedContentRequest]] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.BatchEmbedContentsResponse
```
Generates multiple embeddings from the model given input text in a synchronous call.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_batch_embed_contents():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    requests = generativelanguage_v1beta.EmbedContentRequest()
    requests.model = "model_value"

    request = generativelanguage_v1beta.BatchEmbedContentsRequest(
        model="model_value",
        requests=requests,
    )

    # Make the request
    response = client.batch_embed_contents(request=request)

    # Handle the response
    print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.BatchEmbedContentsRequest, dict]`. The request object: a batch request to get embeddings from the model for a list of prompts. |
| `model` | `str`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `requests` | `MutableSequence[google.ai.generativelanguage.EmbedContentRequest]`. Required. Embed requests for the batch. The model in each of these requests must match the model specified in the top-level request. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `google.ai.generativelanguage.BatchEmbedContentsResponse` | The response to a `BatchEmbedContentsRequest`. |
### count_tokens

```python
count_tokens(
    request: Optional[Union[google.ai.generativelanguage.CountTokensRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    contents: Optional[MutableSequence[content.Content]] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.CountTokensResponse
```
Runs a model's tokenizer on input content and returns the token count.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_count_tokens():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.CountTokensRequest(
        model="model_value",
    )

    # Make the request
    response = client.count_tokens(request=request)

    # Handle the response
    print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.CountTokensRequest, dict]`. The request object. Counts the number of tokens in the prompt; models may tokenize text differently, so each model may return a different count. |
| `model` | `str`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `contents` | `MutableSequence[google.ai.generativelanguage.Content]`. Required. The input given to the model as a prompt. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `google.ai.generativelanguage.CountTokensResponse` | A response from `CountTokens`. It returns the model's `token_count` for the prompt. |
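To make the request/response shape concrete, the sketch below wraps `count_tokens` for a plain-text prompt and reads `total_tokens` off the response. The helper name is illustrative, and the import is deferred so the sketch stands alone; pass a full resource name (as returned by the model-listing API) for `model`.

```python
def count_prompt_tokens(model: str, text: str) -> int:
    """Return the model's token count for a single text prompt."""
    from google.ai import generativelanguage_v1beta

    client = generativelanguage_v1beta.GenerativeServiceClient()
    request = generativelanguage_v1beta.CountTokensRequest(
        model=model,
        contents=[
            generativelanguage_v1beta.Content(
                parts=[generativelanguage_v1beta.Part(text=text)]
            )
        ],
    )
    # total_tokens carries the count for the whole prompt.
    return client.count_tokens(request=request).total_tokens
```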
### embed_content

```python
embed_content(
    request: Optional[Union[google.ai.generativelanguage.EmbedContentRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    content: Optional[google.ai.generativelanguage.Content] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.EmbedContentResponse
```

Generates an embedding from the model given an input `Content`.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_embed_content():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.EmbedContentRequest(
        model="model_value",
    )

    # Make the request
    response = client.embed_content(request=request)

    # Handle the response
    print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.EmbedContentRequest, dict]`. The request object: a request containing the content for the model to embed. |
| `model` | `str`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `content` | `google.ai.generativelanguage.Content`. Required. The content to embed. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `google.ai.generativelanguage.EmbedContentResponse` | The response to an `EmbedContentRequest`. |
### from_service_account_file

```python
@classmethod
from_service_account_file(
    filename: str, *args, **kwargs
)
```

Creates an instance of this client using the provided credentials file.

| Args | Description |
|---|---|
| `filename` | `str`. The path to the service account private key JSON file. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | Description |
|---|---|
| `GenerativeServiceClient` | The constructed client. |
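A minimal sketch of key-file authentication with this classmethod; the helper name is illustrative, and the import is deferred so the sketch stands alone. The same pattern works with `from_service_account_info` if you already hold the key material as a `dict`.

```python
def client_from_key_file(path: str):
    """Build a client authenticated with a service-account JSON key,
    instead of relying on Application Default Credentials."""
    from google.ai import generativelanguage_v1beta

    return generativelanguage_v1beta.GenerativeServiceClient.from_service_account_file(path)
```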
### from_service_account_info

```python
@classmethod
from_service_account_info(
    info: dict, *args, **kwargs
)
```

Creates an instance of this client using the provided credentials info.

| Args | Description |
|---|---|
| `info` | `dict`. The service account private key info. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | Description |
|---|---|
| `GenerativeServiceClient` | The constructed client. |
### from_service_account_json

```python
@classmethod
from_service_account_json(
    filename: str, *args, **kwargs
)
```

Creates an instance of this client using the provided credentials file.

| Args | Description |
|---|---|
| `filename` | `str`. The path to the service account private key JSON file. |
| `args` | Additional arguments to pass to the constructor. |
| `kwargs` | Additional arguments to pass to the constructor. |

| Returns | Description |
|---|---|
| `GenerativeServiceClient` | The constructed client. |
### generate_answer

```python
generate_answer(
    request: Optional[Union[google.ai.generativelanguage.GenerateAnswerRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    contents: Optional[MutableSequence[content.Content]] = None,
    safety_settings: Optional[MutableSequence[safety.SafetySetting]] = None,
    answer_style: Optional[google.ai.generativelanguage.GenerateAnswerRequest.AnswerStyle] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.GenerateAnswerResponse
```

Generates a grounded answer from the model given an input `GenerateAnswerRequest`.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_generate_answer():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.GenerateAnswerRequest(
        model="model_value",
        answer_style="VERBOSE",
    )

    # Make the request
    response = client.generate_answer(request=request)

    # Handle the response
    print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.GenerateAnswerRequest, dict]`. The request object: a request to generate a grounded answer from the model. |
| `model` | `str`. Required. The name of the model to use for generating the grounded answer. |
| `contents` | `MutableSequence[google.ai.generativelanguage.Content]`. Required. The content of the current conversation with the model. For single-turn queries, this is a single question to answer. For multi-turn queries, this is a repeated field that contains conversation history, with the last entry holding the current question. |
| `safety_settings` | `MutableSequence[google.ai.generativelanguage.SafetySetting]`. Optional. A list of unique `SafetySetting` instances for blocking unsafe content, enforced on the request contents and the response candidate. |
| `answer_style` | `google.ai.generativelanguage.GenerateAnswerRequest.AnswerStyle`. Required. Style in which answers should be returned. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `google.ai.generativelanguage.GenerateAnswerResponse` | Response from the model for a grounded answer. |
### generate_content

```python
generate_content(
    request: Optional[Union[google.ai.generativelanguage.GenerateContentRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    contents: Optional[MutableSequence[content.Content]] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> google.ai.generativelanguage.GenerateContentResponse
```

Generates a response from the model given an input `GenerateContentRequest`.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_generate_content():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.GenerateContentRequest(
        model="model_value",
    )

    # Make the request
    response = client.generate_content(request=request)

    # Handle the response
    print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.GenerateContentRequest, dict]`. The request object: a request to generate a completion from the model. |
| `model` | `str`. Required. The name of the model to use for generating the completion. |
| `contents` | `MutableSequence[google.ai.generativelanguage.Content]`. Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history plus the latest request. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `google.ai.generativelanguage.GenerateContentResponse` | Response from the model supporting multiple candidates. Safety ratings and content filtering are reported for the prompt in `GenerateContentResponse.prompt_feedback` and for each candidate in `finish_reason` and `safety_ratings`. The API contract is that either all requested candidates are returned or no candidates at all; no candidates are returned only if there was something wrong with the prompt (see `prompt_feedback`); and feedback on each candidate is reported in `finish_reason` and `safety_ratings`. |
### get_mtls_endpoint_and_cert_source

```python
@classmethod
get_mtls_endpoint_and_cert_source(
    client_options: Optional[client_options_lib.ClientOptions] = None
)
```

Return the API endpoint and client cert source for mutual TLS.

The client cert source is determined in the following order:

1. If the `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the client cert source is None.
2. If `client_options.client_cert_source` is provided, use it; otherwise, if the default client cert source exists, use the default one; otherwise the client cert source is None.

The API endpoint is determined in the following order:

1. If `client_options.api_endpoint` is provided, use it.
2. If the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the default mTLS endpoint; if it is "never", use the default API endpoint; otherwise, use the default mTLS endpoint if a client cert source exists, and the default API endpoint otherwise.

More details can be found at https://google.aip.dev/auth/4114

| Args | Description |
|---|---|
| `client_options` | `google.api_core.client_options.ClientOptions`. Custom options for the client. Only the `api_endpoint` and `client_cert_source` properties may be used in this method. |

| Returns | Description |
|---|---|
| `Tuple[str, Callable[[], Tuple[bytes, bytes]]]` | The API endpoint and the client cert source to use. |

| Raises | Description |
|---|---|
| `google.auth.exceptions.MutualTLSChannelError` | If any errors happen. |
### stream_generate_content

```python
stream_generate_content(
    request: Optional[Union[google.ai.generativelanguage.GenerateContentRequest, dict]] = None,
    *,
    model: Optional[str] = None,
    contents: Optional[MutableSequence[content.Content]] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, str]] = ()
) -> Iterable[google.ai.generativelanguage.GenerateContentResponse]
```

Generates a streamed response from the model given an input `GenerateContentRequest`.
```python
# This snippet has been automatically generated and should be regarded as a
# code template only.
# It will require modifications to work:
# - It may require correct/in-range values for request initialization.
# - It may require specifying regional endpoints when creating the service
#   client as shown in:
#   https://googleapis.dev/python/google-api-core/latest/client_options.html
from google.ai import generativelanguage_v1beta


def sample_stream_generate_content():
    # Create a client
    client = generativelanguage_v1beta.GenerativeServiceClient()

    # Initialize request argument(s)
    request = generativelanguage_v1beta.GenerateContentRequest(
        model="model_value",
    )

    # Make the request
    stream = client.stream_generate_content(request=request)

    # Handle the response
    for response in stream:
        print(response)
```
| Args | Description |
|---|---|
| `request` | `Union[google.ai.generativelanguage.GenerateContentRequest, dict]`. The request object: a request to generate a completion from the model. |
| `model` | `str`. Required. The name of the model to use for generating the completion. |
| `contents` | `MutableSequence[google.ai.generativelanguage.Content]`. Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history plus the latest request. |
| `retry` | `google.api_core.retry.Retry`. Designation of what errors, if any, should be retried. |
| `timeout` | `float`. The timeout for this request. |
| `metadata` | `Sequence[Tuple[str, str]]`. Strings which should be sent along with the request as metadata. |

| Returns | Description |
|---|---|
| `Iterable[google.ai.generativelanguage.GenerateContentResponse]` | Responses from the model supporting multiple candidates. Safety ratings and content filtering are reported for the prompt in `GenerateContentResponse.prompt_feedback` and for each candidate in `finish_reason` and `safety_ratings`. The API contract is that either all requested candidates are returned or no candidates at all; no candidates are returned only if there was something wrong with the prompt (see `prompt_feedback`); and feedback on each candidate is reported in `finish_reason` and `safety_ratings`. |
### `__enter__`

```python
__enter__() -> 'GenerativeServiceClient'
```

### `__exit__`

```python
__exit__(
    type, value, traceback
)
```

Releases underlying transport's resources.

**Warning:** ONLY use as a context manager if the transport is NOT shared with other clients! Exiting the with block will CLOSE the transport and may cause errors in other clients!
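Context-manager use can be sketched as below: the transport is closed automatically when the `with` block exits. The helper name and the embedding model name are illustrative assumptions, and the import is deferred so the sketch stands alone.

```python
def embed_once(text: str, model: str = "models/text-embedding-004"):
    """Embed a single string with a short-lived client.

    The default model name is an assumption for illustration; pick one
    from your own model listing. Only use the client as a context
    manager when its transport is not shared with other clients.
    """
    from google.ai import generativelanguage_v1beta

    # Exiting the `with` block closes the underlying transport.
    with generativelanguage_v1beta.GenerativeServiceClient() as client:
        request = generativelanguage_v1beta.EmbedContentRequest(
            model=model,
            content=generativelanguage_v1beta.Content(
                parts=[generativelanguage_v1beta.Part(text=text)]
            ),
        )
        response = client.embed_content(request=request)
    return list(response.embedding.values)
```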
| Class Variables | Value |
|---|---|
| `DEFAULT_ENDPOINT` | `'generativelanguage.googleapis.com'` |
| `DEFAULT_MTLS_ENDPOINT` | `'generativelanguage.mtls.googleapis.com'` |