Method: models.countMessageTokens

Runs a model's tokenizer on a message prompt and returns the token count.

HTTP request

POST https://generativelanguage.googleapis.com/v1beta3/{model=models/*}:countMessageTokens

The URL uses gRPC Transcoding syntax.

For example:

curl https://generativelanguage.googleapis.com/v1beta3/models/chat-bison-001:countMessageTokens?key=$PALM_API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "prompt": {
            "messages": [
                {"content": "How many tokens?"},
                {"content": "For this whole conversation?"}
            ]
        }
    }'

A successful call returns a response such as:

{
  "tokenCount": 23
}
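
The same request can be made from Python with the requests library. This is a minimal sketch rather than an official client snippet; it assumes requests is installed and that the API key is available in the PALM_API_KEY environment variable, as in the curl example above.

import os
import requests

# Same call as the curl example: count the tokens in a two-message prompt.
url = (
    "https://generativelanguage.googleapis.com/v1beta3/"
    "models/chat-bison-001:countMessageTokens"
)
payload = {
    "prompt": {
        "messages": [
            {"content": "How many tokens?"},
            {"content": "For this whole conversation?"},
        ]
    }
}

response = requests.post(url, params={"key": os.environ["PALM_API_KEY"]}, json=payload)
response.raise_for_status()
print(response.json()["tokenCount"])  # e.g. 23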

Path parameters

model (string)

Required. The model's resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the models.list method.

Format: models/{model}
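
The resource name is substituted directly into {model=models/*} in the request path. A small sketch of that substitution in Python, using the chat-bison-001 model from the example above:

# The model resource name must match a name returned by models.list.
model = "models/chat-bison-001"
url = f"https://generativelanguage.googleapis.com/v1beta3/{model}:countMessageTokens"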

Request body

The request body contains data with the following structure:

JSON representation
{
  "prompt": {
    object (MessagePrompt)
  }
}
Fields

prompt (object (MessagePrompt))

Required. The prompt whose token count is to be returned.
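
As an illustration, the request body can be assembled from a plain list of message strings. The helper below is hypothetical, not part of the API; it only produces JSON matching the structure shown above.

def build_count_request(messages):
    # Hypothetical helper: wrap message strings in the
    # {"prompt": {"messages": [...]}} structure expected by countMessageTokens.
    return {"prompt": {"messages": [{"content": text} for text in messages]}}

body = build_count_request(["How many tokens?", "For this whole conversation?"])
# body == {"prompt": {"messages": [{"content": "How many tokens?"},
#                                  {"content": "For this whole conversation?"}]}}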

Response body

A response from models.countMessageTokens.

It returns the model's tokenCount for the prompt.

If successful, the response body contains data with the following structure:

JSON representation
{
  "tokenCount": integer
}
Fields

tokenCount (integer)

The number of tokens that the model tokenizes the prompt into.

Always non-negative.
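
A typical use of the count is to check a prompt against a model's input token limit before sending it for generation. The sketch below is illustrative only; the 4096-token limit is a placeholder, not a documented value for any particular model.

INPUT_TOKEN_LIMIT = 4096  # placeholder limit, not taken from the API

def fits_in_context(count_response, limit=INPUT_TOKEN_LIMIT):
    # count_response is the parsed JSON returned by models.countMessageTokens,
    # e.g. {"tokenCount": 23}.
    return count_response["tokenCount"] <= limit

print(fits_in_context({"tokenCount": 23}))  # True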

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/generative-language

For more information, see the Authentication Overview.
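
Instead of an API key, a request can be authorized with an OAuth access token that carries this scope. The sketch below uses the google-auth library's Application Default Credentials; it is one possible setup and assumes credentials with access to the generative-language scope are already configured in the environment.

import google.auth
import google.auth.transport.requests
import requests

# Obtain Application Default Credentials restricted to the
# generative-language scope, then refresh them to get an access token.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/generative-language"]
)
credentials.refresh(google.auth.transport.requests.Request())

response = requests.post(
    "https://generativelanguage.googleapis.com/v1beta3/"
    "models/chat-bison-001:countMessageTokens",
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"prompt": {"messages": [{"content": "How many tokens?"}]}},
)
print(response.json())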