PaLM 2 models

PaLM 2 is a family of language models optimized for ease of use on key developer use cases. The PaLM 2 family includes variations trained for text and chat generation as well as for text embeddings. This guide provides information about each variation to help you decide which is the best fit for your use case.

Model sizes

Model sizes are denoted by animal names. The following table shows the available sizes and how they compare to each other.

Model size   Description                                   Services
Bison        Most capable PaLM 2 model size.               text, chat
Gecko        Smallest, most efficient PaLM 2 model size.   embeddings

Model variations

Different PaLM models are available, each optimized for specific use cases. The attributes of each variation are described below.

Bison Text
  Model last updated: May 2023
  Model code: text-bison-001
  Model capabilities:
    • Input: text
    • Output: text
    • Optimized for language tasks such as:
      • Code generation
      • Text generation
      • Text editing
      • Problem solving
      • Recommendations generation
      • Information extraction
      • Data extraction or generation
      • AI agents
    • Can handle zero-shot, one-shot, and few-shot tasks.
  Model safety: Adjustable safety settings for six dimensions of harm are available to developers. See the safety settings topic for details.
  Rate limit: 90 requests per minute
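
For example, a text generation request to text-bison-001 through the Python client might look like the following. This is a minimal sketch that assumes the google.generativeai package and an API key stored in a PALM_API_KEY environment variable.

```python
import os

import google.generativeai as palm

# Authenticate with an API key (assumed to be stored in PALM_API_KEY).
palm.configure(api_key=os.environ["PALM_API_KEY"])

# Request a single text completion from text-bison-001.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Write a one-sentence summary of what a hash table is.",
    temperature=0.7,
    max_output_tokens=256,
)

print(completion.result)  # Text of the top-ranked candidate.
```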
Bison Chat
  Model last updated: May 2023
  Model code: chat-bison-001
  Model capabilities:
    • Input: text
    • Output: text
    • Generates text in a conversational format.
    • Optimized for dialog language tasks such as implementing chat bots or AI agents.
    • Can handle zero-shot, one-shot, and few-shot tasks.
  Model safety: No adjustable safety settings.
  Rate limit: 90 requests per minute
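
A chat exchange with chat-bison-001 could be sketched as follows, again assuming the google.generativeai Python package and a PALM_API_KEY environment variable.

```python
import os

import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

# Start a conversation; `context` primes the model's role for the dialog.
response = palm.chat(
    model="models/chat-bison-001",
    context="You are a friendly assistant that answers questions about hiking.",
    messages=["What should I pack for a day hike?"],
    temperature=0.5,
)
print(response.last)  # The latest model reply.

# Continue the same conversation with a follow-up message.
response = response.reply("What if it might rain?")
print(response.last)
```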
Gecko Embedding
  Model last updated: May 2023
  Model code: embedding-gecko-001
  Model capabilities:
    • Input: text
    • Output: text embeddings
    • Generates text embeddings for the input text.
    • Optimized for creating embeddings for text of up to 1024 tokens.
  Model safety: No adjustable safety settings.
  Rate limit: 1500 requests per minute
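
A minimal embedding request to embedding-gecko-001 might look like this sketch, assuming the google.generativeai Python package and a PALM_API_KEY environment variable.

```python
import os

import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

# Embed a short piece of text with embedding-gecko-001.
result = palm.generate_embeddings(
    model="models/embedding-gecko-001",
    text="PaLM 2 is a family of language models.",
)

embedding = result["embedding"]  # A list of floats representing the input text.
print(len(embedding))
```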

Model metadata

Use the ModelService API to get additional metadata about the latest models, such as input and output token limits. The following table displays the metadata for the text-bison-001 model variant.

Attribute                      Value
Display name                   Text Bison
Model code                     models/text-bison-001
Description                    Model targeted for text generation
Input token limit              8196
Output token limit             1024
Supported generation methods   generateText
Temperature                    0.7
top_p                          0.95
top_k                          40
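
One way to read this metadata programmatically is sketched below, assuming the google.generativeai Python package and a PALM_API_KEY environment variable.

```python
import os

import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

# Fetch the metadata for a single model variant.
model = palm.get_model("models/text-bison-001")
print(model.input_token_limit)             # e.g. 8196
print(model.output_token_limit)            # e.g. 1024
print(model.supported_generation_methods)  # e.g. ['generateText']

# Or list every available model and the generation methods it supports.
for m in palm.list_models():
    print(m.name, m.supported_generation_methods)
```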

Model attributes

The following attributes of the PaLM 2 models are common to all model variations.

Training data: PaLM 2's knowledge cutoff time is mid-2021. Knowledge about events after that time is limited.
Supported language: English
Configurable model parameters:
  • Top p
  • Top k
  • Temperature
  • Stop sequence
  • Max output length
  • Number of response candidates

See the model parameters section of the Intro to LLMs guide for information about each of these parameters.
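
As an illustration, these parameters map onto keyword arguments of the text generation call in the Python client. The sketch below assumes the google.generativeai package, a PALM_API_KEY environment variable, and that each candidate is returned as a dictionary with an output field.

```python
import os

import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="List three uses for a paper clip.",
    temperature=0.2,          # Lower values make output more deterministic.
    top_p=0.95,               # Nucleus sampling: probability mass cutoff.
    top_k=40,                 # Sample only from the 40 most likely tokens.
    stop_sequences=["\n\n"],  # Stop generating when this sequence appears.
    max_output_tokens=128,    # Maximum length of the response.
    candidate_count=3,        # Number of response candidates to return.
)

# Each candidate is assumed to be a dict that carries its generated text
# under the "output" key.
for candidate in completion.candidates:
    print(candidate["output"])
```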