Gemma Open Models

A family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models

Responsible by design

Incorporating comprehensive safety measures, these models help ensure responsible and trustworthy AI solutions through curated datasets and rigorous tuning.

Unmatched performance at size

Gemma models achieve exceptional benchmark results at their 2B and 7B sizes, even outperforming some larger open models.

Framework flexible

With Keras 3.0, enjoy seamless compatibility with JAX, TensorFlow, and PyTorch, empowering you to effortlessly choose and switch frameworks depending on your task.
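
As a minimal sketch of that flexibility, the snippet below selects a backend and loads a Gemma checkpoint through KerasNLP. It assumes Keras 3 and a keras_nlp release that ships the Gemma presets, plus access to the model weights; the preset name "gemma_2b_en" is an assumption and may differ across releases.

```python
# Minimal sketch: choose the Keras backend before importing Keras, then load
# a Gemma preset through KerasNLP. Assumes keras>=3, a keras_nlp release with
# Gemma support, and access to the weights (e.g. via Kaggle); the preset name
# below is an assumption and may differ by release.
import os

os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"

import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("Explain the rules of chess in one sentence.", max_length=64))
```

Because the backend is fixed by an environment variable before Keras is imported, the same script runs unchanged on JAX, TensorFlow, or PyTorch.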

Benchmarks

Gemma sets a new bar for state-of-the-art performance for size compared to popular models like Llama 2 and Mistral 7B.

MMLU (5-shot, top-1)

The MMLU benchmark is a test that measures the breadth of knowledge and problem-solving ability acquired by large language models during pretraining.

HellaSwag (0-shot)

The HellaSwag benchmark challenges a language model's ability to understand and apply common sense reasoning by selecting the most logical ending to a story.

PIQA (0-shot)

The PIQA benchmark tests a language model's ability to understand and apply physical commonsense knowledge by answering questions about everyday physical interactions.

SIQA (0-shot)

The SIQA benchmark evaluates a language model's understanding of social interactions and social common sense by asking questions about people’s actions and their social implications.

BoolQ (0-shot)

The BoolQ benchmark tests a language model's ability to answer naturally occurring yes/no questions (generated in unprompted and unconstrained settings), probing its capacity for real-world natural language inference.

Winogrande (partial scoring)

The Winogrande benchmark tests a language model's ability to resolve ambiguous fill-in-the-blank tasks with binary options, requiring generalized commonsense reasoning.

CQA (7-shot)

The CQA benchmark assesses the performance of language models on multiple-choice question-answering, requiring different types of commonsense knowledge.

OBQA

The OBQA benchmark evaluates a language model's ability to perform advanced question-answering with multi-step reasoning, commonsense knowledge, and rich text comprehension, modeled after open book exams.

ARC-e

The ARC-e benchmark tests a language model's advanced question-answering skills with genuine grade-school level, multiple-choice science questions.

ARC-c

The ARC-c benchmark is a more focused subset of the ARC-e dataset, containing only questions answered incorrectly by common (retrieval-based and word co-occurrence) algorithms.

TriviaQA (5-shot)

The TriviaQA benchmark tests reading comprehension skills with question-answer-evidence triples.

HumanEval (pass@1)

The HumanEval benchmark tests a language model's code generation abilities by evaluating whether its solutions pass functional unit tests for programming problems.
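
As a toy illustration of that scoring scheme (not the official HumanEval harness, which sandboxes execution), the sketch below runs a made-up candidate completion against made-up unit tests and reports pass@1 over a single greedy sample.

```python
# Toy pass@1 scoring: a candidate solution passes if its unit tests run without
# raising. The problem, candidate, and tests are invented for illustration; the
# real HumanEval harness executes untrusted code in a sandbox.
problems = [
    {
        "candidate": "def add(a, b):\n    return a + b\n",   # model completion
        "tests": "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n",
    },
]

def passes(candidate: str, tests: str) -> bool:
    """Define the candidate function, then run its unit tests."""
    scope = {}
    try:
        exec(candidate, scope)
        exec(tests, scope)    # raises AssertionError on a failing test
        return True
    except Exception:
        return False

pass_at_1 = sum(passes(p["candidate"], p["tests"]) for p in problems) / len(problems)
print(f"pass@1 = {pass_at_1:.2f}")
```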

MBPP (3-shot)

The MBPP benchmark tests a language model's ability to solve basic Python programming problems, focusing on fundamental programming concepts and standard library usage.

GSM8K (maj@1)

The GSM8K benchmark tests a language model's ability to solve grade-school-level math problems that frequently require multiple steps of reasoning.
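
The maj@k metric used here scores a problem by sampling k solutions, extracting the final numeric answer from each, and taking a majority vote; maj@1 reduces to scoring a single sample. A rough sketch of that extraction-and-vote step, with invented sample outputs, is shown below.

```python
# Rough sketch of maj@k answer extraction and voting for GSM8K-style problems.
# The sampled model outputs and gold answer are invented for illustration.
import re
from collections import Counter

def final_number(text):
    """Return the last number mentioned in a worked solution, if any."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def maj_at_k(samples, gold):
    """Majority vote over extracted answers, then exact match against the gold answer."""
    answers = [a for a in (final_number(s) for s in samples) if a is not None]
    if not answers:
        return False
    voted, _ = Counter(answers).most_common(1)[0]
    return voted == gold

samples = [
    "Each pack holds 12 pencils, so 3 packs hold 3 * 12 = 36. The answer is 36.",
    "3 * 12 = 36, so there are 36 pencils.",
    "I believe the answer is 24.",
]
print(maj_at_k(samples, gold="36"))       # maj@3 over three samples -> True
print(maj_at_k(samples[:1], gold="36"))   # maj@1 uses a single sample -> True
```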

MATH (4-shot)

The MATH benchmark evaluates a language model's ability to solve complex mathematical word problems, requiring reasoning, multi-step problem-solving, and the understanding of mathematical concepts.

AGIEval

The AGIEval benchmark tests a language model's general intelligence by using questions derived from real-world exams designed to assess human intellectual abilities (college entrance exams, law exams, etc.).

BBH

The BBH (BIG-Bench Hard) benchmark focuses on tasks deemed beyond the abilities of current language models, testing their limits across various reasoning and understanding domains.

Benchmark     Metric            Gemma 7B    Gemma 2B    Mistral 7B    Llama-2 13B    Llama-2 7B
MMLU          5-shot, top-1     64.3        42.3        62.5          54.8           45.3
HellaSwag     0-shot            81.2        71.4        81.0          80.7           77.2
PIQA          0-shot            81.2        77.3        82.2          80.5           78.8
SIQA          0-shot            51.8        49.7        47.0*         50.3           48.3
BoolQ         0-shot            83.2        69.42       83.2*         81.7           77.4
Winogrande    partial scoring   72.3        65.4        74.2          72.8           69.2
CQA           7-shot            71.3        65.3        66.3*         67.3           57.8
OBQA          -                 52.8        47.8        52.2          57.0           58.6
ARC-e         -                 81.5        73.2        80.5          77.3           75.2
ARC-c         -                 53.2        42.06       54.9          49.4           45.9
TriviaQA      5-shot            63.4        53.2        62.5          79.6           72.1
HumanEval     pass@1            32.3        22.0        26.2          18.3           12.8
MBPP          3-shot            44.4        29.2        40.2*         30.6           20.8
GSM8K         maj@1             46.4        17.7        35.4*         28.7           14.6
MATH          4-shot            24.3        11.8        12.7          3.9            2.5
AGIEval       -                 41.7        24.2        41.2*         39.1           29.3
BBH           -                 55.1        35.2        56.1*         39.4           32.6

*See the technical report for details on performance with other methodologies

Responsible AI development

Responsibility by Design

Pre-trained on carefully curated data and tuned for safety on top, Gemma models help empower safe and responsible AI development.

Robust and Transparent Evaluation

Comprehensive evaluations and transparent reporting disclose model limitations, supporting a responsible approach for each use case.

Powering Responsible Development

The Responsible Generative AI Toolkit helps developers design and implement Responsible AI best practices.

Optimized for Google Cloud

With Gemma models on Google Cloud, you can deeply customize the model to your specific needs using Vertex AI's fully managed tools or GKE's self-managed option, and deploy it to flexible, cost-efficient, AI-optimized infrastructure.

Accelerating academic research with Google Cloud credits

The Academic Research Program recently concluded its application period, awarding Google Cloud credits to support researchers pushing the boundaries of scientific discovery using Gemma models. We are excited to see the groundbreaking research that emerges from this initiative.

Stay tuned for future opportunities to advance your research with Google Cloud.

Join the community

Connect, explore, and share your knowledge with others in the ML model community.

Compete to build the best AI assistant for ML engineers

Kaggle is hosting a competition challenging participants to use Gemma models to build the best AI assistants for ML engineering tasks. The winners will be announced at Google I/O.

Join the competition