Get started with the Gemini API in Node.js applications

This tutorial demonstrates how to access the Gemini API for your Node.js application using the Google AI JavaScript SDK.

In this tutorial, you'll learn how to do the following:

  • Set up your project, including your API key
  • Generate text from text-only input
  • Generate text from text-and-image input (multimodal)
  • Build multi-turn conversations (chat)
  • Use streaming for faster interactions

In addition, this tutorial contains sections about advanced use cases (like embeddings and counting tokens) as well as options for controlling content generation.

Prerequisites

This tutorial assumes that you're familiar with building applications with Node.js.

To complete this tutorial, make sure that your development environment meets the following requirements:

  • Node.js v18+
  • npm

Set up your project

Before calling the Gemini API, you need to set up your project, which includes setting up your API key, installing the SDK package, and initializing the model.

Set up your API key

To use the Gemini API, you'll need an API key. If you don't already have one, create a key in Google AI Studio.

Secure your API key

It's strongly recommended that you do not check an API key into your version control system. Instead, you should use a secrets store for your API key.

All the snippets in this tutorial assume that you're accessing your API key as an environment variable.
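
For example, one common approach is to keep the key in a local .env file that's excluded from version control and load it at startup. Here's a minimal sketch using the third-party dotenv package (the package and the .env file are assumptions of this sketch, not requirements of the SDK):

// Load variables from a local .env file into process.env
// The .env file contains a line like: API_KEY=your-api-key
require("dotenv").config();

console.log(process.env.API_KEY ? "API key loaded" : "API key is missing");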

Install the SDK package

To use the Gemini API in your own application, you need to install the @google/generative-ai package for Node.js:

npm install @google/generative-ai

Initialize the Generative Model

Before you can make any API calls, you need to import and initialize the Generative Model.

const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

// ...

const model = genAI.getGenerativeModel({ model: "MODEL_NAME"});

// ...

When specifying a model, note the following:

  • Use a model that's specific to your use case (for example, gemini-pro-vision is for multimodal input). Within this guide, the instructions for each implementation list the recommended model for each use case.

Implement common use cases

Now that your project is set up, you can explore using the Gemini API to implement different use cases:

  • Generate text from text-only input
  • Generate text from text-and-image input (multimodal)
  • Build multi-turn conversations (chat)
  • Use streaming for faster interactions

In the advanced use cases section, you can find information about using the Gemini API for embeddings and counting tokens.

Generate text from text-only input

When the prompt input includes only text, use the gemini-pro model with the generateContent method to generate text output:

const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function run() {
  // For text-only input, use the gemini-pro model
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});

  const prompt = "Write a story about a magic backpack."

  const result = await model.generateContent(prompt);
  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

Generate text from text-and-image input (multimodal)

Gemini provides a multimodal model (gemini-pro-vision), so you can input both text and images. Make sure to review the image requirements for prompts.

When the prompt input includes both text and images, use the gemini-pro-vision model with the generateContent method to generate text output:

const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

// Converts local file information to a GoogleGenerativeAI.Part object.
function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: Buffer.from(fs.readFileSync(path)).toString("base64"),
      mimeType
    },
  };
}

async function run() {
  // For text-and-image input (multimodal), use the gemini-pro-vision model
  const model = genAI.getGenerativeModel({ model: "gemini-pro-vision" });

  const prompt = "What's different between these pictures?";

  const imageParts = [
    fileToGenerativePart("image1.png", "image/png"),
    fileToGenerativePart("image2.jpeg", "image/jpeg"),
  ];

  const result = await model.generateContent([prompt, ...imageParts]);
  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

Build multi-turn conversations (chat)

Using Gemini, you can build freeform conversations across multiple turns. The SDK simplifies the process by managing the state of the conversation, so unlike with generateContent, you don't have to store the conversation history yourself.

To build a multi-turn conversation (like chat), use the gemini-pro model, and initialize the chat by calling startChat(). Then use sendMessage() to send a new user message, which will also append the message and the response to the chat history.

There are two possible options for role associated with the content in a conversation:

  • user: the role that provides the prompts. This value is the default for sendMessage calls.

  • model: the role that provides the responses. This role can be used when calling startChat() with existing history.

const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function run() {
  // For text-only input, use the gemini-pro model
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});

  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";

  const result = await chat.sendMessage(msg);
  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

Use streaming for faster interactions

By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result and instead using streaming to handle partial results.

The following example shows how to implement streaming with the generateContentStream method to generate text from a text-and-image input prompt.

//...

const result = await model.generateContentStream([prompt, ...imageParts]);

let text = '';
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
  text += chunkText;
}

//...
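
In addition to the chunk stream, the result object from generateContentStream also exposes the aggregated response as a promise, so you can still read the full text once streaming completes:

// The aggregated response resolves after the stream finishes
const response = await result.response;
console.log(response.text());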

You can use a similar approach for text-only input and chat use cases.

// Use streaming with text-only input
const result = await model.generateContentStream(prompt);

See the chat example above for how to instantiate a chat.

// Use streaming with multi-turn conversations (like chat)
const result = await chat.sendMessageStream(msg);
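
As with generateContentStream, you iterate over result.stream to consume the chat response as it arrives; a minimal sketch:

// Print each chunk of the chat response as it arrives
for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
}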

Implement advanced use cases

The common use cases described in the previous section of this tutorial help you become comfortable with using the Gemini API. This section describes some use cases that might be considered more advanced.

Use embeddings

Embedding is a technique used to represent information as an array of floating point numbers. With Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings. For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity.

Use the embedding-001 model with the embedContent method (or the batchEmbedContents method) to generate embeddings. The following example generates an embedding for a single string:

const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function run() {
  // For embeddings, use the embedding-001 model
  const model = genAI.getGenerativeModel({ model: "embedding-001"});

  const text = "The quick brown fox jumps over the lazy dog."

  const result = await model.embedContent(text);
  const embedding = result.embedding;
  console.log(embedding.values);
}

run();
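
To embed several strings in one call, you can use the batchEmbedContents method. The following sketch batch-embeds two sentences and compares them with the cosine similarity mentioned above (the example sentences are illustrative):

const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(process.env.API_KEY);

// Compute the cosine similarity of two embedding vectors
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function compare() {
  const model = genAI.getGenerativeModel({ model: "embedding-001" });

  // Embed both sentences in a single request
  const { embeddings } = await model.batchEmbedContents({
    requests: [
      "The cat sits on the mat.",
      "A feline rests on a rug.",
    ].map((text) => ({ content: { role: "user", parts: [{ text }] } })),
  });

  // Texts with similar meaning should score close to 1
  console.log(cosineSimilarity(embeddings[0].values, embeddings[1].values));
}

compare();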

Count tokens

When using long prompts, it might be useful to count tokens before sending any content to the model. The following examples show how to use countTokens() for various use cases:

// For text-only input
const { totalTokens } = await model.countTokens(prompt);

// For text-and-image input (multimodal)
const { totalTokens } = await model.countTokens([prompt, ...imageParts]);

// For multi-turn conversations (like chat)
const history = await chat.getHistory();
const msgContent = { role: "user", parts: [{ text: msg }] };
const contents = [...history, msgContent];
const { totalTokens } = await model.countTokens({ contents });
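
For example, you might check the count against the model's input token limit before sending a request; a minimal sketch (the limit value here is illustrative; check the model documentation for the actual limit):

// Only send the prompt if it fits within the assumed input limit
const INPUT_TOKEN_LIMIT = 30720; // illustrative value
const { totalTokens } = await model.countTokens(prompt);
if (totalTokens > INPUT_TOKEN_LIMIT) {
  throw new Error(`Prompt is too long: ${totalTokens} tokens`);
}
const result = await model.generateContent(prompt);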

Options to control content generation

You can control content generation by configuring model parameters and by using safety settings.

Note that passing generationConfig or safetySettings to a model request method (like generateContent) will fully override the configuration object with the same name passed in getGenerativeModel.
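
For example, here's a minimal sketch of a per-request override (the prompt and temperature value are illustrative):

// This generationConfig replaces, rather than merges with, any
// generationConfig passed to getGenerativeModel
const result = await model.generateContent({
  contents: [{ role: "user", parts: [{ text: "Write a haiku about autumn." }] }],
  generationConfig: { temperature: 0 },
});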

Configure model parameters

Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Learn more about Model parameters.

const generationConfig = {
  stopSequences: ["red"],
  maxOutputTokens: 200,
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
};

const model = genAI.getGenerativeModel({ model: "MODEL_NAME",  generationConfig });

Use safety settings

You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with a medium or high probability of being unsafe across all dimensions. Learn more about Safety settings.

Here's how to set one safety setting:

const { HarmBlockThreshold, HarmCategory } = require("@google/generative-ai");

// ...

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];

const model = genAI.getGenerativeModel({ model: "MODEL_NAME", safetySettings });

You can also set more than one safety setting:

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];

What's next

  • Prompt design is the process of creating prompts that elicit the desired response from language models. Writing well structured prompts is an essential part of ensuring accurate, high quality responses from a language model. Learn about best practices for prompt writing.

  • Gemini offers several model variations to meet the needs of different use cases, such as input types and complexity, implementations for chat or other dialog language tasks, and size constraints. Learn about the available Gemini models.

  • The rate limit for Gemini Pro models is 60 requests per minute (RPM). Gemini offers options for requesting rate limit increases.