## Overview
This node integrates with Google Gemini chat models to generate AI-driven text responses from user input messages. It supports multiple Gemini model variants and offers an optional "thinking mode" that returns the model's internal reasoning alongside the final response. This makes it useful when you need insight into the model's thought process as well as the answer itself, for example in educational tools, complex decision support, or debugging AI outputs.
Practical examples:
- Generating conversational replies or chatbot responses.
- Asking the model to explain its reasoning behind a recommendation.
- Customizing output randomness and length for creative writing or summarization tasks.
## Properties
| Name | Meaning |
|---|---|
| Model | The Gemini model variant to use. Options: Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 1.5 Flash-8B, Gemini 2.0 Flash Experimental |
| Message | The input message text sent to the Gemini model. |
| Enable Thinking | Whether to enable thinking mode, which returns both internal reasoning and final response. |
| System Instruction | Optional system-level instruction guiding the model’s behavior (e.g., "You are a helpful assistant..."). |
| Temperature | Controls randomness in the generated output; higher values produce more random results (0 to 2). |
| Max Output Tokens | Maximum number of tokens the model can generate in the response (1 to 8192). |
| Top P | Controls diversity via nucleus sampling (probability mass threshold between 0 and 1). |
| Top K | Limits diversity by restricting the number of tokens considered at each step (1 to 100). |
| Custom Parameters | Additional custom parameters as name-value pairs to include in the request payload. |
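The properties above map onto the request body of the Gemini `generateContent` REST API. As a sketch of how the node might assemble that payload (the `build_payload` helper is illustrative, not the node's actual code; the field names follow the public Gemini API, and the assumption that Custom Parameters are merged into the top level of the request body is ours):

```python
def build_payload(message, system_instruction=None, temperature=0.7,
                  max_output_tokens=1024, top_p=0.95, top_k=40,
                  custom_parameters=None):
    """Assemble a generateContent request body from the node's properties."""
    payload = {
        "contents": [{"role": "user", "parts": [{"text": message}]}],
        "generationConfig": {
            "temperature": temperature,
            "maxOutputTokens": max_output_tokens,
            "topP": top_p,
            "topK": top_k,
        },
    }
    # System Instruction is sent as a separate top-level field, not as a message.
    if system_instruction:
        payload["systemInstruction"] = {"parts": [{"text": system_instruction}]}
    # Custom Parameters (name-value pairs) merged into the request body.
    if custom_parameters:
        payload.update(custom_parameters)
    return payload

body = build_payload("Explain nucleus sampling in one sentence.",
                     system_instruction="You are a helpful assistant.")
```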
## Output
The node outputs JSON data with the following structure:
- `model`: The selected Gemini model used.
- `message`: The original input message.
- `response`: The generated text response from the model.
- `thinking` (optional): If thinking mode is enabled, contains the model's internal reasoning or thought process.
- `usage`: Metadata about token usage returned by the API.
- `rawResponse`: The full raw response object from the Gemini API.
- `pairedItem`: Index linking the output to the corresponding input item.
No binary data output is produced by this node.
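For illustration, an output item might look like the following (all values are placeholders, not real API output; the `usage` sub-field names follow the Gemini API's `usageMetadata` convention, which is our assumption about what the node passes through):

```python
# Illustrative output item produced by the node (placeholder values).
output_item = {
    "model": "gemini-1.5-flash",
    "message": "Explain nucleus sampling in one sentence.",
    "response": "Nucleus sampling keeps only the smallest token set whose "
                "cumulative probability exceeds top-p.",
    "thinking": "The user wants a concise definition...",  # only when Enable Thinking is on
    "usage": {"promptTokenCount": 12, "candidatesTokenCount": 28, "totalTokenCount": 40},
    "rawResponse": {"candidates": [{"content": {"parts": [{"text": "..."}]}}]},  # abridged
    "pairedItem": {"item": 0},  # links this output to input item 0
}
```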
## Dependencies
- Requires an API key credential for authenticating requests to the Google Gemini generative language API.
- The node sends HTTP POST requests to the endpoint:
  `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent`
- Proper configuration of the API key credential within n8n is necessary.
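A minimal sketch of how such a request could be addressed and authenticated (the `build_request` helper is illustrative, not the node's implementation; the Gemini REST API accepts the key either as an `x-goog-api-key` header, used here, or as a `?key=` query parameter):

```python
GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request(model: str, api_key: str) -> tuple:
    """Return the generateContent URL and headers for a given model."""
    url = f"{GEMINI_BASE}/{model}:generateContent"
    headers = {
        "Content-Type": "application/json",
        "x-goog-api-key": api_key,  # alternative: append ?key=<API_KEY> to the URL
    }
    return url, headers

url, headers = build_request("gemini-1.5-flash", "YOUR_API_KEY")
```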
## Troubleshooting
Common issues:
- Invalid or missing API credentials will cause authentication errors.
- Exceeding token limits or invalid parameter values may result in API errors.
- Malformed JSON in custom parameters can cause request failures.
Error messages:
- Authentication errors: Verify that the API key credential is correctly set up and has required permissions.
- Parameter validation errors: Check that numeric inputs like temperature, max tokens, topP, and topK are within allowed ranges.
- JSON parsing errors in thinking mode: Ensure the model’s response is valid JSON when thinking mode is enabled.
If configured to continue on failure, the node handles errors per item, so one failed item does not stop the rest of the execution.
## Links and References
- Google Gemini Models Documentation
- OpenAI-style Sampling Parameters Explanation (for understanding temperature, topP, topK)
- n8n Documentation on Creating Custom Nodes