Overview
This node integrates with the Google Gemini Chat Model Plus, a conversational AI model that generates text completions from input prompts. It supports advanced features such as in-chat Google Search to enhance responses and HTTPS proxy configuration for restricted networks.
Common scenarios where this node is beneficial include:
- Building AI-powered chatbots that require up-to-date information via integrated search.
- Generating creative or informative text completions in workflows.
- Enhancing customer support automation by leveraging Google's language models.
- Experimenting with different model parameters to tune response randomness and length.
Practical example: A user can input a conversation history or prompt, enable Google Search to fetch relevant real-time data, and receive a coherent, context-aware reply generated by the Gemini model.
Properties
| Name | Meaning |
|---|---|
| This node must be connected to an AI chain. Insert one | A notice indicating that this node requires connection to an AI chain or agent node to function properly. |
| Model | The specific Google Gemini model used to generate completions. Options are dynamically loaded from the API and exclude embedding models. Examples include models/gemini-1.0-pro. |
| Enable Google Search | Boolean flag to enable the Gemini API's built-in Google Search tool, allowing the model to incorporate search results into its responses. |
| Debug Mode | Boolean flag to enable detailed debug logging in the terminal for troubleshooting and development purposes. |
| Options | A collection of additional parameters to customize the generation: |
| Maximum Number of Tokens | Maximum tokens to generate in the completion (default 2048). |
| Sampling Temperature | Controls randomness of output (0 to 1). Lower values make output more deterministic; default is 0.4. |
| Top K | Limits sampling to the K most probable tokens (-1 disables it); default is 32. |
| Top P | Nucleus sampling parameter controlling diversity (0 to 1); default is 1. |
| Safety Settings | Multiple safety filters to block harmful content. Each setting includes: |
| Safety Category | Categories like Harassment, Hate Speech, Sexually Explicit, Dangerous Content. |
| Safety Threshold | Levels defining how strictly to block content, e.g., block low and above, medium and above, only high, or none. |
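For orientation, the sketch below shows how these properties could map onto a Gemini generateContent request body. It follows the public generativelanguage REST API field names (generationConfig, safetySettings, tools); the exact structure the node builds internally, and in particular the name of the Google Search tool entry, is an assumption.

```typescript
// Sketch: mapping the node's options onto a Gemini generateContent request body.
// Assumes the public generativelanguage REST API; the node's internal shape may differ.
interface NodeOptions {
  maxOutputTokens: number; // "Maximum Number of Tokens"
  temperature: number;     // "Sampling Temperature"
  topK: number;            // "Top K" (-1 disables)
  topP: number;            // "Top P"
}

function buildRequestBody(prompt: string, opts: NodeOptions, enableSearch: boolean) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: {
      maxOutputTokens: opts.maxOutputTokens, // default 2048
      temperature: opts.temperature,         // default 0.4
      topP: opts.topP,                       // default 1
      ...(opts.topK >= 0 ? { topK: opts.topK } : {}), // default 32; -1 omits the field
    },
    safetySettings: [
      // One entry per configured Safety Setting (category + threshold).
      { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
    ],
    // Tool entry name is an assumption; newer API versions expose a built-in search tool.
    ...(enableSearch ? { tools: [{ googleSearchRetrieval: {} }] } : {}),
  };
}
```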
Output
The node outputs JSON data under the json field containing the generated text completion from the Gemini model. The main content is taken from the text of the first content part of the first candidate returned by the API.
When Google Search is enabled, the model incorporates search results internally to enrich the response, but these results are not output separately.
No binary data output is produced by this node.
Example output structure (simplified):
{
  "json": {
    "text": "Generated response text from the Gemini model"
  }
}
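As a rough illustration of that extraction, the following sketch pulls the text out of a raw generateContent response and guards against the empty-candidate case mentioned under Troubleshooting; the response type is deliberately simplified.

```typescript
// Simplified shape of a generateContent response (only the fields used here).
interface GeminiResponse {
  candidates?: { content?: { parts?: { text?: string }[] } }[];
}

// Take the first candidate's first content part, mirroring the node's output behaviour.
function extractText(response: GeminiResponse): string {
  const text = response.candidates?.[0]?.content?.parts?.[0]?.text;
  if (text === undefined) {
    throw new Error("Gemini API returned no candidates or an empty content part");
  }
  return text; // surfaced as { json: { text } } in the node output
}
```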
Dependencies
- Requires an API key credential for authenticating with the Google Gemini generative language API.
- Supports optional HTTPS proxy configuration via the environment variables `HTTPS_PROXY` or `HTTP_PROXY`.
- Uses the Axios HTTP client for API requests.
- Optional debug logging relies on an internal logger utility.
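As an illustration of the proxy behaviour, here is a minimal sketch of an Axios client that honours `HTTPS_PROXY` / `HTTP_PROXY` via the https-proxy-agent package; the node's actual implementation may wire this up differently.

```typescript
import axios, { AxiosInstance } from "axios";
import { HttpsProxyAgent } from "https-proxy-agent"; // assumes https-proxy-agent v7+

// Build an Axios instance that routes requests through a proxy when
// HTTPS_PROXY or HTTP_PROXY is set, and connects directly otherwise.
function createHttpClient(): AxiosInstance {
  const proxyUrl = process.env.HTTPS_PROXY ?? process.env.HTTP_PROXY;
  return axios.create({
    baseURL: "https://generativelanguage.googleapis.com",
    // Disable Axios' built-in proxy handling and use an explicit agent instead.
    proxy: false,
    ...(proxyUrl ? { httpsAgent: new HttpsProxyAgent(proxyUrl) } : {}),
  });
}
```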
Troubleshooting
Common issues:
- Missing or invalid API key credential will cause authentication failures.
- Network connectivity problems or proxy misconfiguration may prevent API calls.
- Enabling Google Search without proper API access or quota might result in incomplete responses.
- Incorrect model name selection could lead to errors or unexpected behavior.
Error messages:
"Error calling Gemini REST API:"followed by details indicates failure during the API request. Check API key validity, network access, and model name correctness.- If no candidates or empty responses are returned, verify input message format and model availability.
Resolutions:
- Ensure the API key credential is correctly configured in n8n.
- Verify proxy environment variables if behind a corporate firewall.
- Use debug mode to get detailed logs for diagnosing issues.
- Confirm the selected model supports the requested features.
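When the resolutions above do not pinpoint the problem, a standalone call to the generativelanguage ListModels endpoint is a quick way to confirm the API key, the network path, and the valid model names; this diagnostic sketch is independent of the node itself.

```typescript
import axios from "axios";

// Diagnostic: list the models available to an API key.
// A 200 response confirms the key and network path; the returned names
// (e.g. "models/gemini-1.0-pro") are valid values for the Model property.
async function listGeminiModels(apiKey: string): Promise<string[]> {
  const res = await axios.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    { params: { key: apiKey } },
  );
  return res.data.models.map((m: { name: string }) => m.name);
}

listGeminiModels(process.env.GEMINI_API_KEY ?? "")
  .then((names) => console.log(names))
  .catch((err) => console.error("Error calling Gemini REST API:", err.message));
```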