Overview
This node integrates a Google Gemini Chat Model for generating AI completions through Langchain's Google Generative AI interface. It lets users generate text completions with the selected Gemini model variant while controlling output randomness and token limits.
Common scenarios include:
- Generating conversational AI responses.
- Creating content or text completions with configurable creativity.
- Experimenting with different Gemini model variants for varied output styles.
Practical example:
- A chatbot workflow that uses this node to generate replies based on user input, adjusting temperature to control response creativity.
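For context, the chatbot example above boils down to something like the following sketch, which uses the `ChatGoogleGenerativeAI` class from `@langchain/google-genai` directly; the model name, temperature, and user message are illustrative, and the API key is assumed to be available via the environment or an explicit `apiKey` option.

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Configure the chat model roughly the way this node does.
const chatModel = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-flash",
  temperature: 0.7, // higher values give more creative replies
});

// Generate a reply to a user message, as a chatbot workflow would.
const reply = await chatModel.invoke("How do I reset my password?");
console.log(reply.content);
```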
Properties
| Name | Meaning |
|---|---|
| Model | The Gemini model variant used for generation. Options: "gemini-pro", "gemini-1.5-flash", "gemini-1.5-pro" |
| Temperature | Controls randomness of output; higher values produce more random results (0 to 1). |
| Max Output Tokens | Maximum number of tokens to generate in the completion (minimum 1). |
| Top K | Number of highest-probability tokens considered at each step during generation (minimum 1). |
| Top P | Cumulative probability threshold for token selection at each step (0 to 1). |
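Assuming the node passes these properties straight through to Langchain's constructor, the mapping looks roughly like this (all values below are illustrative, not node defaults):

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",  // Model (older package versions use `modelName`)
  temperature: 0.4,         // Temperature (0 to 1)
  maxOutputTokens: 1024,    // Max Output Tokens (minimum 1)
  topK: 40,                 // Top K (minimum 1)
  topP: 0.9,                // Top P (0 to 1)
});
```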
Output
The node outputs a JSON object containing a `response` field, which holds an instance of the Gemini Chat Model configured with the specified parameters. This object can be used downstream in workflows to generate text completions.
No binary data output is produced by this node.
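For illustration, a downstream consumer could compose the emitted model instance into a Langchain runnable chain; the prompt, parser, and input text below are hypothetical and stand in for whatever the workflow supplies.

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Stand-in for the configured instance this node emits in its response field.
const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

// Compose a simple prompt -> model -> string chain.
const prompt = ChatPromptTemplate.fromTemplate(
  "Summarize the following text in one sentence:\n\n{text}"
);
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const summary = await chain.invoke({ text: "n8n is a workflow automation tool ..." });
console.log(summary);
```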
Dependencies
- Requires an API key credential for accessing the Google Gemini generative AI service.
- The node depends on the `@langchain/google-genai` package for interfacing with the Gemini models.
- Proper configuration of the API key credential within n8n is necessary for operation.
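Outside of n8n, the equivalent of the credential configuration is to supply the key explicitly when constructing the model; the `GOOGLE_API_KEY` environment variable name used here is an assumption.

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Read the key from the environment (n8n instead injects it from the credential).
const apiKey = process.env.GOOGLE_API_KEY;
if (!apiKey) {
  throw new Error("Google Gemini API key is not configured");
}

const model = new ChatGoogleGenerativeAI({ model: "gemini-pro", apiKey });
```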
Troubleshooting
- Missing or invalid API key: The node will fail if the required API key credential is not set or invalid. Ensure the API key is correctly configured.
- Invalid parameter values: Providing out-of-range values for properties such as Temperature, Max Output Tokens, Top K, or Top P may cause errors. Use values within the specified ranges.
- Model unavailability: Selecting a model variant not supported or temporarily unavailable could result in errors. Verify model names are correct and supported.
- Network issues: Connectivity problems to the external API endpoint will cause request failures. Check network access and firewall settings.
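As a rough sketch, these failure modes surface as thrown exceptions when the model is invoked; the message checks below are heuristic, since the exact wording comes from the Google API and the Langchain wrapper.

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash" });

try {
  const result = await model.invoke("Hello");
  console.log(result.content);
} catch (error) {
  const message = error instanceof Error ? error.message : String(error);
  if (/api key/i.test(message)) {
    console.error("Check that the Gemini API key credential is set and valid.");
  } else if (/model/i.test(message)) {
    console.error("Check that the selected model variant is supported and available.");
  } else {
    console.error("Request failed (possible network or quota issue):", message);
  }
}
```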