
Google Gemini - FCI

Interact with Google Gemini AI models using direct URL and API Key

Overview

This node lets you send conversational messages to a Google Gemini language model via its API. It is designed for scenarios that call for advanced AI text generation, such as chatbots, virtual assistants, or content generation tools.

Typical use cases include:

  • Sending prompts and receiving generated text responses.
  • Customizing the tone or behavior of the model by specifying roles in messages.
  • Adjusting generation parameters like temperature and token limits to control creativity and length.
  • Enabling code execution within the model's response if supported.

For example, you could use this node to build a customer support chatbot that sends user queries to the Gemini model and returns helpful answers, or to generate creative writing based on user prompts.
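
Conceptually, the node wraps a call to the Gemini generateContent endpoint using the Server URL, API Key, Model, and Messages properties described below. The TypeScript sketch that follows shows a roughly equivalent standalone request; the ask helper and the GEMINI_API_KEY environment variable are hypothetical, and the exact request the node builds may differ.

```typescript
// Minimal sketch of the kind of request this node sends (see assumptions above).
const baseUrl = "https://generativelanguage.googleapis.com"; // Server URL property
const model = "models/gemini-2.5-flash";                     // Model property
const apiKey = process.env.GEMINI_API_KEY ?? "";             // API Key credential (hypothetical env var)

// Hypothetical helper: send a single user prompt and return the first text reply.
async function ask(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/v1beta/${model}:generateContent`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": apiKey, // the key can also be passed as a ?key= query parameter
    },
    body: JSON.stringify({
      contents: [{ role: "user", parts: [{ text: prompt }] }],
    }),
  });
  if (!res.ok) throw new Error(`Gemini API error: ${res.status} ${await res.text()}`);
  const data = await res.json();
  // First candidate's first text part, similar to the node's simplified output
  return data?.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}

ask("Summarize this support ticket in one sentence.").then(console.log);
```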

Properties

  • Server URL: The base URL of the Google Gemini API endpoint (default: https://generativelanguage.googleapis.com).
  • API Key: The API key credential used to authenticate requests to the Google Gemini API.
  • Model: The identifier of the Gemini model used to generate responses. It can be selected from a list or provided directly by ID (e.g., models/gemini-2.5-flash).
  • Messages: A collection of messages forming the conversation history or prompt. Each message includes:
      - Prompt: The text content to send.
      - Role: The message role (User or Model), which influences how the model responds.
  • Simplify Output: Whether to return a simplified version of the API response instead of the full raw data. Defaults to true.
  • Output Content as JSON: Whether to attempt to parse the model's response content and return it as JSON. Defaults to false.
  • Options: Additional optional parameters that customize the model's behavior (a request-body sketch follows this list):
      - System Message: Context or instructions for the model.
      - Code Execution: Enables or disables code execution in the model's response.
      - Temperature: Controls randomness (0–2).
      - Top P: Nucleus sampling diversity (0–1).
      - Top K: Top-k sampling diversity (1–100).
      - Max Output Tokens: Maximum number of tokens to generate (1–8192).
      - Candidate Count: Number of response candidates (1–4).
      - Frequency Penalty: Penalizes tokens in proportion to how often they have already appeared (-2 to 2).
      - Presence Penalty: Penalizes tokens that have already appeared at least once (-2 to 2).
      - Max Tools Iterations: Maximum number of tool-call iterations (0–50).
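
For orientation, the options above correspond to fields in the body of a generateContent request. The sketch below uses the field names of the public Gemini REST API (contents, systemInstruction, generationConfig, tools); whether the node assembles its payload in exactly this shape is an assumption made for illustration.

```typescript
// Hypothetical mapping of the node's properties onto a generateContent request body.
interface GeminiRequestBody {
  contents: Array<{ role: "user" | "model"; parts: Array<{ text: string }> }>; // Messages
  systemInstruction?: { parts: Array<{ text: string }> };                      // System Message
  generationConfig?: {
    temperature?: number;      // Temperature (0–2)
    topP?: number;             // Top P (0–1)
    topK?: number;             // Top K (1–100)
    maxOutputTokens?: number;  // Max Output Tokens (1–8192)
    candidateCount?: number;   // Candidate Count (1–4)
    frequencyPenalty?: number; // Frequency Penalty (-2 to 2)
    presencePenalty?: number;  // Presence Penalty (-2 to 2)
  };
  tools?: Array<{ codeExecution: Record<string, never> }>;                     // Code Execution
}

const body: GeminiRequestBody = {
  contents: [
    { role: "user", parts: [{ text: "Write a haiku about monitoring dashboards." }] },
  ],
  systemInstruction: { parts: [{ text: "You are a concise technical writer." }] },
  generationConfig: { temperature: 0.7, maxOutputTokens: 256, candidateCount: 1 },
  tools: [{ codeExecution: {} }], // only when the Code Execution option is enabled
};
```

Options that are left unset would simply be omitted from the payload, so the API falls back to the model's defaults.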

Output

The node outputs an array of items, each containing a json field with the response from the Google Gemini model. Depending on the "Simplify Output" property, this can be either:

  • A simplified text response extracted from the model's output, suitable for direct use.
  • The full raw response object from the API, including metadata and multiple candidate completions if requested.

If "Output Content as JSON" is enabled, the node attempts to parse the response content as JSON and return it accordingly.

The node does not explicitly handle binary data output.
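
A minimal sketch of this post-processing is shown below, assuming the raw response follows the standard generateContent candidates structure; the simplify function name and its fallback behaviour are illustrative, not the node's exact implementation.

```typescript
// Illustrative post-processing, mirroring "Simplify Output" and "Output Content as JSON".
function simplify(raw: any, outputAsJson: boolean): unknown {
  // Raw responses contain an array of candidates, each with text parts
  const text: string = raw?.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
  if (!outputAsJson) return { text };
  try {
    return JSON.parse(text); // return the parsed object when the model produced JSON
  } catch {
    return { text };         // fall back to plain text when the content is not valid JSON
  }
}
```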

Dependencies

  • Requires access to the Google Gemini API endpoint.
  • Needs a valid API key credential for authentication.
  • Network connectivity to the specified server URL.
  • No additional external dependencies beyond standard HTTP request capabilities.

Troubleshooting

  • Authentication errors: Ensure the API key is valid and has permissions to access the Google Gemini API.
  • Invalid model ID: Verify the model identifier is correct and available in your account.
  • Rate limiting or quota exceeded: Check your Google Cloud usage limits and quotas; a generic retry sketch follows this list.
  • Malformed messages: Make sure the messages array contains properly structured prompts and roles.
  • JSON parsing errors: If enabling JSON output, ensure the model's response is valid JSON; otherwise, disable this option.
  • Timeouts or network issues: Confirm network connectivity and that the server URL is reachable.
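
For intermittent rate-limit or server errors, retrying with backoff in the surrounding workflow often resolves the issue. The sketch below is a generic retry pattern, not part of the node itself; the withRetry helper and the shape of the error object are assumptions.

```typescript
// Generic retry wrapper for transient failures (HTTP 429 / 5xx) with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.status ?? err?.response?.status; // assumed error shape
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || i === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 2 ** i * 1000)); // 1s, 2s, 4s, ...
    }
  }
  throw new Error("unreachable");
}
```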

Links and References

  • Google Gemini API documentation: https://ai.google.dev/gemini-api/docs