Overview
The Gemini CLI node integrates with the Google Gemini CLI tool to perform AI-powered coding and project management tasks through conversational interactions. It supports operations such as continuing a conversation, generating execution plans, editing plans, approving and executing plans, and querying the Gemini AI model.
For the Continue operation specifically, this node allows users to continue an existing conversation with the Gemini CLI by sending follow-up prompts or instructions. This is useful in scenarios where you want to maintain context across multiple exchanges, such as iteratively refining code, debugging, or expanding on previous AI-generated outputs.
Practical examples:
- Continuing a coding assistant session to add new features or fix bugs based on prior conversation history.
- Following up on a previously generated execution plan to clarify or extend steps.
- Maintaining a dialogue with the AI for complex project tasks that require multiple back-and-forth interactions.
Properties
| Name | Meaning |
|---|---|
| Prompt | The text prompt or instruction to send to Gemini CLI to continue the conversation. Supports expressions to dynamically use data from previous nodes. |
| Model | The Gemini AI model to use: Gemini 2.5 Pro (most capable, large context window) or Gemini 2.5 Flash (faster, more efficient). |
| Max Turns | Maximum number of conversation turns (back-and-forth exchanges) allowed in the session. |
| Timeout | Maximum time in seconds to wait for the Gemini CLI response before aborting the request. |
| Project Path | Directory path where Gemini CLI should run, allowing access to project files and commands. If empty, uses current working directory. |
| Output Format | How to format the output data returned by the node: Messages (raw array of all exchanged messages), Structured (object with messages, summary, result, and metrics), or Text (only the final result text). |
| Additional Options | Collection of optional settings: an API Key for Gemini authentication (if not set via environment variable), a toggle to use Vertex AI instead of the Gemini API, a System Prompt for additional context, and Debug Mode. |
| Tools Configuration | Configure built-in tools and integrations enabled for Gemini CLI, including file system access, shell commands, web fetch, and web search. Also includes security mode and checkpointing options. |
| MCP Servers | Configure external MCP servers for extended functionality, specifying connection type, commands, environment variables, trust level, and included/excluded tools. |
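As a concrete illustration, a Continue call might be configured with values like the following. This is a hypothetical sketch mirroring the Properties table above; the key names (`operation`, `maxTurns`, `projectPath`, etc.) are illustrative assumptions, not the node's verified internal parameter names.

```javascript
// Hypothetical parameter set for the Continue operation. Key names are
// illustrative assumptions, not the node's actual internal parameter keys.
const continueParams = {
  operation: 'continue',
  // n8n expressions are also supported here, e.g. '={{ $json.followUp }}'
  prompt: 'Refactor the validation you added into a reusable helper.',
  model: 'gemini-2.5-flash',
  maxTurns: 10,
  timeout: 300,                  // seconds to wait before aborting
  projectPath: '/workspace/app', // empty string = current working directory
  outputFormat: 'structured',
  additionalOptions: {
    debugMode: false,
  },
};

console.log(continueParams.outputFormat); // structured
```

In a workflow, the Prompt field would typically carry an expression that pulls follow-up text from a previous node's output rather than a hard-coded string.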
Output
The output JSON structure depends on the selected Output Format:
- Messages: Returns an array of message objects representing the full conversation exchange, each with type (user, assistant, error), content, and timestamp.
- Structured (default): Returns an object containing:
  - messages: the full message array
  - summary: counts of user, assistant, and error messages, plus a flag indicating whether a result exists
  - result: the final textual result from Gemini CLI
  - metrics: timing and turn-count information
  - configuration: details about the model, enabled tools, security mode, checkpointing, MCP servers, and project path
  - success: boolean indicating whether the operation succeeded
  - error: the error message, if any
- Text: Returns only the final result text string.
- Other formats like "plan" or "plan_status" are not relevant for the Continue operation.
No binary data output is produced by this node.
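The Structured format might look like the following sample. The top-level field names follow the description above; the nested key names and all values are hypothetical sample data, not real Gemini CLI output.

```javascript
// Illustrative shape of the Structured output format. Top-level fields
// follow the documentation above; nested key names and values are
// hypothetical sample data, not captured from a real run.
const structuredOutput = {
  messages: [
    { type: 'user', content: 'Add input validation to the form', timestamp: '2024-05-01T10:00:00Z' },
    { type: 'assistant', content: 'Added validation in form.js', timestamp: '2024-05-01T10:00:07Z' },
  ],
  summary: { user: 1, assistant: 1, errors: 0, hasResult: true },
  result: 'Added validation in form.js',
  metrics: { durationMs: 7000, turns: 1 },
  configuration: {
    model: 'gemini-2.5-pro',
    toolsEnabled: ['fileSystem', 'shell'],
    projectPath: '/workspace/app',
  },
  success: true,
  error: null,
};

console.log(structuredOutput.success); // true
```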
Dependencies
- Requires the Gemini CLI to be installed and accessible in the system PATH (the `gemini` command).
- Optionally requires a valid API key for Gemini API authentication, provided either in the node parameters or via an environment variable.
- May optionally use Vertex AI if enabled.
- Node interacts with local filesystem and can execute shell commands depending on tool configuration.
- MCP servers can be configured for extended protocol support.
- Node requires appropriate permissions to read/write in the specified project directory if used.
Troubleshooting
- Gemini CLI not found or not installed: The node checks for the presence of the `gemini` command. If it is missing, install it globally with npm (`npm install -g @google/gemini-cli`) and ensure it is on the system PATH.
- Invalid or inaccessible project path: If a project path is specified, it must exist, be a directory, and be readable and writable; otherwise the node throws an error.
- Empty prompt error: For the Continue operation, the prompt parameter is required and cannot be empty.
- Timeouts: If Gemini CLI does not respond within the specified timeout, the node aborts with a timeout error. Increase the timeout if needed.
- Authentication errors: If the Gemini CLI is installed but not properly authenticated (missing or invalid API key), the node will report authentication issues. Provide a valid API key.
- Debug mode: Enable debug mode to get detailed logs in the n8n execution console for troubleshooting.
- Plan-related errors: Not applicable to the Continue operation; these are relevant only to operations that work with plans.
Links and References
- Google Gemini CLI GitHub Repository (for installation and usage)
- n8n Documentation on Creating Custom Nodes
- Node.js Child Process Module (used internally for spawning Gemini CLI)
- Google Vertex AI (optional alternative AI backend)
This summary covers the logic and usage of the Gemini CLI node's Continue operation based on static analysis of the source code and provided property definitions.