Overview
The Gemini CLI node integrates with the Google Gemini CLI tool to perform AI-powered coding and project management tasks through conversational interactions. It supports operations such as continuing a conversation, generating execution plans, editing plans, approving and executing plans, and querying the Gemini AI models.
For the Continue operation specifically, this node allows users to continue a previous conversation or interaction with the Gemini CLI, sending follow-up prompts and receiving responses that build on prior context. This is useful in scenarios where iterative refinement or multi-turn dialogues are needed, such as:
- Continuing a code generation or debugging session.
- Extending an AI-assisted project planning discussion.
- Following up on previous instructions or queries without restarting the context.
Practical example: You started a conversation asking Gemini CLI to generate a Python function for CSV parsing. Using the Continue operation, you can send additional instructions like "Add error handling for missing files" and receive updated code or suggestions.
Properties
| Name | Meaning |
|---|---|
| Prompt | The prompt or instruction to send to Gemini CLI to continue the conversation. Supports expressions to dynamically use data from previous nodes. |
| Model | The Gemini model to use for the operation. Options: • Gemini 2.5 Pro (most capable, large context window) • Gemini 2.5 Flash (fast and efficient for quick responses) |
| Max Turns | Maximum number of conversation turns (back-and-forth exchanges) allowed in the session. |
| Timeout | Maximum time in seconds to wait for the Gemini CLI response before aborting the request. |
| Project Path | Directory path where Gemini CLI should run. If empty, uses the current working directory. This allows Gemini CLI to access project files and execute commands in the specified location. |
| Output Format | How to format the output data returned by the node. Options: • Messages: raw array of all exchanged messages • Plan: execution plan structure (not relevant for Continue) • Plan Status: plan progress/status • Structured: object with messages, summary, result, metrics • Text: only the final result text |
| Additional Options | Collection of optional settings: • API Key: Gemini API key if not set via environment variable • Use Vertex AI: whether to use Vertex AI instead of Gemini API • System Prompt: extra context/instructions • Debug Mode: enable debug logging |
| Tools Configuration | Configure built-in tools and integrations enabled for Gemini CLI, including: • File system read/write • Shell command execution • Web fetch • Web search Security mode and checkpointing options also available. |
| MCP Servers | Configure external MCP servers for extended functionality, including connection type, commands, environment variables, tool inclusion/exclusion, headers, and trust level. |
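Because the Prompt property supports expressions, the follow-up instruction can be pulled from the incoming item. For example (the `followUp` field name below is hypothetical and depends on your workflow's data):

```
{{ $json.followUp }}
```

A static prompt and expression parts can also be mixed, e.g. `Apply this change: {{ $json.followUp }}`.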
Output
The node outputs JSON data structured according to the selected output format:
- Messages: An array of message objects representing the full conversation exchange, each with a type (user, assistant, or error), content, and a timestamp.
- Structured: A detailed object containing:
  - messages: the full message array
  - summary: counts of user, assistant, and error messages, plus a flag indicating whether a result is present
  - result: the final textual result from Gemini CLI
  - metrics: performance data such as duration and number of turns
  - configuration: details about the model, enabled tools, security mode, checkpointing, MCP servers, and project path
  - success: boolean indicating success status
  - error: error message, if any
- Text: Only the final result text string.
- Other formats like Plan or Plan Status are not applicable for the Continue operation.
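As an illustration, a Structured output might look roughly like the following. All field values here are invented for the example, and the exact key names and shape may differ from what the node actually emits:

```json
{
  "messages": [
    { "type": "user", "content": "Add error handling for missing files", "timestamp": "2024-01-01T12:00:00Z" },
    { "type": "assistant", "content": "Updated the parser to catch FileNotFoundError.", "timestamp": "2024-01-01T12:00:05Z" }
  ],
  "summary": { "userMessages": 1, "assistantMessages": 1, "errorMessages": 0, "hasResult": true },
  "result": "Updated the parser to catch FileNotFoundError.",
  "metrics": { "durationMs": 5000, "turns": 1 },
  "configuration": { "model": "gemini-2.5-pro", "projectPath": "/my/project" },
  "success": true,
  "error": null
}
```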
No binary data output is produced by this node.
Dependencies
- Requires the Google Gemini CLI installed and accessible in the system PATH.
- Optionally requires a valid Gemini API key, supplied either via an environment variable or the API Key property.
- May optionally use Vertex AI if configured.
- Node depends on local file system access if a project path is specified.
- External MCP servers can be configured for extended capabilities.
- Proper permissions and network access may be required depending on enabled tools (e.g., web fetch, shell commands).
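A quick way to verify the first two dependencies from a shell before running the workflow (the `GEMINI_API_KEY` variable name is an assumption; confirm the expected variable in the Gemini CLI documentation):

```
# Check that the Gemini CLI is installed and on PATH
command -v gemini

# If missing, install it globally
npm install -g @google/gemini-cli

# Provide the API key via the environment (variable name assumed)
export GEMINI_API_KEY="your-key-here"
```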
Troubleshooting
- Gemini CLI Not Installed or Inaccessible: The node checks for Gemini CLI availability at runtime. If it is not found, the node throws an error instructing you to install it globally (`npm install -g @google/gemini-cli`).
- Invalid or Missing Prompt: For the Continue operation, the prompt must be non-empty; otherwise, an error is thrown.
- Invalid Project Path: If a project path is provided but does not exist, is not a directory, or lacks read/write permissions, the node will error with a descriptive message.
- Timeouts: If Gemini CLI does not respond within the specified timeout, the process is aborted and an error is raised.
- API Key Issues: If no API key is provided and not set in environment variables, authentication may fail.
- Debug Mode: Enabling debug mode provides verbose logs to help diagnose issues.
- Output Parsing Errors: If Gemini CLI returns malformed or unexpected output, the node may throw errors related to JSON parsing.
- Tool Security Settings: Misconfiguration of tool security modes (safe, auto-approve, sandbox) can cause unexpected behavior or require confirmations.
Links and References
- Google Gemini CLI GitHub Repository (for installation and usage)
- n8n Documentation on Custom Nodes
- Node.js Child Process Module (used internally for spawning Gemini CLI)
- Model Context Protocol (MCP) (for MCP server integration)
This summary covers the logic and usage of the Gemini CLI node's Continue operation based on static analysis of the source code and provided properties.