Overview
This node integrates with the Google Gemini CLI to perform AI-powered coding and project management tasks using conversational prompts. It supports starting new conversations, continuing existing ones, generating detailed execution plans for tasks, editing and approving those plans, executing approved plans step-by-step, and listing stored plans.
Typical use cases include:
- Automating code generation or modification based on natural language instructions.
- Creating structured execution plans for complex development tasks.
- Managing and executing multi-step workflows programmatically.
- Integrating AI-assisted project management into n8n workflows.
For example, a user can provide a prompt like "Create a Python function to parse CSV files," and the node will interact with Gemini CLI to generate code or an execution plan accordingly.
Properties
| Name | Meaning |
|---|---|
| Prompt | The instruction or query sent to Gemini CLI. Required for operations query, continue, and generate_plan. Supports expressions to dynamically use data from previous nodes. |
| Model | Selects the Gemini model to use: - Gemini 2.5 Pro: Most capable with 1M token context window. - Gemini 2.5 Flash: Faster, more efficient for quick responses. |
| Max Turns | Maximum number of conversation turns (back-and-forth exchanges) allowed in the session. |
| Timeout | Maximum time in seconds to wait for Gemini CLI completion before aborting. |
| Project Path | Directory path where Gemini CLI runs, allowing access to project files and commands. If empty, uses current working directory. |
| Output Format | How to format the output data: - Messages: Raw array of all exchanged messages. - Plan: Execution plan structure (for plan-related operations). - Plan Status: Progress and status of plan execution. - Structured: Object with messages, summary, result, and metrics. - Text: Only the final result text. |
| Additional Options | Collection of optional settings: - API Key: Gemini API key if not set via environment variable. - Use Vertex AI: Whether to use Vertex AI instead of Gemini API. - System Prompt: Extra context/instructions for Gemini CLI. - Debug Mode: Enable debug logging. |
| Tools Configuration | Configure built-in tools and integrations: - Enable Built-in Tools: File system read/write, shell commands, web fetch, web search. - Security Mode: Safe (confirmations required), Auto-Approve (YOLO mode), Sandbox. - Enable Checkpointing: Save conversation state. |
| MCP Servers | Configure external MCP servers for extended functionality: - Multiple server entries with command or HTTP connection. - Environment variables, included/excluded tools, trust level, working directory, timeout, etc. |
Output
The node outputs JSON data whose structure depends on the operation and selected output format:
For conversational operations (`query`, `continue`):
- Text: `{ result: string, success: boolean, duration_ms: number, error?: string }`
- Messages: `{ messages: Array<{ type: string, content: string, timestamp: number }>, messageCount: number }`
- Structured: `{ "messages": [...], "summary": { "userMessageCount": number, "assistantMessageCount": number, "errorMessageCount": number, "hasResult": boolean }, "result": string, "metrics": { "duration_ms": number, "num_turns": number }, "configuration": { "model": string, "toolsEnabled": string[], "securityMode": string, "checkpointing": boolean, "mcpServersCount": number, "mcpServerNames": string[], "projectPath": string }, "success": boolean, "error"?: string }`
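For downstream processing, the Structured format's summary block can be modeled and rebuilt from the message list. A minimal TypeScript sketch (the type names and the message `type` values such as "user"/"assistant"/"error" are illustrative assumptions, not part of the node's published API):

```typescript
// Hedged sketch: types mirroring the documented Structured output shape.
interface GeminiMessage {
  type: string; // assumed values, e.g. "user", "assistant", "error"
  content: string;
  timestamp: number;
}

interface StructuredSummary {
  userMessageCount: number;
  assistantMessageCount: number;
  errorMessageCount: number;
  hasResult: boolean;
}

// Rebuild the summary block from a message list, as a consumer might
// when validating the node's Structured output.
function summarize(messages: GeminiMessage[], result: string): StructuredSummary {
  const count = (t: string) => messages.filter((m) => m.type === t).length;
  return {
    userMessageCount: count("user"),
    assistantMessageCount: count("assistant"),
    errorMessageCount: count("error"),
    hasResult: result.length > 0,
  };
}
```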
For plan-related operations (`generate_plan`, `edit_plan`, `approve_plan`, `list_plans`, `execute_plan`):
- Plan: Detailed plan object including id, title, description, steps, status, and timestamps.
- Plan Status: Execution progress with counts of completed/failed steps and final status.
- Structured: Includes plan details plus execution results and summary.
- Text: Typically not used for plans.
When executing plans, the node returns the updated plan with step statuses and results of each step.
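A downstream workflow step can derive the Plan Status counts from the returned steps. A sketch under assumed field names (the `status` values "pending"/"completed"/"failed" are illustrative and not confirmed by the node's schema):

```typescript
// Hedged sketch: derive execution progress from a returned plan.
// Field names and status values are assumptions for illustration.
interface PlanStep {
  title: string;
  status: "pending" | "completed" | "failed";
}

interface PlanProgress {
  completed: number;
  failed: number;
  total: number;
  finalStatus: "completed" | "failed" | "in_progress";
}

function planProgress(steps: PlanStep[]): PlanProgress {
  const completed = steps.filter((s) => s.status === "completed").length;
  const failed = steps.filter((s) => s.status === "failed").length;
  const finalStatus =
    failed > 0 ? "failed" : completed === steps.length ? "completed" : "in_progress";
  return { completed, failed, total: steps.length, finalStatus };
}
```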
The node does not output binary data.
Dependencies
- Requires the Google Gemini CLI installed and accessible in the system PATH (the `gemini` command).
- Optionally requires a valid Gemini API key, either via environment variable or the API Key input property.
- Supports integration with Vertex AI as an alternative backend.
- May require configuration of external MCP servers for extended capabilities.
- The node uses standard Node.js modules: child_process, fs, and path.
- Permissions to read/write in the specified project directory if file system tools are enabled.
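Since the node shells out via child_process, the invocation pattern can be sketched as a synchronous spawn with a hard timeout. The wrapper below is illustrative; the actual flags passed to `gemini` are not shown and would need to match the installed CLI's documentation:

```typescript
// Hedged sketch of the CLI invocation pattern: spawn a command
// synchronously with a hard timeout, as the Timeout property implies.
import { spawnSync } from "node:child_process";

interface CliResult {
  success: boolean;
  stdout: string;
  stderr: string;
}

function runCli(command: string, args: string[], timeoutMs: number, cwd?: string): CliResult {
  const res = spawnSync(command, args, {
    cwd,                // maps to the node's Project Path property
    timeout: timeoutMs, // milliseconds here; the node's Timeout property is in seconds
    encoding: "utf8",
  });
  return {
    success: res.status === 0 && !res.error,
    stdout: res.stdout ?? "",
    stderr: res.stderr ?? "",
  };
}
```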
Troubleshooting
- Gemini CLI Not Installed or Inaccessible: Error indicating Gemini CLI is missing or not found in PATH. Solution: install Gemini CLI globally (`npm install -g @google/gemini-cli`) and ensure it is executable.
- Invalid Project Path: Errors if the provided project path does not exist, is not a directory, or lacks read/write permissions. Verify the path and permissions.
- Missing Required Parameters: Prompt is mandatory for conversational operations; Plan ID is mandatory for plan operations. Ensure these inputs are provided.
- Timeouts: Gemini CLI may time out if the operation takes longer than the configured timeout. Increase timeout or optimize prompt.
- Plan Not Found: When editing, approving, or executing a plan, an error is thrown if the plan ID does not exist in the `.gemini/plans` directory.
- Plan Not Approved: Attempting to execute a plan that is not marked as approved will fail.
- JSON Parsing Errors: If Gemini CLI response does not contain valid JSON for plans, errors occur. This usually indicates unexpected CLI output or malformed response.
- Debug Mode: Enable debug mode to get detailed logs for troubleshooting.
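When the CLI interleaves log lines with a JSON payload, a defensive parse can guard against the JSON errors above. A minimal sketch using brace-depth scanning (it does not handle braces inside string literals, which is acceptable for a quick guard but not a full parser):

```typescript
// Hedged sketch: extract the first top-level JSON object from mixed CLI output.
function extractJson(output: string): unknown | null {
  const start = output.indexOf("{");
  if (start === -1) return null;
  // Walk the string tracking brace depth to find the matching close.
  let depth = 0;
  for (let i = start; i < output.length; i++) {
    const ch = output[i];
    if (ch === "{") depth++;
    else if (ch === "}") {
      depth--;
      if (depth === 0) {
        try {
          return JSON.parse(output.slice(start, i + 1));
        } catch {
          return null; // Candidate span was not valid JSON.
        }
      }
    }
  }
  return null; // Unbalanced braces: no complete JSON object found.
}
```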
Links and References
- Google Gemini CLI GitHub Repository (hypothetical link)
- n8n Documentation on Custom Nodes
- Node.js child_process.spawn documentation
- Model Context Protocol (MCP) (for MCP server integration)
This summary covers the operations of the Gemini CLI node, describing its input properties, output structure, dependencies, and common issues.