n8n-nodes-pi-openrouter-pro-v6
Professional OpenRouter integration node for n8n workflows with advanced AI model routing, multimodal support, caching, and structured outputs.
✨ v6.0.0 - Major Model Loading Fix
This version fixes the critical issue where only 12 fallback models were available in the dropdown. It now loads the full catalog of 100+ models directly from the OpenRouter API with reliable authentication.
Disclaimer
This project is not affiliated with, endorsed by, or in any way officially connected with OpenRouter. The author is not an official representative of OpenRouter. This is an independent community project created to integrate OpenRouter's API with n8n workflows.
Installation
Install the current package from the npm registry:
npm install n8n-nodes-pi-openrouter-pro-v6
After installation, restart your n8n instance to load the new node.
Version Management
This project uses a versioned package strategy to ensure clean releases and avoid npm registry clutter:
- Latest Version: n8n-nodes-pi-openrouter-pro-v6 (current)
- Previous Versions: Automatically deprecated when new versions are released
Old versions are deprecated with clear migration instructions pointing to the latest package. This keeps the npm registry clean while allowing users to easily identify and upgrade to the latest version.
Upgrading from Previous Versions
If you have an older version installed, uninstall it first and then install the latest:
# Uninstall old version (if applicable)
npm uninstall n8n-nodes-pi-openrouter-pro-v5
# Install latest version
npm install n8n-nodes-pi-openrouter-pro-v6
Features
- Unified OpenRouter Pro Chain: A single node now handles model discovery, routing, generation, and structured responses without requiring separate helper nodes.
- Built-In Model Selection: Choose any OpenRouter model directly inside the node, including provider routing, capability filters, and dashboard presets.
- Optional Advanced Settings: Expand advanced panels for sampling controls, provider ordering, reasoning tokens, and multimodal preferences only when you need them.
- Prompt Caching Controls: Toggle OpenRouter caching, set cache keys, and manage overrides inline to cut costs on repeated prompts.
- Structured Output Toggle: Enable JSON enforcement on demand with schema builders or quick JSON mode; disable it for free-form text without rewiring your workflow.
- Production-Ready Integrations: Streaming, tool/function calling, usage metering, and multimodal payload support remain first-class features.
Node Overview
OpenRouter Pro Chain
The OpenRouter Pro Chain node centralizes every OpenRouter capability behind one configuration panel:
- Model Selection: Pick a single model, create routing lists with priorities, or filter by capability (vision, tool use, reasoning) using built-in selectors.
- Prompt & Messages: Compose system, user, and tool messages with support for templating, attachments, and multimodal payloads (text, images, audio, video).
- Advanced Controls: Reveal optional sections for sampling parameters (temperature, top_p, penalties, seed, stop sequences), provider ordering/filters, web search, and reasoning budgets only when needed.
- Caching & Cost Management: Toggle OpenRouter's prompt caching with custom cache identifiers, cache miss fallbacks, and usage reporting to track spend.
- Structured Output: Turn on structured output mode to enforce JSON schemas or quick JSON responses; switch it off to emit rich text plus raw API payloads.
- Response Handling: Choose streaming vs. buffered responses, capture usage metrics, and access raw provider metadata for downstream analytics.
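To make the routing options above concrete, here is an illustrative request fragment. The field names (`models` as an ordered fallback list, `provider.order` for provider preference) follow OpenRouter's documented routing options; the specific model and provider names are placeholders, not a statement of what this node sends internally.

```typescript
// Illustrative OpenRouter request fragment: a primary model, an ordered
// fallback routing list, and a provider-ordering preference.
const routedRequest = {
  model: "openai/gpt-4o-mini",                                   // primary model
  models: ["openai/gpt-4o-mini", "anthropic/claude-3.5-sonnet"], // tried in order
  provider: { order: ["OpenAI", "Anthropic"] },                  // preferred providers first
  temperature: 0.7,                                              // sampling control
};
```

The node's selectors and capability filters ultimately resolve to a fragment of this shape before the request is sent.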
Credentials
OpenRouter API
Set up your OpenRouter credentials:
- API Key (Required): Your API key from the OpenRouter Dashboard
- Referer (Optional): Custom referer header for request branding
- Title (Optional): Custom title header for request identification
The credential validation tests connectivity by calling the OpenRouter /models endpoint.
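As a sketch of what that validation involves: OpenRouter authenticates with a Bearer token, and the optional Referer and Title fields map to the `HTTP-Referer` and `X-Title` headers. The helper below is a hypothetical illustration, not the node's actual implementation.

```typescript
// Hypothetical sketch of the headers built from the credential fields.
function buildAuthHeaders(apiKey: string, referer?: string, title?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiKey}`, // required Bearer token
  };
  if (referer) headers["HTTP-Referer"] = referer; // optional request branding
  if (title) headers["X-Title"] = title;          // optional request identification
  return headers;
}

// The credential test then issues roughly:
//   GET https://openrouter.ai/api/v1/models  with these headers
const exampleHeaders = buildAuthHeaders("sk-or-...", "https://example.com", "My Workflow");
```

A successful response from the /models endpoint confirms both the key and connectivity in one call.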
Usage Examples
Quick Chat Completion
- Add an OpenRouter Pro Chain node to your workflow.
- Select a model from the built-in model dropdown or choose a preset routing list.
- Enter your prompt or message array and execute the workflow. Leave structured output disabled to receive rich text plus raw response data.
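Under the hood, these steps boil down to a chat-completion request against OpenRouter's OpenAI-compatible endpoint. The builder below is an assumed minimal shape for illustration, not the node's exact internals.

```typescript
// Minimal sketch of the chat-completion body sent to
// https://openrouter.ai/api/v1/chat/completions
function buildChatBody(model: string, prompt: string): string {
  return JSON.stringify({
    model,
    messages: [{ role: "user", content: prompt }], // single user turn
  });
}
```

The node adds the Authorization header from the configured credential and POSTs this body; with structured output disabled, the raw response is returned alongside the rendered text.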
JSON Structured Output
- Open the Structured Output section within the node and enable JSON mode.
- Provide a JSON schema (or use quick JSON) to validate the reply shape.
- Run the node to receive schema-enforced responses while still accessing the underlying API payload for debugging.
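Enabling JSON mode corresponds to sending an OpenAI-style `response_format` with a JSON Schema, which OpenRouter forwards to schema-capable models. The schema below (a title plus bullet points) is purely illustrative.

```typescript
// Illustrative response_format payload for schema-enforced JSON output.
const responseFormat = {
  type: "json_schema",
  json_schema: {
    name: "summary",
    strict: true, // reject responses that do not match the schema
    schema: {
      type: "object",
      properties: {
        title: { type: "string" },
        bullet_points: { type: "array", items: { type: "string" } },
      },
      required: ["title", "bullet_points"],
      additionalProperties: false,
    },
  },
};
```

Quick JSON mode is the lighter variant of the same idea: it requests valid JSON without pinning a specific schema.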
Caching Reusable Prompts
- Enable Prompt Caching in the node settings.
- Provide a cache key or let the node auto-generate one from inputs.
- Rerun the workflow to reuse cached responses and inspect cache hits via the usage metadata.
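For context, OpenRouter's prompt caching works (for providers that support it, such as Anthropic) by marking message content blocks with `cache_control` breakpoints; the node's cache-key setting is its own convenience layer on top of this mechanism. The message below is an assumed illustration of that underlying shape.

```typescript
// Illustrative system message with a cacheable content block.
const cachedSystemMessage = {
  role: "system",
  content: [
    {
      type: "text",
      text: "LONG_REUSABLE_CONTEXT_HERE",       // e.g. a large document reused across runs
      cache_control: { type: "ephemeral" },      // marks this block as cacheable
    },
  ],
};
```

On subsequent runs the cached prefix is billed at a reduced rate, which the usage metadata surfaces as cache hits.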
Multimodal Conversations
- Add text, image, audio, or video attachments to the message composer.
- Choose a compatible multimodal model using the capability filters.
- Execute the workflow to receive responses that incorporate every modality.
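In OpenRouter's OpenAI-compatible format, attachments travel as typed content parts within a single message. The example below shows a text-plus-image user turn; the URL is a placeholder.

```typescript
// Illustrative multimodal user message: text and an image in one turn.
const multimodalMessage = {
  role: "user",
  content: [
    { type: "text", text: "What is in this picture?" },
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
  ],
};
```

Audio and video follow the same pattern with their own content-part types, subject to what the selected model supports, which is why the capability filters matter here.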
Development
# Install dependencies
npm install
# Build the project
npm run build
# Development mode with watch
npm run dev
# or
npm run build:watch
# Run linter
npm run lint
# Fix linting issues
npm run lint:fix
# Quality validation
npm run quality:check
# Full quality check
npm run quality:full
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
License
MIT License - see LICENSE file for details.
Support
For issues related to this n8n integration, please open an issue on GitHub. For OpenRouter API-specific questions, refer to the official OpenRouter documentation.