Free LLM Router n8n Node
An n8n community node for the Free LLM Router Microservice.
This node provides a LangChain-compatible model interface that can be connected to the "Basic LLM Chain" and other LangChain nodes in n8n.
Features
- 🤖 LangChain Compatible - Works seamlessly with Basic LLM Chain and other LangChain nodes
- 🔄 Smart Model Selection - Automatic model selection with Smart Strategy
- 🎯 Priority Lists - Define model priority lists for fallback
- 🏷️ Advanced Filtering - Filter models by tags, type, context size, and success rate
- 🛠️ Function Calling - Full support for OpenAI-compatible tools/function calling
- 🖼️ Vision Support - Send images along with text for multimodal analysis
- 🛡️ Authentication - Supports None, Basic Auth, and Bearer Token authentication
- ⚙️ Full Control - Access to all OpenAI-compatible parameters
- 📡 Streaming Support - Real-time response streaming with LangChain callbacks
Installation
Community Nodes (Recommended)
- Go to Settings → Community Nodes in your n8n instance
- Click Install a community node
- Enter n8n-nodes-bozonx-free-llm-router-microservice
- Click Install
Manual Installation
cd ~/.n8n/nodes
npm install n8n-nodes-bozonx-free-llm-router-microservice
Restart your n8n instance after installation.
Prerequisites
You need a running instance of the Free LLM Router Microservice. See the main project README for setup instructions.
Quick start with Docker:
git clone https://github.com/bozonx/free-llm-router-microservice.git
cd free-llm-router-microservice
cp config.yaml.example config.yaml
cp .env.production.example .env.production
# Edit .env.production to add your API keys
docker compose -f docker/docker-compose.yml up -d
Setup
1. Create Credentials
- In n8n, go to Credentials → New
- Search for "Free LLM Router API"
- Configure:
- Base URL: Your microservice URL (e.g., http://free-llm-router-microservice:8080)
- Authentication: Choose None, Basic Auth, or Bearer Token
- Add credentials if using authentication
2. Add the Node to Your Workflow
- Create or open a workflow
- Add the Free LLM Router Model node
- Connect it to a Basic LLM Chain or other LangChain node
- Configure model selection and parameters
Usage
Model Selection Modes
Auto (Smart Strategy)
Let the router automatically select the best model based on:
- Model availability and health (Circuit Breaker)
- Priority and weight configuration
- Success rate and latency statistics
- Optional filters (tags, type, context size, etc.)
Specific Model
Choose a specific model by name:
- llama-3.3-70b - Any provider
- openrouter/deepseek-r1 - Specific provider
Priority List
Provide a comma-separated list of models to try in order:
openrouter/deepseek-r1, llama-3.3-70b, auto
- Models are tried sequentially
- Add auto at the end to fall back to Smart Strategy
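Conceptually, the priority list is just a comma-separated string of model names with an optional trailing auto sentinel. A minimal sketch of how such a list can be parsed (the helper name is hypothetical, not part of the node):

```python
def parse_priority_list(raw: str) -> list[str]:
    """Split a comma-separated priority list into model names.

    A trailing "auto" entry signals fallback to Smart Strategy.
    """
    return [name.strip() for name in raw.split(",") if name.strip()]

models = parse_priority_list("openrouter/deepseek-r1, llama-3.3-70b, auto")
# A trailing "auto" means: fall back to Smart Strategy after the list is exhausted
fallback_to_smart = models and models[-1] == "auto"
```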
Filter Options (Auto Mode)
When using Auto mode, you can filter models by:
- Tags: Filter by model capabilities (e.g., code, reasoning)
- Type: fast or reasoning
- Minimum Context Size: Required context window size
- Prefer Fast: Prioritize models with lowest latency
- Minimum Success Rate: Filter out unreliable models (0-1)
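To illustrate how these filters narrow the candidate pool, here is a rough sketch of the selection logic. The record fields and function are illustrative assumptions, not the microservice's actual schema:

```python
# Hypothetical model records; field names are illustrative only.
catalog = [
    {"name": "llama-3.3-70b", "tags": ["code"], "type": "fast",
     "context": 128_000, "success_rate": 0.98, "latency_ms": 400},
    {"name": "deepseek-r1", "tags": ["reasoning"], "type": "reasoning",
     "context": 64_000, "success_rate": 0.91, "latency_ms": 1200},
]

def filter_models(models, tags=None, type_=None, min_context=0,
                  min_success_rate=0.0, prefer_fast=False):
    """Apply Auto-mode style filters to a list of model records."""
    result = [
        m for m in models
        if (not tags or set(tags) <= set(m["tags"]))      # all requested tags present
        and (type_ is None or m["type"] == type_)          # fast / reasoning
        and m["context"] >= min_context                    # context window floor
        and m["success_rate"] >= min_success_rate          # reliability floor
    ]
    if prefer_fast:
        result.sort(key=lambda m: m["latency_ms"])         # lowest latency first
    return result
```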
Parameters
All standard OpenAI parameters are supported:
- Temperature (0-2): Controls randomness
- Maximum Tokens: Max tokens to generate
- Top P (0-1): Nucleus sampling parameter
- Frequency Penalty (-2 to 2): Reduces repetition
- Presence Penalty (-2 to 2): Encourages new topics
- Timeout: Request timeout in milliseconds
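Since the router is OpenAI-compatible, these parameters map onto a standard chat-completions payload. A sketch of what such a request body looks like (the exact set of fields your microservice accepts is an assumption; check its API docs):

```python
# OpenAI-compatible chat payload using the parameters listed above.
payload = {
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,        # 0-2: controls randomness
    "max_tokens": 1000,        # cap on generated tokens
    "top_p": 0.9,              # 0-1: nucleus sampling
    "frequency_penalty": 0.0,  # -2 to 2: reduces repetition
    "presence_penalty": 0.0,   # -2 to 2: encourages new topics
}
```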
Routing Options
Advanced routing parameters for fine-tuning request behavior:
- Max Model Switches: Maximum number of different models to try
- Max Same Model Retries: Maximum retries on the same model for temporary errors (429, network errors)
- Retry Delay: Delay between retries in milliseconds
- Fallback Model: Override the fallback model in the format provider/model (e.g., deepseek/deepseek-chat or openrouter/deepseek-r1). Applied only if fallback is enabled in the microservice config. The provider is the part before the first /; the model name itself may contain additional / characters.
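The interplay of model switches, same-model retries, and retry delay can be sketched as a nested loop. This is a conceptual illustration of the behavior described above, not the node's actual implementation; send is a caller-supplied function:

```python
import time

def call_with_routing(models, send, max_model_switches=3,
                      max_same_model_retries=2, retry_delay_s=0.0):
    """Try each model in order; retry temporary failures on the same model first.

    `send(model)` is caller-supplied and raises on failure (e.g. 429, network error).
    """
    errors = []
    for model in models[:max_model_switches]:
        for attempt in range(max_same_model_retries + 1):
            try:
                return send(model), errors
            except Exception as exc:
                errors.append(f"{model}: {exc}")
                if attempt < max_same_model_retries:
                    time.sleep(retry_delay_s)  # back off before retrying same model
    raise RuntimeError(f"all models failed: {errors}")
```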
Example Workflows
Simple Chat with Auto Selection
Add Free LLM Router Model node
- Model Selection: Auto
- Temperature: 0.7
- Maximum Tokens: 1000
Add Basic LLM Chain node
- Connect Free LLM Router to "model" input
- Set your prompt
Code Generation with Filtering
Add Free LLM Router Model node
- Model Selection: Auto
- Filter Options:
- Tags: code
- Type: fast
- Prefer Fast: Yes
Connect to Basic LLM Chain
Model Fallback Chain
Add Free LLM Router Model node
- Model Selection: Priority List
- Model Priority List:
openrouter/deepseek-r1, llama-3.3-70b, auto
Connect to Basic LLM Chain
This will try DeepSeek R1 first, then Llama 3.3, then fall back to Smart Strategy.
Function Calling with Tools
Add Free LLM Router Model node
- Model Selection: Auto or specific model
- Temperature: 0.7
Add Tool nodes (e.g., Calculator, Web Search)
Add Agent node
- Connect Free LLM Router to "model" input
- Connect Tools to "tools" input
- Set your prompt
The model will automatically use bindTools() to enable function calling with the connected tools.
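Under the hood, bound tools are serialized into OpenAI-style tool definitions. The following is roughly the shape of such a definition; the calculator tool itself is illustrative:

```python
# An OpenAI-style function-calling tool definition (illustrative example).
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {"type": "string"},
            },
            "required": ["expression"],
        },
    },
}]
```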
Vision (Image Analysis)
The node supports vision-capable models (like gemini-2.0-flash-exp) for multimodal analysis.
How to use:
- Add Free LLM Router Model node
- Configure it to use a vision-capable model (e.g., filter by tag vision or select a specific model)
- Connect it to an AI Agent node in n8n
- The AI Agent handles the user input (text + images) and passes it to the model
Note: Vision support works through the AI Agent interface in n8n. Ensure you select a model that supports vision (e.g., gemini-2.0-flash-exp).
Available vision-capable models:
- gemini-2.0-flash-exp (recommended, 1M-token context, vision tag)
- nemotron-nano-12b-v2-vl (128K-token context, vision tag)
Response Metadata
All responses include router metadata in the _router field:
{
"_router": {
"provider": "openrouter",
"model_name": "llama-3.3-70b",
"attempts": 1,
"fallback_used": false,
"errors": []
}
}
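A downstream node can read this metadata to log which provider actually served the request or to detect fallbacks. A minimal sketch in Python (in n8n itself you would do the equivalent in a Code node):

```python
import json

# Example response body matching the documented shape above.
body = json.loads('''{
  "_router": {
    "provider": "openrouter",
    "model_name": "llama-3.3-70b",
    "attempts": 1,
    "fallback_used": false,
    "errors": []
  }
}''')

router = body.get("_router", {})
if router.get("fallback_used"):
    print("request was served by the fallback model")
served_by = f'{router.get("provider")}/{router.get("model_name")}'
```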
Troubleshooting
Node not appearing in n8n
- Check that the installation was successful
- Restart your n8n instance
- Clear browser cache
Connection errors
- Verify the Base URL in credentials
- Check that the microservice is running: curl http://your-service:8080/api/v1/health
- Verify authentication settings match your microservice configuration
No models available
- Check microservice logs
- Verify the models.yaml configuration
- Check Circuit Breaker status via the Admin API: GET /api/v1/admin/state
Resources
License
Support
For issues and questions: