Package Information
Downloads: 154 weekly / 154 monthly
Latest Version: 0.2.1
Author: Alejandro Sanz
n8n-nodes-universal-llm-vision
A comprehensive n8n community node for analyzing images using multiple LLM vision providers (OpenRouter, Groq, Grok, OpenAI, Anthropic, Google Gemini).
Installation
Install via n8n's community node interface:
- Open n8n in your browser
- Go to Settings > Community Nodes
- Search for "n8n-nodes-universal-llm-vision" and install
Features
- ✅ Image analysis using multiple LLM providers
- ✅ Support for binary data, URLs, and base64 images
- ✅ Flexible prompts and model parameters
- ✅ Metadata inclusion (usage, tokens)
- ✅ Custom headers and advanced parameters
- ✅ Comprehensive testing included
- ✅ n8n Agents compatible
Usage
Basic Setup
- Add the "Universal LLM Vision" node to your n8n workflow
- Configure your API credentials for the chosen provider
- Select image source and analysis parameters

Supported Providers
- OpenAI
- Google Gemini
- Anthropic
- OpenRouter
- Groq
- Grok (X.AI)
- Custom (OpenAI-compatible API)
Supported Models
- GPT 5, GPT 4.1, GPT 4o, ... (OpenAI)
- Claude 4.5 Sonnet & Haiku, ... (Anthropic)
- Gemini 2.5 Flash Lite, Gemini 3.0 Flash, ... (Google)
- Gemma 3 27B, GLM 4.6V, Ministral 3, Nemotron VL, Qwen3 VL, ... (OpenRouter)
- Llama 4 Maverick (Groq)
- Grok 4.1 Fast (Grok/X.AI)
Available Operations
- Analyze Image: Analyze images with custom prompts
Configuration
Credentials
Set up your API credentials:
- Provider: Select LLM provider (OpenAI, Anthropic, etc.)
- API Key: Your provider's API key
- Base URL: Custom API endpoint (optional, defaults provided)
Custom Provider Configuration
To use a custom OpenAI-compatible LLM vision API:
- Select "Custom Provider" and provide your API Key
- Set the Base URL (e.g., https://your-api.com/v1)
Requirements: The API must expose a /chat/completions endpoint that accepts OpenAI-style requests, returns OpenAI-style responses, and uses Bearer authentication.
Example: Set the Base URL to https://my-vision-api.com/v1 and make sure the model behind it supports vision input.
Troubleshooting: For authentication errors, check the API key and endpoint; for format errors, verify that the API is OpenAI-compatible.
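To make the compatibility requirement concrete, here is a sketch of the OpenAI-style request such a custom provider is expected to accept. The Base URL, API key, and model name below are placeholders, and the `analyze` helper is illustrative, not the node's actual implementation:

```typescript
// OpenAI-style vision request shape for a custom provider.
// Hard requirements: POST to {baseUrl}/chat/completions with Bearer auth.
const baseUrl = "https://my-vision-api.com/v1"; // your custom Base URL
const apiKey = "sk-placeholder";                // your provider API key

const body = {
  model: "my-vision-model", // hypothetical model identifier
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image in detail" },
        { type: "image_url", image_url: { url: "https://example.com/cat.jpg" } },
      ],
    },
  ],
  max_tokens: 512,
};

// Illustrative request helper (not called here; requires a live endpoint).
async function analyze(): Promise<string | undefined> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const data: any = await res.json();
  return data.choices?.[0]?.message?.content; // OpenAI-style response shape
}
```

If your endpoint rejects this request shape, it is not OpenAI-compatible in the sense the node requires.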
Node Parameters
- Model: Model identifier (e.g., gpt-4o)
- Image Source: Binary Data, URL, or Base64
- Prompt: Analysis prompt
- Image Detail: Auto/Low/High resolution
- Model Parameters: Temperature, Max Tokens, Top P
- Advanced Options: System prompt, response format, custom headers
- Output: Property name and metadata inclusion
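Put together, a typical configuration covering these parameters might look like the sketch below. The property names are illustrative and may differ from the node's actual UI fields:

```typescript
// Hypothetical parameter set mirroring the options listed above.
const nodeParameters = {
  model: "gpt-4o",
  imageSource: "url" as "binaryData" | "url" | "base64",
  imageUrl: "https://example.com/invoice.png",
  prompt: "Extract all text from this image (OCR)",
  imageDetail: "auto" as "auto" | "low" | "high",
  modelParameters: { temperature: 0.2, maxTokens: 1024, topP: 1 },
  advancedOptions: {
    systemPrompt: "You are a meticulous OCR assistant.",
    responseFormat: "text",
    customHeaders: { "X-Request-Source": "n8n" },
  },
  output: { propertyName: "analysis", includeMetadata: true },
};
```

A low temperature, as here, is a reasonable default for extraction-style prompts where deterministic output matters more than creativity.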
Examples
See the example workflow for a complete setup.

Analyze Image from Binary Data
- Use a "Download" node to fetch an image
- Connect to "Universal LLM Vision" node
- Set Image Source to "Binary Data"
- Configure prompt: "Describe this image in detail"
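Under the hood, binary image data has to reach the provider as a base64 data URL. A sketch of that conversion (function name and inputs are illustrative; in n8n the MIME type would come from the binary item's metadata):

```typescript
// Convert raw image bytes to the data URL form vision APIs accept.
function toDataUrl(imageBytes: Buffer, mimeType: string): string {
  return `data:${mimeType};base64,${imageBytes.toString("base64")}`;
}

const fakePng = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // PNG magic bytes
const dataUrl = toDataUrl(fakePng, "image/png");
// dataUrl begins with "data:image/png;base64,"
```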
Analyze Image from URL
- Add "Universal LLM Vision" node
- Set Image Source to "URL"
- Provide image URL
- Use prompt: "Extract all text from this image (OCR)"
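When metadata inclusion is enabled, the node's output carries both the analysis text and usage figures. A sketch of shaping that output from an OpenAI-style response (the response values here are made-up example data, and the `analysis` property name is the configurable default assumed above):

```typescript
// Abridged OpenAI-style chat completion response (example values only).
const response = {
  choices: [{ message: { role: "assistant", content: "Invoice #1042, total $318.00" } }],
  usage: { prompt_tokens: 845, completion_tokens: 12, total_tokens: 857 },
  model: "gpt-4o",
};

// Shape the workflow item: analysis text under the configured property,
// plus usage metadata when metadata inclusion is enabled.
const item = {
  analysis: response.choices[0].message.content,
  metadata: { model: response.model, ...response.usage },
};
// item.metadata.total_tokens === 857
```

Downstream nodes can then read `$json.analysis` for the text and `$json.metadata` for token accounting.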
Development
This node was built using the n8n-community-node-starter boilerplate, which provides:
- Programmatic node architecture for complex logic
- Built-in CI/CD pipelines
- Comprehensive testing framework
- AI-assisted development support
Contributing
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Submit a pull request
License
MIT License - see LICENSE file.
Links
- n8n Documentation
- Community Nodes Guide
- n8n-community-node-starter - The boilerplate this node is based on