universal-llm-vision

n8n node for Universal LLM Vision

Package Information

Downloads: 154 weekly / 154 monthly
Latest Version: 0.2.1
Author: Alejandro Sanz

Documentation

n8n-nodes-universal-llm-vision

A comprehensive n8n community node for analyzing images using multiple LLM vision providers (OpenRouter, Groq, Grok, OpenAI, Anthropic, Google Gemini).

Installation

Install via n8n's community node interface:

  1. Open n8n in your browser
  2. Go to Settings > Community Nodes
  3. Search for "n8n-nodes-universal-llm-vision" and install

Features

  • ✅ Image analysis using multiple LLM providers
  • ✅ Support for binary data, URLs, and base64 images
  • ✅ Flexible prompts and model parameters
  • ✅ Metadata inclusion (usage and token counts; see the sketch after this list)
  • ✅ Custom headers and advanced parameters
  • ✅ Comprehensive test suite included
  • ✅ Compatible with n8n Agents
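
When metadata inclusion is enabled, the node's output presumably carries the provider's usage block alongside the analysis text. A minimal sketch of that shape, assuming the OpenAI-style usage object (the actual property names are set by the node's Output options and are not confirmed here):

```typescript
// Hedged sketch of an output item with metadata enabled.
// `analysis` is a placeholder: the output property name is configurable in the node.
interface VisionOutput {
  analysis: string;          // the model's answer to the prompt
  model?: string;            // model that produced the result
  usage?: {
    prompt_tokens: number;     // tokens consumed by the prompt and image
    completion_tokens: number; // tokens in the generated answer
    total_tokens: number;      // sum of the two
  };
}
```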

Usage

Basic Setup

  1. Add the "Universal LLM Vision" node to your n8n workflow
  2. Configure your API credentials for the chosen provider
  3. Select image source and analysis parameters

[Screenshot: sample usage in a workflow]

Supported Providers

  • OpenAI
  • Google Gemini
  • Anthropic
  • OpenRouter
  • Groq
  • Grok (X.AI)
  • Custom (OpenAI-compatible API)

Supported Models

  • GPT-5, GPT-4.1, GPT-4o, ... (OpenAI)
  • Claude 4.5 Sonnet & Haiku, ... (Anthropic)
  • Gemini 2.5 Flash Lite, Gemini 3.0 Flash, ... (Google)
  • Gemma 3 27B, GLM 4.6V, Ministral 3, Nemotron VL, Qwen3 VL, ... (OpenRouter)
  • Llama 4 Maverick (Groq)
  • Grok 4.1 Fast (Grok/X.AI)

Available Operations

  • Analyze Image: Analyze images with custom prompts

Configuration

Credentials

Set up your API credentials:

  • Provider: Select LLM provider (OpenAI, Anthropic, etc.)
  • API Key: Your provider's API key
  • Base URL: Custom API endpoint (optional, defaults provided)

Custom Provider Configuration

To use a custom OpenAI-compatible LLM vision API:

  • Select "Custom Provider" and provide your API Key
  • Set the Base URL (e.g., https://your-api.com/v1)

Requirements: the API must expose a /chat/completions endpoint that accepts OpenAI-style requests, returns OpenAI-style responses, and uses Bearer token authentication.
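
For reference, a minimal sketch of the request such an endpoint must accept, using the standard OpenAI chat-completions vision format (the base URL matches the example below; the model id is a placeholder):

```typescript
// Minimal sketch of the OpenAI-style vision request a custom endpoint must accept.
// Base URL and model id are placeholders; the message format is the standard
// OpenAI chat-completions vision shape.
const res = await fetch("https://my-vision-api.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.API_KEY}`, // Bearer auth is required
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "my-vision-model", // placeholder model id
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe this image in detail" },
          { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
        ],
      },
    ],
    max_tokens: 512,
  }),
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content); // the analysis text
```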

Example: set Base URL to https://my-vision-api.com/v1 and make sure the models behind it support vision input.

Troubleshooting: for authentication issues, check the API key and endpoint; for format errors, verify that the API is OpenAI-compatible.

Node Parameters

  • Model: Model identifier (e.g., gpt-4o)
  • Image Source: Binary Data, URL, or Base64
  • Prompt: Analysis prompt
  • Image Detail: Auto/Low/High resolution
  • Model Parameters: Temperature, Max Tokens, Top P
  • Advanced Options: System prompt, response format, custom headers
  • Output: Property name and metadata inclusion
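
For orientation, here is how these parameters might look in an exported workflow JSON. This is a hypothetical sketch: the actual property keys depend on the node's implementation and are not documented here.

```typescript
// Hypothetical parameter block as it might appear in an exported workflow.
// Every key below is illustrative, not confirmed against the node's source.
const nodeParameters = {
  model: "gpt-4o",            // Model identifier
  imageSource: "url",         // Binary Data | URL | Base64
  imageUrl: "https://example.com/invoice.png",
  prompt: "Extract all text from this image (OCR)",
  imageDetail: "auto",        // auto | low | high
  temperature: 0.2,           // Model Parameters
  maxTokens: 512,
  outputProperty: "analysis", // Output: property name for the result
  includeMetadata: true,      // attach usage/token counts to the output
};
```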

Examples

See the example workflow for a complete setup.

[Screenshot: sample analysis result]

Analyze Image from Binary Data

  1. Use an upstream node that fetches the image as binary data (e.g., HTTP Request)
  2. Connect to "Universal LLM Vision" node
  3. Set Image Source to "Binary Data"
  4. Configure prompt: "Describe this image in detail"
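
Under the hood, Binary Data mode presumably inlines the image as a base64 data URI, which is the standard encoding OpenAI-style vision APIs accept. A sketch of that conversion, assuming a JPEG read from disk:

```typescript
// Sketch of the base64 encoding Binary Data mode implies (assumption:
// the node inlines binary images as data URIs, per the OpenAI vision format).
import { readFileSync } from "node:fs";

const bytes = readFileSync("photo.jpg"); // the binary image from the upstream node
const dataUri = `data:image/jpeg;base64,${bytes.toString("base64")}`;

// The data URI fills the same image_url slot a remote URL would:
const imagePart = { type: "image_url", image_url: { url: dataUri } };
```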

Analyze Image from URL

  1. Add "Universal LLM Vision" node
  2. Set Image Source to "URL"
  3. Provide image URL
  4. Use prompt: "Extract all text from this image (OCR)"
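
In URL mode, the image reference is presumably passed straight through as the image_url content part of the OpenAI-style request, with Image Detail mapping to the detail field:

```typescript
// URL mode: the remote URL is used as-is in the OpenAI-style content parts.
const contentParts = [
  { type: "text", text: "Extract all text from this image (OCR)" },
  {
    type: "image_url",
    image_url: {
      url: "https://example.com/receipt.jpg",
      detail: "high", // maps to the node's Image Detail parameter
    },
  },
];
```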

Development

This node was built using the n8n-community-node-starter boilerplate, which provides:

  • Programmatic node architecture for complex logic
  • Built-in CI/CD pipelines
  • Comprehensive testing framework
  • AI-assisted development support

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with tests
  4. Submit a pull request

License

MIT License - see LICENSE file.
