# n8n-nodes-openai-litellm

A simplified n8n community node for OpenAI-compatible LLM providers with advanced structured JSON metadata injection capabilities.
## Credits

This project is based on the excellent work by rorubyy and their original n8n-nodes-openai-langfuse project. This version has been simplified and refocused to provide a clean, dependency-free solution for structured JSON metadata injection with OpenAI-compatible providers.

Special thanks to rorubyy for the foundation and inspiration!
## Key Features

### Universal Compatibility

- Full support for OpenAI-compatible chat models (`gpt-4o`, `gpt-4o-mini`, `o1-preview`, etc.)
- Seamless integration with LiteLLM and other OpenAI-compatible providers
- Works with Azure OpenAI, LocalAI, and custom APIs
### Structured Metadata Injection

- Inject custom JSON data directly into your LLM requests
- Add structured context for tracking and analysis
- Flexible metadata for projects, environments, workflows, and more
### Simplified Architecture

- No external tracing dependencies
- Quick and easy setup
- Optimized for performance and reliability
NPM Package: `@rlquilez/n8n-nodes-openai-litellm`

About n8n: n8n is a fair-code licensed workflow automation platform.
## Table of Contents

- [Installation](#installation)
- [Credentials](#credentials)
- [Configuration](#configuration)
- [JSON Metadata](#json-metadata)
- [Compatibility](#compatibility)
- [Resources](#resources)
- [Version History](#version-history)
## Installation

Follow the official installation guide for n8n community nodes.

### Community Nodes (Recommended)

For n8n v0.187+, install directly from the UI:

- Go to Settings → Community Nodes
- Click Install
- Enter `@rlquilez/n8n-nodes-openai-litellm` in the "Enter npm package name" field
- Accept the risks of using community nodes
- Select Install
### Docker Installation (Recommended for Production)

A pre-configured Docker setup is available in the `docker/` directory:

```bash
# Clone the repository and navigate to the docker/ directory
git clone https://github.com/rlquilez/n8n-nodes-openai-litellm.git
cd n8n-nodes-openai-litellm/docker

# Build the Docker image
docker build -t n8n-openai-litellm .

# Run the container
docker run -it -p 5678:5678 n8n-openai-litellm
```

You can now access n8n at http://localhost:5678.
### Manual Installation

For a standard installation without Docker:

```bash
# Go to your n8n installation directory
cd ~/.n8n

# Install the node
npm install @rlquilez/n8n-nodes-openai-litellm

# Restart n8n to apply the node
n8n start
```
## Credentials

This credential is used to authenticate against your OpenAI-compatible LLM endpoint.

### OpenAI Settings

| Field | Description | Example |
|---|---|---|
| OpenAI API Key | Your API key for accessing the OpenAI-compatible endpoint | `sk-abc123...` |
| OpenAI Organization ID | (Optional) Your OpenAI organization ID, if required | `org-xyz789` |
| OpenAI Base URL | Full URL to your OpenAI-compatible endpoint | `https://api.openai.com/v1` (default) |

LiteLLM Compatibility: You can use this node with LiteLLM by setting the Base URL to your LiteLLM proxy endpoint (e.g., `http://localhost:4000/v1`).

After saving the credential, you're ready to use the node with structured JSON metadata injection.
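As a rough sketch of what the Base URL setting changes, the snippet below builds (but does not send) a standard chat-completions request against a hypothetical local LiteLLM proxy. The URL, key, and model are placeholders, not values from this project:

```python
import json
from urllib.parse import urljoin
from urllib.request import Request

# Hypothetical values: a LiteLLM proxy on localhost and a placeholder key.
BASE_URL = "http://localhost:4000/v1/"
API_KEY = "sk-abc123"

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# The credential's Base URL is simply joined with the standard
# chat-completions path, so any OpenAI-compatible endpoint works.
req = Request(
    urljoin(BASE_URL, "chat/completions"),
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)  # http://localhost:4000/v1/chat/completions
```

Swapping the Base URL for the official `https://api.openai.com/v1` endpoint is the only change needed to target OpenAI directly.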
## Configuration

This node allows you to inject structured JSON metadata into your OpenAI requests, providing additional context for your model calls.
## JSON Metadata

### Supported Fields

| Field | Type | Description |
|---|---|---|
| Custom Metadata (JSON) | `object` | Custom JSON object with additional context (e.g., project, env, workflow) |
| Session ID | `string` | Used for trace grouping and session management |
| User ID | `string` | Optional: for trace attribution and user identification |
### Configuration Example

| Input Field | Example Value |
|---|---|
| Custom Metadata (JSON) | See example below |
| Session ID | `default-session-id` |
| User ID | `user-123` |

```json
{
  "project": "example-project",
  "env": "dev",
  "workflow": "main-flow",
  "version": "1.0.0",
  "tags": ["ai", "automation"]
}
```
### How It Works

The node uses LiteLLM-compatible metadata transmission through the `extraBody.metadata` parameter, ensuring proper integration with LiteLLM proxies and observability tools.

Metadata Flow:

- Session ID and User ID are automatically added to the custom metadata
- All metadata is transmitted via LiteLLM's standard `extraBody.metadata` parameter
- Compatible with LiteLLM logging, Langfuse, and other observability platforms
- Maintains full compatibility with OpenAI-compatible endpoints
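The flow above can be sketched in a few lines of Python. This is an illustration, not the node's actual source, and the merged key names (`session_id`, `user_id`) are assumptions for the example:

```python
# Custom metadata and the two ID fields as configured in the node.
custom_metadata = {"project": "example-project", "env": "dev"}
session_id = "default-session-id"
user_id = "user-123"

# Session ID and User ID are merged into the custom metadata object...
metadata = {**custom_metadata, "session_id": session_id, "user_id": user_id}

# ...which then rides along with the normal chat parameters in the request
# body, where a LiteLLM proxy hands it to its logging callbacks.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "metadata": metadata,
}

print(sorted(request_body["metadata"]))  # ['env', 'project', 'session_id', 'user_id']
```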
Common Use Cases:
- Session Management: Track conversations across multiple interactions
- User Attribution: Associate requests with specific users
- Project Tracking: Identify which project generated the request
- Environment Control: Differentiate between dev, staging, and production
- Workflow Analysis: Track performance by workflow type
- Debugging: Add unique identifiers for debugging purposes
- Observability: Integration with Langfuse, LiteLLM logging, and custom analytics
## Compatibility

- Minimum n8n version: 1.0.0
- Compatible with:
  - Official OpenAI API
  - Any OpenAI-compatible LLM (e.g., via LiteLLM, LocalAI, Azure OpenAI)
  - All providers that support OpenAI-compatible endpoints
### Tested Models

OpenAI models:

- `gpt-4o`, `gpt-4o-mini`
- `gpt-4-turbo`, `gpt-4`
- `gpt-3.5-turbo`
- `o1-preview`, `o1-mini`

Compatible providers:

- LiteLLM - Proxy for 100+ LLMs
- Azure OpenAI - Microsoft's enterprise API
- LocalAI - Self-hosted local LLMs
- Ollama - Local models via its OpenAI-compatible API
## Resources

### Official Documentation

- n8n Community Nodes Documentation
- LiteLLM Documentation
- n8n Community Forum
- OpenAI API Documentation

### Useful Links

- Report Issues
- Request Features
- NPM Package
## LiteLLM + Langfuse Configuration

To use this node with LiteLLM and Langfuse for observability, configure your LiteLLM proxy as follows:

### 1. LiteLLM Configuration (config.yaml)
```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]  # Enable Langfuse logging

# Langfuse environment variables (set these in your environment)
# LANGFUSE_PUBLIC_KEY=pk-xxx
# LANGFUSE_SECRET_KEY=sk-xxx
# LANGFUSE_HOST=https://cloud.langfuse.com (or your self-hosted URL)
```
### 2. Environment Variables

Set these environment variables where you run LiteLLM:

```bash
export LANGFUSE_PUBLIC_KEY="pk-xxx"
export LANGFUSE_SECRET_KEY="sk-xxx"
export LANGFUSE_HOST="https://cloud.langfuse.com"
export OPENAI_API_KEY="sk-xxx"
```
### 3. Start LiteLLM Proxy

```bash
litellm --config config.yaml --port 4000
```
### 4. Configure the n8n Node

- Base URL: `http://localhost:4000` (or your LiteLLM proxy URL)
- API Key: Any value (LiteLLM will use the configured API key)
- Metadata: Automatically forwarded to Langfuse with fields such as:
  - `langfuse_user_id` (from the User ID field)
  - `langfuse_session_id` (from the Session ID field)
  - Custom metadata from the JSON field
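Putting the pieces together, a metadata object like the one below is what the proxy receives. The `langfuse_*` field names come from this node's behavior; the custom keys and the split shown afterwards are illustrative assumptions about how the attribution fields relate to the rest:

```python
# Hypothetical example of the metadata object this configuration produces.
metadata = {
    "langfuse_user_id": "user-123",               # from the User ID field
    "langfuse_session_id": "default-session-id",  # from the Session ID field
    "project": "example-project",                 # custom metadata (JSON field)
    "env": "dev",
}

# With success_callback: ["langfuse"] enabled, the langfuse_* keys identify
# the trace's user and session, while the remaining keys travel as custom
# trace metadata.
trace_user = metadata["langfuse_user_id"]
trace_session = metadata["langfuse_session_id"]
custom = {k: v for k, v in metadata.items() if not k.startswith("langfuse_")}

print(trace_user, trace_session, custom)
```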
## Version History

### v1.0.15 - Current

- **Fixed LiteLLM + Langfuse integration** - Changed the metadata format to work correctly with the LiteLLM proxy
- **Proper Langfuse fields** - Added `langfuse_user_id` and `langfuse_session_id` for proper trace attribution
- **Simplified approach** - Removed the complex `extra_body` approach in favor of a direct metadata field
- **Enhanced documentation** - Added a comprehensive LiteLLM + Langfuse configuration guide

### v1.0.14

- **Enhanced metadata transmission** - Added a dual approach with both direct `extra_body` and `modelKwargs.extra_body` for maximum compatibility
- **Improved logging** - Enhanced console logging to show both the `extra_body` and `modelKwargs` configuration

### v1.0.13

- **Multiple transmission approaches** - Attempted various methods to ensure metadata reaches the LLM endpoint
- **Enhanced debugging** - Added comprehensive logging for troubleshooting

### v1.0.12

- **Enhanced metadata transmission** - Added a dual approach with both direct `extra_body` and `modelKwargs.extra_body` for maximum compatibility
- **Improved logging** - Enhanced console logging to show both the `extra_body` and `modelKwargs` configuration
- **Documentation** - Updated the README with a comprehensive version history and troubleshooting guide

### v1.0.11

- **Critical fix: proper `extra_body` parameter application** - Reorganized the ChatOpenAI configuration to prevent the options spread from overriding `extra_body`
- **Enhanced payload transmission** - Ensures metadata is properly included in the request payload to LiteLLM/OpenAI endpoints
- **Added detailed logging** - Better visibility into the `extra_body` configuration for debugging

### v1.0.10

- **Documentation update** - Updated the version history with the v1.0.9 critical fix details

### v1.0.9

- **Critical fix: corrected `extra_body` parameter name** - Fixed `extraBody` to `extra_body` to match the LangChain ChatOpenAI API specification
- **Verified metadata transmission** - Ensures metadata is properly sent to LiteLLM and OpenAI-compatible endpoints
- **Based on official documentation** - Implementation follows LangChain and LiteLLM examples

### v1.0.8

- **Enhanced documentation** - Updated the README with detailed metadata features and version history
- **Improved use cases** - Added comprehensive examples and observability integration details

### v1.0.7

- **Fixed LiteLLM metadata payload transmission** - Implemented the proper `extra_body.metadata` parameter for LiteLLM compatibility
- **Added Session ID and User ID fields** - Separate fields for better trace attribution and session management
- **Improved metadata structure** - Based on the LiteLLM documentation and reference implementation
- **Enhanced observability** - Better integration with Langfuse and LiteLLM logging systems

### v1.0.6

- **Added Session ID and User ID fields** - Separate input fields for better metadata organization
- **Improved metadata handling** - Enhanced processing and logging of metadata values
- **Simplified default JSON example** - Cleaner default metadata structure

### v1.0.5

- **Repository synchronization** - Updated with the latest remote changes
- **Documentation improvements** - Enhanced README and node descriptions

### v1.0.2

- Documentation and examples improvements
- Focus on custom JSON metadata injection
- Documentation completely rewritten

### v1.0.1

- Updated icons to the official OpenAI icons from the n8n repository
- Minor compatibility fixes

### v1.0.0

- Initial release with support for OpenAI-compatible providers
- Structured JSON metadata injection
- Simplified architecture without external tracing dependencies
## Contributing

Developed with ❤️ for the n8n community.

If this project was helpful, consider giving it a ⭐ on GitHub!