openai-litellm

n8n community node: OpenAI-compatible LLM provider with structured JSON metadata injection

Package Information

Released: 9/20/2025
Downloads: 13 weekly / 89 monthly
Latest Version: 1.0.22
Author: Ruby Lo

Documentation

🚀 n8n-nodes-openai-litellm

A simplified n8n community node for OpenAI-compatible LLM providers with advanced structured JSON metadata injection capabilities.

License: MIT

🙏 Credits

This project is based on the excellent work by rorubyy and their original n8n-nodes-openai-langfuse project. This version has been simplified and refocused to provide a clean, dependency-free solution for structured JSON metadata injection with OpenAI-compatible providers.

Special thanks to rorubyy for the foundation and inspiration! 🎉


✨ Key Features

🎯 Universal Compatibility

  • Full support for OpenAI-compatible chat models (gpt-4o, gpt-4o-mini, o1-preview, etc.)
  • Seamless integration with LiteLLM and other OpenAI-compatible providers
  • Works with Azure OpenAI, LocalAI, and custom APIs

🔧 Structured Metadata Injection

  • Inject custom JSON data directly into your LLM requests
  • Add structured context for tracking and analysis
  • Flexible metadata for projects, environments, workflows, and more

⚡ Simplified Architecture

  • No external tracing dependencies
  • Quick and easy setup
  • Optimized for performance and reliability

📦 NPM Package: @rlquilez/n8n-nodes-openai-litellm

🏢 About n8n: n8n is a fair-code licensed workflow automation platform.


🚀 Installation

Follow the official installation guide for n8n community nodes.

🎯 Community Nodes (Recommended)

For n8n v0.187+, install directly from the UI:

  1. Go to Settings → Community Nodes
  2. Click Install
  3. Enter @rlquilez/n8n-nodes-openai-litellm in the "Enter npm package name" field
  4. Accept the risks of using community nodes
  5. Select Install

🐳 Docker Installation (Recommended for Production)

A pre-configured Docker setup is available in the docker/ directory:

  1. Clone the repository and navigate to the docker/ directory

    git clone https://github.com/rlquilez/n8n-nodes-openai-litellm.git
    cd n8n-nodes-openai-litellm/docker
    
  2. Build the Docker image

    docker build -t n8n-openai-litellm .
    
  3. Run the container

    docker run -it -p 5678:5678 n8n-openai-litellm
    

You can now access n8n at http://localhost:5678

⚙️ Manual Installation

For a standard installation without Docker:

# Go to your n8n installation directory
cd ~/.n8n 

# Install the node
npm install @rlquilez/n8n-nodes-openai-litellm

# Restart n8n to apply the node
n8n start

🔐 Credentials

This credential is used to authenticate requests to your OpenAI-compatible LLM endpoint.

OpenAI Settings

  • OpenAI API Key: Your API key for accessing the OpenAI-compatible endpoint (e.g., sk-abc123...)
  • OpenAI Organization ID: (Optional) Your OpenAI organization ID, if required (e.g., org-xyz789)
  • OpenAI Base URL: Full URL to your OpenAI-compatible endpoint (default: https://api.openai.com/v1)

💡 LiteLLM Compatibility: You can use this node with LiteLLM by setting the Base URL to your LiteLLM proxy endpoint (e.g., http://localhost:4000/v1).
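
💡 Endpoint check (optional): before saving the credential, you can sanity-check the Base URL and API Key with the openai Python SDK. This is a minimal sketch, not part of the node; the URL and key below are placeholders for your own values.

# Optional sanity check: verify the OpenAI-compatible endpoint responds
# using the same Base URL / API Key you plan to store in the credential.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # e.g. a LiteLLM proxy; use https://api.openai.com/v1 for OpenAI
    api_key="sk-abc123...",               # placeholder; use your real key
)
print([m.id for m in client.models.list()])  # should list the models the endpoint exposes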

✅ After saving the credential, you're ready to use the node with structured JSON metadata injection.


⚙️ Configuration

This node allows you to inject structured JSON metadata into your OpenAI requests, providing additional context for your model calls.


🎯 JSON Metadata

Supported Fields

  • Custom Metadata (JSON) (object): Custom JSON object with additional context (e.g., project, env, workflow)
  • Session ID (string): Used for trace grouping and session management
  • User ID (string): Optional; used for trace attribution and user identification

🧪 Configuration Example

  • Custom Metadata (JSON): see the JSON example below
  • Session ID: default-session-id
  • User ID: user-123

{
  "project": "example-project",
  "env": "dev",
  "workflow": "main-flow",
  "version": "1.0.0",
  "tags": ["ai", "automation"]
}

💡 How It Works

The node uses LiteLLM-compatible metadata transmission through the extraBody.metadata parameter, ensuring proper integration with LiteLLM proxies and observability tools.
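
As a rough illustration (not the node's exact internals), the sketch below shows the equivalent request made with the openai Python SDK against a LiteLLM proxy: the metadata object travels inside the request body. The endpoint URL, API key, and the key names inside metadata are assumptions for the example.

# Illustrative only: attaching metadata to a chat completion the way a
# LiteLLM proxy expects it, as a "metadata" object in the request body.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    # extra_body is merged into the JSON payload; LiteLLM picks up the nested
    # "metadata" object and forwards it to its logging callbacks (e.g. Langfuse).
    extra_body={
        "metadata": {
            "session_id": "default-session-id",  # Session ID field (key name assumed)
            "user_id": "user-123",               # User ID field (key name assumed)
            "project": "example-project",        # from Custom Metadata (JSON)
            "env": "dev",
        }
    },
)
print(response.choices[0].message.content)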

Metadata Flow:

  1. Session ID and User ID are automatically added to the custom metadata
  2. All metadata is transmitted via LiteLLM's standard extraBody.metadata parameter
  3. Compatible with LiteLLM logging, Langfuse, and other observability platforms
  4. Maintains full compatibility with OpenAI-compatible endpoints

Common Use Cases:

  • Session Management: Track conversations across multiple interactions
  • User Attribution: Associate requests with specific users
  • Project Tracking: Identify which project generated the request
  • Environment Control: Differentiate between dev, staging, and production
  • Workflow Analysis: Track performance by workflow type
  • Debugging: Add unique identifiers for debugging purposes
  • Observability: Integration with Langfuse, LiteLLM logging, and custom analytics

🔧 Compatibility

  • Requires n8n version 1.0.0 or higher
  • Compatible with:
    • Official OpenAI API
    • Any OpenAI-compatible LLM (e.g., via LiteLLM, LocalAI, Azure OpenAI)
    • All providers that support OpenAI-compatible endpoints

Tested Models

✅ OpenAI Models:

  • gpt-4o, gpt-4o-mini
  • gpt-4-turbo, gpt-4
  • gpt-3.5-turbo
  • o1-preview, o1-mini

✅ Compatible Providers:

  • LiteLLM - Proxy for 100+ LLMs
  • Azure OpenAI - Microsoft's enterprise API
  • LocalAI - Self-hosted local LLMs
  • Ollama - Local models via OpenAI-compatible API


🔗 LiteLLM + Langfuse Configuration

To use this node with LiteLLM and Langfuse for observability, you need to configure your LiteLLM proxy properly:

1. LiteLLM Configuration (config.yaml)

model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]  # Enable Langfuse logging
  
# Langfuse environment variables (set these in your environment)
# LANGFUSE_PUBLIC_KEY=pk-xxx
# LANGFUSE_SECRET_KEY=sk-xxx  
# LANGFUSE_HOST=https://cloud.langfuse.com (or your self-hosted URL)

2. Environment Variables

Set these environment variables where you run LiteLLM:

export LANGFUSE_PUBLIC_KEY="pk-xxx"
export LANGFUSE_SECRET_KEY="sk-xxx"
export LANGFUSE_HOST="https://cloud.langfuse.com"
export OPENAI_API_KEY="sk-xxx"

3. Start LiteLLM Proxy

litellm --config config.yaml --port 4000

4. Configure n8n Node

  • Base URL: http://localhost:4000 (or your LiteLLM proxy URL)
  • API Key: Any value (LiteLLM will use the configured API key)
  • Metadata: Will be automatically forwarded to Langfuse (see the sketch after this list) with fields like:
    • langfuse_user_id (from User ID field)
    • langfuse_session_id (from Session ID field)
    • Custom metadata from JSON field
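
For orientation, here is a hedged sketch of the kind of metadata object that could end up on a request after the node merges the fields above. The exact key names and nesting depend on the node version; all values are placeholders.

# Hypothetical merged metadata, based on the fields listed above.
# Exact key names and nesting may differ between node versions.
forwarded_metadata = {
    "langfuse_user_id": "user-123",               # from the User ID field
    "langfuse_session_id": "default-session-id",  # from the Session ID field
    # Custom Metadata (JSON) is merged in alongside the Langfuse fields:
    "project": "example-project",
    "env": "dev",
    "workflow": "main-flow",
}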

📈 Version History

v1.0.15

  • 🔧 Fixed LiteLLM + Langfuse integration - Changed metadata format to work correctly with LiteLLM proxy
  • ✅ Proper Langfuse fields - Added langfuse_user_id and langfuse_session_id for proper trace attribution
  • 🎯 Simplified approach - Removed complex extra_body approach in favor of direct metadata field
  • 📚 Enhanced documentation - Added comprehensive LiteLLM + Langfuse configuration guide

v1.0.14

  • 🔧 Enhanced metadata transmission - Added dual approach with both direct extra_body and modelKwargs.extra_body for maximum compatibility
  • 📊 Improved logging - Enhanced console logging to show both extra_body and modelKwargs configuration

v1.0.13

  • 🔧 Multiple transmission approaches - Attempted various methods to ensure metadata reaches the LLM endpoint
  • 📊 Enhanced debugging - Added comprehensive logging for troubleshooting

v1.0.12

  • 🔧 Enhanced metadata transmission - Added dual approach with both direct extra_body and modelKwargs.extra_body for maximum compatibility
  • 📊 Improved logging - Enhanced console logging to show both extra_body and modelKwargs configuration
  • 📚 Documentation - Updated README with comprehensive version history and troubleshooting guide

v1.0.11

  • 🔧 Critical Fix: Proper extra_body parameter application - Reorganized ChatOpenAI configuration to prevent options spread from overriding extra_body
  • ✅ Enhanced payload transmission - Ensures metadata is properly included in the request payload to LiteLLM/OpenAI endpoints
  • 📊 Added detailed logging - Better visibility into extra_body configuration for debugging

v1.0.10

  • 📝 Documentation update - Updated version history with v1.0.9 critical fix details

v1.0.9

  • 🔧 Critical Fix: Corrected extra_body parameter name - Fixed extraBody to extra_body to match the LangChain ChatOpenAI API specification
  • ✅ Verified metadata transmission - Ensures metadata is properly sent to LiteLLM and OpenAI-compatible endpoints
  • 📚 Based on official documentation - Implementation follows LangChain and LiteLLM examples

v1.0.8

  • 📝 Enhanced documentation - Updated README with detailed metadata features and version history
  • 🎯 Improved use cases - Added comprehensive examples and observability integration details

v1.0.7

  • 🔧 Fixed LiteLLM metadata payload transmission - Implemented proper extra_body.metadata parameter for LiteLLM compatibility
  • 📊 Added Session ID and User ID fields - Separate fields for better trace attribution and session management
  • 🎯 Improved metadata structure - Based on LiteLLM documentation and reference implementation
  • ✅ Enhanced observability - Better integration with Langfuse and LiteLLM logging systems

v1.0.6

  • 🆕 Added Session ID and User ID fields - Separate input fields for better metadata organization
  • 🔧 Improved metadata handling - Enhanced processing and logging of metadata values
  • 📝 Simplified default JSON example - Cleaner default metadata structure

v1.0.5

  • 🔄 Repository synchronization - Updated with latest remote changes
  • 📚 Documentation improvements - Enhanced README and node descriptions

v1.0.2

  • 🔧 Documentation and examples improvements
  • 🎯 Focus on custom JSON metadata injection
  • 📝 Documentation completely rewritten

v1.0.1

  • 🎨 Updated icons to official OpenAI icons from the n8n repository
  • 🔧 Minor compatibility fixes

v1.0.0

  • 🎉 Initial release with OpenAI-compatible providers
  • 📊 Structured JSON metadata injection
  • ⚡ Simplified architecture without external tracing dependencies

💝 Contributing

Developed with ❤️ for the n8n community

If this project was helpful, consider giving it a ⭐ on GitHub!
