# n8n-nodes-ai-langfuse

n8n community nodes for AI LLM providers with Langfuse observability.

This project is proudly developed and maintained by Wistron DXLab.

Supported providers:

- OpenAI (GPT-4, GPT-4o, etc.)
- Google Gemini (Gemini 2.5, Gemini 1.5, etc.)

npm package: https://www.npmjs.com/package/n8n-nodes-openai-langfuse
## Features

### OpenAI Chat Node (`lmChatOpenAiLangfuse`)

- Support for OpenAI-compatible chat models (e.g., `gpt-4.1-mini`, `gpt-4o`)
- Automatic Langfuse tracing for every request and response
- Custom metadata injection: `sessionId`, `userId`, and structured JSON

### Google Gemini Chat Node (`lmChatGoogleGeminiLangfuse`)

- Support for Google Gemini models (e.g., `gemini-2.5-flash`, `gemini-1.5-pro`)
- Automatic Langfuse tracing for every request and response
- Safety settings configuration
- Custom metadata injection: `sessionId`, `userId`, and structured JSON
n8n is a fair-code licensed workflow automation platform.
- Installation
- Credentials
- Operations
- Compatibility
- Resources

## Installation
Follow the installation guide in the official n8n documentation for community nodes.
### Community Nodes (Recommended)

For n8n v0.187+, install directly from the UI:

1. Go to Settings → Community Nodes
2. Click Install
3. Enter `n8n-nodes-openai-langfuse` in the npm package name field
4. Agree to the risks of using community nodes
5. Select Install
### Manual Installation

For a standard installation:

```bash
# Go to your n8n installation directory
cd ~/.n8n

# Install the node
npm install n8n-nodes-openai-langfuse

# Restart n8n to load the node
n8n start
```
## Credentials

### OpenAI + Langfuse Credentials
| Field Name | Description | Example |
|---|---|---|
| OpenAI API Key | Your API key for accessing the OpenAI-compatible endpoint | sk-abc123... |
| OpenAI Organization ID | (Optional) Your OpenAI organization ID | org-xyz789 |
| OpenAI Base URL | Full URL to your OpenAI-compatible endpoint | default: https://api.openai.com/v1 |
| Langfuse Base URL | The base URL of your Langfuse instance | https://cloud.langfuse.com |
| Langfuse Public Key | Langfuse public key for tracing | pk-xxx |
| Langfuse Secret Key | Langfuse secret key for tracing | sk-xxx |
### Google Gemini + Langfuse Credentials
| Field Name | Description | Example |
|---|---|---|
| Google API Key | Your Google AI Studio API key | AIza... |
| Langfuse Base URL | The base URL of your Langfuse instance | https://cloud.langfuse.com |
| Langfuse Public Key | Langfuse public key for tracing | pk-xxx |
| Langfuse Secret Key | Langfuse secret key for tracing | sk-xxx |
> 🔑 **How to find your Langfuse keys:**
> Log in to your Langfuse dashboard, then go to
> Settings → Projects → [Your Project] to retrieve the `publicKey` and `secretKey`.

> 🔑 **How to get a Google API Key:**
> Go to Google AI Studio to create an API key.
## Operations

Both nodes let you inject Langfuse-compatible metadata into your LLM requests.
You can trace every run with context such as `sessionId`, `userId`, and any custom metadata.

### Supported Fields
| Field | Type | Description |
|---|---|---|
| `sessionId` | string | Logical session ID to group related runs |
| `userId` | string | ID representing the end user making the request |
| `metadata` | object | Custom JSON object with additional context (e.g., `workflowId`, `env`) |
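For instance, tracing all runs of one support conversation under a shared session and end user could use a configuration like the following (all values are illustrative placeholders):

```json
{
  "sessionId": "support-chat-2025-06-01",
  "userId": "user-42",
  "metadata": {
    "workflowId": "wf-onboarding",
    "env": "production"
  }
}
```

In Langfuse, traces sharing the same `sessionId` are grouped into one session view, so a multi-turn n8n workflow shows up as a single conversation.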
### Google Gemini Additional Options

| Option | Type | Description |
|---|---|---|
| `temperature` | number | Controls randomness (0-2) |
| `topP` | number | Nucleus sampling (0-1) |
| `topK` | number | Top-k token selection (1-100) |
| `maxOutputTokens` | number | Maximum tokens to generate |
| `safetySettings` | object | Content filtering configuration |
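The `category` and `threshold` names below are the standard enums of the Google Generative AI API; the exact JSON shape this node expects for `safetySettings` is an assumption, shown here only as a sketch:

```json
{
  "safetySettings": [
    { "category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE" },
    { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH" }
  ]
}
```

Categories left unlisted keep the API's default blocking behavior.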
### Available Gemini Models

- Gemini 2.5 Series: `gemini-2.5-flash`, `gemini-2.5-pro`
- Gemini 2.0 Series: `gemini-2.0-flash-thinking-exp`, `gemini-2.0-flash-exp`
- Gemini 1.5 Series: `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.5-flash-8b`
- Gemini 1.0 Series: `gemini-1.0-pro`, `gemini-pro`
## Compatibility

- Requires n8n version 1.0.0 or later
- Compatible with:
  - OpenAI official API (https://api.openai.com)
  - Any OpenAI-compatible LLM (e.g., via LiteLLM, LocalAI, Azure OpenAI)
  - Google Generative AI API (https://generativelanguage.googleapis.com)
  - Langfuse Cloud and self-hosted instances
## Resources

- npm package: https://www.npmjs.com/package/n8n-nodes-openai-langfuse
- n8n community nodes documentation: https://docs.n8n.io/integrations/community-nodes/
- Langfuse documentation: https://langfuse.com/docs
## Version History
- v0.2.0 – Added Google Gemini chat node with Langfuse integration
- v0.1.x – OpenAI + Langfuse integration
## License
MIT © 2025 Wistron DXLab