Overview
This node integrates OpenAI's chat models with Langfuse tracing, enabling AI-driven workflows with detailed traceability. It is designed for users who want to leverage OpenAI's language models while capturing rich metadata and session information in Langfuse for monitoring, debugging, and analytics.
Common scenarios include:
- Building conversational AI agents or chatbots that require detailed usage tracking.
- Running AI chains where traceability of requests and responses is critical.
- Experimenting with different OpenAI models while collecting metadata for performance analysis.
Practical example:
- A customer support chatbot powered by GPT-4 that logs each interaction with session and user identifiers to Langfuse, allowing the team to analyze conversation flows and improve response quality over time.
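To make the scenario above concrete, the sketch below shows the shape of the per-interaction trace data such a chatbot might attach to a Langfuse trace. The field names (`sessionId`, `userId`, `metadata`) follow Langfuse conventions, but the helper function itself is illustrative, not part of the node's actual code:

```python
# Hypothetical sketch of the trace payload attached to each chat completion.
# Field names mirror the node's Langfuse Metadata properties (see below).
def build_trace_payload(session_id, user_id=None, metadata=None):
    """Assemble the per-interaction data sent along with a Langfuse trace."""
    payload = {"sessionId": session_id}        # groups traces into a session
    if user_id is not None:
        payload["userId"] = user_id            # optional trace attribution
    if metadata:
        payload["metadata"] = dict(metadata)   # custom context, e.g. project/env
    return payload

trace = build_trace_payload(
    "support-session-42",
    user_id="customer-7",
    metadata={"project": "support-bot", "environment": "production"},
)
```

Analysts can then filter traces in Langfuse by session or user identifier to reconstruct individual conversation flows.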
Properties
| Name | Meaning |
|---|---|
| Credential | An API key credential required to authenticate requests to OpenAI and Langfuse services. |
| Langfuse Metadata | Collection of metadata fields attached to Langfuse traces: • Custom Metadata (JSON): Optional JSON object with extra metadata (e.g., project, environment, workflow). • Session ID: Identifier used for grouping traces. • User ID: Optional identifier for trace attribution. |
| Model | The OpenAI model to use for generating completions. Can be selected from a list or specified by ID. |
| Options | Additional options for the request: • Base URL: Override default API base URL. • Frequency Penalty: Penalizes repeated tokens. • Max Retries: Number of retry attempts. • Maximum Number of Tokens: Max tokens to generate. • Presence Penalty: Penalizes tokens already present. • Reasoning Effort: Controls reasoning token usage ("low", "medium", "high"). • Response Format: Output format ("text" or "json_object"). • Sampling Temperature: Controls randomness. • Timeout: Max request duration in milliseconds. • Top P: Controls diversity via nucleus sampling. |
| Notice | Informational notices displayed based on configuration, e.g., reminders about JSON response format requirements or model compatibility when using custom base URLs. |
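As an illustration of how the Options listed above typically map onto OpenAI-style request parameters, here is a minimal sketch. The node-side key names and the helper function are assumptions for demonstration; only the API parameter names on the right follow OpenAI's documented conventions:

```python
def to_request_options(options):
    """Map node Option fields onto OpenAI-style request parameters.
    Left: assumed node-side option keys. Right: API parameter names."""
    mapping = {
        "frequencyPenalty": "frequency_penalty",
        "maxTokens": "max_tokens",
        "presencePenalty": "presence_penalty",
        "reasoningEffort": "reasoning_effort",
        "responseFormat": "response_format",
        "temperature": "temperature",
        "topP": "top_p",
    }
    # Only forward options the user actually set.
    return {api: options[node] for node, api in mapping.items() if node in options}

params = to_request_options(
    {"temperature": 0.2, "topP": 0.9, "responseFormat": "json_object"}
)
```

Unset options are simply omitted from the request, so the API's defaults apply.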
Output
The node outputs data under the `json` field containing the AI model's generated completion. Depending on the chosen response format:
- Text: a plain-text response from the model.
- JSON: structured output that is valid JSON, provided the prompt mentions JSON and the model supports JSON mode.
The node does not produce binary output.
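A downstream consumer can normalize the two output shapes like this. This is a hedged sketch of handling on the receiving side, not the node's internal logic:

```python
import json

def parse_completion(raw, response_format):
    """Normalize model output: return a dict for json_object mode,
    the plain string for text mode. Raises ValueError on malformed JSON."""
    if response_format == "json_object":
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"model returned invalid JSON: {exc}") from exc
    return raw
```

Failing fast on malformed JSON here surfaces prompt or model-compatibility problems (see Troubleshooting) instead of letting bad data flow into later workflow steps.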
Dependencies
- Requires an API key credential that provides access to both OpenAI's API and Langfuse services.
- Uses Langfuse SDK to create callback handlers for trace logging.
- Supports overriding the OpenAI API base URL for non-standard endpoints.
- Requires n8n environment configured with appropriate credentials and network access.
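The Base URL override mentioned above can be pictured as a simple fallback rule. The default URL below is OpenAI's standard endpoint; the helper name is hypothetical:

```python
DEFAULT_BASE_URL = "https://api.openai.com/v1"  # standard OpenAI endpoint

def effective_base_url(override=None):
    """Use the configured Base URL override if set, else the OpenAI default.
    Trailing slashes are trimmed so path joining stays consistent."""
    return override.rstrip("/") if override else DEFAULT_BASE_URL
```

This is how OpenAI-compatible proxies or self-hosted gateways are typically reached; note the model-compatibility caveat in Troubleshooting when doing so.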
Troubleshooting
- Invalid or missing credentials: Ensure the API key credential is correctly set up and has permissions for both OpenAI and Langfuse.
- Model selection issues: Selecting unsupported or incompatible models (especially when overriding the base URL) may cause errors or unexpected behavior. For the JSON response format, use a model released after November 2023 that supports it.
- JSON response format errors: If using JSON mode, the prompt must include the word "json" and the model must support this feature; otherwise, parsing errors may occur.
- Timeouts: Requests exceeding the configured timeout will fail; increase the timeout setting if needed.
- Retries exhausted: If the maximum number of retries is reached without success, check network connectivity and API limits.
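The Max Retries behavior described above can be sketched as retry-with-exponential-backoff. This is an assumed illustration of what such a retry loop typically looks like, not the node's actual implementation:

```python
import time

def call_with_retries(fn, max_retries=2, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying up to max_retries extra attempts.
    Waits base_delay * 2**attempt seconds between attempts (exponential
    backoff); re-raises the last error once retries are exhausted."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: surface the failure to the caller
            sleep(base_delay * (2 ** attempt))
```

If a workflow consistently exhausts retries, raising Max Retries only masks the symptom; check connectivity and API rate limits first, as the troubleshooting note advises.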