Overview
This node integrates OpenAI's chat models with Langfuse tracing, enabling AI-driven workflows with detailed traceability. It is designed for users who want to use OpenAI's language models while capturing rich metadata and session information in Langfuse for monitoring, debugging, and analytics.
Common scenarios include:
- Building conversational AI agents or chatbots that require detailed usage tracking.
- Running AI chains where traceability of requests and responses is critical.
- Experimenting with different OpenAI models while associating each request with custom metadata and user/session identifiers.
Practical example:
- A customer support chatbot powered by GPT-4 that logs each interaction with session IDs and user IDs to Langfuse, allowing the team to analyze conversation flows and improve response quality over time.
Properties
| Name | Meaning |
|---|---|
| Credential | An API key credential to authenticate requests to OpenAI and Langfuse services. |
| Langfuse Metadata | Collection of metadata fields to attach to Langfuse traces: • Custom Metadata (JSON): Optional JSON object with extra metadata (e.g., project, environment, workflow). • Session ID: Identifier used for grouping traces. • User ID: Optional identifier for trace attribution. |
| Model | The OpenAI model to use for generating completions. Can be selected from a list or specified by ID. Examples include GPT-4 variants and other supported models. |
| Options | Additional options for the OpenAI API call: • Base URL: Override default API base URL. • Frequency Penalty: Penalizes repeated tokens. • Max Retries: Number of retry attempts. • Maximum Number of Tokens: Max tokens to generate. • Presence Penalty: Encourages new topics. • Reasoning Effort: Controls reasoning token usage (low, medium, high). • Response Format: Text or JSON output. • Sampling Temperature: Controls randomness. • Timeout: Max request duration in ms. • Top P: Controls diversity via nucleus sampling. |
| Notice | Informational notices about usage, e.g., requirements when using JSON response format or non-OpenAI models. |
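The Langfuse Metadata property above combines a Session ID, an optional User ID, and a free-form Custom Metadata JSON object. A minimal Python sketch of how these fields might be assembled and validated before being attached to a trace (field names like `sessionId` follow Langfuse's conventions, but the helper itself is illustrative, not the node's actual code):

```python
import json


def build_langfuse_metadata(custom_metadata_json, session_id, user_id=None):
    """Assemble the trace metadata attached to each request.

    custom_metadata_json: the 'Custom Metadata (JSON)' property as a string.
    Raises ValueError if the string is not a JSON object, mirroring the
    node's requirement that custom metadata be valid JSON.
    """
    try:
        custom = json.loads(custom_metadata_json) if custom_metadata_json else {}
    except json.JSONDecodeError as exc:
        raise ValueError(f"Custom Metadata is not valid JSON: {exc}") from exc
    if not isinstance(custom, dict):
        raise ValueError("Custom Metadata must be a JSON object")

    payload = {"sessionId": session_id, "metadata": custom}
    if user_id:
        payload["userId"] = user_id
    return payload


# Example: group traces by a support-ticket session
payload = build_langfuse_metadata(
    '{"project": "support-bot", "environment": "prod"}',
    session_id="ticket-1234",
    user_id="user-42",
)
```

Validating the custom metadata up front surfaces malformed JSON as a clear configuration error rather than a silent trace-logging failure.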
Output
The node outputs data under the json field representing the AI model's completion result. This typically includes the generated text or JSON object depending on the selected response format.
- If the Text response format is selected, the output contains the plain text generated by the model.
- If the JSON response format is selected, the model returns valid JSON, provided the prompt contains the word "json" as the node requires.
The node produces no binary output.
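A short sketch of consuming the two response formats downstream. The `response` field name here is hypothetical, standing in for wherever the completion lands under the item's `json` key:

```python
import json


def read_completion(item, response_format="text"):
    """Read a completion from an output item (under its 'json' key).

    The 'response' field name is illustrative, not the node's exact schema.
    In 'json' mode the completion should parse as a JSON value, provided
    the prompt contained the word "json" as the node requires.
    """
    completion = item["json"]["response"]  # hypothetical field name
    if response_format == "json":
        return json.loads(completion)
    return completion


# Text mode passes the string through; JSON mode parses it.
text_item = {"json": {"response": "Hello!"}}
json_item = {"json": {"response": '{"answer": 42}'}}
```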
Dependencies
- Requires an API key credential that provides access to both OpenAI's API and Langfuse tracing service.
- Uses Langfuse SDK to create callback handlers that send trace data including session ID, user ID, and custom metadata.
- Supports overriding the OpenAI API base URL for compatibility with non-OpenAI endpoints.
- Requires n8n environment configured with appropriate credentials and network access to OpenAI and Langfuse endpoints.
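Conceptually, the dependencies above map the node's inputs onto two configurations: an OpenAI client (with an optional base URL override) and a Langfuse callback handler. The sketch below models that wiring with plain dictionaries; all key names are illustrative assumptions, and the real node does this through the Langfuse SDK's callback handler:

```python
DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1"  # OpenAI's default endpoint


def build_client_config(credentials, options, metadata):
    """Sketch of how credentials, options, and Langfuse metadata might be
    combined. Key names (e.g. 'openAiApiKey') are illustrative only."""
    return {
        "openai": {
            "apiKey": credentials["openAiApiKey"],
            # The 'Base URL' option lets the node target OpenAI-compatible
            # endpoints; fall back to the default when it is unset.
            "baseURL": options.get("baseURL") or DEFAULT_OPENAI_BASE_URL,
        },
        "langfuse": {
            "publicKey": credentials["langfusePublicKey"],
            "secretKey": credentials["langfuseSecretKey"],
            "sessionId": metadata.get("sessionId"),
            "userId": metadata.get("userId"),
            "metadata": metadata.get("custom", {}),
        },
    }


config = build_client_config(
    credentials={
        "openAiApiKey": "sk-...",
        "langfusePublicKey": "pk-...",
        "langfuseSecretKey": "sk-lf-...",
    },
    options={},  # no Base URL override -> default endpoint
    metadata={"sessionId": "ticket-1234", "custom": {"env": "prod"}},
)
```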
Troubleshooting
- Missing or invalid credentials: Ensure the API key credential is correctly configured and has permissions for both OpenAI and Langfuse services.
- Incorrect model selection: Selecting unsupported or incompatible models (especially when overriding base URL) may cause errors or unexpected behavior.
- JSON response format issues: When using JSON mode, the prompt must include the word "json" to ensure valid JSON output. Failure to do so can lead to parsing errors.
- Timeouts: Requests exceeding the configured timeout will fail; increase the timeout if necessary.
- Retries exhausted: If the maximum number of retries is reached without success, check network connectivity and API limits.
- Langfuse metadata errors: Invalid JSON in custom metadata or missing required session ID may cause trace logging failures.
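The timeout and retry behavior described above can be sketched as a simple retry loop. This is an illustrative model of the Max Retries option, not the node's implementation; `request_fn` stands in for the OpenAI API call, and `TimeoutError` stands in for any transient failure:

```python
import time


def call_with_retries(request_fn, max_retries=2):
    """Retry transient failures up to max_retries times, raising once
    retries are exhausted (mirroring the 'Retries exhausted' case above)."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except TimeoutError as exc:  # stand-in for a transient API failure
            last_exc = exc
            time.sleep(min(0.1 * 2 ** attempt, 1.0))  # small exponential backoff
    raise RuntimeError("Retries exhausted") from last_exc
```

If this pattern still exhausts its retries, that points to the network-connectivity or API-limit checks suggested above rather than a transient blip.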
Links and References
- OpenAI Models Overview
- n8n Documentation: Langchain OpenAI Chat Model Node
- Langfuse Documentation (for trace metadata and usage)