Overview
This node integrates with OpenAI GPT models to generate AI-powered text completions and streams the responses in real time. It sends a user-defined message, together with a system prompt that guides the model's behavior, then outputs both the streamed response chunks as they arrive and the final complete result.
Common scenarios where this node is beneficial include:
- Building conversational chatbots that require streaming partial responses for better user experience.
- Generating text completions or suggestions dynamically while the user waits.
- Integrating AI assistance into workflows where incremental output is useful (e.g., content creation, coding help).
Practical example:
You can configure the node to use "GPT-4 Turbo" with a system prompt like "You are a helpful AI assistant." Then send a user message such as "Hello, how can you help me today?" The node will stream back the AI's reply chunk by chunk, allowing downstream nodes or UI components to display the response progressively, and finally output the full completion once done.
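The progressive display described above can be sketched as follows. This is a minimal illustration, not the node's implementation: the chunk shape (`state` and `content` fields) is an assumption based on the output description below, and the accumulator function is hypothetical.

```python
# Sketch of consuming the node's streamed output chunk by chunk.
# Chunk shape ({"state": ..., "content": ...}) is an assumed example,
# not the node's exact schema.
def consume_stream(chunks):
    """Accumulate streamed chunks, yielding progressively longer partials."""
    text = ""
    for chunk in chunks:
        text += chunk.get("content", "")
        yield text  # a UI component could display this partial reply

# Simulated chunks as a downstream node might receive them
chunks = [
    {"state": "thinking", "content": ""},
    {"state": "responding", "content": "Hello! "},
    {"state": "responding", "content": "I can help with writing and code."},
    {"state": "complete", "content": ""},
]
partials = list(consume_stream(chunks))
full_reply = partials[-1]
```

Each yielded value is the reply so far, so a UI can redraw on every chunk and keep the last value as the complete text.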
Properties
| Name | Meaning |
|---|---|
| Model | The OpenAI model to use. Options: GPT-4 Turbo, GPT-4, GPT-3.5 Turbo, GPT-4o, GPT-4o Mini |
| System Prompt | System prompt to guide the model's behavior (e.g., instructions or context for the AI) |
| Message | The message to send to OpenAI in JSON format, typically including role and content fields |
| Temperature | Controls randomness in the response; value between 0 (deterministic) and 2 (more random) |
| Max Tokens | Maximum number of tokens to generate in the completion |
| Base URL | Optional custom base URL for OpenAI-compatible APIs; leave empty to use standard OpenAI endpoint |
| Advanced Options | Collection of optional settings: |
| - Message State | Override the message state for all streamed chunks. Options: Default (auto), Thinking, Responding, Active, Waiting, Complete |
| - Progress Message | Optional progress message to include with the message state |
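The Message property expects JSON with `role` and `content` fields, as noted in the table above. A minimal sketch of building and sanity-checking that value:

```python
import json

# Build the Message value the node expects: a JSON object with
# "role" and "content" fields (per the Properties table above).
message = {"role": "user", "content": "Hello, how can you help me today?"}
message_json = json.dumps(message)

# Round-trip to confirm the JSON is well-formed before passing it to the node
parsed = json.loads(message_json)
```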
Output
The node produces two outputs:
Stream (first output):
An array of JSON objects representing streamed chunks of the AI response. Each chunk includes metadata such as the current message state and, optionally, a progress message. This allows downstream nodes or interfaces to process or display partial results as they arrive.
Result (second output):
A single JSON object containing the full final response from the OpenAI model, including details such as the model used, the entire generated text, token usage, and other relevant metadata.
The node does not output binary data.
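For orientation, the Result output might look like the sketch below. The field names here are illustrative assumptions based on the description above (model used, generated text, token usage), not the node's exact schema.

```python
# Illustrative shape of the Result output; field names are assumptions,
# not the node's documented schema.
result = {
    "model": "gpt-4-turbo",
    "text": "Hello! I can help with writing and code.",
    "usage": {"prompt_tokens": 24, "completion_tokens": 11, "total_tokens": 35},
}

# Downstream nodes typically read the full completion and token usage:
completion = result["text"]
total_tokens = result["usage"]["total_tokens"]
```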
Dependencies
- Requires an API key credential for authenticating with OpenAI or compatible APIs.
- Supports specifying a custom base URL to connect to OpenAI-compatible services beyond the official OpenAI API.
- Uses an internal service module to handle message processing, request sending, and formatting of streamed chunks and final responses.
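A custom Base URL is typically combined with the standard chat-completions path. The sketch below shows one way that resolution could work; the helper function is hypothetical, though the default endpoint and path are the official OpenAI ones.

```python
OPENAI_DEFAULT = "https://api.openai.com/v1"

def chat_completions_url(base_url=None):
    """Resolve the request URL: fall back to the official OpenAI API
    when Base URL is left empty (hypothetical helper)."""
    base = (base_url or OPENAI_DEFAULT).rstrip("/")
    return f"{base}/chat/completions"

default_url = chat_completions_url()
custom_url = chat_completions_url("https://my-proxy.example.com/v1/")
```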
Troubleshooting
Common issues:
- Invalid or missing API key credential will cause authentication failures.
- Incorrectly formatted JSON in the "Message" property may lead to parsing errors.
- Setting an unsupported model name could result in API errors.
- Network or connectivity problems when using a custom base URL.
Error messages:
- Errors returned from the OpenAI API are caught and formatted into error response objects emitted on both outputs.
- If the node fails to parse the input message JSON, it may throw a syntax error.
- Token limit exceeded errors if Max Tokens is set too high relative to model limits.
Resolutions:
- Ensure valid API credentials are configured.
- Validate JSON syntax in the "Message" field before execution.
- Use supported model names from the provided options.
- Verify network access and correctness of any custom base URL.
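Validating the "Message" JSON before execution, as recommended above, can be done with a small pre-flight check. This is a sketch with a hypothetical helper; the required `role`/`content` fields follow the Properties table.

```python
import json

def validate_message(raw):
    """Pre-flight check for the "Message" field: must be valid JSON
    and an object with the role/content fields the node expects
    (hypothetical helper)."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"invalid JSON: {e}"
    if not isinstance(msg, dict) or not {"role", "content"} <= msg.keys():
        return False, "message must be an object with 'role' and 'content'"
    return True, None

ok_good, _ = validate_message('{"role": "user", "content": "Hi"}')
ok_bad, err = validate_message('{"role": "user"')  # truncated JSON
```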