Overview
This node enables interaction with OpenAI chat models to generate text completions based on user input. It supports multiple OpenAI models including GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, and custom models. The node sends a prompt (input text) to the selected model and returns the generated response along with metadata such as usage statistics and response IDs.
Common scenarios for this node include:
- Generating conversational replies or chatbot responses.
- Creating content such as articles, summaries, or creative writing.
- Assisting with coding, brainstorming, or answering questions.
- Continuing conversations by referencing previous response IDs for context.
Practical example: A user inputs a question or statement, selects GPT-4 Turbo as the model, and receives a detailed, context-aware answer. Optionally, they can provide a previous response ID to maintain conversation continuity.
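The continuation flow above can be sketched as a small helper that assembles the request body. This is a minimal illustration, not the node's actual implementation; `build_request_body` is a hypothetical name, and the field names follow the OpenAI Responses API (`model`, `input`, `previous_response_id`).

```python
def build_request_body(model, input_text, previous_response_id=None):
    """Build the JSON body for a Responses API call.

    Passing `previous_response_id` chains this request to an earlier
    response so the model can answer with the prior conversation in context.
    """
    body = {"model": model, "input": input_text}
    if previous_response_id:
        body["previous_response_id"] = previous_response_id
    return body
```

Omitting `previous_response_id` starts a fresh conversation; supplying the ID from an earlier output continues it.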
Properties
| Name | Meaning |
|---|---|
| Model | The OpenAI chat model to use. Options: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, or Custom (specify name below). |
| Custom Model Name | Name of the custom model to use (only shown if "Custom" model is selected). |
| Input | The input text prompt sent to the OpenAI model. |
| Previous Response ID | Optional ID of a previous response to continue a conversation contextually. |
| Options | A collection of optional parameters controlling generation behavior, listed below. |
| - Temperature | Controls randomness in the response (0 to 2). Lower values = more focused/deterministic output. |
| - Max Tokens | Maximum number of tokens to generate in the response (1 to 4096). |
| - Top P | Controls diversity via nucleus sampling (0 to 1). |
| - Frequency Penalty | Penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition (-2 to 2). |
| - Presence Penalty | Penalizes tokens that have appeared at all, encouraging the model to introduce new topics (-2 to 2). |
| - Include Response ID | Whether to include the response ID in the output (true/false). |
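The documented ranges for these options can be checked before a request is sent. The sketch below is illustrative only (`validate_options` is a hypothetical helper); the ranges match the table above.

```python
# Documented (min, max) range for each numeric option.
OPTION_RANGES = {
    "temperature": (0, 2),
    "max_tokens": (1, 4096),
    "top_p": (0, 1),
    "frequency_penalty": (-2, 2),
    "presence_penalty": (-2, 2),
}

def validate_options(options):
    """Raise ValueError for any option outside its documented range."""
    for name, value in options.items():
        if name in OPTION_RANGES:
            lo, hi = OPTION_RANGES[name]
            if not (lo <= value <= hi):
                raise ValueError(f"{name} must be between {lo} and {hi}, got {value}")
    return options
```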
Output
The node outputs an array of JSON objects, each corresponding to an input item, containing:
- `usage`: Usage statistics returned by OpenAI (e.g., token counts).
- `model`: The model used for the completion.
- `created`: Timestamp of the response creation.
- `id` (optional): The unique response ID from OpenAI (included if enabled).
- `object` (optional): The type of object returned by OpenAI.
- `response`: The generated text response from the model.
- `content`: Same as `response`, provided for convenience.
- `response_id`: The unique response ID.
- `previous_response_id`: The ID of the previous response if provided, else null.
- `full_response`: The complete raw response object from the OpenAI API.
The node does not output binary data.
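The mapping from a raw API payload to the output fields above can be sketched as follows. This is an assumption-laden illustration: `map_to_output` is a hypothetical helper, and the keys read from `raw` (`id`, `object`, `model`, `created`, `usage`, `output_text`) are assumed names for the raw response fields.

```python
def map_to_output(raw, previous_response_id=None, include_response_id=True):
    """Map a raw API payload to the node's documented output fields."""
    text = raw.get("output_text", "")  # assumed location of the generated text
    item = {
        "usage": raw.get("usage"),
        "model": raw.get("model"),
        "created": raw.get("created"),
        "response": text,
        "content": text,               # duplicate of `response`, for convenience
        "response_id": raw.get("id"),
        "previous_response_id": previous_response_id,
        "full_response": raw,          # complete raw response object
    }
    if include_response_id:            # governed by the Include Response ID option
        item["id"] = raw.get("id")
        item["object"] = raw.get("object")
    return item
```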
Dependencies
- Requires an API key credential for authenticating with the OpenAI API.
- Makes HTTP POST requests to `https://api.openai.com/v1/responses`.
- Supports an optional organization ID header if provided in credentials.
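Constructing the request headers from the credential can be sketched roughly as below. `build_headers` is a hypothetical helper; the `Authorization` bearer scheme and the `OpenAI-Organization` header are the ones OpenAI's API uses.

```python
def build_headers(api_key, organization_id=None):
    """Build HTTP headers for a request to the OpenAI API."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if organization_id:
        # Only sent when the credential includes an organization ID.
        headers["OpenAI-Organization"] = organization_id
    return headers
```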
Troubleshooting
- Missing Input Text: If the input text is empty or whitespace, the node throws an error indicating that input text is required.
- Missing Model Name: If the model name is not specified or empty (including when "Custom" is selected but no custom model name is given), an error is thrown.
- API Errors: If the OpenAI API returns an error (e.g., invalid API key, quota exceeded, invalid parameters), the node surfaces the HTTP status code and error message.
- Request Failures: Network issues or other request failures result in an error with the failure message.
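The first two error conditions above amount to simple pre-flight validation, which can be sketched as follows. `check_input` and `resolve_model` are hypothetical helper names; the error messages paraphrase the ones described above.

```python
def check_input(input_text):
    """Reject empty or whitespace-only input text."""
    if not (input_text or "").strip():
        raise ValueError("Input text is required")
    return input_text

def resolve_model(model, custom_model_name=None):
    """Return the effective model name, raising the documented errors."""
    if model == "custom":
        if not (custom_model_name or "").strip():
            raise ValueError("Custom model selected but no custom model name given")
        return custom_model_name.strip()
    if not (model or "").strip():
        raise ValueError("Model name is required")
    return model
```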
To resolve errors:
- Ensure the input text is provided and non-empty.
- Select a valid model or specify a custom model name.
- Verify the API key credential is correctly configured and has necessary permissions.
- Check network connectivity and OpenAI service status.