Overview
This node integrates with the OpenAI API to generate chat completions using conversational models such as GPT. It lets users send a sequence of messages with roles (system, user, assistant) and receive AI-generated responses. This is useful for building chatbots and virtual assistants, generating content, or powering any application that requires natural language understanding and generation.
Practical examples:
- Creating a customer support chatbot that responds to user queries.
- Generating creative writing prompts or story continuations.
- Automating conversational workflows in business applications.
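The conversation history this node sends follows the OpenAI chat message structure. As a hedged sketch (the helper name `build_messages` is an assumption for illustration, not part of the node), assembling such a message list might look like:

```python
def build_messages(system_prompt, history, user_input):
    """Assemble a chat-completion message list from a system prompt,
    prior (role, content) turns, and the latest user input."""
    messages = [{"role": "system", "content": system_prompt}]
    for role, content in history:
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": user_input})
    return messages

# Example: a short customer-support exchange
msgs = build_messages(
    "You are a helpful support agent.",
    [("user", "Where is my order?"),
     ("assistant", "Could you share the order ID?")],
    "It's #12345.",
)
```

Each entry mirrors one row of the node's Prompt collection: a Role plus its text Content.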
Properties
| Name | Meaning |
|---|---|
| Model / chatModel | The OpenAI model used to generate the completion. Options include GPT models starting with "gpt-" but excluding vision-specific ones. |
| Prompt | A collection of messages forming the conversation history. Each message has a Role (Assistant, System, User) and Content (text). |
| Simplify | Whether to return a simplified response containing only the main data choices instead of the full raw API response. |
| Echo Prompt | If true, the prompt messages are echoed back along with the completion in the response. |
| Frequency Penalty | Penalizes new tokens based on their existing frequency in the text to reduce repetition. Range: -2 to 2. |
| JSON Schema | Defines a JSON schema to validate or structure the response when using JSON schema response format. |
| Maximum Number of Tokens | Limits the maximum tokens generated in the completion. Most models support up to 2048 tokens; newer ones up to 32768. |
| Number of Completions | How many completions to generate per prompt. Higher values consume more tokens and quota. |
| Presence Penalty | Penalizes new tokens based on whether they appear in the text so far, encouraging discussion of new topics. Range: -2 to 2. |
| Response Format Type | The format of the response: either plain text or JSON schema. |
| Sampling Temperature | Controls randomness of output. Lower values make output more deterministic; higher values increase creativity. Range: 0 to 2. |
| Schema Name | The name assigned to the JSON schema when using JSON schema response format. |
| Top P | Controls diversity via nucleus sampling. Values between 0 and 1, where lower values limit token selection to the most likely options. |
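The properties above map onto fields of the OpenAI chat completions request body. The following sketch shows an illustrative payload; parameter names match the OpenAI API, but the model choice and all values are example assumptions:

```python
# Illustrative request body combining the node's properties (values are examples only)
payload = {
    "model": "gpt-4o-mini",              # Model / chatModel: any non-vision "gpt-" model
    "messages": [
        {"role": "user", "content": "Summarize our return policy."}
    ],
    "max_tokens": 256,                   # Maximum Number of Tokens
    "n": 1,                              # Number of Completions
    "temperature": 0.7,                  # Sampling Temperature (0 to 2)
    "top_p": 1.0,                        # Top P (nucleus sampling, 0 to 1)
    "frequency_penalty": 0.5,            # Frequency Penalty (-2 to 2)
    "presence_penalty": 0.0,             # Presence Penalty (-2 to 2)
}

# When Response Format Type is set to JSON schema, the Schema Name and
# JSON Schema properties populate the response_format field:
payload["response_format"] = {
    "type": "json_schema",
    "json_schema": {
        "name": "return_policy_summary",     # Schema Name (example value)
        "schema": {                          # JSON Schema (example structure)
            "type": "object",
            "properties": {"summary": {"type": "string"}},
            "required": ["summary"],
        },
    },
}
```

Temperature and Top P both constrain randomness; a common practice is to adjust one and leave the other at its default.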
Output
The node outputs a JSON object representing the API response from OpenAI's chat completion endpoint. If "Simplify" is enabled, the output contains a data field with an array of choice objects representing generated completions. Each choice typically includes the generated message content and metadata.
Binary data output is not applicable for this node as it deals with text-based chat completions.
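To show what downstream nodes typically do with this output, here is a hedged sketch of pulling the generated text out of the choices array. The sample response is fabricated for illustration and only mirrors the general shape of the API's output:

```python
# Fabricated example response shaped like a chat-completion result
raw_response = {
    "id": "chatcmpl-example",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
}

def extract_replies(response):
    """Return the assistant message content from each choice."""
    return [c["message"]["content"] for c in response.get("choices", [])]

replies = extract_replies(raw_response)
```

With "Simplify" enabled, the same choice objects appear under the output's data field instead of the full raw response.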
Dependencies
- Requires an API key credential for authenticating with the OpenAI API.
- Network access to OpenAI's API endpoint (`https://api.openai.com` by default).
- Proper configuration of the API key credential within n8n.
Troubleshooting
Common issues:
- Invalid or missing API key will cause authentication errors.
- Exceeding token limits may result in errors or truncated responses.
- Using unsupported or incorrect model names will cause request failures.
- Improperly formatted prompt messages may lead to unexpected results.
Error messages:
- Authentication errors: Check that the API key credential is correctly set up.
- Rate limit errors: Reduce request frequency or check your OpenAI quota.
- Validation errors: Ensure prompt messages and parameters conform to OpenAI API requirements.
- Token limit exceeded: Adjust `maxTokens` to fit within the model's context length.
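For rate limit errors, a common mitigation is to retry with exponential backoff. The sketch below is a generic illustration, not part of the node; the exception type and `request_fn` callable are placeholders for whatever your workflow uses to issue the request:

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn with exponential backoff when it raises a
    retryable error (e.g. an HTTP 429 rate limit)."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:  # placeholder for a rate-limit error type
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Reducing request frequency this way also helps stay within your OpenAI quota.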