OpenAI

Consume the OpenAI API


Overview

This node integrates with the OpenAI API to generate text completions based on a given prompt. It is designed to produce natural language responses, making it useful for tasks such as drafting emails, writing code snippets, generating creative content, or answering questions.

Common scenarios include:

  • Automating content creation by providing a seed prompt and receiving generated text.
  • Enhancing chatbots with AI-generated replies.
  • Summarizing or expanding text inputs.
  • Experimenting with different language models to find the best fit for specific tasks.

For example, you can input a prompt like "Write a short poem about the sea," select a model, and receive a creative completion from the AI.
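
Under the hood this amounts to a plain HTTP call to the OpenAI completions endpoint. Below is a minimal sketch of the equivalent request in TypeScript; the model name, max_tokens value, and OPENAI_API_KEY environment variable are illustrative placeholders, not settings taken from the node itself.

// Minimal sketch of the underlying completions request (Node 18+ fetch).
// Model name, max_tokens, and OPENAI_API_KEY are placeholders.
const apiKey = process.env.OPENAI_API_KEY;

async function complete(prompt: string, model: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, prompt, max_tokens: 64 }),
  });
  const json = await response.json();
  // The raw API response lists generated completions under "choices".
  return json.choices[0].text;
}

complete("Write a short poem about the sea", "gpt-3.5-turbo-instruct").then(console.log);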

Properties

  • Model: The AI model used to generate the completion. Options include various OpenAI models, filtered to exclude audio, image, speech, and certain specialized models. See OpenAI Models for details.
  • Prompt: The text prompt to generate completions for. Example: "Say this is a test".
  • Simplify: Whether to return a simplified version of the response containing only the relevant completion data instead of the full raw API response. Defaults to true.
  • Echo Prompt: If enabled, the prompt is included in the output along with the completion.
  • Frequency Penalty: Penalizes new tokens based on their existing frequency in the text so far, reducing repetition. Range: -2 to 2. Default: 0.
  • Maximum Number of Tokens: The maximum number of tokens to generate in the completion. Most models support up to 2,048 tokens; newer ones support up to 32,768. Default: 16.
  • Number of Completions: How many completions to generate per prompt. Generating multiple completions consumes more tokens. Default: 1.
  • Presence Penalty: Penalizes new tokens based on whether they already appear in the text so far, encouraging the model to cover new topics. Range: -2 to 2. Default: 0.
  • Sampling Temperature: Controls the randomness of completions. Lower values make output more deterministic; higher values make it more random. Range: 0 to 1. Default: 1.
  • Top P: Controls diversity via nucleus sampling: a value of 0.5 means half of all likelihood-weighted options are considered. Adjust this or Sampling Temperature, but not both. Range: 0 to 1. Default: 1.
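
Apart from Simplify, which only affects how the node formats its output, the properties above roughly correspond to fields of the OpenAI completions request body. A sketch of that mapping follows; the field names match the public API, but the correspondence itself is an assumption about the node's internals.

// Assumed mapping of node properties to completions request fields.
interface CompletionRequestBody {
  model: string;              // Model
  prompt: string;             // Prompt
  echo?: boolean;             // Echo Prompt
  frequency_penalty?: number; // Frequency Penalty (-2 to 2, default 0)
  max_tokens?: number;        // Maximum Number of Tokens (default 16)
  n?: number;                 // Number of Completions (default 1)
  presence_penalty?: number;  // Presence Penalty (-2 to 2, default 0)
  temperature?: number;       // Sampling Temperature (0 to 1, default 1)
  top_p?: number;             // Top P (0 to 1, default 1)
}

// Example body for the prompt shown above.
const body: CompletionRequestBody = {
  model: "gpt-3.5-turbo-instruct", // illustrative model name
  prompt: "Say this is a test",
  max_tokens: 16,
  n: 1,
};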

Output

The node outputs JSON data representing the completion(s) generated by the selected OpenAI model. When simplification is enabled (default), the output contains a data field which holds an array of completion choices returned by the API. Each choice typically includes the generated text and metadata.

This node produces no binary data (files or media); its output is text completions only.

Example simplified output structure:

{
  "data": [
    {
      "text": "Generated completion text here",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ]
}
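
The simplified structure can be thought of as lifting the choices array out of the raw API response and returning it as data. A sketch of that transformation is shown below; the raw response shape follows the public API, while the exact simplification logic used by the node is an assumption.

// Assumed shape of the raw completions response (fields per the public API).
interface RawCompletionResponse {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: Array<{
    text: string;
    index: number;
    logprobs: unknown;
    finish_reason: string;
  }>;
}

// Sketch of what the Simplify option effectively does: keep only the choices.
function simplify(raw: RawCompletionResponse): { data: RawCompletionResponse["choices"] } {
  return { data: raw.choices };
}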

Dependencies

  • Requires an API key credential for authenticating with the OpenAI API.
  • The node uses the OpenAI REST API endpoint, defaulting to https://api.openai.com.
  • No additional external dependencies beyond network access to OpenAI's service.

Troubleshooting

  • Invalid API Key or Authentication Errors: Ensure that a valid API key credential is configured in n8n. Check for typos or expired keys.
  • Model Not Found or Unsupported: Selecting a model that is deprecated or unsupported may cause errors. Use the provided model list filtered by the node.
  • Token Limit Exceeded: Setting maxTokens too high or requesting too many completions (n) can exceed token quotas or limits imposed by the model.
  • Empty or Unexpected Output: Verify that the prompt is correctly set and non-empty. Also, check if the simplify option is enabled, which changes the output format.
  • Network Issues: Connectivity problems to the OpenAI API endpoint will cause request failures. Confirm internet access and proxy settings if applicable.
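
When diagnosing these failures, inspecting the HTTP status and error message returned by the API can narrow down the cause. The sketch below is only illustrative; the status codes and the error.message field follow the public OpenAI API conventions, and the handling itself is an assumption.

// Illustrative check for common OpenAI API failure modes.
async function checkResponse(response: Response): Promise<void> {
  if (response.ok) return;
  const body = await response.json().catch(() => ({}));
  const message = body?.error?.message ?? response.statusText;
  if (response.status === 401) {
    throw new Error(`Authentication failed (check the API key): ${message}`);
  }
  if (response.status === 404) {
    throw new Error(`Model not found or unsupported: ${message}`);
  }
  if (response.status === 429) {
    throw new Error(`Rate limit or token quota exceeded: ${message}`);
  }
  throw new Error(`OpenAI API request failed (${response.status}): ${message}`);
}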
