Overview
This node integrates with CometAPI's large language models to generate text completions from conversational messages. It is designed for scenarios where you want to leverage advanced AI models for chatbots, content generation, or interactive assistants. For example, you can use it to build a customer support chatbot that responds dynamically to user queries, or to generate creative writing prompts by providing system and user messages.
The node sends a series of messages with roles (user, assistant, system) to the API and receives a completion response from the selected model.
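As a rough sketch, the payload the node assembles looks like the following; the field names (`model`, `messages`, `role`, `content`) are assumed to follow the OpenAI-compatible chat schema that endpoints like this typically expose.

```typescript
// Sketch of the chat-completion payload this node assembles.
// Field names are assumed from the OpenAI-compatible convention.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatCompletionRequest {
  model: string;           // e.g. "gpt-4o-mini"
  messages: ChatMessage[]; // conversation history, oldest first
}

const request: ChatCompletionRequest = {
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'You are a helpful support assistant.' },
    { role: 'user', content: 'How do I reset my password?' },
  ],
};
```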
Properties
| Name | Meaning |
|---|---|
| Model | The specific language model to use for generating completions. Example: "gpt-4o-mini". |
| Messages | A collection of message objects forming the conversation history. Each message has: - Role: User, Assistant, or System - Content: The text content of the message. At least one message must have non-empty content. |
| Options | Additional parameters to customize the completion behavior: - Frequency Penalty: Penalizes repeated tokens (range -2 to 2). - Maximum Number of Tokens: Max tokens to generate (up to 32768). - Presence Penalty: Encourages new topics (range -2 to 2). - Sampling Temperature: Controls randomness (0 to 2). - Stream: Whether to receive partial results as server-sent events (true/false). - Top P: Controls diversity via nucleus sampling (0 to 1). |
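For illustration, the options above map onto request-body parameters roughly as sketched below; the snake_case names (`frequency_penalty`, `max_tokens`, and so on) are an assumption based on the OpenAI-compatible convention rather than confirmed CometAPI field names.

```typescript
// Hypothetical mapping of the node's Options to request-body fields.
// The snake_case parameter names are assumed, not confirmed.
interface CompletionOptions {
  frequency_penalty?: number; // -2 to 2, penalizes repeated tokens
  max_tokens?: number;        // up to 32768
  presence_penalty?: number;  // -2 to 2, encourages new topics
  temperature?: number;       // 0 to 2, controls randomness
  stream?: boolean;           // emit partial results as server-sent events
  top_p?: number;             // 0 to 1, nucleus sampling
}

const options: CompletionOptions = {
  max_tokens: 1024,
  temperature: 0.7,
  top_p: 0.9,
  stream: false,
};
```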
Output
The node outputs one JSON object per input item. Each output contains the full response from the CometAPI endpoint, including the generated completion text and related metadata.
If streaming is enabled, partial message deltas may be sent as data-only server-sent events, allowing real-time consumption of the generated text.
No binary data output is produced by this node.
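If you consume the streamed output directly, a minimal sketch of collecting the deltas looks like this; it assumes OpenAI-style `data:` lines terminated by a `[DONE]` sentinel and, for brevity, that SSE lines do not split across chunks.

```typescript
// Sketch of consuming streamed deltas from an already-issued fetch Response.
// Assumes OpenAI-style "data: {...}" lines with a "data: [DONE]" sentinel;
// chunk handling is simplified (lines are assumed not to split across chunks).
async function collectStreamedText(res: Response): Promise<string> {
  const decoder = new TextDecoder();
  let text = '';
  // Web ReadableStreams are async-iterable in Node 18+; the cast keeps
  // the TypeScript DOM typings happy.
  for await (const chunk of res.body as any) {
    for (const line of decoder.decode(chunk, { stream: true }).split('\n')) {
      if (!line.startsWith('data:')) continue; // skip non-data SSE lines
      const payload = line.slice(5).trim();
      if (!payload || payload === '[DONE]') continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) text += delta; // accumulate partial message deltas
    }
  }
  return text;
}
```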
Dependencies
- Requires an API key credential for authenticating with the CometAPI service.
- The node makes HTTP POST requests to https://api.cometapi.com/v1/chat/completions (see the request sketch below).
- Proper network access and valid credentials are necessary.
- No additional environment variables are required beyond the API authentication setup.
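A minimal sketch of the request the node performs, assuming a Bearer authorization header and an OpenAI-compatible response shape (neither is confirmed here):

```typescript
// Direct call sketch against the same endpoint and credential.
// The Bearer auth scheme and response shape are assumptions based on
// the OpenAI-compatible convention.
async function complete(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch('https://api.cometapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!res.ok) {
    throw new Error(`CometAPI request failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  // Response shape assumed: choices[0].message.content holds the completion.
  return data.choices?.[0]?.message?.content ?? '';
}
```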
Troubleshooting
Error: "At least one message with content is required"
This occurs when every provided message has empty or whitespace-only content. Ensure at least one message contains meaningful text.
HTTP request errors
Errors returned from the API include status codes and error messages. Common causes are an invalid API key, exceeded token limits, or a malformed request. Check your API key and parameter values.
Streaming issues
If streaming is enabled, make sure your workflow can handle streamed partial responses; otherwise, unexpected behavior may occur.
Continue On Fail
If enabled, the node outputs error details in the JSON instead of stopping execution, which is useful for debugging or partial processing.
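Downstream of the node, a hypothetical n8n Code-node snippet could separate failed items from successful ones; it assumes the error details land under `item.json.error`, which may differ in practice.

```typescript
// Hypothetical check in an n8n Code node running after this node.
// Assumes that, with Continue On Fail enabled upstream, failed items
// carry their error details under item.json.error.
const succeeded = $input.all().filter((item) => item.json.error === undefined);
const failed = $input.all().filter((item) => item.json.error !== undefined);

// Pass only successful completions forward; failed items could instead
// be routed to a logging or retry branch.
return succeeded;
```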
Links and References
- CometAPI Documentation (hypothetical link for reference)
- Concepts referenced: token limits, frequency/presence penalties, temperature, nucleus sampling (top_p) — common in large language model APIs.