Overview
This node integrates with the Groq AI SDK to generate text completions or structured objects using Groq models. It supports two input modes: a single text prompt, or a conversation-style list of messages for multi-turn interactions.
Common scenarios include:
- Generating natural language text based on a prompt (e.g., writing assistance, content generation).
- Conducting multi-turn conversations by providing message histories.
- Producing structured data outputs conforming to a JSON schema (not covered here as the operation is "Generate Text").
Practical examples:
- Using a single prompt like "Write a summary of today's news" to get a concise text output.
- Providing a chat history with system, user, and assistant roles to simulate conversational AI responses.
- Adjusting generation parameters such as max tokens and temperature to control output length and creativity.
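As a standalone sketch, the "Simple Prompt" mode corresponds to a call like the one below through the AI SDK's Groq provider. The environment variable name, model ID, and prompt are placeholders, and the option name `maxTokens` reflects AI SDK v4 (newer versions use `maxOutputTokens`); this illustrates the equivalent SDK call, not the node's internal implementation.

```typescript
// Sketch of a "Simple Prompt" style completion via the AI SDK's Groq provider.
import { createGroq } from '@ai-sdk/groq';
import { generateText } from 'ai';

async function main() {
  // API key comes from the Groq credential in n8n; here read from an env var.
  const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });

  const { text, finishReason, usage } = await generateText({
    model: groq('llama-3.1-8b-instant'),
    system: 'You are a helpful assistant.',      // "System" property
    prompt: "Write a summary of today's news",   // "Prompt" property
    maxTokens: 2048,                             // "Max Tokens" option
    temperature: 0.7,                            // "Temperature" option
  });

  console.log(text, finishReason, usage);
}

main().catch(console.error);
```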
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select a Groq model from a dynamically loaded list or specify an ID via expression. Examples include "gemma2-9b-it" or "llama-3.1-8b-instant". |
| Input Type | Choose how to provide input: • Simple Prompt: Use a single text prompt. • Messages: Provide a conversation with multiple messages. |
| System | (Shown if Input Type is Simple Prompt) A system prompt that guides the model's behavior, e.g., "You are a helpful assistant." Optional but recommended for context setting. |
| Prompt | (Shown if Input Type is Simple Prompt) The single text prompt to generate a completion for. Can use expressions to pull data from previous nodes. |
| Messages | (Shown if Input Type is Messages and "Messages as JSON" is false) A collection of messages forming a conversation. Each message has a role (System, User, Assistant) and corresponding content. |
| Messages as JSON | (Shown if Input Type is Messages) Boolean flag to input messages as a raw JSON array instead of UI fields. |
| Messages (JSON) | (Shown if Input Type is Messages and "Messages as JSON" is true) JSON string representing an array of message objects, each with role and content. Must be valid JSON. |
| Options | Collection of optional parameters: • Max Tokens: Maximum number of tokens to generate (default 2048). • Temperature: Controls randomness; higher values produce more creative outputs (default 0.7). • Include Request Body: Whether to include the full request body in the output for debugging or logging purposes. |
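For the "Messages" input mode, the same call takes a messages array instead of a prompt. The array below also illustrates the shape expected by the "Messages (JSON)" field (supplied to the node as a JSON string); the concrete content and model ID are illustrative only, and option names may vary by AI SDK version.

```typescript
import { createGroq } from '@ai-sdk/groq';
import { generateText } from 'ai';

// Each message needs a role of system/user/assistant plus string content.
const messages = [
  { role: 'system' as const, content: 'You are a helpful assistant.' },
  { role: 'user' as const, content: 'What does n8n do?' },
  { role: 'assistant' as const, content: 'n8n is a workflow automation platform.' },
  { role: 'user' as const, content: 'Summarize that in one sentence.' },
];

async function main() {
  const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });

  const { text } = await generateText({
    model: groq('gemma2-9b-it'),
    messages,            // "Messages" / "Messages (JSON)" property
    maxTokens: 2048,     // "Max Tokens" option
    temperature: 0.7,    // "Temperature" option
  });

  console.log(text);
}

main().catch(console.error);
```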
Output
The node outputs a JSON object per input item containing detailed information about the generated text completion:
- `text`: The generated text result, with any internal `<think>` tags removed.
- `reasoning`: Reasoning text extracted from `<think>` tags in the response, if present.
- `finishReason`: Reason why generation finished (e.g., length limit reached).
- `usage`: Token usage statistics, including prompt tokens, completion tokens, total tokens, and cache hit/miss metrics.
- `response`: Metadata about the API response, such as response ID, model ID, timestamp, and headers.
- `warnings`: Any warnings returned by the API.
- `request.body`: The original request body sent to the API, included only when "Include Request Body" is enabled.
No binary data output is produced by this node.
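For reference, one output item can be modeled roughly with the TypeScript type below. The field names follow the list above; the nested shapes (especially of `usage` and `response`) are assumptions based on typical AI SDK results, not a guaranteed contract.

```typescript
// Assumed shape of one output item; nested types are illustrative, not authoritative.
interface GroqGenerateTextItem {
  text: string;                // generated text with <think> tags stripped
  reasoning?: string;          // content extracted from <think> tags, if present
  finishReason: string;        // e.g. "stop" or "length"
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
    cachedTokens?: number;     // cache hit/miss metrics, when the API returns them
  };
  response: {
    id: string;
    modelId: string;
    timestamp: string;         // may surface as a Date depending on serialization
    headers: Record<string, string>;
  };
  warnings?: unknown[];
  request?: { body: unknown }; // only when "Include Request Body" is enabled
}
```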
Dependencies
- Requires an active Groq API key credential configured in n8n.
- Uses the Groq AI SDK (`@ai-sdk/groq`) and OpenAI-compatible client libraries internally.
- Requires network access to `https://api.groq.com` for model listing and generation requests.
Troubleshooting
- Missing API Key: If no API key is provided in credentials, the node throws an error "No API key provided in Groq credentials". Ensure the API key credential is set up correctly.
- Invalid JSON in Messages (JSON): When using JSON input for messages, invalid JSON syntax causes a parse error. Validate the JSON before running the workflow.
- Invalid Message Structure: Messages must be an array of objects with `role` (system/user/assistant) and `content` strings; an incorrect structure triggers a validation error.
- Schema Errors: Not applicable to the "Generate Text" operation, but for other operations invalid JSON schemas cause errors.
- API Errors: Any errors from the Groq API during generation are surfaced. Enabling "Continue On Fail" allows processing subsequent items despite errors.
- Token Limits: Setting `maxTokens` too high may cause API rejections or long wait times; adjust it according to the selected model's limits.
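If messages are supplied as raw JSON, a pre-check along these lines can catch both the JSON-syntax and message-structure errors described above before the workflow runs. This helper is not part of the node; its name and error messages are hypothetical.

```typescript
// Hypothetical pre-check for the "Messages (JSON)" field.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function parseMessagesJson(raw: string): ChatMessage[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch (error) {
    throw new Error(`Messages (JSON) is not valid JSON: ${(error as Error).message}`);
  }
  if (!Array.isArray(parsed)) {
    throw new Error('Messages (JSON) must be an array of message objects');
  }
  for (const item of parsed) {
    const message = item as Partial<ChatMessage>;
    if (
      !message ||
      !['system', 'user', 'assistant'].includes(message.role as string) ||
      typeof message.content !== 'string'
    ) {
      throw new Error('Each message needs a role (system/user/assistant) and string content');
    }
  }
  return parsed as ChatMessage[];
}
```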
Links and References
- Groq API Documentation
- n8n Expressions Documentation
- OpenAI Chat Completion API Reference (conceptually similar interface)
- JSON Schema Specification