Actions
Overview
This node integrates with various large language models (LLMs) to generate text responses based on user prompts. It supports multiple model types, including Gemini, Llama, Mistral, and WizardLM variants, allowing users to select the model best suited for their needs. The primary operation "Generate Text" sends a prompt to the selected LLM and returns the generated text output.
Common scenarios where this node is beneficial include:
- Automating content creation such as article drafts, summaries, or creative writing.
- Generating conversational responses for chatbots.
- Assisting in coding or technical explanations.
- Experimenting with different LLMs for research or development purposes.
Practical example: A user inputs a product description prompt, selects a Gemini model, and receives marketing copy generated by the LLM.
Properties
| Name | Meaning |
|---|---|
| Model Type | Selects the LLM model to use. Options include Gemini variants, Gemma, Llama 3.1 variants, Mistral, Mixtral, and WizardLM. |
| Prompt | The input text prompt that the LLM will respond to. |
| Temperature | Controls randomness in response generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum length of the generated response in tokens. |
| Top P | Nucleus sampling parameter controlling token selection breadth (probability mass). |
| Top K | Limits token selection to the top K probable tokens at each step. |
| Safety Settings: Hate Block | Level of filtering for hate speech content: None, Low, Medium, High. Applies only to certain Gemini models. |
| Safety Settings: Harassment Block | Level of filtering for harassment content: None, Low, Medium, High. Applies only to certain Gemini models. |
| Safety Settings: Sexual Block | Level of filtering for sexual content: None, Low, Medium, High. Applies only to certain Gemini models. |
| Safety Settings: Dangerous Content Block | Level of filtering for dangerous content: None, Low, Medium, High. Applies only to certain Gemini models. |
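To make the sampling parameters above concrete, here is a minimal sketch of how temperature, Top K, and Top P typically interact when choosing the next token. The exact order of operations varies by provider; this is an illustration, not the node's actual implementation.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Pick a token index from raw logits using temperature, top-k, and top-p.
    Illustrative only; providers may apply these filters in a different order."""
    # Temperature scaling: values below 1.0 sharpen the distribution,
    # values above 1.0 flatten it (more diverse outputs).
    scaled = [l / temperature for l in logits]
    # Softmax to probabilities (subtract max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = sorted(
        ((i, e / total) for i, e in enumerate(exps)),
        key=lambda p: p[1], reverse=True,
    )
    # Top K: keep only the K most probable tokens (0 disables the filter).
    if top_k > 0:
        probs = probs[:top_k]
    # Top P (nucleus): keep the smallest prefix whose probability mass
    # reaches top_p.
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalise the surviving candidates and sample one.
    z = sum(p for _, p in kept)
    r = random.random() * z
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With `top_k=1` (or a very small `top_p`) the choice becomes effectively greedy, which is why low values make outputs more deterministic.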
Output
The node outputs JSON data containing the response from the LLM API. The structure typically includes the generated text or an error message if the request fails.
Example output JSON structure:
{
  "response": "<generated text from the LLM>",
  "modelType": "<model used>",
  "usage": {
    "tokensUsed": 1234
  }
}
If an error occurs during the API call, the output JSON contains an error field describing the issue.
No binary data output is produced by this node.
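A downstream step (for example, an n8n Code node) can read this structure defensively. The sketch below assumes the field names from the example above (`response`, `error`); adjust them to match your actual output.

```python
import json

def extract_response(raw):
    """Return the generated text from the node's output JSON.
    Raises if the output carries an 'error' field instead.
    Field names follow the example structure above and are assumptions."""
    data = json.loads(raw) if isinstance(raw, str) else raw
    if "error" in data:
        raise RuntimeError(f"LLM call failed: {data['error']}")
    return data.get("response", "")
```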
Dependencies
- Requires an API key credential for authentication with the external LLM service.
- The node makes HTTP POST requests to the configured API domain endpoint /llms.
- For the video processing operation (not requested here), it calls a fixed external URL.
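For reference, such a POST request could be assembled as follows. The payload field names and the bearer-token authorization scheme are assumptions for illustration; consult your LLM provider's API documentation for the real contract.

```python
import json
import urllib.request

def build_llms_request(api_domain, api_key, model_type, prompt,
                       temperature=0.7, max_tokens=256):
    """Build an HTTP POST request to the /llms endpoint.
    Payload field names and the Bearer auth header are illustrative
    assumptions, not the node's confirmed wire format."""
    payload = json.dumps({
        "modelType": model_type,
        "prompt": prompt,
        "temperature": temperature,
        "maxTokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{api_domain}/llms",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # auth scheme assumed
        },
        method="POST",
    )

# Sending it is then one line (requires network access and valid credentials):
# with urllib.request.urlopen(build_llms_request(...)) as resp:
#     result = json.loads(resp.read())
```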
Troubleshooting
No credentials returned!
This error indicates missing or misconfigured API credentials. Ensure the API key credential is properly set up in n8n.
API request failures
Network issues, invalid API keys, or exceeded rate limits can cause errors. Check API key validity, network connectivity, and usage quotas.
Invalid model type or parameters
Selecting unsupported model types or invalid parameter values may result in API errors. Verify that the chosen model and parameters conform to the supported options.
Empty or malformed prompt
An empty prompt may lead to unexpected results or errors. Always supply a meaningful prompt string.
Continue On Fail behavior
If enabled, the node returns error details in the output JSON instead of stopping execution, which is useful for debugging or partial workflows.
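With Continue On Fail enabled, a downstream step may receive a mix of successful and failed items. A simple way to separate them, assuming an `error` key marks a failed item as described above:

```python
def split_results(items):
    """Partition continue-on-fail outputs into successes and failures.
    Assumes each item is the node's output JSON and that a failed item
    carries an 'error' key (an assumption based on the output description)."""
    ok = [item for item in items if "error" not in item]
    failed = [item for item in items if "error" in item]
    return ok, failed
```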
Links and References
- OpenAI GPT-like Models Documentation (for general understanding of LLM parameters)
- n8n HTTP Request Node Documentation (similar request handling)
- Refer to your LLM provider's API documentation for detailed model capabilities and parameter descriptions.
