Actions
Overview
This node integrates with a language model service to generate text completions or process videos based on user input. It supports multiple large language models (LLMs) and allows fine-tuning of generation parameters such as temperature, max tokens, and sampling controls. The node is useful for automating content creation, generating responses, summarizing text, or analyzing video content via an external API.
Typical use cases include:
- Generating creative or informative text from prompts.
- Customizing output randomness and length for tailored results.
- Processing video URLs to extract insights or perform analysis (though this operation is hidden in the provided context).
- Applying safety filters to block hate speech, harassment, sexual content, or dangerous content when using certain models.
Properties
| Name | Meaning |
|---|---|
| Model Type | Selects the LLM model to use. Options include Gemini variants, Gemma, Llama 3.1 variants, Mistral, Mixtral, WizardLM, etc. |
| Prompt | The text prompt that the LLM will respond to. |
| Temperature | Controls randomness in response generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum number of tokens in the generated response. |
| Top P | Nucleus sampling parameter controlling token selection breadth (probability mass). |
| Top K | Limits token selection to top K candidates during generation. |
| Safety Settings: Hate Block | Level of filtering for hate speech content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Harassment Block | Level of filtering for harassment content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Sexual Block | Level of filtering for sexual content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Dangerous Content Block | Level of filtering for dangerous content: None, Low, Medium, High (only for some Gemini models). |
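The properties above map onto the JSON body the node sends to the API. A minimal sketch of that mapping, assuming hypothetical wire-format field names (`model`, `prompt`, `temperature`, `max_tokens`, `top_p`, `top_k`, `safety_settings`), since the actual request shape is not shown in the source:

```typescript
// Hypothetical shape of the node's parameters; the real node may name these differently.
interface LlmParams {
  modelType: string;
  prompt: string;
  temperature: number;
  maxTokens: number;
  topP: number;
  topK: number;
  safetySettings?: { hate: string; harassment: string; sexual: string; dangerous: string };
}

// Build the JSON body for the POST request. Safety settings are only
// attached when provided, since they apply only to some Gemini models.
function buildRequestBody(p: LlmParams): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: p.modelType,
    prompt: p.prompt,
    temperature: p.temperature,
    max_tokens: p.maxTokens,
    top_p: p.topP,
    top_k: p.topK,
  };
  if (p.safetySettings) {
    body.safety_settings = p.safetySettings;
  }
  return body;
}
```

Keeping the safety block optional mirrors the table above: those four properties only exist for certain Gemini models.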
Output
The node outputs JSON data containing the response from the external LLM API.
- For the Generate Text operation, the output JSON includes the generated text, or error information if the request fails.
- For the Process Video operation (not detailed here), the output JSON contains the API's video-processing result.
No binary data output is indicated.
Example output structure for text generation:
```json
{
  "json": {
    "generated_text": "...",
    "other_response_fields": "..."
  }
}
```
If an error occurs and "Continue On Fail" is enabled, the output JSON will contain an error field describing the issue.
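A downstream node can distinguish the success path from the "Continue On Fail" error path by checking for the error field. A sketch, assuming the `generated_text` and `error` field names shown above:

```typescript
// Hypothetical output item shape, based on the example structure above.
interface LlmOutput {
  generated_text?: string;
  error?: string;
  [key: string]: unknown;
}

// Return the generated text, or throw if the item carries an error
// (i.e. "Continue On Fail" was enabled and the API call failed).
function extractText(item: { json: LlmOutput }): string {
  if (item.json.error !== undefined) {
    throw new Error(`LLM request failed: ${item.json.error}`);
  }
  if (typeof item.json.generated_text !== "string") {
    throw new Error("Response contained no generated_text field");
  }
  return item.json.generated_text;
}
```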
Dependencies
- Requires an API key credential for authentication with the external LLM service.
- The node makes HTTP POST requests to the service domain specified in credentials.
- The endpoint /llms is used for text generation.
- The endpoint /process-video is used for video processing (not covered here).
- Timeout for API calls is set to 120 seconds.
- Proper configuration of the API domain and key is necessary in n8n credentials.
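Putting these dependency details together, the request can be sketched as follows. Only the /llms path, the POST method, and the 120-second timeout come from the source; the helper name and the Authorization header scheme are assumptions:

```typescript
// Build a URL and fetch options for the text-generation call.
// The Bearer-token header is an assumption; check your n8n credential setup.
function buildRequest(domain: string, apiKey: string, body: unknown) {
  return {
    url: `${domain.replace(/\/$/, "")}/llms`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(body),
      // Abort the call after 120 seconds, matching the node's timeout.
      signal: AbortSignal.timeout(120_000),
    },
  };
}

// Usage: const { url, options } = buildRequest("https://api.example.com", key, payload);
//        const res = await fetch(url, options);
```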
Troubleshooting
- "No credentials returned!": Ensure the API key credential is configured correctly in n8n.
- Request timeouts or network errors: Check network connectivity and API service availability.
- API errors returned in output: Review the error message in the output JSON; it may indicate invalid parameters or quota limits.
- Incorrect or empty responses: Verify prompt correctness and model selection.
- If "Continue On Fail" is disabled, any API error will stop execution with an error message referencing the failed item index.
Links and References
- No direct links are provided in the source code.
- Users should refer to the external LLM service documentation for details on model capabilities and API usage.
- n8n documentation on custom nodes and credentials management may be helpful.
