Actions2
Overview
This node integrates with a language model service to generate text completions or process videos based on user input. It is designed primarily for natural language generation tasks, where users provide prompts and receive generated text responses from various large language models (LLMs). Additionally, it supports video processing by sending video URLs to the service.
Common scenarios include:
- Generating creative or informative text based on prompts.
- Customizing output randomness and length via parameters like temperature and max tokens.
- Applying safety filters to block harmful content such as hate speech, harassment, sexual content, or dangerous content.
- Processing videos by submitting their URLs for analysis or transformation.
Practical examples:
- Creating chatbot replies or content drafts automatically.
- Summarizing or expanding text inputs.
- Filtering generated content to comply with safety standards.
- Sending a video URL to extract metadata or perform AI-based video analysis.
Properties
| Name | Meaning |
|---|---|
| Model Type | Selects the LLM model to use for text generation. Options include Gemini variants, Llama models, Mistral, Mixtral, WizardLM, etc. |
| Prompt | The input text prompt that the LLM will respond to. |
| Temperature | Controls randomness in text generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum number of tokens (word pieces) in the generated response. |
| Top P | Nucleus sampling parameter controlling token selection breadth during generation. |
| Top K | Limits token selection to the top K probable tokens at each step. |
| Safety Settings: Hate Block | Level of filtering applied to block hate speech content: None, Low, Medium, High. (Only for certain Gemini models) |
| Safety Settings: Harassment Block | Level of filtering applied to block harassment content: None, Low, Medium, High. (Only for certain Gemini models) |
| Safety Settings: Sexual Block | Level of filtering applied to block sexual content: None, Low, Medium, High. (Only for certain Gemini models) |
| Safety Settings: Dangerous Content Block | Level of filtering applied to block dangerous content: None, Low, Medium, High. (Only for certain Gemini models) |
| JSON Response | When enabled (only for Gemini 2.0 Flash Exp model), returns the response in JSON format instead of plain text. |
| Operation | Choose between "Generate Text" or "Process Video". |
| Video URL | URL of the video to be processed (only shown when operation is "Process Video"). |
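To make the relationship between these properties concrete, here is a minimal sketch of how they might map onto a request body for the LLM service. The field names (`model`, `max_tokens`, `top_p`, etc.) and the default values are assumptions for illustration, not taken from the node's actual source code:

```typescript
// Hypothetical request shape; field names and defaults are assumptions.
interface LlmRequest {
  model: string;
  prompt: string;
  temperature: number;
  max_tokens: number;
  top_p: number;
  top_k: number;
}

function buildLlmRequest(params: {
  modelType: string;
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  topK?: number;
}): LlmRequest {
  return {
    model: params.modelType,
    prompt: params.prompt,
    // Defaults below are illustrative; the real node may use different ones.
    temperature: params.temperature ?? 0.7,
    max_tokens: params.maxTokens ?? 1024,
    top_p: params.topP ?? 0.95,
    top_k: params.topK ?? 40,
  };
}

const req = buildLlmRequest({
  modelType: 'gemini-1.5-pro',
  prompt: 'Summarize this text.',
});
```

Lower `temperature` and `top_p` values make output more deterministic; higher values increase diversity, which is why both are exposed as separate tuning knobs.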
Output
The node outputs an array of items, each containing a `json` field with the API response.
For the Generate Text operation:
- The `json` field contains the generated text, or a JSON-formatted response, depending on the JSON Response setting.
- If an error occurs, the `json` field includes an `error` property describing the issue.
For the Process Video operation:
- The `json` field contains the result of the video processing request sent to the external service.
- Errors are similarly reported in `json.error`.
No binary data output is indicated by the code.
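A downstream node can distinguish successful items from failed ones by checking for the `error` property. The sketch below assumes a `text` field for successful responses, which is an illustrative guess; only the `json`/`error` structure comes from the description above:

```typescript
// Illustrative item shape; `text` is an assumed field name.
type OutputItem = { json: { text?: string; error?: string } };

// Partition output items into successes and failures by the `error` field.
function splitResults(items: OutputItem[]): { ok: string[]; failed: string[] } {
  const ok: string[] = [];
  const failed: string[] = [];
  for (const item of items) {
    if (item.json.error !== undefined) {
      failed.push(item.json.error);
    } else {
      ok.push(item.json.text ?? '');
    }
  }
  return { ok, failed };
}
```

This pattern lets a workflow route failed items to a notification branch while passing successful generations onward.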
Dependencies
- Requires an API key credential for authentication with the external LLM/video processing service.
- The node makes HTTP POST requests to the service domain specified in credentials.
- The service endpoints used are `/llms` for text generation and `/process-video` for video processing.
- Proper configuration of the API key and service domain in n8n credentials is necessary.
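The request-building logic might look like the sketch below. The endpoint paths come from the description above, but the header name (`X-API-KEY`) and the shape of the credential fields are assumptions; check the node's credential definition for the actual names:

```typescript
// Hedged sketch: assembles POST options for the two service endpoints.
// The 'X-API-KEY' header name is an assumption, not confirmed by the source.
function buildRequestOptions(
  domain: string,
  apiKey: string,
  operation: 'generateText' | 'processVideo',
  body: object,
) {
  const path = operation === 'generateText' ? '/llms' : '/process-video';
  return {
    method: 'POST' as const,
    url: `${domain}${path}`,
    headers: {
      'Content-Type': 'application/json',
      'X-API-KEY': apiKey,
    },
    body: JSON.stringify(body),
  };
}
```

Keeping the domain in credentials (rather than hard-coding it) is what allows the same node to target different deployments of the service.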
Troubleshooting
No credentials returned!
This error occurs if the required API key credential is missing or not configured properly. Ensure the credential is set up and linked to the node.
API request failures
Network issues, invalid API keys, or service downtime can cause errors during HTTP requests. Check network connectivity, verify the API key, and confirm service availability.
Invalid property values
Providing unsupported model types or invalid parameter values may lead to unexpected results or errors. Use only supported options and valid ranges.
Continue On Fail behavior
If enabled, the node will return error details in the output instead of stopping execution on failure, allowing workflows to handle errors gracefully.
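The usual way n8n community nodes implement this behavior is sketched below (simplified to a synchronous loop; the actual node is not shown in the source): when the flag is set, a failed item yields `{ json: { error: ... } }` instead of aborting the run.

```typescript
// Sketch of the common "Continue On Fail" pattern in n8n nodes.
// `call` stands in for the per-item API request.
function processItems(
  items: object[],
  call: (item: object) => object,
  continueOnFail: boolean,
): { json: object }[] {
  const out: { json: object }[] = [];
  for (const item of items) {
    try {
      out.push({ json: call(item) });
    } catch (err) {
      if (!continueOnFail) throw err; // default: abort the whole run
      // With the flag set, record the failure and keep going.
      out.push({ json: { error: (err as Error).message } });
    }
  }
  return out;
}
```

With this pattern, one bad prompt or unreachable video URL does not discard the results already produced for earlier items in the batch.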
Links and References
- No direct external links are provided in the source code.
- Users should refer to the documentation of the external LLM/video processing service for detailed API usage and model descriptions.
