Actions
Overview
This node integrates with a language model service to generate text completions or process videos based on user input. Its primary use case is generating natural language responses from prompts with a choice of large language models (LLMs), so users can select the model best suited to a given task or performance profile.
Typical scenarios include:
- Generating creative or informative text based on a prompt.
- Customizing output randomness and length via parameters like temperature and max tokens.
- Applying safety filters to block hate speech, harassment, sexual content, or dangerous content when using certain models.
- Processing video URLs for analysis or transformation (this operation is only covered briefly here).
Practical examples:
- Creating chatbot replies or content generation by providing a prompt.
- Summarizing or expanding text with control over response creativity.
- Filtering out unsafe content automatically during text generation.
- Sending a video URL to the service for processing (e.g., extracting metadata or captions).
Properties
| Name | Meaning |
|---|---|
| Model Type | Select the LLM model to use. Options include Gemini variants, Gemma, Llama 3.1 versions, Mistral, Mixtral, WizardLM, etc. |
| Prompt | The text prompt you want the language model to respond to. |
| Temperature | Controls randomness in the generated text; higher values produce more diverse outputs. |
| Max Tokens | Maximum length of the generated response in tokens. |
| Top P | Nucleus sampling parameter controlling token selection breadth (probability mass). |
| Top K | Limits token selection to top K probable tokens at each step. |
| Safety Settings: Hate Block | Level of filtering for hate speech content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Harassment Block | Level of filtering for harassment content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Sexual Block | Level of filtering for sexual content: None, Low, Medium, High (only for some Gemini models). |
| Safety Settings: Dangerous Content Block | Level of filtering for dangerous content: None, Low, Medium, High (only for some Gemini models). |
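As an illustration, the sketch below shows how these properties could be assembled into a request body for the text-generation endpoint. The field names (`model`, `max_tokens`, `safety_settings`, and so on) are assumptions made for readability; the actual keys expected by the service are not documented here.

```typescript
// Illustrative only: the payload shape and key names are assumptions,
// not taken from the node's source or the service's API reference.
interface LlmRequestBody {
  model: string;        // e.g. a Gemini, Gemma, Llama 3.1, Mistral, or Mixtral identifier
  prompt: string;
  temperature: number;  // higher values produce more diverse output
  max_tokens: number;   // upper bound on response length in tokens
  top_p: number;        // nucleus sampling probability mass
  top_k: number;        // restrict sampling to the K most probable tokens
  safety_settings?: {   // only honored by some Gemini models
    hate: 'None' | 'Low' | 'Medium' | 'High';
    harassment: 'None' | 'Low' | 'Medium' | 'High';
    sexual: 'None' | 'Low' | 'Medium' | 'High';
    dangerous: 'None' | 'Low' | 'Medium' | 'High';
  };
}

const body: LlmRequestBody = {
  model: 'gemini-1.5-pro',
  prompt: 'Summarize the following article in three sentences: ...',
  temperature: 0.7,
  max_tokens: 512,
  top_p: 0.95,
  top_k: 40,
  safety_settings: { hate: 'Medium', harassment: 'Medium', sexual: 'High', dangerous: 'High' },
};
```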
Output
The node outputs JSON data containing the response from the language model API. For the "Generate Text" operation, the JSON includes the generated text completion or an error message if the request failed.
For the "Process Video" operation, the output JSON contains the response from the video processing endpoint, which may include processed video data or related metadata.
The node does not produce binary output.
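For orientation, a hypothetical pair of output items might look like the following. The key names are purely illustrative; the actual structure depends on the upstream API's response format.

```typescript
// Hypothetical n8n output items -- key names are illustrative, not from the node's source.
const successItem = {
  json: {
    text: 'Here is a three-sentence summary of the article: ...',
    model: 'gemini-1.5-pro',
    usage: { promptTokens: 412, completionTokens: 96 },
  },
};

const errorItem = {
  json: {
    error: 'Request failed with status code 401',
  },
};
```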
Dependencies
- Requires an API key credential for authentication with the external AI service.
- The node makes HTTP POST requests to the configured domain endpoints: `/llms` for text generation and `/process-video` for video processing (see the sketch below).
- Proper configuration of the API domain and credentials in n8n is necessary.
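The standalone sketch below reproduces the text-generation call outside of n8n. The base URL, the bearer-token `Authorization` header, and the payload keys are all assumptions; inside the node, the equivalent request would go through n8n's credential-backed HTTP helpers rather than plain `fetch`.

```typescript
// Standalone sketch of the text-generation request (Node.js 18+, global fetch).
// The API domain, Authorization header, and payload keys are assumptions.
const domain = 'https://api.example-ai-service.com'; // hypothetical API domain
const apiKey = process.env.AI_SERVICE_API_KEY ?? '';

async function generateText(prompt: string): Promise<unknown> {
  const response = await fetch(`${domain}/llms`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gemini-1.5-pro',
      prompt,
      temperature: 0.7,
      max_tokens: 512,
    }),
  });
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}

generateText('Write a short product description for a solar-powered lamp.')
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```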
Troubleshooting
- `No credentials returned!`: This error occurs if the required API key credential is missing or not configured properly. Ensure the credential is set up in n8n.
- API request failures: Network issues, invalid API keys, or incorrect domain URLs can cause request errors. Check connectivity and credential validity.
- Unsupported model or operation: Selecting incompatible model types or operations may result in unexpected behavior or errors.
- Safety filter settings: Using safety blocks with unsupported models might have no effect or cause errors; verify model compatibility.
- If `continueOnFail` is enabled, errors for individual items are returned as JSON error objects instead of stopping execution (see the sketch after this list).
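The pattern below is the conventional way n8n nodes implement this per-item error handling. It is a generic sketch, not the node's actual source: `items`, `returnData`, and the helper `callLlmEndpoint` are placeholders for code that lives inside the node's `execute()` method.

```typescript
// Generic n8n continue-on-fail pattern (sketch, not the node's actual code).
// Inside execute(): `items` are the input items, `returnData` collects outputs,
// and callLlmEndpoint() stands in for the node's request logic.
for (let i = 0; i < items.length; i++) {
  try {
    const result = await callLlmEndpoint(items[i]);
    returnData.push({ json: result, pairedItem: i });
  } catch (error) {
    if (this.continueOnFail()) {
      // Attach the error to this item and keep processing the remaining items.
      returnData.push({ json: { error: (error as Error).message }, pairedItem: i });
      continue;
    }
    throw error;
  }
}
```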
Links and References
- No direct external links are provided in the source code.
- Users should refer to the documentation of the external AI service used for details on model capabilities and API usage.
