SL LLMs
Overview
The "SL LLMs" node integrates with an external AI service to perform two main operations: generating text responses from a language model and processing videos with a prompt. It is useful when you want to leverage large language models (LLMs) for natural language generation, or to analyze video content by sending a video URL along with a prompt.
Practical examples:
- Generating creative text, summaries, or answers based on user prompts.
- Processing a video URL to extract insights, generate captions, or perform other AI-driven video analysis tasks by providing a related prompt.
Properties
| Name | Meaning |
|---|---|
| Operation | Choose between "Generate Text" (text generation via LLM) or "Process Video" (video analysis). |
| Video URL | URL of the video to be processed (only shown when Operation is "Process Video"). |
| Model Name or ID | Select a language model from a dynamically loaded list or specify a model ID (only for text generation). |
| Prompt | The input prompt or message you want the language model or video processor to respond to. |
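For illustration, a "Process Video" configuration might resolve to values like the following. The parameter keys here simply mirror the property names in the table above; the node's actual internal parameter names are not confirmed.

```typescript
// Hypothetical resolved parameter values for a "Process Video" run.
// Keys mirror the property table above; the node's real internal
// parameter names may differ.
const exampleParameters = {
  operation: 'Process Video',
  videoUrl: 'https://example.com/clip.mp4', // only shown for "Process Video"
  model: '',                                // only used for "Generate Text"
  // Property fields accept n8n expressions, e.g. pulling the prompt
  // from the incoming item: '={{ $json.prompt }}'
  prompt: 'Summarize the key events shown in this video.',
};

console.log(exampleParameters);
```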
Output
The node outputs an array of JSON objects, one per input item, each containing the response from the AI service:
- For the Process Video operation: the JSON output contains the response returned by the `/process-video` API endpoint, which likely includes processed video data or analysis results based on the provided video URL and prompt.
- For the Generate Text operation: the JSON output contains the generated text or structured response from the language model API (`/llms` endpoint). Depending on the settings, this can be plain text or a JSON-formatted response.
No binary data output is produced by this node.
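As a rough sketch, the per-item output could be typed along the following lines. The field names are assumptions only; the actual keys depend on what the external AI service returns.

```typescript
// Assumed shapes only: the actual keys depend on the external AI service's
// responses, which are passed through as the node's JSON output.
interface GenerateTextOutput {
  // Plain text or a JSON-formatted structure, depending on node settings.
  response: string | Record<string, unknown>;
}

interface ProcessVideoOutput {
  // Whatever the /process-video endpoint returns for the given URL and prompt.
  [key: string]: unknown;
}

// One JSON object is emitted per input item.
type SlLlmsOutputItem = GenerateTextOutput | ProcessVideoOutput;
```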
Dependencies
- Requires an API key credential for authentication with the external AI service.
- The node makes HTTP POST requests to the service domain specified in the credentials (see the sketch below):
  - `/process-video` endpoint for video processing.
  - `/llms` endpoint for text generation.
- The node dynamically loads available models from `https://ai.system.sl/llm-models` via a GET request.
- Proper network access and valid API credentials are necessary for successful operation.
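For orientation, the HTTP calls the node performs are roughly equivalent to the sketch below. The base URL, the `Authorization` header scheme, and the request-body field names (`model`, `prompt`, `videoUrl`) are assumptions; the real values come from the credential's configured domain and the node's implementation.

```typescript
// Rough equivalents of the node's HTTP calls, for orientation only.
// Base URL, auth header, and body field names are assumptions.
const baseUrl = 'https://ai.system.sl';
const apiKey = process.env.SL_API_KEY ?? '';
const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${apiKey}`, // actual auth scheme may differ
};

async function demo() {
  // GET the list of available models (used to populate "Model Name or ID").
  const models = await fetch(`${baseUrl}/llm-models`, { headers }).then((r) => r.json());

  // POST /llms for text generation.
  const text = await fetch(`${baseUrl}/llms`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ model: 'some-model-id', prompt: 'Write a haiku about rivers.' }),
  }).then((r) => r.json());

  // POST /process-video for video analysis.
  const video = await fetch(`${baseUrl}/process-video`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      videoUrl: 'https://example.com/clip.mp4',
      prompt: 'Describe this video.',
    }),
  }).then((r) => r.json());

  console.log({ models, text, video });
}

demo().catch(console.error);
```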
Troubleshooting
**No credentials returned!**
This error occurs if the required API key credential is missing or not configured properly. Ensure that the API key credential is set up correctly in n8n before running the node.

**Invalid response from API: Expected an array of models.**
This happens during model loading if the external API does not return a valid array. Check API availability and your network connection.

**API request failures (e.g., network errors, invalid parameters)**
The node throws an error if the external API call fails. If "Continue On Fail" is enabled, the error is included in that item's output JSON and execution continues; otherwise, execution stops (see the sketch below). Verify input parameters such as the video URL, prompt, and model selection.

**Incorrect or empty Video URL**
For the "Process Video" operation, ensure the video URL is accessible and valid.
Links and References
- n8n Expressions Documentation — for using expressions in property fields.
- External AI service documentation (not included with the node) for detailed API usage and model information.
