StrangeLogic LLM Node

Overview

The StrangeLogic LLM Node provides two main operations: generating text responses with large language models (LLMs) and processing videos through an external API.

  • Process Video Operation: Accepts a video URL and a prompt, sends them to an external service endpoint for video processing, and returns the response. This is useful when you want to analyze or extract information from videos using AI-powered services.

  • Generate Text Operation: Sends a user-provided prompt, along with model parameters, to an LLM API to generate text completions. This can be used for content generation, summarization, chatbots, or any task requiring natural language understanding and generation (see the request sketch after the examples below).

Practical examples:

  • Automatically transcribing or analyzing video content by providing a video URL.
  • Generating creative writing, code snippets, or answering questions by sending prompts to various supported LLM models.
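
To make the two operations concrete, here is a rough sketch in TypeScript of the payloads they send. The field names (videoUrl, model, maxTokens, and so on) and the example values are illustrative assumptions, not the node's confirmed request format.

```typescript
// Illustrative sketch only: field names and shapes are assumptions,
// not the node's confirmed request format.

// "Process Video": a video URL plus a prompt, sent to /process-video.
interface ProcessVideoRequest {
  videoUrl: string; // assumed field name
  prompt: string;
}

// "Generate Text": a prompt plus sampling parameters, sent to /llms.
interface GenerateTextRequest {
  model: string; // e.g. a Gemini or Llama variant (assumed field name)
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  topK?: number;
}

const videoRequest: ProcessVideoRequest = {
  videoUrl: 'https://example.com/talk.mp4',
  prompt: 'Summarize the key points of this video.',
};

const textRequest: GenerateTextRequest = {
  model: 'gemini-1.5-flash', // hypothetical model identifier
  prompt: 'Write a short product description for a smart kettle.',
  temperature: 0.7,
  maxTokens: 512,
};
```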

Properties

| Name | Meaning |
| --- | --- |
| Operation | Choose between "Generate Text" and "Process Video" operation modes. |
| Video URL | The URL of the video to be processed (shown only when Operation is "Process Video"). |
| Model Type | The LLM model to use for text generation. Options include Gemini variants, Llama models, Mistral, Mixtral, WizardLM, and others (hidden when Operation is "Process Video"). |
| Prompt | The input prompt or message sent to the LLM, or used as context for video processing. |
| Temperature | Controls randomness in text generation ("Generate Text" only). |
| Max Tokens | Maximum length of the generated text response ("Generate Text" only). |
| Top P | Nucleus sampling parameter controlling how broadly tokens are selected ("Generate Text" only). |
| Top K | Limits token selection to the top K most likely tokens ("Generate Text" only). |
| Safety Settings: Hate Block | Level of filtering for hate speech content: None, Low, Medium, or High (certain Gemini models, "Generate Text" only). |
| Safety Settings: Harassment Block | Level of filtering for harassment content: None, Low, Medium, or High (certain Gemini models, "Generate Text" only). |
| Safety Settings: Sexual Block | Level of filtering for sexual content: None, Low, Medium, or High (certain Gemini models, "Generate Text" only). |
| Safety Settings: Dangerous Content Block | Level of filtering for dangerous content: None, Low, Medium, or High (certain Gemini models, "Generate Text" only). |
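
As an illustration of how these properties fit together, here is a hypothetical parameter set for a "Generate Text" run. The keys mirror the UI labels above but are not the node's internal parameter names, and the values are just reasonable starting points, not recommendations from the node's author.

```typescript
// Illustrative "Generate Text" configuration.
// Key names and values are assumptions for illustration only.
const generateTextParams = {
  operation: 'Generate Text',
  modelType: 'gemini-1.5-pro', // hypothetical option value
  prompt: 'Draft a friendly onboarding email for new users.',
  temperature: 0.7, // lower = more deterministic, higher = more varied
  maxTokens: 1024,  // upper bound on the length of the completion
  topP: 0.9,        // nucleus sampling cutoff
  topK: 40,         // restrict sampling to the 40 most likely tokens
  safetySettings: { // available for certain Gemini models only
    hateBlock: 'Medium',
    harassmentBlock: 'Medium',
    sexualBlock: 'High',
    dangerousContentBlock: 'High',
  },
};
```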

Output

  • For Process Video operation: The output JSON contains the response from the /process-video API endpoint, which presumably includes processed data or analysis results related to the provided video URL. The exact structure depends on the external API's response.

  • For Generate Text operation: The output JSON contains the response from the /llms API endpoint, including the generated text based on the prompt and model parameters.

  • No binary data output is indicated by the source code.
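
Because the response shape is defined by the external API, downstream steps should read the output defensively. A minimal sketch, assuming (without confirmation) that the generated text may sit under a key such as text, response, or result:

```typescript
// Defensive read of the node's JSON output in a downstream step.
// The key names checked here ('text', 'response', 'result') are guesses;
// inspect a real execution to see which field your API actually returns.
function extractGeneratedText(output: Record<string, unknown>): string {
  for (const key of ['text', 'response', 'result']) {
    const value = output[key];
    if (typeof value === 'string' && value.length > 0) {
      return value;
    }
  }
  // Fall back to the raw JSON so nothing is silently lost.
  return JSON.stringify(output);
}
```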

Dependencies

  • Requires an external API service accessible via a domain URL and authenticated with an API key credential.
  • The node expects credentials that provide at least:
    • domain: Base URL of the API service.
    • apiKeyApi: API key for authorization.
  • The node uses HTTP POST requests to interact with the external API endpoints /process-video and /llms.
  • Proper configuration of this API credential in n8n is necessary for the node to function.
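
For orientation, the request flow inside the node likely resembles the sketch below. Only the domain and apiKeyApi credential fields and the two endpoints come from the description above; the credential name ('strangeLogicApi') and the Authorization header format are assumptions.

```typescript
import type { IExecuteFunctions, IHttpRequestOptions } from 'n8n-workflow';

// Sketch of how the node might call the external API.
// 'strangeLogicApi' and the Bearer header format are assumptions.
async function callEndpoint(
  this: IExecuteFunctions,
  endpoint: '/process-video' | '/llms',
  body: Record<string, unknown>,
) {
  const credentials = await this.getCredentials('strangeLogicApi');

  const options: IHttpRequestOptions = {
    method: 'POST',
    url: `${credentials.domain}${endpoint}`,
    headers: { Authorization: `Bearer ${credentials.apiKeyApi}` },
    body,
    json: true,
  };

  return this.helpers.httpRequest(options);
}
```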

Troubleshooting

  • No credentials returned!: This error occurs if the required API credentials are not set or accessible. Ensure the API key credential is configured correctly in n8n.

  • API request failures: Network issues, invalid URLs, or incorrect API keys may cause request errors. Check connectivity, validate the video URL, and verify API credentials.

  • Invalid or empty video URL: Providing an invalid or empty video URL will likely cause the /process-video API call to fail.

  • Model type mismatch: Selecting a model type incompatible with the operation (e.g., choosing a model while in "Process Video" mode) might cause unexpected behavior; however, the UI hides irrelevant options.

  • The node supports continuing on failure if enabled, returning error details per item instead of stopping execution.
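
That continue-on-failure behaviour follows n8n's standard per-item pattern. A minimal sketch of such a loop, not this node's exact implementation:

```typescript
import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';
import { NodeOperationError } from 'n8n-workflow';

// Standard n8n pattern: with "Continue On Fail" enabled, each failing item
// yields an error entry instead of aborting the whole execution.
async function execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
  const items = this.getInputData();
  const returnData: INodeExecutionData[] = [];

  for (let i = 0; i < items.length; i++) {
    try {
      const response = {}; // placeholder for the /llms or /process-video call
      returnData.push({ json: response, pairedItem: { item: i } });
    } catch (error) {
      if (this.continueOnFail()) {
        returnData.push({
          json: { error: (error as Error).message },
          pairedItem: { item: i },
        });
        continue;
      }
      throw new NodeOperationError(this.getNode(), error as Error, { itemIndex: i });
    }
  }

  return [returnData];
}
```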

Links and References
