StrangeLogic LLM Node (SL LLMs)

Overview

This node integrates with a Large Language Model (LLM) service to generate text completions or process videos based on user input. Its primary use case is generating natural language responses from prompts with a variety of AI models, useful for content creation, chatbots, summarization, or any other scenario that calls for AI-generated text. It also supports video processing by sending a video URL and a prompt to the service.

For example, you could use this node to:

  • Generate creative writing or code snippets by providing a prompt.
  • Create conversational agents that respond dynamically.
  • Analyze or extract information from videos by providing their URLs.

Properties

The node exposes the following properties:

  • Operation: Choose between "Generate Text" and "Process Video".
  • Video URL: URL of the video to process (shown only when Operation is "Process Video").
  • Model Type: The AI model to use for generation. Options include Gemini variants, Llama models, Mistral, WizardLM, and others.
  • Prompt: The text prompt sent to the LLM to generate a response.
  • Temperature: Controls randomness in the output; higher values produce more diverse results (default 0.7).
  • Max Tokens: Maximum length of the generated response (default 2048 tokens).
  • Top P: Nucleus sampling parameter controlling the breadth of token selection (default 1).
  • Top K: Limits token selection to the top K candidates (default 2).
  • Safety Settings: Hate Block: Level of filtering for hate speech content: None, Low, Medium, or High (only for certain Gemini models).
  • Safety Settings: Harassment Block: Level of filtering for harassment content: None, Low, Medium, or High (only for certain Gemini models).
  • Safety Settings: Sexual Block: Level of filtering for sexual content: None, Low, Medium, or High (only for certain Gemini models).
  • Safety Settings: Dangerous Content Block: Level of filtering for dangerous content: None, Low, Medium, or High (only for certain Gemini models).
  • JSON Response: Whether to request the response in JSON format (available only for some Gemini 2.0 models).
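
Taken together, these properties broadly correspond to the request body the node sends to the service. The sketch below shows one way such a payload could be shaped; every field name in it (model, prompt, temperature, max_tokens, top_p, top_k, safety_settings, json_response) is an assumption made for illustration, not the service's documented API.

```typescript
// Illustrative only: the exact field names expected by the service are not
// documented here, so treat this payload shape as an assumption.
interface LlmRequestBody {
  model: string;           // the "Model Type" selection (e.g. a Gemini or Llama variant)
  prompt: string;          // the "Prompt" property
  temperature?: number;    // default 0.7
  max_tokens?: number;     // default 2048
  top_p?: number;          // default 1
  top_k?: number;          // default 2
  safety_settings?: {      // only relevant for certain Gemini models
    hate?: 'None' | 'Low' | 'Medium' | 'High';
    harassment?: 'None' | 'Low' | 'Medium' | 'High';
    sexual?: 'None' | 'Low' | 'Medium' | 'High';
    dangerous?: 'None' | 'Low' | 'Medium' | 'High';
  };
  json_response?: boolean; // only for some Gemini 2.0 models
}

// Placeholder values; the model identifier shown here is not a confirmed option name.
const body: LlmRequestBody = {
  model: 'gemini-2.0-flash',
  prompt: 'Summarize the following meeting notes ...',
  temperature: 0.7,
  max_tokens: 2048,
};
```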

Output

The node outputs a JSON object containing the response from the LLM service or video processing API.

  • For text generation, the output JSON contains the generated text or structured JSON if requested.
  • For video processing, the output JSON contains the result of the video analysis or processing as returned by the external API.

If an error occurs during the API call, the output JSON will contain an error field describing the issue.

The node does not output binary data.
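
As a rough illustration, emitted items might look like the examples below. The response and analysis keys are assumptions; only the error field for failed calls is described by this documentation.

```typescript
// Illustrative output items; `response` and `analysis` are assumed key names.
const textItem = {
  json: { response: 'Here is a short summary of the provided prompt ...' },
};

const videoItem = {
  json: { analysis: 'The video appears to show a two-minute product demo ...' },
};

// On failure, the item carries an error field instead.
const failedItem = {
  json: { error: 'Request failed with status code 401' },
};
```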

Dependencies

  • Requires an API key credential for authenticating with the external LLM/video processing service.
  • The node sends HTTP POST requests to the service domain specified in the credentials.
  • The service endpoint for text generation is /llms.
  • The service endpoint for video processing is /process-video.
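
In practice this amounts to an authenticated POST against the configured domain. The sketch below outlines that call under stated assumptions: the credential field names (domain, apiKey) and the X-Api-Key header are guesses, since the actual names are defined by the node's credential type.

```typescript
// Sketch of the text-generation request, assuming the credential exposes
// `domain` and `apiKey` fields and the service accepts an X-Api-Key header.
async function generateText(
  credentials: { domain: string; apiKey: string },
  payload: Record<string, unknown>,
): Promise<unknown> {
  const response = await fetch(`${credentials.domain}/llms`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': credentials.apiKey, // header name is an assumption
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`LLM service returned ${response.status}`);
  }
  return response.json();
}

// Video processing would follow the same pattern against /process-video,
// with a video URL field (name assumed) added to the payload.
```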

Troubleshooting

  • No credentials returned!: This error indicates missing or misconfigured API authentication credentials. Ensure the API key credential is properly set up in n8n.
  • API request failures: Network issues, invalid parameters, or service downtime may cause errors. Check the error message returned in the output JSON for details.
  • Unsupported operation or model: Selecting incompatible combinations of operation and model type may lead to unexpected behavior or no output.
  • Continue On Fail: If enabled, the node continues processing the remaining items even if one fails, returning the error details in that item's output JSON (see the sketch below).
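
Continue On Fail typically follows the standard n8n pattern of catching per-item errors and pushing them into the output instead of aborting the run. The sketch below is a simplified illustration of that pattern, not the node's actual source; callLlmService is a hypothetical stand-in for the real request logic.

```typescript
import type { IDataObject, IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

// Hypothetical helper standing in for the node's real request logic.
declare function callLlmService(
  ctx: IExecuteFunctions,
  itemIndex: number,
): Promise<IDataObject>;

// Simplified per-item error handling, modelled on the common n8n pattern.
async function executeItems(this: IExecuteFunctions): Promise<INodeExecutionData[]> {
  const items = this.getInputData();
  const returnData: INodeExecutionData[] = [];

  for (let i = 0; i < items.length; i++) {
    try {
      const result = await callLlmService(this, i);
      returnData.push({ json: result });
    } catch (error) {
      if (this.continueOnFail()) {
        // Record the failure on this item and keep going with the rest.
        returnData.push({ json: { error: (error as Error).message } });
        continue;
      }
      throw error;
    }
  }

  return returnData;
}
```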

Links and References

  • No direct links are provided in the source code. For more information, consult the documentation of the external LLM service or the StrangeLogic platform if available.
