SL LLMs

StrangeLogic LLM Node

Overview

This node integrates with an external Large Language Model (LLM) service to generate text completions or to process videos. The primary use case is generating AI-driven text responses from a prompt using one of the supported LLM models. It can also submit video URLs for analysis or transformation, although the "Generate Text" operation is the main focus here.

Typical scenarios include:

  • Generating creative or informative text completions from prompts.
  • Using different LLM models tailored for specific tasks or performance characteristics.
  • Applying safety filters to block harmful or inappropriate content in generated text.
  • Processing video content via URL for specialized AI-based video operations (less common).

Example: Given a prompt like "Explain quantum computing in simple terms," the node sends this to the selected LLM model and returns a detailed explanation as text.
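As a rough sketch, the request body the node assembles for a "Generate Text" run might look like the following. The field names and the model identifier here are illustrative assumptions, not taken from the node's source; the actual keys are defined by the external service's API.

```python
# Illustrative sketch of the JSON payload a "Generate Text" operation
# might send to the /llms endpoint. Field names and the model value
# are assumptions for illustration only.
import json

payload = {
    "model": "gemini-1.5-pro",   # "Model Type" property (hypothetical value)
    "prompt": "Explain quantum computing in simple terms",
    "temperature": 0.7,          # higher => more diverse output
    "max_tokens": 512,
    "top_p": 0.95,
    "top_k": 40,
}
body = json.dumps(payload)
```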

Properties

Name | Meaning
Model Type | Selects the LLM model to use. Options include Gemini variants, Llama 3.1 versions, Mistral, Mixtral, WizardLM, and others.
Prompt | The text prompt you want the LLM to respond to.
Temperature | Controls randomness in response generation; higher values produce more diverse outputs.
Max Tokens | Maximum length of the generated response, in tokens.
Top P | Nucleus sampling parameter controlling how much probability mass is considered when selecting tokens.
Top K | Limits token selection to the K most probable tokens at each step.
Safety Settings: Hate Block | Level of filtering applied to hate-speech content. Options: None, Low, Medium, High. Applies only to certain Gemini models.
Safety Settings: Harassment Block | Level of filtering applied to harassment content. Options: None, Low, Medium, High. Applies only to certain Gemini models.
Safety Settings: Sexual Block | Level of filtering applied to sexual content. Options: None, Low, Medium, High. Applies only to certain Gemini models.
Safety Settings: Dangerous Content Block | Level of filtering applied to dangerous content. Options: None, Low, Medium, High. Applies only to certain Gemini models.
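To make Top K and Top P concrete, here is a minimal, generic sketch of how these two parameters typically narrow the sampling distribution. This is standard top-k/nucleus filtering logic for illustration, not the service's actual implementation (temperature, not shown, rescales the distribution before this step).

```python
def filter_probs(token_probs, top_k, top_p):
    """Generic top-k then top-p (nucleus) filtering over a
    token -> probability mapping. Illustrative only."""
    # Top K: keep only the K most probable tokens.
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top P: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize so the surviving probabilities sum to 1.
    total = sum(p for _, p in kept)
    return {t: p / total for t, p in kept}

# With top_k=3 and top_p=0.8, only "a" and "b" survive from this toy distribution.
narrowed = filter_probs({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}, top_k=3, top_p=0.8)
```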

Output

The node outputs JSON data containing the response from the external LLM API:

  • For the Generate Text operation, the output JSON includes the generated text completion, or an error message if the request failed.
  • For the Process Video operation, the output JSON contains the processed video data or error details.

No binary data output is produced by this node.

Example output structure for text generation:

{
  "json": {
    "text": "Generated response text from the LLM",
    ...
  }
}

If an error occurs and "Continue On Fail" is enabled, the output will contain an error message field instead.
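A downstream consumer (for example, a Code node) may want to read this output defensively, since an item can carry either the generated text or an error message when "Continue On Fail" is enabled. This sketch assumes the keys are "text" and "error" per the structure described above; the exact error key name is an assumption.

```python
def extract_text(item):
    """Pull the generated text out of one output item, falling back to
    the error message when the request failed. The "error" key name is
    an assumption based on the description above."""
    data = item.get("json", {})
    if "error" in data:
        return None, data["error"]
    return data.get("text"), None

ok_item = {"json": {"text": "Generated response text from the LLM"}}
failed_item = {"json": {"error": "quota exceeded"}}
```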

Dependencies

  • Requires an API key credential for authenticating with the external LLM service.
  • The node makes HTTP POST requests to the service domain specified in credentials.
  • The service endpoint for text generation is /llms.
  • The service endpoint for video processing is /process-video.
  • Proper network access and valid API credentials are necessary.
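Putting the pieces above together, a request to the text-generation endpoint could be constructed roughly like this. The base URL and the "X-API-Key" header name are placeholders, not confirmed by the node's source; the real domain and authentication scheme come from the configured credential.

```python
import json
import urllib.request

BASE_URL = "https://example-llm-service.com"  # placeholder; comes from credentials
API_KEY = "YOUR_API_KEY"                      # placeholder credential value

def build_generate_request(prompt, model="gemini-1.5-pro"):
    """Build (but do not send) a POST request to the /llms endpoint.
    The "X-API-Key" header name is an assumption; check your service docs."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return urllib.request.Request(
        BASE_URL + "/llms",
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )

req = build_generate_request("Explain quantum computing in simple terms")
```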

Troubleshooting

  • "No credentials returned!": Ensure that the required API key credential is configured correctly in n8n.
  • Request timeouts or failures: Check network connectivity and service availability. The video processing request has a timeout of 120 seconds.
  • Error messages from API: These are passed through in the output JSON under an error field. Review the error message for issues such as invalid parameters or quota limits.
  • If "Continue On Fail" is disabled, any API error will stop execution with an error.
  • Make sure the selected model supports the requested operation and safety settings.
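To mirror the behavior described above, a caller sending requests to the service could guard against timeouts and surface API errors as a message rather than an exception, similar to what the node does when "Continue On Fail" is enabled. This is a generic sketch using the standard library; the 120-second default matches the video-processing timeout noted above.

```python
import urllib.error
import urllib.request

def post_with_timeout(req, timeout_seconds=120):
    """Send a prepared request; return (body, None) on success or
    (None, error_message) on timeout/HTTP failure, instead of raising."""
    try:
        with urllib.request.urlopen(req, timeout=timeout_seconds) as resp:
            return resp.read().decode(), None
    except urllib.error.HTTPError as exc:
        return None, f"HTTP {exc.code}: {exc.reason}"
    except (urllib.error.URLError, TimeoutError) as exc:
        return None, f"request failed: {exc}"
```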

Links and References

  • No direct links provided in the source code.
  • Refer to your external LLM service provider's API documentation for details on model capabilities, parameters, and safety settings.
