StrangeLogic LLM Node

Overview

This node integrates with a large language model (LLM) service to generate text from user-provided prompts. It supports several models and lets users tune generation parameters such as randomness, response length, and token-sampling strategy. The node is useful for automating content creation, generating conversational responses, summarizing information, or any other scenario that calls for AI-generated text.

In addition to text generation, the node exposes a placeholder operation for video processing, which currently just waits a fixed 90 seconds and returns a completion status. This suggests planned expansion, but no actual video analysis is performed.

Practical examples:

  • Generating creative writing or marketing copy from a brief prompt.
  • Producing answers or explanations in chatbots.
  • Creating summaries or expansions of input text.
  • Experimenting with different LLM models and safety settings to tailor output style and content filtering.

Properties

  • Operation: Choose between the "Generate Text" and "Process Video" operations.
  • Model Type: The LLM model to use. Options include Gemini variants, Gemma, Llama 3.1 versions, Mistral, Mixtral, WizardLM, and others.
  • Prompt: The input text prompt the LLM will respond to.
  • Temperature: Controls randomness in text generation; higher values produce more diverse output.
  • Max Tokens: Maximum number of tokens (word pieces) in the generated response.
  • Top P: Nucleus-sampling parameter; restricts token selection to the smallest set of tokens whose cumulative probability mass reaches P.
  • Top K: Restricts token selection to the K most probable tokens at each step.
  • Safety Settings: Hate Block: Filtering level for hate-speech content: None, Low, Medium, High (certain Gemini models only).
  • Safety Settings: Harassment Block: Filtering level for harassment content: None, Low, Medium, High (certain Gemini models only).
  • Safety Settings: Sexual Block: Filtering level for sexual content: None, Low, Medium, High (certain Gemini models only).
  • Safety Settings: Dangerous Content Block: Filtering level for dangerous content: None, Low, Medium, High (certain Gemini models only).
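The generation properties above map onto a request payload sent to the LLM service. A minimal sketch of how such a payload might be assembled; the wire-format field names (`max_tokens`, `top_p`, `top_k`) and the default values are assumptions, since the node's actual request schema is not documented here:

```typescript
// Hedged sketch: builds a plausible LLM request body from the node's
// properties. Field names and defaults are assumptions, not the node's schema.
interface GenerateTextParams {
  model: string;
  prompt: string;
  temperature?: number; // higher = more random output
  maxTokens?: number;   // cap on generated tokens
  topP?: number;        // nucleus-sampling probability mass
  topK?: number;        // restrict sampling to the K most likely tokens
}

function buildPayload(p: GenerateTextParams) {
  return {
    model: p.model,
    prompt: p.prompt,
    temperature: p.temperature ?? 0.7,
    max_tokens: p.maxTokens ?? 1024,
    top_p: p.topP ?? 0.95,
    top_k: p.topK ?? 40,
  };
}
```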

Output

The node outputs JSON data containing the response from the LLM API. For the "Generate Text" operation, the JSON includes the generated text or an error message if the request failed.

For the "Process Video" operation, the output JSON contains a status object indicating completion after a fixed 90-second delay. No actual video processing results are returned.

If errors occur during execution, the output JSON will contain an error field describing the issue.
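A defensive way to consume the node's output downstream (for example, in a Code node) is to check for the error field before reading the generated text. The `text` field name on the success path is an assumption; only the `error` field is described above:

```typescript
// Hedged sketch: extract generated text or an error message from the node's
// output JSON. The `text` field name on success is an assumption.
type NodeOutput = { text?: string; error?: string };

function readOutput(json: NodeOutput): { ok: boolean; value: string } {
  if (json.error !== undefined) {
    return { ok: false, value: json.error };
  }
  return { ok: true, value: json.text ?? "" };
}
```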

Dependencies

  • Requires an API key credential for authentication with the external LLM service.
  • The node makes HTTP POST requests to the configured API domain endpoint.
  • The API domain and key must be set up in n8n credentials before using this node.
  • Network connectivity to the external LLM API is required.
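Conceptually, the node's outbound call is an authenticated HTTP POST capped at 120 seconds. A sketch of an equivalent request with fetch; the endpoint path (`/generate`) and the Bearer authorization scheme are assumptions, not the node's actual API contract:

```typescript
// Hedged sketch of the node's outbound call: an HTTP POST to the configured
// API domain, authenticated with the stored API key, aborted after 120 s.
// Endpoint path and header scheme are assumptions.
interface HttpOptions {
  method: string;
  headers: Record<string, string>;
  body: string;
  signal?: AbortSignal;
}

function buildRequestOptions(apiKey: string, payload: unknown, signal?: AbortSignal): HttpOptions {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify(payload),
    signal,
  };
}

async function callLlm(apiDomain: string, apiKey: string, payload: unknown): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 120_000); // matches the node's 120 s timeout
  try {
    const res = await fetch(`${apiDomain}/generate`, buildRequestOptions(apiKey, payload, controller.signal));
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}
```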

Troubleshooting

  • No credentials returned!
    This error occurs if the required API key credential is missing or not configured properly. Ensure the API key credential is created and assigned to the node.

  • HTTP request failures or timeouts
    Network issues or incorrect API domain configuration can cause request failures. Verify the API URL and network access.

  • Timeouts
    The node enforces a 120-second timeout on API calls. If the API is slow or unresponsive, check the service's status, and increase the timeout only if the node's configuration allows it.

  • Unsupported operation
    Currently, only "Generate Text" performs meaningful work. The "Process Video" operation only waits 90 seconds and returns a fixed message.

  • Safety settings have no effect on unsupported models
    Safety filters apply only to specific Gemini models. Using other models ignores these settings.
