Actions
Overview
This node integrates with StrangeLogic's Large Language Model (LLM) API to generate text completions or process videos based on user input. It supports multiple advanced LLM models, letting users send prompts and receive generated text responses shaped by parameters such as temperature and token limits.
Common scenarios:
- Generating creative or informative text content from a prompt.
- Experimenting with different LLM models for varied response styles.
- Processing video URLs to extract or analyze content via the API (though this operation is hidden in the provided properties).
Practical examples:
- Inputting a question or statement prompt to get an AI-generated answer or continuation.
- Using safety settings to filter out harmful or inappropriate content in generated text.
- Sending a video URL to the API for processing (e.g., summarization or analysis), receiving structured results.
Properties
| Name | Meaning |
|---|---|
| Model Type | Selects the LLM model to use for text generation. Options include Gemini variants, Llama models, Mistral, Mixtral, WizardLM, and others. |
| Prompt | The text prompt you want the LLM to respond to. |
Additional properties shown only when the operation is "text":
| Name | Meaning |
|---|---|
| Temperature | Controls randomness in the generated response; higher values produce more diverse outputs. |
| Max Tokens | Maximum length of the generated response in tokens. |
| Top P | Nucleus sampling parameter controlling the diversity of token selection. |
| Top K | Limits the number of highest probability tokens considered at each step. |
| Safety Settings: Hate Block | Level of filtering for hate speech content: None, Low, Medium, High (only for certain Gemini models). |
| Safety Settings: Harassment Block | Level of filtering for harassment content: None, Low, Medium, High (only for certain Gemini models). |
| Safety Settings: Sexual Block | Level of filtering for sexual content: None, Low, Medium, High (only for certain Gemini models). |
| Safety Settings: Dangerous Content Block | Level of filtering for dangerous content: None, Low, Medium, High (only for certain Gemini models). |
| JSON Response | When enabled (only for a specific Gemini model), requests the API to return a JSON-formatted response. |
Note: The "video" operation and its property "Video URL" are hidden in your provided context and thus excluded here.
Output
The node outputs JSON data containing the API response:
- For text generation, the output JSON contains the generated text or structured JSON if the JSON Response option is enabled.
- For video processing (the hidden "video" operation), the output JSON contains the processed video data returned by the API.
No binary data output is produced by this node.
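As a rough illustration, a text-generation output item might look like the sketch below. The field names are hypothetical; the actual shape depends on the StrangeLogic API response.

```typescript
// Hypothetical output item shape; real field names depend on the API response.
interface LlmOutputItem {
  json: {
    text?: string;                  // Generated text for the "text" operation
    data?: Record<string, unknown>; // Structured result when JSON Response is enabled
  };
}

const exampleItem: LlmOutputItem = {
  json: { text: 'Workflow automation reduces manual, repetitive work by ...' },
};
```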
Dependencies
- Requires an API key credential for authentication with the StrangeLogic LLM API.
- The node makes HTTP POST requests to the API domain specified in the credentials.
- No additional external dependencies beyond standard HTTP request capabilities.
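For reference, a minimal sketch of the kind of authenticated POST the node performs is shown below. The endpoint path (`/v1/generate`) and header name (`X-API-Key`) are assumptions, not documented values; the real domain and authentication scheme come from the configured credential.

```typescript
// Minimal sketch of an authenticated request. The `/v1/generate` path and
// `X-API-Key` header are assumptions, not documented values.
// Uses the global fetch available in Node.js 18+.
async function callLlmApi(
  apiDomain: string,
  apiKey: string,
  payload: Record<string, unknown>,
): Promise<unknown> {
  const response = await fetch(`${apiDomain}/v1/generate`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey, // Header name is an assumption
    },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}
```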
Troubleshooting
No credentials returned!
This error occurs if the required API authentication credentials are missing or not configured properly. Ensure that the API key credential is set up correctly in n8n.
API request failures
Network issues, invalid API keys, or incorrect parameters can cause request errors. Check the API key validity, network connectivity, and parameter correctness.
Continue On Fail behavior
If enabled, the node outputs error messages in the JSON field instead of stopping execution on failure, allowing workflows to handle errors gracefully (a minimal sketch of this pattern appears at the end of this section).
Model compatibility with safety settings
Safety setting options apply only to certain Gemini models. Using them with unsupported models may have no effect or cause unexpected behavior.
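Regarding the Continue On Fail behavior described above: n8n nodes commonly implement it by emitting an item whose `json` field carries the error message rather than aborting the run. The sketch below illustrates that general pattern; it is not the node's actual source code.

```typescript
// Sketch of the common Continue On Fail pattern in n8n nodes: on error,
// return an item carrying the message instead of stopping the execution.
type Item = { json: Record<string, unknown> };

function handleItem(run: () => Item, continueOnFail: boolean): Item {
  try {
    return run();
  } catch (error) {
    if (continueOnFail) {
      return { json: { error: (error as Error).message } };
    }
    throw error;
  }
}
```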
Links and References
- StrangeLogic LLM API Documentation (Replace with actual URL if available)
- n8n HTTP Request Node Documentation
- General info on LLM parameters: temperature, top_p, top_k sampling techniques.
