Overview
This node provides an interface to interact with large language models (LLMs) via a custom API. It supports two main operations: generating chat completions based on user prompts and processing videos (though the video operation is not detailed here). The primary use case is to send a prompt to an LLM model and receive a generated text response, which can be used for tasks such as content generation, summarization, question answering, or conversational agents.
Practical examples include:
- Automating customer support replies by sending user queries to the LLM.
- Generating creative writing or marketing copy from a given prompt.
- Extracting insights or summaries from textual data.
- Experimenting with different LLM models and safety settings to tailor responses.
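For context, a chat request against the node's endpoint might look roughly like the sketch below. This is a minimal illustration only: the payload field names (`model`, `prompt`, `temperature`, `max_tokens`), the Bearer auth scheme, and the `response` field are assumptions, not confirmed by the node's source.

```typescript
// Minimal sketch of a chat-completion call against ${domain}/llms.
// All field names and the Bearer auth scheme below are assumptions.
interface ChatRequest {
  model: string;        // model ID chosen from the loaded list
  prompt: string;       // input text prompt
  temperature?: number; // sampling randomness
  max_tokens?: number;  // response length cap
}

async function chatCompletion(
  domain: string,
  apiKey: string,
  req: ChatRequest,
): Promise<string> {
  const res = await fetch(`${domain}/llms`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Chat request failed: HTTP ${res.status}`);
  const data = await res.json();
  return data.response; // mirrors the example output structure shown below
}
```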
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select the LLM model to use from a dynamically loaded list or specify a model ID manually. |
| Prompt | The input text prompt you want the LLM to respond to. |
| Temperature | Controls randomness in the response generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum length of the generated response in tokens. |
| Top P | Nucleus sampling parameter controlling the breadth of token selection during generation. |
| Top K | Limits token selection to the top K most likely tokens at each step. |
| Safety Settings: Hate Block | Level of filtering/blocking for hate speech content (None, Low, Medium, High). Only available for certain Gemini models. |
| Safety Settings: Harassment Block | Level of filtering/blocking for harassment content (None, Low, Medium, High). Only available for certain Gemini models. |
| Safety Settings: Sexual Block | Level of filtering/blocking for sexual content (None, Low, Medium, High). Only available for certain Gemini models. |
| Safety Settings: Dangerous Content Block | Level of filtering/blocking for dangerous content (None, Low, Medium, High). Only available for certain Gemini models. |
| JSON Response | When enabled for specific models, returns the raw JSON response from the API instead of plain text. |
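To make the table concrete, here is one illustrative way these properties could be filled in. The camel-cased field names and the specific values are hypothetical, not taken from the node's actual parameter schema.

```typescript
// Illustrative property values (field names and values are hypothetical).
const nodeParameters = {
  model: 'gemini-1.5-pro',  // hypothetical model ID from the loaded list
  prompt: 'Summarize this support ticket in two sentences: ...',
  temperature: 0.7,         // moderate randomness
  maxTokens: 512,           // cap response length in tokens
  topP: 0.95,               // nucleus sampling threshold
  topK: 40,                 // sample only from the 40 most likely tokens
  safetySettings: {         // only honored by supported Gemini models
    hateBlock: 'Medium',
    harassmentBlock: 'Medium',
    sexualBlock: 'High',
    dangerousContentBlock: 'High',
  },
  jsonResponse: false,      // return plain text, not the raw API JSON
};
```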
Output
The node outputs an array of items, where each item contains a `json` field holding the response from the LLM API.
- For the Chat operation, the `json` output typically contains the generated text response from the model.
- If the "JSON Response" option is enabled (and supported by the selected model), the output contains the full JSON response from the API, giving access to additional metadata or structured information.
- Binary data output is not produced by this node.
Example output structure for a chat response:
```json
{
  "json": {
    "response": "Generated text from the LLM based on the prompt."
  }
}
```
Or if JSON Response is enabled, the entire API response JSON is returned.
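Downstream nodes can pick the generated text out of this structure. As a sketch, an n8n Code node (Run Once for All Items) could pass through only the text, assuming the `response` field shown above:

```typescript
// n8n Code node sketch: keep only the generated text from each item.
// Assumes each item's json carries the `response` field shown above.
return $input.all().map((item) => ({
  json: { text: item.json.response },
}));
```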
Dependencies
- Requires an API key credential for authentication with the external LLM service.
- The node makes HTTP requests to `https://ai.system.sl/llm-models` to load available models and to `${domain}/llms` for chat completions.
- The domain and API key are obtained from the configured credentials.
- No other external dependencies are required.
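For reference, the model-list call presumably looks something like the sketch below; the Bearer auth header and the shape of the returned list are assumptions.

```typescript
// Hypothetical sketch of fetching the model list; the auth scheme and
// response shape ({ id: string }[]) are assumptions.
async function loadModels(apiKey: string): Promise<string[]> {
  const res = await fetch('https://ai.system.sl/llm-models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Failed to load models: HTTP ${res.status}`);
  const models: Array<{ id: string }> = await res.json();
  return models.map((m) => m.id);
}
```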
Troubleshooting
No credentials returned!
This error occurs if the node cannot find the required API key credential. Ensure that the API key credential is properly configured in n8n.
Error loading models
If the node fails to load the list of models, check network connectivity and verify that the API key has permission to access the model list endpoint.
Invalid response from API
If the API returns unexpected data, it may indicate an issue with the API service or incorrect request parameters.
API request failures
Network errors, invalid parameters, or exceeded rate limits can cause request failures. Review error messages and ensure all required properties are correctly set.
Unsupported model for safety settings or JSON response
Some safety settings and the JSON Response option only apply to specific models. Using them with unsupported models may have no effect.
Links and References
- n8n Expressions Documentation — for using expressions in property fields.
- The external LLM API documentation (not linked here) for details on available models and response formats.