Overview
This node provides an interface to interact with large language models (LLMs) via a custom API. It supports two main operations: Chat and Video processing, though here we focus on the Chat operation.
In the Chat operation, users can send a text prompt to a selected LLM model and receive generated text responses. This is useful for scenarios such as:
- Generating conversational replies or chatbot messages.
- Creating content like articles, summaries, or code snippets.
- Performing natural language understanding or generation tasks.
- Experimenting with different LLM models and tuning generation parameters.
For example, you might use this node to generate customer support answers based on user queries or to create creative writing prompts dynamically.
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select the LLM model to use from a dynamically loaded list or specify a model ID manually. |
| Prompt | The input text prompt that you want the LLM to respond to. |
| Temperature | Controls randomness in the response generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum length of the generated response in tokens. |
| Top P | Nucleus sampling parameter controlling the breadth of token selection during generation. |
| Top K | Limits token selection to the top K probable tokens at each step. |
| Safety Settings: Hate Block | (Only for certain Gemini models) Level of filtering applied to block hateful content: None, Low, Medium, High. |
| Safety Settings: Harassment Block | (Only for certain Gemini models) Level of filtering applied to block harassment content: None, Low, Medium, High. |
| Safety Settings: Sexual Block | (Only for certain Gemini models) Level of filtering applied to block sexual content: None, Low, Medium, High. |
| Safety Settings: Dangerous Content Block | (Only for certain Gemini models) Level of filtering applied to block dangerous content: None, Low, Medium, High. |
| JSON Response | For specific models, choose whether to receive the response as raw JSON (true) or plain text (false). |
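
To make the mapping concrete, here is a minimal sketch of how these properties might be assembled into a chat request. The field names and payload shape are assumptions for illustration only; consult the external API documentation for the authoritative schema.

```typescript
// Hypothetical request body built from the node's Chat properties (field names are assumed).
interface ChatRequest {
  model: string;          // "Model Name or ID"
  prompt: string;         // "Prompt"
  temperature?: number;   // "Temperature": higher values give more diverse output
  maxTokens?: number;     // "Max Tokens": caps the response length in tokens
  topP?: number;          // "Top P": nucleus sampling threshold
  topK?: number;          // "Top K": restricts sampling to the K most probable tokens
  jsonResponse?: boolean; // "JSON Response": raw JSON (true) or plain text (false)
}

const request: ChatRequest = {
  model: "example-model-id", // use an ID from the dynamically loaded model list
  prompt: "Draft a short reply to this customer query: ...",
  temperature: 0.7,
  maxTokens: 512,
  topP: 0.95,
  topK: 40,
  jsonResponse: false,
};
```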
Output
The node outputs an array of items where each item contains a `json` field with the response from the LLM API.
- For the Chat operation, the `json` output contains the generated text or a structured JSON response from the model, depending on the `JSON Response` setting.
- If the request fails and "Continue On Fail" is enabled, the output will contain an error message in the `json.error` field.
- No binary data output is produced by this node.
Example output structure for a successful chat response:
```json
{
  "json": {
    "response": "Generated text from the LLM"
  }
}
```
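
The success and error item shapes can also be summarized as TypeScript types. This is a sketch based on the structure above and the `json.error` field described earlier, not an official schema.

```typescript
// Sketch of the items emitted by the Chat operation (not an official schema).
interface ChatSuccessItem {
  json: {
    // Plain text by default; may be structured data when "JSON Response" is enabled.
    response: string | Record<string, unknown>;
  };
}

interface ChatErrorItem {
  // Emitted when a request fails and "Continue On Fail" is enabled.
  json: {
    error: string;
  };
}

type ChatOutputItem = ChatSuccessItem | ChatErrorItem;
```

In a downstream node, the generated text can then be referenced with an expression such as `{{ $json.response }}`.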
Dependencies
- Requires an API key credential for authentication with the external LLM service.
- The node makes HTTP requests to the API endpoint hosted at a domain specified in the credentials.
- The API endpoint `/llms` is used for chat completions.
- The node dynamically loads available models from the API endpoint `/llm-models`.
- Proper network access and valid API credentials are necessary.
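
The request flow implied by these dependencies can be sketched as follows. The base URL, authentication header, and payload shape are assumptions for illustration; in practice the node takes these values from its API key credential and the API's own documentation.

```typescript
// Sketch of the two HTTP calls described above (auth header and payload shape are assumed).
const baseUrl = "https://llm.example.com";    // domain comes from the credential in practice
const apiKey = process.env.LLM_API_KEY ?? ""; // key comes from the credential in practice

// Load the model list that populates the "Model Name or ID" dropdown.
async function loadModels(): Promise<unknown[]> {
  const res = await fetch(`${baseUrl}/llm-models`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // header name is an assumption
  });
  const models = await res.json();
  if (!Array.isArray(models)) {
    // Mirrors the "Expected an array of models" error in Troubleshooting below.
    throw new Error("Invalid response from API: Expected an array of models.");
  }
  return models;
}

// Send a chat completion request.
async function chat(model: string, prompt: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/llms`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // header name is an assumption
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, prompt }), // payload shape is an assumption
  });
  if (!res.ok) {
    throw new Error(`API request failed with status ${res.status}`);
  }
  return res.json();
}
```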
Troubleshooting
No credentials returned!
This error occurs if the required API key credential is missing or not configured properly. Ensure you have set up the API key credential in n8n before using the node.

Invalid response from API: Expected an array of models.
This happens when loading the model options fails due to an unexpected API response. Check API availability and credentials.

API request errors (e.g., network issues, invalid parameters)
The node throws errors with messages from the API. Enable "Continue On Fail" to handle errors gracefully in workflows.

Model not found or unsupported
Selecting a model not supported by the API or mistyping the model ID may cause failures. Use the dynamic model loader or verify model IDs.

Incorrect property values
Providing invalid values for temperature, max tokens, or safety settings may lead to unexpected results or API rejections.
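
When "Continue On Fail" is enabled, failed requests come through as regular items carrying a `json.error` field instead of stopping the workflow, so downstream logic should branch on that field (for example, an IF node checking `{{ $json.error }}`). Below is a minimal sketch of the same check in code, assuming the item shape from the Output section.

```typescript
// Separate successful responses from failed ones (sketch).
interface OutputItem {
  json: { response?: unknown; error?: string };
}

function splitResults(items: OutputItem[]) {
  const failures = items.filter((item) => item.json.error !== undefined);
  const successes = items.filter((item) => item.json.error === undefined);
  return { successes, failures };
}
```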
Links and References
- n8n Expressions Documentation — for using expressions in property fields.
- External API documentation (not provided here) is required for details on model capabilities and safety settings.