Overview
This node provides an interface for interacting with large language models (LLMs) via a custom API. It supports two main operations, Chat and Video processing; this section covers the Chat operation.
In the Chat operation, users can send a text prompt to a selected LLM model and receive a generated response. This is useful for scenarios such as:
- Generating conversational replies or chatbot responses.
- Creating content based on prompts (e.g., articles, summaries).
- Assisting with creative writing or brainstorming ideas.
- Automating customer support answers.
For example, you might input a question like "Explain quantum computing in simple terms," select a model, and get a detailed explanation generated by the LLM.
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select the LLM model to use from a dynamically loaded list or specify a model ID manually. |
| Prompt | The text prompt or message you want the LLM to respond to. |
| Temperature | Controls randomness in the response generation; higher values produce more diverse outputs. |
| Max Tokens | Maximum length (in tokens) of the generated response. |
| Top P | Nucleus sampling parameter controlling the diversity of token selection (probability mass). |
| Top K | Limits token selection to the top K most likely tokens at each step. |
| Safety Settings : Hate Block | (Only for certain Gemini models) Level of filtering/blocking for hate speech content: None, Low, Medium, High. |
| Safety Settings : Harassment Block | (Only for certain Gemini models) Level of filtering/blocking for harassment content: None, Low, Medium, High. |
| Safety Settings : Sexual Block | (Only for certain Gemini models) Level of filtering/blocking for sexual content: None, Low, Medium, High. |
| Safety Settings : Dangerous Content Block | (Only for certain Gemini models) Level of filtering/blocking for dangerous content: None, Low, Medium, High. |
| JSON Response | For specific models, choose whether to receive the response as raw JSON (true) or plain text (false). |
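As an illustration, the properties above might map onto an API request body along these lines. The field names and values here are assumptions for the sake of the example; consult the provider's API documentation for the exact schema:

```json
{
  "model": "example-gemini-model",
  "prompt": "Explain quantum computing in simple terms",
  "temperature": 0.7,
  "max_tokens": 512,
  "top_p": 0.9,
  "top_k": 40,
  "safety_settings": {
    "hate": "Medium",
    "harassment": "Medium",
    "sexual": "High",
    "dangerous_content": "High"
  },
  "json_response": false
}
```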
Output
The node outputs an array of items where each item contains a `json` field holding the response from the LLM API.
- For the Chat operation, the `json` output typically includes the generated text or structured data returned by the API.
- If the `JSON Response` property is enabled for supported models, the output will be a JSON object representing the full structured response.
- No binary data output is produced by this node.
Example output snippet (simplified):
{
"json": {
"response": "This is the generated answer from the LLM."
}
}
Or if JSON response is enabled:
{
"json": {
"choices": [...],
"usage": {...},
...
}
}
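Downstream nodes can reference the output via n8n expressions. Assuming the response field is named `response`, as in the simplified snippet above, a later node could use:

```
{{ $json.response }}
```

The actual field name depends on the API's response format, so inspect the node's output before wiring up expressions.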
Dependencies
- Requires an API key credential for authentication with the external LLM service.
- The node makes HTTP requests to the API endpoint hosted at a domain specified in the credentials.
- The node dynamically loads available models from the API before execution.
- Proper network access to the API endpoints is necessary.
Troubleshooting
No credentials returned!
This error occurs if the required API key credential is missing or not configured. Ensure that the API key credential is set up correctly in n8n.

Error loading models
If the node cannot fetch the list of models, check your API key's validity and network connectivity to the model listing endpoint.

API request failures
Errors during the request to generate chat completions may be due to invalid parameters, exceeded rate limits, or server issues. Review the error message and verify all input properties.

Unsupported model or operation
Some safety settings and the JSON Response option are only available for specific models. Using them with unsupported models may have no effect or cause errors.

Continue On Fail
If enabled, the node will continue processing subsequent items even if one fails, returning error details in the output.
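With Continue On Fail enabled, a failed item typically carries the error message in its `json` field rather than halting the workflow. The exact shape depends on the node's implementation; a hypothetical failed item might look like:

```json
{
  "json": {
    "error": "Request failed with status code 429"
  }
}
```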
Links and References
- n8n Expressions Documentation — for using expressions in property fields.
- External LLM API documentation (not provided here) would be needed for deeper understanding of model options and response formats.