Overview
This node integrates with the Requesty API to send chat messages to a conversational AI model. It allows users to specify a model, provide system-level instructions (system prompt), and send user messages to receive AI-generated responses. This is useful for automating conversational workflows, generating text completions, or building chatbots within n8n.
Typical use cases include:
- Customer support automation by sending user queries and receiving AI-generated replies.
- Content generation or brainstorming assistance by interacting with an AI chat model.
- Integrating AI-driven conversation capabilities into business processes.
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select the AI model to use from a dynamically loaded list or specify a model ID via expression. |
| System Prompt | A system message that sets the behavior or persona of the assistant (e.g., "You are a helpful assistant"). |
| Message | The actual user message to send to the chat model. |
| Temperature | Controls randomness in the output; higher values (e.g., 0.9) produce more creative responses. |
| Additional Fields | Optional parameters to fine-tune the response, listed below (see the request sketch after this table): |
| - Frequency Penalty | Number between -2.0 and 2.0; penalizes new tokens based on their existing frequency in the text. |
| - Max Tokens | Maximum number of tokens to generate in the response. |
| - Presence Penalty | Number between -2.0 and 2.0; penalizes new tokens based on whether they appear in the text so far. |
| - Top P | Nucleus sampling parameter as an alternative to temperature-based sampling. |
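For orientation, here is a minimal sketch of how these properties could map onto an OpenAI-style chat completions request body. The field names (`frequency_penalty`, `max_tokens`, `presence_penalty`, `top_p`) and the example model ID follow the common OpenAI-compatible convention and are assumptions here, not confirmed Requesty field names.

```typescript
// Illustrative mapping of the node's properties to a chat completions payload.
// Field names assume an OpenAI-compatible schema; verify against the Requesty docs.
interface ChatRequestBody {
  model: string;
  messages: Array<{ role: 'system' | 'user'; content: string }>;
  temperature?: number;
  frequency_penalty?: number;
  max_tokens?: number;
  presence_penalty?: number;
  top_p?: number;
}

const body: ChatRequestBody = {
  model: 'openai/gpt-4o', // "Model Name or ID" (example value)
  messages: [
    { role: 'system', content: 'You are a helpful assistant' }, // System Prompt
    { role: 'user', content: 'Summarize this support ticket.' }, // Message
  ],
  temperature: 0.9, // Temperature
  frequency_penalty: 0, // Additional Fields > Frequency Penalty
  max_tokens: 512, // Additional Fields > Max Tokens
  presence_penalty: 0, // Additional Fields > Presence Penalty
  top_p: 1, // Additional Fields > Top P
};
```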
Output
The node outputs JSON data with the following structure:

    {
      "response": "string"
    }

`response`: The trimmed text content generated by the AI chat model in reply to the input message.
No binary data output is produced by this node.
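In downstream nodes, the generated text can be referenced with an n8n expression such as `{{ $json.response }}`.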
Dependencies
- Requires an active API key credential for the Requesty API.
- The node makes HTTP requests to https://router.requesty.ai/v1/chat/completions for chat completions.
- The node also fetches available models from https://router.requesty.ai/v1/models.
- Proper configuration of the API key credential in n8n is necessary for authentication.
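As a rough sketch (not the node's actual source), the two calls could look like the following, assuming the router expects a Bearer token in the `Authorization` header, which is the usual pattern for OpenAI-compatible endpoints:

```typescript
// Sketch of the two HTTP calls listed above. Bearer-token auth is an assumption;
// confirm the authentication scheme against the Requesty API documentation.
const REQUESTY_API_KEY = process.env.REQUESTY_API_KEY ?? ''; // hypothetical env var

async function listModels(): Promise<unknown> {
  const res = await fetch('https://router.requesty.ai/v1/models', {
    headers: { Authorization: `Bearer ${REQUESTY_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Failed to load models: HTTP ${res.status}`);
  return res.json();
}

async function createChatCompletion(payload: Record<string, unknown>): Promise<unknown> {
  const res = await fetch('https://router.requesty.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${REQUESTY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Chat completion failed: HTTP ${res.status}`);
  return res.json();
}
```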
Troubleshooting
- Missing or invalid API key: The node will throw an error if no valid API key is provided. Ensure the API key credential is correctly set up.
- Invalid response format: If the API returns unexpected data, the node throws an error indicating an invalid response format. This may be due to API changes or network issues.
- No models found: If the models endpoint returns an empty list or a malformed response, the node reports that it failed to load models.
- Continue on Fail: If enabled, the node will continue processing subsequent items even if one fails, returning error details per item.
- Network or connectivity issues can cause request failures; verify internet access and API availability.
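The error behaviours above can be pictured with the pattern below. This is only an illustration of the documented behaviour, not the node's actual implementation, and it assumes the API returns an OpenAI-style `choices[0].message.content` field:

```typescript
// Illustrative pattern only: validate the response shape and honour Continue On Fail.
type ItemResult = { json: { response?: string; error?: string } };

function extractResponse(apiResult: unknown): string {
  // Assumed shape: { choices: [{ message: { content: string } }] }
  const content = (apiResult as any)?.choices?.[0]?.message?.content;
  if (typeof content !== 'string') {
    throw new Error('Invalid response format received from the Requesty API');
  }
  return content.trim(); // matches the trimmed "response" field described above
}

function handleItem(apiResult: unknown, continueOnFail: boolean): ItemResult {
  try {
    return { json: { response: extractResponse(apiResult) } };
  } catch (error) {
    // With Continue on Fail enabled, the error is reported per item
    // instead of stopping the whole execution.
    if (continueOnFail) {
      return { json: { error: (error as Error).message } };
    }
    throw error;
  }
}
```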
Links and References
- Requesty API Documentation (referenced in headers)
- n8n Expressions Documentation (for specifying dynamic model IDs)