Overview
This node integrates with the 302.ai API to send chat messages to an AI language model. It allows users to specify a model, provide a system prompt to set the assistant's behavior, and send a user message optionally accompanied by an image URL. The node then returns the AI-generated chat response.
Common scenarios include:
- Building conversational AI assistants that respond based on custom prompts.
- Enhancing chatbots with image context by sending images alongside text.
- Experimenting with different AI models and tuning generation parameters like temperature and penalties.
Practical examples:
- A customer support chatbot sends a user query and receives a helpful answer.
- An educational app sends a system prompt defining a tutor persona and asks questions.
- A creative writing tool sends a story prompt and gets AI-generated continuation.
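Under the hood, the node turns its properties into a list of chat messages. The sketch below is illustrative only and assumes an OpenAI-compatible message schema (a `system` message plus a `user` message, with images passed as `image_url` content parts); the node assembles the actual payload for you, so the exact wire format may differ.

```typescript
// Illustrative only: assumes an OpenAI-compatible "messages" schema.
// The node builds this structure from System Prompt, Message, and Image URL.
const messages = [
  { role: "system", content: "You are a patient math tutor." },
  {
    role: "user",
    content: [
      { type: "text", text: "What is shown in this diagram?" },
      { type: "image_url", image_url: { url: "https://example.com/diagram.png" } },
    ],
  },
];
```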
Properties
| Name | Meaning |
|---|---|
| Model Name or ID | Select the AI model to use from a list or specify an ID via expression. |
| System Prompt | Optional system message to define the assistant's behavior or context. |
| Message | The main user message to send to the chat model (required). |
| Image URL | Optional URL or base64 string of an image to provide additional context along with the message. |
| Temperature | Sampling temperature controlling randomness in output (e.g., 0.9). |
| Additional Fields | Collection of optional parameters:<br>- Frequency Penalty: Penalizes repeated tokens (-2.0 to 2.0).<br>- Max Tokens: Maximum number of tokens to generate.<br>- Presence Penalty: Penalizes tokens that have already appeared in the text (-2.0 to 2.0).<br>- Top P: Nucleus sampling parameter (0 to 1). |
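As a rough mental model, the properties above map onto the parameters of a single chat-completion request. The object below is a hedged sketch that assumes OpenAI-style parameter names (`temperature`, `max_tokens`, `frequency_penalty`, `presence_penalty`, `top_p`); the node performs this mapping internally, and the model ID shown is purely hypothetical.

```typescript
// Hypothetical mapping of node properties to request parameters
// (OpenAI-style names assumed; the node handles this internally).
const requestBody = {
  model: "gpt-4o-mini",                                               // Model Name or ID (example value)
  messages: [
    { role: "system", content: "You are a helpful support agent." },  // System Prompt
    { role: "user", content: "How do I reset my password?" },         // Message
  ],
  temperature: 0.9,       // Temperature
  max_tokens: 512,        // Additional Fields > Max Tokens
  frequency_penalty: 0,   // Additional Fields > Frequency Penalty
  presence_penalty: 0,    // Additional Fields > Presence Penalty
  top_p: 1,               // Additional Fields > Top P
};
```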
Output
The node outputs JSON data with the following structure:
```json
{
  "response": "AI-generated chat message text"
}
```
- The `response` field contains the trimmed text generated by the AI model in reply to the input message.
- If an error occurs and "Continue On Fail" is enabled, the output will contain an `error` field with the error message.
- No binary data output is produced by this node.
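Downstream nodes can read the reply with the expression `{{ $json.response }}`. For more involved post-processing, a minimal n8n Code node sketch (assuming a single output item) might look like this:

```typescript
// n8n Code node ("Run Once for All Items"), placed directly after this node.
// Reads the AI reply from the first item's JSON output.
const reply = $input.first().json.response;

return [{ json: { reply, replyLength: String(reply).length } }];
```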
Dependencies
- Requires an API key credential for authenticating with the 302.ai service.
- Makes HTTP requests to the 302.ai API endpoints:
  - `GET https://api.302.ai/v1/models?llm=1` to load available models.
  - `POST https://api.302.ai/v1/chat/completions` to send chat messages.
- The node expects the API key to be configured in n8n credentials for 302.ai.
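For reference, the sketch below approximates what the node does if you call these endpoints yourself outside n8n. It assumes Bearer-token authentication and a hypothetical environment variable for the key; inside n8n, the configured 302.ai credential handles authentication for you.

```typescript
// Standalone sketch (Node.js 18+, global fetch). Bearer-token auth is an
// assumption here; in n8n the 302.ai credential supplies authentication.
const apiKey = process.env.AI302_API_KEY; // hypothetical environment variable

async function listModels() {
  const res = await fetch("https://api.302.ai/v1/models?llm=1", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Failed to load models: ${res.status}`);
  return res.json();
}

async function sendChat(message: string) {
  const res = await fetch("https://api.302.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // hypothetical model ID
      messages: [{ role: "user", content: message }],
    }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}
```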
Troubleshooting
- No valid API key provided: Ensure the API key credential is correctly set up and linked to the node.
- Failed to load models: Check network connectivity and validity of the API key; the node fetches models dynamically.
- Invalid response format from 302.ai API: This indicates unexpected API response structure; verify API status or version compatibility.
- Error during chat request: Could be due to invalid parameters or API limits; review input properties and API usage quotas.
- Enable "Continue On Fail" to handle errors gracefully and receive error details in output.
Links and References
- 302.ai API Documentation (assumed official docs)
- n8n Expressions Documentation
- General info on AI chat models and parameters:
  - Temperature: Controls randomness in output.
  - Frequency and Presence Penalties: Influence token repetition and novelty.
  - Top P (Nucleus Sampling): Alternative sampling method focusing on top probability mass.
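Illustrative starting values for these parameters (rules of thumb, not recommendations from 302.ai):

```typescript
// Illustrative presets; tune per model and use case.
const focused = { temperature: 0.2, top_p: 1 };                            // repeatable, on-topic answers
const creative = { temperature: 0.9, top_p: 0.95 };                        // more varied, exploratory output
const lessRepetitive = { frequency_penalty: 0.5, presence_penalty: 0.5 };  // discourage repetition
```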