Overview
This node allows users to interact with large language models (LLMs) through the OpenWebUI API. Specifically, for the Chat resource and Send Message operation, it sends a user message to a specified LLM model and retrieves the model's generated response. This is useful for automating conversational AI tasks such as chatbots, virtual assistants, or any scenario requiring natural language generation.
For example, you can send a prompt like "What is the weather today?" to a model named "llama2:latest" and receive a text reply generated by that model.
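Under the hood, this maps to a chat-completions style request. A minimal sketch, assuming an OpenAI-compatible request schema (the function name and shape here are illustrative, not the node's actual internals):

```javascript
// Hypothetical sketch of the request this operation builds.
// The endpoint path comes from the Dependencies section below;
// the messages[] shape is an assumed OpenAI-compatible schema.
function buildChatRequest(model, message) {
  return {
    url: '/api/chat/completions',
    body: {
      model,
      messages: [{ role: 'user', content: message }],
    },
  };
}

const req = buildChatRequest('llama2:latest', 'What is the weather today?');
// req.body.model is 'llama2:latest'; req.body.messages holds the user turn
```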
Properties
| Name | Meaning |
|---|---|
| Model | The name of the LLM model to use (e.g., "llama2:latest"). |
| Message | The user message to send to the model. |
| Additional Fields | Optional parameters to customize the request: |
| - Temperature | Controls creativity of the response; value between 0 (deterministic) and 2 (creative). |
| - Max Tokens | Maximum number of tokens allowed in the model's response. |
| - System Message | A system-level message providing context or instructions to the model before the user input. |
| - Stream | Boolean flag indicating if the response should be streamed incrementally. |
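The properties above map onto the request body roughly as follows. This is a sketch assuming OpenWebUI's OpenAI-compatible field names (temperature, max_tokens, stream); treat the field names as assumptions, and note that the System Message becomes a separate message with role "system":

```javascript
// Sketch: mapping node properties onto a chat-completions request body.
// Optional fields are only included when the user sets them.
function buildBody({ model, message, systemMessage, temperature, maxTokens, stream }) {
  const messages = [];
  if (systemMessage) messages.push({ role: 'system', content: systemMessage });
  messages.push({ role: 'user', content: message });

  const body = { model, messages };
  if (temperature !== undefined) body.temperature = temperature; // 0 (deterministic) .. 2 (creative)
  if (maxTokens !== undefined) body.max_tokens = maxTokens;      // response length cap
  if (stream !== undefined) body.stream = stream;                // incremental streaming flag
  return body;
}
```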
Output
The node outputs JSON data with the following structure:
- response: The textual content generated by the model in reply to the user's message.
- model: The name of the model used to generate the response.
- usage: Information about token usage during the request (e.g., tokens consumed).
- full_response: The complete raw response object returned by the OpenWebUI API, which may include additional metadata.
No binary data output is produced by this node.
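The output fields above can be derived from the raw API response. A sketch, assuming the raw response follows the OpenAI-compatible choices[] shape (the extraction path is an assumption; OpenWebUI may differ):

```javascript
// Sketch: shaping a raw chat-completions response into the node's output.
// raw.choices[0].message.content is the assumed location of the reply text.
function toNodeOutput(raw, model) {
  return {
    response: raw.choices?.[0]?.message?.content ?? '',
    model,
    usage: raw.usage,        // token counts reported by the API
    full_response: raw,      // untouched raw payload for downstream nodes
  };
}
```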
Dependencies
- Requires an API key credential for authenticating with the OpenWebUI service.
- The base URL for the OpenWebUI API must be configured in the node credentials.
- The node makes HTTP requests to the OpenWebUI API endpoint /api/chat/completions for sending messages.
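Putting the credentials together, the outgoing request looks roughly like this. The baseUrl and apiKey names stand in for the credential fields and are illustrative; the Bearer authorization scheme is an assumption based on typical OpenWebUI setups:

```javascript
// Sketch: assembling the authenticated HTTP request from node credentials.
// Nothing is sent here; this only builds the options object.
function buildRequestOptions(baseUrl, apiKey, body) {
  return {
    method: 'POST',
    url: `${baseUrl}/api/chat/completions`,
    headers: {
      Authorization: `Bearer ${apiKey}`, // API key credential
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}
```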
Troubleshooting
- Missing Model or Message: If either the model name or message is not provided, the node will throw an error stating these fields are mandatory. Ensure both are filled.
- API Connectivity Issues: Network errors or invalid API credentials will cause request failures. Verify your API key and base URL configuration.
- Invalid Parameter Values: Providing out-of-range values for temperature (not between 0 and 2) or max tokens may lead to API errors.
- Streaming Flag Misuse: Setting streaming to true requires client support for handling streamed responses; otherwise, unexpected behavior may occur.
If the node encounters an error and "Continue On Fail" is enabled, it will output the error message in the JSON output instead of stopping execution.
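The "Continue On Fail" behavior can be sketched as follows. The { error: message } output shape is illustrative of the n8n convention, not a verbatim excerpt of this node's code:

```javascript
// Sketch: with Continue On Fail enabled, a failing item is emitted as a
// JSON error object instead of aborting the whole execution.
function handleItem(run, continueOnFail) {
  try {
    return run();
  } catch (err) {
    if (continueOnFail) {
      return { json: { error: err.message } }; // surfaced in the node output
    }
    throw err; // default: stop execution
  }
}
```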
Links and References
- OpenWebUI API Documentation (replace with actual URL)
- n8n Documentation on Creating Custom Nodes
- General info on Large Language Models