## Overview
This node integrates with the OpenWebUI large language model (LLM) API to generate AI-driven text completions based on conversational messages. It is designed for use within AI agents or workflows that require natural language understanding and generation, such as chatbots, virtual assistants, content creation, or automated customer support.
Typical scenarios include:
- Sending a series of messages representing a conversation and receiving an AI-generated assistant reply.
- Customizing the behavior of the LLM by adjusting parameters like creativity, response length, and token penalties.
- Streaming responses when supported, enabling real-time interaction.
For example, you can provide a user message asking a question, and the node will return the assistant's answer generated by the specified OpenWebUI model.
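As a sketch, a minimal request body for such a conversation might look like the following (field names follow the OpenAI-compatible chat schema this node targets; the model name is illustrative):

```python
# Hypothetical request body matching the node's inputs.
# The "llama2:latest" model name and the message texts are examples only.
payload = {
    "model": "llama2:latest",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is n8n?"},
    ],
}
```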
## Properties
| Name | Meaning |
|---|---|
| Model | The name of the OpenWebUI model to use for generating completions (e.g., "llama2:latest"). |
| Messages | A collection of messages forming the conversation history. Each message has: |
| - Role | The speaker role: "System", "User", or "Assistant". |
| - Content | The textual content of the message. |
| Additional Options | Optional parameters to customize the LLM output: |
| - Temperature | Controls creativity of the response; range 0 (deterministic) to 2 (very creative). |
| - Max Tokens | Maximum number of tokens allowed in the response. |
| - Top P | Controls diversity via nucleus sampling; range 0 to 1. |
| - Frequency Penalty | Penalizes frequent tokens to reduce repetition; range -2 to 2. |
| - Presence Penalty | Penalizes tokens already present in the text to encourage new topics; range -2 to 2. |
| - Stream | Boolean flag indicating if the response should be streamed incrementally. |
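How the Additional Options could merge into the request body can be sketched as follows (`build_payload` is a hypothetical helper, not part of the node; parameter names follow the OpenAI-style schema):

```python
def build_payload(model: str, messages: list[dict], **options) -> dict:
    """Merge only the options the user actually set into the request body."""
    allowed = {"temperature", "max_tokens", "top_p",
               "frequency_penalty", "presence_penalty", "stream"}
    payload = {"model": model, "messages": messages}
    # Unknown or unset options are dropped so the API sees only valid fields.
    payload.update({k: v for k, v in options.items()
                    if k in allowed and v is not None})
    return payload
```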
## Output
The node outputs an array of JSON objects, each corresponding to an input item, containing:
- `message`: An object with:
  - `role`: Always `"assistant"`.
  - `content`: The generated text completion from the model.
- `model`: The model name used for the completion.
- `usage`: Token usage statistics returned by the API.
- `content`: Same as `message.content`, the assistant's reply text.
- `response`: Alias for the assistant's reply text.
- `full_response`: The complete raw response object returned by the OpenWebUI API, including all metadata and choices.
If streaming is enabled, the node handles partial responses accordingly (though details are abstracted here).
No binary data output is produced by this node.
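Assuming the raw API response follows the OpenAI-compatible `choices` structure, the mapping to the fields above can be sketched like this (`to_node_output` is a hypothetical helper):

```python
def to_node_output(full_response: dict) -> dict:
    """Map a raw chat-completion response onto the node's output fields."""
    choice = full_response["choices"][0]
    content = choice["message"]["content"]
    return {
        "message": {"role": "assistant", "content": content},
        "model": full_response.get("model"),
        "usage": full_response.get("usage"),
        "content": content,
        "response": content,          # alias for the reply text
        "full_response": full_response,
    }
```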
## Dependencies
- Requires an API key credential for authenticating with the OpenWebUI API.
- The base URL for the OpenWebUI API must be configured in the node credentials.
- The node makes HTTP POST requests to the `/api/chat/completions` endpoint of the OpenWebUI service.
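A minimal, standard-library sketch of the request the node sends (`build_request` is a hypothetical helper; the `Authorization: Bearer` scheme is an assumption based on the API-key credential):

```python
import json
import urllib.request

def build_request(base_url: str, api_key: str,
                  payload: dict) -> urllib.request.Request:
    """Build the POST request to the chat-completions endpoint (sketch)."""
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (e.g. via `urllib.request.urlopen`) then yields the raw JSON response described under Output.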
## Troubleshooting
- Missing Model Error: If the "Model" property is empty, the node throws an error stating the model is required. Ensure you specify a valid model name.
- No Messages Provided: The node requires at least one message with non-empty content. Providing an empty messages array or messages without content will cause an error.
- API Request Failures: Network issues, invalid credentials, or incorrect base URL configuration may cause request failures. Verify your API key and endpoint settings.
- Streaming Issues: If streaming is enabled but not supported by the server or network conditions prevent it, responses may fail or be incomplete.
- Continue On Fail: If enabled, errors per item will be caught and returned as error messages instead of stopping execution.
## Links and References
- OpenWebUI GitHub Repository (for general info about the LLM provider)
- OpenAI Chat Completion API Spec (similar API pattern for reference)
- n8n Documentation on Creating Custom Nodes