OpenWebUI Chat Model

OpenWebUI Chat Model for use with AI Agent

Overview

This node integrates with the OpenWebUI Chat Model API to generate AI-driven chat completions. It is designed for scenarios where users want to interact with language models for conversational AI, such as building chatbots, virtual assistants, or generating text responses based on prompts or message histories.

Typical use cases include:

  • Creating conversational agents that respond to user inputs.
  • Generating text completions or replies in a chat format.
  • Experimenting with different AI models and parameters to control response randomness and length.

For example, you can provide a series of chat messages or a single prompt, specify the model and parameters like temperature and max tokens, and receive a generated chat response from the AI.

Properties

  • Model: The name of the AI model used to generate chat completions (e.g., "llama3.2:latest").
  • Temperature: Controls the randomness of the AI's responses, from 0 (deterministic) to 2 (very random).
  • Max Tokens: The maximum number of tokens allowed in the AI-generated response.
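As a sketch, the properties above map onto the request body sent to the chat completions endpoint. The field names below (model, temperature, max_tokens, messages) follow the OpenAI-compatible schema that OpenWebUI generally exposes; treat the exact payload layout as an assumption, not the node's verified source.

```python
import json

# Hypothetical request body assembled from the node's properties.
# Field names follow the OpenAI-compatible schema; verify them
# against your OpenWebUI version.
payload = {
    "model": "llama3.2:latest",  # required: the Model property
    "temperature": 0.7,          # 0 (deterministic) .. 2 (very random)
    "max_tokens": 256,           # cap on tokens in the generated reply
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

body = json.dumps(payload)  # serialized JSON sent in the POST request
```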

Output

The node outputs an array of JSON objects, each corresponding to an input item processed. Each output JSON contains:

  • response: The generated chat message content from the AI.
  • model: The model identifier used for the generation.
  • usage: Information about token usage during the request.
  • fullResponse: The complete raw response object returned by the OpenWebUI API.

If the node encounters an error and "Continue On Fail" is enabled, it outputs the error message and error details instead of failing the workflow.

The node does not output binary data.
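As an illustration, assuming the API returns an OpenAI-style completion object, the node's documented output fields could be derived as below. The raw response shape (choices, message, usage) is an assumption based on the OpenAI-compatible API, not taken from the node's source.

```python
# Hypothetical raw response in the OpenAI-compatible shape that the
# /api/chat/completions endpoint generally returns.
raw = {
    "model": "llama3.2:latest",
    "choices": [{"message": {"role": "assistant", "content": "Hi there!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4, "total_tokens": 16},
}

# Map the raw response onto the node's documented output fields.
output = {
    "response": raw["choices"][0]["message"]["content"],
    "model": raw["model"],
    "usage": raw["usage"],
    "fullResponse": raw,
}
```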

Dependencies

  • Requires an API key credential for authenticating with the OpenWebUI API.
  • The node makes HTTP POST requests to the /api/chat/completions endpoint of the OpenWebUI service.
  • Proper network access and valid credentials are necessary for successful operation.
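A minimal sketch of the underlying HTTP call using Python's urllib. The endpoint path matches the description above; the base URL, Bearer auth scheme, and placeholder API key are assumptions for illustration.

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # assumed OpenWebUI address
API_KEY = "YOUR_API_KEY"            # your OpenWebUI API key credential

# Build (but do not send) the authenticated POST request.
req = urllib.request.Request(
    url=f"{BASE_URL}/api/chat/completions",
    data=json.dumps({
        "model": "llama3.2:latest",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # assumed Bearer scheme
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here because it
# needs a running OpenWebUI instance and valid credentials.
```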

Troubleshooting

  • Missing Model Parameter: If the "Model" property is not provided, the node throws an error indicating the model is mandatory.
  • Missing Input Messages or Prompt: The node requires either a messages array or a prompt string in the input JSON. Absence of both results in an error.
  • API Request Failures: Network issues, invalid credentials, or API errors will cause the node to throw exceptions unless "Continue On Fail" is enabled.
  • To resolve errors, ensure all required properties are set, input data includes either messages or prompt, and API credentials are correctly configured.
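The validation rules above can be sketched as follows; the function name and error messages are illustrative, not the node's actual code.

```python
def validate_input(model: str, item: dict) -> None:
    """Illustrative check mirroring the node's documented requirements."""
    if not model:
        raise ValueError("The Model property is mandatory")
    if not item.get("messages") and not item.get("prompt"):
        raise ValueError("Input must contain either 'messages' or 'prompt'")

# A prompt-only item passes; a missing model or an empty item raises.
validate_input("llama3.2:latest", {"prompt": "Hello"})
```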
