
h2oGPTe

h2oGPTe is an AI-powered search assistant that answers your internal teams' questions from large volumes of documents, websites, and workplace content.


Overview

This node allows you to send a message (a question or query) to a large language model (LLM) and receive a response. It is designed for scenarios where you want to interact with an LLM to get answers, generate text, or perform conversational tasks based on your input. This can be useful for building chatbots, virtual assistants, or any application that requires natural language understanding and generation.

For example, you can ask the LLM questions about your data, request summaries, or engage in a multi-turn conversation by providing prior chat context.
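The chat context mentioned above is supplied as the Chat Conversation option, described below as a JSON list of human-bot pairs. A minimal sketch of building that context in Python (the pair shape follows the property description; the surrounding key name is an assumption, not the node's exact internal schema):

```python
import json

# Prior human/bot exchanges, oldest first, as [human, bot] pairs.
# The "list of tuples" shape comes from the Chat Conversation property
# description; the wrapping key name is an illustrative assumption.
chat_conversation = [
    ["What formats can I upload?", "PDF, DOCX, and plain text are supported."],
    ["Can I upload a website?", "Yes, web pages can be ingested by URL."],
]

payload_fragment = {"chat_conversation": json.dumps(chat_conversation)}
print(payload_fragment["chat_conversation"])
```

Serializing with json.dumps keeps the option a valid JSON string, which avoids the malformed-JSON failures noted under Troubleshooting.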

Properties

Model Name: Name of the LLM to use. Use "auto" if you do not want to specify a particular model.
Question: The text query or question to send to the LLM.
Additional Options: A collection of optional parameters to customize the request:
- Chat Conversation: JSON list of tuples representing previous human-bot conversation pairs, providing context for the current query.
- Guardrails Settings: JSON object specifying guardrails or PII detection settings to control content filtering or compliance.
- Llm Args: JSON map of arguments sent to the LLM, such as temperature (controls randomness) and other model-specific parameters.
- Pre Prompt Query: Text prepended before contextual document chunks, if provided; used to guide the LLM's understanding.
- Prompt Query: Text appended after contextual document chunks, if provided; used to guide the LLM's response.
- System Prompt: Text sent as a system prompt to models that support it, giving overall context on how to respond. Use "auto" for the automatic default.
- Text Context List: List of raw text strings to be summarized or used as context for the query.
- Timeout: Timeout in seconds for the request execution.
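As a sketch, the properties above might be assembled into a request body like this before the node sends it (the key names mirror the property labels and are assumptions, not the node's exact internal field names):

```python
import json

# Hypothetical assembly of the node's "Additional Options"; key names
# are derived from the property labels above, not a confirmed schema.
options = {
    "llm_args": json.dumps({"temperature": 0.2, "max_new_tokens": 512}),
    "system_prompt": "auto",  # "auto" uses the model's default system prompt
    "pre_prompt_query": "Use only the context below to answer:",
    "prompt_query": "Answer concisely based on the context above.",
    "text_context_list": ["Q3 revenue grew 12% year over year."],
    "timeout": 120,  # seconds
}

request_body = {
    "model_name": "auto",  # "auto" lets the service choose a model
    "question": "How did revenue change in Q3?",
    **options,
}
```

Note that llm_args is itself a JSON string, matching the "JSON map" wording of the property description.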

Output

The node outputs the full HTTP response from the API call, including the LLM's answer to the question. The main output field is json, which contains the response data from the LLM service, typically the generated text or answer from the model.

Streaming or binary responses, where the LLM supports them, would be handled separately; this node primarily deals with textual responses.

Dependencies

  • Requires an API key credential for authentication with the LLM service.
  • The base URL for the API is configured via credentials and environment variables.
  • The node sends requests to the /models/{model_name}/answer_question endpoint of the API.
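Under those dependencies, the call the node makes could be sketched as follows. Only the endpoint path comes from the node itself; the base URL, auth header scheme, and body fields are illustrative assumptions:

```python
import json
import urllib.request

def build_answer_request(base_url: str, api_key: str,
                         model_name: str, question: str) -> urllib.request.Request:
    """Build (but do not send) a request to the answer_question endpoint.

    The /models/{model_name}/answer_question path is taken from this node;
    Bearer auth and the body shape are assumptions for illustration.
    """
    url = f"{base_url}/models/{model_name}/answer_question"
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical base URL and key, as configured via credentials.
req = build_answer_request("https://h2ogpte.example.com/api/v1",
                           "MY_KEY", "auto", "What is in the Q3 report?")
print(req.full_url)
```

In the node itself the base URL and key come from the configured credential, so this function stands in for what the node resolves at runtime.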

Troubleshooting

  • Common issues:

    • An invalid or missing API key credential will cause authentication errors.
    • Specifying an unsupported model name may result in errors or a fallback to the default model.
    • Improperly formatted JSON in the additional options (e.g., chat_conversation, llm_args) can cause request failures.
    • Timeout errors occur if the LLM takes too long to respond; adjust the Timeout property accordingly.
  • Error messages:

    • Authentication errors: Check that the API key credential is correctly set up.
    • Validation errors: Ensure required fields like model_name and question are provided.
    • Request timeout: Increase the timeout value or check network connectivity.

This summary is based solely on static analysis of the provided source code and property definitions without runtime execution.
