
h2oGPTe

h2oGPTe is an AI-powered search assistant that helps your internal teams answer questions drawn from large volumes of documents, websites, and workplace content.


Overview

The "Updates Collection Chat Settings" operation allows users to completely recreate or update the chat settings associated with a specific collection. This is useful for customizing how chat interactions behave within that collection, including options like whether to include chat history, which large language model (LLM) to use, and various advanced configurations such as retrieval-augmented generation (RAG) and self-reflection settings.

Typical scenarios where this node is beneficial include:

  • Adjusting chat behavior for a collection of documents to improve response relevance.
  • Switching between different LLMs or tuning their parameters for better performance.
  • Enabling or disabling chat history inclusion to control context retention.
  • Applying tags to influence the retrieval context in RAG setups.

For example, a user might update the chat settings of a product documentation collection to use a more powerful LLM with specific temperature settings and enable chat history to maintain conversational context.
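To make that example concrete, a settings payload for such a scenario might look roughly like the sketch below. The snake_case field names (include_chat_history, llm, llm_args) are assumptions inferred from the properties listed in the next section, and the model name is a placeholder, not a documented value.

```python
# Hypothetical chat-settings payload; field names and values are
# illustrative assumptions, not the authoritative API schema.
chat_settings = {
    "include_chat_history": True,       # keep prior Q&A as conversational context
    "llm": "gpt-4o",                    # placeholder for a "more powerful" model
    "llm_args": {
        "temperature": 0.2,             # lower temperature = less random responses
    },
}
```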

Properties

  • Collection ID: The unique identifier of the collection whose chat settings are to be updated.
  • Additional Options: A set of optional parameters to customize the chat settings:
    - Include Chat History: Whether to include previous questions and answers from the current chat session with each new chat request.
    - Llm: The name of the large language model to send queries to. Use "auto" for automatic model routing.
    - Llm Args: A JSON map of arguments sent to the LLM with the query. For example, temperature controls randomness in responses.
    - Rag Config: A JSON map controlling the retrieval-augmented generation (RAG) type and related parameters.
    - Self Reflection Config: A JSON map of self-reflection settings, such as the LLM used for reflection or prompt templates.
    - Tags: A list of tag strings used to pull context for RAG, influencing which documents or chunks are considered during retrieval.
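Because the JSON-map options are free-form, here is a hedged sketch of plausible values. The inner keys (for example rag_type and llm_reflection) are assumptions chosen for illustration; consult the h2oGPTe API reference for the keys it actually accepts.

```python
# Illustrative values for the JSON-map options. Inner keys such as
# "rag_type" and "llm_reflection" are assumptions, not a documented schema.
additional_options = {
    "llm_args": {"temperature": 0.1, "max_new_tokens": 512},
    "rag_config": {"rag_type": "rag"},                      # assumed key/value
    "self_reflection_config": {"llm_reflection": "auto"},   # assumed key
    "tags": ["product-docs", "release-notes"],              # plain list of tag strings
}
```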

Output

The node outputs the full HTTP response from the API call. The main output field is json, which contains the updated chat settings returned by the server.

No binary data output is involved in this operation.

Dependencies

  • Requires an API key credential for authentication to the external service managing collections and chat settings.
  • The base URL for API requests is configured via credentials and environment variables.
  • The node sends a PUT request to the endpoint /collections/{collection_id}/chat_settings with the specified body parameters.
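To make the request shape concrete, the sketch below reproduces the call with Python's requests library. The base URL, the Bearer-style Authorization header, and the body fields are assumptions for illustration; in the node, the URL and key come from the configured credential.

```python
import requests

# Placeholder values; the node reads the real base URL and API key
# from its configured credential and environment.
BASE_URL = "https://your-h2ogpte-host"         # assumed base URL
API_KEY = "your-api-key"                       # assumed credential
collection_id = "your-collection-id"           # target collection

response = requests.put(
    f"{BASE_URL}/collections/{collection_id}/chat_settings",
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    json={
        "include_chat_history": True,
        "llm": "auto",
        "llm_args": {"temperature": 0.2},
    },
    timeout=30,
)
response.raise_for_status()
updated_settings = response.json()             # updated chat settings from the server
```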

Troubleshooting

  • Missing or invalid Collection ID: The operation requires a valid collection ID. Ensure the ID is correct and the collection exists.
  • Authentication errors: Verify that the API key credential is correctly configured and has permissions to update collection chat settings.
  • Invalid JSON in additional options: Fields like llm_args, rag_config, and self_reflection_config expect valid JSON objects. Malformed JSON will cause errors (see the validation sketch after this list).
  • Unsupported LLM name or configuration: Using an unsupported LLM or incorrect parameters may result in API errors. Check available models and parameter formats.
  • Network or timeout issues: Large updates or slow network connections may cause timeouts. Adjust timeout settings if applicable.
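One way to catch the malformed-JSON case above before the request ever reaches the API is to parse each option locally first. The check below is a generic pre-flight sketch, not part of the node itself; the inner keys shown are illustrative only.

```python
import json

# Raw strings as they might be typed into the node's JSON-map options.
raw_options = {
    "llm_args": '{"temperature": 0.2}',
    "rag_config": '{"rag_type": "rag"}',
    "self_reflection_config": '{}',
}

parsed = {}
for name, raw in raw_options.items():
    try:
        parsed[name] = json.loads(raw)     # fails fast on malformed JSON
    except json.JSONDecodeError as err:
        raise ValueError(f"Option '{name}' is not valid JSON: {err}") from err
```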
