h2oGPTe

h2oGPTe is an AI-powered search assistant that lets your internal teams answer questions drawn from large volumes of documents, websites, and workplace content.


Overview

The "Summarize One or More Contexts Using an LLM" operation generates summaries from one or more pieces of text using a large language model (LLM). It takes multiple raw text strings as input and produces a concise summary that captures the essential information. This is useful when you have large volumes of textual data, such as documents, articles, or chat logs, and want to quickly extract key points or create digestible summaries.

Practical examples include:

  • Summarizing customer feedback collected from multiple sources.
  • Creating executive summaries from lengthy reports.
  • Condensing multiple related documents into a brief overview for quick understanding.

Properties

  • Model Name: The name of the LLM to use for summarization. Use "auto" if you do not want to specify a particular model.
  • Additional Options: A collection of optional parameters to customize the summarization request:
      - Guardrails Settings: JSON object specifying guardrails configurations to control content filtering or PII handling during summarization.
      - Llm Args: JSON map of arguments sent to the LLM with the query, e.g., temperature to modulate randomness in output generation.
      - Pre Prompt Summary: Text prepended before the list of texts to summarize; can be used to provide context or instructions to the model.
      - Prompt Summary: Text appended after the list of texts; often used to guide the model's response style or focus.
      - System Prompt: Text sent to models that support system prompts to give overall context on how to respond. Use "auto" for automatic selection.
      - Text Context List: The list of raw text strings to be summarized, provided as a single string (likely JSON or concatenated).
      - Timeout: Timeout in seconds for the summarization request; 0 means no timeout.
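As a concrete illustration, the options above might map onto a request payload like the following. The snake_case field names mirror the property labels but are assumptions about the wire format, not values confirmed by this node's documentation:

```python
# Hypothetical payload for the summarize operation. Field names are
# guesses derived from the property labels above; verify them against
# your h2oGPTe API before relying on them.
payload = {
    "llm": "auto",                      # Model Name; "auto" lets the service pick
    "llm_args": {"temperature": 0.2},   # Llm Args: lower temperature = less random
    "pre_prompt_summary": "Summarize the following customer feedback:",
    "prompt_summary": "Keep the summary under 100 words.",
    "system_prompt": "auto",
    "text_context_list": [
        "First piece of raw text to summarize.",
        "Second piece of raw text to summarize.",
    ],
    "timeout": 120,                     # seconds; 0 would mean no timeout
}
```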

Output

The node outputs a JSON object containing the summarization result returned by the LLM API. The exact structure depends on the external service but typically includes fields such as the generated summary text and possibly metadata about the summarization process.

The node does not indicate support for binary data output; this operation focuses on textual summarization.
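Because the response shape varies by provider, downstream code should read the summary field defensively rather than assuming a fixed key. A minimal sketch, where the candidate field names ("content", "summary", "text") are plausible guesses, not the confirmed schema:

```python
def extract_summary(result: dict) -> str:
    """Pull the summary text out of an LLM response dict.

    The key names checked here are assumptions; inspect your provider's
    actual response and adjust accordingly.
    """
    for key in ("content", "summary", "text"):
        value = result.get(key)
        if isinstance(value, str):
            return value
    raise KeyError("no summary field found in LLM response")
```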

Dependencies

  • Requires connection to an external AI service providing LLM capabilities.
  • Needs an API key credential configured in n8n to authenticate requests to the LLM service.
  • The base URL for the API is derived from user credentials and must be correctly set.
  • The node sends HTTP POST requests to the endpoint /models/{model_name}/summarize_content.
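Putting the dependency details together, the request the node issues can be sketched roughly as below. The endpoint path comes from this documentation; the Bearer-token header and JSON body shape are assumptions, since the actual authentication scheme depends on how the credential is configured:

```python
import json
import urllib.request


def build_summarize_request(base_url: str, api_key: str,
                            model_name: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request for the summarize endpoint.

    Only the /models/{model_name}/summarize_content path is documented;
    the Authorization header format is an assumption.
    """
    url = f"{base_url.rstrip('/')}/models/{model_name}/summarize_content"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (e.g., via `urllib.request.urlopen(req, timeout=...)`) is left out so the sketch stays focused on the documented URL shape.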

Troubleshooting

  • Timeouts: If the summarization takes too long, increase the timeout property or check network connectivity.
  • Invalid Model Name: Specifying a non-existent or unsupported model name may cause errors; use "auto" to let the system select a default.
  • Malformed Text Context List: Ensure the text_context_list is properly formatted and contains valid text strings.
  • Guardrails Misconfiguration: Incorrect guardrails settings might cause the request to fail or filter out too much content; verify JSON syntax and values.
  • API Authentication Errors: Confirm that the API key credential is valid and has necessary permissions.
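For the transient failures above (especially timeouts), a generic retry-with-backoff wrapper is a common mitigation. This is a standalone pattern, not part of the node itself:

```python
import time


def with_retries(fn, attempts: int = 3, backoff: float = 2.0):
    """Call fn(), retrying on TimeoutError with exponential backoff.

    Waits backoff * 2**i seconds after attempt i; re-raises after the
    final attempt. Purely illustrative; adapt the exception type to
    whatever your HTTP client raises on timeout.
    """
    for i in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (2 ** i))
```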

Links and References

  • Refer to your LLM provider's API documentation for details on supported models and parameters.
  • For guardrails configuration, consult the service's guidelines on content filtering and PII handling.
  • n8n documentation on setting up API credentials and HTTP request nodes may help with configuration.
