
Agenta

Agenta prompt management and LLM invocation

Overview

This node integrates with the Agenta service to manage prompt configurations and invoke large language model (LLM) calls. It supports two main operations:

  • Fetch Prompt/Config: Retrieves prompt configuration data from Agenta based on environment and application identifiers. This is useful for dynamically obtaining prompt templates or settings tailored to specific deployment environments or application versions.
  • Invoke LLM: Sends text input to an LLM via Agenta and receives generated completions or responses. This enables automated text processing, generation, or conversational AI workflows.

Typical use cases include:

  • Dynamically loading prompt configurations for different environments (development, staging, production) before running AI tasks.
  • Sending user input or other text data to an LLM for natural language understanding, content generation, or chatbot interactions.

Properties

  • Operation: Choose between "Fetch Prompt/Config" (retrieve prompt configurations) and "Invoke LLM" (execute LLM calls).
  • Environment: The environment context: Development, Staging, or Production.
  • Application Slug: The identifier slug for the target application in Agenta. Required for both operations.
  • Text Input: The text string to process with the LLM. Shown and required only when invoking the LLM.
  • Options: Optional parameters for fetching the prompt configuration:
    • Environment Version: A specific version of the environment.
    • Environment ID: The environment's ID.
    • Application Version: A specific version of the application.
    • Application ID: The application's ID.
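
As a concrete illustration, an "Invoke LLM" configuration might look like the sketch below. The parameter keys are assumptions chosen to mirror the property names above, not the node's guaranteed internal names.

```typescript
// Hypothetical parameter set for an "Invoke LLM" run. The keys
// (operation, environment, ...) are assumptions that mirror the
// properties above; check the node editor for the actual names.
const parameters = {
  operation: "invokeLlm",        // or "fetchPromptConfig"
  environment: "production",     // "development" | "staging" | "production"
  applicationSlug: "my-chatbot", // hypothetical application slug
  textInput: "Summarize this support ticket in one sentence: ...",
};
```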

Output

The node outputs JSON objects containing the response from Agenta along with metadata about the operation performed (a TypeScript sketch of both shapes follows the list):

  • For Fetch Prompt/Config, the output JSON includes the fetched prompt configuration data plus fields:

    • operation: "fetchPromptConfig"
    • environment: selected environment
    • applicationSlug: provided application slug
  • For Invoke LLM, the output JSON contains the LLM response data plus:

    • operation: "invokeLlm"
    • environment: selected environment
    • applicationSlug: provided application slug
    • textInput: the original input text sent to the LLM
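
A minimal sketch of both output shapes, assuming Agenta's response fields are merged into the same object. The index signature stands in for whatever Agenta returns; it is not a documented schema.

```typescript
// Illustrative output shapes; only the metadata fields listed above are
// guaranteed. The rest of the Agenta payload is represented by the
// index signature.
interface FetchPromptConfigOutput {
  operation: "fetchPromptConfig";
  environment: string; // e.g. "production"
  applicationSlug: string;
  [key: string]: unknown; // fetched prompt configuration data
}

interface InvokeLlmOutput {
  operation: "invokeLlm";
  environment: string;
  applicationSlug: string;
  textInput: string; // original input text sent to the LLM
  [key: string]: unknown; // LLM response data
}
```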

If an error occurs and the node is set to continue on fail, the output JSON contains error details including:

  • error: error message
  • error_code: error code or "unknown_error"
  • timestamp: ISO timestamp of the error occurrence
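
The matching error shape, as a sketch:

```typescript
// Error object emitted when "continue on fail" is enabled.
interface AgentaNodeError {
  error: string;      // human-readable error message
  error_code: string; // provider error code, or "unknown_error"
  timestamp: string;  // ISO 8601 timestamp, e.g. "2024-05-01T12:34:56.789Z"
}
```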

The node does not output binary data.

Dependencies

  • Requires an API key credential for authenticating with the Agenta service.
  • The node makes HTTP POST requests to two Agenta endpoints (see the request sketch after this list):
    • /api/api/variants/configs/fetch for fetching prompt configurations.
    • /services/completion/run for invoking the LLM.
  • The base URL for the Agenta API is taken from the credential.
  • Proper network access to Agenta endpoints is necessary.
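
For debugging outside n8n, the fetch operation reduces to a plain HTTP request along the lines below. The base URL, the Authorization scheme, and the body field names are assumptions for illustration, not a documented contract.

```typescript
// Sketch of the underlying "Fetch Prompt/Config" request. The base URL
// and API key normally come from the node's credential; the body field
// names below are assumptions, not a documented contract.
async function fetchPromptConfig(baseUrl: string, apiKey: string) {
  const response = await fetch(`${baseUrl}/api/api/variants/configs/fetch`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify({
      environment_ref: { slug: "production" }, // assumed field name
      application_ref: { slug: "my-chatbot" }, // assumed field name
    }),
  });
  if (!response.ok) {
    throw new Error(`Agenta request failed: ${response.status}`);
  }
  return response.json();
}
```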

Troubleshooting

  • Missing Application Slug: Both operations require a non-empty application slug. If omitted, the node throws an error indicating this requirement.
  • Missing Text Input: When invoking the LLM, text input must be provided; otherwise, an error is thrown.
  • Authentication Errors: Invalid or missing API credentials will cause HTTP request failures.
  • Network Issues: Connectivity problems to Agenta endpoints will result in request errors.
  • Error Handling: If the node is configured to stop on failure, any error during execution halts the workflow with a descriptive message. Enabling "continue on fail" lets the workflow proceed while returning error details in the output; a downstream handling sketch follows this list.
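
With "continue on fail" enabled, a downstream Code node can separate failed items from successes by checking for the error field documented above. A minimal sketch ($input.all() is the standard n8n Code node helper):

```typescript
// Downstream n8n Code node: fail loudly if any Agenta item errored.
const items = $input.all();
const failed = items.filter((item) => item.json.error !== undefined);

if (failed.length > 0) {
  const first = failed[0].json;
  // error_code is "unknown_error" when Agenta gave no specific code.
  throw new Error(`Agenta call failed (${first.error_code}): ${first.error}`);
}

return items;
```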
