Actions
Overview
This node integrates with the Agenta service to manage prompt configurations and invoke large language model (LLM) calls. It supports two main operations:
- Fetch Prompt/Config: Retrieves prompt configuration data from Agenta based on environment and application identifiers.
- Invoke LLM: Sends text input to an LLM via Agenta and returns the generated completion.
Typical use cases include automating prompt management workflows, dynamically fetching prompt templates or configurations, and processing text inputs through an LLM for tasks like content generation, summarization, or conversational AI within n8n workflows.
For example, a user might fetch the latest prompt configuration for a specific application environment before invoking the LLM to generate responses tailored to that configuration.
Properties
| Name | Meaning |
|---|---|
| Operation | Choose "Fetch Prompt/Config" to retrieve prompt settings or "Invoke LLM" to run the language model. |
| Environment | Select the environment context: Development, Staging, or Production. |
| Application Slug | Identifier slug for the target application in Agenta. |
| Text Input | The text string to be processed by the LLM (required only for "Invoke LLM" operation). |
Additional options available when fetching the prompt config allow specifying versions and IDs for the environment and application.
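For orientation, the two operations' parameter sets might look like the following sketch. The field names mirror the Properties table above; the exact internal parameter keys used by the node are an assumption.

```typescript
// Illustrative parameter sets; key names are assumptions that mirror the
// Properties table, not the node's guaranteed internal identifiers.

// Fetch Prompt/Config
const fetchParams = {
  operation: 'fetchPromptConfig',
  environment: 'production', // Development | Staging | Production
  applicationSlug: 'my-agenta-app',
};

// Invoke LLM
const invokeParams = {
  operation: 'invokeLlm',
  environment: 'staging',
  applicationSlug: 'my-agenta-app',
  textInput: 'Summarize the following support ticket: ...',
};
```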
Output
The node outputs JSON objects containing the response from Agenta's API along with metadata about the operation performed:
- For Fetch Prompt/Config, the output includes the fetched prompt configuration details plus the selected environment and application slug.
- For Invoke LLM, the output contains the LLM's response to the provided text input, along with the environment, application slug, and original text input.
If an error occurs during execution, the output JSON will contain error details including message, error code, and timestamp.
The node does not output binary data.
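As a rough guide, output items might be shaped like the sketch below. Apart from the environment, application slug, and text input, the field names are illustrative assumptions rather than a guaranteed contract of Agenta's API.

```typescript
// Illustrative output shapes; `config`, `response`, and the error fields
// are naming assumptions, not a documented contract.

// Fetch Prompt/Config
const fetchOutput = {
  environment: 'production',
  applicationSlug: 'my-agenta-app',
  config: { /* prompt template, model, parameters as returned by Agenta */ },
};

// Invoke LLM
const invokeOutput = {
  environment: 'production',
  applicationSlug: 'my-agenta-app',
  textInput: 'Summarize the following support ticket: ...',
  response: '...generated completion...',
};

// Error case (when execution continues past a failed item)
const errorOutput = {
  error: {
    message: 'Text input is required for invoking LLM',
    code: 'MISSING_TEXT_INPUT', // illustrative error code
    timestamp: '2024-01-01T00:00:00.000Z',
  },
};
```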
Dependencies
- Requires an API key credential for authenticating with the Agenta service.
- The node makes HTTP POST requests to two Agenta endpoints (a request sketch follows this list):
  - `/api/api/variants/configs/fetch` for fetching prompt configs.
  - `/services/completion/run` for invoking the LLM.
- Proper configuration of the API base URL and credentials is necessary within n8n.
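A minimal sketch of the two underlying requests, assuming Node 18+ `fetch`, an API key sent as an `Authorization` header, and payload field names that are assumptions for illustration rather than confirmed Agenta API details:

```typescript
// Sketch of the two POST requests. The base URL, header name, and payload
// field names are assumptions, not a confirmed contract.
const baseUrl = 'https://cloud.agenta.ai'; // example; configured via the credential in practice
const apiKey = process.env.AGENTA_API_KEY ?? '';

async function fetchPromptConfig(applicationSlug: string, environment: string) {
  const res = await fetch(`${baseUrl}/api/api/variants/configs/fetch`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: apiKey },
    body: JSON.stringify({
      application_ref: { slug: applicationSlug }, // assumed payload shape
      environment_ref: { slug: environment },
    }),
  });
  if (!res.ok) throw new Error(`Fetch prompt config failed: HTTP ${res.status}`);
  return res.json();
}

async function invokeLlm(applicationSlug: string, environment: string, text: string) {
  const res = await fetch(`${baseUrl}/services/completion/run`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: apiKey },
    body: JSON.stringify({
      application_slug: applicationSlug, // assumed payload shape
      environment,
      inputs: { text },
    }),
  });
  if (!res.ok) throw new Error(`Invoke LLM failed: HTTP ${res.status}`);
  return res.json();
}
```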
Troubleshooting
- Missing Application Slug: Both operations require the application slug; omitting it will cause an error.
- Missing Text Input: The "Invoke LLM" operation requires non-empty text input.
- Authentication Errors: Ensure the API key credential is valid and has access to the Agenta service.
- Network Issues: Verify connectivity to the Agenta API endpoints.
- Error Handling: If `continueOnFail` is enabled, errors are returned as part of the output JSON; otherwise, execution stops with an error message indicating the failed item index (see the sketch below).
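The behavior above follows the usual n8n per-item pattern. The sketch below is an illustration under that assumption, not the node's actual source; `runOperation` is a hypothetical helper standing in for the fetch/invoke logic.

```typescript
import { NodeOperationError } from 'n8n-workflow';
import type { IDataObject, IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

// Hypothetical helper standing in for the node's fetch/invoke logic.
declare function runOperation(
  this: IExecuteFunctions,
  item: INodeExecutionData,
  index: number,
): Promise<IDataObject>;

// Sketch of per-item error handling inside execute().
async function processItems(this: IExecuteFunctions, items: INodeExecutionData[]) {
  const returnData: INodeExecutionData[] = [];
  for (let i = 0; i < items.length; i++) {
    try {
      const result = await runOperation.call(this, items[i], i);
      returnData.push({ json: result });
    } catch (error) {
      if (this.continueOnFail()) {
        // Error details become part of the item's JSON output instead of stopping the run.
        returnData.push({
          json: { error: { message: (error as Error).message, timestamp: new Date().toISOString() } },
        });
        continue;
      }
      // Otherwise execution stops, reporting the failed item index.
      throw new NodeOperationError(this.getNode(), error as Error, { itemIndex: i });
    }
  }
  return returnData;
}
```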
Common error messages:
"Application slug is required for fetching prompt config"— Provide a valid application slug."Text input is required for invoking LLM"— Provide text to process.- Authentication or network errors will typically come from the underlying HTTP request and should be checked accordingly.
Links and References
- Agenta Documentation (URL not provided here)
- n8n HTTP Request node documentation (background on authentication and HTTP calls): https://docs.n8n.io/nodes/n8n-nodes-base.httpRequest/