Handit Prompt Fetcher

Fetch production prompts from the Handit API and make them available as variables for LLM nodes

Overview

The Handit Prompt Fetcher node fetches production prompts from the Handit API for a specified agent. It retrieves prompt data for use by large language model (LLM) nodes in an n8n workflow, exposing the prompts as individual variables, as a JSON object, or both.

This node is useful when you want to centralize prompt templates or configurations and load them dynamically for different LLMs based on an agent identifier. For example, if you have multiple AI agents, each with its own set of prompts, this node lets you fetch those prompts at runtime and use them downstream in your workflow, as sketched after the examples below.

Practical examples:

  • Automatically loading updated prompt templates for GPT-4 or Claude before running LLM nodes.
  • Fetching environment-specific prompts (production, staging, development) to test different prompt versions.
  • Caching prompts to reduce repeated API calls while developing workflows.
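
As a rough illustration of downstream usage, a Code node placed after this node could pick out a fetched prompt before handing it to an LLM node. This is a minimal sketch assuming the "Both" output format and a prompt_ prefix; the key gpt4 is an invented LLM node name:

```typescript
// Hypothetical n8n Code node ("Run Once for All Items") reading this node's output.
// "gpt4" is an assumed LLM node name; adjust it to the names your agent actually uses.
const item = $input.first().json;

// The JSON Object format exposes prompts under `prompts`; the Individual Variables
// format exposes them as top-level fields (here with the "prompt_" prefix applied).
const gpt4Prompt = item.prompts?.gpt4 ?? item.prompt_gpt4;

if (!gpt4Prompt) {
  throw new Error(`No GPT-4 prompt found for agent ${item.agentSlug}`);
}

return [{ json: { prompt: gpt4Prompt } }];
```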

Properties

  • Agent Name/Slug (required): The unique name or slug identifier of the agent whose prompts you want to fetch.
  • Output Format: How to format the output prompts:
    - Individual Variables: Creates separate variables for each prompt, keyed by LLM node names (e.g., $gpt4_prompt, $claude_prompt).
    - JSON Object: Returns all prompts as a single JSON object.
    - Both: Provides both individual variables and a JSON object.
  • Variable Prefix: Optional prefix added to variable names when using the "Individual Variables" or "Both" output formats. For example, the prefix prompt_ results in variables like $prompt_gpt4.
  • Additional Options: Collection of optional settings:
    - Environment: Which environment to fetch prompts from (production, staging, or development). Default is production.
    - Version: A particular version of prompts to fetch; defaults to latest.
    - Cache Duration (Minutes): How long to cache fetched prompts to avoid repeated API calls; default is 5 minutes. See the cache sketch after this list.
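
The caching behavior can be pictured as a simple time-based (TTL) cache keyed by agent, environment, and version. The sketch below only illustrates the described behavior; it makes no claims about the node's actual internals:

```typescript
// Illustrative TTL cache for fetched prompts, keyed by agent/environment/version.
// The node's real cache implementation may differ.
type CacheEntry = { prompts: Record<string, string>; fetchedAt: number };

const cache = new Map<string, CacheEntry>();

function getCachedPrompts(
  key: string,                  // e.g. `${agentSlug}:${environment}:${version}`
  cacheDurationMinutes: number  // the "Cache Duration (Minutes)" option, default 5
): Record<string, string> | undefined {
  const entry = cache.get(key);
  if (!entry) return undefined;

  const ageMs = Date.now() - entry.fetchedAt;
  if (ageMs > cacheDurationMinutes * 60_000) {
    cache.delete(key);          // entry expired: the next call must hit the API again
    return undefined;
  }
  return entry.prompts;
}
```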

Output

The node outputs JSON data structured as follows (an illustrative item is sketched after the list):

  • agentSlug: The agent identifier used for fetching.
  • fetchedAt: ISO timestamp when prompts were fetched.
  • prompts: (if output format includes JSON) An object containing all fetched prompts keyed by LLM node names.
  • Individual variables: (if output format includes variables) Each prompt is exposed as a separate variable named after the LLM node, optionally prefixed, e.g., $prompt_gpt4 or $gpt4.
  • For each prompt variable, there is also a duplicate variable with a _prompt suffix, e.g., $gpt4_prompt.
  • promptMetadata: Metadata about the fetch operation including:
    • agentSlug
    • environment
    • version
    • fetchedAt
    • totalPrompts: Number of prompts fetched
    • availableNodes: List of prompt keys/names available
  • success: Boolean indicating if the fetch was successful.
  • message: A success message summarizing the fetch result.
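
For orientation, a successful item might look roughly like the following, assuming the "Both" output format and a prompt_ prefix. All slugs, prompt keys, and values are invented placeholders:

```typescript
// Illustrative output item only; keys such as "gpt4" and "claude" depend entirely
// on how the agent's LLM nodes are named in Handit, and the message wording may differ.
const exampleOutput = {
  agentSlug: 'customer-support-agent',
  fetchedAt: '2024-01-01T12:00:00.000Z',
  prompts: { gpt4: 'You are a helpful assistant...', claude: 'You are a careful reviewer...' },
  prompt_gpt4: 'You are a helpful assistant...',  // individual variable with prefix
  gpt4_prompt: 'You are a helpful assistant...',  // duplicate with the _prompt suffix
  promptMetadata: {
    agentSlug: 'customer-support-agent',
    environment: 'production',
    version: 'latest',
    fetchedAt: '2024-01-01T12:00:00.000Z',
    totalPrompts: 2,
    availableNodes: ['gpt4', 'claude'],
  },
  success: true,
  message: 'Fetched 2 prompts for customer-support-agent',
};
```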

If an error occurs, the output JSON contains:

  • success: false
  • error: Error message string
  • timestamp: When the error occurred
  • agentSlug: The agent requested
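
An error item, by contrast, would look something like this (values are placeholders; the error message is one of the examples from the Troubleshooting section below):

```typescript
// Illustrative error item only.
const exampleError = {
  success: false,
  error: 'Invalid response from API: expected JSON object',
  timestamp: '2024-01-01T12:00:00.000Z',
  agentSlug: 'customer-support-agent',
};
```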

The node does not output binary data.

Dependencies

  • Requires an API key credential for authenticating with the Handit API.
  • Makes HTTP GET requests to the Handit API endpoint:
    https://handit-api-oss-299768392189.us-central1.run.app/api/performance
  • Supports specifying environment and version parameters to control which prompts are fetched (see the request sketch after this list).
  • Uses internal caching controlled by the "Cache Duration" property to reduce API calls.
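
The request itself can be pictured roughly as below. The query parameter names (agentSlug, environment, version) and the bearer-token Authorization header are assumptions made for illustration, not the documented Handit API contract; in practice the node builds the request for you from its credential and properties:

```typescript
// Rough sketch of the GET request this node performs; parameter and header names are assumed.
const HANDIT_ENDPOINT =
  'https://handit-api-oss-299768392189.us-central1.run.app/api/performance';

async function fetchAgentPrompts(
  agentSlug: string,
  apiKey: string,
  environment = 'production',
  version = 'latest'
): Promise<Record<string, unknown>> {
  const params = new URLSearchParams({ agentSlug, environment, version });
  const response = await fetch(`${HANDIT_ENDPOINT}?${params}`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
  });

  if (!response.ok) {
    throw new Error(`Handit API request failed with status ${response.status}`);
  }

  const data = await response.json();
  if (typeof data !== 'object' || data === null) {
    throw new Error('Invalid response from API: expected JSON object');
  }
  return data as Record<string, unknown>;
}
```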

Troubleshooting

  • Invalid response from API: expected JSON object
    This error indicates the API did not return a valid JSON object. Check network connectivity, API availability, and ensure the provided agent slug is correct.

  • Authentication errors
    If the API key credential is missing or invalid, the request will fail. Verify the API key is correctly configured in n8n credentials.

  • No prompts returned
    If the agent slug is incorrect or no prompts exist for the specified environment/version, the node may return an empty prompts object. Confirm the agent slug and environment/version settings.

  • Caching issues
    If you update prompts but still see old data, the node may be serving a cached copy. Lower the cache duration so entries expire sooner, or wait for the current cache window to elapse, then re-run the node to fetch fresh prompts.
