Overview
The "Structured Output Agent" node interacts with an AI language model (such as OpenAI's GPT models) to generate structured responses from a conversation history and user input. It can query the model iteratively, up to a configurable maximum number of iterations, which enables complex interactions that invoke external tools during the conversation.
Typical use cases include:
- Automating multi-turn conversations where the AI can call external tools or APIs dynamically.
- Generating structured data outputs from natural language prompts.
- Integrating AI-driven workflows that require iterative refinement or tool-assisted responses.
For example, you might use this node to build a chatbot that can query databases or perform calculations by invoking custom tools during the conversation, or to generate JSON-formatted data summaries from unstructured text inputs.
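The iterative query-and-tool-call cycle described above can be sketched as follows. All names here (callModel, tools, Message, and the "name:args" call format) are illustrative assumptions for the sketch, not the node's actual internals:

```typescript
// Minimal sketch of an iterative tool-calling loop like the one this node runs.
type Message = { role: string; type: string; content: string };

// Hypothetical tool registry: maps a tool name to an invocation function.
const tools: Record<string, (args: string) => string> = {
  add: (args) => {
    const [a, b] = args.split(",").map(Number);
    return String(a + b);
  },
};

// Mock model: first requests the "add" tool, then produces a final answer.
function callModel(history: Message[]): Message {
  const toolResult = history.find((m) => m.role === "tool");
  if (!toolResult) {
    return { role: "assistant", type: "tool_calls", content: "add:2,3" };
  }
  return { role: "assistant", type: "response", content: `Sum is ${toolResult.content}` };
}

function runAgent(systemPrompt: string, maxIterations: number) {
  const messages: Message[] = [{ role: "system", type: "response", content: systemPrompt }];
  let iterations = 0;
  let success = false;
  while (iterations < maxIterations) {
    iterations++;
    const reply = callModel(messages);
    messages.push(reply);
    if (reply.type === "tool_calls") {
      // Invoke the requested tool and feed its result back into the history.
      const [name, args] = reply.content.split(":");
      const result = name in tools ? tools[name](args) : `Unknown tool: ${name}`;
      messages.push({ role: "tool", type: "tool", content: result });
      continue;
    }
    success = true; // final response obtained within the iteration limit
    break;
  }
  return { messages, iterations, success };
}

const result = runAgent("You are a helpful assistant.", 5);
```

Each tool result is appended to the conversation and fed back to the model, so the loop terminates either when the model returns a plain response or when the iteration limit is reached.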
Properties
| Name | Meaning |
|---|---|
| System Prompt | The initial prompt setting the context or instructions for the AI model. |
| Max Iterations | The maximum number of interaction cycles (iterations) the node will perform with the AI model. |
| Model | The AI model used to generate responses. Options include various GPT and fine-tuned models (e.g., "gpt-4.1-mini"). |
| Custom File LLM Provider | A string to specify a custom Large Language Model provider when using proxy services for file-related API calls (e.g., LiteLLM proxy). |
Output
The node outputs a JSON object containing:
- messages: An array of message objects representing the conversation and tool call results. Each message includes:
  - role: The role of the message sender (e.g., assistant, tool).
  - type: The type of message (e.g., response, tool_calls, tool).
  - content: The content of the message, which may be structured data or tool call results.
  - Additional metadata such as tool call IDs and status.
- iterations: The number of iterations performed in the conversation loop.
- success: A boolean indicating whether the node successfully obtained a final response within the iteration limit.
- files: If any files were processed as part of the input, their processed representations are included here.
If the node processes binary data (e.g., files), it handles them via the integrated file processing utilities but does not output raw binary data directly; instead, it provides processed file metadata and content references.
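The output shape described above can be modeled as a TypeScript interface. The field names follow this page; the exact types and optional-metadata field names are assumptions:

```typescript
// Assumed shape of the node's JSON output, based on the field descriptions above.
interface AgentMessage {
  role: string;        // e.g., "assistant" or "tool"
  type: string;        // e.g., "response", "tool_calls", "tool"
  content: unknown;    // structured data or tool call results
  toolCallId?: string; // hypothetical metadata field names
  status?: string;
}

interface AgentOutput {
  messages: AgentMessage[];
  iterations: number; // conversation-loop cycles performed
  success: boolean;   // true if a final response arrived within the limit
  files?: unknown[];  // processed file representations, when present
}

// Example of a minimal successful run:
const example: AgentOutput = {
  messages: [{ role: "assistant", type: "response", content: "Done." }],
  iterations: 1,
  success: true,
};
```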
Dependencies
- Requires an API key credential for accessing the AI service endpoint (e.g., OpenAI API or compatible proxy).
- Supports dynamic loading of available AI models from the configured API endpoint.
- Optionally integrates with external tools provided via the "Tool" input connection, which must implement an invocation interface.
- Uses internal utility functions for building conversation history, processing files, and formatting messages.
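A connected tool's invocation interface might look like the sketch below. This exact interface is an assumption; the node only requires that each tool expose some invocation method, and failures are captured in the output rather than thrown (see Troubleshooting):

```typescript
// Hypothetical interface for a tool provided via the "Tool" input connection.
interface ConnectedTool {
  name: string;
  description: string;
  invoke(input: string): string;
}

const echoTool: ConnectedTool = {
  name: "echo",
  description: "Returns its input unchanged.",
  invoke: (input) => input,
};

const failingTool: ConnectedTool = {
  name: "boom",
  description: "Always throws.",
  invoke: () => {
    throw new Error("unreachable endpoint");
  },
};

// Mirrors the described behavior: tool errors are captured and returned
// in the node output instead of aborting the conversation loop.
function safeInvoke(tool: ConnectedTool, input: string): string {
  try {
    return tool.invoke(input);
  } catch (err) {
    return `Tool error: ${(err as Error).message}`;
  }
}
```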
Troubleshooting
- No system prompt provided: The node requires a non-empty system prompt to function. Ensure the "System Prompt" property is set.
- Maximum iterations reached without final response: Indicates the AI did not produce a conclusive answer within the allowed iterations. Consider increasing "Max Iterations" or refining the prompt.
- Tool invocation errors: If a tool call fails, the error message is captured and returned in the output. Verify that all connected tools are correctly implemented and accessible.
- Model not set or invalid: If no model is specified, the node defaults to "gpt-4.1-mini" but logs a warning. Ensure the selected model is valid and available.
- Credential or API errors: Check that the API key and base URL are correctly configured and have necessary permissions.
Links and References
- OpenAI API Documentation
- n8n Node Development Guide
- Zod Schema Validation
- LiteLLM Proxy (if using custom LLM providers)
This summary is based solely on static analysis of the provided source code and property definitions.