Overview
The Streaming AI Agent node runs an AI agent that can call tools and perform multi-step reasoning, with real-time streaming via the AG-UI protocol. It integrates with a connected AI language model (chat model) node and flexibly orchestrates tool usage, memory management, and prompt handling.
This node is particularly useful for scenarios where you want an AI assistant to:
- Use external tools (e.g., calculators, APIs) dynamically during conversations.
- Maintain conversational context with configurable memory strategies.
- Stream detailed execution events in real time to an external system via webhook.
- Control agent behavior through system prompts and advanced options like temperature, max iterations, and timeout.
Practical examples:
- Building a customer support chatbot that calls external APIs to fetch data while responding.
- Creating an AI assistant that performs calculations or data lookups using custom tools.
- Monitoring AI agent execution live by streaming events to a dashboard or logging service.
- Experimenting with different AI providers/models and memory configurations for complex workflows.
Properties
| Name | Meaning |
|---|---|
| Streaming AI Agent with AG-UI Protocol | Informational notice about the node's streaming capabilities. |
| AI Model Configuration | Section header for AI provider and model selection. |
| AI Provider | Selects the AI provider for the agent. Options: Connected Model (a connected AI Language Model node), OpenAI (GPT-4, GPT-3.5), Anthropic (Claude models), Google (Gemini models). |
| Model | Selects the specific AI model depending on the chosen provider. For OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo; for Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku; for Google: Gemini Pro, Gemini Pro Vision. |
| System Prompt | Defines the agent’s behavior, role, and instructions as a text prompt. Default: "You are a helpful AI assistant...". |
| Tools | List of tools available to the agent. Each tool has a name, description, and type (HTTP or Function). Only shown when using OpenAI, Anthropic, or Google providers. |
| Prompt Type | How to provide the prompt to the agent: Auto (use text from previous node automatically) or Define (manually enter prompt text). |
| Text | The manual prompt text if "Define" prompt type is selected. |
| Max Iterations | Maximum number of reasoning or action iterations the agent can perform (1–20). Default: 5. |
| Timeout (seconds) | Maximum time in seconds to wait for the agent execution before aborting. Range: 5–300. Default: 30. |
| Temperature | Controls randomness in AI output. Range: 0 (deterministic) to 2 (very random). Default: 0.7. |
| Memory Type | Conversation memory strategy: Buffer (keep recent messages), Summary (summarize history), None (no memory). Default: Buffer. |
| Enable Streaming | Enables real-time streaming of AI agent operations via AG-UI protocol. Default: false. |
| Streaming Level | Level of detail for streaming events: Minimal (only essential events), Standard (core events), Verbose (all events). Default: Standard. |
| Webhook URL | URL to send AG-UI streaming events to. Required if streaming is enabled. |
| Events to Stream | Types of events to stream: Run Started/Finished, Tool Calls, Text Messages, Steps, State Changes. Default: Run, Tools, Text. |
| Authentication | Authentication method for webhook requests: None, Bearer Token, Basic Auth, Custom Headers. Default: None. |
| Bearer Token | Token for Bearer authentication (if selected). |
| Username | Username for Basic authentication (if selected). |
| Password | Password for Basic authentication (if selected). |
| Custom Headers | Custom headers for webhook authentication (if selected). |
| Batch Size | Number of events to batch together before sending to webhook. Default: 10. |
| Batch Timeout (ms) | Maximum time in milliseconds to wait before sending a partial batch. Default: 1000 ms. |
| Retry Attempts | Number of retry attempts for failed webhook deliveries. Default: 3. |
| Retry Delay (ms) | Base delay in milliseconds between retry attempts (exponential backoff). Default: 1000 ms. |
| Test Webhook Connection | Button to test connectivity and authentication of the configured webhook URL. |
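The batching and retry settings above interact as follows: events accumulate until either Batch Size is reached or Batch Timeout elapses, and failed webhook deliveries are retried with exponentially growing delays. A minimal sketch of that scheduling logic, assuming the base delay doubles on each attempt (`shouldFlush` and `backoffDelays` are hypothetical helpers, not the node's API):

```typescript
// Sketch of the batching/retry scheduling implied by the settings above.
// shouldFlush and backoffDelays are hypothetical names; the node's internal
// implementation may differ.

// Flush a batch when it reaches batchSize, or when batchTimeoutMs has
// elapsed since the first event was queued.
function shouldFlush(
  queued: number,
  msSinceFirstEvent: number,
  batchSize: number,
  batchTimeoutMs: number,
): boolean {
  return queued >= batchSize || (queued > 0 && msSinceFirstEvent >= batchTimeoutMs);
}

// Delays between webhook retry attempts, assuming the base delay doubles
// on each attempt (exponential backoff).
function backoffDelays(baseDelayMs: number, retryAttempts: number): number[] {
  return Array.from({ length: retryAttempts }, (_, i) => baseDelayMs * 2 ** i);
}
```

With the defaults (batch size 10, batch timeout 1000 ms, 3 retries at 1000 ms base delay), a partial batch of 4 events flushes after one second, and a failed delivery would be retried after roughly 1, 2, and 4 seconds.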
Output
The node outputs a JSON object per input item containing:
- `output`: the final output generated by the AI agent after execution.
- `input`: the prompt text provided to the agent.
- `success`: boolean indicating whether the agent execution was successful.
- `executionTime`: duration of the agent execution in milliseconds.
- `tools`: array of tools used by the agent, with their names and descriptions.
- `intermediateSteps`: details of intermediate reasoning or tool-call steps performed by the agent.
- `timestamp`: ISO timestamp of when the execution finished.
- `phase`: static string indicating the node version phase ("Phase 2.3 - Node Configuration Interface").
- `memory`: indicates whether memory is connected.
- `retryCount`: number of retries attempted during execution.
- `configuration`: object summarizing key configuration parameters used (provider, model, system prompt, memory type, temperature, max iterations, timeout).
- `error` (optional): error message if execution failed.
- `streaming` (optional): present when streaming is enabled, containing:
  - `enabled`: true
  - `webhookUrl`: URL used for streaming
  - `eventsToStream`: list of event types streamed
  - `eventsGenerated`: number of events generated internally
  - `aguiEventsGenerated`: number of AG-UI protocol events generated
  - `status`: status message about streaming delivery
  - `threadId` and `runId`: identifiers for the streaming session
  - `streamingConfigured`: boolean indicating whether the streaming configuration is valid
Binary Data: The node does not output binary data.
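For reference, the per-item output described above can be modeled roughly as the following TypeScript shape. This is illustrative only: the field types are inferred from the descriptions, not taken from the node's source, and the example values are fabricated.

```typescript
// Illustrative shape of the node's per-item JSON output.
// Types are inferred from the field descriptions above, not the node source.
interface StreamingAgentOutput {
  output: string;
  input: string;
  success: boolean;
  executionTime: number; // milliseconds
  tools: { name: string; description: string }[];
  intermediateSteps: unknown[];
  timestamp: string; // ISO 8601
  phase: string;
  memory: string | boolean; // whether memory is connected
  retryCount: number;
  configuration: Record<string, unknown>;
  error?: string;
  streaming?: {
    enabled: true;
    webhookUrl: string;
    eventsToStream: string[];
    eventsGenerated: number;
    aguiEventsGenerated: number;
    status: string;
    threadId: string;
    runId: string;
    streamingConfigured: boolean;
  };
}

// Example item (fabricated values, for shape only):
const example: StreamingAgentOutput = {
  output: "The answer is 42.",
  input: "What is 6 * 7?",
  success: true,
  executionTime: 1234,
  tools: [{ name: "calculator", description: "Performs arithmetic" }],
  intermediateSteps: [],
  timestamp: "2024-01-01T00:00:00.000Z",
  phase: "Phase 2.3 - Node Configuration Interface",
  memory: "connected",
  retryCount: 0,
  configuration: { provider: "connected", temperature: 0.7 },
};
```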
Dependencies
- Requires a connected AI Language Model node when using the "Connected Model" AI provider option. This connected node must support chat and tool calling.
- OpenAI, Anthropic, and Google AI models are currently supported only through the connected-model approach; direct API calls to these providers are not implemented.
- Optional webhook URL for streaming AG-UI events in real time.
- If streaming is enabled, requires proper webhook URL and optional authentication credentials (Bearer token, Basic auth, or custom headers).
- Uses internal utility modules for agent execution, configuration validation, and webhook testing.
- Requires appropriate API credentials configured in n8n for the connected AI model node or the selected AI provider.
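The webhook authentication options map to standard HTTP headers. A minimal sketch of that mapping, assuming conventional `Authorization` header formats (`authHeaders` is a hypothetical helper; the node builds these headers internally):

```typescript
// Sketch mapping the node's webhook authentication options to HTTP headers.
// authHeaders is a hypothetical helper, not part of the node's API.
type WebhookAuth =
  | { type: "none" }
  | { type: "bearer"; token: string }
  | { type: "basic"; username: string; password: string }
  | { type: "custom"; headers: Record<string, string> };

function authHeaders(auth: WebhookAuth): Record<string, string> {
  switch (auth.type) {
    case "bearer":
      return { Authorization: `Bearer ${auth.token}` };
    case "basic": {
      // Basic auth: base64-encode "username:password" (RFC 7617).
      const encoded = Buffer.from(`${auth.username}:${auth.password}`).toString("base64");
      return { Authorization: `Basic ${encoded}` };
    }
    case "custom":
      return auth.headers;
    default:
      return {};
  }
}
```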
Troubleshooting
Error: "At least one tool must be connected for agent functionality"
Occurs when no tools are connected to the node. Ensure at least one tool node is connected to the "ai_tool" input.
Error: "AI Language Model must be connected when using 'Connected Model' option."
Occurs when the AI provider is set to "connected" but no AI language model node is connected. Connect a compatible chat model node.
Error: "Connected model must be a Chat Model that supports tool calling."
The connected AI model node does not implement the interfaces required for tool usage. Use a supported chat model node.
Error: "The 'text' parameter is empty."
No prompt text was found or provided. Provide input text either from the previous node (Auto mode) or define it manually.
Webhook streaming issues:
- If webhook URL is missing or invalid, streaming will fail.
- Authentication misconfiguration may cause webhook delivery failures. Use the "Test Webhook Connection" button to verify connectivity and credentials.
- Network issues or firewall restrictions can block webhook calls.
Agent configuration errors:
Validation errors on max iterations, timeout, or other parameters will prevent execution. Review the error messages for details.
Timeouts:
If the agent takes longer than the configured timeout, execution is aborted. Increase the timeout if needed.
Links and References
- AG-UI Protocol Documentation (for streaming event format and usage)
- OpenAI Models
- Anthropic Claude Models
- Google Gemini Models
- n8n Documentation on Creating Custom Nodes
- n8n Community Forum for troubleshooting and tips