## Overview
The Round Robin node manages and orchestrates multi-participant conversations, especially in LLM (Large Language Model) workflows such as AI chatbots, role-based simulations, and collaborative dialogue systems. It provides a robust mechanism for storing, retrieving, running, and managing conversation turns between multiple roles (e.g., User, Assistant, System), with support for round tracking, memory recall, and flexible output formatting.
Common scenarios:
- Building conversational loops where each participant (role) takes turns in a structured manner.
- Managing persistent conversation state across workflow executions.
- Preparing conversation history in formats compatible with various LLM providers (OpenAI, Anthropic, Google Gemini, etc.).
- Supporting advanced use cases like multi-agent simulations, customer support bots, or interview roleplays.
Practical examples:
- Simulating a ChatGPT-like experience with User, Assistant, and System roles.
- Running a meeting simulation with multiple participants, each contributing messages in turn.
- Storing and recalling conversation memory for debugging or analysis.
## Properties
Below are the main input properties relevant to the "Default" Resource and Operation:
| Display Name | Type | Description |
|---|---|---|
| Operation Mode | options | Selects the operation: Setup Conversation, Run Conversation, Store Message, Retrieve Conversation, Recall Memory, Clear Conversation. |
| Binary Input Property | string | Name of the binary property containing the conversation data. Must match previous Round Robin nodes. |
| Conversation ID | string | Optional identifier to maintain separate conversations within the same workflow. |
| Number of Participants | number | How many different roles/participants are in this conversation. |
| Maximum Conversation Rounds | number | Limit on the number of full conversation loops (0 = unlimited). |
| Participant Roles | fixedCollection | Defines the roles for each participant (name, description, color, system prompt, enabled status). |
| Current Participant | number | Index (0-based) of the participant sending the message. |
| Message Content Field | string | Name of the input field containing the message to store/include. |
| Current Role Index | number | Index of the current role taking a turn (for run mode). |
| Auto Advance Turn | boolean | Whether to automatically advance to the next participant after storing. |
| Track Rounds | boolean | Whether to track and enforce round limits. |
| Continue Until Complete | boolean | Whether to continue until max rounds is reached. |
| Output Format | options | Structure of the output: Conversation History for LLM, Message Array, Grouped by Role. |
| LLM Platform | options | Which AI model provider the conversation will be sent to (OpenAI, Anthropic, Google, Generic). |
| Include System Instructions | boolean | Whether to include system role messages in the conversation history. |
| System Instructions Position | options | Where to position system instructions (start or end of conversation). |
| Simplified Output | boolean | Whether to provide clean output with just essential fields. |
| Maximum Messages | number | Maximum number of recent messages to include (0 = all). |
| Auto-Detect Input Format | boolean | Whether to detect and use formats from the input data automatically. |
| Conversation Title | string | Title for this conversation. |
| Conversation Context | string | Initial context or instructions for the conversation. |
| Talking Points | string | List of topics to be discussed (one per line). |
| Binary Output Property | string | Name of the binary property where conversation data will be stored. |
| Define Roles | boolean | Whether to define custom roles during setup. |
| Use Roles from Input | boolean | Whether to use role definitions from input data. |
| Input Roles Path | string | Property path to roles array in the input data. |
| Roles | fixedCollection | Custom definition of participants in the conversation. |
| Include Messages | boolean | Whether to include message history in the recall output. |
| Include Roles | boolean | Whether to include role definitions in the recall output. |
| Include Metadata | boolean | Whether to include conversation metadata in the recall output. |
| Conversation Memory | resourceLocator | Choose which conversation memory to recall. |
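The interaction between "Auto Advance Turn", "Track Rounds", and "Maximum Conversation Rounds" can be sketched as follows. This is an illustrative model of the round-robin behavior described above, not the node's actual internals; the function and field names are hypothetical:

```typescript
// Sketch of round-robin turn advancement; names are illustrative,
// not the node's actual implementation.
interface TurnState {
  currentRoleIndex: number; // 0-based index of the participant whose turn it is
  roundCount: number;       // completed full loops through all participants
}

function advanceTurn(state: TurnState, participantCount: number): TurnState {
  const nextIndex = (state.currentRoleIndex + 1) % participantCount;
  // A round completes when the turn wraps back to the first participant.
  const roundCount = nextIndex === 0 ? state.roundCount + 1 : state.roundCount;
  return { currentRoleIndex: nextIndex, roundCount };
}

function isLimitReached(state: TurnState, maxRounds: number): boolean {
  // maxRounds = 0 means unlimited, matching the property table above.
  return maxRounds > 0 && state.roundCount >= maxRounds;
}
```

With three participants, advancing from index 2 wraps back to index 0 and increments the round count, which is how "Track Rounds" can detect a completed loop.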
## Output
The structure of the `json` output field depends on the selected operation and output format:
1. Setup Conversation
- Outputs an object for each participant (if roles are defined), or a generic object if not.
- Example:
```json
{
  "status": "success",
  "message": "Conversation initialized for Assistant",
  "operation": "setup",
  "storageId": "my-support-chat",
  "conversation_id": "my-support-chat",
  "RR_title": "Support Chat",
  "conversation_context": "...",
  "talking_points": ["..."],
  "timestamp": "2024-06-01T12:00:00Z",
  "number_of_participants": 3,
  "total_rounds": 5,
  "roleCount": 3,
  "currentRole": "Assistant",
  "currentRoleIndex": 1,
  "roleInfo": { /* role details */ },
  "binaryProperty": "data",
  "isCurrentTurn": false,
  "participant_id": ""
}
```
2. Store Message
- Outputs updated conversation state, including round count, max rounds, message count, and last updated timestamp.
- Example:
```json
{
  "roundCount": 2,
  "maxRounds": 5,
  "roundsRemaining": 3,
  "messageCount": 6,
  "lastUpdated": "2024-06-01T12:05:00Z",
  "binaryProperty": "data"
}
```
- If the round limit is reached:
```json
{
  "status": "limit_reached",
  "message": "Maximum conversation rounds (5) reached. Consider clearing the conversation to start over.",
  "roundCount": 5,
  "maxRounds": 5,
  "messageCount": 15
}
```
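The counters in these two examples fit together in a simple way. A minimal sketch, assuming `roundsRemaining` is just the difference shown in the output above (the helper name is hypothetical):

```typescript
// Hypothetical helper reproducing the counter relationships in the
// example output above; not the node's actual implementation.
function storeMessageSummary(
  roundCount: number,
  maxRounds: number,
  messageCount: number
): { status?: string; roundCount: number; maxRounds: number; roundsRemaining?: number; messageCount: number } {
  if (maxRounds > 0 && roundCount >= maxRounds) {
    // Mirrors the "limit_reached" example above.
    return { status: "limit_reached", roundCount, maxRounds, messageCount };
  }
  return { roundCount, maxRounds, roundsRemaining: maxRounds - roundCount, messageCount };
}
```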
3. Run Conversation
- Outputs the result of processing a turn, indicating whether the round or turn is complete, and information about the next role.
- Example:
```json
{
  "status": "turn_complete",
  "operation": "run",
  "title": "Support Chat",
  "conversationId": "my-support-chat",
  "currentRoundCount": 2,
  "maxRounds": 5,
  "nextRole": "User",
  "nextRoleIndex": 0,
  "messageCount": 7,
  "isComplete": false,
  "isRoundComplete": false,
  "lastUpdated": "2024-06-01T12:10:00Z"
}
```
- If the maximum number of rounds is reached:
```json
{
  "status": "complete",
  "message": "Conversation has reached the maximum number of rounds (5).",
  "title": "Support Chat",
  "conversationId": "my-support-chat",
  "currentRoundCount": 5,
  "maxRounds": 5,
  "isComplete": true
}
```
4. Retrieve Conversation
- Output varies based on the selected format:
  - Conversation History for LLM: Array of `{role, content}` objects, optionally formatted for OpenAI, Anthropic, Google, or generic LLMs.
  - Message Array: Simple array of all messages with role and content.
  - Grouped by Role: Object grouping messages by role.
- Common fields: `messageCount`, `lastUpdated`, `roundCount`, `maxRounds`, `roundsRemaining`, `binaryProperty`.
- Example (OpenAI format):
```json
{
  "conversationHistory": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help you?"}
  ],
  "messageCount": 3,
  "roundCount": 1,
  "maxRounds": 5,
  "roundsRemaining": 4,
  "binaryProperty": "data"
}
```
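The per-provider differences behind the "LLM Platform" option can be illustrated with a small mapping sketch. The target shapes follow the public OpenAI, Anthropic, and Gemini chat schemas; the mapping function itself is illustrative and is not the node's code:

```typescript
type StoredMessage = { role: "system" | "user" | "assistant"; content: string };

// Illustrative mapping from stored messages to provider-specific history shapes.
function toProviderHistory(messages: StoredMessage[], platform: "openai" | "anthropic" | "google") {
  switch (platform) {
    case "openai":
      // OpenAI chat completions accept system/user/assistant roles directly.
      return messages;
    case "anthropic":
      // Anthropic's Messages API takes system text as a separate field;
      // the messages array holds only user/assistant turns.
      return {
        system: messages.filter((m) => m.role === "system").map((m) => m.content).join("\n"),
        messages: messages.filter((m) => m.role !== "system"),
      };
    case "google":
      // Gemini uses "model" for assistant turns and wraps text in parts.
      return messages
        .filter((m) => m.role !== "system")
        .map((m) => ({ role: m.role === "assistant" ? "model" : "user", parts: [{ text: m.content }] }));
  }
}
```

Applied to the three-message example above, the Anthropic shape moves the system prompt out of the array, while the Gemini shape renames the assistant turn to `model`.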
5. Recall Memory
- Returns raw memory state, optionally including messages, roles, and metadata.
- Example:
```json
{
  "status": "success",
  "operation": "recall",
  "timestamp": "2024-06-01T12:20:00Z",
  "memoryId": "my-support-chat",
  "conversation_id": "my-support-chat",
  "memory": {
    "messages": [ ... ],
    "roles": [ ... ],
    "spotCount": 3,
    "roundCount": 2,
    "maxRounds": 5,
    "lastUpdated": "2024-06-01T12:19:00Z",
    "title": "Support Chat",
    "context": "...",
    "talkingPoints": [ ... ],
    "isNewConversation": false
  }
}
```
6. Clear Conversation
- Confirms that the conversation was cleared.
- Example:
```json
{
  "success": true,
  "operation": "clear",
  "timestamp": "2024-06-01T12:30:00Z",
  "roundCount": 0,
  "maxRounds": 0,
  "messageCount": 0,
  "status": "success",
  "message": "Conversation cleared successfully",
  "binaryProperty": "data"
}
```
## Binary Data
- The node uses n8n's binary data storage to persist conversation state. The binary property name is configurable and must be consistent across related nodes.
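A minimal sketch of what persisting state in a binary property can look like: JSON state serialized to base64, the way n8n commonly carries binary payloads. Field names mirror the examples above; the node's actual storage helper may differ:

```typescript
interface ConversationState {
  messages: { role: string; content: string }[];
  roundCount: number;
  maxRounds: number;
}

// Hypothetical round-trip through an n8n-style binary property.
function toBinaryProperty(state: ConversationState): { data: string; mimeType: string } {
  return {
    data: Buffer.from(JSON.stringify(state)).toString("base64"),
    mimeType: "application/json",
  };
}

function fromBinaryProperty(binary: { data: string }): ConversationState {
  return JSON.parse(Buffer.from(binary.data, "base64").toString("utf8"));
}
```

Because state lives in the item's binary data rather than in the node, downstream Round Robin nodes must read the same property name to see the same conversation.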
## Dependencies
- External Storage: Uses internal n8n binary data storage via a helper (`createStorageManager`). No external API keys required.
- n8n Configuration: Ensure binary data is passed correctly between nodes using the specified binary property name (default: `"data"`).
- Environment Variables: None required.
## Troubleshooting
Common Issues:
- Missing Binary Data:
  - Error: `"No conversation data found. Please connect this to a setup node first."`
  - Solution: Ensure a "Setup Conversation" or "Store Message" operation runs before "Run Conversation" or "Retrieve Conversation".
- Invalid Role Index:
  - Error: `"Invalid role index: X. Must be between 0 and N."`
  - Solution: Check that the "Current Role Index" or "Current Participant" is within the valid range.
- Corrupted or Missing Binary Property:
  - Error: `"Binary property 'data' not found. Check that your workflow correctly passes binary data."`
  - Solution: Verify that the correct binary property name is used and that upstream nodes output binary data under that property.
- Max Rounds Reached:
  - Status: `"limit_reached"` or `"complete"`
  - Solution: Either clear the conversation or increase the "Maximum Conversation Rounds".
Other Notes:
- If roles are not defined or detected, the node falls back to default roles (User, Assistant, System).
- When using custom roles or input roles, ensure the input data structure matches expectations.