Loopman n8n Community Node
Loopman is a human-in-the-loop automation platform that allows you to seamlessly integrate human decision-making into your automated workflows.
This community node for n8n enables you to send messages to humans and wait for their responses within your n8n workflows.
Quick Start
Installation
npm install n8n-nodes-loopman
Configuration
- Get your API key from loopman.ai
- In n8n, add a Loopman credential with your API key
- Add the Loopman node to your workflow
Usage
The Loopman node performs polling-only operations to wait for human validation on tasks created by the MCP Client.
Workflow Setup
- MCP Client Node (upstream): Creates a task with the workflow's external reference
- Loopman Node (this node): Automatically retrieves the task using `workflowId` and `executionId`, then polls for the human decision
- Output Routes: Routes the workflow based on the human decision (Approved, Rejected, Needs Changes)
Note: The Loopman node automatically finds the task created by the MCP Client using the n8n workflow and execution identifiers. No manual configuration needed!
Key Features
- Polling-Only: No task creation, only waits for decisions on existing tasks
- MCP Integration: Requires MCP Client node upstream to create tasks
- Server-Managed Configuration: Polling interval and timeout are retrieved from the task
- Automatic Routing: Routes to different outputs based on human decision
For detailed MCP setup instructions, see MCP Client Setup Guide.
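The polling-and-routing behavior described above can be sketched in TypeScript. This is an illustrative model only, not the node's real implementation: the `TaskStatus` shape and the field names `decision` and `pollIntervalMs` are assumptions, mirroring the server-managed interval and the three decision routes from the feature list.

```typescript
// Hypothetical sketch of the Loopman polling loop; field names are
// assumptions, not the actual Loopman API.
type Decision = "approved" | "rejected" | "needs_changes" | null;

interface TaskStatus {
  decision: Decision;
  pollIntervalMs: number; // server-managed: returned with the task
}

// Poll until the human decides, honoring the server-provided interval.
async function waitForDecision(
  fetchStatus: () => Promise<TaskStatus>,
  timeoutMs: number,
): Promise<Decision> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await fetchStatus();
    if (status.decision !== null) return status.decision; // route on this value
    await new Promise<void>((resolve) =>
      setTimeout(resolve, status.pollIntervalMs),
    );
  }
  throw new Error("Timed out waiting for human decision");
}

// Simulated usage: the third poll returns an approval.
async function demo(): Promise<Decision> {
  let calls = 0;
  return waitForDecision(async () => {
    calls += 1;
    return { decision: calls >= 3 ? "approved" : null, pollIntervalMs: 10 };
  }, 5_000);
}

demo().then((d) => console.log(d)); // logs "approved"
```

In the real node, the returned decision value would select one of the three output routes.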
LLM Configuration (Recommended)
When using the Loopman node with AI models (OpenAI, Claude, etc.), we recommend specific configurations to ensure optimal performance and consistent output formatting.
Temperature Settings
Recommended: 0.1 for consistent, deterministic behavior
- `0.0` - Maximum determinism (may be too rigid)
- `0.1` - Very deterministic, excellent for structured tasks ✅ RECOMMENDED
- `0.3` - Good balance for most use cases
- `0.7` - Default value (may cause formatting issues)
Complete LLM Configuration
// OpenAI Chat Model Configuration
{
  "model": "gpt-4o-mini",
  "temperature": 0.1,      // Very deterministic
  "topP": 0.9,             // Good balance
  "maxTokens": 2000,       // Prevent truncated responses
  "timeout": 30000,        // Appropriate timeout
  "frequencyPenalty": 0.0, // No frequency penalty
  "presencePenalty": 0.0   // No presence penalty
}
Why These Settings?
- `temperature: 0.1` - Ensures consistent JSON formatting and reduces parsing errors
- `maxTokens: 2000` - Prevents truncated responses that can cause workflow failures
- `topP: 0.9` - Maintains quality while staying deterministic
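The recommendations above can be captured in a small helper. This is a hypothetical utility, not part of the package: `withRecommendedDefaults` and the `LlmConfig` shape are invented here for illustration, mirroring the parameter names from the configuration snippet.

```typescript
// Hypothetical helper mirroring this README's recommendations; the
// parameter names follow the OpenAI-style config shown above.
interface LlmConfig {
  model: string;
  temperature: number;
  topP: number;
  maxTokens: number;
}

// Fill in the recommended defaults, warning when a value is likely to
// cause formatting problems (e.g. temperature at the 0.7 default).
function withRecommendedDefaults(partial: Partial<LlmConfig>): LlmConfig {
  const config: LlmConfig = {
    model: partial.model ?? "gpt-4o-mini",
    temperature: partial.temperature ?? 0.1,
    topP: partial.topP ?? 0.9,
    maxTokens: partial.maxTokens ?? 2000,
  };
  if (config.temperature > 0.3) {
    console.warn(
      `temperature ${config.temperature} may cause inconsistent JSON output`,
    );
  }
  return config;
}

const cfg = withRecommendedDefaults({ model: "gpt-4o-mini" });
console.log(cfg.temperature); // 0.1
```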
Common Issues Fixed
- ✅ Prevents duplicate "Action Input" errors in ReAct parsing
- ✅ Ensures consistent JSON formatting for MCP tools
- ✅ Reduces workflow failures from truncated responses
- ✅ Improves reliability of human-in-the-loop decisions
Installation Methods
Local n8n Instance
Option 1: Via npm (Recommended)
# In your n8n instance directory
npm install n8n-nodes-loopman
# Or install a specific version
npm install n8n-nodes-loopman@0.1.17
Option 2: Development with npm link
# In your loopman-n8n-connector project
npm link
# In your n8n instance directory
npm link n8n-nodes-loopman
Docker Installation
Add to your docker-compose.yml:
version: "3.8"
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=password
    volumes:
      - n8n_data:/home/node/.n8n
    command: >
      sh -c "
        npm install n8n-nodes-loopman &&
        n8n start
      "
volumes:
  n8n_data:
n8n Cloud
- ❌ Not yet available - Currently under review by n8n team
- ⏳ Coming soon - Will be available in the n8n marketplace once approved
After Installation
Restart n8n:
# Local installation
n8n restart
# Docker
docker-compose restart n8n
Verify installation:
- Open n8n in your browser
- Go to Nodes → Community Nodes
- Search for "Loopman"
- The node should appear in the list
Use the connector:
- Create a new workflow
- Add a node
- Search for "Loopman"
- Configure with your credentials
For detailed installation instructions, see Installation Guide.
Development
Quick Start
# Setup (first time only)
npm run setup
# Start development environment (isolated n8n instance)
npm run start:dev
# Stop all n8n processes
npm run stop
# Clean development data (if needed)
npm run clean:dev
This launches:
- TypeScript compiler in watch mode
- Isolated n8n instance (no conflicts with global n8n)
- Node.js debugger on port 9230 with TypeScript source maps
- Fresh database (no migration issues)
Debug TypeScript Directly
- Start development: `npm run start:dev`
- In VS Code: Run and Debug → Debug n8n Connector (TypeScript)
- Set breakpoints directly in your `.ts` files and debug with full TypeScript support!
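The VS Code debug configuration mentioned above is not shown in this README. A plausible `.vscode/launch.json` attach entry, assuming the debugger port 9230 noted earlier (the configuration name and flags here are illustrative, not taken from the repository):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug n8n Connector (TypeScript)",
      "type": "node",
      "request": "attach",
      "port": 9230,
      "restart": true,
      "sourceMaps": true,
      "skipFiles": ["<node_internals>/**"]
    }
  ]
}
```

An attach request (rather than launch) fits this setup because `npm run start:dev` already starts n8n with the debugger listening.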
Manual Development
# Install and build
npm install
npm run build
npm link
# Watch TypeScript changes
npm run dev
# In another terminal, start n8n
N8N_CUSTOM_EXTENSIONS=$(pwd)/dist n8n start
Features
- ✅ Polling-only workflow - Wait for human validation on tasks created by MCP Client
- ✅ Automatic routing - Route workflow based on human decision (Approved/Rejected/Needs Changes)
- ✅ MCP Client integration - Seamless integration with AI agent workflows
- ✅ Server-managed configuration - Polling interval and timeout retrieved from task
- ✅ TypeScript support - Full type safety and IntelliSense
- ✅ Debugging support - Debug directly in TypeScript with source maps
- ✅ Docker ready - Easy deployment with Docker Compose
- ✅ npm package - Install via npm for easy distribution
Links
- Website: loopman.ai
- Documentation: docs.loopman.ai
- npm Package: n8n-nodes-loopman
- GitHub Repository: Loopman-AI/n8n-loopman-connector
Support
- Email: support@loopman.ai
- Issues: GitHub Issues
- Documentation: docs.loopman.ai
License
MIT License - see LICENSE file for details.
Made with ❤️ by the Loopman Team
EXECUTION PROTOCOL - FOLLOW THESE STEPS STRICTLY IN THIS EXACT ORDER:
1️⃣ INITIALIZATION:
- FIRST, call the MCP tool exactly as follows (single line JSON):
{"action":"getHumanGuidelines","params":{"workflow_id":"{{ $workflow.id }}"}}
- Do not proceed until these guidelines have been successfully loaded.
────────────────────────────────────────────────────────────
2️⃣ AGENT ACTION:
[Core of your Agent Prompt: email classification, reasoning, JSON output, etc.]
Apply the retrieved guidelines and decision rules to ensure your proposed output fully complies with policy and workflow expectations.
────────────────────────────────────────────────────────────
3️⃣ SUBMISSION 🚨 MANDATORY FINAL STEP 🚨:
⛔ YOU MUST CALL submitForHumanReview TO COMPLETE YOUR TASK ⛔
- After completing your reasoning and generating your final result, you MUST call the MCP tool `submitForHumanReview`
- This is NOT optional - it is a REQUIRED step for EVERY workflow
- You CANNOT provide a final answer without calling submitForHumanReview first
- Do NOT skip this step under any circumstances
- The tool description will show all required and optional fields
- Structure your submission as if writing an email to your supervisor for decision approval
- The reviewer may not have full context, so provide clear explanations and supporting evidence
⚠️ CRITICAL RULES:
- The order must always be: `getHumanGuidelines` → reasoning → `submitForHumanReview` (NO EXCEPTIONS)
- Use workflow Id = {{ $workflow.id }} and execution Id = {{ $execution.id }} (automatically added by the system)
- Only valid JSON (single line) will be executed; any text, markdown, or explanation outside the JSON will cause an error
- Optional fields must be omitted, not left empty
- Always put the entire JSON on one line. Never use line breaks in Action Input
- If you provide a final answer without calling submitForHumanReview first, you have FAILED the task
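To illustrate the single-line JSON rule, here is a hypothetical `submitForHumanReview` Action Input. Apart from `workflow_id` and `execution_id`, every field name and value below is invented for illustration only; the actual required and optional fields come from the tool description.

```json
{"action":"submitForHumanReview","params":{"workflow_id":"{{ $workflow.id }}","execution_id":"{{ $execution.id }}","summary":"Classified the email as a refund request","proposed_action":"Reply with the refund approval template","evidence":"Customer references a valid order within the 14-day window"}}
```

Note that the entire payload sits on one line, with no surrounding text or markdown, as the rules above require.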