# n8n-nodes-agentiff
Custom n8n community nodes for Agentiff.AI workflows. Reduces boilerplate in webhook-triggered, SSE-streamed, LLM-powered n8n workflows from ~15 nodes to ~5.
## Nodes
| Node | Purpose |
|---|---|
| Agentiff Webhook | Trigger with auto-extraction of execution_id, api_token, inputs, credentials, and prompts from the standard Agentiff payload |
| Agentiff SSE Event | Send real-time progress/status/data/error/complete events to Agentiff clients |
| Agentiff LLM | Call the Agentiff LLM Gateway (OpenAI-compatible) with built-in JSON extraction |
| Agentiff PII Redact | Detect and redact 15 types of PII (emails, phones, SSNs, API keys, etc.) from text fields |
| Agentiff Gmail API | Gmail operations (list, get, send, delete) using injected OAuth access tokens |
## Installation

### Docker (Recommended)

```yaml
environment:
  - N8N_COMMUNITY_PACKAGES=n8n-nodes-agentiff
```
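In context, a fuller `docker-compose.yml` might look like the sketch below (illustrative only; the image tag, port mapping, and volume are assumptions to adapt to your deployment):

```yaml
# Hypothetical docker-compose.yml sketch -- adjust to your setup
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_COMMUNITY_PACKAGES=n8n-nodes-agentiff
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```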
### Manual

```shell
cd ~/.n8n/nodes
npm install n8n-nodes-agentiff
# Restart n8n
```
### Community Nodes UI

- Go to Settings > Community Nodes > Install
- Enter: `n8n-nodes-agentiff`
## Credentials

Create an Agentiff.AI API credential with:

- API Token: Your Agentiff.AI API token
- Base URL: `https://api.agentiff.ai` (or staging URL)
## Node Documentation

### Agentiff Webhook

Webhook trigger that auto-extracts the standard Agentiff payload structure.
Standard payload (sent by the Agentiff platform):

```json
{
  "execution_id": "exec_abc123",
  "api_token": "sk-...",
  "inputs": { "max_results": 10, "query": "invoice" },
  "credentials": { "gmail": { "access_token": "ya29..." } },
  "prompts": { "system": "...", "user": "..." }
}
```
Output (with Auto-Extract enabled):

```json
{
  "execution_id": "exec_abc123",
  "api_token": "sk-...",
  "max_results": 10,
  "query": "invoice",
  "_agentiff": {
    "execution_id": "exec_abc123",
    "api_token": "sk-...",
    "base_url": "https://api.agentiff.ai",
    "inputs": { "max_results": 10, "query": "invoice" },
    "credentials": { "gmail": { "access_token": "ya29..." } },
    "prompts": { "system": "...", "user": "..." }
  },
  "body": { /* original body */ }
}
```
The `_agentiff` context object is automatically consumed by downstream Agentiff nodes (SSE Event, LLM).
Parameters:

- Path: Webhook URL path
- Response Mode: `onReceived`, `lastNode`, or `responseNode`
- Auto-Extract Standard Fields: Extract and flatten the standard payload (default: true)
- Extract Parameters: Additional custom parameters with type conversion and validation
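The flattening behaviour can be sketched as follows. This is a minimal illustration of the documented input/output shape, not the node's actual implementation; the `auto_extract` helper name and the hard-coded `base_url` default are assumptions:

```python
import copy

def auto_extract(body: dict, base_url: str = "https://api.agentiff.ai") -> dict:
    """Sketch: promote inputs to the top level and preserve the full
    context under `_agentiff`, mirroring the documented output shape."""
    out = {
        "execution_id": body.get("execution_id"),
        "api_token": body.get("api_token"),
    }
    out.update(body.get("inputs", {}))  # flatten inputs to the top level
    out["_agentiff"] = {
        "execution_id": body.get("execution_id"),
        "api_token": body.get("api_token"),
        "base_url": base_url,
        "inputs": body.get("inputs", {}),
        "credentials": body.get("credentials", {}),
        "prompts": body.get("prompts", {}),
    }
    out["body"] = copy.deepcopy(body)  # keep the original body intact
    return out

payload = {
    "execution_id": "exec_abc123",
    "api_token": "sk-...",
    "inputs": {"max_results": 10, "query": "invoice"},
    "credentials": {"gmail": {"access_token": "ya29..."}},
    "prompts": {"system": "...", "user": "..."},
}
item = auto_extract(payload)
```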
### Agentiff SSE Event

Send Server-Sent Events to stream progress updates to Agentiff clients.

Parameters:

- Execution ID: Auto-reads from `_agentiff` context or the `execution_id` field
- Event Type: `progress`, `status`, `data`, `error`, `complete`, or `custom`
- Event Data: JSON payload

Automatically resolves `base_url` and `api_token` from the `_agentiff` context injected by Agentiff Webhook.
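For reference, a Server-Sent Events frame pairs an `event:` line with a `data:` line and a blank-line terminator. The node handles the wire format for you; this sketch only illustrates what an emitted event might look like (the `sse_frame` helper and the `percent` field are hypothetical):

```python
import json

def sse_frame(event_type: str, data: dict) -> str:
    """Serialize one SSE frame: event name, JSON data, blank-line terminator."""
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("progress", {"execution_id": "exec_abc123", "percent": 40})
```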
### Agentiff LLM

Call the Agentiff LLM Gateway (OpenAI-compatible `/chat/completions`) with built-in response parsing. Replaces the common pattern of HTTP Request + Code node for JSON extraction.

Parameters:

- API Token Source: `From Agentiff Context` (default), `From Credentials`, or `Custom Expression`
- Gateway URL: Defaults to the credential base URL. Set `={{ $env.LLM_GATEWAY_URL }}` for env-based config.
- Model: `gpt-4o-mini`, `gpt-4o`, `gpt-4-turbo`, `gpt-3.5-turbo`, Claude 3.5 Sonnet, Claude 3 Haiku, Grok, or Custom
- System Prompt: Optional system prompt (supports expressions)
- User Prompt: Required user prompt
- Response Mode:
  - `Text Content`: Returns `choices[0].message.content` as a string
  - `JSON Extraction` (default): Extracts JSON from the response with a 3-strategy fallback
  - `Full Response`: Returns the complete API response object
JSON extraction strategy (in JSON mode):

1. Try the full content as JSON
2. Try to find a JSON array `[{...}]`
3. Try to find a JSON object `{...}`
4. Fall back to raw text with `json_extracted: false`
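The fallback chain can be sketched like this. It is a minimal illustration of the documented strategy, not the node's actual code; in particular, the greedy regexes are a simplification that assumes a single JSON fragment in the response:

```python
import json
import re

def extract_json(content: str) -> dict:
    """Sketch of the documented 3-strategy fallback."""
    # 1. Try the full content as JSON
    try:
        return {"content": content, "parsed": json.loads(content), "json_extracted": True}
    except json.JSONDecodeError:
        pass
    # 2. Try to find a JSON array, then 3. a JSON object, inside the text
    for pattern in (r"\[.*\]", r"\{.*\}"):
        match = re.search(pattern, content, re.DOTALL)
        if match:
            try:
                return {"content": content, "parsed": json.loads(match.group()), "json_extracted": True}
            except json.JSONDecodeError:
                continue
    # 4. Fall back to raw text
    return {"content": content, "parsed": None, "json_extracted": False}
```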
Output (JSON mode):

```json
{
  "content": "raw LLM text",
  "parsed": { /* extracted JSON object/array */ },
  "json_extracted": true,
  "model": "gpt-4o-mini",
  "usage": { "prompt_tokens": 150, "completion_tokens": 200 }
}
```
Options: Temperature (default 0.3), Max Tokens (default 2000), Top P, Timeout (default 60s)
### Agentiff PII Redact

Detect and redact personally identifiable information from text fields. Supports 15 PII patterns.

Parameters:

- Fields to Scan: Comma-separated field names, supports dot notation (e.g., `content, user.email, nested.field`)
- PII Types: Multi-select of patterns to detect:
| Type | Redaction | Severity |
|---|---|---|
| Email Addresses | `[EMAIL_REDACTED]` | HIGH |
| Phone Numbers (US) | `[PHONE_REDACTED]` | HIGH |
| Phone Numbers (Intl) | `[PHONE_REDACTED]` | HIGH |
| Social Security Numbers | `[SSN_REDACTED]` | CRITICAL |
| Credit Card Numbers | `[CC_REDACTED]` | CRITICAL |
| IP Addresses | `[IP_REDACTED]` | MEDIUM |
| API Keys | `[API_KEY_REDACTED]` | CRITICAL |
| AWS Keys | `[AWS_KEY_REDACTED]` | CRITICAL |
| OpenAI Keys | `[OPENAI_KEY_REDACTED]` | CRITICAL |
| Stripe Keys | `[STRIPE_KEY_REDACTED]` | CRITICAL |
| JWT Tokens | `[JWT_REDACTED]` | HIGH |
| Bearer Tokens | `[BEARER_REDACTED]` | HIGH |
| Passwords | `[PASSWORD_REDACTED]` | CRITICAL |
| IBAN Numbers | `[IBAN_REDACTED]` | HIGH |
| Physical Addresses | `[ADDRESS_REDACTED]` | MEDIUM |
Options:

- Include Findings Summary: Adds a `pii_redaction` object with counts (default: true)
- Passthrough Unscanned Fields: Include unscanned fields in the output (default: true)
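Redaction with a findings summary can be sketched as below. The two regexes are illustrative stand-ins for the node's 15 patterns, and the summary shape (`findings`, `total`) is an assumption, not the exact `pii_redaction` schema:

```python
import re

# Two illustrative patterns; the actual node ships 15 (see table above)
PATTERNS = {
    "email": (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL_REDACTED]"),
    "ssn": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN_REDACTED]"),
}

def redact(text: str) -> tuple[str, dict]:
    """Redact all matches and return (clean text, per-type counts)."""
    counts: dict[str, int] = {}
    for name, (pattern, replacement) in PATTERNS.items():
        text, n = pattern.subn(replacement, text)
        if n:
            counts[name] = n
    return text, {"findings": counts, "total": sum(counts.values())}

clean, summary = redact("Contact jane@example.com, SSN 123-45-6789")
```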
### Agentiff Gmail API

Gmail operations using automatically injected OAuth access tokens.

Operations: List Messages, Get Message, Send Message, Delete Message

Access Token Source:

- From Previous Node: Auto-reads `access_token` from upstream (e.g., from Agentiff Webhook's `_agentiff.credentials`)
- Custom Expression: Provide a custom expression
See INSTALLATION.md for detailed Gmail API setup.
## Development

```shell
npm install
npm run build   # TypeScript + icons
npm run dev     # Watch mode
npm run lint    # ESLint with n8n rules
```
### Local Testing

```shell
npm link
cd ~/.n8n/custom && npm link n8n-nodes-agentiff
# Restart n8n
```
## License
MIT




