Package Information
Released: 8/22/2025
Downloads: 9 weekly / 42 monthly
Latest Version: 1.0.0
Author: PromptLock Team
# PromptLock Guard for n8n

🛡️ **AI-powered security guardrails for your n8n workflows**
PromptLock Guard is a community node that brings enterprise-grade content analysis and compliance checking directly into your n8n automations. Protect sensitive data, prevent prompt injection attacks, and ensure regulatory compliance with a simple drop-in node.
## ✨ Features

- 🔒 **Multi-Framework Compliance**: HIPAA, GDPR, and PCI compliance checking
- 🚦 **4-Output Routing**: Clean workflow routing based on risk assessment
- 🔐 **Fail-Closed Security**: Secure by default with configurable error handling
- ⚡ **Real-time Analysis**: Fast API integration with configurable timeouts
- 🎯 **Flexible Targeting**: Support for nested field paths with dot notation
- 📊 **Rich Metadata**: Detailed analysis results and compliance status
## 🚀 Installation

### Community Nodes (Recommended)

- In n8n, go to **Settings → Community Nodes**
- Click **Install a community node**
- Enter: `n8n-nodes-promptlock-guard`
- Click **Install**
- Restart n8n
### npm Installation

```bash
# Install globally for n8n
npm install -g n8n-nodes-promptlock-guard

# Or install in your n8n user directory
cd ~/.n8n/
npm install n8n-nodes-promptlock-guard

# Restart n8n afterwards
```
## ⚙️ Setup

### 1. Create Credentials

In n8n, create new **PromptLock API Key** credentials:

- **Base URL**: `https://api.promptlock.com` (or your instance URL)
- **API Key**: Your PromptLock API key (starts with `pl_`)
- **Header Style**: `X-API-Key` (recommended)
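For orientation, here is a hedged sketch of how those credential values map onto an HTTP request. The `/analyze` endpoint path and request-body shape are assumptions for illustration only, not the documented PromptLock API:

```typescript
// Sketch: how the credentials could map onto a request. The endpoint path
// ("/analyze") and body shape are illustrative assumptions, not the real API.
function buildGuardRequest(baseUrl: string, apiKey: string, text: string) {
  return {
    url: `${baseUrl}/analyze`, // path is a guess for illustration
    init: {
      method: "POST" as const,
      headers: {
        "Content-Type": "application/json",
        "X-API-Key": apiKey, // matches the "X-API-Key" header style above
      },
      body: JSON.stringify({ text }),
    },
  };
}

const req = buildGuardRequest("https://api.promptlock.com", "pl_example", "hello");
// In real code this would be passed to fetch(req.url, req.init).
```

The node handles this call internally; the sketch only shows where each credential field ends up.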
### 2. Add the Node

- Search for "PromptLock Guard" in the node panel
- Configure:
  - **Text Field**: Path to your text data (e.g., `text`, `payload.message`)
  - **Frameworks**: Choose compliance frameworks (HIPAA, GDPR, PCI)
  - **Credentials**: Select your PromptLock API Key
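Dot-notation paths like `payload.message` can be resolved with a small helper along these lines; this is an illustrative sketch, not the node's actual resolver:

```typescript
// Resolve a dot-notation path like "payload.message" against an item's JSON.
// Illustrative only -- the node's internal resolver may differ.
function getByPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (acc, key) =>
      acc !== null && typeof acc === "object"
        ? (acc as Record<string, unknown>)[key]
        : undefined,
    obj,
  );
}

const item = { payload: { message: "Patient record for J. Doe" } };
console.log(getByPath(item, "payload.message")); // the nested text
console.log(getByPath(item, "payload.missing")); // undefined
```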
### 3. Wire the Outputs

The node provides four distinct outputs:

- ✅ **Allow**: Content is safe, proceed normally
- ⚠️ **Flag**: Content needs review, proceed with caution
- 🔒 **Redact**: Content has been cleaned, use the `cleanText` field
- 🚫 **Block**: Content is blocked, do not proceed
## 📋 Quick Example

```
Webhook → PromptLock Guard
├── Allow  → Process Normally
├── Flag   → Send to Review Queue
├── Redact → Process with Clean Text
└── Block  → Return 403 Error
```
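Conceptually, the routing step maps each analysis action to one of the four output branches. The sketch below assumes a 0-based output index in the order listed above; the node's internal ordering is not guaranteed to match:

```typescript
// Sketch of four-output routing: map the analysis action to an output branch.
// The 0..3 index order is an assumption for illustration.
type GuardAction = "allow" | "flag" | "redact" | "block";

const OUTPUT_INDEX: Record<GuardAction, number> = {
  allow: 0,  // proceed normally
  flag: 1,   // send to review queue
  redact: 2, // process with clean text
  block: 3,  // stop / return 403
};

function routeItem(action: GuardAction): number {
  return OUTPUT_INDEX[action];
}
```

In n8n itself you wire each branch visually; the table above is just the mental model.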
## 🔧 Configuration Options

### Core Settings

- **Text Field**: Path to analyze (supports dot notation like `data.message.text`)
- **Compliance Frameworks**: Select HIPAA, GDPR, and/or PCI checking
- **Action on High Risk**: Override server policy (inherit/flag/redact/block/score)
### Advanced Settings

- **Write Clean Text To**: Field path for redacted content (default: `cleanText`)
- **Attach Metadata Under**: Field path for analysis results (default: `promptLock`)
- **On API Error**: Error-handling strategy (block/flag/allow/throw)
- **Request Timeout**: API timeout in milliseconds (default: `15000`)
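The interaction of **Request Timeout** and a fail-closed **On API Error** setting can be illustrated with a small sketch. The function and names here are hypothetical, not the node's source:

```typescript
// Sketch of fail-closed behavior: if the analysis call errors or exceeds the
// timeout, treat the item as blocked rather than letting it through.
// "analyze" stands in for the node's internal API call; names are illustrative.
async function guardWithTimeout(
  analyze: () => Promise<{ action: string }>,
  timeoutMs = 15000, // mirrors the default Request Timeout
): Promise<{ action: string }> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  try {
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
    });
    return await Promise.race([analyze(), timeout]);
  } catch {
    return { action: "block" }; // fail closed: any error blocks the item
  } finally {
    clearTimeout(timer); // avoid leaking the pending timer
  }
}
```

Choosing "allow" as the error strategy would instead return the item untouched on failure, trading security for availability.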
## 📊 Metadata Structure

The node attaches rich metadata to each item:

```json
{
  "promptLock": {
    "risk_score": 0.85,
    "action_taken": "redact",
    "violations": [
      {
        "type": "pii_detection",
        "severity": "high",
        "confidence": 0.95
      }
    ],
    "compliance_status": {
      "HIPAA": "violation",
      "GDPR": "compliant",
      "PCI": "compliant"
    },
    "usage": {
      "input_tokens": 150,
      "processing_time_ms": 245
    }
  }
}
```
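Downstream nodes can read this metadata directly. As a sketch, here is how a later step might collect the violated frameworks from an item shaped like the example above (the `GuardMetadata` type is illustrative, not an exported type):

```typescript
// Illustrative type for the metadata shape shown above; not an official export.
interface GuardMetadata {
  risk_score: number;
  action_taken: string;
  compliance_status: Record<string, "compliant" | "violation">;
}

// Collect the names of frameworks whose status is "violation".
function violatedFrameworks(meta: GuardMetadata): string[] {
  return Object.entries(meta.compliance_status)
    .filter(([, status]) => status === "violation")
    .map(([framework]) => framework);
}

const meta: GuardMetadata = {
  risk_score: 0.85,
  action_taken: "redact",
  compliance_status: { HIPAA: "violation", GDPR: "compliant", PCI: "compliant" },
};
console.log(violatedFrameworks(meta)); // → ["HIPAA"]
```

In an n8n Code node the same logic would read `$json.promptLock` instead of a local constant.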
## 🔒 Security Best Practices

- **On API Error**: Keep as "Block (Fail Closed)" for maximum security
- **Action on High Risk**: Use "Inherit from Policy" to leverage server-side rules
- **Write Clean Text To**: Use a separate field to avoid data loss
## 💬 Support

- Email: contact@promptlock.com
## 📄 License

MIT License - see the LICENSE file for details.

Built with ❤️ by the PromptLock team

*Secure your AI workflows, protect your data, ensure compliance.*