promptlock-guard

AI-powered security guardrails for n8n workflows - analyze, redact, or block content based on HIPAA, GDPR, and PCI compliance frameworks

Package Information

Released: 8/22/2025
Downloads: 9 weekly / 42 monthly
Latest Version: 1.0.0
Author: PromptLock Team

Documentation

PromptLock Guard for n8n

🛡️ AI-powered security guardrails for your n8n workflows

PromptLock Guard is a community node that brings enterprise-grade content analysis and compliance checking directly into your n8n automations. Protect sensitive data, prevent prompt injection attacks, and ensure regulatory compliance with a simple drop-in node.

✨ Features

  • 🔍 Multi-Framework Compliance: HIPAA, GDPR, PCI compliance checking
  • 🚦 4-Output Routing: Clean workflow routing based on risk assessment
  • 🔒 Fail-Closed Security: Secure by default with configurable error handling
  • ⚡ Real-time Analysis: Fast API integration with configurable timeouts
  • 🎯 Flexible Targeting: Support for nested field paths with dot notation
  • 📊 Rich Metadata: Detailed analysis results and compliance status

🚀 Installation

Community Nodes (Recommended)

  1. In n8n, go to Settings → Community Nodes
  2. Click Install a community node
  3. Enter: n8n-nodes-promptlock-guard
  4. Click Install
  5. Restart n8n

npm Installation

# Install globally for n8n
npm install -g n8n-nodes-promptlock-guard

# Or install in your n8n user directory
cd ~/.n8n/
npm install n8n-nodes-promptlock-guard

# Restart n8n

⚙️ Setup

1. Create Credentials

  1. In n8n, create new PromptLock API Key credentials:
    • Base URL: https://api.promptlock.com (or your instance URL)
    • API Key: Your PromptLock API key (starts with pl_)
    • Header Style: X-API-Key (recommended)

2. Add the Node

  1. Search for "PromptLock Guard" in the node panel
  2. Configure:
    • Text Field: Path to your text data (e.g., text, payload.message)
    • Frameworks: Choose compliance frameworks (HIPAA, GDPR, PCI)
    • Credentials: Select your PromptLock API Key

3. Wire the Outputs

The node provides four distinct outputs:

  • ✅ Allow: Content is safe, proceed normally
  • ⚠️ Flag: Content needs review, proceed with caution
  • 🔒 Redact: Content has been cleaned, use the cleanText field
  • 🚫 Block: Content is blocked, do not proceed
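
In plain JavaScript, the routing logic can be sketched roughly like this. The output indices and the reliance on the promptLock metadata field are assumptions for illustration, not taken from the node's source:

```javascript
// Hypothetical mapping from an item's analysis result to one of the
// node's four outputs. Indices are an assumption; missing metadata
// falls back to 'block' (fail closed).
const OUTPUTS = { allow: 0, flag: 1, redact: 2, block: 3 };

function routeItem(item) {
  const action = item.json?.promptLock?.action_taken ?? 'block';
  return OUTPUTS[action] ?? OUTPUTS.block;
}
```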

📋 Quick Example

Webhook → PromptLock Guard
├─ Allow → Process Normally
├─ Flag → Send to Review Queue
├─ Redact → Process with Clean Text
└─ Block → Return 403 Error

🔧 Configuration Options

Core Settings

  • Text Field: Path to analyze (supports dot notation like data.message.text)
  • Compliance Frameworks: Select HIPAA, GDPR, and/or PCI checking
  • Action on High Risk: Override server policy (inherit/flag/redact/block/score)
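
Dot-notation lookup such as data.message.text can be sketched with a small helper; this is illustrative only, not the node's actual implementation:

```javascript
// Resolve a dot-notation path like "data.message.text" against a
// nested object; returns undefined when any segment is missing.
function getByPath(obj, path) {
  return path
    .split('.')
    .reduce((cur, key) => (cur == null ? undefined : cur[key]), obj);
}
```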

Advanced Settings

  • Write Clean Text To: Field path for redacted content (default: cleanText)
  • Attach Metadata Under: Field path for analysis results (default: promptLock)
  • On API Error: Error handling strategy (block/flag/allow/throw)
  • Request Timeout: API timeout in milliseconds (default: 15000)
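
The timeout and On API Error settings can be sketched as follows; analyze stands in for the actual API call, and the fallback result shape is an assumption:

```javascript
// Race the analysis call against a timeout; on any failure, apply the
// configured "On API Error" strategy (fail closed by default).
async function guardWithFallback(analyze, { timeoutMs = 15000, onError = 'block' } = {}) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Request timed out')), timeoutMs);
  });
  try {
    return await Promise.race([analyze(), timeout]);
  } catch (err) {
    if (onError === 'throw') throw err;
    // 'block' | 'flag' | 'allow' become the fallback action.
    return { action_taken: onError, error: String(err.message ?? err) };
  } finally {
    clearTimeout(timer);
  }
}
```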

📊 Metadata Structure

The node attaches rich metadata to each item:

{
	"promptLock": {
		"risk_score": 0.85,
		"action_taken": "redact",
		"violations": [
			{
				"type": "pii_detection",
				"severity": "high",
				"confidence": 0.95
			}
		],
		"compliance_status": {
			"HIPAA": "violation",
			"GDPR": "compliant",
			"PCI": "compliant"
		},
		"usage": {
			"input_tokens": 150,
			"processing_time_ms": 245
		}
	}
}
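
A downstream Code node could consume this metadata, for example to collect the violated frameworks; the 0.8 risk threshold here is an arbitrary example value, not a node default:

```javascript
// Summarize the promptLock metadata attached by the node.
function summarize(meta) {
  const violated = Object.entries(meta.compliance_status)
    .filter(([, status]) => status === 'violation')
    .map(([framework]) => framework);
  return {
    risky: meta.risk_score >= 0.8, // example threshold only
    action: meta.action_taken,
    violatedFrameworks: violated,
  };
}
```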

🔒 Security Best Practices

  • On API Error: Keep as "Block (Fail Closed)" for maximum security
  • Action on High Risk: Use "Inherit from Policy" to leverage server-side rules
  • Write Clean Text To: Use a separate field to avoid data loss

📞 Support

📜 License

MIT License - see LICENSE file for details.


Built with ā¤ļø by the PromptLock team

Secure your AI workflows, protect your data, ensure compliance.
