Overview
This node, named "PromptLock Guard," is an AI-powered content analysis tool designed to help ensure compliance with various regulatory frameworks by analyzing text data. It inspects input text for sensitive or high-risk content according to selected compliance standards such as HIPAA (healthcare), GDPR (privacy), and PCI (payment security). Based on the analysis, it can take actions like allowing the content, flagging it for review, redacting sensitive parts, or blocking the content entirely.
Common scenarios where this node is beneficial include:
- Automatically screening user-generated content or messages for compliance violations before processing or storage.
- Sanitizing sensitive information in documents or communications to meet privacy regulations.
- Enforcing organizational policies on data handling by automatically blocking or redacting risky content.
Practical example:
- A healthcare application uses this node to scan patient notes or messages for protected health information (PHI) and redact or block any content that violates HIPAA rules before saving or forwarding the data.
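The allow/flag/redact/block decision can be pictured as a simple threshold policy over the risk score. The thresholds and function name below are illustrative assumptions, not the node's actual logic (in practice the action also depends on the configured "Action on High Risk" override and the detected violations):

```python
def choose_action(risk_score: float, flag_threshold: float = 0.4,
                  block_threshold: float = 0.8) -> str:
    """Map a numeric risk score to an enforcement action.

    Thresholds are illustrative; the real node derives the action from
    the API response and the configured policy, and may choose 'redact'
    when violations can be cleaned rather than blocked outright.
    """
    if risk_score >= block_threshold:
        return "block"
    if risk_score >= flag_threshold:
        return "flag"
    return "allow"

print(choose_action(0.9))  # block
print(choose_action(0.5))  # flag
print(choose_action(0.1))  # allow
```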
Properties
| Name | Meaning |
|---|---|
| Text Field | Path to the field containing the text to analyze. Supports dot notation (e.g., payload.message). |
| Compliance Frameworks | Select one or more compliance frameworks to check against: HIPAA, GDPR, PCI. |
| Action on High Risk | Override default action when high risk content is detected. Options: inherit from policy, flag, redact, block, score only (return risk score without enforcement). |
| Write Clean Text To | Field path where redacted or cleaned text will be written (e.g., cleanText, sanitized). |
| Attach Metadata Under | Field path where analysis metadata (scores, violations, compliance status, etc.) will be attached. |
| Route "Score Only" To | When action is "score only", choose which output to route to: flag output or allow output. |
| On API Error | Behavior when the external API is unreachable or returns an error. Options: block (fail closed), flag, allow (fail open), throw error. |
| Request Timeout (ms) | HTTP timeout for the analyze request, in milliseconds, between 1,000 and 60,000 (1–60 seconds). |
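Field paths such as `payload.message` use dot notation to walk nested objects. A minimal sketch of how such a path might be resolved against an item (the helper name is hypothetical; the node's own resolver may differ):

```python
from typing import Any

def resolve_path(item: dict, path: str) -> Any:
    """Walk a dot-notation path like 'payload.message' through nested dicts.

    Returns None if any segment is missing. Illustrative helper only.
    """
    current: Any = item
    for segment in path.split("."):
        if not isinstance(current, dict) or segment not in current:
            return None
        current = current[segment]
    return current

item = {"payload": {"message": "Patient note text"}}
print(resolve_path(item, "payload.message"))  # Patient note text
print(resolve_path(item, "payload.missing"))  # None
```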
Output
The node has four outputs representing different outcomes of the content analysis:
Allow (✅ Allow)
Items where the content passed compliance checks or was allowed based on configured actions.

Flag (⚠️ Flag)
Items flagged for manual review due to detected risks or policy decisions.

Redact (🔒 Redact)
Items where sensitive content was redacted/cleaned. The cleaned text is written to the specified field path.

Block (🚫 Block)
Items blocked due to high risk or policy enforcement.
Each output item contains a JSON object with the original data plus additional metadata under the configured metadata path. This metadata includes:
- risk_score: Numeric risk score assigned by the analysis.
- action_taken: The action applied (allow, flag, redact, block, or score).
- violations: List of detected compliance violations.
- compliance_status: Status details per compliance framework.
- usage: API usage statistics.
- medical_context_detected: Boolean indicating whether medical context was detected.
- timestamp: ISO timestamp of the analysis.
- node_version: Version of the node implementation.
If redaction occurs, the cleaned text is added at the configured clean text path.
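Putting this together, a redacted item might look like the following. All field values are made up for illustration, and `cleanText`/`meta` stand in for whatever clean-text and metadata paths you configured:

```python
import json

# Illustrative output item after redaction; keys follow the metadata
# list above, values are invented for the example.
output_item = {
    "payload": {"message": "Patient John Doe, MRN 12345"},
    "cleanText": "Patient [REDACTED], MRN [REDACTED]",
    "meta": {
        "risk_score": 0.82,
        "action_taken": "redact",
        "violations": ["HIPAA: PHI detected"],
        "compliance_status": {"HIPAA": "violation", "GDPR": "ok"},
        "usage": {"tokens": 128},
        "medical_context_detected": True,
        "timestamp": "2024-01-01T00:00:00Z",
        "node_version": "1.0.0",
    },
}
print(json.dumps(output_item, indent=2))
```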
Dependencies
- Requires an external API key credential to authenticate requests to the PromptShield API service.
- The node sends HTTP POST requests to the /v1/analyze endpoint with the text and the selected compliance frameworks.
- Configurable HTTP headers support either an API key header or bearer-token authentication.
- Requires network access to the external API URL specified in credentials.
- Timeout for API requests is configurable (default 15 seconds).
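A request to the analyze endpoint might be assembled as below. The payload field names (`text`, `frameworks`) and the bearer-token header are assumptions based on the description above, not the documented schema; consult the PromptShield API documentation for the actual contract. The request is only built here, not sent:

```python
import json
import urllib.request

API_URL = "https://api.example.com"  # taken from credentials in practice

def build_analyze_request(text: str, frameworks: list[str],
                          api_key: str, timeout_ms: int = 15000):
    """Build (but do not send) an HTTP POST to /v1/analyze.

    Field names and the Authorization header style are illustrative
    assumptions, not the documented schema.
    """
    body = json.dumps({"text": text, "frameworks": frameworks}).encode()
    req = urllib.request.Request(
        API_URL + "/v1/analyze",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    # urllib.request.urlopen takes a timeout in seconds
    return req, timeout_ms / 1000

req, timeout_s = build_analyze_request("sample text", ["HIPAA"], "sk-test")
print(req.full_url, timeout_s)
```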
Troubleshooting
- Text field not found or empty: If the specified text field path does not exist or is not a string, the node throws an error for that item. Ensure the correct field path is provided and contains valid text.
- API errors or timeouts: If the external API is unreachable or returns an error, behavior depends on the "On API Error" setting:
- Block output (fail closed) routes items to the Block output.
- Flag output routes items for manual review.
- Allow output lets items pass through.
- Throw error stops workflow execution.
- Invalid configuration: Incorrect field paths for clean text or metadata may cause unexpected results or missing data.
- Empty or whitespace-only text: The node treats empty strings as errors; ensure input text is non-empty.
- Action on High Risk conflicts: Overriding the default action may lead to unexpected routing; verify settings align with desired enforcement.
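The "On API Error" behaviors above amount to a routing decision when the analyze call fails. A sketch, with output names mirroring the four outputs (the function itself is hypothetical):

```python
def route_on_api_error(mode: str) -> str:
    """Decide where an item goes when the analyze call fails.

    'block' fails closed, 'allow' fails open, 'flag' defers to a
    human reviewer, and 'throw' aborts the workflow. Illustrative only.
    """
    if mode == "throw":
        raise RuntimeError("PromptShield API unreachable")
    routes = {"block": "Block", "flag": "Flag", "allow": "Allow"}
    return routes[mode]

print(route_on_api_error("block"))  # Block (fail closed)
print(route_on_api_error("allow"))  # Allow (fail open)
```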
Links and References
- HIPAA Compliance
- GDPR Overview
- PCI DSS Standards
- PromptShield API Documentation (replace with actual URL)