# n8n-nodes-s3cache
n8n-nodes-s3cache is an n8n community node that turns Amazon S3 (or any S3-compatible storage) into a flexible cache for your workflows. It adds two actions, Store and Check, that let you persist intermediate data, avoid expensive API calls, and fan workflows out across Hit/Miss outputs.
Unlike traditional cache nodes, this package keeps the node "transparent": Cache Store passes its input straight through after writing to S3, and Cache Check only emits cached content when it finds a fresh entry, otherwise forwarding the original item down the miss branch.
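To make that hit/miss contract concrete, here is a minimal, framework-free sketch of the routing idea (the real node implements this through n8n's `execute()` API; the `Item` type and `lookup` callback are illustrative stand-ins):

```ts
// Framework-free sketch of Cache Check's dual-output routing.
// `Item` and `lookup` are illustrative, not the node's real types.
type Item = { json: Record<string, unknown> };

async function cacheCheck(
  items: Item[],
  lookup: (id: string) => Promise<Item | null>, // resolves a fresh cache entry or null
): Promise<[Item[], Item[]]> {
  const hit: Item[] = [];
  const miss: Item[] = [];
  for (const item of items) {
    const cached = await lookup(String(item.json.cacheId));
    if (cached) hit.push(cached); // output 0 (Hit): the cached payload
    else miss.push(item);         // output 1 (Miss): the original item, untouched
  }
  return [hit, miss]; // one array per output, mirroring the node's two connectors
}
```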

The node ships as two discrete actions for a few reasons:
- A pair of simple nodes is easier to reason about in large workflows.
- Complex branching logic often needs miss traffic to loop back into different branches without disturbing other inputs.
- Combined cache implementations can mangle n8n's `itemIndex` when multiple inputs merge; splitting Store/Check keeps indices consistent.
[Installation](#installation) • [Operations](#operations) • [Credentials](#credentials) • [Usage](#usage) • [Resources](#resources) • [Version history](#version-history)
## Installation
- Open your self-hosted n8n instance.
- Navigate to Settings → Community Nodes → Install.
- Enter the package name `n8n-nodes-s3cache` and confirm.
- Restart n8n if prompted so the node bundle loads correctly.
ℹ️ Community nodes aren’t available on n8n Cloud. Install them on self-hosted or desktop builds only.
## Operations
| Operation | Description | Outputs |
|---|---|---|
| Cache Store | Writes either JSON or a chosen binary property to the configured S3 bucket/folder, tagging objects with TTL info | Hit (pass-through) |
| Cache Check | Looks up the object by Cache ID, validates TTL, and emits cached payloads (JSON/binary) or routes to miss output | Hit (cached data), Miss (original input) |
- Cache IDs map directly to object keys (optionally under a folder/prefix from the credentials).
- TTL is stored as metadata and compared against the object’s last-modified timestamp.
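The freshness test implied by that second bullet boils down to an age comparison; a minimal sketch (the `ttlSeconds` parameter name is an assumption, not the node's exact API):

```ts
// Sketch of the TTL check: an entry is fresh while its age, measured from the
// object's Last-Modified timestamp, is below the stored TTL.
function isFresh(lastModified: Date, ttlSeconds: number, now = new Date()): boolean {
  const ageMs = now.getTime() - lastModified.getTime();
  return ageMs >= 0 && ageMs < ttlSeconds * 1000;
}
```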
## Credentials
Create a single S3 Credentials entry (ships with this package) that holds:
| Field | Notes |
|---|---|
| Access Key ID | IAM user access key or equivalent |
| Secret Access Key | Matching secret key (stored as password field) |
| S3 Region | Region for your bucket (defaults to us-east-1) |
| S3 Bucket Name | Bucket where cache entries will live |
| Folder Name | Optional prefix/folder (omit leading/trailing slashes) |
| Force Path-Style | Enable for bucket names containing dots, custom S3-compatible endpoints (MinIO, DigitalOcean Spaces, Cloudflare R2), or local S3 emulators that require path-style URLs |
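To illustrate what Force Path-Style changes: the two S3 addressing schemes place the bucket in different parts of the URL. A hedged sketch (names here are illustrative, not the node's internals):

```ts
// Path-style puts the bucket in the URL path; virtual-hosted puts it in the
// hostname. Bucket names with dots break TLS wildcard certificates under
// virtual-hosted addressing, which is why path-style is needed there and on
// many S3-compatible endpoints.
function objectUrl(endpoint: string, bucket: string, key: string, forcePathStyle: boolean): string {
  return forcePathStyle
    ? `https://${endpoint}/${bucket}/${key}`   // e.g. https://minio.local:9000/cache/my-key
    : `https://${bucket}.${endpoint}/${key}`;  // e.g. https://cache.s3.us-east-1.amazonaws.com/my-key
}
```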
The node signs requests with AWS Signature Version 4 using these credentials, so any S3-compatible provider that supports SigV4 should work (AWS S3, MinIO, Cloudflare R2, etc.).
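For reference, SigV4 derives a scoped signing key from the secret through a chain of HMAC-SHA256 operations before signing each request; a minimal Node.js sketch of the key-derivation step (not this package's actual code):

```ts
import { createHmac } from "node:crypto";

const hmac = (key: string | Buffer, data: string): Buffer =>
  createHmac("sha256", key).update(data, "utf8").digest();

// Derive the per-day, per-region SigV4 signing key for the "s3" service.
function signingKey(secretAccessKey: string, dateStamp: string, region: string): Buffer {
  const kDate = hmac(`AWS4${secretAccessKey}`, dateStamp); // dateStamp: "YYYYMMDD"
  const kRegion = hmac(kDate, region);                     // e.g. "us-east-1"
  const kService = hmac(kRegion, "s3");
  return hmac(kService, "aws4_request");                   // final signing key
}
```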
### Recommended IAM Policy
Grant the IAM user only the permissions the cache needs. Replace <insert bucketname here> with your bucket name, then attach the following policy (adapted from AWS S3 policy docs):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SeeAllBucketsInUI",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "BucketMetadataAndSearch",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::<insert bucketname here>"
    },
    {
      "Sid": "ObjectReadWriteAndTags",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectAttributes",
        "s3:GetObjectTagging",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:RestoreObject"
      ],
      "Resource": "arn:aws:s3:::<insert bucketname here>/*"
    },
    {
      "Sid": "MultipartHelpers",
      "Effect": "Allow",
      "Action": [
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::<insert bucketname here>/*"
    }
  ]
}
```
## Usage
- Store: Drop the node after an expensive step, set Cache ID (e.g., from an item field), pick JSON or Binary payload, and optionally adjust TTL. The node writes to S3 and forwards the original data to the next node.
- Check: Place this before the expensive step. Use the same Cache ID. Wire output `0` (Hit) to the downstream logic that consumes cached data, and output `1` (Miss) back into the expensive branch to recompute and store.
- To cache binary data, select `Binary` and provide the property name (default `data`). Cache Check will restore the binary with the same property name and MIME type metadata.
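For context on the binary option, n8n carries binary payloads under a named property on each item; the shape Cache Check restores looks roughly like this (field values are illustrative):

```ts
// Illustrative n8n item with a binary payload under the default "data" property.
const item = {
  json: {},
  binary: {
    data: {                    // property name; configurable in the node (default "data")
      data: "<base64 bytes>",  // the payload itself, base64-encoded
      mimeType: "image/png",   // restored alongside the bytes on a cache hit
      fileName: "chart.png",
    },
  },
};
```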
Tips:
- Use deterministic IDs (hashes, concatenated parameters) so repeated inputs hit the same key; see the sketch after these tips.
- Combine with IF/Switch nodes to merge hit/miss branches if needed.
- TTL enforcement relies on the S3 `Last-Modified` header; if it is missing or the entry has aged past its TTL, the node automatically routes to the miss output.
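On the deterministic-ID tip above: hashing a canonicalized view of the request parameters yields a stable key. A hypothetical helper you could adapt for an n8n Code node:

```ts
import { createHash } from "node:crypto";

// Hypothetical helper: derive a stable Cache ID from request parameters so
// identical inputs always map to the same S3 object key.
function cacheId(params: Record<string, unknown>): string {
  const canonical = Object.keys(params)
    .sort()                                           // order-independent
    .map((k) => `${k}=${JSON.stringify(params[k])}`)
    .join("&");
  return createHash("sha256").update(canonical).digest("hex");
}

// cacheId({ report: "monthly", userId: 42 }) === cacheId({ userId: 42, report: "monthly" })
```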
## Testing
- Run `npm run build` to refresh the compiled `dist` bundle so tests can import the latest helpers.
- Execute `npm test` to run the lightweight Node test suite (no extra dependencies required). The suite covers path canonicalization, response buffering, and TTL freshness checks so regressions in those areas are caught early.
- (Optional) Import the sample workflow from `examples/cache-demo.workflow.json` into n8n. Set your S3 credentials on the Store/Check nodes, then run the workflow twice:
  - Run 1: You should see the miss branch populate and store the computed payload.
  - Run 2: The hit branch should fire immediately, proving the cache is being used.

Tip: the example workflow mirrors the recommended production pattern, with the Check node up front and the miss branch recomputing data and passing through the Store node before rejoining the main path.
## Resources
- [n8n community nodes documentation](https://docs.n8n.io/integrations/community-nodes/)
## Version history
| Version | Highlights |
|---|---|
| 0.2.0 | Single S3 node with Cache Check/Store actions, dual outputs, SigV4 signing, JSON/Binary payload support. |
| 0.1.x | Initial credential scaffolding and basic Cache Store prototype. |