n8n-nodes-aws-bedrock-assumerole
An n8n community node for AWS Bedrock with AssumeRole authentication support.

Features
- AssumeRole Authentication: Secure cross-account access using AWS STS AssumeRole
- Multiple Claude Models: Support for Claude 3.5 Sonnet, Claude 3 Opus, Sonnet, Haiku, and more
- Image Generation: Support for Amazon Nova Canvas and Titan Image Generator models
- AI Agent Compatible: Includes Chat Model sub-node for use with n8n AI Agent
- Credential Caching: Automatic caching of temporary credentials with expiration handling
- Error Handling: Comprehensive error handling and logging
- Batch Processing: Process multiple items in a single workflow execution
- Usage Tracking: Detailed usage information and response metadata
Available Nodes
This package includes two nodes:
- AWS Bedrock (AssumeRole) - Standalone node for direct AWS Bedrock API calls
- AWS Bedrock Chat Model - Chat Model sub-node for use with n8n AI Agent
Supported Models
This node uses AWS Bedrock inference profiles for optimal performance and availability:
Text/Chat Models (Claude)
- Claude 3.5 Sonnet v2 - `us.anthropic.claude-3-5-sonnet-20241022-v2:0` (default)
- Claude 3.5 Sonnet v1 - `us.anthropic.claude-3-5-sonnet-20240620-v1:0`
- Claude 3.5 Haiku - `us.anthropic.claude-3-5-haiku-20241022-v1:0`
- Claude 3.7 Sonnet - `us.anthropic.claude-3-7-sonnet-20250219-v1:0`
- Claude Sonnet 4 - `us.anthropic.claude-sonnet-4-20250514-v1:0`
- Claude Sonnet 4.5 - `us.anthropic.claude-sonnet-4-5-20250929-v1:0`
- Claude Haiku 4.5 - `us.anthropic.claude-haiku-4-5-20251001-v1:0`
- Claude Opus 4 - `us.anthropic.claude-opus-4-20250514-v1:0`
- Claude Opus 4.1 - `us.anthropic.claude-opus-4-1-20250805-v1:0`
Image Generation Models
- Amazon Nova Canvas v1 - `amazon.nova-canvas-v1:0` - State-of-the-art image generation
- Amazon Titan Image Generator v2 - `amazon.titan-image-generator-v2:0` - High-quality image generation with advanced controls
Installation
Option 1: Install from npm (Recommended)
# Install globally for n8n
npm install -g n8n-nodes-aws-bedrock-assumerole
# Or install locally in your n8n custom nodes directory
cd ~/.n8n/custom/
npm install n8n-nodes-aws-bedrock-assumerole
Option 2: Install from source
# Clone the repository
git clone https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole.git
cd n8n-nodes-aws-bedrock-assumerole
# Install dependencies
npm install
# Build the project
npm run build
# Link for local development
npm link
# In your n8n installation directory
npm link n8n-nodes-aws-bedrock-assumerole
Configuration
1. AWS Credentials Setup
You have two options for providing AWS credentials:
Option A: Environment Variables (Recommended)
Set these environment variables on your n8n server:
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
Option B: Credential Fields
Fill in the credential fields directly in the n8n UI (less secure).
2. Create AWS AssumeRole Credential
- Go to Credentials in your n8n instance
- Click Add Credential
- Search for "AWS Assume Role"
- Configure the following:
- Access Key ID: Leave empty to use environment variable (recommended)
- Secret Access Key: Leave empty to use environment variable (recommended)
- Role ARN to Assume: `arn:aws:iam::<account-id>:role/<role-name>`
- AWS Region: `us-east-1` (or your preferred region)
- Session Duration: `3600` (1 hour, adjust as needed)
3. AWS IAM Setup
Base Account Role/User Permissions
The base AWS credentials need the following permission:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<target-account-id>:role/<target-role-name>"
    }
  ]
}
Target Account Role
The role to be assumed needs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/anthropic.*"
      ]
    }
  ]
}
And the trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<base-account-id>:role/<base-role-name>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
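With this IAM setup in place, the node performs a standard STS AssumeRole call and caches the temporary credentials until shortly before they expire (the Credential Caching feature listed above). The following is a minimal TypeScript sketch of that flow using AWS SDK v3; helper names such as `assumeBedrockRole` and `credentialCache` are illustrative, not the node's actual internals:

```typescript
// Minimal sketch of the AssumeRole flow (illustrative names, not the node's real code).
import { STSClient, AssumeRoleCommand } from '@aws-sdk/client-sts';
import { BedrockRuntimeClient } from '@aws-sdk/client-bedrock-runtime';

interface CachedCredentials {
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken: string;
  expiration: Date;
}

// Temporary credentials are cached per role ARN and reused until they are about to expire
const credentialCache = new Map<string, CachedCredentials>();

async function assumeBedrockRole(
  roleArn: string,
  region: string,
  durationSeconds = 3600,
): Promise<CachedCredentials> {
  const cached = credentialCache.get(roleArn);
  if (cached && cached.expiration.getTime() - Date.now() > 60_000) {
    return cached; // still valid for at least a minute
  }

  // Base credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars
  // (or the credential fields) via the SDK's default provider chain
  const sts = new STSClient({ region });
  const { Credentials } = await sts.send(
    new AssumeRoleCommand({
      RoleArn: roleArn,
      RoleSessionName: 'n8n-bedrock-session',
      DurationSeconds: durationSeconds,
    }),
  );
  if (!Credentials?.AccessKeyId || !Credentials.SecretAccessKey || !Credentials.SessionToken) {
    throw new Error('AssumeRole did not return temporary credentials');
  }

  const creds: CachedCredentials = {
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken,
    expiration: Credentials.Expiration ?? new Date(Date.now() + durationSeconds * 1000),
  };
  credentialCache.set(roleArn, creds);
  return creds;
}

// The temporary credentials are then used to call Bedrock in the target account
export async function createBedrockClient(roleArn: string, region: string) {
  const credentials = await assumeBedrockRole(roleArn, region);
  return new BedrockRuntimeClient({ region, credentials });
}
```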
4. Application Inference Profiles
This node supports AWS Bedrock Application Inference Profiles, allowing you to route traffic through specific profiles for cost and usage tracking.
4.1. Credential configuration
In the AWS AssumeRole credential, you can optionally configure:
- Application Inference Profile Account ID: The AWS account ID where your application inference profiles live.
- Application Inference Profiles JSON: A JSON object mapping Bedrock model IDs to application inference profile IDs.
Example JSON:
{
  "us.anthropic.claude-3-5-sonnet-20240620-v1:0": "hs4uvikaus5b",
  "us.anthropic.claude-3-5-sonnet-20241022-v2:0": "0xumpou8xusv",
  "us.anthropic.claude-3-5-haiku-20241022-v1:0": "abc123haiku"
}
- The key is the Bedrock model ID (for example, `us.anthropic.claude-3-5-sonnet-20241022-v2:0`).
- The value is the application inference profile ID (for example, `0xumpou8xusv`), not the full ARN.
The node then builds the final ARN internally using:
`arn:aws:bedrock:{region}:{account-id}:application-inference-profile/{profile-id}`
If the JSON is invalid, the node will fail with a clear error message pointing to the Application Inference Profiles JSON field.
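For illustration, the mapping above resolves to an ARN roughly as in this sketch (assuming a parsed `profileMap` object; not the node's exact code):

```typescript
// Sketch: build the application inference profile ARN for a selected model ID.
// `profileMap` stands for the parsed "Application Inference Profiles JSON" from the credential.
function buildInferenceProfileArn(
  modelId: string,
  profileMap: Record<string, string>,
  region: string,
  accountId: string,
): string | undefined {
  const profileId = profileMap[modelId];
  if (!profileId) return undefined; // no mapping for this model
  return `arn:aws:bedrock:${region}:${accountId}:application-inference-profile/${profileId}`;
}

// Example:
// buildInferenceProfileArn(
//   'us.anthropic.claude-3-5-sonnet-20241022-v2:0',
//   { 'us.anthropic.claude-3-5-sonnet-20241022-v2:0': '0xumpou8xusv' },
//   'us-east-1',
//   '123456789012',
// )
// => 'arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/0xumpou8xusv'
```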
4.2. Model dropdown behaviour
The Model ID dropdown in the node behaves as follows:
- If Application Inference Profiles JSON is empty or not set:
- The dropdown shows all supported Claude models (the default static list).
- If Application Inference Profiles JSON is present and valid:
- The dropdown shows only the models present in that JSON.
- Known model IDs are displayed with friendly names (for example, "Claude 3.5 Sonnet v2"); unknown IDs are shown as the raw model ID.
This ensures that, when you configure specific models and profiles in the credential, users of the node can only select those models.
4.3. Backwards compatibility
If no application inference profile mapping is found for a selected model ID, the node will:
- Try the legacy single Application Inference Profile ID field (if configured).
- Otherwise, fall back to using the raw model ID directly (original behaviour).
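Put together, the selection behaves roughly like this sketch (illustrative names, reusing the ARN pattern from section 4.1; not the node's literal implementation):

```typescript
// Sketch of the fallback order: JSON mapping -> legacy profile ID -> raw model ID.
function resolveInvocationTarget(
  selectedModelId: string,
  profileMap: Record<string, string> | undefined,
  legacyProfileId: string | undefined,
  region: string,
  accountId: string | undefined,
): string {
  const arnFor = (profileId: string) =>
    `arn:aws:bedrock:${region}:${accountId}:application-inference-profile/${profileId}`;

  // 1. Per-model mapping from the Application Inference Profiles JSON
  if (profileMap && accountId && profileMap[selectedModelId]) {
    return arnFor(profileMap[selectedModelId]);
  }
  // 2. Legacy single Application Inference Profile ID field
  if (legacyProfileId && accountId) {
    return arnFor(legacyProfileId);
  }
  // 3. Original behaviour: use the raw model ID directly
  return selectedModelId;
}
```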
Usage
Option 1: Using with AI Agent (Recommended for Conversational AI)
The AWS Bedrock Chat Model node is designed to work with n8n's AI Agent node, enabling conversational AI workflows with tool calling, memory, and more.
Setup Steps:
- Add an AI Agent node to your workflow
- Connect the AWS Bedrock Chat Model node to the "Chat Model" input of the AI Agent
- Select your credential in the Chat Model node (the same AWS AssumeRole credential)
- Choose your model (e.g., Claude 3.5 Sonnet v2)
- Add tools (optional): Connect tool nodes like Vector Store, Calculator, HTTP Request, etc.
- Add memory (optional): Connect a memory node for conversation history
Benefits of Using with AI Agent:
- Tool Calling: The AI can use tools to fetch data, perform calculations, etc.
- Conversation Memory: Maintain context across multiple interactions
- Structured Output: Parse responses into structured data
- Multi-step Reasoning: The agent can plan and execute complex tasks
Option 2: Direct API Calls (Standalone Node)
For simple, direct API calls without AI Agent features, use the AWS Bedrock (AssumeRole) node.
Basic Workflow Example
- Add the AWS Bedrock (AssumeRole) node to your workflow
- Select your credential (created in step 2 above)
- Configure the node:
- Model ID: Choose from the dropdown (e.g., Claude 3.5 Sonnet)
- Prompt: Enter your prompt or use an expression to get it from previous nodes
- Max Tokens: Set the maximum response length (default: 1000)
- Temperature: Control randomness (0.0 = deterministic, 1.0 = very random)
Example Prompt
Analyze the following customer feedback and provide:
1. Sentiment (positive/negative/neutral)
2. Key themes
3. Suggested actions
Customer feedback: "The service was okay but the wait time was too long."
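For reference, a direct Bedrock invocation with the same parameters looks roughly like the sketch below. It uses the Anthropic Messages format and mirrors the Model ID, Prompt, Max Tokens, and Temperature fields, but it is not the node's literal implementation:

```typescript
// Sketch: invoking a Claude model on Bedrock with the parameters exposed by the node.
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

async function invokeClaude(
  client: BedrockRuntimeClient,
  modelId: string, // model ID or application inference profile ARN
  prompt: string,
  maxTokens = 1000,
  temperature = 0.7,
) {
  const body = {
    anthropic_version: 'bedrock-2023-05-31',
    max_tokens: maxTokens,
    temperature,
    messages: [{ role: 'user', content: prompt }],
  };

  const response = await client.send(
    new InvokeModelCommand({
      modelId,
      contentType: 'application/json',
      accept: 'application/json',
      body: JSON.stringify(body),
    }),
  );

  // response.body is a Uint8Array containing the JSON payload
  const parsed = JSON.parse(new TextDecoder().decode(response.body));
  return { content: parsed.content?.[0]?.text as string, usage: parsed.usage };
}
```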
Image Analysis Workflow (Text and Image input)
Note: Image analysis is currently only available with the standalone AWS Bedrock (AssumeRole) node, not with the Chat Model sub-node.
To analyze an image together with a text prompt using Claude models that support vision capabilities:
- Add a Form Trigger (or any node that outputs binary data) with a file field, for example labeled `image_to_analize`.
- Connect that node to AWS Bedrock (AssumeRole).
- Configure the Bedrock node:
  - Model ID: Select any Claude model that supports image input (for example, Claude Sonnet 4).
  - Input Type: Set to `Text and Image`.
  - Image Binary Property: Set to the name of the binary field that contains the uploaded image. For a Form Trigger file field labeled `image_to_analize`, the binary key is also `image_to_analize`.
  - Prompt: Provide the instruction you want to send together with the image, for example: `Describe what is written in this image.`
- Execute the workflow by submitting the form with an image file.
You can import the ready-to-use example workflow from examples/image-analysis-workflow.json.
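Under the hood, such a request combines a base64-encoded image block and a text block in the same Anthropic message. A hedged sketch (assumed helper names; the node converts the binary field to base64 for you):

```typescript
// Sketch: text + image request to a vision-capable Claude model on Bedrock.
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

async function describeImage(
  client: BedrockRuntimeClient,
  modelId: string,
  imageBuffer: Buffer, // binary data from the form's file field
  prompt: string,
  mediaType: 'image/png' | 'image/jpeg' = 'image/png',
) {
  const body = {
    anthropic_version: 'bedrock-2023-05-31',
    max_tokens: 1000,
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'image',
            source: { type: 'base64', media_type: mediaType, data: imageBuffer.toString('base64') },
          },
          { type: 'text', text: prompt }, // e.g. "Describe what is written in this image."
        ],
      },
    ],
  };

  const response = await client.send(
    new InvokeModelCommand({ modelId, contentType: 'application/json', body: JSON.stringify(body) }),
  );
  return JSON.parse(new TextDecoder().decode(response.body));
}
```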
Image Generation Workflow (Nova Canvas / Titan Image)
Generate images from text prompts using Amazon Nova Canvas or Titan Image Generator models:
- Add the AWS Bedrock (AssumeRole) node to your workflow.
- Configure the Bedrock node:
  - Model ID: Select `Amazon Nova Canvas v1` or `Amazon Titan Image Generator v2`.
  - Prompt: Describe the image you want to generate (e.g., "A futuristic city at sunset with flying cars").
  - Negative Prompt (optional): Describe what NOT to include (e.g., "blurry, low quality, text").
  - Image Width/Height: Choose the dimensions (512, 768, 1024, or 1280 pixels).
  - Image Quality: Select `standard` or `premium`.
  - Number of Images: Generate 1-4 images at once.
  - Seed (optional): Set a specific seed for reproducible results (0 = random).
  - CFG Scale (Titan Image only): Controls how closely the image follows the prompt (1-15).
The node outputs binary image data that can be:
- Saved to disk using the Write Binary File node
- Uploaded to cloud storage (S3, Google Drive, etc.)
- Sent via email or messaging platforms
- Further processed in your workflow
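The request these models accept maps directly onto the parameters listed above. A sketch of a text-to-image invocation follows; exact field support per model is an assumption to verify against the AWS Bedrock documentation, and this is not the node's literal code:

```typescript
// Sketch: text-to-image request for Amazon Nova Canvas / Titan Image Generator.
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

async function generateImages(client: BedrockRuntimeClient, modelId: string, prompt: string) {
  const body = {
    taskType: 'TEXT_IMAGE',
    textToImageParams: {
      text: prompt,                              // Prompt
      negativeText: 'blurry, low quality, text', // Negative Prompt (optional)
    },
    imageGenerationConfig: {
      numberOfImages: 1,    // Number of Images (1-4)
      width: 1024,          // Image Width
      height: 1024,         // Image Height
      quality: 'standard',  // Image Quality: standard | premium
      cfgScale: 8,          // CFG Scale (Titan Image)
      seed: 0,              // Seed (0 = random in the node)
    },
  };

  const response = await client.send(
    new InvokeModelCommand({ modelId, contentType: 'application/json', body: JSON.stringify(body) }),
  );
  const parsed = JSON.parse(new TextDecoder().decode(response.body));
  // Both model families return base64-encoded PNGs in an `images` array
  return (parsed.images as string[]).map((b64) => Buffer.from(b64, 'base64'));
}
```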
Image Generation Response Format
For image generation models, the node returns:
{
  "modelId": "arn:aws:bedrock:us-east-1:123456789:application-inference-profile/abc123",
  "configuredModelId": "amazon.nova-canvas-v1:0",
  "prompt": "A futuristic city at sunset",
  "imageIndex": 0,
  "totalImages": 1,
  "imageWidth": 1024,
  "imageHeight": 1024,
  "imageQuality": "standard",
  "timestamp": "2026-01-08T10:00:00.000Z"
}
The generated image is available in the `binary.data` property as a PNG file.
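If you are extending the node or debugging its output, the decoded images are typically turned into n8n binary data with the standard `prepareBinaryData` helper, roughly like this sketch (assumed function name; the real implementation may differ):

```typescript
// Sketch: turning decoded image buffers into n8n binary output items.
import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

async function toBinaryItems(
  context: IExecuteFunctions,
  imageBuffers: Buffer[],
): Promise<INodeExecutionData[]> {
  const items: INodeExecutionData[] = [];
  for (const [index, buffer] of imageBuffers.entries()) {
    items.push({
      json: { imageIndex: index, totalImages: imageBuffers.length },
      binary: {
        // exposed downstream as binary.data
        data: await context.helpers.prepareBinaryData(buffer, `image-${index}.png`, 'image/png'),
      },
    });
  }
  return items;
}
```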
Image Editing Workflow (Inpainting, Outpainting, Variations, Background Removal)
Both Nova Canvas and Titan Image Generator support advanced image editing capabilities:
Image Task Types
| Task Type | Description | Required Fields |
|---|---|---|
| Text to Image | Generate a new image from a text prompt | Prompt |
| Inpainting | Modify areas inside a masked region | Source Image, Mask (prompt or image), Prompt |
| Outpainting | Extend or modify areas outside a masked region | Source Image, Mask (prompt or image), Prompt |
| Image Variation | Create variations of an existing image | Source Image, Prompt (optional) |
| Background Removal | Remove the background (outputs transparent PNG) | Source Image |
Inpainting Example
Replace part of an image based on a text description of the area to modify:
- Add a node that provides an image (e.g., Read Binary File, HTTP Request, or Form Trigger).
- Add the AWS Bedrock (AssumeRole) node.
- Configure:
  - Model ID: Select `Amazon Nova Canvas v1` or `Amazon Titan Image Generator v2`
  - Image Task Type: Select `Inpainting (Edit Inside Mask)`
  - Source Image Binary Property: `data` (or the name of your binary property)
  - Mask Prompt: Describe the area to modify (e.g., "the sky", "the person's shirt")
  - Prompt: Describe what to put in that area (e.g., "a beautiful sunset sky")
  - Negative Prompt (optional): What to avoid
Outpainting Example
Extend an image beyond its original boundaries:
- Provide a source image.
- Configure:
  - Image Task Type: Select `Outpainting (Edit Outside Mask)`
  - Mask Prompt: Describe the area to preserve (e.g., "the main subject")
  - Prompt: Describe what to generate in the extended area
  - Outpainting Mode: `Default` (allows blending) or `Precise` (strict boundary)
Image Variation Example
Create variations of an existing image:
- Provide a source image.
- Configure:
  - Image Task Type: Select `Image Variation`
  - Similarity Strength: 0.2 (more variation) to 1.0 (more similar to original)
  - Prompt (optional): Guide the variation direction
Background Removal Example
Remove the background from an image (outputs transparent PNG):
- Provide a source image.
- Configure:
  - Image Task Type: Select `Background Removal`
  - No prompt needed - the model automatically detects and removes the background
Mask Options
For Inpainting and Outpainting, you can specify the mask in two ways:
- Mask Prompt (recommended): A text description of the area to mask (e.g., "the sky", "the person's face")
- Mask Image: A binary black/white image where:
- Black pixels = area to modify
- White pixels = area to preserve
If both are provided, the Mask Image takes precedence.
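For reference, an inpainting request built from these options looks roughly like the sketch below (Titan Image / Nova Canvas request schema; exact field support per model is an assumption to verify in the AWS documentation):

```typescript
// Sketch: inpainting request body using a mask prompt.
const inpaintingBody = {
  taskType: 'INPAINTING',
  inPaintingParams: {
    image: '<base64-encoded source image>',
    maskPrompt: 'the sky',          // or maskImage: '<base64 black/white mask>' (takes precedence)
    text: 'a beautiful sunset sky', // Prompt
    negativeText: 'blurry, low quality',
  },
  imageGenerationConfig: {
    numberOfImages: 1,
    quality: 'standard',
  },
};

// The other task types follow the same pattern with different parameter blocks:
//   taskType: 'OUTPAINTING'        -> outPaintingParams: { image, maskPrompt | maskImage, text, outPaintingMode: 'DEFAULT' | 'PRECISE' }
//   taskType: 'IMAGE_VARIATION'    -> imageVariationParams: { images: ['<base64>'], text, similarityStrength }
//   taskType: 'BACKGROUND_REMOVAL' -> backgroundRemovalParams: { image: '<base64>' }
```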
AWS Documentation
For more details on image editing capabilities, see the AWS Bedrock documentation for Amazon Nova Canvas and Amazon Titan Image Generator.
Application Inference Profiles for Image Models
Configure image generation models in your credentials JSON just like Claude models:
{
  "us.anthropic.claude-3-5-sonnet-20241022-v2:0": "0xumpou8xusv",
  "amazon.nova-canvas-v1:0": "b3tcu2bezmae",
  "amazon.titan-image-generator-v2:0": "12fut6sh2vgi"
}
Response Format (Standalone Node - Text Models)
The AWS Bedrock (AssumeRole) standalone node returns a JSON object with:
{
  "modelId": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
  "prompt": "Your original prompt",
  "response": {
    "content": [
      {
        "text": "The AI response text",
        "type": "text"
      }
    ],
    "usage": {
      "input_tokens": 25,
      "output_tokens": 150
    }
  },
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  },
  "content": "The AI response text",
  "timestamp": "2024-11-12T17:46:00.000Z"
}
Comparison: Chat Model vs Standalone Node
| Feature | AWS Bedrock Chat Model | AWS Bedrock (AssumeRole) |
|---|---|---|
| Use Case | AI Agent workflows | Direct API calls |
| Tool Calling | Yes (via AI Agent) | No |
| Conversation Memory | Yes (via AI Agent) | No |
| Image Analysis | Not yet supported | Yes |
| Image Generation | No | Yes (Nova Canvas, Titan Image) |
| Batch Processing | No | Yes |
| Structured Output | Yes (via AI Agent) | Manual parsing |
| Best For | Conversational AI, agents with tools | Simple prompts, image analysis/generation, batch jobs |
Development
Prerequisites
- Node.js 18+
- npm or yarn
- TypeScript
Setup
# Clone the repository
git clone https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole.git
cd n8n-nodes-aws-bedrock-assumerole
# Install dependencies
npm install
# Build the project
npm run build
# Run linting
npm run lint
# Run tests
npm test
Project Structure
n8n-nodes-aws-bedrock-assumerole/
├── credentials/
│   └── AwsAssumeRole.credentials.ts   # AWS AssumeRole credential definition
├── nodes/
│   └── AwsBedrockAssumeRole.node.ts   # Main node implementation
├── icons/
│   ├── aws.svg                        # AWS credential icon
│   └── bedrock.svg                    # Bedrock node icon
├── dist/                              # Compiled JavaScript (generated)
├── package.json                       # Package configuration
├── tsconfig.json                      # TypeScript configuration
├── .eslintrc.js                       # ESLint configuration
├── .prettierrc                        # Prettier configuration
└── README.md                          # This file
Troubleshooting
Common Issues
1. "Missing AWS base credentials"
- Ensure AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set as environment variables
- Or fill in the credential fields in the n8n UI
2. "AssumeRole failed"
- Verify the Role ARN is correct
- Check that the base credentials have `sts:AssumeRole` permission
- Ensure the target role trusts the base account/role
3. "Access Denied" when invoking Bedrock
- Verify the assumed role has `bedrock:InvokeModel` permission
- Check that the model ID is available in your AWS region
- Ensure Bedrock is enabled in your AWS account
4. Node not appearing in n8n
- Restart n8n after installation
- Check that the package is installed in the correct location
- Verify the package.json n8n configuration is correct
Debug Logging
The node provides detailed console logging. Check your n8n logs for:
- `[AWS Bedrock] Resolved credentials`
- `[AWS Bedrock] AssumeRole successful`
- `[AWS Bedrock] Model response received`
Developers
This project is developed and maintained by the Business Automation team at Cabify.
Development
Quick Start with Make
This project includes a Makefile for easy development and deployment:
# Show all available commands
make help
# Development
make install # Install dependencies
make build # Build the project
make dev # Build and start Docker for local testing
make clean # Clean build artifacts
# Docker
make docker-up # Start Docker containers
make docker-down # Stop Docker containers
make docker-logs # Show Docker logs
# Deployment
make publish # Publish to npm (interactive)
make sync # Sync repositories (GitHub + GitLab)
make release # Full release: build + publish + sync
Publishing a New Version
The make publish command provides an interactive workflow that handles everything:
Step 1: Version Bump
Choose the type of version bump:
- patch (1.0.2 → 1.0.3) - Bug fixes
- minor (1.0.2 → 1.1.0) - New features (backwards compatible)
- major (1.0.2 → 2.0.0) - Breaking changes
- custom - Specify version manually
Step 2: Changelog Generation
Select the types of changes included:
- Added - New features
- Changed - Changes in existing functionality
- Deprecated - Soon-to-be removed features
- Removed - Removed features
- Fixed - Bug fixes
- Security - Security fixes
Step 3: Changelog Entries
Enter detailed changes for each selected section. The script will automatically:
- Update `CHANGELOG.md` with proper formatting
- Follow Keep a Changelog format
- Add the current date
- Insert the new entry at the top
Step 4: Build & Publish
The script will:
- Build the project (`npm run build`)
- Publish to npm with public access
- Commit changes to `package.json`, `package-lock.json`, and `CHANGELOG.md`
- Create a git tag (e.g., `v1.0.2`)
Example Workflow
# Start the publish process
make publish
# Follow the prompts:
# 1. Select version bump: 1 (patch)
# 2. Select change types: 5 (Fixed)
# 3. Enter changes:
# - Fixed custom SVG icons not displaying correctly
# - Removed unused code and imports
# 4. Confirm publish: y
# After publishing, sync repositories
make sync
# Or do everything in one command:
make release
Repository Sync
The project supports syncing to multiple repositories:
- GitHub: https://github.com/cabify/n8n-nodes-aws-bedrock-assumerole
- GitLab: https://gitlab.otters.xyz/platform/business-automation/n8n-nodes-aws-bedrock-assumerole
The make sync command will:
- Push code to both GitHub and GitLab
- Push all tags to both repositories
- Verify you're on the main branch
- Show current status before pushing
Manual Development
If you prefer not to use Make:
# Install dependencies
npm install
# Build
npm run build
# Start Docker for testing
docker-compose up -d
# View logs
docker-compose logs -f n8n
# Publish manually
npm version patch # or minor, major
npm run build
npm publish --access public
git push && git push --tags
Project Structure
n8n-bedrock-node/
├── credentials/
│   ├── AwsAssumeRole.credentials.ts   # Credential definition
│   └── aws.svg                        # AWS icon
├── nodes/
│   ├── AwsBedrockAssumeRole.node.ts   # Main node implementation
│   └── bedrock.svg                    # Bedrock icon
├── icons/
│   ├── aws.svg                        # Source AWS icon
│   └── bedrock.svg                    # Source Bedrock icon
├── dist/                              # Compiled output
├── docker-compose.yml                 # Local development setup
├── Makefile                           # Development commands
├── publish-npm.sh                     # npm publish script
├── sync-repos.sh                      # Repository sync script
└── package.json                       # Package configuration
Scripts
- `npm run build` - Compile TypeScript and copy icons
- `npm run copy-icons` - Copy icons to dist directories
- `npm run lint` - Run ESLint (requires setup)
- `npm test` - Run tests (if available)
Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- Email: business-automation@cabify.com
- Issues: GitHub Issues
- n8n Documentation: n8n.io/docs
Acknowledgments
- Built for the n8n workflow automation platform
- Uses AWS SDK v3 for optimal performance
- Inspired by the need for secure cross-account AWS Bedrock access