Overview
This node integrates with the Claude AI models provided via AWS Bedrock to generate AI-driven text completions and responses. It is designed to stream responses progressively while also providing a final complete output. The node supports advanced features such as tool usage integration, message state control, and progress messaging.
Common scenarios where this node is beneficial include:
- Building conversational AI assistants that require streaming partial responses for better user experience.
- Automating content generation or summarization tasks using Claude's language models.
- Integrating AI-powered tools within workflows that need dynamic tool invocation or multi-step interactions.
Practical example:
- A chatbot workflow where user messages are sent to Claude, which streams back partial answers while processing, then outputs the full response once complete.
- An automation that sends prompts to Claude to generate marketing copy, controlling randomness and token limits for tailored results.
Properties
| Name | Meaning |
|---|---|
| Model | The Claude model to use. Options: "Claude 3.5 Sonnet 2", "Claude 3.5 Haiku" |
| System Prompt | System prompt to guide Claude's behavior (e.g., "You are Claude, a helpful AI assistant.") |
| Message | The message to send to Claude in JSON format, representing the conversation input (see the example below the table) |
| Temperature | Controls randomness in the response; value between 0 (deterministic) and 1 (more random) |
| Max Tokens | Maximum number of tokens to generate in the response |
| Use Gravity MCP provider | Whether to send output to the MCP provider (required for MCP Client integration) |
| Enable Any Tool | Allow Claude to use any tool without restrictions |
| Tool Choice | Name a specific tool for Claude to use; leave empty for automatic tool selection |
| Advanced Options | Collection of additional options: |
| - Message State | Override the message state for all chunks. Options: Default (auto), Thinking, Responding, Active, Waiting, Complete |
| - Progress Message | Optional progress message to include with the state |
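The Message property carries the conversation input as JSON. The exact shape the node expects is not spelled out here; the sketch below assumes a single Anthropic-style message with a role and content, with purely illustrative values:

```json
{
  "role": "user",
  "content": "Draft a short product announcement for our new reporting feature."
}
```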
Output
The node produces three outputs:
Stream: Streams partial response chunks from Claude as they arrive. Each chunk includes formatted data about the current state of the response, optionally including progress messages and message states.
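The field names below are illustrative rather than confirmed by the source; a rough sketch of a streamed chunk, assuming it carries the partial text plus the optional message state and progress message described above:

```json
{
  "text": "Here is the first part of the answer, still being generated",
  "state": "Responding",
  "progressMessage": "Generating response..."
}
```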
Result: Contains the final complete response from Claude after all chunks have been received and processed. This includes the full text generated, metadata such as token count, and the original messages sent.
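Again with hypothetical field names, a sketch of what the final Result item might contain, assuming it bundles the full generated text, token metadata, and the original messages as described:

```json
{
  "text": "Here is the complete generated answer.",
  "tokenCount": 256,
  "messages": [
    {
      "role": "user",
      "content": "Draft a short product announcement for our new reporting feature."
    }
  ]
}
```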
MCP: Outputs formatted data suitable for the MCP provider integration, enabling further processing or client-side consumption.
The JSON output fields typically contain structured objects representing the AI-generated text, message states, and metadata about the interaction.
The node does not output binary data.
Dependencies
- Requires AWS access keys configured as an n8n credential to authenticate requests to AWS Bedrock.
- Uses the `@gravityai-dev/gravity-server` package internally for chat state constants and formatting utilities.
- Relies on the `AWS_REGION` environment variable, or defaults to `us-east-1`, for AWS region configuration.
- Integration with the MCP provider requires enabling the corresponding option and proper MCP client setup.
Troubleshooting
- Authentication errors: Ensure AWS credentials are correctly set up in n8n with valid access key ID and secret access key.
- Invalid message format: The `Message` property expects valid JSON representing the conversation message. Malformed JSON will cause errors.
- Model not found or unsupported: Selecting a model outside the provided options may result in request failures.
- Streaming issues: Network interruptions can disrupt streaming responses; retrying the node execution may help.
- Tool usage restrictions: If tools are enabled but not properly configured or allowed, Claude may fail to invoke them. Verify tool settings and permissions.
- Missing AWS region: If the `AWS_REGION` environment variable is not set, the node defaults to `us-east-1`. Confirm this matches your AWS setup.
Error responses are formatted and returned as JSON objects containing error details to assist debugging.
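The exact error schema is not documented here; a hedged sketch of the kind of object you might see, with hypothetical field names:

```json
{
  "error": true,
  "message": "Invalid JSON in Message property",
  "details": "Unexpected token } in JSON at position 42"
}
```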
Links and References
- AWS Bedrock Documentation
- Anthropic Claude Models Overview
- n8n Documentation on Custom Nodes
- Gravity AI Developer Resources (for related tooling and SDKs)