Latest Version: 1.1.3
n8n-nodes-openai-batch
An n8n community node for executing batch requests to OpenAI's Batch API. This node allows you to process multiple requests at once with 50% cost savings compared to synchronous API calls.
Features
- Chat Completions - Batch process multiple chat completion requests
- Embeddings - Batch generate embeddings for multiple texts
- Automatic polling - Waits for batch completion and returns results
- Cost effective - OpenAI Batch API offers 50% discount vs synchronous requests
- Auto-splitting - Large request sets are automatically split into multiple batches (configurable max size)
- Fallback mode - Option to cancel batches after a deadline and run remaining requests synchronously
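The auto-splitting behavior can be sketched as follows. This is an illustrative example, not the node's actual internals: it builds Batch API request lines in OpenAI's documented JSONL shape (`custom_id`, `method`, `url`, `body`) and chunks them by a maximum batch size. Function names and the `prompt` field are assumptions for the sketch.

```javascript
// Build one Batch API request line per input item (the JSONL format used by
// OpenAI's /v1/batches endpoint), then split into batches of maxBatchSize.
// Function names here are illustrative; the node's real internals may differ.
function buildBatchLines(items, model) {
  return items.map((item, i) => ({
    custom_id: `request-${i}`, // used later to match results back to inputs
    method: "POST",
    url: "/v1/chat/completions",
    body: {
      model,
      messages: [{ role: "user", content: item.prompt }],
    },
  }));
}

function splitIntoBatches(lines, maxBatchSize) {
  const batches = [];
  for (let i = 0; i < lines.length; i += maxBatchSize) {
    batches.push(lines.slice(i, i + maxBatchSize));
  }
  return batches;
}

// 250 requests with the default maximum of 100 become three batches: 100/100/50.
const lines = buildBatchLines(
  Array.from({ length: 250 }, (_, i) => ({ prompt: `Question ${i}` })),
  "gpt-4o-mini"
);
const batches = splitIntoBatches(lines, 100);
```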
Installation
In n8n
- Go to Settings > Community Nodes
- Select Install
- Enter n8n-nodes-openai-batch
- Click Install
Manual Installation
# In your n8n installation directory
npm install n8n-nodes-openai-batch
Setup
- Create OpenAI API credentials in n8n:
  - Go to Credentials > New
  - Search for OpenAI API
  - Enter your API key from OpenAI Platform
Usage
Chat Completions
- Add the OpenAI Batch node to your workflow
- Select Chat Completion operation
- Choose your model (e.g., gpt-4o-mini)
- Configure messages using expressions to reference input data:
  - Role: user
  - Content: {{ $json.prompt }} (or your input field)
Embeddings
- Add the OpenAI Batch node to your workflow
- Select Embeddings operation
- Choose embedding model (e.g., text-embedding-3-small)
- Set input text using the expression {{ $json.text }}
Options
| Option | Description | Default |
|---|---|---|
| Max Tokens | Maximum tokens to generate (chat only) | 1000 |
| Temperature | Controls randomness (0-2) | 1 |
| Max Batch Size | Maximum requests per batch (larger inputs split automatically) | 100 |
| Polling Interval | Seconds between status checks | 30 |
| Timeout | Maximum wait time in minutes | 1440 (24h) |
| Fallback Deadline | Minutes before canceling batches and running remaining requests synchronously (0 = disabled) | 0 |
| Metadata | Custom JSON metadata for the batch | {} |
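How Polling Interval, Timeout, and Fallback Deadline interact can be pictured with the following sketch. It is not the node's source: `waitForBatch` and its option names are hypothetical, and the status callback is injected so the loop can run without calling the OpenAI API.

```javascript
// Illustrative polling loop: check status every pollingIntervalMs, return
// "fallback" once fallbackDeadlineMs passes (if enabled), and fail after
// timeoutMs. All names are hypothetical; getStatus is an injected async
// function so no network access is needed.
async function waitForBatch(getStatus, opts) {
  const { pollingIntervalMs, timeoutMs, fallbackDeadlineMs = 0 } = opts;
  const start = Date.now();
  for (;;) {
    const status = await getStatus();
    if (status === "completed") return "completed";
    if (fallbackDeadlineMs > 0 && Date.now() - start >= fallbackDeadlineMs) {
      return "fallback"; // caller cancels the batch, finishes synchronously
    }
    if (Date.now() - start >= timeoutMs) {
      throw new Error("Batch did not complete within the timeout");
    }
    await new Promise((resolve) => setTimeout(resolve, pollingIntervalMs));
  }
}
```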
Output
Each input item receives a corresponding output item containing:
- response - The generated text (chat) or embedding array
- fullResponse - Complete API response
- batchId - OpenAI batch ID for reference
- customId - Request identifier
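OpenAI's batch output file is JSONL, one result per line, matched back to inputs via the custom ID. A sketch of that mapping follows; the outer shape (`custom_id`, `response.status_code`, `response.body`) is the Batch API's documented output format, while the function name and merge logic are illustrative.

```javascript
// Parse a Batch API output file (JSONL) and index result bodies by custom_id
// so each input item can be paired with its response. Only the outer line
// shape follows OpenAI's documented format; the rest is illustrative.
function indexResults(outputJsonl) {
  const byId = {};
  for (const line of outputJsonl.trim().split("\n")) {
    const result = JSON.parse(line);
    byId[result.custom_id] = result.response.body;
  }
  return byId;
}

const sampleOutput = JSON.stringify({
  custom_id: "request-0",
  response: {
    status_code: 200,
    body: { choices: [{ message: { content: "Hello!" } }] },
  },
});

const results = indexResults(sampleOutput);
// results["request-0"].choices[0].message.content === "Hello!"
```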
Example Workflow
[Read CSV] → [OpenAI Batch] → [Write Results]
Input items with a prompt field will be batched and processed together.
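For instance, rows read upstream might arrive as items like these (the data and field names are illustrative), with each item's prompt field feeding one request in the batch:

```javascript
// Hypothetical items as they might arrive from a Read CSV node; the
// expression {{ $json.prompt }} resolves to items[i].json.prompt, so two
// items become two requests in a single batch.
const items = [
  { json: { prompt: "Summarize quarterly earnings." } },
  { json: { prompt: "Translate 'hello' to French." } },
];
const prompts = items.map((item) => item.json.prompt);
```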
Development
# Install dependencies
npm install
# Run tests (requires .env with OPENAI_API_KEY)
npm test
# Build
npm run build
License
MIT