
n8n community nodes: Streaming HTTP Request

Package Information

Released: 11/12/2025
Downloads: 34 weekly / 120 monthly
Latest Version: 0.2.6
Author: Leandro Menezes

Documentation

n8n-nodes-streaming-http-request

A custom n8n community node that enables HTTP requests with real-time streaming support. Perfect for proxying streaming APIs, handling Server-Sent Events (SSE), or building real-time data pipelines.

🎯 Purpose

This node allows you to make HTTP requests and stream responses in real-time to webhook consumers. It's designed for scenarios where you need to:

  • Proxy streaming APIs (OpenAI, Anthropic, etc.) through n8n workflows
  • Forward AI assistant responses in real-time to end users
  • Chain multiple streaming endpoints while preserving original metadata
  • Handle Server-Sent Events (SSE) and newline-delimited JSON (JSONL) streams
  • Build real-time data pipelines with incremental processing

✨ Features

Streaming Modes

  1. Proxy Mode (Proxy Streaming Chunks enabled)

    • Transparently forwards JSONL streaming envelopes from remote endpoints
    • Preserves original node metadata (nodeId, nodeName, timestamps)
    • Perfect for chaining n8n workflows or proxying streaming APIs
    • Parses newline-delimited JSON and validates each chunk
  2. Wrap Mode (Proxy Streaming Chunks disabled, default)

    • Wraps raw response chunks in new streaming envelopes
    • Adds current node's metadata to each chunk
    • Suitable for consuming non-streaming APIs in streaming workflows
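The difference between the two modes can be pictured with a small sketch of wrap mode: each raw response chunk gets a fresh envelope stamped with the current node's metadata (this is an illustration only, not the node's actual code; the field names follow the envelope format shown under Technical Details):

```javascript
// Illustrative sketch of wrap mode: a raw response chunk is wrapped in a
// new streaming envelope carrying the *current* node's metadata. Proxy mode,
// by contrast, forwards already-enveloped chunks without touching metadata.
function wrapChunk(rawChunk, node) {
  return {
    type: "item",
    content: rawChunk,
    metadata: {
      nodeId: node.id,
      nodeName: node.name,
      itemIndex: 0,
      runIndex: 0,
      timestamp: Date.now(),
    },
  };
}
```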

Request Configuration

  • HTTP Methods: GET, POST, PUT, PATCH, DELETE, HEAD
  • Query Parameters: Key-value pairs for URL query strings
  • Headers: Custom HTTP headers
  • Body: JSON or raw text body for POST/PUT/PATCH requests
  • Authentication: Supports all standard n8n authentication methods

Response Handling

  • Autodetect: Automatically determines response format (JSON, text, or binary)
  • JSON: Parses and returns JSON responses
  • Text: Returns plain text responses
  • File (Binary): Handles binary file downloads
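The exact autodetect rules are not documented here; a plausible sketch is detection keyed off the response's Content-Type header (the node's real logic may differ):

```javascript
// Hypothetical sketch of response-format autodetection based on the
// Content-Type header. Not the node's actual implementation.
function detectFormat(contentType) {
  const ct = contentType.toLowerCase();
  if (ct.includes("application/json") || ct.includes("+json")) return "json";
  if (ct.startsWith("text/") || ct.includes("xml")) return "text";
  return "file"; // anything else (images, octet-stream, ...) treated as binary
}
```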

📦 Installation

Via n8n Community Nodes

  1. Go to Settings > Community Nodes in your n8n instance
  2. Click Install and enter the package name: n8n-nodes-streaming-http-request
  3. Confirm the installation and restart n8n

Manual Installation

cd ~/.n8n/nodes
npm install n8n-nodes-streaming-http-request

🚀 Usage

Basic Streaming Setup

  1. Add a Webhook node to your workflow

    • Set Response Mode to Using Respond to Webhook Node
  2. Add HTTP Request (Streaming) node

    • Configure URL and method
    • Enable the Enable Streaming option in Options
  3. Add a Respond to Webhook node

    • Enable the Enable Streaming option in Options
    • Connect it after your HTTP Request (Streaming) node

Example 1: Proxy OpenAI Streaming

Webhook (streaming) 
  → HTTP Request (Streaming) [POST to OpenAI API, streaming enabled]
  → Respond to Webhook (streaming)

Configuration:

  • URL: https://api.openai.com/v1/chat/completions
  • Method: POST
  • Body: JSON with stream: true
  • Options: Enable Streaming = true, Proxy Streaming Chunks = false
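A minimal request body for this setup is a standard Chat Completions payload with stream: true (the model name below is only a placeholder):

```json
{
  "model": "gpt-4o-mini",
  "stream": true,
  "messages": [
    { "role": "user", "content": "Hello!" }
  ]
}
```

The Authorization header for OpenAI would come from the node's Authentication settings or a custom header rather than the body.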

Example 2: Chain Multiple n8n Workflows

Webhook A (streaming)
  → HTTP Request (Streaming) [POST to Webhook B, proxy mode]
  → Respond to Webhook (streaming)

Webhook B (on another workflow)
  → AI Agent/Processing
  → Respond to Webhook (streaming)

Configuration:

  • URL: https://your-n8n.com/webhook/workflow-b
  • Method: POST
  • Options: Enable Streaming = true, Proxy Streaming Chunks = true (the key setting!)

This preserves the original node names and IDs from Workflow B's response.

Example 3: Non-Streaming Mode

Use it as a regular HTTP Request node with enhanced response handling:

Manual Trigger
  → HTTP Request (Streaming) [GET request, streaming disabled]
  → Process Response

Configuration:

  • URL: https://api.example.com/data
  • Method: GET
  • Options: Enable Streaming = false, Response Format = json

⚙️ Options

Option                  Type     Default     Description
Enable Streaming        Boolean  true        Enable real-time streaming to the webhook response
Proxy Streaming Chunks  Boolean  false       Forward JSONL envelopes with original metadata (for proxying n8n workflows)
Response Format         Select   autodetect  How to parse the response: autodetect, json, text, or file
Full Response           Boolean  false       Return the complete response, including headers and status code
Output Property Name    String   data        Property name for the response data in the output
Timeout                 Number   300000      Request timeout in milliseconds

🔧 Technical Details

Streaming Envelope Format

When streaming is enabled, data is sent as newline-delimited JSON (JSONL):

{"type":"begin","metadata":{"nodeId":"abc123","nodeName":"My Node","itemIndex":0,"runIndex":0,"timestamp":1234567890}}
{"type":"item","content":"Hello","metadata":{"nodeId":"abc123","nodeName":"My Node","itemIndex":0,"runIndex":0,"timestamp":1234567891}}
{"type":"item","content":" World","metadata":{"nodeId":"abc123","nodeName":"My Node","itemIndex":0,"runIndex":0,"timestamp":1234567892}}
{"type":"end","metadata":{"nodeId":"abc123","nodeName":"My Node","itemIndex":0,"runIndex":0,"timestamp":1234567893}}

Proxy Mode (Metadata Preservation)

When Proxy Streaming Chunks is enabled:

  1. Buffers incoming data and splits by newlines
  2. Parses each complete line as JSON
  3. Validates it's a proper streaming envelope (type + metadata)
  4. Forwards via hooks with original metadata intact
  5. Falls back to the standard sendChunk call if hooks are unavailable

This allows transparent proxying of streaming responses from other n8n workflows without modifying node metadata.
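The buffering and validation steps above can be sketched roughly as follows (a simplified illustration, not the node's actual code; forwardChunk stands in for the n8n hook call):

```javascript
// Simplified sketch of proxy-mode chunk handling: buffer incoming data,
// split on newlines, parse each complete line as JSON, validate that it is
// a streaming envelope, and forward it with its original metadata intact.
function createProxyParser(forwardChunk) {
  let buffer = "";
  return function onData(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      let envelope;
      try {
        envelope = JSON.parse(line);
      } catch {
        continue; // not valid JSONL; a real implementation might fall back to wrap mode
      }
      // A valid streaming envelope carries a type and the original metadata
      if (envelope.type && envelope.metadata) {
        forwardChunk(envelope); // nodeId, nodeName, timestamps left untouched
      }
    }
  };
}
```

Note how a JSON envelope split across two network chunks is only parsed once the closing newline arrives.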

🛠️ Development

Build from Source

git clone <repository-url>
cd streaming-http-request-node
npm install
npm run build

Link for Local Testing

npm link
cd <your-n8n-directory>
npm link n8n-nodes-streaming-http-request

Restart n8n after linking.

📝 License

MIT

🤝 Contributing

Contributions are welcome! Please open an issue or submit a pull request.

💡 Tips

  • Use Proxy Mode when forwarding responses from another n8n streaming workflow
  • Use Wrap Mode when consuming external streaming APIs (OpenAI, Anthropic, etc.)
  • For large responses, consider increasing the timeout value
  • Enable Full Response if you need access to headers or status codes

⚠️ Troubleshooting

Problem: Seeing "undefined" in output
Solution: Ensure the remote endpoint is returning JSONL format when using Proxy Mode

Problem: Metadata is replaced with HTTP Request node's metadata
Solution: Enable the Proxy Streaming Chunks option and ensure hooks are available (they should be by default in n8n)

Problem: Streaming not working
Solution: Verify that:

  1. Webhook is set to "Using Respond to Webhook Node"
  2. Respond to Webhook has streaming enabled
  3. HTTP Request (Streaming) has Enable Streaming = true

Version: 0.2.4
Author: Leandro Menezes
