# n8n-nodes-nvidia 🟢

Latest version: 1.0.2 · Author: Julius
A custom n8n node for the NVIDIA NIM API — supports chat completions, streaming,
thinking mode, and all NIM-compatible models (Kimi K2.5, Llama, Mistral, etc.).
## Features
- ✅ OpenAI-compatible Chat Completions
- ✅ Streaming support (SSE / text/event-stream)
- ✅ Thinking mode (`chat_template_kwargs`)
- ✅ All NVIDIA NIM models selectable
- ✅ Full credential management in n8n
- ✅ Configurable: temperature, top_p, max_tokens
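The streaming feature uses OpenAI-compatible SSE framing: each event arrives as a `data: {json}` line, and the stream ends with `data: [DONE]`. A minimal sketch of parsing one event line under that assumption (`parseSseLine` is an illustrative helper, not part of this package):

```javascript
// Minimal sketch of parsing one SSE line from a NIM streaming response.
// Assumes OpenAI-compatible framing: `data: {json}` events, terminated
// by `data: [DONE]`.
function parseSseLine(line) {
  if (!line.startsWith("data: ")) return null; // comments, blank keep-alives
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return { done: true, delta: "" };
  const chunk = JSON.parse(payload);
  // Each chunk carries an incremental content delta.
  return { done: false, delta: chunk.choices?.[0]?.delta?.content ?? "" };
}
```

Accumulating the `delta` values across chunks reconstructs the full response text.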
## Installation
### Option A — Custom Nodes Folder (recommended for self-hosted n8n)

```bash
# 1. Find your n8n custom nodes folder (usually):
#    ~/.n8n/custom OR /home/<user>/.n8n/nodes
cd ~/.n8n/nodes  # create it first if needed: mkdir -p ~/.n8n/nodes

# 2. Clone / copy this package
git clone https://github.com/julius/n8n-nodes-nvidia.git
cd n8n-nodes-nvidia

# 3. Install dependencies & build
npm install
npm run build

# 4. Tell n8n where to find it (in your .env or n8n config):
#    N8N_CUSTOM_EXTENSIONS=/home/<user>/.n8n/nodes/n8n-nodes-nvidia/dist

# 5. Restart n8n
```
### Option B — npm link (for development)

```bash
cd n8n-nodes-nvidia
npm install
npm run build
npm link

# In your n8n folder:
npm link n8n-nodes-nvidia
```
### Option C — Publish to npm, then install via the n8n UI

```bash
npm publish --access public
```

In the n8n UI: Settings → Community Nodes → Install → `n8n-nodes-nvidia`
## Configuration
### Credential Setup
1. In n8n: Credentials → New → **NVIDIA NIM API**
2. Paste your API key from https://integrate.api.nvidia.com/
3. Click **Test** to verify
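You can also sanity-check a key outside n8n by calling the chat completions endpoint directly, since the NIM API is OpenAI-compatible. A hedged sketch — `buildVerifyRequest` is an illustrative helper, not part of this package, and assumes the standard NIM base URL:

```javascript
// Sketch: verify a NIM API key by building a minimal chat completions
// request against the OpenAI-compatible endpoint.
const BASE_URL = "https://integrate.api.nvidia.com/v1";

function buildVerifyRequest(apiKey) {
  return {
    url: `${BASE_URL}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "moonshotai/kimi-k2.5",
        messages: [{ role: "user", content: "ping" }],
        max_tokens: 1, // smallest possible paid response
      }),
    },
  };
}

// Usage (requires network access and a valid key):
// const { url, options } = buildVerifyRequest(process.env.NVIDIA_API_KEY);
// fetch(url, options).then((r) => console.log(r.status)); // 200 means the key works
```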
### Node Parameters
| Parameter | Description | Default |
|---|---|---|
| Model | NIM model identifier | moonshotai/kimi-k2.5 |
| Messages | Chat message list (role + content) | user message |
| Max Tokens | Max tokens to generate | 16384 |
| Temperature | Creativity (0–2) | 1.0 |
| Top P | Nucleus sampling (0–1) | 1.0 |
| Enable Streaming | Use SSE streaming | false |
| Enable Thinking | Adds `chat_template_kwargs: {"thinking": true}` | false |
| Custom chat_template_kwargs | Raw JSON override | — |
| System Prompt Shortcut | Quick system message | — |
| Return Raw Response | Include full API response | false |
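To make the table concrete, here is an illustrative sketch (not the node's actual source; parameter names are assumptions mirroring the table) of how these settings map onto the request body NIM expects:

```javascript
// Illustrative mapping from the node's parameters to the NIM request body.
// Defaults mirror the table above.
function buildRequestBody(p) {
  const body = {
    model: p.model ?? "moonshotai/kimi-k2.5",
    messages: p.messages,
    max_tokens: p.maxTokens ?? 16384,
    temperature: p.temperature ?? 1.0,
    top_p: p.topP ?? 1.0,
    stream: p.enableStreaming ?? false,
  };
  if (p.enableThinking) body.chat_template_kwargs = { thinking: true };
  // A raw "Custom chat_template_kwargs" JSON override replaces the toggle:
  if (p.customChatTemplateKwargs) {
    body.chat_template_kwargs = p.customChatTemplateKwargs;
  }
  return body;
}
```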
## Popular Models
| Model | Description |
|---|---|
| moonshotai/kimi-k2.5 | Kimi K2.5 with thinking support |
| meta/llama-3.1-70b-instruct | Llama 3.1 70B |
| meta/llama-3.3-70b-instruct | Llama 3.3 70B |
| mistralai/mistral-large-2-instruct | Mistral Large 2 |
| nvidia/llama-3.1-nemotron-70b-instruct | NVIDIA Nemotron |
| google/gemma-3-27b-it | Gemma 3 27B |
Browse all: https://build.nvidia.com/explore/discover
## Example Workflow

```json
{
  "nodes": [
    {
      "name": "NVIDIA NIM LLM",
      "type": "nvidiaLlm",
      "parameters": {
        "model": "moonshotai/kimi-k2.5",
        "messages": {
          "messagesValues": [
            { "role": "system", "content": "You are a helpful assistant." },
            { "role": "user", "content": "Explain quantum entanglement simply." }
          ]
        },
        "maxTokens": 4096,
        "temperature": 0.7,
        "stream": false,
        "advancedOptions": {
          "thinking": true
        }
      }
    }
  ]
}
```
## Output

The node outputs:

```json
{
  "text": "The model's response text here...",
  "model": "moonshotai/kimi-k2.5",
  "stream": false,
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 512,
    "total_tokens": 554
  }
}
```
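Downstream nodes can read these fields off each item's `json` property. A small sketch for an n8n Code node consuming this output (field names taken from the shape above; `summarize` is an illustrative helper):

```javascript
// Sketch for an n8n Code node consuming this node's output.
// `items` is the array n8n passes to a Code node; each item.json has
// the output shape shown above.
function summarize(items) {
  return items.map((item) => ({
    json: {
      answer: item.json.text,
      tokensUsed: item.json.usage?.total_tokens ?? 0,
    },
  }));
}

// Example:
// summarize([{ json: { text: "Hello", usage: { total_tokens: 554 } } }])
//   → [{ json: { answer: "Hello", tokensUsed: 554 } }]
```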
## Development

```bash
npm run dev       # watch mode
npm run lint      # lint check
npm run lint:fix  # auto-fix lint issues
npm run build     # production build
```
## License

MIT License — Built by Julius @ funnyareas.com