nvidia-nim

n8n community node for NVIDIA NIM - Chat completions and image analysis with NVIDIA AI models

Package Information

Downloads: 55 weekly / 413 monthly
Latest Version: 2.3.4
Author: Akash Kumar Naik

Documentation

n8n-nodes-nvidia-nim

License: MIT

📋 Requirements

  • n8n version 1.0.0 or higher
  • Node.js v18.17.0 or higher
  • NVIDIA NGC API Key - Get yours at ngc.nvidia.com

📦 Installation

Via n8n Community Nodes (Recommended):

  1. Go to Settings → Community Nodes
  2. Click Install
  3. Enter: n8n-nodes-nvidia-nim
  4. Restart n8n after installation

Via npm:

npm install n8n-nodes-nvidia-nim

⚙️ Setup

  1. Get NVIDIA API Key: ngc.nvidia.com
  2. Add Credentials in n8n:
    • Go to Credentials → New
    • Select NVIDIA NIM API
    • Enter API Key and Base URL: https://integrate.api.nvidia.com/v1
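Under the hood, every request the node makes is authenticated against that Base URL with a Bearer token. A minimal sketch of what that looks like (the `build_headers` helper and the `NVIDIA_API_KEY` environment variable name are illustrative, not part of the node):

```python
import os

# Base URL from the credential setup above.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_headers(api_key: str) -> dict:
    """Headers sent with every request: Bearer auth plus a JSON content type."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# NGC API keys typically start with "nvapi-".
headers = build_headers(os.environ.get("NVIDIA_API_KEY", "nvapi-..."))
```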

🎯 Basic Usage

Text Chat

  1. Add "NVIDIA NIM" node → Configure model (e.g., meta/llama3-8b-instruct)
  2. Connect: Trigger → NVIDIA NIM (main)
  3. Execute workflow
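For reference, the steps above correspond to an OpenAI-compatible chat completion request against the configured Base URL. A sketch of the request body the node would assemble (field names follow the OpenAI-compatible schema; the prompt text is illustrative):

```python
import json

# Chat completion payload matching the Text Chat configuration above.
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize this workflow run in one sentence."}
    ],
}

# Serialized JSON body as it would be POSTed to {Base URL}/chat/completions.
body = json.dumps(payload)
```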

Image Analysis

  1. Add "NVIDIA NIM Image Analysis" node → Configure vision model (e.g., meta/llama-3.2-11b-vision-instruct)
  2. Provide image data (URL, base64, or data URL) and analysis prompt
  3. Connect: Trigger → NVIDIA NIM Image Analysis (main)
  4. Execute workflow
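The three accepted image inputs (URL, base64, data URL) all end up as an `image_url` content part in the vision request. A sketch of wrapping raw image bytes as a data URL (the `to_data_url` helper is illustrative; PNG is assumed for the MIME type):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes as a data URL, one of the inputs the node accepts."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Vision message combining the analysis prompt with the image content part.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": to_data_url(b"<png bytes>")}},
    ],
}
```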

🤖 Available Models

Text Models

Current recommended models include:

  • meta/llama-3.1-8b-instruct ⭐ (recommended, fast & efficient)
  • meta/llama-3.1-70b-instruct (high quality)
  • meta/llama-3.1-405b-instruct (best performance)
  • nvidia/nemotron-4-340b-instruct (enterprise-grade)
  • mistralai/mixtral-8x7b-instruct-v0.1
  • deepseek-ai/deepseek-r1 (reasoning model)
  • google/gemma-2-27b-it
  • snowflake/snowflake-arctic-instruct

View all models →
Try NVIDIA NIM APIs →

Vision Models

  • meta/llama-3.2-11b-vision-instruct ⭐ (recommended)
  • meta/llama-3.2-90b-vision-instruct

View all vision models →

⚙️ Configuration

Key Parameters:

  • Model: Choose from NVIDIA models
  • Temperature: 0.0-2.0 (default: 0.7)
  • Max Tokens: Response length (default: 1024)
  • Top P: Nucleus sampling (default: 1.0)
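These parameters map directly onto fields of the request body; the values below are the documented defaults (the `apply_params` helper is illustrative):

```python
def apply_params(payload: dict, temperature: float = 0.7,
                 max_tokens: int = 1024, top_p: float = 1.0) -> dict:
    """Attach the node's sampling parameters to a request payload (defaults shown)."""
    payload.update({
        "temperature": temperature,  # 0.0-2.0; higher = more random output
        "max_tokens": max_tokens,    # cap on response length
        "top_p": top_p,              # nucleus sampling cutoff
    })
    return payload

req = apply_params({"model": "meta/llama-3.1-8b-instruct", "messages": []})
```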

📚 Resources

🤝 Contributing

Issues and PRs welcome on GitHub


Made by Akash Kumar Naik
