rooyai-message

Rooyai Message / Chat Model for n8n - A first-class LLM provider node compatible with AI Agent, Basic LLM Chain, and other n8n AI workflows

Package Information

Downloads: 361 weekly / 684 monthly
Latest Version: 0.3.7
Author: Rooyai

Documentation

n8n-nodes-rooyai-message

A production-ready Rooyai Message / Chat Model node for n8n, providing first-class LLM provider integration on par with the OpenAI, Gemini, and DeepSeek chat model nodes.

🎯 Overview

This custom n8n community node enables you to use Rooyai's LLM API as a message/chat model provider in your n8n workflows. It appears under AI → Language Models → Rooyai Message Model and works seamlessly with:

  • ✅ AI Agent
  • ✅ Better AI Agent
  • ✅ Basic LLM Chain
  • ✅ Tools
  • ✅ Memory

📦 Installation

Option 1: Install in n8n Custom Directory (Recommended for Testing)

# Create custom nodes directory if it doesn't exist
mkdir -p ~/.n8n/custom

# Copy the entire dist folder to the custom directory
cp -r ./dist ~/.n8n/custom/n8n-nodes-rooyai-message

# Restart n8n
n8n restart

Option 2: Install via npm (Production)

# In your n8n installation directory
npm install n8n-nodes-rooyai-message

# Restart n8n
n8n restart

Option 3: Development Link

# In this project directory
npm run build
npm link

# In your n8n directory
npm link n8n-nodes-rooyai-message
n8n restart

🔑 Credentials Setup

  1. In n8n, navigate to Credentials → Create New Credential
  2. Search for "Rooyai API"
  3. Configure the following fields:
    • API Key (Password, required): Your Rooyai API authentication key
    • Base URL (String, required): API endpoint (default: https://rooyai.com/api/v1/chat)
    • Optional Headers (JSON String, optional): Additional headers in JSON format, e.g. {"X-Custom": "value"}
  4. Click Save to store your credentials

🚀 Usage

Basic Chat Completion

  1. Add Rooyai Message Model node to your workflow
  2. Select your Rooyai API credentials
  3. Configure the node:
    • Model: Select from dropdown (15 models available)
    • Messages: Add user/system/assistant messages
    • Temperature: 0.7 (0-2 range)
    • Max Tokens: 1024 (optional)

Example Workflow

Start Node → Rooyai Message Model → Output Node

Configuration:

  • Model: LLaMa 3.3 70B (from dropdown)
  • Messages:
    • Role: system, Content: You are a helpful assistant
    • Role: user, Content: Explain quantum computing in simple terms
  • Temperature: 0.7

With AI Agent

Manual Chat Trigger → AI Agent → Rooyai Message Model

The Rooyai Message Model node integrates directly as a language model provider in AI Agent workflows.

With Basic LLM Chain

Start → Basic LLM Chain → Rooyai Message Model → Output

Configure the chain with your prompt template, and it will automatically use Rooyai for text generation.

⚙️ Configuration Options

Model Selection

Select from 15 available Rooyai models via dropdown:

  • LLaMa 3.3 70B: Meta's flagship model with 70B parameters. Best for complex reasoning and detailed analysis.
  • DeepSeek R1: Reasoning-optimized model. Best for logical tasks and problem-solving.
  • DeepSeek v3.1 Nex: Latest DeepSeek with enhancements. Best for general-purpose and advanced tasks.
  • Qwen3 Coder: Code generation specialist. Best for programming and technical documentation.
  • GPT OSS 120B: Large open-source GPT. Best for complex tasks and high accuracy.
  • GPT OSS 20B: Efficient open-source GPT. Best for fast responses with a good balance.
  • TNG R1T Chimera: TNG reasoning architecture. Best for analytical tasks.
  • TNG DeepSeek Chimera: Hybrid TNG-DeepSeek model. Best for multi-domain tasks.
  • Kimi K2: Moonshot AI's multilingual model. Best for Chinese language and translations.
  • GLM 4.5 Air: Lightweight ChatGLM. Best for fast interactions and efficiency.
  • Devstral: Developer-focused model. Best for coding, debugging, and tech docs.
  • Mimo v2 Flash: High-speed model. Best for quick responses and real-time chat.
  • Gemma 3 27B: Google Gemma large variant. Best for general purpose with high quality.
  • Gemma 3 12B: Google Gemma balanced variant. Best for a good performance/speed ratio.
  • Gemma 3 4B: Google Gemma compact variant. Best for the fastest responses and simple tasks.

Message Roles

  • system: Defines AI behavior and context
  • user: Human input/questions
  • assistant: AI responses (for conversation history)
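
For example, a multi-turn conversation combines all three roles; assistant entries hold the model's earlier replies so the history is preserved. Message objects follow the { role, content } shape shown in the request format below:

// Illustrative messages array combining all three roles.
const messages = [
  { role: 'system', content: 'You are a helpful assistant' },
  { role: 'user', content: 'Explain quantum computing in simple terms' },
  { role: 'assistant', content: 'Quantum computers use qubits instead of bits...' },
  { role: 'user', content: 'How is that different from a regular computer?' },
];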

Advanced Options

  • Temperature (Number, 0-2): Controls randomness (0 = deterministic, 2 = very creative)
  • Max Tokens (Number, 1-32768): Maximum response length
  • Frequency Penalty (Number, -2 to 2): Reduces word repetition
  • Presence Penalty (Number, -2 to 2): Encourages new topics
  • Top P (Number, 0-1): Nucleus sampling (alternative to temperature)
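
As a rough sketch, a request body with these options filled in might look as follows. Only model, messages, temperature, and max_tokens appear in the documented request format; the snake_case names for the remaining options and the model identifier are assumptions based on common chat-completion APIs:

// Hypothetical request body; frequency_penalty, presence_penalty, and
// top_p field names are assumptions, and the model identifier is a
// placeholder rather than a confirmed Rooyai model ID.
const body = {
  model: 'gpt-oss-20b',
  messages: [{ role: 'user', content: 'Summarize this article in three sentences.' }],
  temperature: 0.7,
  max_tokens: 1024,
  frequency_penalty: 0,
  presence_penalty: 0,
  top_p: 1,
};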

Simplify Output

  • Enabled (default): Returns only the assistant's message content as a clean string
  • Disabled: Returns full API response including usage metadata (cost_usd)
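
As a sketch, a downstream Code node could read the unsimplified output like this; it assumes the item mirrors the raw Rooyai response described under Response Parsing below:

// n8n Code node sketch: read the reply and cost from the raw response
// when Simplify Output is disabled. $input is the Code node's built-in
// input helper.
const item = $input.first().json;

const reply = item.choices?.[0]?.message?.content ?? '';
const cost = item.usage?.cost_usd ?? 0;

return [{ json: { reply, cost } }];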

🔧 API Integration Details

Request Format

The node sends POST requests to your configured Base URL with:

{
  "model": "gemini-2.0-flash",
  "messages": [
    { "role": "system", "content": "You are helpful" },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}

Headers:

Authorization: Bearer {YOUR_API_KEY}
Content-Type: application/json
{...optional custom headers}
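
For a quick sanity check outside n8n, the same request can be reproduced directly. This sketch only uses the endpoint, headers, and body shape documented above; substitute your own API key:

// Minimal sketch reproducing the node's request with fetch (Node.js 18+).
const response = await fetch('https://rooyai.com/api/v1/chat', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.ROOYAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gemini-2.0-flash',
    messages: [
      { role: 'system', content: 'You are helpful' },
      { role: 'user', content: 'Hello!' },
    ],
    temperature: 0.7,
    max_tokens: 1024,
  }),
});

const data = await response.json();
console.log(data.choices?.[0]?.message?.content);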

Response Parsing

Rooyai returns responses in this format:

{
  "choices": [
    {
      "message": {
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "cost_usd": 0.000123
  }
}

The node automatically extracts choices[0].message.content for the final output.

šŸ“ Project Structure

n8n-nodes-rooyai-message/
├── credentials/
│   └── RooyaiApi.credentials.ts    # API credentials definition
├── nodes/
│   └── RooyaiMessage/
│       ├── RooyaiMessage.node.ts   # Main node implementation
│       ├── ChatDescription.ts      # Message/chat operations
│       ├── GenericFunctions.ts     # Error handling & utilities
│       ├── RooyaiMessage.node.json # Node metadata
│       └── rooyai.svg              # Node icon
├── dist/                           # Compiled JavaScript output
├── package.json                    # Package metadata & dependencies
├── tsconfig.json                   # TypeScript configuration
├── gulpfile.js                     # Build tasks (icon copying)
└── README.md                       # This file

🛠️ Development

Prerequisites

  • Node.js 18+
  • npm 8+
  • TypeScript 5.3+

Build from Source

# Install dependencies
npm install

# Build the project (compiles TypeScript + copies icons)
npm run build

# Watch mode for development
npm run dev

Modifying the API Integration

⚙️ Change Base URL:
Edit credentials/RooyaiApi.credentials.ts, line 20:

default: 'https://your-new-endpoint.com/api/v1/chat'
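
For orientation, here is a minimal sketch of how such a credential is typically declared with n8n's ICredentialType interface; the property names are illustrative and may differ from the actual RooyaiApi.credentials.ts:

// Sketch only: field names (apiKey, baseUrl) are assumptions, not the
// actual source. The Base URL default is the value you would edit.
import type { ICredentialType, INodeProperties } from 'n8n-workflow';

export class RooyaiApi implements ICredentialType {
  name = 'rooyaiApi';
  displayName = 'Rooyai API';
  properties: INodeProperties[] = [
    {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true },
      default: '',
    },
    {
      displayName: 'Base URL',
      name: 'baseUrl',
      type: 'string',
      default: 'https://rooyai.com/api/v1/chat', // change this default
    },
  ];
}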

⚙️ Modify Response Parsing:
Edit nodes/RooyaiMessage/ChatDescription.ts, lines 140-160 (postReceive function):

// Update to match your API's response structure
const assistantText = item.json?.choices?.[0]?.message?.content || '';
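
A hedged sketch of what a postReceive-style handler built around that extraction could look like is shown below; the function name and output field are illustrative, not the actual implementation:

// Sketch of a postReceive handler (declarative-style node routing).
// The output key "text" and the helper name are assumptions.
import type {
  IExecuteSingleFunctions,
  IN8nHttpFullResponse,
  INodeExecutionData,
} from 'n8n-workflow';

export async function parseRooyaiResponse(
  this: IExecuteSingleFunctions,
  items: INodeExecutionData[],
  _response: IN8nHttpFullResponse,
): Promise<INodeExecutionData[]> {
  return items.map((item) => {
    // Update this path to match your API's response structure.
    const assistantText =
      (item.json as any)?.choices?.[0]?.message?.content ?? '';
    return { json: { text: assistantText } };
  });
}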

⚙️ Add Custom Headers:
Users can add custom headers via the "Optional Headers" credential field without code changes.
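
Internally, merging that JSON string into the request headers only takes a few lines; this sketch assumes illustrative parameter names:

// Sketch: merge the "Optional Headers" credential value (a JSON string)
// into the outgoing request headers. Parameter names are illustrative.
function buildHeaders(apiKey: string, optionalHeadersJson?: string): Record<string, string> {
  const extra = optionalHeadersJson
    ? (JSON.parse(optionalHeadersJson) as Record<string, string>)
    : {};
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
    ...extra, // e.g. { "X-Custom": "value" }
  };
}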

✅ Verification

After installation, verify the node:

  1. Node Appears: Search for "Rooyai" in n8n's "Add Node" menu
  2. Credentials Work: Create credential and test with valid API key
  3. Chat Works: Send a test message and receive response
  4. No Errors: Check n8n logs for any error messages

Expected behavior:

  • Node is categorized under AI or Language Models
  • Requests sent to configured Base URL
  • Responses parsed correctly as strings
  • Compatible with AI Agent and LLM Chain nodes

šŸ› Troubleshooting

Node doesn't appear in n8n

  • Ensure dist/ folder is copied to ~/.n8n/custom/
  • Restart n8n: n8n restart or service n8n restart
  • Check n8n logs: ~/.n8n/logs/n8n.log

"Cannot find credentials" error

  • Create "Rooyai API" credentials in n8n UI first
  • Ensure API key is valid and not expired

API request fails

  • Verify Base URL is correct: https://rooyai.com/api/v1/chat
  • Check API key has proper permissions
  • Review error message in n8n execution view

Response parsing error

  • Disable Simplify Output (set it to false) to see the raw API response
  • Verify Rooyai API returns choices[0].message.content

šŸ“ License

MIT

👤 Author

Rooyai
Website: https://rooyai.com
Support: support@rooyai.com

Built with ❤️ for the n8n community
