n8n-nodes-rooyai-message
A production-ready Rooyai Message / Chat Model node for n8n, providing first-class LLM provider integration on par with the built-in OpenAI, Gemini, and DeepSeek nodes.
Overview
This custom n8n community node enables you to use Rooyai's LLM API as a message/chat model provider in your n8n workflows. It appears under AI → Language Models → Rooyai Message Model and works seamlessly with:
- AI Agent
- Better AI Agent
- Basic LLM Chain
- Tools
- Memory
Installation
Option 1: Install in n8n Custom Directory (Recommended for Testing)
```bash
# Create the custom nodes directory if it doesn't exist
mkdir -p ~/.n8n/custom

# Copy the entire dist folder to the custom directory
cp -r ./dist ~/.n8n/custom/n8n-nodes-rooyai-message

# Restart n8n
n8n restart
```
Option 2: Install via npm (Production)
```bash
# In your n8n installation directory
npm install n8n-nodes-rooyai-message

# Restart n8n
n8n restart
```
Option 3: Development Link
```bash
# In this project directory
npm run build
npm link

# In your n8n directory
npm link n8n-nodes-rooyai-message
n8n restart
```
Credentials Setup
- In n8n, navigate to Credentials → Create New Credential
- Search for "Rooyai API"
- Configure the following fields:
| Field | Type | Required | Description |
|---|---|---|---|
| API Key | Password | Yes | Your Rooyai API authentication key |
| Base URL | String | Yes | API endpoint (default: `https://rooyai.com/api/v1/chat`) |
| Optional Headers | JSON String | No | Additional headers in JSON format, e.g. `{"X-Custom": "value"}` (see the example below) |
- Click Save to store your credentials
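The Optional Headers field expects a JSON object serialized as a string. A minimal sketch, assuming you want to attach extra headers to every request (the header names here are illustrative, not required by the API):

```typescript
// Hypothetical value for the "Optional Headers" credential field.
// Build the JSON string programmatically or paste it directly into the field.
const optionalHeaders = JSON.stringify({
  'X-Custom': 'value',
  'X-Request-Source': 'n8n',
});
// Resulting field value: {"X-Custom":"value","X-Request-Source":"n8n"}
```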
Usage
Basic Chat Completion
- Add the Rooyai Message Model node to your workflow
- Select your Rooyai API credentials
- Configure the node:
  - Model: select from the dropdown (15 models available)
  - Messages: add user/system/assistant messages
  - Temperature: `0.7` (0 to 2 range)
  - Max Tokens: `1024` (optional)
Example Workflow
Start Node → Rooyai Message Model → Output Node

Configuration:
- Model: `LLaMa 3.3 70B` (from dropdown)
- Messages:
  - Role: `system`, Content: `You are a helpful assistant`
  - Role: `user`, Content: `Explain quantum computing in simple terms`
- Temperature: `0.7`
With AI Agent
Manual Chat Trigger → AI Agent → Rooyai Message Model
The Rooyai Message Model node integrates directly as a language model provider in AI Agent workflows.
With Basic LLM Chain
Start → Basic LLM Chain → Rooyai Message Model → Output
Configure the chain with your prompt template, and it will automatically use Rooyai for text generation.
Configuration Options
Model Selection
Select from 15 available Rooyai models via dropdown:
| Model | Description | Best For |
|---|---|---|
| LLaMa 3.3 70B | Meta's flagship model with 70B parameters | Complex reasoning, detailed analysis |
| DeepSeek R1 | Reasoning-optimized model | Logical tasks, problem-solving |
| DeepSeek v3.1 Nex | Latest DeepSeek with enhancements | General purpose, advanced tasks |
| Qwen3 Coder | Code generation specialist | Programming, technical documentation |
| GPT OSS 120B | Large open-source GPT | Complex tasks, high accuracy |
| GPT OSS 20B | Efficient open-source GPT | Fast responses, good balance |
| TNG R1T Chimera | TNG reasoning architecture | Analytical tasks |
| TNG DeepSeek Chimera | Hybrid TNG-DeepSeek model | Multi-domain tasks |
| Kimi K2 | Moonshot AI's multilingual model | Chinese language, translations |
| GLM 4.5 Air | Lightweight ChatGLM | Fast interactions, efficiency |
| Devstral | Developer-focused model | Coding, debugging, tech docs |
| Mimo v2 Flash | High-speed model | Quick responses, real-time chat |
| Gemma 3 27B | Google Gemma large variant | General purpose, quality |
| Gemma 3 12B | Google Gemma balanced | Good performance/speed ratio |
| Gemma 3 4B | Google Gemma compact | Fastest responses, simple tasks |
Message Roles
- system: Defines AI behavior and context
- user: Human input/questions
- assistant: AI responses (for conversation history)
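A conversation history that combines the three roles might look like this (the content strings are purely illustrative):

```typescript
// Example message list: the system message sets behaviour, user messages carry input,
// and assistant messages replay earlier model responses as conversation history.
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is n8n?' },
  { role: 'assistant', content: 'n8n is a workflow automation tool.' },
  { role: 'user', content: 'How do I install a community node?' },
];
```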
Advanced Options
| Option | Type | Range | Description |
|---|---|---|---|
| Temperature | Number | 0 to 2 | Controls randomness (0 = deterministic, 2 = very creative) |
| Max Tokens | Number | 1 to 32768 | Maximum response length |
| Frequency Penalty | Number | -2 to 2 | Reduces word repetition |
| Presence Penalty | Number | -2 to 2 | Encourages new topics |
| Top P | Number | 0 to 1 | Nucleus sampling (alternative to temperature) |
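These options map onto OpenAI-style chat completion parameters in the request body. A rough sketch; the snake_case field names are assumed from that convention and from the Request Format section below, not taken from the node source:

```typescript
// Illustrative request body showing how the advanced options could appear on the wire.
const requestBody = {
  model: 'llama-3.3-70b',            // model identifier is illustrative
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,                   // 0 to 2
  max_tokens: 1024,                   // 1 to 32768
  frequency_penalty: 0,               // -2 to 2
  presence_penalty: 0,                // -2 to 2
  top_p: 1,                           // 0 to 1, alternative to temperature
};
```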
Simplify Output
- Enabled (default): Returns only the assistant's message content as a clean string
- Disabled: Returns the full API response, including usage metadata (`cost_usd`)
API Integration Details
Request Format
The node sends POST requests to your configured Base URL with:
```json
{
  "model": "gemini-2.0-flash",
  "messages": [
    { "role": "system", "content": "You are helpful" },
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```
Headers:

```
Authorization: Bearer {YOUR_API_KEY}
Content-Type: application/json
{...optional custom headers}
```
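For debugging outside n8n, the same request can be reproduced with a plain HTTP client. A minimal sketch, assuming the default Base URL and an API key in a ROOYAI_API_KEY environment variable:

```typescript
// Reproduce the node's request with fetch (Node.js 18+). Adjust the URL and headers
// to match your credential configuration; add any optional custom headers as needed.
const response = await fetch('https://rooyai.com/api/v1/chat', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.ROOYAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gemini-2.0-flash',
    messages: [
      { role: 'system', content: 'You are helpful' },
      { role: 'user', content: 'Hello!' },
    ],
    temperature: 0.7,
    max_tokens: 1024,
  }),
});
const data = await response.json();
console.log(data);
```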
Response Parsing
Rooyai returns responses in this format:
```json
{
  "choices": [
    {
      "message": {
        "content": "Hello! How can I assist you today?"
      }
    }
  ],
  "usage": {
    "cost_usd": 0.000123
  }
}
```
The node automatically extracts `choices[0].message.content` for the final output.
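In code, the same extraction (with an empty-string fallback when no content is returned) looks roughly like this:

```typescript
// Extract the assistant text and usage cost from a parsed Rooyai response body.
// `data` stands in for the JSON object shown above.
declare const data: any;

const assistantText: string = data?.choices?.[0]?.message?.content ?? '';
const costUsd: number | undefined = data?.usage?.cost_usd;
```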
Project Structure
```
n8n-nodes-rooyai-message/
├── credentials/
│   └── RooyaiApi.credentials.ts    # API credentials definition
├── nodes/
│   └── RooyaiMessage/
│       ├── RooyaiMessage.node.ts   # Main node implementation
│       ├── ChatDescription.ts      # Message/chat operations
│       ├── GenericFunctions.ts     # Error handling & utilities
│       ├── RooyaiMessage.node.json # Node metadata
│       └── rooyai.svg              # Node icon
├── dist/                           # Compiled JavaScript output
├── package.json                    # Package metadata & dependencies
├── tsconfig.json                   # TypeScript configuration
├── gulpfile.js                     # Build tasks (icon copying)
└── README.md                       # This file
```
Development
Prerequisites
- Node.js 18+
- npm 8+
- TypeScript 5.3+
Build from Source
```bash
# Install dependencies
npm install

# Build the project (compiles TypeScript + copies icons)
npm run build

# Watch mode for development
npm run dev
```
Modifying the API Integration
Change Base URL:
Edit `credentials/RooyaiApi.credentials.ts`, line 20:

```typescript
default: 'https://your-new-endpoint.com/api/v1/chat'
```
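For orientation, the surrounding credential definition presumably follows n8n's standard ICredentialType shape. A sketch of where that default sits; the property names are assumptions, not copied from the actual file:

```typescript
import type { ICredentialType, INodeProperties } from 'n8n-workflow';

// Sketch of RooyaiApi.credentials.ts, assuming a conventional n8n credential layout.
export class RooyaiApi implements ICredentialType {
  name = 'rooyaiApi';
  displayName = 'Rooyai API';
  properties: INodeProperties[] = [
    // ...API Key and Optional Headers properties...
    {
      displayName: 'Base URL',
      name: 'baseUrl',
      type: 'string',
      default: 'https://your-new-endpoint.com/api/v1/chat', // the value to change
    },
  ];
}
```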
Modify Response Parsing:
Edit `nodes/RooyaiMessage/ChatDescription.ts`, lines 140-160 (the postReceive function):

```typescript
// Update to match your API's response structure
const assistantText = item.json?.choices?.[0]?.message?.content || '';
```
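If the node uses n8n's declarative routing, the postReceive hook referenced above might look roughly like this; the exact implementation in ChatDescription.ts may differ, and the output field name below is an assumption:

```typescript
import type {
  IExecuteSingleFunctions,
  IN8nHttpFullResponse,
  INodeExecutionData,
} from 'n8n-workflow';

// Sketch of a postReceive hook that maps each raw API response item
// to a single assistant-text field.
export async function parseChatResponse(
  this: IExecuteSingleFunctions,
  items: INodeExecutionData[],
  _response: IN8nHttpFullResponse,
): Promise<INodeExecutionData[]> {
  return items.map((item) => {
    const json = item.json as any;
    const assistantText = json?.choices?.[0]?.message?.content || '';
    return { json: { text: assistantText } }; // output field name is illustrative
  });
}
```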
Add Custom Headers:
Users can add custom headers via the "Optional Headers" credential field without code changes.
Verification
After installation, verify the node:
- Node Appears: Search for "Rooyai" in n8n's "Add Node" menu
- Credentials Work: Create credential and test with valid API key
- Chat Works: Send a test message and receive a response
- No Errors: Check n8n logs for any error messages
Expected behavior:
- Node is categorized under AI or Language Models
- Requests sent to configured Base URL
- Responses parsed correctly as strings
- Compatible with AI Agent and LLM Chain nodes
Troubleshooting
Node doesn't appear in n8n
- Ensure the `dist/` folder is copied to `~/.n8n/custom/`
- Restart n8n: `n8n restart` or `service n8n restart`
- Check the n8n logs: `~/.n8n/logs/n8n.log`
"Cannot find credentials" error
- Create "Rooyai API" credentials in n8n UI first
- Ensure API key is valid and not expired
API request fails
- Verify the Base URL is correct: `https://rooyai.com/api/v1/chat`
- Check that the API key has the proper permissions
- Review error message in n8n execution view
Response parsing error
- Set "Simplify Output" to false to see the raw API response
- Verify that the Rooyai API returns `choices[0].message.content`
License
MIT
Author
Rooyai
Website: https://rooyai.com
Support: support@rooyai.com
Links
Built with ❤️ for the n8n community
