Latest Version: 0.2.6
Author: Sang Duong
n8n-nodes-azure-deepseek
This is an n8n node for interacting with DeepSeek models deployed on Azure AI Foundry.
Features
- Chat Completion: Generate responses from a DeepSeek model
- Supports various parameters for controlling the model's behavior:
  - Temperature
  - Max Tokens
  - Top P
  - Frequency and Presence Penalties
- Streaming capabilities
Prerequisites
- n8n instance (v1.0.0+)
- Azure account with access to Azure AI Foundry
- A deployed DeepSeek LLM model on Azure AI Foundry
Installation
Installation via n8n Admin Panel
- Go to Settings > Community Nodes
- Select Install
- Enter `n8n-nodes-azure-deepseek` in "Enter npm package name"
- Click Install
Installation via npm
- Go to your n8n installation directory
- Run `npm install n8n-nodes-azure-deepseek`
- Start n8n
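As a single shell session, the npm install above might look like this; `~/.n8n` is an assumed install location and will vary by setup:

```shell
# Example session; ~/.n8n is an assumed n8n directory -- adjust for your setup.
cd ~/.n8n
npm install n8n-nodes-azure-deepseek
# Restart n8n so it picks up the new community node
n8n start
```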
Troubleshooting Installation
If you encounter a "504 Gateway Timeout" error during installation:
- Try installing with npm directly in your n8n directory: `npm install n8n-nodes-azure-deepseek`
- Make sure your n8n instance has a stable internet connection
- Try restarting your n8n instance after installation
- If using Docker, ensure your container has enough resources (CPU/memory)
For persistent issues, you can install the node manually:
- Download the package: `npm pack n8n-nodes-azure-deepseek`
- Extract the contents to your n8n custom nodes directory
- Restart n8n
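The manual fallback above can be scripted; `~/.n8n/custom` is an assumption for the custom nodes directory (n8n can point elsewhere via `N8N_CUSTOM_EXTENSIONS`):

```shell
# Manual install sketch; ~/.n8n/custom is assumed to be the custom nodes
# directory -- adjust if your instance uses N8N_CUSTOM_EXTENSIONS.
npm pack n8n-nodes-azure-deepseek        # produces a .tgz of the package
mkdir -p ~/.n8n/custom
tar -xzf n8n-nodes-azure-deepseek-*.tgz -C ~/.n8n/custom
# Restart n8n afterwards so it scans the directory
```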
Credentials
To use this node, you need to create credentials for the Azure DeepSeek API:
- In n8n, go to Credentials and click Create New
- Search for "Azure DeepSeek API" and select it
- Enter the following details:
- API Key: Your Azure AI Foundry API key
- Endpoint: Your Azure AI Foundry endpoint URL (e.g., `https://your-resource-name.openai.azure.com`)
- Model Deployment Name: The deployment name of your DeepSeek model
- API Version: The API version (default: 2024-05-01-preview)
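To sanity-check these values outside n8n, you can assemble the request URL from the same pieces. The `openai/deployments/.../chat/completions` route is an assumption based on Azure's OpenAI-compatible REST API, and the endpoint and deployment names below are placeholders:

```shell
# Sketch of the request URL built from the credential fields above.
# All names are placeholders; the route is an assumption, not confirmed here.
ENDPOINT="https://your-resource-name.openai.azure.com"
DEPLOYMENT="deepseek-chat"            # hypothetical deployment name
API_VERSION="2024-05-01-preview"
URL="$ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=$API_VERSION"
echo "$URL"
# To test the credentials you could POST a JSON chat payload to $URL
# with an "api-key: <your key>" header, e.g. via curl.
```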
Usage
- Add the "Azure DeepSeek LLM" node to your workflow
- Connect it to a trigger or previous node
- Configure the operation ("Chat Completion" or "LLM Chain" for agent workflows)
- Enter your system prompt and user prompt
- Configure additional parameters as needed (temperature, max tokens, etc.)
- Execute the workflow
Request Timeout Settings
If you experience "504 Gateway Timeout" errors during API requests to Azure:
- In the node settings, expand "Additional Options"
- Find "Request Timeout (ms)" and increase the value (default: 300000 ms / 5 min)
- For complex prompts or long responses, you might need to increase this to 600000 ms (10 min)
Development
If you want to contribute to this node:
- Clone this repository
- Install dependencies: `npm install`
- Build the code: `npm run build`
- Link to your n8n installation: `npm link` (from this directory) and `npm link n8n-nodes-azure-deepseek` (from your n8n installation directory)
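Put together, a typical contribution setup might look like this; `<repository-url>` and `~/.n8n` are placeholders for this repository's URL and your n8n installation directory:

```shell
# Contribution workflow sketch; <repository-url> and ~/.n8n are placeholders.
git clone <repository-url> n8n-nodes-azure-deepseek
cd n8n-nodes-azure-deepseek
npm install                          # install dependencies
npm run build                        # compile the node
npm link                             # register this package globally
cd ~/.n8n                            # your n8n installation directory
npm link n8n-nodes-azure-deepseek    # link the package into n8n
```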