azure-deepseek

n8n node for DeepSeek LLM on Azure AI Foundry

Package Information

Downloads: 29 weekly / 67 monthly
Latest Version: 0.2.6
Author: Sang Duong

Documentation

n8n-nodes-azure-deepseek

This is an n8n community node for interacting with DeepSeek LLM models deployed on Azure AI Foundry.

Features

  • Chat Completion: Generate responses from a deployed DeepSeek model
  • Supports various parameters for controlling the model's behavior:
    • Temperature
    • Max Tokens
    • Top P
    • Frequency and Presence Penalties
    • Streaming capabilities
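The parameters above map onto the fields of a chat-completion request body. A minimal sketch, assuming the OpenAI-compatible chat-completions schema that Azure AI Foundry deployments commonly expose (field names should be checked against your deployment's API reference):

```typescript
// Sketch of a chat-completion request body using the parameters the node exposes.
// Field names assume the OpenAI-compatible chat-completions schema.
interface ChatCompletionRequest {
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  temperature?: number;        // randomness: 0 = near-deterministic, higher = more varied
  max_tokens?: number;         // upper bound on generated tokens
  top_p?: number;              // nucleus-sampling cutoff
  frequency_penalty?: number;  // discourage repeating the same tokens
  presence_penalty?: number;   // discourage revisiting topics already mentioned
  stream?: boolean;            // emit partial deltas instead of one final response
}

const request: ChatCompletionRequest = {
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this workflow run." },
  ],
  temperature: 0.7,
  max_tokens: 512,
  top_p: 0.95,
  stream: false,
};
```

Leaving an optional field unset falls back to the model's server-side default, which is usually the safest choice when first testing a deployment.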

Prerequisites

  • n8n instance (v1.0.0+)
  • Azure account with access to Azure AI Foundry
  • A deployed DeepSeek LLM model on Azure AI Foundry

Installation

Installation via n8n Admin Panel

  1. Go to Settings > Community Nodes
  2. Select Install
  3. Enter n8n-nodes-azure-deepseek in "Enter npm package name"
  4. Click Install

Installation via npm

  1. Go to your n8n installation directory
  2. Run npm install n8n-nodes-azure-deepseek
  3. Start n8n

Troubleshooting Installation

If you encounter a "504 Gateway Timeout" error during installation:

  1. Try installing with npm directly in your n8n directory: npm install n8n-nodes-azure-deepseek
  2. Make sure your n8n instance has a stable internet connection
  3. Try restarting your n8n instance after installation
  4. If using Docker, ensure your container has enough resources (CPU/memory)

For persistent issues, you can install the node manually:

  1. Download the package: npm pack n8n-nodes-azure-deepseek
  2. Extract the contents to your n8n custom nodes directory
  3. Restart n8n

Credentials

To use this node, you need to create credentials for the Azure DeepSeek API:

  1. In n8n, go to Credentials and click Create New
  2. Search for "Azure DeepSeek API" and select it
  3. Enter the following details:
    • API Key: Your Azure AI Foundry API key
    • Endpoint: Your Azure AI Foundry endpoint URL (e.g., https://your-resource-name.openai.azure.com)
    • Model Deployment Name: The deployment name of your DeepSeek model
    • API Version: The API version (default: 2024-05-01-preview)
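These credential fields are typically combined into a single request URL. The path shape below is an assumption based on the Azure OpenAI-style endpoint shown in the example above; confirm it against the target URI displayed for your deployment in Azure AI Foundry:

```typescript
// How the credential fields plausibly combine into a chat-completions URL.
// The /openai/deployments/... path is an assumption for Azure OpenAI-style
// endpoints; verify it against your deployment's target URI.
function buildChatCompletionsUrl(
  endpoint: string,       // Endpoint credential field
  deploymentName: string, // Model Deployment Name credential field
  apiVersion: string,     // API Version credential field
): string {
  const base = endpoint.replace(/\/+$/, ""); // tolerate a trailing slash
  return `${base}/openai/deployments/${deploymentName}/chat/completions` +
    `?api-version=${encodeURIComponent(apiVersion)}`;
}

const url = buildChatCompletionsUrl(
  "https://your-resource-name.openai.azure.com",
  "deepseek-r1", // hypothetical deployment name
  "2024-05-01-preview",
);
// → https://your-resource-name.openai.azure.com/openai/deployments/deepseek-r1/chat/completions?api-version=2024-05-01-preview
```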

Usage

  1. Add the "Azure DeepSeek LLM" node to your workflow
  2. Connect it to a trigger or previous node
  3. Configure the operation ("Chat Completion" or "LLM Chain" for agent workflows)
  4. Enter your system prompt and user prompt
  5. Configure additional parameters as needed (temperature, max tokens, etc.)
  6. Execute the workflow
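For reference, the steps above correspond roughly to the following raw API call. This is a hypothetical sketch, not the node's actual code; the endpoint, deployment name, and response shape are placeholders assuming the OpenAI-compatible chat-completions API:

```typescript
// Rough equivalent of what the node sends when the workflow executes.
// All identifiers here are placeholders, not values from the node itself.
async function chatCompletion(systemPrompt: string, userPrompt: string): Promise<string> {
  const response = await fetch(
    "https://your-resource-name.openai.azure.com/openai/deployments/" +
      "your-deployment/chat/completions?api-version=2024-05-01-preview",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": process.env.AZURE_API_KEY ?? "", // Azure AI Foundry API key
      },
      body: JSON.stringify({
        messages: [
          { role: "system", content: systemPrompt },
          { role: "user", content: userPrompt },
        ],
        temperature: 0.7,
        max_tokens: 512,
      }),
    },
  );
  if (!response.ok) throw new Error(`Azure request failed: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content; // assistant reply text
}
```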

Request Timeout Settings

If you experience "504 Gateway Timeout" errors during API requests to Azure:

  1. In the node settings, expand "Additional Options"
  2. Find "Request Timeout (ms)" and increase the value (default: 300,000 ms, i.e. 5 minutes)
  3. For complex prompts or long responses, you may need to increase this to 600,000 ms (10 minutes)
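Under the hood, a timeout like this amounts to aborting the HTTP request if Azure has not responded within the configured window. A minimal sketch, assuming a Node.js 18+ runtime (where `fetch` and `AbortSignal.timeout` are built in):

```typescript
// Abort the request if Azure has not responded within the window.
// 600_000 ms mirrors the 10-minute value suggested above.
const REQUEST_TIMEOUT_MS = 600_000;

async function fetchWithTimeout(url: string, body: unknown): Promise<unknown> {
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(REQUEST_TIMEOUT_MS), // throws a TimeoutError on expiry
  });
  return response.json();
}
```

A timed-out request surfaces as an abort error rather than a 504, so if you still see 504s after raising this value, the timeout is likely coming from a proxy or gateway in front of n8n instead.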

Development

If you want to contribute to this node:

  1. Clone this repository
  2. Install dependencies: npm install
  3. Build the code: npm run build
  4. Link to your n8n installation: run npm link in this directory, then run npm link n8n-nodes-azure-deepseek from your n8n installation directory

License

MIT
