Package Information
Downloads: 41 weekly / 1,225 monthly
Latest Version: 0.9.1
Author: adcom
n8n-nodes-deepseek-reasoner
DeepSeek R1/Reasoner Chat Model for n8n with proper `reasoning_content` handling for tool calls.
What This Fixes
When using DeepSeek R1 (deepseek-reasoner) with n8n's AI Agent node and tools, you get this error:
Missing reasoning_content field in the assistant message at message index 2
This happens because the built-in DeepSeek Chat Model node uses LangChain's ChatOpenAI, which doesn't preserve the `reasoning_content` field from DeepSeek's API responses. During multi-step tool-calling loops, LangChain converts messages back to API format but drops `reasoning_content`, causing the DeepSeek API to reject the request.
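To make the failure concrete, here is an illustrative sketch (not the node's or LangChain's actual code) of the message history a generic OpenAI-style serializer sends on the second tool-call turn, and a minimal check mirroring the API-side validation that produces the error. The type and function names are assumptions for illustration only:

```typescript
// Shape of a DeepSeek chat-completions message (simplified for illustration).
type ApiMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  reasoning_content?: string; // required on assistant tool-call turns for deepseek-reasoner
  tool_calls?: unknown[];
  tool_call_id?: string;
};

// Second-turn history as a generic serializer emits it: the assistant
// message at index 2 has lost its reasoning_content.
const history: ApiMessage[] = [
  { role: "system", content: "You are a helpful agent." },
  { role: "user", content: "What is the weather in Paris?" },
  { role: "assistant", content: null, tool_calls: [{ id: "call_1" }] }, // field dropped here
  { role: "tool", content: "18°C, sunny", tool_call_id: "call_1" },
];

// Hypothetical helper: find assistant tool-call messages missing reasoning_content.
function findMissingReasoning(messages: ApiMessage[]): number[] {
  return messages
    .map((m, i) =>
      m.role === "assistant" && m.tool_calls && m.reasoning_content === undefined ? i : -1
    )
    .filter((i) => i >= 0);
}

console.log(findMissingReasoning(history)); // → [2], matching "message index 2" in the error
```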
This node fixes that by implementing a custom LangChain BaseChatModel that:
- Captures `reasoning_content` from DeepSeek API responses
- Stores it in the LangChain message's `additional_kwargs`
- Re-includes it when converting messages back to API format on subsequent tool-call turns
How to Use
- Install this community node in your n8n instance
- In your workflow, use the AI Agent node as usual
- Instead of the built-in "DeepSeek Chat Model", connect DeepSeek Reasoner Chat Model as the Chat Model sub-node
- Configure your DeepSeek API credentials
- Connect your tools; the `reasoning_content` error is now fixed!
Workflow Setup
```
[AI Agent] <-- Chat Model: [DeepSeek Reasoner Chat Model]
           <-- Tools:      [Your Tool 1], [Your Tool 2], ...
           <-- Memory:     [Optional Memory Node]
```
Credentials
Create a "DeepSeek Reasoner API" credential with:
- API Key: Your DeepSeek API key from https://platform.deepseek.com/
- Base URL: https://api.deepseek.com (default, change if using a proxy)
Models
| Model | Description |
|---|---|
| deepseek-reasoner | DeepSeek R1; always uses thinking/reasoning mode |
| deepseek-chat | DeepSeek V3; standard chat model |
Options
| Option | Default | Description |
|---|---|---|
| Max Tokens | 8192 | Maximum tokens to generate (up to 65536) |
| Temperature | 0.7 | Controls randomness (ignored by deepseek-reasoner) |
| Top P | 1 | Nucleus sampling (ignored by deepseek-reasoner) |
| Frequency Penalty | 0 | Penalize repeated tokens |
| Presence Penalty | 0 | Penalize tokens already in text |
| Max Retries | 2 | Retry attempts on API failure |
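The options above split into API request parameters and client-side behavior. A hedged sketch of how they might map onto a request body (the `buildBody` helper and `ReasonerOptions` type are illustrative, not the node's actual exports; parameter names follow the OpenAI-style chat-completions convention DeepSeek uses):

```typescript
// Defaults mirroring the options table; maxRetries is client-side only.
interface ReasonerOptions {
  max_tokens: number;
  temperature: number;
  top_p: number;
  frequency_penalty: number;
  presence_penalty: number;
  maxRetries: number;
}

const defaults: ReasonerOptions = {
  max_tokens: 8192,
  temperature: 0.7, // ignored by deepseek-reasoner
  top_p: 1,         // ignored by deepseek-reasoner
  frequency_penalty: 0,
  presence_penalty: 0,
  maxRetries: 2,    // retry attempts on API failure, never sent to the API
};

// Hypothetical helper: assemble the request body, keeping retries client-side.
function buildBody(model: string, opts: ReasonerOptions = defaults) {
  const { maxRetries, ...apiParams } = opts;
  return { model, ...apiParams };
}

console.log(buildBody("deepseek-reasoner").max_tokens); // → 8192
```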
Technical Details
The core fix works by:
- On API response: When DeepSeek returns `reasoning_content` in an assistant message (during thinking/reasoning), the node stores it in `additional_kwargs.reasoning_content` on the LangChain AIMessage
- On next API call: When the AI Agent sends the conversation back for the next tool-call turn, the node's convertMessages method reads `additional_kwargs.reasoning_content` and includes it as a top-level field in the API request body
- Result: The DeepSeek API receives the required `reasoning_content` field and processes the tool-call loop correctly
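The round trip described above can be sketched in a few lines. This is a simplified illustration, not the node's actual implementation: `fromApiResponse` and `toApiMessage` are hypothetical names, and the `AIMessageLike` type stands in for LangChain's AIMessage:

```typescript
// Stand-in for a LangChain AIMessage: extra fields live in additional_kwargs.
type AIMessageLike = {
  content: string | null;
  additional_kwargs: Record<string, unknown>;
  tool_calls?: unknown[];
};

// Step 1 — on API response: capture reasoning_content into additional_kwargs.
function fromApiResponse(resp: {
  content: string | null;
  reasoning_content?: string;
  tool_calls?: unknown[];
}): AIMessageLike {
  const additional_kwargs: Record<string, unknown> = {};
  if (resp.reasoning_content !== undefined) {
    additional_kwargs.reasoning_content = resp.reasoning_content;
  }
  return { content: resp.content, additional_kwargs, tool_calls: resp.tool_calls };
}

// Step 2 — on the next call: re-emit reasoning_content as a top-level field.
function toApiMessage(msg: AIMessageLike): Record<string, unknown> {
  const out: Record<string, unknown> = { role: "assistant", content: msg.content };
  if (msg.tool_calls) out.tool_calls = msg.tool_calls;
  if (msg.additional_kwargs.reasoning_content !== undefined) {
    out.reasoning_content = msg.additional_kwargs.reasoning_content;
  }
  return out;
}

const stored = fromApiResponse({
  content: null,
  reasoning_content: "thinking...",
  tool_calls: [{ id: "call_1" }],
});
console.log(toApiMessage(stored).reasoning_content); // → "thinking..."
```

A generic OpenAI-style serializer omits the step in `toApiMessage` that re-emits the field, which is exactly where the built-in node loses it.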
License
MIT