# n8n-nodes-aot-harness

CHIP + Atom of Thoughts (AoT) for n8n – a multi-provider agent harness with built-in cost tracking.

A community node that brings the AoT-Harness (atomic decomposition + QA loop) into n8n.

One node, one goal – the harness decomposes the task into atoms, runs them in parallel, QA-scores the result, and returns the polished output plus a per-call cost breakdown.
## What's new in v0.3.0 – Multi-Provider

- 5 providers: Anthropic, OpenAI, Google Gemini, Mistral (EU/GDPR), OpenRouter (100+ models, 1 key)
- Mixed-Provider Mode – let a smart model decompose, a cheap one execute. Typical saving: 60–80% at comparable quality.
- Cost tracking in node output – every run reports `cost.total_usd`, tokens, and a breakdown by provider and by model
- Mistral (la Plateforme) – fully EU-hosted, GDPR-friendly, ideal for German/EU SMB workflows

⚠️ Breaking change for existing v0.2.x users – see Migration below.
## Install

In n8n → Settings → Community Nodes → Install:

`n8n-nodes-aot-harness`

Then add the credential(s) for the provider(s) you want to use:
AoT Harness – Anthropic / OpenAI / Google Gemini / Mistral / OpenRouter.
## Quick Start

- Drop AoT Harness into a workflow.
- Pick a Provider (e.g. Anthropic) and a Model (e.g. `claude-sonnet-4-6`).
- Attach the matching credential.
- Set the Goal field to the task in plain language, e.g. "Create a short IDD documentation for a customer, 42 years old, looking for private liability insurance…"
- Run.
The node returns:
```json
{
  "goal": "...",
  "result": "polished final output",
  "qa_score": 0.92,
  "success": true,
  "atoms_done": 4,
  "atoms_total": 4,
  "provider_used": "anthropic",
  "model_used": "claude-sonnet-4-6",
  "cost": {
    "total_usd": 0.0143,
    "total_calls": 6,
    "prompt_tokens": 2480,
    "completion_tokens": 1120,
    "by_provider": { "anthropic": { "cost_usd": 0.0143, "calls": 6, ... } },
    "by_model": { "claude-sonnet-4-6": { "cost_usd": 0.0143, ... } }
  }
}
```
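As a sketch of how the `cost` block can be used downstream, here is a plain function (not part of the node) that checks a run against a spend budget. The function name and the 0.05 USD limit are illustrative assumptions:

```javascript
// Illustrative sketch, not part of the node: guard a workflow against
// runaway spend using the cost block from the sample output above.
// The 0.05 USD budget is an arbitrary example value.
function checkRunCost(output, budgetUsd = 0.05) {
  const { total_usd, total_calls } = output.cost;
  return {
    withinBudget: total_usd <= budgetUsd,
    avgCostPerCall: total_usd / total_calls,
  };
}

// Using the sample run above: $0.0143 over 6 calls, within budget
const check = checkRunCost({
  qa_score: 0.92,
  cost: { total_usd: 0.0143, total_calls: 6 },
});
```

In n8n this logic would typically live in a Code node placed after the harness, branching to an alert when `withinBudget` is false.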
## Mixed-Provider Mode (cost-saver)
Toggle "Enable Mixed-Provider Mode" in the node:
- Decomposer = the smart model that splits the goal into atoms (e.g. Claude Opus / Sonnet)
- Executor = the cheap model that solves each atom and runs QA (e.g. Gemini Flash)
Typical results on a German IDD-documentation task:
| Setup | Cost / run | QA Score | Saving |
|---|---|---|---|
| Single (Claude Sonnet only) | ~$0.014 | 0.92 | – |
| Mixed (Claude Sonnet decompose + Gemini execute) | ~$0.003 | 0.88 | ~78% |
Annualized at 1k runs/month: ~$130/year saved per workflow.
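The saving figure can be re-derived from the per-run costs in the table; a quick sketch of the arithmetic in plain JavaScript, with all numbers taken from the table above:

```javascript
// Re-deriving the ~78% saving and ~$130/year figure from the table.
const singleUsd = 0.014; // Claude Sonnet only, per run
const mixedUsd = 0.003;  // Sonnet decompose + Gemini execute, per run
const runsPerMonth = 1000;

const savingPct = (1 - mixedUsd / singleUsd) * 100;              // ≈ 78.6%
const savedPerYear = (singleUsd - mixedUsd) * runsPerMonth * 12; // ≈ $132
```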
A ready-to-import demo workflow ships in `examples/mixed-provider-cost-saver.json` – it runs both setups against the same goal and reports the delta in a Code node.
## Production demo: insurance-broker claim triage

A full insurance-broker (Versicherungsmakler) use case lives in `examples/schadenmeldung-triage/`:

- Goal: incoming claim email → 2 ready-to-send Gmail drafts (to customer + to insurer) in ~60s
- Features shown: AoT decomposition (4 atoms), Mixed-Provider cost saving, QA gate with HITL fallback (the broker gets a review email if `qa_score < 0.75`)
- Economics: ~€20,000/year savings for a broker handling 50 claims/month – full math in the folder's `README.md`
- Includes: importable workflow, test mail, Supabase seed SQL for automatic policy lookup
This is the workflow to show an interested broker.
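The QA gate with HITL fallback described above boils down to a threshold check on the node output. A minimal sketch follows; the function and branch names are illustrative, not taken from the shipped workflow:

```javascript
// Illustrative sketch of the QA gate: auto-send when the harness run
// succeeded with a high enough score, otherwise fall back to human
// review. The 0.75 threshold comes from the example above.
function routeClaim(harnessOutput, threshold = 0.75) {
  if (harnessOutput.success && harnessOutput.qa_score >= threshold) {
    return "send-drafts";  // both Gmail drafts go out automatically
  }
  return "broker-review";  // HITL fallback: the broker gets a review email
}
```

In n8n, an IF node keyed on `qa_score` plays the same role.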
## Providers & default models

| Provider | Default model | Best for |
|---|---|---|
| Anthropic | `claude-sonnet-4-6` | Decomposer, complex reasoning |
| OpenAI | `gpt-4o` | General-purpose, structured output |
| Google Gemini | `gemini-2.5-flash` | Cheap, fast atom executor |
| Mistral (EU) | `mistral-large-latest` | GDPR-sensitive workloads, EU residency |
| OpenRouter | `anthropic/claude-sonnet-4-6` | One key for 100+ models, model A/B tests |
Per-provider model dropdowns are pre-curated in the node UI.
## How it works

```
Goal
  │
  ▼
[Decomposer LLM]  AoT decomposition → AtomGraph (1–6 atoms, dependency-aware)
  │
  ▼
[Executor LLM]    Atoms run in parallel via Promise.all (where dependencies allow)
  │
  ▼
[Executor LLM]    QA-Agent scores 0–1 (retry on failure)
  │
  ▼
{ result, qa_score, success, atoms_used, cost }
```

In single-provider mode Decomposer = Executor.
In mixed mode they're independent – different provider, different model, different credential.
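The pipeline can be sketched in a few lines. `decompose`, `solve`, and `qa` below stand in for the real LLM calls, and the wave-based scheduling (plus the 0.75 success threshold) is a simplification under stated assumptions, not the node's actual source:

```javascript
// Simplified sketch of the harness loop: decompose the goal, run atoms
// in dependency "waves" via Promise.all, then QA-score the joined result.
// decompose/solve/qa are caller-supplied stand-ins for the LLM calls;
// the 0.75 success threshold is an assumption for illustration.
async function runHarness(goal, { decompose, solve, qa }) {
  const atoms = await decompose(goal); // e.g. [{ id, prompt, deps: [] }, ...]
  const results = {};
  let pending = [...atoms];
  while (pending.length > 0) {
    // every atom whose dependencies are already solved can run in parallel
    const ready = pending.filter((a) => a.deps.every((d) => d in results));
    if (ready.length === 0) throw new Error("cyclic atom dependencies");
    await Promise.all(
      ready.map(async (a) => {
        results[a.id] = await solve(a, results);
      })
    );
    pending = pending.filter((a) => !(a.id in results));
  }
  const result = Object.values(results).join("\n");
  const qa_score = await qa(goal, result);
  return { result, qa_score, success: qa_score >= 0.75 };
}
```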
## Modes

| Mode | Behavior |
|---|---|
| CHIP + AoT | Full pipeline: decompose → solve atoms → QA loop |
| AoT only | Decompose + solve, no QA (faster, less polished) |
| Webhook (Python) | Forwards goal to a running aot-harness Python server. Use this when you want the full Vault/Obsidian pipeline. |
## Migration from v0.2.x

v0.3.0 is a breaking change because the node now requires a provider-specific credential.

After updating:

- Open every workflow that uses AoT Harness.
- Set Provider → `Anthropic` (matches the v0.2.x default behavior).
- Attach the new AoT Harness – Anthropic credential. Re-enter your `ANTHROPIC_API_KEY`.
- Pick a Model from the dropdown (default: `claude-sonnet-4-6`).
- Save & test.
The legacy `aotHarnessApi` credential type is still registered so existing credential entries don't disappear from your n8n credentials list – but the new node doesn't read it. Delete it after migration if you like.
## Credentials

| Credential | Required env var | Notes |
|---|---|---|
| AoT Harness – Anthropic | `ANTHROPIC_API_KEY` | Default provider |
| AoT Harness – OpenAI | `OPENAI_API_KEY` | Optional OpenAI-Organization header |
| AoT Harness – Google Gemini | `GEMINI_API_KEY` | Get one at aistudio.google.com |
| AoT Harness – Mistral (EU) | `MISTRAL_API_KEY` | EU-hosted, GDPR-compliant |
| AoT Harness – OpenRouter | `OPENROUTER_API_KEY` | Optional HTTP-Referer / X-Title |
You only need credentials for providers you actually use.
## Roadmap

- v0.3.1 – per-atom provider override (route specific atoms to specific models)
- v0.4 – Mistral self-hosted via Ollama, ReAct-style tool loops, prompt-cache visibility for non-Anthropic providers
## Based on

- AoT Paper: arXiv:2502.12018 (NeurIPS 2025) – MIT License
- CHIP Architecture: Ronny Schumann
- Harness Concept: Anthropic Engineering, Martin Fowler (2026)
## License
MIT