N8N Tools - Agno Agent
Ultra-fast AI agent powered by the Agno framework: multi-agent systems with ~3 µs agent instantiation, advanced reasoning, memory, and multi-modal capabilities.
Features
Agent Levels (Progressive Complexity)
- Level 1: Basic Agent - tools + instructions with ultra-fast execution
- Level 2: Knowledge Agent - knowledge base + storage capabilities
- Level 3: Reasoning Agent - memory + advanced reasoning capabilities
- Level 4: Team Agent - collaborative multi-agent team systems
- Level 5: Workflow Agent - state management + automation workflows
Performance & Capabilities
- Ultra-fast Performance: ~3 µs agent instantiation (powered by Agno)
- 23+ Model Providers: Anthropic, OpenAI, Google, Groq, Perplexity, Ollama, and more
- Multi-Modal: native support for text, images, audio, and video
- Advanced Reasoning: built-in reasoning tools and logic capabilities
- Smart Memory: conversation context and persistent long-term memory
- Native Tools: 10+ ultra-fast tool categories optimized for performance
- Team Collaboration: multi-agent systems with different collaboration strategies
- Real-time Streaming: live response streaming
- Fallback Protection: automatic fallback to a secondary model
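The fallback behaviour can be pictured as "try the primary model, retry once with the secondary on failure". The sketch below is a generic illustration, not the node's internals; `call_model` and the model names are hypothetical stand-ins:

```python
def call_with_fallback(prompt, primary, fallback, call_model):
    """Try the primary model first; on any provider error, use the fallback.

    `call_model(model, prompt)` is a hypothetical callable that raises on
    provider errors (rate limits, timeouts, outages).
    """
    try:
        return {"model": primary, "response": call_model(primary, prompt)}
    except Exception:
        return {"model": fallback, "response": call_model(fallback, prompt)}


def flaky(model, prompt):
    # Simulated provider: the primary model is down, the fallback works.
    if model == "claude-3-5-sonnet":
        raise RuntimeError("provider unavailable")
    return f"{model}: ok"


result = call_with_fallback("hi", "claude-3-5-sonnet", "gpt-4o-mini", flaky)
print(result["model"])  # gpt-4o-mini
```

A production version would typically retry transient errors before falling back and log which model actually served the request.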
Installation
1. Install the package:
   npm install n8n-nodes-n8ntools-agno-agent
2. Add it to your N8N installation, or restart N8N to load the node.
3. Configure your N8N Tools API credentials in N8N.
Configuration
Required Inputs
- AI Language Model (required): connect any N8N LLM node (Claude, OpenAI, Gemini, etc.)
- Main Data (standard): regular workflow data flow
Required Credentials
- N8N Tools API: get your API key from n8ntools.io
Simplified Architecture
The Agno Agent has just two inputs:
AI Language Model (Required)
- Connect Claude, OpenAI, Gemini, Groq, or any other LLM node
- Automatically inherits the model configuration (temperature, max tokens, etc.)
- Supports 23+ model providers for maximum flexibility
Main Data (Standard)
- Regular data flow from previous nodes
- Used as context and input for agent processing
Native Tools (Built-in)
Unlike traditional agents, Agno uses native ultra-fast tools selected via a dropdown:
- Reasoning Tools - advanced logic and chain-of-thought processing
- Web Search - search engines and web information retrieval
- Data Analysis - statistical analysis and data processing
- Finance Tools - stock prices and market analysis (YFinance)
- Email Tools - send emails and manage communications
- HTTP Request - make API calls and HTTP requests
- File System - cloud storage operations (S3, R2, MinIO, GCS)
- Database - SQL queries and database operations
- Knowledge - knowledge base search and retrieval
- Shell Tools - execute shell commands and scripts
Agent Types
Basic Agent (Level 1)
Simple agent with tools and instructions - ultra-fast execution for straightforward tasks.
Knowledge Agent (Level 2)
Agent with knowledge base integration and storage capabilities for information retrieval.
Reasoning Agent (Level 3)
Advanced agent with memory and reasoning capabilities for complex problem-solving.
Team Agent (Level 4)
Collaborative multi-agent team system with different strategies:
- Sequential: agents work in sequence
- Parallel: agents work simultaneously
- Hierarchical: manager-worker structure
- Democratic: consensus-based decisions
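As a rough illustration (not the node's implementation), the sequential and parallel strategies differ in how agent outputs feed each other: a pipeline versus a fan-out. Plain functions stand in for specialist agents here:

```python
from concurrent.futures import ThreadPoolExecutor


def run_sequential(agents, task):
    """Each agent receives the previous agent's output (a pipeline)."""
    result = task
    for agent in agents:
        result = agent(result)
    return result


def run_parallel(agents, task):
    """All agents receive the same task at once; results are collected in order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(task), agents))


# Toy "agents": plain functions standing in for specialist roles.
researcher = lambda t: f"research({t})"
writer = lambda t: f"write({t})"

print(run_sequential([researcher, writer], "topic"))  # write(research(topic))
print(run_parallel([researcher, writer], "topic"))    # ['research(topic)', 'write(topic)']
```

Hierarchical and democratic strategies layer coordination on top of these two primitives: a manager agent routing subtasks, or a voting step over parallel results.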
Workflow Agent (Level 5)
Agentic workflow with state management and automation:
- Linear: step-by-step execution
- Branching: conditional paths
- Loop: iterative processing
- State Machine: complex state management
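A state-machine workflow can be pictured as a transition table driven by each step's outcome. This is a generic sketch of the pattern, not the node's workflow engine; the state and outcome names are made up:

```python
def run_state_machine(transitions, handlers, state="start"):
    """Run step handlers until reaching the terminal 'done' state.

    `handlers[state]()` returns an outcome label; `transitions[(state, outcome)]`
    names the next state.
    """
    visited = []
    while state != "done":
        visited.append(state)
        outcome = handlers[state]()
        state = transitions[(state, outcome)]
    return visited


transitions = {
    ("start", "ok"): "review",
    ("review", "approved"): "done",
    ("review", "rejected"): "start",
}
handlers = {"start": lambda: "ok", "review": lambda: "approved"}
print(run_state_machine(transitions, handlers))  # ['start', 'review']
```

Linear, branching, and loop workflows are all special cases of this table: one outgoing edge per state, outcome-dependent edges, or an edge pointing back to an earlier state.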
Cloud Storage Support
File System Tools support multiple cloud storage providers:
- Amazon S3 - native AWS S3 storage
- Cloudflare R2 - cost-effective S3-compatible storage
- MinIO - self-hosted S3-compatible storage
- Google Cloud Storage - GCS with an S3-compatible API
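All four providers speak the S3 API, so a client typically differs only in the endpoint URL it targets. The endpoint shapes below follow each provider's public documentation, but the account ID and MinIO host are placeholders, and this helper is illustrative rather than part of the node:

```python
def s3_endpoint(storage_type, region="us-east-1", account_id=None, minio_host=None):
    """Pick the S3-compatible endpoint URL for each supported provider."""
    if storage_type == "s3":
        return f"https://s3.{region}.amazonaws.com"
    if storage_type == "r2":
        # Cloudflare R2 endpoints are scoped to the account, not a region.
        return f"https://{account_id}.r2.cloudflarestorage.com"
    if storage_type == "minio":
        # Self-hosted: whatever host the deployment exposes, e.g. "https://minio.internal:9000".
        return minio_host
    if storage_type == "gcs":
        # GCS exposes an S3-interoperable XML API at a single global endpoint.
        return "https://storage.googleapis.com"
    raise ValueError(f"unknown storage type: {storage_type}")


print(s3_endpoint("s3", region="eu-west-1"))  # https://s3.eu-west-1.amazonaws.com
```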
Examples
Simple Setup (Current Architecture)

    {
      "workflow": "Ultra-fast Agno Agent",
      "nodes": {
        "llm": "Claude 3.5 Sonnet node → AI Language Model input",
        "agno_agent": {
          "agentType": "reasoningAgent",
          "instructions": "You are a research assistant with web search and reasoning capabilities.",
          "message": "Research quantum computing developments and analyze the data",
          "agnoNativeTools": ["reasoning", "webSearch", "dataAnalysis"],
          "enableMemory": true,
          "memoryKey": "quantum_research"
        }
      }
    }
Enterprise File Processing

    {
      "agentType": "basicAgent",
      "instructions": "Process documents from cloud storage and analyze content.",
      "message": "Extract insights from uploaded reports",
      "agnoNativeTools": ["fileSystem", "dataAnalysis", "reasoning"],
      "toolsConfig": {
        "fileSystemConfig": {
          "storageType": "s3",
          "s3BucketName": "company-documents",
          "s3Region": "us-east-1"
        }
      }
    }
Team Agent Configuration

    {
      "agentType": "teamAgent",
      "teamConfig": {
        "teamSize": 3,
        "collaborationStrategy": "sequential",
        "specialistRoles": ["researcher", "contentWriter", "designer"]
      }
    }
Advanced Options

    {
      "advancedOptions": {
        "temperature": 0.7,
        "maxTokens": 4000,
        "enableMemory": true,
        "memoryKey": "conversation_1",
        "enableStreaming": true,
        "outputFormat": "markdown",
        "fallbackModel": "gpt-4o-mini"
      }
    }
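One way to reason about these options is as overrides merged over defaults: anything you omit keeps its default value. The default values below are assumptions for illustration, not the node's documented defaults:

```python
# Assumed defaults, for illustration only.
DEFAULTS = {
    "temperature": 0.7,
    "maxTokens": 2048,
    "enableMemory": False,
    "enableStreaming": False,
    "outputFormat": "text",
}


def resolve_options(overrides):
    """Merge user-supplied advanced options over the defaults."""
    options = dict(DEFAULTS)
    options.update(overrides)
    return options


opts = resolve_options({"maxTokens": 4000, "enableMemory": True})
print(opts["maxTokens"], opts["temperature"])  # 4000 0.7
```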
Model Providers
Supported Providers
- Anthropic: Claude Sonnet, Haiku, Opus
- OpenAI: GPT-4, GPT-3.5, o1 models
- Google: Gemini Pro, Flash, Ultra
- Groq: ultra-fast inference
- Perplexity: search-powered responses
- Ollama: local models (Llama, Mistral, etc.)
- 20+ others: including Azure OpenAI, AWS Bedrock, Hugging Face, etc.
Response Format

    {
      "response": "Agent's response text",
      "usage": {
        "promptTokens": 150,
        "completionTokens": 300,
        "totalTokens": 450
      },
      "model": "claude-3-5-sonnet-20241022",
      "agentType": "basicAgent",
      "executionTime": 1250,
      "reasoning": ["Step 1: Analysis", "Step 2: Synthesis"],
      "toolsUsed": ["reasoning", "dataAnalysis"],
      "memoryKey": "conversation_1"
    }
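Downstream nodes can consume this output as a plain JSON object. A quick sanity check on the usage block, mirroring the sample values above, might look like:

```python
import json

# A trimmed copy of the sample response shown above.
raw = """{
  "response": "Agent's response text",
  "usage": {"promptTokens": 150, "completionTokens": 300, "totalTokens": 450},
  "model": "claude-3-5-sonnet-20241022",
  "executionTime": 1250
}"""

result = json.loads(raw)
usage = result["usage"]
# totalTokens should equal prompt tokens plus completion tokens.
assert usage["totalTokens"] == usage["promptTokens"] + usage["completionTokens"]
print(result["model"], usage["totalTokens"])  # claude-3-5-sonnet-20241022 450
```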
Performance
- Agent Instantiation: ~3 microseconds (Agno framework optimization)
- Memory Usage: ~6.5 KiB per agent instance
- Concurrent Agents: supports hundreds of simultaneous agents
- Response Time: optimized for sub-second responses
vs N8N LangChain Agent
| Feature | N8N LangChain Agent | N8N Tools Agno Agent |
|---|---|---|
| Performance | Standard (~200 ms) | Ultra-fast (~3 µs instantiation) |
| Model Providers | 15+ | 23+ |
| Multi-Modal | Limited | Native support |
| Memory | External connection required | Built-in + external |
| Team Agents | No | Yes (Level 4) |
| Reasoning | Basic | Advanced built-in |
| Tool System | N8N tool connections | Native ultra-fast tools |
| Architecture | 4 inputs (LLM, Memory, Tools, Main) | 2 inputs (LLM, Main) |
| Workflow Agents | No | Yes (Level 5) |
| Cloud Storage | No | S3, R2, MinIO, GCS |
| Cost Model | Free | Performance tiers |
Architecture Comparison
N8N LangChain Agent (complex):
- Inputs: LLM + Memory + Tools + Main Data (4 connections)
- Processing: LangChain framework (slower)
- Setup: multiple node connections required
N8N Tools Agno Agent (simplified):
- Inputs: LLM + Main Data (2 connections)
- Processing: Agno framework (~3 µs agent instantiation)
- Setup: single connection, native tools built-in
- Memory: built-in + configurable persistence
- Tools: native ultra-fast tools via dropdown
Support
- Community: Discord
- Documentation: docs.n8ntools.io
- Issues: GitHub Issues
- Email: support@n8ntools.io
License
MIT License - see the LICENSE file for details.
Powered by the Agno Framework | Created by N8N Tools