Overview
This node, "N8N Tools - CrewAI Agent," enables users to create and manage AI agents with specialized roles and capabilities. It lets you define an agent's role, goal, and backstory, then uses either native AI models or a connected external model to generate responses or perform tasks accordingly.
Common scenarios include:
- Automating complex workflows by delegating tasks to AI agents with specific expertise (e.g., a Senior Data Analyst agent).
- Creating conversational agents that maintain context and follow a defined goal.
- Integrating advanced AI capabilities into n8n workflows using either built-in models or external LangChain-compatible models.
Practical examples:
- Defining an agent as a "Marketing Strategist" tasked with generating campaign ideas based on a detailed backstory.
- Using the node to run iterative problem-solving steps with verbose logging and memory enabled for better context retention.
- Connecting a custom LangChain model to leverage specialized AI providers or configurations beyond native options.
Properties
| Name | Meaning |
|---|---|
| Model Selection | Choose between using native N8N Tools AI models or connected LangChain-compatible external models. Options: "Native Model (N8N Tools)", "Connected Model (LangChain)" |
| Model Provider | Select the AI provider when using native models. Options: "OpenAI", "Anthropic (Claude)" |
| Model | Select the specific AI model to use, depending on the provider.<br>- For OpenAI: "GPT-4", "GPT-4 Turbo", "GPT-3.5 Turbo"<br>- For Anthropic: "Claude 3.5 Sonnet", "Claude 3 Opus", "Claude 3 Haiku" |
| Temperature | Controls randomness in the AI output; ranges from 0 (deterministic) to 2 (very random). Default is 0.7. |
| Max Tokens | Maximum number of tokens to generate in the AI response. Range: 1 to 8192. Default is 1000. |
| Connected Model Info | Informational notice displayed when using connected LangChain models, explaining how to connect external LLM providers or specialized models. |
| Role | The role assigned to the AI agent (e.g., "Senior Data Analyst"). This defines the agent’s persona and expertise. Required field. |
| Goal | The objective or task the agent should achieve. Required field. |
| Backstory | Contextual background information about the agent to guide its behavior. Required field. |
| Advanced Options | Collection of optional settings:<br>- Allow Delegation (boolean): Whether the agent can delegate tasks.<br>- Verbose (boolean): Enable detailed logging.<br>- Memory (boolean): Enable memory for the agent.<br>- Max Execution Time (number): Maximum seconds allowed per task.<br>- Max Iterations (number): Maximum iterations the agent can perform.<br>- System Message (string): Additional system instructions.<br>- Step Callback (string): Webhook URL for step-by-step execution callbacks. |
| Report Issue | Notice with links to report bugs or get help. |
| Request Feature | Notice with links to request new features or enhancements. |
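Taken together, a full agent configuration might look like the sketch below. The property names mirror the table above, but the exact internal parameter keys used by the node are assumptions, and all values are illustrative.

```javascript
// Hypothetical configuration sketch; keys mirror the properties table above,
// but the node's real internal parameter names may differ.
const agentConfig = {
  modelSelection: 'native',      // "Native Model (N8N Tools)" vs. "Connected Model (LangChain)"
  modelProvider: 'anthropic',    // or 'openai'
  model: 'claude-3-5-sonnet',
  temperature: 0.7,              // 0 (deterministic) to 2 (very random)
  maxTokens: 1000,               // 1 to 8192
  role: 'Senior Data Analyst',   // required
  goal: 'Summarize weekly sales data and flag anomalies',      // required
  backstory: 'Ten years of experience in retail analytics.',   // required
  advancedOptions: {
    allowDelegation: false,
    verbose: true,
    memory: true,
    maxExecutionTime: 120,       // seconds per task
    maxIterations: 5,
    systemMessage: 'Always answer in concise bullet points.',
    stepCallback: 'https://example.com/webhook/steps',  // hypothetical URL
  },
};
```

Note that Role, Goal, and Backstory are the only required fields; everything under Advanced Options can be omitted.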
Output
The node outputs JSON data representing the created or managed CrewAI agent(s). Each output item corresponds to one input item and contains the API response from the CrewAI service, which includes details about the agent configuration and status.
If errors occur during execution and "Continue On Fail" is enabled, the output will contain an error message object instead.
No binary data output is produced by this node.
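The per-item output behavior described above can be sketched as follows. The response and error field names are assumptions (the CrewAI API's actual response schema is not documented here); the sketch only illustrates how "Continue On Fail" changes what reaches the output.

```javascript
// Illustrative sketch of the node's per-item output behavior.
// The apiResponse shape and the error-object key are assumptions.
function wrapOutput(apiResponse, error, continueOnFail) {
  if (error) {
    if (!continueOnFail) {
      throw error; // "Continue On Fail" disabled: the workflow stops here
    }
    // "Continue On Fail" enabled: an error object replaces the response
    return { json: { error: error.message } };
  }
  // Success: one output item per input item, carrying the API response
  return { json: apiResponse };
}
```

Downstream nodes can therefore check for an `error` key to route failed items separately from successful ones.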
Dependencies
- Requires an API key credential for the CrewAI API service.
- When using native models, it depends on the selected AI provider (OpenAI or Anthropic) configured via the CrewAI backend.
- For connected models, it requires a LangChain-compatible AI language model node connected as input.
- No additional environment variables are explicitly required within the node itself.
Troubleshooting
- Missing Required Fields: The node validates that "Role," "Goal," and "Backstory" are provided. Omitting these will cause errors. Ensure these fields are filled.
- API Request Failures: Errors from the CrewAI API may occur due to invalid credentials, network issues, or exceeding rate limits. Verify API keys and connectivity.
- Model Selection Mismatch: Selecting "Connected Model (LangChain)" without attaching a compatible LangChain model node as input may lead to unexpected behavior.
- Token Limits: Setting "Max Tokens" too high might cause API rejections or increased latency.
- Verbose Logging: Enabling verbose mode can help diagnose issues by providing detailed logs.
- Continue On Fail: If disabled, any error will stop the workflow execution.
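The required-field check described in the first troubleshooting item can be sketched as below. This is not the node's actual validation code, only a hypothetical illustration of its documented behavior.

```javascript
// Hypothetical sketch of the documented required-field validation:
// Role, Goal, and Backstory must all be non-empty.
function validateAgentFields(params) {
  const missing = ['role', 'goal', 'backstory'].filter(
    (key) => !params[key] || params[key].trim() === ''
  );
  if (missing.length > 0) {
    throw new Error(`Missing required field(s): ${missing.join(', ')}`);
  }
}
```

If the node reports a missing-field error, confirm all three fields are filled and contain more than whitespace.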