AI Model Overview

Get detailed information about multiple AI models including pricing, max tokens, and supported parameters.

Overview

The AI Model Overview node fetches detailed information about multiple AI models from an external API. It retrieves data such as pricing, maximum tokens, supported parameters, and available endpoints for each specified model. This is useful when you want to compare or analyze different AI models' capabilities and configurations programmatically within an n8n workflow.

Common scenarios:

  • Comparing features and limits of various AI models before selecting one for a project.
  • Dynamically fetching up-to-date model details to inform downstream processing or decision-making.
  • Monitoring changes in model offerings or pricing over time.

Practical example:
You could use this node to query several OpenAI or OpenRouter models at once, then route the output to a dashboard or notification system that alerts your team if any model's pricing or token limits change.
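The change-detection step in this scenario could be sketched as a small helper that compares a stored snapshot of labeled overviews against a fresh fetch (the function and field names here are illustrative assumptions, not part of the node):

```javascript
// Hypothetical sketch: given two "Labeled"-format snapshots (model ID -> overview),
// collect the IDs whose pricing or token limits changed, or that are newly listed.
function findChangedModels(previous, current) {
  return Object.keys(current).filter((id) => {
    const prev = previous[id];
    if (!prev) return true; // newly listed model
    // pricing is assumed to be an object, so compare serialized forms
    return (
      JSON.stringify(prev.pricing) !== JSON.stringify(current[id].pricing) ||
      prev.max_tokens !== current[id].max_tokens
    );
  });
}
```

The resulting ID list could then feed a notification node that alerts your team.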


Properties

  • Models — The AI models to fetch information for. You can add multiple models by selecting from a list or entering their IDs manually. The default is "openai/gpt-5".
  • Output Format — Defines how the output data is structured:
      - One Item: Single item containing an array of all model overviews.
      - Multiple Items: Separate output item for each model overview.
      - Labeled: Single item with each model overview labeled by its model ID (default).

Output

The node outputs JSON data containing detailed overviews of the requested AI models. The structure depends on the selected output format:

  • One Item:
    {
      "model_overview": [
        { /* overview data for model 1 */ },
        { /* overview data for model 2 */ },
        ...
      ]
    }
    
  • Multiple Items:
    Each output item corresponds to one model overview:
    {
      /* overview data for a single model */
    }
    
  • Labeled:
    A single item where each key is a model ID and the value is the corresponding overview data:
    {
      "modelId1": { /* overview data */ },
      "modelId2": { /* overview data */ },
      ...
    }
    

The overview data includes fields returned by the external API endpoint /api/v1/models/{model}/endpoints, typically encompassing pricing, max tokens, supported parameters, and other metadata.

The node does not output binary data.
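The three output shapes above can be sketched as a small formatting helper. This is a hypothetical reconstruction for illustration (function and option names are assumptions, not the node's actual source):

```javascript
// Hypothetical sketch of how the three output formats could be assembled
// from raw overview objects returned by the API.
function formatOverviews(modelIds, overviews, outputFormat) {
  switch (outputFormat) {
    case 'oneItem':
      // Single item wrapping all overviews in one array.
      return [{ json: { model_overview: overviews } }];
    case 'multipleItems':
      // One output item per model overview.
      return overviews.map((overview) => ({ json: overview }));
    case 'labeled':
    default: {
      // Single item keyed by model ID (the default format).
      const labeled = {};
      modelIds.forEach((id, i) => {
        labeled[id] = overviews[i];
      });
      return [{ json: labeled }];
    }
  }
}
```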


Dependencies

  • Requires an API key credential for authentication with the external service (OpenRouter or similar).
  • Makes HTTP GET requests to https://openrouter.ai/api/v1/models/{model}/endpoints to retrieve model details.
  • Uses internal helper methods for HTTP requests and resource locator UI components for model selection.
  • No additional environment variables are explicitly required beyond the API credential.
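The per-model request described above could look roughly like this. The endpoint path comes from this document; the `Authorization: Bearer` header is an assumption about the credential, and the function name is illustrative:

```javascript
// Hypothetical sketch of the HTTP GET request the node issues per model.
// Model IDs such as "openai/gpt-5" contain a slash and appear in the path as-is.
function buildEndpointRequest(modelId, apiKey) {
  return {
    method: 'GET',
    url: `https://openrouter.ai/api/v1/models/${modelId}/endpoints`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```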

Troubleshooting

Common issues:

  • No models specified error: If no models are provided in the input, the node throws an error indicating that at least one model must be specified.
  • HTTP request failures: Network issues, invalid API keys, or incorrect model IDs may cause request failures. The node logs errors and returns error messages per model.
  • Output format confusion: Selecting an output format inconsistent with downstream nodes might cause data handling issues — for example, a downstream node that expects one item per model will not work as intended with the Labeled format.

Error messages and resolutions:

  • "No models specified": Ensure you add at least one model in the "Models" property.
  • "Failed to fetch model overview" or other HTTP errors: Verify your API key credential is valid and has access to the external API. Check network connectivity and confirm the model ID is correct.
  • If the node fails but "Continue On Fail" is enabled, it will output error details in the chosen output format instead of stopping the workflow.
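The "Continue On Fail" behavior can be sketched as a per-model wrapper that turns a failure into a data item instead of stopping the workflow (a simplified synchronous sketch with hypothetical names; the real node works asynchronously):

```javascript
// Hypothetical sketch: wrap each model fetch so a failure becomes an
// error entry in the output rather than aborting the whole run.
function fetchOverviewSafe(modelId, fetchFn) {
  try {
    return { model: modelId, overview: fetchFn(modelId) };
  } catch (err) {
    return { model: modelId, error: err.message || 'Failed to fetch model overview' };
  }
}
```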
