GPT4All

Runs a prompt against a local GPT4All instance

Overview

This n8n node allows you to run a prompt against a local GPT4All instance. It is designed for scenarios where you want to leverage a locally hosted large language model (LLM) for text generation, completion, or conversational AI tasks directly within your n8n workflows—without relying on external cloud-based APIs.

Common use cases:

  • Automating content generation or summarization.
  • Building chatbots or virtual assistants that operate entirely offline.
  • Integrating AI-powered responses into data processing pipelines.

Example:
You could use this node to generate product descriptions from a list of features, summarize customer support tickets, or create automated replies in a workflow—all powered by your own local GPT4All model.

Properties

  • Prompt (String): The input text or question you want the GPT4All model to process and respond to.
  • Thread Count (Number): The number of threads to allocate for running GPT4All; affects performance and speed. Minimum value is 1.
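
As an illustration, the two properties above might be declared roughly as follows in the node's description. This is a hedged sketch: the internal names, defaults, and `typeOptions` shown here are assumptions, not taken from the actual node source.

```typescript
// Hypothetical property declarations mirroring the table above.
// Field names and defaults are assumptions for illustration only.
const properties: any[] = [
  {
    displayName: 'Prompt',
    name: 'prompt',
    type: 'string',
    default: '',
    description: 'The input text or question for the GPT4All model',
  },
  {
    displayName: 'Thread Count',
    name: 'threadCount',
    type: 'number',
    typeOptions: { minValue: 1 }, // the documented minimum of 1 thread
    default: 4, // assumed default, not confirmed by the node source
    description: 'Number of CPU threads to allocate for GPT4All',
  },
];
```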

Output

The node outputs an object with at least the following structure in the json field:

  • output: The generated response from the GPT4All model, as a string. Any terminal color (ANSI escape) codes are removed from the output.
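
The color-code cleanup can be sketched as a small string transform. The exact regular expression the node uses is an assumption; this version removes the common CSI color sequences (e.g. `\u001b[32m` … `\u001b[0m`) that terminal-oriented tools emit.

```typescript
// Strip terminal color (ANSI escape) codes from the raw model output
// before returning it in the `output` field. The regex here is an
// assumption, not necessarily the one the node uses internally.
function stripAnsi(text: string): string {
  // Matches CSI color sequences such as "\u001b[32m" (set color)
  // and "\u001b[0m" (reset attributes).
  return text.replace(/\u001b\[[0-9;]*m/g, '');
}

const raw = '\u001b[32mHello from GPT4All\u001b[0m';
const output = stripAnsi(raw); // "Hello from GPT4All"
```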

If an error occurs and "Continue On Fail" is enabled, the output may also include:

  • error: Error information if the prompt execution failed for a particular item.
  • pairedItem: Index reference to the original input item.
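
The two item shapes described above can be illustrated as follows. This is a simplified sketch: where `pairedItem` sits relative to the `json` wrapper is an assumption based on common n8n conventions, not confirmed by the node source.

```typescript
// Illustrative shapes of the items the node emits.
// Field names follow the tables above; the wrapper is simplified.
const successItem = {
  json: { output: 'Generated response text' }, // cleaned model response
};

const failedItem = {
  json: { error: 'Cannot open gpt model' }, // per-item error information
  pairedItem: 0, // index of the original input item that failed
};
```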

Dependencies

  • External Service: Requires a local installation of the GPT4All model (gpt4all-lora-quantized).
  • Node.js Package: Depends on the gpt4all npm package.
  • System Resources: Sufficient CPU resources to run multiple threads as specified by "Thread Count".
  • n8n Configuration: No special environment variables required, but the local GPT4All model must be accessible.

Troubleshooting

Common Issues:

  • Model Initialization Failure: If the GPT4All model files are missing or corrupted, initialization will fail.
  • Insufficient Resources: Setting a high thread count without enough CPU cores may cause slowdowns or errors.
  • Prompt Errors: If the prompt is empty or malformed, the model may return unexpected results or errors.

Error Messages:

  • "Cannot open gpt model": Indicates issues accessing the model file. Ensure the model is correctly installed and the path is correct.
  • "Execute prompt <prompt>": This message echoes the prompt being executed. If it is followed by an error, check the prompt content and system resource usage.
  • If "Continue On Fail" is enabled, errors are included in the output under the error field; otherwise, the workflow will stop and display the error.
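
The "Continue On Fail" behavior described above can be sketched as per-item try/catch handling. The helper names below (`runPrompt`, `execute`) are hypothetical, and the failure condition is a placeholder; the sketch only shows how errors either stop the run or become `error`/`pairedItem` fields on the output item.

```typescript
type Item = { json: Record<string, unknown>; pairedItem?: number };

// Hypothetical stand-in for the real GPT4All call; the failure
// condition here is a placeholder for illustration only.
function runPrompt(prompt: string): string {
  if (prompt.trim() === '') throw new Error('Cannot open gpt model');
  return `response to: ${prompt}`;
}

// Hedged sketch of the node's per-item error handling.
function execute(prompts: string[], continueOnFail: boolean): Item[] {
  const results: Item[] = [];
  prompts.forEach((prompt, index) => {
    try {
      results.push({ json: { output: runPrompt(prompt) } });
    } catch (err) {
      if (!continueOnFail) throw err; // stop the workflow, surface the error
      results.push({
        json: { error: (err as Error).message },
        pairedItem: index, // point back at the failing input item
      });
    }
  });
  return results;
}
```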

Resolution Steps:

  • Verify the GPT4All model is properly installed and accessible.
  • Adjust the "Thread Count" according to your machine's capabilities.
  • Check the prompt input for correctness.
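
Adjusting "Thread Count" to the machine's capabilities can be done programmatically. The helper below is a hypothetical sketch (not part of the node) that clamps a requested thread count between the documented minimum of 1 and the number of available CPU cores.

```typescript
import * as os from 'node:os';

// Clamp a requested thread count to [1, number of CPU cores].
// Hypothetical helper for illustration; not part of the node itself.
const clampThreads = (requested: number): number =>
  Math.max(1, Math.min(requested, os.cpus().length));
```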
