N8N Tools - GraphRAG

AI Tool for GraphRAG operations: document processing, knowledge graphs, vector search, and intelligent analysis. Can be used by AI agents.

Actions: 9

Overview

The node implements an AI-powered tool called "GraphRAG" designed for advanced document processing and knowledge graph operations. Specifically, the Document Processing - Process operation lets users supply textual documents or data and run operations such as chunking, entity extraction, vectorization, and integration with external vector and graph databases. It can also enhance results with Large Language Models (LLMs) for deeper insights.

This node is beneficial in scenarios like:

  • Processing large documents by splitting them into manageable chunks.
  • Extracting meaningful entities and relationships from text.
  • Storing and querying document embeddings in vector databases.
  • Building and querying knowledge graphs.
  • Enhancing document understanding with LLMs for summarization, question answering, or semantic search.
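The chunking step above is controlled by the Chunk Size and Chunk Overlap options. As an illustration of the idea (not the node's actual implementation, whose unit and algorithm are not documented here), a character-based sliding-window chunker might look like this:

```python
def chunk_text(text: str, chunk_size: int = 200, chunk_overlap: int = 50) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    where consecutive chunks share chunk_overlap characters.
    Illustrative only; the GraphRAG node's real chunker may differ."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

With a 500-character input, `chunk_size=200`, and `chunk_overlap=50`, this yields four chunks whose boundaries overlap by 50 characters, which helps keep sentences that straddle a boundary retrievable from both chunks.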

Practical examples include:

  • Ingesting customer support transcripts, chunking them, and storing embeddings for fast semantic search.
  • Extracting named entities from research papers and building a knowledge graph for relationship exploration.
  • Using LLM enhancement to generate summaries or answer questions based on uploaded documents.
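The semantic-search use case rests on ranking stored embeddings by similarity to a query embedding. A minimal sketch of that ranking step, using cosine similarity over an in-memory store (the vector databases listed below do this at scale; the store shape here is purely illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query: list[float],
           store: list[tuple[str, list[float]]],
           top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k (doc_id, score) pairs ranked by similarity to the query."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in store]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```

A query vector identical in direction to a stored vector scores 1.0 and ranks first; this mirrors what the Result Limit option caps on the node side.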

Properties

  • Input Data: The raw document text or data that you want to process.
  • Additional Options: Collection of optional parameters:
    • Language: Processing language (English, Portuguese, Spanish, French).
    • Result Limit: Maximum number of results to return.
    • Chunk Size: Size of each text chunk.
    • Chunk Overlap: Overlap between consecutive chunks.
  • Database Configuration: Settings to connect and configure external vector and graph databases:
    • Vector Database: Provider (Local FAISS/ChromaDB, Pinecone, Weaviate, Qdrant, Milvus).
    • Vector DB Connection URL and API Key (for external databases).
    • Graph Database: Provider (Local NetworkX, Neo4j, ArangoDB, Amazon Neptune).
    • Graph DB connection details: URL, username, password, database name.
    • AWS credentials and region (for Amazon Neptune).
  • 🤖 LLM Enhancement: Large Language Model options:
    • Enable LLM Enhancement: Toggle LLM-powered analysis.
    • LLM Provider: N8N Tools internal, OpenAI, or Anthropic.
    • LLM Model: Specific model selection, depending on the provider.

Output

The node outputs a JSON object containing the response from the GraphRAG API. This response includes processed document data, analysis results, extracted entities, embeddings, or knowledge graph information depending on the operation and configuration.

  • The output JSON structure varies but generally contains the processed results relevant to the input document and selected options.
  • No binary data output is indicated.

Example output snippet (conceptual):

{
  "processedText": "...",
  "entities": [...],
  "embeddings": [...],
  "graphData": {...},
  "llmAnalysis": {...}
}
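Downstream workflow steps would consume this JSON like any other n8n item. As a sketch, assuming a payload shaped like the conceptual snippet above (the key names and sample values here are illustrative, not guaranteed by the API):

```python
import json

# Illustrative payload; real field names and values may differ.
raw = '''{
  "processedText": "Acme Corp acquired Beta Ltd in 2021.",
  "entities": [{"name": "Acme Corp", "type": "ORG"},
               {"name": "Beta Ltd", "type": "ORG"}],
  "graphData": {"edges": [["Acme Corp", "acquired", "Beta Ltd"]]}
}'''

result = json.loads(raw)

# Pull out organization entities and graph triples, defaulting to empty
# collections so missing keys do not raise.
orgs = [e["name"] for e in result.get("entities", []) if e.get("type") == "ORG"]
triples = result.get("graphData", {}).get("edges", [])
```

Using `.get(...)` with defaults is a sensible precaution here, since the output structure varies by operation and configuration.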

Dependencies

  • Requires an API key credential for the N8N Tools GraphRAG service.
  • Optional external dependencies based on configuration:
    • Vector databases: Local FAISS/ChromaDB or cloud providers like Pinecone, Weaviate, Qdrant, Milvus.
    • Graph databases: Local NetworkX or remote services like Neo4j, ArangoDB, Amazon Neptune.
  • For Amazon Neptune, AWS credentials and region must be provided.
  • If LLM enhancement is enabled, access to the chosen LLM provider's API is required (e.g., OpenAI or Anthropic).

Troubleshooting

  • Common issues:
    • A missing or invalid API key for the GraphRAG service causes authentication errors.
    • Incorrect or incomplete database connection details can lead to connection failures.
    • Exceeding rate limits or quotas on LLM providers can cause request rejections.
    • Improperly formatted input data can result in processing errors.
  • Error messages:
    • "GraphRAG Tool error: <message>" indicates an issue during API interaction or processing.
    • Authentication errors suggest checking API keys and credentials.
    • Connection errors to vector or graph databases require verifying URLs, credentials, and network accessibility.
    • Validation errors on input properties mean some required fields are missing or invalid.
  • Resolutions:
    • Ensure all required credentials and connection parameters are correctly set.
    • Validate input data format and size constraints.
    • Monitor usage quotas on external APIs.
    • Use the node's "Continue On Fail" option to handle errors gracefully in workflows.

Links and References


This summary is based solely on static code analysis of the bundled source and provided property definitions.
