N8N Tools - GraphRAG

AI Tool for GraphRAG operations: document processing, knowledge graphs, vector search, and intelligent analysis. Can be used by AI agents.

Overview

The "Vector Search - Embed" operation of this node generates vector embeddings from input text data. It processes document text or data by splitting it into chunks, embedding those chunks into a vector space, and optionally storing or querying them in external vector and graph databases. This enables semantic search, similarity matching, and advanced AI-driven analysis.

Common scenarios include:

  • Creating vector representations of documents for semantic search.
  • Indexing large text corpora to enable fast similarity queries.
  • Enhancing knowledge graphs with embedded document vectors.
  • Using Large Language Models (LLMs) to enrich embeddings with deeper insights.

Practical example: You have a collection of product descriptions and want to embed them so you can later perform semantic searches to find similar products based on user queries.
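The chunking step described above can be sketched as follows. The node does not document its exact splitting strategy, so this is a minimal sliding-window sketch assuming character-based chunks with the Chunk Size and Chunk Overlap options:

```python
def chunk_text(text, chunk_size=500, chunk_overlap=50):
    """Split text into overlapping chunks.

    Mirrors the node's Chunk Size / Chunk Overlap options; the defaults here
    are illustrative, not the node's actual defaults.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        end = start + chunk_size
        chunks.append(text[start:end])
        if end >= len(text):
            break
        # Step forward so consecutive chunks share `chunk_overlap` characters.
        start += chunk_size - chunk_overlap
    return chunks
```

Each chunk would then be embedded individually, so overlapping windows help preserve context that straddles a chunk boundary.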

Properties

  • Input Data: Document text or data to process and embed into vectors.
  • Additional Options: Collection of options to customize processing.
      • Language: Processing language (English, Portuguese, Spanish, French).
      • Result Limit: Maximum number of results to return.
      • Chunk Size: Size of each text chunk.
      • Chunk Overlap: Overlap between consecutive chunks.
  • Database Configuration: Settings for external vector and graph databases.
      • Vector Database: Provider (Local FAISS/ChromaDB, Pinecone, Weaviate, Qdrant, Milvus).
      • Vector DB Connection URL: URL of the external vector database.
      • Vector DB API Key: API key for the vector database.
      • Graph Database: Provider (Local NetworkX, Neo4j, ArangoDB, Amazon Neptune).
      • Graph DB Connection URL: URL of the external graph database.
      • Graph DB Username/Password: Credentials for the graph database.
      • Graph DB Database Name: Database or collection name.
      • AWS Credentials and Region: Required when using Amazon Neptune.
  • 🤖 LLM Enhancement: Large Language Model enhancement settings.
      • Enable LLM Enhancement: Turns on deeper AI-powered analysis.
      • LLM Provider: Provider selection (N8N Tools internal, OpenAI, Anthropic).
      • LLM Model: Model selection; available models depend on the chosen provider.

Output

The node outputs a JSON object containing the response from the GraphRAG API. This typically includes:

  • The generated embeddings for the input data chunks.
  • Metadata about the processed data.
  • Any additional information returned by the API related to vector storage or indexing.

If LLM enhancement is enabled, the output may also contain enriched analysis or insights derived from the embeddings.

The node does not produce binary output.
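Since the exact response schema is not published here, the sketch below shows one plausible shape and how a downstream step might read the embeddings out of it. All field names (`embeddings`, `metadata`) are assumptions, not confirmed by the API:

```python
# Hypothetical response shape -- field names are illustrative only.
response = {
    "embeddings": [[0.12, -0.08, 0.33], [0.05, 0.41, -0.27]],
    "metadata": {"chunks": 2, "language": "English"},
}

def extract_vectors(resp):
    """Pull embedding vectors out of a GraphRAG-style response.

    Uses .get() so a response without an "embeddings" field yields an
    empty list instead of raising KeyError.
    """
    return resp.get("embeddings", [])

vectors = extract_vectors(response)
```

In a workflow, each vector would typically be paired with its source chunk before being written to the configured vector database.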

Dependencies

  • Requires an API key credential for the N8N Tools GraphRAG service.
  • Optional external vector database credentials and URLs if using Pinecone, Weaviate, Qdrant, or Milvus.
  • Optional external graph database credentials and URLs if using Neo4j, ArangoDB, or Amazon Neptune.
  • AWS credentials and region configuration are required if using Amazon Neptune as the graph database.
  • Internet access to call the GraphRAG API endpoint at https://graphrag.n8ntools.io/api/v1/graphrag.
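A direct call to that endpoint might look like the sketch below. The `Bearer` auth scheme, the `operation` value, and all payload field names are assumptions for illustration; consult the GraphRAG API reference for the real schema:

```python
import json
import urllib.request

API_URL = "https://graphrag.n8ntools.io/api/v1/graphrag"

def build_embed_request(api_key, text, chunk_size=500, chunk_overlap=50):
    """Assemble an HTTP request for the Vector Search - Embed operation.

    The Authorization header format and payload field names below are
    guesses for illustration, not the documented API contract.
    """
    payload = {
        "operation": "vectorSearchEmbed",  # hypothetical operation name
        "inputData": text,
        "options": {"chunkSize": chunk_size, "chunkOverlap": chunk_overlap},
    }
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )

req = build_embed_request("YOUR_API_KEY", "Sample product description.")
# urllib.request.urlopen(req) would send the request (requires a valid key).
```

In practice the node handles this call for you; the sketch is only meant to make the dependency on the endpoint and the API key credential concrete.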

Troubleshooting

  • API Authentication Errors: Ensure the provided API key credential is valid and has permissions to access the GraphRAG API.
  • Connection Failures to External Databases: Verify that the connection URLs, API keys, usernames, and passwords for vector and graph databases are correct and accessible from your environment.
  • Invalid Input Data: Make sure the "Input Data" property contains valid text or document content; empty or malformed input may cause errors.
  • LLM Configuration Issues: If enabling LLM enhancement, confirm that the selected provider and model are supported and that any required credentials are configured properly.
  • Error Messages: The node throws errors prefixed with "GraphRAG Tool error:" followed by the specific message. Review these messages to identify issues such as invalid parameters or network problems.
  • Use the "Continue On Fail" option to handle errors gracefully during batch processing.
