Actions
- File Actions
- Vector Actions
- Agentic RAG Actions
Overview
The Agentic RAG Supabase node provides a Retrieval-Augmented Generation (RAG) workflow that combines vector search, powered by Supabase's pgvector extension, with OpenAI's GPT models. Specifically, the Process Query operation under the Agentic RAG resource lets users submit a natural language query that is iteratively refined and answered based on relevant documents retrieved from a vector database.
This node is beneficial in scenarios where you want to build intelligent question-answering systems over custom document collections. For example:
- Customer support bots that answer questions using company manuals or FAQs.
- Research assistants that retrieve and summarize information from large document corpora.
- Any application requiring semantic search combined with generative AI to produce context-aware answers.
The Process Query operation performs multiple iterations of searching for relevant documents, generating an answer, evaluating it, and refining the query if needed, up to a maximum number of iterations or until a satisfactory answer quality is reached.
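In pseudocode terms, that loop looks roughly like the sketch below. The helper functions (retrieve, answerWith, evaluate) and variable names are illustrative stand-ins for the node's internal steps, not its actual API.

```typescript
// Illustrative control-flow sketch of Process Query; helper signatures are assumptions.
type Doc = { content: string; similarity: number };
type Evaluation = { overallScore: number; needsRefinement: boolean; refinementSuggestion?: string };

async function processQuery(
  query: string,
  topK: number,
  maxIterations: number,
  retrieve: (q: string, k: number) => Promise<Doc[]>,
  answerWith: (q: string, docs: Doc[]) => Promise<string>,
  evaluate: (q: string, answer: string, docs: Doc[]) => Promise<Evaluation>,
) {
  let currentQuery = query;
  let best = { answer: '', score: 0 };
  let iterationsUsed = 0;

  for (let i = 1; i <= maxIterations; i++) {
    iterationsUsed = i;
    const docs = await retrieve(currentQuery, topK); // vector search with Top K / threshold
    if (docs.length === 0) break;                    // nothing relevant found

    const answer = await answerWith(currentQuery, docs);          // generate grounded answer
    const evaluation = await evaluate(currentQuery, answer, docs); // score the answer

    if (evaluation.overallScore > best.score) best = { answer, score: evaluation.overallScore };
    if (!evaluation.needsRefinement) break;          // good enough, stop early
    currentQuery = evaluation.refinementSuggestion ?? currentQuery; // refine and retry
  }

  return { originalQuery: query, finalAnswer: best.answer, bestScore: best.score, iterationsUsed };
}
```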
Properties
| Name | Meaning |
|---|---|
| Query | The natural language question or query string to process and answer. |
| Top K | Number of top matching documents to retrieve from the vector database per iteration (default: 5). |
| Max Iterations | Maximum number of iterations to perform for query refinement and answer generation (default: 3). |
| OpenAI API Key | Required API key for accessing OpenAI services to generate answers and evaluate them. |
| Similarity Threshold | Minimum similarity score threshold for considering documents as relevant during vector search (default: 0.78). |
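As a concrete example, a single Process Query run configured with the defaults above might receive values like these (field names are illustrative; in n8n they are set through the node's parameters UI):

```typescript
// Example values for one Process Query run (names are illustrative).
const processQueryParams = {
  query: 'How do I reset my router to factory settings?',
  topK: 5,                    // documents retrieved per iteration
  maxIterations: 3,           // refinement rounds before giving up
  similarityThreshold: 0.78,  // minimum similarity for a document to count as relevant
  openAiApiKey: process.env.OPENAI_API_KEY ?? '', // supplied as an input property
};
```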
Output
The output JSON object contains detailed information about the iterative query processing:
- originalQuery: The initial user query submitted.
- finalAnswer: The best generated answer after all iterations.
- bestScore: The highest evaluation score achieved by any answer.
- iterationsUsed: Number of iterations actually performed.
- iterations: An array of objects, each representing one iteration with fields:
  - iteration: Iteration index (1-based).
  - query: The query used in this iteration (may be refined).
  - retrievalSuccess: Boolean indicating if relevant documents were found.
  - reason: Reason for failure if no documents were found.
  - documentsFound: Number of documents retrieved.
  - answer: Generated answer text.
  - evaluation: Object containing evaluation metrics such as relevance, groundedness, completeness, clarity, accuracy, overallScore, needsRefinement, and refinementSuggestion.
- success: Boolean indicating if the process produced a successful answer (based on evaluation scores).
This output allows downstream nodes or workflows to understand how the answer was derived, inspect intermediate queries and evaluations, and decide whether further action is needed.
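For reference, the documented fields can be summarized as a TypeScript type. This is an approximation derived from the list above; the node's actual typings may differ slightly.

```typescript
// Approximate shape of the Process Query output (derived from the field list above).
interface ProcessQueryOutput {
  originalQuery: string;
  finalAnswer: string;
  bestScore: number;
  iterationsUsed: number;
  success: boolean;
  iterations: Array<{
    iteration: number;        // 1-based index
    query: string;            // possibly refined from the original query
    retrievalSuccess: boolean;
    reason?: string;          // present when no documents were found
    documentsFound: number;
    answer?: string;
    evaluation?: {
      relevance: number;
      groundedness: number;
      completeness: number;
      clarity: number;
      accuracy: number;
      overallScore: number;
      needsRefinement: boolean;
      refinementSuggestion?: string;
    };
  }>;
}
```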
Dependencies
- Supabase: Requires a Supabase project with the pgvector extension enabled and a table (rag_documents) set up to store document embeddings.
- OpenAI API: Needs an OpenAI API key to call GPT-3.5-turbo for answer generation and evaluation.
- Hugging Face Inference API: Uses a Hugging Face model (thenlper/gte-small) for embedding generation; requires an API token.
- Node.js packages: Uses axios for HTTP requests, the supabase-js client, and other utilities bundled internally.
The node expects credentials configured in n8n for:
- Supabase project URL and API key.
- Hugging Face API key.
- OpenAI API key (provided as an input property).
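To illustrate how these pieces fit together, here is a minimal retrieval sketch using axios, supabase-js, and the thenlper/gte-small embedding model. The match_documents RPC and its argument names follow the common Supabase pgvector examples and are assumptions; the node's internal retrieval query may differ.

```typescript
import axios from 'axios';
import { createClient } from '@supabase/supabase-js';

// Assumed setup: a `match_documents` SQL function as in the standard Supabase
// pgvector examples; the node's actual implementation may differ.
async function retrieveDocuments(query: string, topK: number, threshold: number) {
  // Embed the query with the Hugging Face Inference API (thenlper/gte-small, 384 dimensions).
  const { data: embedding } = await axios.post<number[]>(
    'https://api-inference.huggingface.co/pipeline/feature-extraction/thenlper/gte-small',
    { inputs: query },
    { headers: { Authorization: `Bearer ${process.env.HF_API_KEY}` } },
  );

  // Similarity search against the rag_documents embeddings.
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_API_KEY!);
  const { data, error } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_threshold: threshold, // e.g. 0.78
    match_count: topK,          // e.g. 5
  });
  if (error) throw error;
  return data ?? [];
}
```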
Troubleshooting
- No documents found error: If the retrieval step returns no documents, ensure that the vector database is populated with relevant embeddings and that the similarity threshold is not too high.
- Embedding errors: Failures in generating embeddings may indicate invalid or missing Hugging Face API keys or network issues.
- OpenAI API errors: Errors during answer generation or evaluation often relate to invalid API keys, rate limits, or malformed requests.
- Iteration limit reached without good answer: If the max iterations are exhausted but the answer quality remains low, consider increasing iterations or improving document coverage.
- Unsupported file types: When ingesting documents, only PDF, TXT, and DOCX files are supported for parsing.
To resolve these issues:
- Verify all API keys and credentials are correctly set.
- Confirm the Supabase vector table is properly initialized and populated (a quick sanity check is sketched after this list).
- Adjust similarity thresholds and iteration counts according to your dataset.
- Check network connectivity and API usage limits.
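For the table-population check in particular, a quick way to verify outside of n8n that rag_documents contains any rows at all, assuming the supabase-js client and the same credentials configured for the node:

```typescript
import { createClient } from '@supabase/supabase-js';

// Sanity check: an empty rag_documents table means retrieval can never succeed.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_API_KEY!);

const { count, error } = await supabase
  .from('rag_documents')
  .select('*', { count: 'exact', head: true });

if (error) throw error;
console.log(`rag_documents rows: ${count ?? 0}`);
```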