Actions
- File Actions
- Vector Actions
- Agentic RAG Actions
Overview
The node "Agentic RAG Supabase" provides operations to handle Retrieval-Augmented Generation (RAG) workflows using Supabase's pgvector extension and Hugging Face embeddings. Specifically, for the File resource with the Generate Embeddings operation, it takes input text, splits it into chunks, generates vector embeddings for each chunk using a Hugging Face model, and returns these embeddings along with metadata.
This node is beneficial in scenarios where you want to convert textual data into vector representations for semantic search, similarity matching, or downstream machine learning tasks. For example, you might use it to embed paragraphs of a document so they can later be stored in a vector database and queried efficiently.
Practical examples:
- Generating embeddings from user-provided text snippets to build a semantic search index.
- Preprocessing documents by embedding their content before upserting into a vector store.
- Creating vector representations of knowledge base articles for AI-powered question answering.
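Embeddings produced this way are typically compared by vector similarity at query time. As a rough sketch (not part of the node itself), cosine similarity between two embedding vectors can be computed as:

```javascript
// Cosine similarity between two equal-length numeric vectors.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In a real workflow this comparison usually happens inside the vector store (e.g. pgvector's distance operators) rather than in application code.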
Properties
| Name | Meaning |
|---|---|
| Text | The raw text string to generate embeddings for. The node will split this text into smaller chunks and create an embedding vector for each chunk. |
Output
The output JSON contains:
- `embeddings`: An array of objects, one per chunk of the input text, each with:
  - `id`: A unique identifier for the chunk (e.g., `"chunk_0"`).
  - `content`: The text content of the chunk.
  - `embedding`: The numeric vector embedding generated for the chunk.
  - `metadata`: Additional info, including:
    - `chunkIndex`: The index of the chunk.
    - `length`: Number of characters in the chunk.
    - `timestamp`: ISO timestamp of when the embedding was created.
- `totalChunks`: Total number of chunks the input text was split into.
- `embeddingModel`: The name of the embedding model used (`"thenlper/gte-small"`).
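For illustration, the output might look like this (all values are hypothetical):

```json
{
  "embeddings": [
    {
      "id": "chunk_0",
      "content": "First chunk of the input text...",
      "embedding": [0.0132, -0.0457, 0.0981],
      "metadata": {
        "chunkIndex": 0,
        "length": 214,
        "timestamp": "2024-01-01T12:00:00.000Z"
      }
    }
  ],
  "totalChunks": 1,
  "embeddingModel": "thenlper/gte-small"
}
```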
This structured output allows subsequent nodes to process or store embeddings easily.
Dependencies
- Requires credentials containing:
  - Supabase project URL and API key (used by other operations, not directly by this one).
  - Hugging Face API key for accessing the embedding model inference endpoint.
- Uses the Hugging Face Inference API to generate embeddings with the `thenlper/gte-small` model.
- No direct file system dependencies for this operation, since it works on provided text.
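Conceptually, the embedding call looks something like the sketch below. The endpoint and payload shape follow the public Hugging Face feature-extraction API and are assumptions about how the node works internally, not its actual code:

```javascript
// Sketch: embed one text chunk via the Hugging Face Inference API.
// The URL/payload shape is the public feature-extraction convention;
// the function name and error format mirror this node's documented behavior.
async function embedChunk(text, apiKey) {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/thenlper/gte-small",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: text }),
    }
  );
  if (!res.ok) {
    // Mirrors the "Embedding error: <message>" behavior described below.
    throw new Error(`Embedding error: ${res.status} ${res.statusText}`);
  }
  return res.json(); // the numeric embedding vector for this chunk
}
```

Requires Node 18+ (or any runtime with a global `fetch`).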
Troubleshooting
- Embedding error: If the Hugging Face API call fails, the node throws an error like `Embedding error: <message>`. This usually indicates invalid or missing API keys, network issues, or rate limits. Verify your Hugging Face API key and network connectivity.
- Empty or too short text: Providing empty or very short text may result in no meaningful embeddings or errors. Ensure the "Text" property contains sufficient content.
- Chunking behavior: The node splits text into chunks of about 200 words with 20-word overlap. Extremely large texts may produce many chunks, potentially causing performance delays or API rate limiting.
- API quota limits: Frequent or large embedding requests may hit Hugging Face API quotas. Monitor usage and upgrade plans if necessary.
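The chunking behavior described above (roughly 200-word chunks with a 20-word overlap) can be sketched as follows. `chunkText` and its defaults are illustrative, not the node's actual implementation:

```javascript
// Split text into word-based chunks with a sliding overlap, so that
// context at a chunk boundary also appears at the start of the next chunk.
function chunkText(text, chunkSize = 200, overlap = 20) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  let start = 0;
  while (start < words.length) {
    const end = Math.min(start + chunkSize, words.length);
    chunks.push(words.slice(start, end).join(" "));
    if (end === words.length) break;
    start = end - overlap; // step back to create the overlap
  }
  return chunks;
}
```

With these defaults, a 450-word input yields three chunks (words 0-199, 180-379, and 360-449), which is why very large inputs can produce many overlapping API calls.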