Embeddings WCEA Supabase

Use WCEA Embeddings for Supabase

Overview

This node integrates with the WCEA embedding service to generate vector embeddings from text documents or queries. Connect it to other AI nodes that consume embeddings, such as Vector Store, Retriever, or AI Agent nodes, to build workflows that require semantic search, similarity matching, or other embedding-based operations.

Typical use cases include:

  • Converting large batches of text documents into embeddings for indexing and retrieval.
  • Embedding user queries to perform semantic search against a vector database.
  • Preprocessing textual data for downstream AI tasks that rely on vector representations.

For example, you might use this node to embed product descriptions in an e-commerce platform, then connect it to a vector store node to enable semantic product search.
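
Downstream nodes typically compare these vectors with a similarity metric. As a rough illustration (this comparison is performed by downstream nodes such as a vector store, not by this embeddings node), cosine similarity between two embedding vectors can be computed like this:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Illustrative only: in practice a vector store or retriever node
// performs this comparison, not the embeddings node itself.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have equal length");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical vectors score 1, orthogonal vectors score 0, which is why embedding a query and the stored documents with the same model makes semantic search a nearest-neighbor lookup.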

Properties

(notice): Informational notice: Connect this node to other AI nodes that use embeddings (like Vector Store, Retriever, or AI Agent nodes).
Model: The embedding model to use from WCEA. Models are loaded dynamically from the WCEA service.
Options: Additional configuration options for embeddings:
  • Batch Size: Maximum number of documents sent per request (1 to 2048).
  • Strip New Lines: Whether to remove newline characters from input text before embedding (true/false).
  • Timeout: Maximum time allowed for each request, in milliseconds (default: 60000).
  • Dimensions: Number of dimensions for the embeddings; set to -1 to use the model's default.
  • Encoding Format: Format of returned embeddings: "Float" (floating point numbers) or "Base64" (base64 encoded).
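
The Batch Size option caps how many documents go into each request; larger inputs are split into consecutive batches. A minimal sketch of that splitting (the helper name and exact behavior are assumptions for illustration, not taken from the node's source):

```typescript
// Split an array of documents into batches of at most `batchSize` items,
// mirroring how a Batch Size option typically limits per-request payloads.
function toBatches<T>(docs: T[], batchSize: number): T[][] {
  if (batchSize < 1) throw new Error("batchSize must be at least 1");
  const batches: T[][] = [];
  for (let i = 0; i < docs.length; i += batchSize) {
    batches.push(docs.slice(i, i + batchSize));
  }
  return batches;
}
```

With 5 documents and a batch size of 2, the node would issue three requests of 2, 2, and 1 documents.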

Output

The node outputs data on the ai_embedding output channel. The JSON structure contains:

  • response: An array of embeddings corresponding to the input documents or query.
    • Each embedding is either:
      • An array of floating point numbers (if the encoding format is "Float").
      • An array of floats decoded from a base64 string (if the encoding format is "Base64").

If multiple documents are embedded, the response is an array of arrays, each representing one document's embedding vector.
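
With the "Base64" encoding format, embedding services commonly return each vector as a base64 string of packed little-endian 32-bit floats, which is then decoded back into numbers. A sketch of that decoding, assuming this common convention (the actual WCEA wire format is not documented here):

```typescript
// Decode a base64-encoded embedding into an array of numbers, assuming
// the widespread convention of packed little-endian 32-bit floats.
// This mirrors what the node does internally for the "Base64" format.
function decodeBase64Embedding(b64: string): number[] {
  const buf = Buffer.from(b64, "base64");
  if (buf.length % 4 !== 0) throw new Error("not a valid float32 payload");
  const out: number[] = [];
  for (let i = 0; i < buf.length; i += 4) {
    out.push(buf.readFloatLE(i)); // one 32-bit float per 4 bytes
  }
  return out;
}
```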

No binary data output is produced by this node.

Dependencies

  • Requires network access to a running WCEA embedding service endpoint (baseUrl).
  • Requires an API key credential configured in n8n to authenticate requests to that endpoint.
  • The node fetches available models dynamically from the WCEA service.
  • Communication uses standard HTTP(S) requests with JSON payloads.
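
At the wire level this amounts to an authenticated JSON POST against the service's baseUrl. The endpoint path, header names, and payload fields below are illustrative assumptions in the style of common embedding APIs, not the documented WCEA API:

```typescript
// Build a hypothetical embedding request for an OpenAI-style service.
// The "/v1/embeddings" path and the payload field names are assumptions
// for illustration; consult the WCEA service for its actual API.
interface EmbeddingRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildEmbeddingRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  input: string[],
): EmbeddingRequest {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/v1/embeddings`, // strip trailing slash
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // typical bearer-token auth
    },
    body: JSON.stringify({ model, input }),
  };
}
```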

Troubleshooting

  • No model selected error: Ensure you select a valid embedding model from the dropdown. If no models appear, verify the WCEA service is running and accessible.
  • Connection timeout or failure: Check that the WCEA service URL is correct and reachable. Increase timeout if necessary. Confirm API key validity.
  • API errors from WCEA: The node surfaces HTTP status codes and messages from the WCEA API. Review these messages for issues like invalid parameters or authorization failures.
  • Invalid response format: Indicates unexpected data from WCEA. Verify the service version and compatibility.
  • Batch size too large: If embedding large datasets, reduce batch size to avoid request failures.
  • Request timeout: Increase the Timeout option when embedding large documents or working over slow network connections.
