Embeddings LiteLLM

For advanced usage with an AI chain

Overview

This node generates text embeddings using a specified AI model from an external embedding service. It is designed to be connected to a vector store, enabling workflows that require transforming textual data into vector representations for tasks such as semantic search, similarity comparison, or machine learning feature extraction.

Common scenarios include:

  • Converting documents or sentences into embeddings for indexing in a vector database.
  • Preparing input data for AI models that operate on vectorized text.
  • Enhancing search relevance by comparing query embeddings with stored document embeddings.

For example, you might use this node to embed customer feedback texts before storing them in a vector store to enable fast similarity searches or clustering.
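The similarity-search pattern described above can be sketched outside n8n: once texts are embedded, cosine similarity between a query vector and stored document vectors ranks the closest matches. The vectors below are tiny made-up illustrations, not real model output, and real embeddings have hundreds to thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional "embeddings" for two stored feedback texts.
stored = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]

# Pick the stored document whose embedding is closest to the query embedding.
best = max(stored, key=lambda doc: cosine_similarity(query, stored[doc]))
print(best)  # → refund policy
```

A vector store performs this same nearest-neighbor lookup at scale, typically with approximate indexes rather than an exhaustive scan.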

Properties

  • Notice: A notice reading "This node must be connected to a vector store. Insert one", shown because the node requires a connection to a vector store node to function.
  • Model: The AI model used to generate embeddings. Options are loaded dynamically from the embedding service and filtered to models whose IDs include "embed". Examples include "embedding/text-embedding-3-small" and "text-embedding-3-small".
  • Options: Additional optional settings:
      • Batch Size: Maximum number of documents sent per request (up to 2048).
      • Strip New Lines: Whether to remove newline characters from the input text.
      • Timeout: Maximum request time in seconds (-1 means no timeout).
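As a rough sketch of what the Strip New Lines option amounts to (the node's actual internals are not shown here), newline characters in each input text are replaced with spaces before the text is sent to the embedding API:

```python
def strip_new_lines(texts: list[str]) -> list[str]:
    """Replace newline characters with spaces, mirroring the Strip New Lines option."""
    return [t.replace("\n", " ").replace("\r", " ") for t in texts]

docs = ["first line\nsecond line", "single line"]
print(strip_new_lines(docs))  # → ['first line second line', 'single line']
```

Stripping newlines is a common normalization step for embedding inputs, since stray line breaks can otherwise shift the resulting vectors for texts that are semantically identical.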

Output

The node outputs JSON data containing the generated embeddings. Each output item includes an ai_embedding field with the vector representation of the input text.

The node does not produce binary output; embeddings are returned only as JSON.
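Illustratively, a single output item might look like the fragment below. Only the ai_embedding field name comes from the node's behavior described above; the values are made up and the vector is truncated, as real embeddings contain hundreds to thousands of dimensions.

```json
{
  "ai_embedding": [0.0123, -0.0456, 0.0789]
}
```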

Dependencies

  • Requires connection to an external embedding API service compatible with OpenAI-style embeddings.
  • Needs an API key credential for authentication with the embedding service.
  • Must be connected to a vector store node within n8n to store or further process the generated embeddings.
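The "OpenAI-style" contract mentioned above conventionally means a POST to an embeddings endpoint with a bearer token, a model field, and an input field. The sketch below assembles such a request without sending it; the base URL is a placeholder and the payload shape reflects the common OpenAI convention, not this node's source code.

```python
import json

def build_embeddings_request(model: str, texts: list[str], api_key: str) -> dict:
    """Assemble an OpenAI-style embeddings request (not sent here)."""
    return {
        # Illustrative placeholder; the real base URL comes from the service configuration.
        "url": "https://api.example.com/v1/embeddings",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "input": texts}),
    }

req = build_embeddings_request("text-embedding-3-small", ["hello world"], "my-api-key")
print(req["body"])  # → {"model": "text-embedding-3-small", "input": ["hello world"]}
```

Any service implementing this request/response shape, such as a LiteLLM proxy in front of another provider, should satisfy the compatibility requirement.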

Troubleshooting

  • Connection issues: Ensure the node is connected to a valid vector store node; otherwise, it will not function correctly.
  • Invalid API key or credentials: Verify that the API key credential is correctly configured and has permissions to access the embedding service.
  • Model loading errors: If no embedding models appear in the dropdown, check network connectivity and API endpoint configuration.
  • Timeouts: Adjust the timeout option if requests take too long or fail due to network latency.
  • Batch size limits: Sending too many documents at once may cause errors; reduce batch size accordingly.
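The batch-size mitigation above can be sketched as simple chunking: split the document list into slices no larger than the configured batch size before each request. The helper name is illustrative; the 2048 cap comes from the Batch Size option described under Properties.

```python
def chunk(docs: list[str], batch_size: int = 2048) -> list[list[str]]:
    """Split documents into request-sized batches."""
    if batch_size < 1:
        raise ValueError("batch_size must be positive")
    return [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]

batches = chunk([f"doc {i}" for i in range(5)], batch_size=2)
print([len(b) for b in batches])  # → [2, 2, 1]
```

If requests still fail at the service's advertised limit, lowering the batch size further also reduces per-request payload size and timeout pressure.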
