
Embeddings LiteLLM

For advanced usage with an AI chain

Overview

This node generates text embeddings using a specified AI model served through LiteLLM. It converts input text into numerical vector representations, which are useful for tasks such as semantic search, clustering, recommendation systems, and other natural language processing applications that depend on text similarity or meaning.

Typical use cases include:

  • Creating embeddings for documents to enable efficient similarity searches in a vector database.
  • Generating feature vectors for machine learning models.
  • Enhancing AI workflows by integrating semantic understanding of text data.

For example, you might use this node to embed customer feedback texts so that similar feedback can be grouped or retrieved quickly based on semantic content.
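As a minimal illustration of what happens downstream of this node, similar feedback can be ranked by cosine similarity between embedding vectors. This is a sketch with toy 3-dimensional vectors (real embedding models return hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three pieces of customer feedback.
feedback = {
    "Shipping was slow": [0.9, 0.1, 0.0],
    "Delivery took too long": [0.8, 0.2, 0.1],
    "Great customer support": [0.0, 0.1, 0.95],
}

query = [0.85, 0.15, 0.05]  # toy embedding of a query like "late delivery"
ranked = sorted(feedback, key=lambda t: cosine_similarity(feedback[t], query), reverse=True)
# The two shipping-related comments rank above the unrelated one.
```

In practice a vector database performs this ranking for you; the sketch only shows why semantically similar texts end up near each other.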

Properties

  • Notice: The node displays "This node must be connected to a vector store. Insert one," indicating that it only functions when connected to a vector store.
  • Model: The AI model used to generate embeddings. Options are loaded dynamically from the connected service and filtered to embedding-capable models only, for example "embedding/text-embedding-3-small" or "text-embedding-3-small".
  • Options: Additional optional parameters:
      • Batch Size: Maximum number of documents sent per request (up to 2048).
      • Strip New Lines: Whether to remove newline characters from input text.
      • Timeout: Maximum time allowed for each request, in seconds (-1 means no timeout).
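The Batch Size and Strip New Lines options can be pictured as simple pre-processing steps applied before each API call. This is a sketch of the idea, not the node's internal code; the function name is illustrative:

```python
def prepare_batches(texts, batch_size=2048, strip_new_lines=True):
    # Optionally replace newlines with spaces, then split the inputs
    # into request-sized chunks no larger than batch_size.
    if strip_new_lines:
        texts = [t.replace("\n", " ") for t in texts]
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

# With batch_size=1, each document becomes its own request.
batches = prepare_batches(["line one\nline two", "short"], batch_size=1)
```

Each resulting batch corresponds to one embeddings request, so a smaller batch size trades fewer documents per call for smaller payloads.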

Output

The node outputs an object containing the generated embeddings under the json field. Each embedding corresponds to the input text transformed into a high-dimensional vector representation suitable for downstream AI or search tasks.

The node does not emit binary data; embeddings are returned only as JSON arrays of numbers.

Dependencies

  • Requires connection to a compatible vector store service with API access.
  • Needs an API key credential for authentication with the embedding service.
  • The node uses an external library to interact with the embedding API.
  • Network access to the embedding service endpoint is necessary.
  • Configuration of base URL and credentials must be done in n8n settings.
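For reference, LiteLLM-style proxies generally expose an OpenAI-compatible embeddings endpoint, so the request the node sends looks roughly like the one below. This is a sketch: the base URL and API key are placeholders, and no request is actually sent here:

```python
import json
import urllib.request

def build_embeddings_request(base_url, api_key, model, texts):
    # Construct (but do not send) a POST to the /embeddings endpoint.
    payload = {"model": model, "input": texts}
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/embeddings",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_embeddings_request(
    "https://litellm.example.com/v1",  # placeholder base URL
    "sk-placeholder",                  # placeholder API key
    "text-embedding-3-small",
    ["hello world"],
)
# urllib.request.urlopen(req) would return JSON whose "data" field
# holds one embedding vector per input text.
```

The base URL and API key configured in the n8n credential fill the two placeholder arguments above.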

Troubleshooting

  • Connection issues: Ensure the node is connected to a valid vector store and that the API key is correct and has sufficient permissions.
  • Model loading errors: If no embedding models appear in the dropdown, verify network connectivity and that the service endpoint supports listing models.
  • Timeouts: Adjust the "Timeout" option if requests take too long or fail due to network latency.
  • Batch size limits: Large batch sizes may cause failures; reduce batch size if encountering errors related to request payload size.
  • Input formatting: If embeddings seem incorrect, try toggling the "Strip New Lines" option to clean input text.
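The batch-size tip above can be automated in a wrapper: when a request fails because the payload is too large, halve the batch and retry. This is a sketch; `send_batch` is a stand-in for whatever client call you use, and `ValueError` stands in for the real payload-too-large error:

```python
def embed_with_fallback(texts, send_batch, batch_size=2048):
    # Try progressively smaller batches when a request fails,
    # e.g. because the payload exceeds the service's size limit.
    results = []
    i = 0
    while i < len(texts):
        size = batch_size
        while True:
            chunk = texts[i:i + size]
            try:
                results.extend(send_batch(chunk))
                break
            except ValueError:  # stand-in for a payload-too-large error
                if size == 1:
                    raise  # even a single document fails; give up
                size //= 2
        i += len(chunk)
    return results

# Fake client that rejects batches larger than 2 items.
def fake_send(chunk):
    if len(chunk) > 2:
        raise ValueError("payload too large")
    return [[0.0]] * len(chunk)

vectors = embed_with_fallback(["a", "b", "c", "d", "e"], fake_send, batch_size=4)
```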

Links and References

  • OpenAI Models Overview — Documentation on available embedding models.
  • General concepts on text embeddings and vector stores can be found in AI and NLP literature online.
