G4f

Interact with the g4f API to use various LLMs

Overview

This node interacts with the g4f API to chat with a language model (the Ask AI operation). It sends a conversation history to a selected AI model from a chosen provider and returns the generated response. This is useful for chatbots, virtual assistants, or any workflow requiring natural language understanding and generation.

Use Case Examples

  1. A user sends a series of messages to an AI model to get a conversational response.
  2. A developer integrates this node to automate customer support responses using AI.
  3. A content creator uses the node to generate text based on prompts for creative writing.

Properties

  • Provider - The AI service provider to use for the request. The list is dynamically loaded from the API.
  • Model - The specific AI model to use from the selected provider. The list is dynamically loaded based on the chosen provider.
  • Messages - The conversation history sent to the AI model. Each message includes content and the role of the sender (user, assistant, or system). The last message should be the user's prompt.
  • Options - Additional parameters that control the AI model's output, such as streaming response, max tokens, temperature, top_p, and JSON mode.
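
The properties above map onto a chat-completion request body. The sketch below assembles one in Python; the field names follow the OpenAI-compatible schema that g4f exposes, while the `provider` field name and the example provider/model values are assumptions, not taken from the node's source.

```python
import json

# Sketch of the request body the node assembles from its properties.
# "provider" is g4f-specific; the other fields follow the
# OpenAI-compatible chat-completions schema (assumed here).
body = {
    "provider": "OpenaiChat",   # example provider name
    "model": "gpt-4o-mini",     # example model from that provider
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this ticket in one sentence."},
    ],
    # Optional parameters taken from the node's Options property:
    "stream": False,
    "max_tokens": 256,
    "temperature": 0.7,
    "top_p": 1.0,
}

payload = json.dumps(body)
```

Note that the last entry in `messages` carries the user's prompt, matching the Messages property description above.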

Output

JSON

  • role - Role of the message sender in the response, typically 'assistant'.
  • content - The content of the AI-generated message.
  • usage - Token usage statistics from the API response.
  • model - The model used to generate the response.
  • id - Unique identifier for the response.
  • fullResponse - The complete raw response from the API for further inspection or processing.
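
As a sketch of how those output fields relate to the raw API response, the snippet below picks them out of an OpenAI-style chat-completion object. The response structure is an assumption based on g4f's OpenAI-compatible API, not taken from the node's source.

```python
# Example response shape (assumed OpenAI-compatible structure).
response = {
    "id": "chatcmpl-123",
    "model": "gpt-4o-mini",
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}

# Map the response onto the node's documented output fields.
message = response["choices"][0]["message"]
output = {
    "role": message["role"],
    "content": message["content"],
    "usage": response.get("usage"),
    "model": response.get("model"),
    "id": response.get("id"),
    "fullResponse": response,  # raw response kept for further inspection
}
```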

Dependencies

  • Requires a g4f API credential, consisting of a base URL and an optional API key for authorization.
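
A minimal sketch of how those credential fields could map onto an HTTP request: the `/v1/chat/completions` path, the Bearer scheme, and the example port are assumptions based on g4f's OpenAI-compatible API, not taken from the node itself.

```python
from urllib.parse import urljoin

def build_request_target(base_url, api_key=None):
    # Join the credential's base URL with the chat-completions path
    # (path assumed to follow the OpenAI-compatible convention).
    url = urljoin(base_url.rstrip("/") + "/", "v1/chat/completions")
    headers = {"Content-Type": "application/json"}
    if api_key:  # the API key is optional in the credential
        headers["Authorization"] = f"Bearer {api_key}"
    return url, headers

url, headers = build_request_target("http://localhost:1337", "secret")
```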

Troubleshooting

  • Provider or model lists may fail to load if the API credentials are incorrect or the network is unreachable.
  • The chat completion request itself may fail if the API endpoint is unreachable or the request parameters are invalid.
  • If streaming is enabled, ensure the environment supports handling streamed responses properly.
  • Error messages from the API are surfaced as node operation errors with descriptive messages to aid debugging.
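
For the streaming case mentioned above, a streamed response typically arrives as server-sent-event lines carrying incremental deltas. The sketch below parses such lines, assuming the OpenAI-style SSE format (`data:` prefix, `delta.content` chunks, `[DONE]` sentinel); verify the exact format against your g4f version.

```python
import json

def collect_stream(lines):
    """Concatenate the content deltas from SSE 'data:' lines (sketch)."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alive lines, etc.
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content", "")
        parts.append(delta)
    return "".join(parts)

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = collect_stream(sample)  # "Hello"
```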