
GitHub Models

This node encapsulates the functionality of the GitHub Models API.

Overview

This node integrates with the GitHub Models API to generate chat completions using various AI models available on GitHub's platform. It allows users to send a sequence of messages (with roles such as user, assistant, or system) to a selected model and receive generated responses. This is useful for building conversational AI workflows, automating customer support, generating content, or experimenting with different AI models hosted by GitHub.

Practical examples:

  • Automate answering FAQs by sending user questions and receiving AI-generated answers.
  • Create chatbots that simulate conversations with an assistant or system persona.
  • Generate creative writing prompts or code snippets based on user input.

Properties

  • Model Name or ID: Select a GitHub AI model from a dynamically loaded list, or specify its ID via expression.
  • Messages: A list of messages forming the conversation history sent to the model. Each message has:
    - Prompt: The text content of the message.
    - Role: The role of the message sender; options are User, Assistant, or System.
  • Options: Configuration parameters for the model, each of which can be added multiple times:
    - Temperature: Controls randomness in the output (number).
    - Top P: Controls nucleus sampling probability (number).
    - Max Tokens: Maximum number of tokens in the response (number).
  • Output as JSON: Boolean flag indicating whether the model's output should be parsed and returned as JSON data instead of plain text.
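
These properties roughly correspond to the fields of a chat completions request body. The sketch below illustrates that mapping in TypeScript; the field names assume the OpenAI-compatible schema exposed by the GitHub Models inference endpoint, and the model ID shown is only an example, not a value guaranteed to exist in the catalog.

// Hypothetical mapping of the node's properties onto a request body.
const requestBody = {
  model: "openai/gpt-4o-mini", // "Model Name or ID" (illustrative ID)
  messages: [
    { role: "system", content: "You are a helpful support assistant." }, // Role: System
    { role: "user", content: "How do I reset my password?" },            // Role: User
  ],
  temperature: 0.7, // Options > Temperature
  top_p: 1,         // Options > Top P
  max_tokens: 256,  // Options > Max Tokens
};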

Output

The node outputs an array of items where each item contains a json field with the model's response:

  • If Output as JSON is enabled, the entire API response is returned as JSON.
  • Otherwise, the output JSON contains a result field holding the textual content of the first choice's message from the model's response.

Example output when Output as JSON is disabled:

{
  "result": "Hello! How can I assist you today?"
}
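
The result value above is taken from the first choice of the raw API response. A small sketch of that extraction, assuming an OpenAI-compatible response shape (the type below is illustrative, not the node's actual code):

type ChatCompletionResponse = {
  choices: { message: { role: string; content: string } }[];
};

// Pull the text of the first choice's message out of the raw response.
function extractResult(response: ChatCompletionResponse): string {
  return response.choices[0].message.content;
}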

If the node encounters an error and "Continue On Fail" is enabled, it outputs an error object with the error message.

Dependencies

  • Requires a GitHub API credential (a personal access token or similar) with permission to access the GitHub Models API.
  • Makes HTTP requests to the following endpoints (see the sketch after this list):
    • https://models.github.ai/catalog/models to fetch available models.
    • https://models.github.ai/inference/chat/completions to send chat completion requests.
  • No additional environment variables are required beyond the configured API credential.
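
For reference, the two requests listed above can be reproduced directly. The sketch below assumes a standard Bearer token in the Authorization header and an OpenAI-compatible request/response format; the exact headers and payload the node sends are assumptions, and the model ID is only illustrative. Requires Node.js 18+ for the built-in fetch.

// Read the GitHub personal access token from the environment (assumed variable name).
const token = process.env.GITHUB_TOKEN ?? "";

// 1. List the available models.
const catalog = await fetch("https://models.github.ai/catalog/models", {
  headers: { Authorization: `Bearer ${token}` },
}).then((res) => res.json());
console.log(catalog);

// 2. Request a chat completion.
const completion = await fetch("https://models.github.ai/inference/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini", // illustrative model ID
    messages: [{ role: "user", content: "Hello!" }],
  }),
}).then((res) => res.json());
console.log(completion.choices?.[0]?.message?.content);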

Troubleshooting

  • Common issues:

    • Invalid or missing API token will cause authentication errors.
    • Specifying a non-existent model ID will result in API errors.
    • Improperly formatted messages (e.g., missing roles or prompts) may cause unexpected results.
    • Exceeding max tokens or rate limits imposed by the API could lead to failures.
  • Error messages:

    • Authentication errors typically indicate invalid or expired credentials; verify and update your API token.
    • HTTP request failures might indicate network issues or incorrect endpoint URLs.
    • Parsing errors when JSON output is enabled suggest the model did not return valid JSON; try disabling JSON output or validating the prompt/messages (see the sketch after this list).
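
When parsing failures like the last case are common, one workaround is to parse defensively and fall back to plain text. A minimal sketch, with a hypothetical helper name (this is not the node's actual implementation):

// Try to parse the model's reply as JSON; keep the raw text if that fails,
// which mirrors what disabling "Output as JSON" would return.
function parseModelReply(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    return { result: raw };
  }
}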
