Overview
This node integrates with Oracle Cloud Infrastructure (OCI) Generative AI services to perform chat-based text generation with a specified AI model. It sends chat prompts to an OCI-hosted generative AI model and returns the generated response, with options to control output style and randomness.
Common scenarios where this node is beneficial include:
- Automating customer support chatbots using custom or pre-trained OCI models.
- Generating creative content such as marketing copy or story ideas.
- Enhancing applications with conversational AI capabilities hosted on OCI.
- Experimenting with different AI models and tuning generation parameters for tailored outputs.
Practical example: A user can input a conversation prompt and specify an OCI model ID to generate a relevant reply, adjusting temperature and penalties to make the response more focused or diverse.
Properties
| Name | Meaning |
|---|---|
| Service Endpoint | The OCI Generative AI Inference service endpoint URL for your region. Example: `https://inference.generativeai.me-riyadh-1.oci.oraclecloud.com`. This directs requests to the appropriate regional service. |
| Model ID | The OCID (Oracle Cloud Identifier) of the generative AI model to use for chat generation. The node auto-detects the API format and corrects the region based on this ID. |
| Note | Informational notice explaining that API format (GENERIC, COHERE, LLAMA) and region are auto-detected from the model vendor and ID, and that model validation occurs before execution. |
| Options | Additional generation parameters to customize the output: |
| - Temperature | Controls randomness of generated text (0 to 1). Lower values produce more deterministic output; higher values increase diversity. Default is 0.2. |
| - Top P | Probability threshold for nucleus sampling (0 to 1). Restricts token selection to the smallest set of most likely tokens whose cumulative probability reaches this value; lower values make the output more focused. Default is 1. |
| - Top K | Limits the number of highest probability tokens considered at each step (0 to 500). Higher values increase diversity but may reduce coherence. Set to -1 to disable. Default is 0. |
| - Frequency Penalty | Penalizes tokens that have already appeared in the generated text to discourage repetition. Range 0 to 1. Default is 0. |
| - Presence Penalty | Penalizes tokens that have already appeared at least once in the text so far, regardless of how often, to encourage the model to introduce new content. Positive values increase the penalty. Default is 0. |
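For illustration, a configuration of these properties might look like the following. The top-level key names (`serviceEndpoint`, `modelId`) and the example OCID are illustrative placeholders, not necessarily the node's exact internal parameter names:

```json
{
  "serviceEndpoint": "https://inference.generativeai.me-riyadh-1.oci.oraclecloud.com",
  "modelId": "ocid1.generativeaimodel.oc1.me-riyadh-1.<unique-id>",
  "options": {
    "temperature": 0.2,
    "topP": 1,
    "topK": 0,
    "frequencyPenalty": 0,
    "presencePenalty": 0
  }
}
```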
Output
The node outputs a JSON object containing the generated chat response from the OCI Generative AI model. The exact structure depends on the underlying OCI API response but typically includes:
- The generated text message(s) from the model.
- Metadata about the generation such as model used and parameters applied.
This node focuses on text chat generation only; it does not produce binary output such as images or audio.
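A hypothetical output item is sketched below. The actual field names depend on the model's API format (GENERIC, COHERE, LLAMA) and the OCI API version, so treat this as an illustration of the kind of data returned rather than an exact schema:

```json
{
  "chatResponse": {
    "text": "A short reply generated by the model...",
    "finishReason": "COMPLETE"
  },
  "modelId": "ocid1.generativeaimodel.oc1.me-riyadh-1.<unique-id>"
}
```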
Dependencies
- Requires an API key credential for authenticating with OCI Generative AI services.
- Depends on several OCI SDK packages (`oci-common`, `oci-generativeaiinference`, `oci-generativeai`) which must be installed in the n8n environment.
- The node automatically detects the model's API format and adjusts the service endpoint region accordingly.
- Proper OCI tenancy, user OCID, key fingerprint, private key, and passphrase credentials must be configured for authentication.
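For context, here is a minimal sketch of how these pieces are typically wired together with the OCI TypeScript SDK. The class names and endpoint handling are assumptions based on the standard SDK, not the node's actual implementation; the node performs the equivalent steps internally using your configured credentials:

```typescript
// Assumes the OCI SDK packages are installed in the n8n environment, e.g.:
//   npm install oci-common oci-generativeaiinference
import * as common from "oci-common";
import * as generativeaiinference from "oci-generativeaiinference";

// Authentication provider built from the same credential fields the node expects.
const provider = new common.SimpleAuthenticationDetailsProvider(
  "<tenancy OCID>",
  "<user OCID>",
  "<key fingerprint>",
  "<private key (PEM)>",
  null, // passphrase, if the private key is encrypted
  common.Region.fromRegionId("me-riyadh-1"),
);

// Inference client pointed at the regional service endpoint from the node's properties.
const client = new generativeaiinference.GenerativeAiInferenceClient({
  authenticationDetailsProvider: provider,
});
client.endpoint = "https://inference.generativeai.me-riyadh-1.oci.oraclecloud.com";

// The node then calls the client's chat operation with the model OCID and the
// generation options (temperature, topP, topK, penalties) described above.
```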
Troubleshooting
- Model ID or Region Mismatch: If the model's region does not match the provided service endpoint, the node attempts to auto-correct the endpoint but warns the user. Ensure the model OCID and endpoint correspond to the same region; for example, a model OCID containing the `eu-frankfurt-1` segment (such as `ocid1.generativeaimodel.oc1.eu-frankfurt-1.<unique-id>`) should be paired with `https://inference.generativeai.eu-frankfurt-1.oci.oraclecloud.com`.
- Failed Model Details Fetch: Errors fetching model metadata usually indicate incorrect model ID, insufficient permissions, or network issues. Verify the model OCID is valid and accessible with your credentials.
- Missing OCI SDK Modules: If the required OCI modules are not installed, the node throws an error instructing you to install them via npm.
- Authentication Errors: Check that all OCI credential fields (private key, tenancy OCID, user OCID, fingerprint, passphrase) are correctly set and valid.
- Parameter Validation: Ensure numeric options like `temperature`, `topP`, `topK`, `frequencyPenalty`, and `presencePenalty` are within their allowed ranges.
Links and References
- Oracle Cloud Infrastructure Generative AI Documentation
- OCI Generative AI SDK GitHub
- n8n Node Development Guide
- OpenAI GPT Sampling Parameters Explanation (for understanding temperature, top_p, etc.)