Actions
Overview
This node integrates with the FastGPT AI customer-service platform to manage datasets (collections). Specifically, the operation "创建一个纯文本集合" ("Create a plain text collection") creates a new collection from plain-text data. This is useful for organizing custom text into collections and training AI models on it.
Common scenarios include:
- Creating a new text dataset for AI training or knowledge base purposes.
- Organizing raw text data into manageable collections within FastGPT.
- Preparing textual content for further processing such as chunking or QA-based splitting.
Practical example:
- A user wants to upload a large body of text (e.g., company manuals) as a new dataset in FastGPT to enable AI-powered question answering on that content. They use this node to create the dataset by specifying the parent folder, knowledge base, dataset name, training type, and the raw text itself.
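The parameters listed in that example can be sketched as a request body. The field names below (`parentId`, `datasetId`, `trainingType`, `chunkSize`, and so on) are assumptions mapped from the node's properties, not the confirmed FastGPT schema; check the FastGPT API reference for the exact fields.

```typescript
// Hypothetical payload builder for the "Create a plain text collection"
// operation. Field names are assumptions, not the confirmed FastGPT schema.
interface CreateTextCollectionBody {
  parentId: string;                // 父级目录ID
  datasetId: string;               // 选择知识库
  name: string;                    // 名称 (required)
  trainingType: "chunk" | "qa";    // 训练类型
  chunkSize?: number;              // Chunk长度 (chunk mode only)
  chunkSplitter?: string;          // 自定义最高优先分割符号 (chunk mode only)
  qaPrompt?: string;               // QA拆分自定义提示词 (qa mode only)
  text: string;                    // 原文本
}

function buildBody(params: CreateTextCollectionBody): CreateTextCollectionBody {
  if (!params.name) throw new Error("名称 (name) is required");
  // Drop parameters that do not apply to the selected training type.
  const body = { ...params };
  if (body.trainingType === "qa") {
    delete body.chunkSize;
    delete body.chunkSplitter;
  } else {
    delete body.qaPrompt;
  }
  return body;
}
```

This mirrors the conditional display in the properties below: chunk-only fields are stripped in QA mode and vice versa.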
Properties
| Name | Meaning |
|---|---|
| 父级目录ID | The ID of the parent directory/folder in FastGPT where the new dataset will be created. Users can find this ID by opening the folder in FastGPT and copying it from the browser URL. |
| 选择知识库 | Select the knowledge base under which the dataset will be created. The options are dynamically loaded based on the selected parent directory. |
| 名称 | The name of the new dataset to be created. This is a required field. |
| 训练类型 | The training type for the dataset. Options: 按文本长度进行分割 ("chunk") splits the text by length; QA拆分 ("qa") splits the text using a question-answer approach. |
| Chunk长度 | (Shown only if training type is "chunk") The maximum length of each text chunk when splitting the text. Default is 3000 characters. |
| 自定义最高优先分割符号 | (Shown only if training type is "chunk") Custom highest priority splitter symbol used when chunking the text. |
| QA拆分自定义提示词 | (Shown only if training type is "qa") Custom prompt used for QA-based splitting of the text. |
| 原文本 | The raw plain text content to be included in the dataset. This is a multiline string input. |
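To illustrate what the chunk-mode properties (Chunk长度 and 自定义最高优先分割符号) mean in practice, here is a conceptual sketch of length-based splitting with a highest-priority splitter. This is an illustration of the idea only, not FastGPT's actual chunking algorithm.

```typescript
// Illustrative chunker: split on the custom highest-priority splitter first,
// then pack pieces into chunks no longer than maxLen characters.
// FastGPT's real chunking algorithm may differ from this sketch.
function chunkText(text: string, maxLen: number, splitter = "\n"): string[] {
  const pieces = text.split(splitter);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    const candidate = current ? current + splitter + piece : piece;
    if (candidate.length <= maxLen) {
      current = candidate;
    } else {
      if (current) chunks.push(current);
      // A single piece longer than maxLen is hard-split by length.
      let rest = piece;
      while (rest.length > maxLen) {
        chunks.push(rest.slice(0, maxLen));
        rest = rest.slice(maxLen);
      }
      current = rest;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

With the default Chunk长度 of 3000, most documents split along the custom splitter; the hard split only applies when a single segment exceeds the limit.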
Output
The node outputs JSON data representing the result of the dataset creation request. Typically, this includes details about the newly created dataset such as its ID, name, and metadata returned from the FastGPT API.
This operation does not produce binary output (such as files or attachments); the result is JSON only.
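A downstream node might read the new collection's ID from that JSON result. The field names checked below (`data`, `id`, `_id`, `collectionId`) are hypothetical guesses at common response shapes, not the documented FastGPT schema; inspect a real response before relying on them.

```typescript
// Defensively extract an ID from a FastGPT-style JSON result.
// The field names probed here are assumptions; verify against a real response.
function extractCollectionId(result: unknown): string | undefined {
  if (typeof result !== "object" || result === null) return undefined;
  const r = result as Record<string, unknown>;
  // Some APIs wrap the payload in a "data" envelope; fall back to the root.
  const data =
    typeof r.data === "object" && r.data !== null
      ? (r.data as Record<string, unknown>)
      : r;
  const id = data.id ?? data._id ?? data.collectionId;
  return typeof id === "string" ? id : undefined;
}
```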
Dependencies
- Requires an API key credential for authenticating with the FastGPT platform.
- The node sends HTTP POST requests to FastGPT endpoints, so network connectivity to the configured FastGPT base URL is necessary.
- Dynamic loading of knowledge base options depends on the parent directory ID being correctly specified.
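The dependencies above amount to an authenticated HTTP POST. A minimal sketch of assembling such a request follows; the endpoint path and Bearer-token scheme are assumptions for illustration (the node handles this internally via its API key credential), so confirm both against the FastGPT API reference.

```typescript
// Assemble a FastGPT-style authenticated request. The endpoint path is an
// assumed example; the real path comes from the FastGPT API reference.
function buildRequest(baseUrl: string, apiKey: string, body: object) {
  if (!apiKey) throw new Error("Missing FastGPT API key credential");
  return {
    method: "POST" as const,
    // Resolve the path against the configured FastGPT base URL.
    url: new URL("/api/core/dataset/collection/create/text", baseUrl).toString(),
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}
```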
Troubleshooting
- Missing or invalid parent directory ID: If the parent directory ID is incorrect or missing, the node may fail to load knowledge bases or create the dataset. Ensure the ID is copied exactly from the FastGPT UI.
- API authentication errors: If the API key credential is invalid or expired, the node will return authentication errors. Verify and update the API key in n8n credentials.
- Validation errors: Required fields like dataset name must be provided; otherwise, the node will throw validation errors.
- Incorrect training type or parameters: Using incompatible combinations of training type and parameters (e.g., providing chunk size when training type is "qa") might cause unexpected behavior or API errors.
- Network issues: Connectivity problems to the FastGPT API endpoint will cause request failures.
Links and References
- FastGPT Official Documentation (replace with actual URL)
- FastGPT API reference for dataset management (check FastGPT developer portal)
- n8n documentation on creating custom nodes and handling dynamic options