Actions
- Continuous Activity Actions
- Dataset Actions
- Get Last Metric Values
- Get Metadata
- Get Schema
- Get Single Metric History
- List Datasets
- List Partitions
- Compute Metrics
- Create Dataset
- Create Managed Dataset
- Delete Data
- Delete Dataset
- Execute Tables Import
- Get Column Lineage
- Get Data
- Get Data - Alternative Version
- Get Dataset Settings
- Get Full Info
- List Tables
- List Tables Schemas
- Prepare Tables Import
- Run Checks
- Set Metadata
- Set Schema
- Synchronize Hive Metastore
- Update Dataset Settings
- Update From Hive Metastore
- API Service Actions
- Bundles Automation-Side Actions
- Bundles Design-Side Actions
- Connection Actions
- Dashboard Actions
- Data Collection Actions
- Data Quality Actions
- Compute Rules on Specific Partition
- Create Data Quality Rules Configuration
- Delete Rule
- Get Data Quality Project Current Status
- Get Data Quality Project Timeline
- Get Data Quality Rules Configuration
- Get Dataset Current Status
- Get Dataset Current Status per Partition
- Get Last Outcome on Specific Partition
- Get Last Rule Results
- Get Rule History
- Update Rule Configuration
- DSS Administration Actions
- Job Actions
- Library Actions
- Dataset Statistic Actions
- Discussion Actions
- Flow Documentation Actions
- Insight Actions
- Internal Metric Actions
- LLM Mesh Actions
- Machine Learning - Lab Actions
- Delete Visual Analysis
- Deploy Trained Model to Flow
- Download Model Documentation of Trained Model
- Generate Model Documentation From Custom Template
- Start Training ML Task
- Update User Metadata for Trained Model
- Update Visual Analysis
- Adjust Forecasting Parameters and Algorithm
- Compute Partial Dependencies of Trained Model
- Compute Subpopulation Analysis of Trained Model
- Create ML Task
- Create Visual Analysis
- Create Visual Analysis and ML Task
- Generate Model Documentation From Default Template
- Generate Model Documentation From File Template
- Get ML Task Settings
- Get ML Task Status
- Get Model Snippet
- Get Partial Dependencies of Trained Model
- Get Scoring Jar of Trained Model
- Get Scoring PMML of Trained Model
- Get Subpopulation Analysis of Trained Model
- Get Trained Model Details
- Get Visual Analysis
- List ML Tasks of Project
- List ML Tasks of Visual Analyses
- List Visual Analyses
- Reguess ML Task
- Machine Learning - Saved Model Actions
- Compute Partial Dependencies of Version
- Get Version Scoring PMML
- Get Version Snippet
- Import MLflow Version From File or Path
- List Saved Models
- List Versions
- Set Version Active
- Compute Subpopulation Analysis of Version
- Create Saved Model
- Delete Version
- Download Model Documentation of Version
- Evaluate MLflow Model Version
- Generate Model Documentation From Custom Template
- Generate Model Documentation From Default Template
- Generate Model Documentation From File Template
- Get MLflow Model Version Metadata
- Get Partial Dependencies of Version
- Get Saved Model
- Get Subpopulation Analysis of Version
- Get Version Details
- Get Version Scoring Jar
- Set Version User Meta
- Update Saved Model
- Long Task Actions
- Machine Learning - Experiment Tracking Actions
- Macro Actions
- Plugin Actions
- Download Plugin
- Fetch From Git Remote
- Get File Detail From Plugin
- Get Git Remote Info
- Get Plugin Settings
- Install Plugin From Git
- Install Plugin From Store
- List Files in Plugin
- List Git Branches
- List Plugin Usages
- Move File or Folder in Plugin
- Add Folder to Plugin
- Create Development Plugin
- Create Plugin Code Env
- Delete File From Plugin
- Delete Git Remote Info
- Delete Plugin
- Download File From Plugin
- Move Plugin to Dev Environment
- Pull From Git Remote
- Push to Git Remote
- Rename File or Folder in Plugin
- Reset to Local Head State
- Reset to Remote Head State
- Set Git Remote Info
- Set Plugin Settings
- Update Plugin Code Env
- Update Plugin From Git
- Update Plugin From Store
- Update Plugin From Zip Archive
- Upload File to Plugin
- Upload Plugin
- Project Deployer Actions
- Get Deployment Settings
- Get Deployment Status
- Create Deployment
- Create Infra
- Create Project
- Delete Bundle
- Delete Deployment
- Delete Infra
- Delete Project
- Get Deployment
- Get Deployment Governance Status
- Get Infra
- Get Infra Settings
- Get Project
- Get Project Settings
- Save Deployment Settings
- Save Infra Settings
- Save Project Settings
- Update Deployment
- Upload Bundle
- SQL Query Actions
- Wiki Actions
- Managed Folder Actions
- Meaning Actions
- Model Comparison Actions
- Notebook Actions
- Project Actions
- Project Folder Actions
- Recipe Actions
- Scenario Actions
- Security Actions
- Streaming Endpoint Actions
- Webapp Actions
- Workspace Actions
Overview
The "Upload Bundle" operation of the Project Deployer resource in this node allows users to upload a project bundle to an existing project on a Dataiku DSS instance via its API. This is useful for automating deployment workflows where project bundles (which may contain code, configurations, and data) need to be programmatically uploaded to a deployment environment.
Typical scenarios include:
- Automating continuous integration/continuous deployment (CI/CD) pipelines by uploading updated project bundles.
- Managing multiple project deployments from external systems or orchestration tools.
- Integrating Dataiku project deployment into broader automation workflows.
For example, a user might export a project bundle from a development environment and then use this node to upload it to a production deployment project automatically.
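To make the upload mechanics concrete, here is a minimal TypeScript sketch of the kind of HTTP call the node performs. The endpoint path, multipart field name, and archive filename are illustrative assumptions rather than the node's verbatim implementation; consult the DSS API reference for the exact route.

```typescript
import { readFile } from "node:fs/promises";

// Minimal sketch, assuming a DSS base URL, a user API key, and a
// Project Deployer bundle-upload route. The path below is hypothetical;
// check the DSS API reference for the real one.
async function uploadBundle(
  baseUrl: string,     // e.g. "https://dss.example.com" (placeholder)
  apiKey: string,      // DSS user API key
  projectKey: string,  // target Deployer project
  bundlePath: string,  // local path to the exported bundle archive
): Promise<unknown> {
  const form = new FormData();
  // Attach the bundle archive as multipart form-data; "file" is an
  // assumed field name.
  form.append("file", new Blob([await readFile(bundlePath)]), "bundle.zip");

  // DSS API keys are typically sent via HTTP Basic auth: the key as
  // username, empty password.
  const auth = Buffer.from(`${apiKey}:`).toString("base64");

  const url = `${baseUrl}/public/api/project-deployer/projects/${projectKey}/bundles`;
  const res = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Basic ${auth}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status} ${await res.text()}`);
  return res.status === 204 ? { status: "No content" } : res.json();
}
```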
Properties
| Name | Meaning |
|---|---|
| Query Parameters | Optional key-value pairs appended to the API request URL as query parameters. Available options include `active`, `archivePath`, `deletionMode`, `dropData`, `exportAnalysisModels`, `forceRebuildEnv`, `includeLibs`, `limit`, `name`, and `wait`, among others; they control specific behaviors of the upload and other API calls. |
| File | The binary file content representing the bundle to upload. This is used when the operation requires sending a file stream (e.g., uploading a bundle archive). |
| Project Key | The identifier of the target project to which the bundle will be uploaded. This is required to specify the destination project. |
| Bundle ID | Identifier of a bundle. Used by some bundle-related operations; not required for upload unless an existing bundle is being referenced. |
| Request Body | JSON object containing additional data to send in the body of the API request. For upload, this can include metadata or configuration details relevant to the bundle upload. |
Note: The full list of query parameter options is extensive and includes booleans, strings, numbers, and flags that influence how the upload or other API calls behave.
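As a rough illustration of how such key-value pairs end up on the request, here is a small hypothetical helper using URLSearchParams; the parameter names in the usage example are drawn from the option list above.

```typescript
// Append optional query parameters to an endpoint URL. Booleans and
// numbers are serialized as strings on the query string.
function withQueryParams(
  endpoint: string,
  params: Record<string, string | number | boolean>,
): string {
  const search = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    search.append(key, String(value));
  }
  const qs = search.toString();
  return qs ? `${endpoint}?${qs}` : endpoint;
}

// Example: wait for the call to complete and name the bundle.
const url = withQueryParams(
  "https://dss.example.com/public/api/...", // placeholder endpoint
  { wait: true, name: "release-2024-06" },
);
```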
Output
The node outputs the response from the Dataiku DSS API call:
- If the response is JSON, it returns it as a JSON object in the `json` output field.
- If the response is a downloadable file (binary), such as a bundle archive or documentation, it returns the binary data prepared for further use or saving.
- If the response is plain-text logs, it returns them under a `logs` JSON property.
- If no content is returned (HTTP 204), it outputs a status message indicating no content.
This flexible output handling allows downstream nodes to process either metadata responses or binary files depending on the operation.
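A sketch of that branching, assuming the HTTP status and Content-Type header act as the discriminators; the `json` and `logs` field names mirror the description above.

```typescript
// Route an API response into JSON, log text, a no-content status, or a
// binary payload, mirroring the output behavior described above.
async function handleResponse(res: Response): Promise<Record<string, unknown>> {
  if (res.status === 204) {
    return { json: { status: "No content" } };
  }
  const contentType = res.headers.get("content-type") ?? "";
  if (contentType.includes("application/json")) {
    return { json: await res.json() };
  }
  if (contentType.startsWith("text/")) {
    return { json: { logs: await res.text() } };
  }
  // Anything else is treated as a downloadable binary file.
  return { binary: Buffer.from(await res.arrayBuffer()) };
}
```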
Dependencies
- Requires an active connection to a Dataiku DSS instance with a valid API key credential.
- The node expects the base URL of the DSS server and a user API key for authentication.
- The node uses HTTP requests to interact with the Dataiku DSS REST API endpoints.
- For file uploads, it uses multipart form-data encoding.
- No additional external services beyond the Dataiku DSS API are required.
Troubleshooting
- Missing Credentials: The node throws an error if the required API credentials are missing. Ensure you have configured the API key credential properly.
- Required Parameters Missing: Many operations require specific parameters such as `Project Key`, `Bundle ID`, or `Deployment ID`. Errors will indicate which parameter is missing; provide all required inputs.
- Invalid URLs or Endpoints: If the constructed API endpoint URL is incorrect due to missing or wrong parameters, the API call will fail. Double-check input values.
- File Upload Issues: When uploading bundles, ensure the file binary data is correctly provided and accessible.
- API Errors: Any errors returned by the Dataiku DSS API are wrapped and reported with messages that include the original error message and, when available, the stack trace.
- Timeouts or Network Issues: Network connectivity problems to the DSS server will cause request failures. Verify network access and server availability.
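For the API-error case, here is a hedged sketch of the wrap-and-report pattern described above; the wrapper function and error shape are assumptions, not the node's actual implementation.

```typescript
// Wrap a failed DSS API call so the reported error keeps the original
// message and, when available, the stack trace.
async function callDss<T>(request: () => Promise<T>): Promise<T> {
  try {
    return await request();
  } catch (err) {
    const original = err instanceof Error ? err : new Error(String(err));
    const wrapped = new Error(`Dataiku DSS API error: ${original.message}`);
    if (original.stack) {
      wrapped.stack = `${wrapped.message}\n${original.stack}`;
    }
    throw wrapped;
  }
}
```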
Links and References
- Dataiku DSS API Documentation — Official API reference for understanding endpoints and parameters.
- Dataiku Project Deployer Guide — Documentation on managing project deployments in Dataiku DSS.
- n8n Documentation — For general usage of n8n nodes and credentials setup.
This summary focuses on the "Upload Bundle" operation within the Project Deployer resource, describing its purpose, inputs, outputs, dependencies, and common troubleshooting points based on static analysis of the node's source code and provided properties.