Actions (364)
- Continuous Activity Actions
- Dataset Actions
  - Get Last Metric Values
  - Get Metadata
  - Get Schema
  - Get Single Metric History
  - List Datasets
  - List Partitions
  - Compute Metrics
  - Create Dataset
  - Create Managed Dataset
  - Delete Data
  - Delete Dataset
  - Execute Tables Import
  - Get Column Lineage
  - Get Data
  - Get Data - Alternative Version
  - Get Dataset Settings
  - Get Full Info
  - List Tables
  - List Tables Schemas
  - Prepare Tables Import
  - Run Checks
  - Set Metadata
  - Set Schema
  - Synchronize Hive Metastore
  - Update Dataset Settings
  - Update From Hive Metastore
- API Service Actions
- Bundles Automation-Side Actions
- Bundles Design-Side Actions
- Connection Actions
- Dashboard Actions
- Data Collection Actions
- Data Quality Actions
  - Compute Rules on Specific Partition
  - Create Data Quality Rules Configuration
  - Delete Rule
  - Get Data Quality Project Current Status
  - Get Data Quality Project Timeline
  - Get Data Quality Rules Configuration
  - Get Dataset Current Status
  - Get Dataset Current Status per Partition
  - Get Last Outcome on Specific Partition
  - Get Last Rule Results
  - Get Rule History
  - Update Rule Configuration
- DSS Administration Actions
- Job Actions
- Library Actions
- Dataset Statistic Actions
- Discussion Actions
- Flow Documentation Actions
- Insight Actions
- Internal Metric Actions
- LLM Mesh Actions
- Machine Learning - Lab Actions
  - Delete Visual Analysis
  - Deploy Trained Model to Flow
  - Download Model Documentation of Trained Model
  - Generate Model Documentation From Custom Template
  - Start Training ML Task
  - Update User Metadata for Trained Model
  - Update Visual Analysis
  - Adjust Forecasting Parameters and Algorithm
  - Compute Partial Dependencies of Trained Model
  - Compute Subpopulation Analysis of Trained Model
  - Create ML Task
  - Create Visual Analysis
  - Create Visual Analysis and ML Task
  - Generate Model Documentation From Default Template
  - Generate Model Documentation From File Template
  - Get ML Task Settings
  - Get ML Task Status
  - Get Model Snippet
  - Get Partial Dependencies of Trained Model
  - Get Scoring Jar of Trained Model
  - Get Scoring PMML of Trained Model
  - Get Subpopulation Analysis of Trained Model
  - Get Trained Model Details
  - Get Visual Analysis
  - List ML Tasks of Project
  - List ML Tasks of Visual Analyses
  - List Visual Analyses
  - Reguess ML Task
- Machine Learning - Saved Model Actions
  - Compute Partial Dependencies of Version
  - Get Version Scoring PMML
  - Get Version Snippet
  - Import MLflow Version From File or Path
  - List Saved Models
  - List Versions
  - Set Version Active
  - Compute Subpopulation Analysis of Version
  - Create Saved Model
  - Delete Version
  - Download Model Documentation of Version
  - Evaluate MLflow Model Version
  - Generate Model Documentation From Custom Template
  - Generate Model Documentation From Default Template
  - Generate Model Documentation From File Template
  - Get MLflow Model Version Metadata
  - Get Partial Dependencies of Version
  - Get Saved Model
  - Get Subpopulation Analysis of Version
  - Get Version Details
  - Get Version Scoring Jar
  - Set Version User Meta
  - Update Saved Model
- Long Task Actions
- Machine Learning - Experiment Tracking Actions
- Macro Actions
- Plugin Actions
  - Download Plugin
  - Fetch From Git Remote
  - Get File Detail From Plugin
  - Get Git Remote Info
  - Get Plugin Settings
  - Install Plugin From Git
  - Install Plugin From Store
  - List Files in Plugin
  - List Git Branches
  - List Plugin Usages
  - Move File or Folder in Plugin
  - Add Folder to Plugin
  - Create Development Plugin
  - Create Plugin Code Env
  - Delete File From Plugin
  - Delete Git Remote Info
  - Delete Plugin
  - Download File From Plugin
  - Move Plugin to Dev Environment
  - Pull From Git Remote
  - Push to Git Remote
  - Rename File or Folder in Plugin
  - Reset to Local Head State
  - Reset to Remote Head State
  - Set Git Remote Info
  - Set Plugin Settings
  - Update Plugin Code Env
  - Update Plugin From Git
  - Update Plugin From Store
  - Update Plugin From Zip Archive
  - Upload File to Plugin
  - Upload Plugin
- Project Deployer Actions
  - Get Deployment Settings
  - Get Deployment Status
  - Create Deployment
  - Create Infra
  - Create Project
  - Delete Bundle
  - Delete Deployment
  - Delete Infra
  - Delete Project
  - Get Deployment
  - Get Deployment Governance Status
  - Get Infra
  - Get Infra Settings
  - Get Project
  - Get Project Settings
  - Save Deployment Settings
  - Save Infra Settings
  - Save Project Settings
  - Update Deployment
  - Upload Bundle
- SQL Query Actions
- Wiki Actions
- Managed Folder Actions
- Meaning Actions
- Model Comparison Actions
- Notebook Actions
- Project Actions
- Project Folder Actions
- Recipe Actions
- Scenario Actions
- Security Actions
- Streaming Endpoint Actions
- Webapp Actions
- Workspace Actions
Overview
This node integrates with the Dataiku DSS API to perform operations on Dataiku DSS resources. For the Notebook resource, the Create Jupyter Notebook operation creates a new Jupyter notebook within a specified Dataiku project.
Typical use cases include automating notebook creation as part of data science workflows, managing notebooks in Dataiku projects programmatically, and integrating notebook creation into larger automation pipelines.
For example, a user might automate the creation of a notebook named "Analysis_2024" in a project to prepare an environment for exploratory data analysis or model development.
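Under the hood, this amounts to a single authenticated HTTP request. The sketch below (TypeScript, Node 18+) shows what that call plausibly looks like; `DSS_URL`, the helper name `createJupyterNotebook`, and the `/public/api/projects/{projectKey}/jupyter-notebooks/{notebookName}` route are illustrative assumptions, not the node's actual internals — check the Dataiku DSS API reference for the exact route on your DSS version.

```typescript
// Hedged sketch of the REST call this node presumably issues.
const DSS_URL = "https://dss.example.com";      // hypothetical server URL
const API_KEY = process.env.DSS_API_KEY ?? "";  // supplied via n8n credentials

async function createJupyterNotebook(
  projectKey: string,
  notebookName: string,
  body: object = {},
): Promise<unknown> {
  // Dataiku DSS conventionally accepts the API key as the HTTP Basic
  // username with an empty password.
  const auth = Buffer.from(`${API_KEY}:`).toString("base64");

  const res = await fetch(
    // Assumed route for illustration; verify against your DSS API docs.
    `${DSS_URL}/public/api/projects/${encodeURIComponent(projectKey)}` +
      `/jupyter-notebooks/${encodeURIComponent(notebookName)}`,
    {
      method: "POST",
      headers: {
        Authorization: `Basic ${auth}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    },
  );

  if (!res.ok) {
    // Surface API errors (permission denied, bad project key, ...).
    throw new Error(`DSS API error ${res.status}: ${await res.text()}`);
  }
  return res.json();
}
```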
Properties
| Name | Meaning |
|---|---|
| Project Key | The unique identifier of the Dataiku project where the notebook will be created. |
| Notebook Name | The name of the Jupyter notebook to create within the specified project. |
| Request Body | (Optional) A JSON object representing additional request parameters or body content. |
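As a concrete illustration, the three properties might be filled in as follows. This is a hypothetical configuration, not an exported workflow; the project key and notebook name are invented values.

```typescript
// Illustrative property values for this node.
const parameters = {
  projectKey: "SALES_ANALYTICS", // hypothetical project
  notebookName: "Analysis_2024",
  requestBody: {},               // optional JSON for additional request parameters or body content
};
```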
Output
The node outputs the response from the Dataiku DSS API after attempting to create the notebook. The output is structured as JSON containing details about the newly created notebook or any relevant metadata returned by the API.
Creating a notebook does not involve binary data, so for this operation the output is always JSON.
Example output JSON structure:
```json
{
  "id": "notebook_id",
  "name": "notebook_name",
  "projectKey": "project_key",
  "createdAt": "timestamp",
  "updatedAt": "timestamp",
  "otherDetails": { ... }
}
```
Dependencies
- Requires valid Dataiku DSS API credentials, including:
  - The URL or hostname of the Dataiku DSS server.
  - An API key or token for authentication.
- The node uses HTTP requests to communicate with the Dataiku DSS REST API.
- No additional external libraries beyond those bundled with n8n are required.
- The user must configure the Dataiku DSS API credentials securely in n8n before using this node.
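Before wiring the credentials into n8n, it can help to confirm that the server is reachable and the key authenticates at all. A minimal pre-flight check, assuming `GET /public/api/projects/` is available as a cheap authenticated route on your DSS version (substitute any endpoint your server exposes; `DSS_URL` is hypothetical):

```typescript
// Minimal credential sanity check run outside n8n.
const DSS_URL = "https://dss.example.com";
const API_KEY = process.env.DSS_API_KEY ?? "";

async function checkCredentials(): Promise<void> {
  const res = await fetch(`${DSS_URL}/public/api/projects/`, {
    headers: {
      // API key as the Basic-auth username, empty password.
      Authorization: `Basic ${Buffer.from(`${API_KEY}:`).toString("base64")}`,
    },
  });
  console.log(res.ok ? "Credentials OK" : `Auth failed: HTTP ${res.status}`);
}

checkCredentials();
```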
Troubleshooting
- Missing Credentials Error: If the node throws an error about missing credentials, ensure that the Dataiku DSS API credentials are properly configured in n8n.
- Project Key Required: The operation requires a valid project key; if omitted or incorrect, the node will throw an error.
- Notebook Name Required: The notebook name must be provided; otherwise, the node will not proceed.
- API Errors: Any errors returned by the Dataiku DSS API (e.g., permission denied, invalid project key) will be surfaced as node errors. Check the API response message for details.
- Network Issues: Ensure that the n8n instance can reach the Dataiku DSS server URL and that no firewall or network restrictions block the connection.
- Invalid JSON in Request Body: If providing a custom request body, ensure it is valid JSON to avoid parsing errors.
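For the last point, a small validation helper run before this node (for example in a Code node) turns a parsing failure into a readable error instead of an opaque API failure. This is a sketch; `jsonText` is a placeholder for whatever expression supplies the body.

```typescript
// Defensive JSON parsing for the optional Request Body property.
function parseRequestBody(jsonText: string): object {
  try {
    const parsed = JSON.parse(jsonText);
    if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
      throw new Error("Request body must be a JSON object");
    }
    return parsed;
  } catch (err) {
    // Fail fast with a readable message.
    throw new Error(`Invalid Request Body JSON: ${(err as Error).message}`);
  }
}
```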
Links and References
- Dataiku DSS API Documentation
- Dataiku DSS Jupyter Notebooks API Reference
- n8n Documentation on Creating Custom Nodes