
Working with data science pipelines

As a data scientist, you can enhance your data science projects on Open Data Hub by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. Data science pipelines enable you to standardize and automate machine learning workflows so that you can develop and deploy your data science models.

For example, the steps in a machine learning workflow might include items such as data extraction, data processing, feature extraction, model training, model validation, and model serving. Automating these activities enables your organization to develop a continuous process of retraining and updating a model based on newly received data. This can help resolve challenges related to building an integrated machine learning deployment and continuously operating it in production.

You can also use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab. For more information, see Working with pipelines in JupyterLab.

From Open Data Hub version 2.10.0, data science pipelines are based on Kubeflow Pipelines (KFP) version 2.0. For more information, see Enabling data science pipelines 2.0.

To use a data science pipeline in Open Data Hub, you need the following components:

  • Pipeline server: A server that is attached to your data science project and hosts your data science pipeline.

  • Pipeline: A pipeline defines the configuration of your machine learning workflow and the relationship between each component in the workflow.

    • Pipeline code: A definition of your pipeline in a YAML file.

    • Pipeline graph: A graphical illustration of the steps executed in a pipeline run and the relationship between them.

  • Pipeline experiment: A workspace where you can try different configurations of your pipelines. You can use experiments to organize your runs into logical groups.

    • Archived pipeline experiment: A pipeline experiment that you have archived. You can restore archived experiments or delete them.

    • Pipeline artifact: An output artifact produced by a pipeline component.

    • Pipeline execution: The execution of a task in a pipeline.

  • Pipeline run: An execution of your pipeline.

    • Active run: A pipeline run that is executing or that has stopped.

    • Scheduled run: A pipeline run that is scheduled to execute at least once.

    • Archived run: A pipeline run that you have archived.

This feature is based on Kubeflow Pipelines 2.0. Use the latest Kubeflow Pipelines 2.0 SDK to build your data science pipeline in Python code. After you have built your pipeline, use the SDK to compile it into an Intermediate Representation (IR) YAML file. The Open Data Hub user interface enables you to track and manage pipelines and pipeline runs.

You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account.

Enabling data science pipelines 2.0

From Open Data Hub version 2.10.0, data science pipelines are based on Kubeflow Pipelines (KFP) version 2.0. Data science pipelines 2.0 is enabled and deployed by default in Open Data Hub.

Note

The PipelineConf class is deprecated, and there is no KFP 2.0 equivalent.

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. Open Data Hub does not support direct customer usage of this installation of Argo Workflows.

To install or upgrade to Open Data Hub 2.10.0 or later with data science pipelines, ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by Open Data Hub.

Argo Workflows resources that are created by Open Data Hub have the following labels, which you can view in the OpenShift Console under Administration > CustomResourceDefinitions, in the argoproj.io group:

 labels:
    app.kubernetes.io/part-of: data-science-pipelines-operator
    app.opendatahub.io/data-science-pipelines-operator: 'true'
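
If you want to check programmatically which CustomResourceDefinitions in the argoproj.io group exist on your cluster and whether they carry these labels, the following sketch shows one possible approach. It is not part of Open Data Hub, and it assumes that the kubernetes Python client is installed and that a valid kubeconfig is available:

    # List argoproj.io CRDs and report whether they are managed by data science pipelines.
    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    api = client.ApiextensionsV1Api()

    for crd in api.list_custom_resource_definition().items:
        if crd.spec.group == "argoproj.io":
            labels = crd.metadata.labels or {}
            managed_by_dsp = (
                labels.get("app.kubernetes.io/part-of") == "data-science-pipelines-operator"
            )
            print(crd.metadata.name, "managed by data science pipelines:", managed_by_dsp)

If the script reports argoproj.io CRDs that are not managed by data science pipelines, your cluster has a separate Argo Workflows installation.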

Installing Open Data Hub with data science pipelines 2.0

To install Open Data Hub 2.10.0 or later with data science pipelines, ensure that there is no installation of Argo Workflows that is not installed by data science pipelines on your cluster, and follow the installation steps described in Installing Open Data Hub.

If you install Open Data Hub 2.10.0 or later with the datasciencepipelines component while there is an existing installation of Argo Workflows that is not installed by data science pipelines on your cluster, data science pipelines will be disabled after the installation completes.

To enable data science pipelines, remove the separate installation of Argo Workflows from your cluster. Data science pipelines will be enabled automatically.

Upgrading to data science pipelines 2.0

Important

After you upgrade to Open Data Hub 2.9 or later, pipelines created with data science pipelines 1.0 continue to run, but are inaccessible from the Open Data Hub dashboard. If you are a current data science pipelines user, do not upgrade to Open Data Hub with data science pipelines 2.0 until you are ready to migrate to the new pipelines solution.

Managing data science pipelines

Configuring a pipeline server

Before you can successfully create a pipeline in Open Data Hub, you must configure a pipeline server. This task includes configuring where your pipeline artifacts and data are stored.

Note

You are not required to specify any storage directories when configuring a data connection for your pipeline server. When you import a pipeline, the /pipelines folder is created in the root folder of the bucket, containing a YAML file for the pipeline. If you upload a new version of the same pipeline, a new YAML file with a different ID is added to the /pipelines folder.

When you run a pipeline, the artifacts are stored in the /pipeline-name folder in the root folder of the bucket.

Important

If you use an external MySQL database and upgrade to Open Data Hub 2.10.0 or later, the database is migrated to data science pipelines 2.0 format, making it incompatible with earlier versions of Open Data Hub.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that you can add a pipeline server to.

  • You have an existing S3-compatible object storage bucket and you have configured write access to your S3 bucket on your storage account.

  • If you are configuring a pipeline server for production pipeline workloads, you have an existing external MySQL or MariaDB database.

  • If you are configuring a pipeline server with an external MySQL database, your database must use at least MySQL version 5.x. However, Red Hat recommends that you use MySQL version 8.x.

  • If you are configuring a pipeline server with a MariaDB database, your database must use MariaDB version 10.3 or later. However, Red Hat recommends that you use at least MariaDB version 10.5.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Projects.

    The Data Science Projects page opens.

  2. Click the name of the project that you want to configure a pipeline server for.

    A project details page opens.

  3. Click the Pipelines tab.

  4. Click Configure pipeline server.

    The Configure pipeline server dialog appears.

  5. In the Object storage connection section, provide values for the mandatory fields:

    1. In the Access key field, enter the access key ID for the S3-compatible object storage provider.

    2. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.

    3. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.

    4. In the Region field, enter the default region of your S3-compatible object storage account.

    5. In the Bucket field, enter the name of your S3-compatible object storage bucket.

      Important

      If you specify incorrect data connection settings, you cannot update these settings on the same pipeline server. Therefore, you must delete the pipeline server and configure another one.

      If you want to use an existing artifact that was not generated by a task in a pipeline, you can use the kfp.dsl.importer component to import the artifact from its URI. You can only import these artifacts to the S3-compatible object storage bucket that you define in the Bucket field in your pipeline server configuration. For more information about the kfp.dsl.importer component, see Special Case: Importer Components.

  6. In the Database section, click Show advanced database options to specify the database to store your pipeline data and select one of the following sets of actions:

    • Select Use default database stored on your cluster to deploy a MariaDB database in your project.

      Important

      The Use default database stored on your cluster option is intended for development and testing purposes only. For production pipeline workloads, select the Connect to external MySQL database option to use an external MySQL or MariaDB database.

    • Select Connect to external MySQL database to add a new connection to an external MySQL or MariaDB database that your pipeline server can access.

      1. In the Host field, enter the database’s host name.

      2. In the Port field, enter the database’s port.

      3. In the Username field, enter the default user name that is connected to the database.

      4. In the Password field, enter the password for the default user account.

      5. In the Database field, enter the database name.

  7. Click Configure pipeline server.

Verification

On the Pipelines tab for the project:

  • The Import pipeline button is available.

  • When you click the action menu (⋮) and then click View pipeline server configuration, the pipeline server details are displayed.

Defining a pipeline

The Kubeflow Pipelines SDK enables you to define end-to-end machine learning and data pipelines. Use the latest Kubeflow Pipelines 2.0 SDK to build your data science pipeline in Python code. After you have built your pipeline, use the SDK to compile it into an Intermediate Representation (IR) YAML file. After defining the pipeline, you can import the YAML file to the Open Data Hub dashboard to enable you to configure its execution settings.
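
For illustration, a minimal sketch of this workflow with the KFP 2.0 SDK might look like the following. The component logic, names, and output file path are examples only, not part of Open Data Hub:

    # Define a single-step pipeline and compile it to an IR YAML file with the KFP 2.0 SDK.
    from kfp import compiler, dsl

    @dsl.component(base_image="python:3.9")
    def train_model(learning_rate: float) -> float:
        # Placeholder training logic; replace this with your own training code.
        accuracy = 1.0 - learning_rate
        return accuracy

    @dsl.pipeline(name="example-training-pipeline")
    def training_pipeline(learning_rate: float = 0.01):
        train_model(learning_rate=learning_rate)

    if __name__ == "__main__":
        # Produces training_pipeline.yaml, which you can then import from the dashboard.
        compiler.Compiler().compile(
            pipeline_func=training_pipeline,
            package_path="training_pipeline.yaml",
        )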

You can also use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab. For more information about the Elyra JupyterLab extension, see Elyra Documentation.

Importing a data science pipeline

To help you begin working with data science pipelines in Open Data Hub, you can import a YAML file containing your pipeline’s code to an active pipeline server, or you can import the YAML file from a URL. This file contains a Kubeflow pipeline compiled by using the Kubeflow compiler. After you have imported the pipeline to a pipeline server, you can execute the pipeline by creating a pipeline run.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have compiled your pipeline with the Kubeflow compiler and you have access to the resulting YAML file.

  • If you are uploading your pipeline from a URL, the URL is publicly accessible.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that you want to import a pipeline to.

  3. Click Import pipeline.

  4. In the Import pipeline dialog, enter the details for the pipeline that you want to import.

    1. In the Pipeline name field, enter a name for the pipeline that you want to import.

    2. In the Pipeline description field, enter a description for the pipeline that you want to import.

    3. Select where you want to import your pipeline from by performing one of the following actions:

      • Select Upload a file to upload your pipeline from your local machine’s file system. Import your pipeline by clicking Upload, or by dragging and dropping a file.

      • Select Import by url to upload your pipeline from a URL, and then enter the URL into the text box.

    4. Click Import pipeline.

Verification
  • The pipeline that you imported appears on the Pipelines page and on the Pipelines tab on the project details page.

Deleting a data science pipeline

If you no longer require access to your data science pipeline on the dashboard, you can delete it so that it does not appear on the Data Science Pipelines page.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • There are active pipelines available on the Pipelines page.

  • The pipeline that you want to delete does not contain any pipeline versions. For more information, see Deleting a pipeline version.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline that you want to delete.

  3. Click the action menu (⋮) beside the pipeline that you want to delete, and then click Delete pipeline.

  4. In the Delete pipeline dialog, enter the pipeline name in the text field to confirm that you intend to delete it.

  5. Click Delete pipeline.

Verification
  • The data science pipeline that you deleted no longer appears on the Pipelines page.

Deleting a pipeline server

After you have finished running your data science pipelines, you can delete the pipeline server. Deleting a pipeline server automatically deletes all of its associated pipelines, pipeline versions, and runs. If your pipeline data is stored in a database, the database is also deleted along with its metadata. In addition, after deleting a pipeline server, you cannot create new pipelines or pipeline runs until you create another pipeline server.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline server that you want to delete.

  3. From the Pipeline server actions list, select Delete pipeline server.

  4. In the Delete pipeline server dialog, enter the name of the pipeline server in the text field to confirm that you intend to delete it.

  5. Click Delete.

Verification
  • Pipelines previously assigned to the deleted pipeline server no longer appear on the Pipelines page for the relevant data science project.

  • Pipeline runs previously assigned to the deleted pipeline server no longer appear on the Runs page for the relevant data science project.

Viewing the details of a pipeline server

You can view the details of pipeline servers configured in Open Data Hub, such as the pipeline server’s data connection details and where its data is stored.

Prerequisites
  • You have logged in to Open Data Hub.

  • You have previously created a data science project that contains an active and available pipeline server.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline server that you want to view.

  3. From the Pipeline server actions list, select View pipeline server configuration.

Verification
  • You can view the pipeline server details in the View pipeline server dialog.

Viewing existing pipelines

You can view the details of pipelines that you have imported to Open Data Hub, such as the pipeline’s last run, when it was created, the pipeline’s executed runs, and details of any associated pipeline versions.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • Existing pipelines are available.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipelines that you want to view.

  3. Optional: Click Expand on the row of a pipeline to view its pipeline versions.

Verification
  • A list of data science pipelines appears on the Pipelines page.

Overview of pipeline versions

You can manage incremental changes to pipelines in Open Data Hub by using versioning. This allows you to develop and deploy pipelines iteratively, preserving a record of your changes. You can track and manage your changes on the Open Data Hub dashboard, allowing you to schedule and execute runs against all available versions of your pipeline.

Uploading a pipeline version

You can upload a YAML file that contains the latest version of your pipeline to an active pipeline server, or you can upload the YAML file from a URL. The YAML file must consist of a Kubeflow pipeline compiled by using the Kubeflow compiler. After you upload a pipeline version to a pipeline server, you can execute it by creating a pipeline run.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have a pipeline version available and ready to upload.

  • If you are uploading your pipeline version from a URL, the URL is publicly accessible.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that you want to upload a pipeline version to.

  3. Click the Import pipeline drop-down list, and then select Upload new version.

  4. In the Upload new version dialog, enter the details for the pipeline version that you are uploading.

    1. From the Pipeline list, select the pipeline that you want to upload your pipeline version to.

    2. In the Pipeline version name field, confirm the name for the pipeline version, and change it if necessary.

    3. In the Pipeline version description field, enter a description for the pipeline version.

    4. Select where you want to upload your pipeline version from by performing one of the following actions:

      • Select Upload a file to upload your pipeline version from your local machine’s file system. Import your pipeline version by clicking Upload, or by dragging and dropping a file.

      • Select Import by url to upload your pipeline version from a URL, and then enter the URL into the text box.

    5. Click Upload.

Verification
  • The pipeline version that you uploaded is displayed on the Pipelines page. Click Expand on the row containing the pipeline to view its versions.

  • The Version column on the row containing the pipeline version that you uploaded on the Pipelines page increments by one.

Deleting a pipeline version

You can delete specific versions of a pipeline when you no longer require them. Deleting a default pipeline version automatically changes the default pipeline version to the next most recent version. If no pipeline versions exist, the pipeline persists without a default version.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

    The Pipelines page opens.

  2. Delete the pipeline versions that you no longer require:

    • To delete a single pipeline version:

      1. From the Project list, select the project that contains a version of a pipeline that you want to delete.

      2. On the row containing the pipeline, click Expand.

      3. Click the action menu (⋮) beside the pipeline version that you want to delete, and then click Delete pipeline version.

        The Delete pipeline version dialog opens.

      4. Enter the name of the pipeline version in the text field to confirm that you intend to delete it.

      5. Click Delete.

    • To delete multiple pipeline versions:

      1. On the row containing each pipeline version that you want to delete, select the checkbox.

      2. Click the action menu (⋮) next to the Import pipeline drop-down list, and then select Delete from the list.

Verification
  • The pipeline version that you deleted no longer appears on the Pipelines page, or on the Pipelines tab for the data science project.

Viewing the details of a pipeline version

You can view the details of a pipeline version that you have uploaded to Open Data Hub, such as its graph and YAML code.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have a pipeline available on an active and available pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

    The Pipelines page opens.

  2. From the Project drop-down list, select the project that contains the pipeline versions that you want to view details for.

  3. Click the pipeline name to view further details of its most recent version.

    The pipeline version details page opens, displaying the Graph, Summary, and Pipeline spec tabs.

    Alternatively, click Expand on the row containing the pipeline that you want to view versions for, and then click the pipeline version that you want to view the details of.


Verification
  • On the pipeline version details page, you can view the pipeline graph, summary details, and YAML code.

Downloading a data science pipeline version

To make further changes to a data science pipeline version that you previously uploaded to Open Data Hub, you can download pipeline version code from the user interface.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have created and imported a pipeline to an active pipeline server, and the pipeline is available to download.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the version that you want to download.

  3. Click Expand beside the pipeline that contains the version that you want to download.

  4. Click the pipeline version that you want to download.

    The pipeline version details page opens.

  5. On the Pipeline spec tab, click the Download button to download the YAML file that contains the pipeline version code to your local machine.

Verification
  • The pipeline version code downloads to your browser’s default directory for downloaded files.

Managing pipeline experiments

Overview of pipeline experiments

A pipeline experiment is a workspace where you can try different configurations of your pipelines. You can use experiments to organize your runs into logical groups. As a data scientist, you can use Open Data Hub to define, manage, and track pipeline experiments. You can view a record of previously created and archived experiments from the Experiments page in the Open Data Hub user interface. Pipeline experiments contain pipeline runs, including recurring runs.

When you work with data science pipelines, it is important to monitor and record your pipeline experiments to track the performance of your data science pipelines. You can compare the results of up to 10 pipeline runs at one time, and view available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for all selected runs.

You can view artifacts for an executed pipeline run from the Open Data Hub dashboard. Pipeline artifacts can help you to evaluate the performance of your pipeline runs and make it easier to understand your pipeline components. Pipeline artifacts can range from plain text data to detailed, interactive data visualizations.
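
The scalar metric, confusion matrix, and ROC curve data shown in run comparisons comes from artifacts that your pipeline components record. The following sketch shows one way to record such artifacts with the KFP 2.0 SDK; the metric names and values are invented for illustration:

    # A component that logs scalar metrics, a confusion matrix, and an ROC curve
    # so that the values appear on the Compare runs page and the Artifacts page.
    from kfp import dsl
    from kfp.dsl import ClassificationMetrics, Metrics, Output

    @dsl.component(base_image="python:3.9")
    def evaluate_model(
        metrics: Output[Metrics],
        classification_metrics: Output[ClassificationMetrics],
    ):
        # Scalar metrics appear on the Scalar metrics tab.
        metrics.log_metric("accuracy", 0.92)
        metrics.log_metric("f1_score", 0.89)

        # Confusion matrix and ROC curve data appear on their respective tabs.
        classification_metrics.log_confusion_matrix(
            categories=["setosa", "versicolor"],
            matrix=[[10, 2], [1, 12]],
        )
        classification_metrics.log_roc_curve(
            fpr=[0.0, 0.2, 1.0],
            tpr=[0.0, 0.8, 1.0],
            threshold=[1.0, 0.5, 0.0],
        )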

Creating a pipeline experiment

Pipeline experiments are workspaces where you can try different configurations of your pipelines. You can also use experiments to organize your pipeline runs into logical groups. Pipeline experiments contain pipeline runs, including recurring runs.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project to create the pipeline experiment in.

  3. Click Create experiment.

  4. In the Create experiment dialog, configure the pipeline experiment:

    1. In the Experiment name field, enter a name for the pipeline experiment.

    2. In the Description field, enter a description for the pipeline experiment.

    3. Click Create experiment.

Verification
  • The pipeline experiment that you created appears on the Experiments tab.

Archiving a pipeline experiment

You can retain records of your pipeline experiments by archiving them. If required, you can restore pipeline experiments from your archive to reuse, or delete pipeline experiments that are no longer required.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • A pipeline experiment is available to archive.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment that you want to archive.

  3. Click the action menu (⋮) beside the pipeline experiment that you want to archive, and then click Archive.

  4. In the Archiving experiment dialog, enter the pipeline experiment name in the text field to confirm that you intend to archive it.

  5. Click Archive.

Verification
  • The archived pipeline experiment does not appear on the Experiments tab, and instead appears on the Archive tab on the Experiments page.

Deleting an archived pipeline experiment

You can delete pipeline experiments from the Open Data Hub experiment archive.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • A pipeline experiment is available in the pipeline archive.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the archived pipeline experiment that you want to delete.

  3. Click the Archive tab.

  4. Click the action menu (⋮) beside the pipeline experiment that you want to delete, and then click Delete.

  5. In the Delete experiment? dialog, enter the pipeline experiment name in the text field to confirm that you intend to delete it.

  6. Click Delete.

Verification
  • The pipeline experiment that you deleted no longer appears on the Archive tab on the Experiments page.

Restoring an archived pipeline experiment

You can restore an archived pipeline experiment to the active state.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • An archived pipeline experiment exists in your project.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the archived pipeline experiment that you want to restore.

  3. Click the Archive tab.

  4. Click the action menu (⋮) beside the pipeline experiment that you want to restore, and then click Restore.

  5. In the Restore experiment dialog, click Restore.

Verification
  • The restored pipeline experiment appears on the Experiments tab on the Experiments page.

Viewing pipeline task executions

When a pipeline run executes, you can view details of executed tasks in each step in a pipeline run from the Open Data Hub dashboard. A step forms part of a task in a pipeline.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously triggered a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Executions.

  2. On the Executions page, from the Project drop-down list, select the project that contains the experiment for the pipeline task executions that you want to view.

Verification
  • On the Executions page, you can view the execution details of each pipeline task execution, such as its name, status, unique ID, and execution type. The execution status indicates whether the pipeline task has successfully executed. For further information about the details of the task execution, click the execution name.

Viewing pipeline artifacts

After a pipeline run executes, you can view its pipeline artifacts from the Open Data Hub dashboard. Pipeline artifacts can help you to evaluate the performance of your pipeline runs and make it easier to understand your pipeline components. Pipeline artifacts can range from plain text data to detailed, interactive data visualizations.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously triggered a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Artifacts.

  2. On the Artifacts page, from the Project drop-down list, select the project that contains the pipeline experiment for the pipeline artifacts that you want to view.

Verification
  • On the Artifacts page, you can view the details of each pipeline artifact, such as its name, unique ID, type, and URI.

Comparing runs

You can compare up to 10 pipeline runs at one time, and view available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for all selected runs.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have created at least 2 pipeline runs.

Procedure
  1. In the Open Data Hub dashboard, select Experiments > Experiments and runs.

    The Experiments page opens.

  2. From the Project drop-down list, select the project that contains the runs that you want to compare.

  3. On the Experiments tab, in the Experiment column, click the experiment that you want to compare runs for. To select runs that are not in an experiment, click Default. All runs that are created without specifying an experiment will appear in the Default group.

    The Runs page opens.

  4. Select the checkbox next to each run that you want to compare, and then click Compare runs. You can compare a maximum of 10 runs at one time.

    The Compare runs page opens and displays available parameter, scalar metric, confusion matrix, and receiver operating characteristic (ROC) curve data for the runs that you selected.

    1. The Run list section displays a list of selected runs. You can filter the list by run name, experiment, pipeline version, start date, duration, and status.

    2. The Parameters section displays parameter information for each selected run. Set the Hide parameters with no differences switch to On to hide parameters that have the same values.

    3. The Metrics section displays scalar metric, confusion matrix, and ROC curve data for all selected runs.

      1. On the Scalar metrics tab, set the Hide parameters with no differences switch to On to hide parameters that have the same values.

      2. On the ROC curve tab, in the artifacts list, adjust the ROC curve chart by deselecting the checkbox next to artifacts that you want to remove from the chart.

  5. To select different runs for comparison, click Manage runs.

    The Manage runs dialog opens.

    1. From the Search filter drop-down list, select Run, Experiment, Pipeline version, Created after, or Status to filter the run list by each value.

    2. Deselect the checkbox next to each run that you want to remove from your comparison.

    3. Select the checkbox next to each run that you want to add to your comparison.

  6. Click Update.

Verification
  • The Compare runs page opens and displays data for the runs that you selected.

Managing pipeline runs

Overview of pipeline runs

A pipeline run is a single execution of a data science pipeline. As a data scientist, you can use Open Data Hub to define, manage, and track executions of a data science pipeline. You can view a record of previously executed, scheduled, and archived runs from the Runs page in the Open Data Hub user interface.

You can optimize your use of pipeline runs for portability and repeatability by using pipeline experiments. With experiments, you can logically group pipeline runs and try different configurations of your pipelines. You can also clone your pipeline runs to reproduce and scale them, or archive them when you want to retain a record of their execution, but no longer require them. You can delete archived runs that you no longer want to retain, or you can restore them to their former state.

You can execute a run once, that is, immediately after its creation, or on a recurring basis. Recurring runs consist of a copy of a pipeline with all of its parameter values and a run trigger. A run trigger indicates when a recurring run executes. You can define the following run triggers:

  • Periodic: used for scheduling runs to execute in intervals.

  • Cron: used for scheduling runs as a cron job.

You can also configure up to 10 instances of the same run to execute concurrently. You can track the progress of a run from the run details page on the Open Data Hub user interface. From here, you can view the graph and output artifacts for the run.

A pipeline run can be in one of the following states:

  • Scheduled: A pipeline run that is scheduled to execute at least once.

  • Active: A pipeline run that is executing or that has stopped.

  • Archived: A pipeline run that you have archived.

You can use catch up runs to ensure your pipeline runs do not permanently fall behind schedule when paused. For example, if you re-enable a paused recurring run, the run scheduler backfills each missed run interval. If you disable catch up runs, and you have a scheduled run interval ready to execute, the run scheduler only schedules the run execution for the latest run interval. Catch up runs are enabled by default. However, if your pipeline handles backfill internally, Red Hat recommends that you disable catch up runs to avoid duplicate backfill.

After a pipeline run executes, you can view details of its executed tasks on the Executions page, along with its artifacts, on the Artifacts page. From the Executions page, you can view the execution status of each task, which indicates whether it completed successfully. You can also view further information about each executed task by clicking the execution name in the list. From the Artifacts page, you can view the details of each pipeline artifact, such as its name, unique ID, type, and URI. Pipeline artifacts can help you to evaluate the performance of your pipeline runs and make it easier to understand your pipeline components. Pipeline artifacts can range from plain text data to detailed, interactive data visualizations.

You can view further information about each artifact by clicking the artifact name in the list. You can also view or download the content of artifacts that are stored in S3-compatible object storage by clicking the active artifact URI link in the list.

Note

Artifacts that are not stored in S3-compatible object storage are not available to download, and will not appear with an active URI link.

If your browser can display the artifact content, for example, if the artifact is plain text, HTML, or markdown, the content does not download, but is automatically displayed in a new browser tab. If your browser cannot display the artifact content, for example, if the artifact is a model, the artifact automatically downloads instead. To download an artifact which is displayed in a browser tab, right-click on the content and then click Save as.

You can review and analyze logs for each step in an active pipeline run. With the log viewer, you can search for specific log messages, view the log for each step, and download the step logs to your local machine.

Storing data with data science pipelines

When you run a data science pipeline, Open Data Hub stores the pipeline YAML configuration file and resulting pipeline run artifacts in the root directory of your storage bucket. The directories that contain pipeline run artifacts can differ depending on where you executed the pipeline run from. See the following table for further information:

Table 1. Pipeline configuration file and artifacts storage locations

  • Pipeline run source: Open Data Hub dashboard

    • Pipeline storage directory: /pipelines/<pipeline_version_id>

      Example: /pipelines/1d01c4eb-d2ab-4916-9935-a73a5580f1fb

    • Run artifacts storage directory: /<pipeline_name>/<pipeline_run_id>

      Example: iris-training-pipeline/2g48k8pw-a8ib-4884-9145-h41j7599h3ds

  • Pipeline run source: JupyterLab Elyra extension

    • Pipeline storage directory: /pipelines/<pipeline_version_id>

    • Run artifacts storage directory: /<pipeline_name_timestamp>

      Example: /hello-generic-world-0523161704

      With the JupyterLab Elyra extension, you can also set an object storage path prefix.

      Example: /iris-project/hello-generic-world-0523161704

If you want to use an existing artifact that was not generated by a task in a pipeline, you can use the kfp.dsl.importer component to import the artifact from its URI. You can only import these artifacts to the S3-compatible object storage bucket that you define in the Bucket field in your pipeline server configuration. For more information about the kfp.dsl.importer component, see Special Case: Importer Components.
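
The following sketch shows how you might use the kfp.dsl.importer component inside a pipeline definition. The artifact URI is an example only, and it must point to the S3-compatible object storage bucket that is configured for your pipeline server:

    # Import an existing dataset artifact from object storage into a pipeline.
    from kfp import dsl

    @dsl.pipeline(name="importer-example-pipeline")
    def importer_pipeline():
        imported_dataset = dsl.importer(
            artifact_uri="s3://<bucket_name>/datasets/example.csv",
            artifact_class=dsl.Dataset,
            reimport=False,
        )
        # Downstream components can consume the artifact as imported_dataset.output.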

Viewing active pipeline runs

You can view a list of pipeline runs that were previously executed in a pipeline experiment. From this list, you can view details relating to your pipeline runs, such as the pipeline version that the run belongs to, along with the run status, duration, and execution start time.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously executed a pipeline run that is available.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the active pipeline runs that you want to view.

  3. From the list of experiments, click the experiment that contains the active pipeline runs that you want to view.

    The Runs page opens.

    After a run has completed its execution, the run status appears in the Status column of the Runs tab, indicating whether the run succeeded or failed.

Verification
  • A list of active runs appears on the Runs tab on the Runs page for the pipeline experiment.

Executing a pipeline run

By default, a pipeline run executes once immediately after it is created.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment that you want to create a run for.

  3. From the list of pipeline experiments, click the experiment that you want to create a run for.

  4. Click Create run.

  5. On the Create run page, configure the run:

    1. From the Experiment list, select the pipeline experiment that you want to create a run for. Alternatively, to create a new pipeline experiment, click Create new experiment, and then complete the relevant fields in the Create experiment dialog.

    2. In the Name field, enter a name for the run, up to 255 characters.

    3. In the Description field, enter a description for the run, up to 255 characters.

    4. From the Pipeline list, select the pipeline that you want to create a run for. Alternatively, to create a new pipeline, click Create new pipeline, and then complete the relevant fields in the Import pipeline dialog.

    5. From the Pipeline version list, select the pipeline version to create a run for. Alternatively, to upload a new version, click Upload new version, and then complete the relevant fields in the Upload new version dialog.

    6. Configure the input parameters for the run by selecting the parameters from the list.

    7. Click Create run.

      The details page for the run opens.

Verification
  • The pipeline run that you created appears on the Runs tab on the Runs page for the pipeline experiment.

Stopping an active pipeline run

If you no longer require an active pipeline run to continue executing in a pipeline experiment, you can stop the run before its defined end date.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • There is a previously created data science project available that contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • An active pipeline run is currently executing.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the active run that you want to stop.

  3. From the list of pipeline experiments, click the pipeline experiment that contains the run that you want to stop.

  4. On the Runs tab, click the action menu (⋮) beside the active run that you want to stop, and then click Stop.

    There might be a short delay while the run stops.

Verification
  • The Failed status icon appears in the Status column of the stopped run.

Duplicating an active pipeline run

To make it easier to quickly execute pipeline runs with the same configuration in a pipeline experiment, you can duplicate them.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • An active run is available to duplicate on the Active tab on the Runs page.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the pipeline run that you want to duplicate.

  3. From the list of pipeline experiments, click the experiment that contains the pipeline run that you want to duplicate.

  4. Click the action menu (⋮) beside the relevant active run, and then click Duplicate.

  5. On the Duplicate run page, configure the duplicate run:

    1. From the Experiment list, select a pipeline experiment to contain the duplicate run. Alternatively, to create a new pipeline experiment, click Create new experiment, and then complete the relevant fields in the Create experiment dialog.

    2. In the Name field, enter a name for the duplicate run.

    3. In the Description field, enter a description for the duplicate run.

    4. From the Pipeline list, select a pipeline to contain the duplicate run. Alternatively, to create a new pipeline, click Create new pipeline, and then complete the relevant fields in the Import pipeline dialog.

    5. From the Pipeline version list, select a pipeline version to contain the duplicate run. Alternatively, to upload a new version, click Upload new version, and then complete the relevant fields in the Upload new version dialog.

    6. In the Parameters section, configure input parameters for the duplicate run by selecting parameters from the list.

    7. Click Create run.

      The details page for the run opens.

Verification
  • The duplicate pipeline run appears on the Runs tab on the Runs page for the pipeline experiment.

Viewing scheduled pipeline runs

You can view a list of pipeline runs that are scheduled for execution in a pipeline experiment. From this list, you can view details relating to your pipeline runs, such as the pipeline version that the run belongs to. You can also view the run status, execution frequency, and schedule.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have scheduled a pipeline run that is available to view.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the scheduled pipeline runs that you want to view.

  3. From the list of pipeline experiments, click the experiment that contains the pipeline runs that you want to view.

  4. On the Runs page, click the Schedules tab.

    After a run is scheduled, the Status column indicates whether the run is ready or unavailable for execution. To change its execution availability, set the Status switch to On or Off. Alternatively, you can change its execution availability from the details page for the scheduled run by clicking the Actions drop-down menu, and then selecting Enable or Disable.

Verification
  • A list of scheduled runs appears on the Schedules tab on the Runs page for the pipeline experiment.

Scheduling a pipeline run using a cron job

You can use a cron job to schedule a pipeline run to execute at a specific time. Cron jobs are useful for creating periodic and recurring tasks, and can also schedule individual tasks for a specific time, such as if you want to schedule a run for a low activity period. To successfully execute runs in Open Data Hub, you must use the supported format. See Cron Expression Format for more information.

The following examples show the correct format:

  • Every five minutes: @every 5m

  • Every 10 minutes: 0 */10 * * * *

  • Daily at 16:16 UTC: 0 16 16 * * *

  • Daily every quarter of the hour: 0 0,15,30,45 * * * *

  • On Monday and Tuesday at 15:40 UTC: 0 40 15 * * MON,TUE

Additional resources

  • Scheduling a pipeline run

Scheduling a pipeline run

To repeatedly run a pipeline, you can create a scheduled pipeline run.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the run that you want to schedule.

  3. From the list of pipeline experiments, click the experiment that contains the run that you want to schedule.

  4. Click the Schedules tab.

  5. Click Create schedule.

  6. On the Schedule run page, configure the run that you are scheduling:

    1. From the Experiment list, select the pipeline experiment that you want to contain the scheduled run. Alternatively, to create a new pipeline experiment, click Create new experiment, and then complete the relevant fields in the Create experiment dialog.

    2. In the Name field, enter a name for the run.

    3. In the Description field, enter a description for the run.

    4. From the Trigger type list, select one of the following options:

      • Select Periodic to specify an execution frequency. In the Run every field, enter a number and select an execution frequency from the list.

      • Select Cron to specify the execution schedule in cron format in the Cron string field. This creates a cron job to execute the run. Click the Copy button to copy the cron job schedule to the clipboard. The field furthest to the left represents seconds. For more information about scheduling tasks using the supported cron format, see Cron Expression Format.

    5. In the Maximum concurrent runs field, specify the number of runs that can execute concurrently, from a range of one to ten.

    6. For Start date, specify a start date for the run. Select a start date using the calendar, and the start time from the list of times.

    7. For End date, specify an end date for the run. Select an end date using the calendar, and the end time from the list of times.

    8. For Catch up, enable or disable catch up runs. You can use catch up runs to ensure your pipeline runs do not permanently fall behind schedule when they are paused. For example, if you re-enable a paused recurring run, the run scheduler backfills each missed run interval.

    9. From the Pipeline list, select the pipeline that you want to create a run for. Alternatively, to create a new pipeline, click Create new pipeline, and then complete the relevant fields in the Import pipeline dialog.

    10. From the Pipeline version list, select the pipeline version to create a run for. Alternatively, to upload a new version, click Upload new version, and then complete the relevant fields in the Upload new version dialog.

    11. Configure the input parameters for the run by selecting the parameters from the list.

    12. Click Schedule run.

Verification
  • The pipeline run that you scheduled appears on the Schedules tab on the Runs page for the pipeline experiment.

Duplicating a scheduled pipeline run

To make it easier to schedule runs to execute as part of your pipeline experiment, you can duplicate existing scheduled runs.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • A scheduled run is available to duplicate on the Schedules tab on the Runs page.

Procedure
  1. From the Open Data Hub dashboard, click Experiments > Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the scheduled run that you want to duplicate.

  3. From the list of pipeline experiments, click the experiment that contains the pipeline run that you want to duplicate.

  4. On the Runs page, click the Schedules tab.

  5. Click the action menu (⋮) beside the run that you want to duplicate, and then click Duplicate.

  6. On the Duplicate schedule page, configure the duplicate run:

    1. From the Experiment list, select a pipeline experiment to contain the duplicate run. Alternatively, to create a new pipeline experiment, click Create new experiment, and then complete the relevant fields in the Create experiment dialog.

    2. In the Name field, enter a name for the duplicate run.

    3. In the Description field, enter a description for the duplicate run.

    4. From the Trigger type list, select one of the following options:

      • Select Periodic to specify an execution frequency. In the Run every field, enter a number, and select an execution frequency from the list.

      • Select Cron to specify the execution schedule in cron format in the Cron string field. This creates a cron job to execute the run. Click the Copy button to copy the cron job schedule to the clipboard. The field furthest to the left represents seconds. For more information about scheduling tasks using the supported cron format, see Cron Expression Format.

    5. For Maximum concurrent runs, specify the number of runs that can execute concurrently, from a range of one to ten.

    6. For Start date, specify a start date for the duplicate run. Select a start date using the calendar, and the start time from the list of times.

    7. For End date, specify an end date for the duplicate run. Select an end date using the calendar, and the end time from the list of times.

    8. For Catch up, enable or disable catch up runs. You can use catch up runs to ensure your pipeline runs do not permanently fall behind schedule when they are paused. For example, if you re-enable a paused recurring run, the run scheduler backfills each missed run interval.

    9. From the Pipeline list, select the pipeline that you want to create a duplicate run for. Alternatively, to create a new pipeline, click Create new pipeline, and then complete the relevant fields in the Import pipeline dialog.

    10. From the Pipeline version list, select the pipeline version to create a duplicate run for. Alternatively, to upload a new version, click Upload new version, and then complete the relevant fields in the Upload new version dialog.

    11. Configure input parameters for the run by selecting parameters from the list.

    12. Click Schedule run.
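For reference, the following expressions illustrate the six-field cron format described in the Cron trigger step, in which the leftmost field represents seconds. These values are examples only; substitute a schedule that suits your pipeline:

    0 30 9 * * *     Runs every day at 09:30:00.

    0 0 2 * * 1-5    Runs at 02:00:00, Monday through Friday.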

Verification
  • The pipeline run that you duplicated appears on the Schedules tab on the Runs page for the pipeline experiment.

Deleting a scheduled pipeline run

To discard pipeline runs that you previously scheduled, but no longer require, you can delete them so that they no longer appear on the Schedules tab on the Runs page.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously scheduled a run that is available to delete.

Procedure
  1. From the Open Data Hub dashboard, click Experiments → Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the scheduled pipeline run that you want to delete.

  3. From the list of pipeline experiments, click the experiment that contains the scheduled pipeline run that you want to delete.

  4. On the Runs page, click the Schedules tab.

  5. Click the action menu (⋮) beside the scheduled pipeline run that you want to delete, and then click Delete.

  6. In the Delete schedule dialog, enter the run name in the text field to confirm that you intend to delete it.

  7. Click Delete.

Verification
  • The run that you deleted no longer appears on the Schedules tab for the pipeline experiment.

Viewing the details of a pipeline run

To gain a clearer understanding of your pipeline runs, you can view the details of a previously triggered pipeline run, such as its graph, execution details, and run output.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously triggered a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline that you want to view run details for.

  3. Click Expand (the expand icon) beside the pipeline that you want to view run details for.

  4. Click the action menu (⋮) for the pipeline version that you want to view run details for, and then click View runs.

  5. On the Runs page, click the name of the run that you want to view the details of.

    The details page for the run opens.

Verification
  • On the run details page, you can view the run graph, execution details, input parameters, step logs, and run output.

Viewing archived pipeline runs

You can view a list of pipeline runs that you have archived. You can view details for your archived pipeline runs, such as the pipeline version, run status, duration, and execution start date.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • An archived pipeline run exists.

Procedure
  1. From the Open Data Hub dashboard, click Experiments → Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the archived pipeline runs that you want to view.

  3. From the list of pipeline experiments, click the experiment that contains the archived pipeline runs that you want to view.

  4. On the Runs page, click the Archive tab.

Verification
  • A list of archived runs appears on the Archive tab on the Runs page for the pipeline experiment.

Archiving a pipeline run

You can retain records of your pipeline runs by archiving them. If required, you can restore runs from your archive to reuse, or delete runs that are no longer required.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously executed a pipeline run that is available.

Procedure
  1. From the Open Data Hub dashboard, click Experiments → Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment for the run that you want to archive.

  3. From the list of pipeline experiments, click the experiment that contains the pipeline run that you want to archive.

    The Runs page opens.

  4. On the Runs tab, click the action menu (⋮) beside the pipeline run that you want to archive, and then click Archive.

  5. In the Archiving run dialog, enter the run name in the text field to confirm that you intend to archive it.

  6. Click Archive.

Verification
  • The archived run does not appear on the Runs tab, and instead appears on the Archive tab on the Runs page for the pipeline experiment.

Restoring an archived pipeline run

You can restore an archived run to the active state.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • An archived run exists in your project.

Procedure
  1. From the Open Data Hub dashboard, click Experiments → Experiments and runs.

  2. On the Experiments page, from the Project drop-down list, select the project that contains the pipeline experiment that you want to restore.

  3. From the list of pipeline experiments, click the experiment that contains the archived pipeline run that you want to restore.

  4. On the Runs page, click the Archive tab.

  5. Click the action menu (⋮) beside the pipeline run that you want to restore, and then click Restore.

  6. In the Restore run? dialog, click Restore.

Verification
  • The restored run appears on the Runs tab on the Runs page for the pipeline experiment.

Deleting an archived pipeline run

You can delete pipeline runs from the Open Data Hub run archive.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and has a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously archived a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline run that you want to delete.

  3. Click Expand (the expand icon) beside the pipeline that contains the run that you want to delete.

  4. Click the action menu (⋮) beside the pipeline version that contains the run that you want to delete, and then click View runs.

  5. On the Runs page, click the Archive tab.

  6. Click the action menu (⋮) beside the pipeline run that you want to delete, and then click Delete.

  7. In the Delete run? dialog, enter the run name in the text field to confirm that you intend to delete it.

  8. Click Delete.

Verification
  • The archived run that you deleted no longer appears on the Archive tab on the Runs page.

Duplicating an archived pipeline run

To make it easier to reproduce runs with the same configuration as runs in your archive, you can duplicate them.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a configured pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • An archived run is available to duplicate on the Archive tab on the Runs page.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that contains the pipeline run that you want to duplicate.

  3. Click Expand (the expand icon) beside the pipeline that contains the run that you want to duplicate.

  4. Click the action menu (⋮) beside the pipeline version that contains the run that you want to duplicate, and then click View runs.

  5. On the Runs page, click the Archive tab.

  6. Click the action menu (⋮) beside the pipeline run that you want to duplicate, and then click Duplicate.

  7. On the Duplicate run page, configure the duplicate run:

    1. From the Experiment list, select a pipeline experiment to contain the duplicate run. Alternatively, to create a new pipeline experiment, click Create new experiment, and then complete the relevant fields in the Create experiment dialog.

    2. In the Name field, enter a name for the duplicate run.

    3. In the Description field, enter a description for the duplicate run.

    4. From the Pipeline list, select a pipeline to contain the duplicate run. Alternatively, to create a new pipeline, click Create new pipeline, and then complete the relevant fields in the Import pipeline dialog.

    5. From the Pipeline version list, select a pipeline version to contain the duplicate run. Alternatively, to upload a new version, click Upload new version, and then complete the relevant fields in the Upload new version dialog.

    6. In the Parameters section, configure input parameters for the duplicate run by selecting parameters from the list.

    7. Click Create run.

      The details page for the run opens.

Verification
  • The duplicate pipeline run appears on the Runs tab on the Runs page for the pipeline experiment.

Working with pipeline logs

About pipeline logs

You can review and analyze step logs for each step in a triggered pipeline run.

To help you troubleshoot and audit your pipelines, you can review and analyze these step logs by using the log viewer in the Open Data Hub dashboard. From here, you can search for specific log messages, view the log for each step, and download the step logs to your local machine.

If a step log exceeds the amount of content that the log viewer can display, a warning appears above the log viewer stating that the log window displays partial content. Expanding the warning displays further information, such as that the log viewer refreshes every three seconds and that each step log shows only the last 500 lines of log messages received. In addition, you can click Download all step logs to download all of the step logs to your local machine.

Each step has a set of container logs. You can view these container logs by selecting a container from the Steps list in the log viewer. The step-main container log consists of the log output for the step. The step-copy-artifact container log consists of output relating to artifact data sent to S3-compatible storage. If the data transferred between the steps in your pipeline is larger than 3 KB, five container logs are typically available. These logs contain output relating to data transferred between your persistent volume claims (PVCs).

Viewing pipeline step logs

To help you troubleshoot and audit your pipelines, you can review and analyze the log of each pipeline step using the log viewer. From here, you can search for specific log messages and download the logs for each step in your pipeline. If the pipeline is running, you can also pause and resume the log from the log viewer.

Note

Logs are no longer stored in S3-compatible storage for Python scripts that run in Elyra pipelines. From Open Data Hub version 2.14, you can view these logs in the pipeline step log viewer.

For this change to take effect, you must use the Elyra runtime images provided in the 2024.1 or 2024.2 workbench images.

If you have an older workbench image version, update the Version selection field to 2024.1 or 2024.2, as described in Updating a project workbench.

Updating your workbench image version will clear any existing runtime image selections for your pipeline. After you update your workbench version, open your workbench IDE and update the properties of your pipeline to select a runtime image.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously triggered a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that you want to view logs for.

  3. Click Expand (the expand icon) beside the pipeline that you want to view logs for.

  4. Click the action menu (⋮) on the row containing the pipeline version that you want to view pipeline logs for, and then click View runs.

  5. On the Runs page, click the name of the run that you want to view logs for.

  6. On the run details page, on the Graph tab, click the pipeline step that you want to view logs for.

  7. Click the Logs tab.

  8. To view the logs of another pipeline step, from the Steps list, select the step that you want to view logs for.

  9. Analyze the log using the log viewer.

    • To search for a specific log message, enter at least part of the message in the search bar.

    • To view the full log in a separate browser window, click the action menu (⋮) and select View raw logs. Alternatively, to expand the size of the log viewer, click the action menu (⋮) and select Expand.

Verification
  • You can view the logs for each step in your pipeline.

Downloading pipeline step logs

Instead of viewing the step logs of a pipeline run using the log viewer on the Open Data Hub dashboard, you can download them for further analysis. You can choose to download the logs for all steps in your pipeline, or only the log for the step that is currently displayed in the log viewer.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have previously created a data science project that is available and contains a pipeline server.

  • You have imported a pipeline to an active pipeline server.

  • You have previously triggered a pipeline run.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Pipelines.

  2. On the Pipelines page, from the Project drop-down list, select the project that you want to download logs for.

  3. Click Expand (the expand icon) beside the pipeline that you want to download logs for.

  4. Click the action menu (⋮) on the row containing the pipeline version that you want to download logs for, and then click View runs.

  5. On the Runs page, click the name of the run that you want to download logs for.

  6. On the run details page, on the Graph tab, click the pipeline step that you want to download logs for.

  7. Click the Logs tab.

  8. In the log viewer, click the Download button (the download icon).

    1. Select Download current step log to download the log for the current pipeline step.

    2. Select Download all step logs to download the logs for all steps in your pipeline run.

Verification
  • The step logs download to your browser’s default directory for downloaded files.

Working with pipelines in JupyterLab

Overview of pipelines in JupyterLab

You can use Elyra to create visual end-to-end pipeline workflows in JupyterLab. Elyra is an extension for JupyterLab that provides you with a Pipeline Editor to create pipeline workflows that can be executed in Open Data Hub.

You can access the Elyra extension within JupyterLab when you create a workbench with the most recent version of one of the following notebook images:

  • Standard Data Science

  • PyTorch

  • TensorFlow

  • TrustyAI

When you use the Pipeline Editor to visually design your pipelines, minimal coding is required to create and run pipelines. For more information about Elyra, see Elyra Documentation. For more information about the Pipeline Editor, see Visual Pipeline Editor. After you have created your pipeline, you can run it locally in JupyterLab, or remotely using data science pipelines in Open Data Hub.

The pipeline creation process consists of the following tasks:

  • Create a data science project that contains a workbench.

  • Create a pipeline server.

  • Create a new pipeline in the Pipeline Editor in JupyterLab.

  • Develop your pipeline by adding Python notebooks or Python scripts and defining their runtime properties.

  • Define execution dependencies.

  • Run or export your pipeline.

Before you can run a pipeline in JupyterLab, your pipeline instance must contain a runtime configuration. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage.

If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the Open Data Hub dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. For more information about runtime configurations, see Runtime Configuration. As a prerequisite, before you create a workbench, ensure that you have created and configured a pipeline server within the same data science project as your workbench.

You can use S3-compatible cloud storage to make data available to your notebooks and scripts while they are executed. Your cloud storage must be accessible from the machine in your deployment that runs JupyterLab and from the cluster that hosts data science pipelines. Before you create and run pipelines in JupyterLab, ensure that you have your S3-compatible storage credentials readily available.
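Before you create and run pipelines, you can optionally confirm that your storage endpoint and credentials work. The following is a minimal sketch that you might run in a notebook cell, assuming that the boto3 library is available in your environment; the endpoint, bucket name, and credentials shown are placeholders for your own values:

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder values: replace with your own S3-compatible storage details.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",
        aws_access_key_id="<access-key-id>",
        aws_secret_access_key="<secret-access-key>",
    )

    try:
        # head_bucket succeeds only if the bucket exists and these credentials can access it.
        s3.head_bucket(Bucket="my-pipeline-artifacts")
        print("Bucket is reachable with these credentials.")
    except ClientError as err:
        print(f"Cannot access bucket: {err}")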

Accessing the pipeline editor

You can use Elyra to create visual end-to-end pipeline workflows in JupyterLab. Elyra is an extension for JupyterLab that provides you with a Pipeline Editor to create pipeline workflows that can execute in Open Data Hub.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project.

  • You have created a workbench with the Standard Data Science notebook image.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

  • You have access to S3-compatible storage.

Procedure
  1. After you open JupyterLab, confirm that the JupyterLab launcher is automatically displayed.

  2. In the Elyra section of the JupyterLab launcher, click the Pipeline Editor tile.

    The Pipeline Editor opens.

Verification
  • You can view the Pipeline Editor in JupyterLab.

Creating a runtime configuration

If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the Open Data Hub dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. This enables you to specify connectivity information for your pipeline instance and S3-compatible cloud storage.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have access to S3-compatible cloud storage.

  • You have created a data science project that contains a workbench.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the left sidebar of JupyterLab, click Runtimes (The Runtimes icon).

  2. Click the Create new runtime configuration button (Create new runtime configuration).

    The Add new Data Science Pipelines runtime configuration page opens.

  3. Complete the relevant fields to define your runtime configuration.

    1. In the Display Name field, enter a name for your runtime configuration.

    2. Optional: In the Description field, enter a description to define your runtime configuration.

    3. Optional: In the Tags field, click Add Tag to define a category for your pipeline instance. Enter a name for the tag and press Enter.

    4. Define the credentials of your data science pipeline:

      1. In the Data Science Pipelines API Endpoint field, enter the API endpoint of your data science pipeline. Do not specify the pipelines namespace in this field.

      2. In the Public Data Science Pipelines API Endpoint field, enter the public API endpoint of your data science pipeline.

        Important

        You can obtain the data science pipelines API endpoint from the Data Science Pipelines → Runs page in the dashboard. Copy the relevant endpoint and enter it in the Public Data Science Pipelines API Endpoint field.

      3. Optional: In the Data Science Pipelines User Namespace field, enter the relevant user namespace to run pipelines.

      4. From the Authentication Type list, select the authentication type required to authenticate your pipeline.

        Important

        If you created a notebook directly from the Jupyter tile on the dashboard, select EXISTING_BEARER_TOKEN from the Authentication Type list.

      5. In the Data Science Pipelines API Endpoint Username field, enter the user name required for the authentication type.

      6. In the Data Science Pipelines API Endpoint Password Or Token field, enter the password or token required for the authentication type. A sketch that you can use to verify the endpoint and token from a notebook follows this procedure.

        Important

        To obtain the data science pipelines API endpoint token, in the upper-right corner of the OpenShift web console, click your user name and select Copy login command. After you have logged in, click Display token and copy the value of --token= from the Log in with this token command.

    5. Define the connectivity information of your S3-compatible storage:

      1. In the Cloud Object Storage Endpoint field, enter the endpoint of your S3-compatible storage. For more information about Amazon S3 endpoints, see Amazon Simple Storage Service endpoints and quotas.

      2. Optional: In the Public Cloud Object Storage Endpoint field, enter the URL of your S3-compatible storage.

      3. In the Cloud Object Storage Bucket Name field, enter the name of the bucket where your pipeline artifacts are stored. If the bucket name does not exist, it is created automatically.

      4. From the Cloud Object Storage Authentication Type list, select the authentication type required to access your S3-compatible cloud storage. If you use AWS S3 buckets, select KUBERNETES_SECRET from the list.

      5. In the Cloud Object Storage Credentials Secret field, enter the secret that contains the storage user name and password. This secret is defined in the relevant user namespace, if applicable. In addition, it must be stored on the cluster that hosts your pipeline runtime.

      6. In the Cloud Object Storage Username field, enter the user name to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, enter your AWS Access Key ID.

      7. In the Cloud Object Storage Password field, enter the password to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, enter your AWS Secret Access Key.

    6. Click Save & Close.

Verification
  • The runtime configuration that you created appears on the Runtimes tab (The Runtimes icon) in the left sidebar of JupyterLab.

Updating a runtime configuration

To ensure that your runtime configuration is accurate and updated, you can change the settings of an existing runtime configuration.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have access to S3-compatible storage.

  • You have created a data science project that contains a workbench.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • A previously created runtime configuration is available in the JupyterLab interface.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the left sidebar of JupyterLab, click Runtimes (The Runtimes icon).

  2. Hover the cursor over the runtime configuration that you want to update and click the Edit button (Edit runtime configuration).

    The Data Science Pipelines runtime configuration page opens.

  3. Fill in the relevant fields to update your runtime configuration.

    1. In the Display Name field, update the name of your runtime configuration, if applicable.

    2. Optional: In the Description field, update the description of your runtime configuration, if applicable.

    3. Optional: In the Tags field, click Add Tag to define a category for your pipeline instance. Enter a name for the tag and press Enter.

    4. Define the credentials of your data science pipeline:

      1. In the Data Science Pipelines API Endpoint field, update the API endpoint of your data science pipeline, if applicable. Do not specify the pipelines namespace in this field.

      2. In the Public Data Science Pipelines API Endpoint field, update the public API endpoint of your data science pipeline, if applicable.

      3. Optional: In the Data Science Pipelines User Namespace field, update the relevant user namespace to run pipelines, if applicable.

      4. From the Authentication Type list, select a new authentication type required to authenticate your pipeline, if applicable.

        Important

        If you created a notebook directly from the Jupyter tile on the dashboard, select EXISTING_BEARER_TOKEN from the Authentication Type list.

      5. In the Data Science Pipelines API Endpoint Username field, update the user name required for the authentication type, if applicable.

      6. In the Data Science Pipelines API Endpoint Password Or Token field, update the password or token required for the authentication type, if applicable.

        Important

        To obtain the data science pipelines API endpoint token, in the upper-right corner of the OpenShift web console, click your user name and select Copy login command. After you have logged in, click Display token and copy the value of --token= from the Log in with this token command.

    5. Define the connectivity information of your S3-compatible storage:

      1. In the Cloud Object Storage Endpoint field, update the endpoint of your S3-compatible storage, if applicable. For more information about Amazon S3 endpoints, see Amazon Simple Storage Service endpoints and quotas.

      2. Optional: In the Public Cloud Object Storage Endpoint field, update the URL of your S3-compatible storage, if applicable.

      3. In the Cloud Object Storage Bucket Name field, update the name of the bucket where your pipeline artifacts are stored, if applicable. If the bucket name does not exist, it is created automatically.

      4. From the Cloud Object Storage Authentication Type list, update the authentication type required to access your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, you must select USER_CREDENTIALS from the list.

      5. Optional: In the Cloud Object Storage Credentials Secret field, update the secret that contains the storage user name and password, if applicable. This secret is defined in the relevant user namespace. You must save the secret on the cluster that hosts your pipeline runtime.

      6. Optional: In the Cloud Object Storage Username field, update the user name to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, update your AWS Access Key ID.

      7. Optional: In the Cloud Object Storage Password field, update the password to connect to your S3-compatible cloud storage, if applicable. If you use AWS S3 buckets, update your AWS Secret Access Key.

    6. Click Save & Close.

Verification
  • The runtime configuration that you updated is shown on the Runtimes tab (The Runtimes icon) in the left sidebar of JupyterLab.

Deleting a runtime configuration

After you have finished using your runtime configuration, you can delete it from the JupyterLab interface. After deleting a runtime configuration, you cannot run pipelines in JupyterLab until you create another runtime configuration.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that contains a workbench.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • A previously created runtime configuration is visible in the JupyterLab interface.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the left sidebar of JupyterLab, click Runtimes (The Runtimes icon).

  2. Hover the cursor over the runtime configuration that you want to delete and click the Delete Item button (Delete item).

    A dialog box appears prompting you to confirm the deletion of your runtime configuration.

  3. Click OK.

Verification
  • The runtime configuration that you deleted is no longer shown on the Runtimes tab (The Runtimes icon) in the left sidebar of JupyterLab.

Duplicating a runtime configuration

To avoid re-creating runtime configurations with similar values from scratch, you can duplicate an existing runtime configuration in the JupyterLab interface.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that contains a workbench.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • A previously created runtime configuration is visible in the JupyterLab interface.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the left sidebar of JupyterLab, click Runtimes (The Runtimes icon).

  2. Hover the cursor over the runtime configuration that you want to duplicate and click the Duplicate button (Duplicate).

Verification
  • The runtime configuration that you duplicated is shown on the Runtimes tab (The Runtimes icon) in the left sidebar of JupyterLab.

Running a pipeline in JupyterLab

You can run pipelines that you have created in JupyterLab from the Pipeline Editor user interface. Before you can run a pipeline, you must create a data science project and a pipeline server. After you create a pipeline server, you must create a workbench within the same project as your pipeline server. Your pipeline instance in JupyterLab must contain a runtime configuration. If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the Open Data Hub dashboard, you must create a runtime configuration before you can run your pipeline in JupyterLab. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have access to S3-compatible storage.

  • You have created a pipeline in JupyterLab.

  • You have opened your pipeline in the Pipeline Editor in JupyterLab.

  • Your pipeline instance contains a runtime configuration.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the Pipeline Editor user interface, click Run Pipeline (the Run Pipeline icon).

    The Run Pipeline dialog appears. The Pipeline Name field is automatically populated with the pipeline file name.

    Note

    After you run your pipeline, a pipeline experiment containing your pipeline run is automatically created on the Experiments → Experiments and runs page in the Open Data Hub dashboard. The experiment name matches the name that you assigned to the pipeline.

  2. Define the settings for your pipeline run.

    1. From the Runtime Configuration list, select the relevant runtime configuration to run your pipeline.

    2. Optional: Configure your pipeline parameters, if applicable. If your pipeline contains nodes that reference pipeline parameters, you can change the default parameter values. If a parameter is required and has no default value, you must enter a value.

  3. Click OK.

Verification
  • You can view the details of your pipeline run on the Experiments → Experiments and runs page in the Open Data Hub dashboard.

  • You can view the output artifacts of your pipeline run. The artifacts are stored in your designated object storage bucket.

Exporting a pipeline in JupyterLab

You can export pipelines that you have created in JupyterLab. When you export a pipeline, the pipeline is prepared for later execution, but is not uploaded or executed immediately. During the export process, any package dependencies are uploaded to S3-compatible storage. Also, pipeline code is generated for the target runtime.

Before you can export a pipeline, you must create a data science project and a pipeline server. After you create a pipeline server, you must create a workbench within the same project as your pipeline server. In addition, your pipeline instance in JupyterLab must contain a runtime configuration. If you create a workbench as part of a data science project, a default runtime configuration is created automatically. However, if you create a notebook from the Jupyter tile in the Open Data Hub dashboard, you must create a runtime configuration before you can export your pipeline in JupyterLab. A runtime configuration defines connectivity information for your pipeline instance and S3-compatible cloud storage.
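For example, if you export your pipeline as a KFP-compatible YAML file, you can later submit the exported definition to the pipeline server yourself. The following is a minimal sketch, assuming that the kfp SDK is installed and that exported_pipeline.yaml, the host URL, and the token are placeholders for your own values:

    from kfp import Client

    # Placeholder values: replace with your pipeline server route and token.
    client = Client(
        host="https://ds-pipeline-dspa-my-project.apps.example.com",  # hypothetical route
        existing_token="<bearer-token>",
    )

    # Submit the exported pipeline definition as a one-off run.
    run = client.create_run_from_pipeline_package(
        pipeline_file="exported_pipeline.yaml",  # hypothetical file name from the export step
        arguments={},                            # pipeline parameters, if any
        run_name="exported-pipeline-run",
    )
    print(run.run_id)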

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that contains a workbench.

  • You have created and configured a pipeline server within the data science project that contains your workbench.

  • You have access to S3-compatible storage.

  • You have created a pipeline in JupyterLab.

  • You have opened your pipeline in the Pipeline Editor in JupyterLab.

  • Your pipeline instance contains a runtime configuration.

  • You have created and launched a Jupyter server from a notebook image that contains the Elyra extension (Standard data science, TensorFlow, TrustyAI, or PyTorch).

Procedure
  1. In the Pipeline Editor user interface, click Export Pipeline (Export pipeline).

    The Export Pipeline dialog appears. The Pipeline Name field is automatically populated with the pipeline file name.

  2. Define the settings to export your pipeline.

    1. From the Runtime Configuration list, select the relevant runtime configuration to export your pipeline.

    2. From the Export Pipeline as list, select an appropriate file format.

    3. In the Export Filename field, enter a file name for the exported pipeline.

    4. Select the Replace if file already exists check box to replace an existing file that has the same name as the pipeline that you are exporting.

    5. Optional: Configure your pipeline parameters, if applicable. If your pipeline contains nodes that reference pipeline parameters, you can change the default parameter values. If a parameter is required and has no default value, you must enter a value.

  3. Click OK.

Verification
  • You can view the file containing the pipeline that you exported in your designated object storage bucket.