
Getting started with Open Data Hub

Logging in to Open Data Hub

Log in to Open Data Hub from a browser for easy access to Jupyter and your data science projects.

Procedure
  1. Browse to the Open Data Hub instance URL and click Log in with OpenShift.

    • If you are a data scientist user, your administrator must provide you with the Open Data Hub instance URL, for example, https://odh-dashboard-odh.apps.ocp4.example.com.

    • If you have access to OpenShift Container Platform, you can browse to the OpenShift Container Platform web console and click the Application Launcher (The application launcher) → Open Data Hub.

  2. Click the name of your identity provider, for example, GitHub.

  3. Enter your credentials and click Log in (or equivalent for your identity provider).

Verification
  • Open Data Hub opens on the Enabled applications page.

Troubleshooting
  • If you see An authentication error occurred or Could not create user when you try to log in:

    • You might have entered your credentials incorrectly. Confirm that your credentials are correct.

    • You might have an account in more than one configured identity provider. If you have logged in with a different identity provider previously, try again with that identity provider.

The Open Data Hub user interface

The Open Data Hub interface is based on the OpenShift web console user interface.

The Open Data Hub user interface is divided into several areas:

  • The global navigation bar, which provides access to useful controls, such as Help and Notifications.

    Figure 1. The global navigation bar
  • The side navigation menu, which contains different categories of pages available in Open Data Hub.

    Figure 2. The side navigation menu
  • The main display area, which displays the current page and shares space with any drawers currently displaying information, such as notifications or quick start guides. The main display area also displays the Notebook server control panel where you can launch Jupyter by starting and configuring a notebook server. Administrators can also use the Notebook server control panel to manage other users' notebook servers.

    Figure 3. The main display area

Global navigation

There are four items in the global navigation bar:

  • The Toggle side navigation menu button (Toggle side navigation menu icon) toggles whether or not the side navigation is displayed.

  • The Notifications button (Notifications icon) opens and closes the Notifications drawer, letting you read current and previous notifications in more detail.

  • The Help menu (Help menu icon) provides a link to access the Open Data Hub documentation.

  • The User menu displays the name of the currently logged-in user and provides access to the Log out button.

Side navigation

There are several different pages in the side navigation:

Applications → Enabled

The Enabled page displays applications that are enabled and ready to use on Open Data Hub. This page is the default landing page for Open Data Hub.

Click the Launch application button on an application tile to open the application interface in a new tab. If an application has an associated quick start tour, click the drop-down menu on the application tile and select Open quick start to access it. This page also displays applications and components that have been disabled by your administrator. Disabled applications are denoted with Disabled on the application tile. Click Disabled on the application tile to access links that allow you to remove the tile itself, or to revalidate its license if the license has expired.

Applications → Explore

The Explore page displays applications that are available for use with Open Data Hub. Click a tile for more information about the application or to access the Enable button. The Enable button is visible only if an application does not require an OpenShift Operator installation.

Data Science Projects

The Data science projects page allows you to organize your data science work into a single project. From this page, you can create and manage data science projects. You can also enhance the capabilities of your data science project by adding workbenches, adding storage to your project’s cluster, adding data connections, and adding model servers.

Data Science Pipelines → Pipelines

The Pipelines page allows you to import, manage, track, and view data science pipelines. Using Open Data Hub pipelines, you can standardize and automate machine learning workflows to enable you to develop and deploy your data science models.

Data Science Pipelines → Runs

The Runs page allows you to define, manage, and track executions of a data science pipeline. A pipeline run is a single execution of a data science pipeline. You can also view a record of previously executed and scheduled runs for your data science project.

Model Serving

The Model Serving page allows you to manage and view the status of your deployed models. You can use this page to deploy data science models to serve intelligent applications, or to view existing deployed models. You can also determine the inference endpoint of a deployed model.

Resources

The Resources page displays learning resources such as documentation, how-to material, and quick start tours. You can filter visible resources using the options displayed on the left, or enter terms into the search bar.

Settings → Notebook images

The Notebook images page allows you to configure custom notebook images that cater to your project’s specific requirements. After you have added custom notebook images to your deployment of Open Data Hub, they are available for selection when creating a notebook server.

Settings → Cluster settings

The Cluster settings page allows you to perform the following administrative tasks on your cluster:

  • Enable or disable Red Hat’s ability to collect data about Open Data Hub usage on your cluster.

  • Configure how resources are claimed within your cluster by changing the default size of the cluster’s persistent volume claim (PVC).

  • Reduce resource usage in your Open Data Hub deployment by stopping notebook servers that have been idle.

  • Schedule notebook pods on tainted nodes by adding tolerations.

Settings → Accelerator profiles

The Accelerator profiles page allows you to perform the following administrative tasks on your accelerator profiles:

  • Enable or disable an existing accelerator profile.

  • Create, update, or delete accelerator profiles.

  • Schedule pods on tainted nodes by adding tolerations.

Settings → Serving runtimes

The Serving runtimes page allows you to manage the model-serving runtimes in your Open Data Hub deployment. You can use this page to add, edit, and enable or disable model-serving runtimes. You specify a model-serving runtime when you configure a model server on the Data Science Projects page.

Settings → User management

The User management page allows you to define Open Data Hub user group and admin group membership.

Notifications in Open Data Hub

Open Data Hub displays notifications when important events happen in the cluster.

If you miss a notification message, click the Notifications button (Notifications icon) to open the Notifications drawer and view unread messages.

Figure 4. The Notifications drawer

Creating a data science project

To start your data science work, create a data science project. Creating a project helps you organize your work in one place. You can also enhance your data science project by adding the following functionality:

  • Workbenches

  • Storage for your project’s cluster

  • Data connections

  • Model servers

  • Bias monitoring for your models

Prerequisites
  • You have logged in to Open Data Hub.

  • If you are using specialized Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Projects.

    The Data science projects page opens.

  2. Click Create data science project.

    The Create a data science project dialog opens.

  3. Enter a name for your data science project.

  4. Optional: Edit the resource name for your data science project. The resource name must consist of lowercase alphanumeric characters and hyphens (-), and must start and end with an alphanumeric character (a minimal validation sketch follows at the end of this procedure).

  5. Enter a description for your data science project.

  6. Click Create.

    A project details page opens. From this page, you can create workbenches, add cluster storage and data connections, import pipelines, and deploy models.
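
The resource name follows the lowercase alphanumeric-and-hyphen rule described in step 4. The following minimal sketch shows one way to check a candidate name in Python before entering it; it assumes the rule stated in step 4 is the only constraint.

    import re

    # Lowercase alphanumeric characters and hyphens, starting and ending with
    # an alphanumeric character, as described in step 4.
    RESOURCE_NAME_PATTERN = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

    def is_valid_resource_name(name: str) -> bool:
        return bool(RESOURCE_NAME_PATTERN.match(name))

    print(is_valid_resource_name("fraud-detection"))  # True
    print(is_valid_resource_name("Fraud_Detection"))  # False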

Verification
  • The project that you created is displayed on the Data science projects page.

Creating a project workbench

To examine and work with models in an isolated area, you can create a workbench. You can use this workbench to create a Jupyter notebook from an existing notebook container image to access its resources and properties. For data science projects that require data retention, you can add container storage to the workbench you are creating. If you require extra power for use with large datasets, you can assign accelerators to your workbench to optimize performance.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you use specialized Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that you can add a workbench to.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Projects.

    The Data science projects page opens.

  2. Click the name of the project that you want to add the workbench to.

    The Details page for the project opens.

  3. In the Workbenches section, click Create workbench.

    The Create workbench page opens.

  4. Configure the properties of the workbench you are creating.

    1. In the Name field, enter a name for your workbench.

    2. Optional: In the Description field, enter a description to define your workbench.

    3. In the Notebook image section, complete the fields to specify the notebook image to use with your workbench.

      1. From the Image selection list, select a notebook image.

    4. In the Deployment size section, specify the size of your deployment instance.

      1. From the Container size list, select a container size for your server.

      2. Optional: From the Accelerator list, select an accelerator.

      3. If you selected an accelerator in the preceding step, specify the number of accelerators to use.

    5. Optional: Select and specify values for any new environment variables.

      Note

      To enable data science pipelines in JupyterLab, create the following environment variable: PIPELINES_SSL_SA_CERTS=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

    6. Configure the storage for your Open Data Hub cluster.

      1. Select Create new persistent storage to create storage that is retained after you log out of Open Data Hub. Complete the relevant fields to define the storage.

      2. Select Use existing persistent storage to reuse existing storage and select the storage from the Persistent storage list.

    7. To use a data connection, in the Data connections section, select the Use a data connection checkbox.

      • Create a new data connection as follows:

        1. Select Create new data connection.

        2. In the Name field, enter a unique name for the data connection.

        3. In the Access key field, enter the access key ID for the S3-compatible object storage provider.

        4. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.

        5. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.

        6. In the Region field, enter the default region of your S3-compatible object storage account.

        7. In the Bucket field, enter the name of your S3-compatible object storage bucket.

      • Use an existing data connection as follows:

        1. Select Use existing data connection.

        2. From the Data connection list, select a data connection that you previously defined.

  5. Click Create workbench.

Verification
  • The workbench that you created appears on the Details page for the project.

  • Any cluster storage that you associated with the workbench during the creation process appears on the Details page for the project.

  • The Status column, located in the Workbenches section of the Details page, displays a status of Starting when the workbench server is starting, and Running when the workbench has successfully started.

Launching Jupyter and starting a notebook server

Launch Jupyter and start a notebook server to start working with your notebooks. If you require extra power for use with large datasets, you can assign accelerators to your notebook server to optimize performance.

Prerequisites
  • You have logged in to Open Data Hub.

  • You know the names and values you want to use for any environment variables in your notebook server environment, for example, AWS_SECRET_ACCESS_KEY.

  • If you want to work with a large data set, work with your administrator to proactively increase the storage capacity of your notebook server. If applicable, also consider assigning accelerators to your notebook server.

Procedure
  1. Locate the Jupyter tile on the Enabled applications page.

  2. Click Launch application.

    If you see an Access permission needed message, you are not in the default user group or the default administrator group for Open Data Hub. Ask your administrator to add you to the correct group.

    If you have not previously authorized the jupyter-nb-<username> service account to access your account, the Authorize Access page appears prompting you to provide authorization. Inspect the permissions selected by default, and click the Allow selected permissions button.

    If your credentials are accepted, the Notebook server control panel opens displaying the Start a notebook server page.

  3. Start a notebook server.

    This is not required if you have previously opened Jupyter.

    1. In the Notebook image section, select the notebook image to use for your server.

    2. If the notebook image contains multiple versions, select the version of the notebook image from the Versions section.

      Note

      When a new version of a notebook image is released, the previous version remains available and supported on the cluster. This gives you time to migrate your work to the latest version of the notebook image.

    3. From the Container size list, select a suitable container size for your server.

    4. Optional: From the Accelerator list, select an accelerator.

    5. If you selected an accelerator in the preceding step, specify the number of accelerators to use.

      Important

      Using accelerators is only supported with specific notebook images. For GPUs, only the PyTorch, TensorFlow, and CUDA notebook images are supported. For Habana Gaudi devices, only the HabanaAI notebook image is supported. In addition, you can only specify the number of accelerators required for your notebook server if accelerators are enabled on your cluster.

    6. Optional: Select and specify values for any new Environment variables.

      The interface stores these variables so that you only need to enter them once. Example variable names for common environment variables are automatically provided for frequently integrated environments and frameworks, such as Amazon Web Services (AWS).

      Important

      Select the Secret checkbox for variables with sensitive values that must remain private, such as passwords.

    7. Optional: Select the Start server in current tab checkbox if necessary.

    8. Click Start server.

      The Starting server progress indicator appears. Click Expand event log to view additional information about the server creation process. Depending on the deployment size and resources you requested, starting the server can take up to several minutes. Click Cancel to cancel the server creation.

      After the server starts, you see one of the following behaviors:

      • If you previously selected the Start server in current tab checkbox, the JupyterLab interface opens in the current tab of your web browser.

      • If you did not previously select the Start server in current tab checkbox, the Starting server dialog box prompts you to open the server in a new browser tab or in the current browser tab.

        The JupyterLab interface opens according to your selection.

Verification
  • The JupyterLab interface opens.

Troubleshooting
  • If you see the "Unable to load notebook server configuration options" error message, contact your administrator so that they can review the logs associated with your Jupyter pod and determine further details about the problem.

Options for notebook server environments

When you start Jupyter for the first time, or after stopping your notebook server, you must select server options in the Start a notebook server wizard so that the software and variables that you expect are available on your server. This section explains the options available in the Start a notebook server wizard in detail.

The Start a notebook server page consists of the following sections:

Notebook image

Specifies the container image that your notebook server is based on. Different notebook images have different packages installed by default. If the notebook image has multiple versions available, you can select the notebook image version to use from the Versions section.

Note

When a new version of a notebook image is released, the previous version remains available on the cluster. This gives you time to migrate your work to the latest version of the notebook image. Legacy notebook image versions, that is, versions other than the two most recent, might still be available for selection. Legacy image versions include a label that indicates that the image is out-of-date. To use the latest package versions, use the most recently added notebook image.

After you start a notebook image, you can check which Python packages are installed on your notebook server and which version of the package you have by running the pip tool in a notebook cell.
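
For example, you might run commands like the following in a notebook cell; pip list shows every installed package, and pip show displays the version of one package (pandas is used here only as an illustration).

    # List every Python package installed on the notebook server
    !pip list

    # Show details, including the version, for a single package
    !pip show pandas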

The following table shows the package versions used in the available notebook images.

Table 1. Notebook image options
Image name Image version Preinstalled packages

CUDA

2023.2 (Recommended)

  • CUDA 11.8

  • Python 3.9

  • JupyterLab 3.6

  • Notebook 6.5

2023.1

  • CUDA 11.8

  • Python 3.9

  • JupyterLab 3.5

  • Notebook 6.5

1.2

  • CUDA 11.4

  • Python 3.8

  • JupyterLab 3.2

  • Notebook 6.4

Minimal Python (default)

2023.2 (Recommended)

  • Python 3.9

  • JupyterLab 3.6

  • Notebook 6.5

2023.1

  • Python 3.9

  • JupyterLab 3.5

  • Notebook 6.5

1.2

  • Python 3.8

  • JupyterLab 3.2

  • Notebook 6.4

PyTorch

2023.2 (Recommended)

  • CUDA 11.8

  • Python 3.9

  • PyTorch 2.0

  • JupyterLab 3.6

  • Notebook 6.5

  • TensorBoard 2.13

  • Boto3 1.28

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.3

  • SciPy 1.11

  • Elyra 3.15

  • PyMongo 4.5

  • Pyodbc 4.0

  • Codeflare-SDK 0.12

  • Sklearn-onnx 1.15

  • Psycopg 3.1

  • MySQL Connector/Python 8.0

2023.1

  • CUDA 11.8

  • Python 3.9

  • PyTorch 1.13

  • JupyterLab 3.5

  • Notebook 6.5

  • TensorBoard 2.11

  • Boto3 1.26

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.2

  • SciPy 1.10

  • Elyra 3.15

1.2

  • CUDA 11.4

  • Python 3.8

  • PyTorch 1.8

  • JupyterLab 3.2

  • Notebook 6.4

  • TensorBoard 2.6

  • Boto3 1.17

  • Kafka-Python 2.0

  • Matplotlib 3.4

  • Numpy 1.19

  • Pandas 1.2

  • Scikit-learn 0.24

  • SciPy 1.6

Standard Data Science

2023.2 (Recommended)

  • Python 3.9

  • JupyterLab 3.6

  • Notebook 6.5

  • Boto3 1.28

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Pandas 1.5

  • Numpy 1.24

  • Scikit-learn 1.3

  • SciPy 1.11

  • Elyra 3.15

  • PyMongo 4.5

  • Pyodbc 4.0

  • Codeflare-SDK 0.12

  • Sklearn-onnx 1.15

  • Psycopg 3.1

  • MySQL Connector/Python 8.0

2023.1

  • Python 3.9

  • JupyterLab 3.5

  • Notebook 6.5

  • Boto3 1.26

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.2

  • SciPy 1.10

  • Elyra 3.15

1.2

  • Python 3.8

  • JupyterLab 3.2

  • Notebook 6.4

  • Boto3 1.17

  • Kafka-Python 2.0

  • Matplotlib 3.4

  • Pandas 1.2

  • Numpy 1.19

  • Scikit-learn 0.24

  • SciPy 1.6

TensorFlow

2023.2 (Recommended)

  • CUDA 11.8

  • Python 3.9

  • JupyterLab 3.6

  • Notebook 6.5

  • TensorFlow 2.13

  • TensorBoard 2.13

  • Boto3 1.28

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.3

  • SciPy 1.11

  • Elyra 3.15

  • PyMongo 4.5

  • Pyodbc 4.0

  • Codeflare-SDK 0.12

  • Sklearn-onnx 1.15

  • Psycopg 3.1

  • MySQL Connector/Python 8.0

2023.1

  • CUDA 11.8

  • Python 3.9

  • JupyterLab 3.5

  • Notebook 6.5

  • TensorFlow 2.11

  • TensorBoard 2.11

  • Boto3 1.26

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.2

  • SciPy 1.10

  • Elyra 3.15

1.2

  • CUDA 11.4

  • Python 3.8

  • JupyterLab 3.2

  • Notebook 6.4

  • TensorFlow 2.7

  • TensorBoard 2.6

  • Boto3 1.17

  • Kafka-Python 2.0

  • Matplotlib 3.4

  • Numpy 1.19

  • Pandas 1.2

  • Scikit-learn 0.24

  • SciPy 1.6

TrustyAI

2023.2 (Recommended)

  • Python 3.9

  • JupyterLab 3.6

  • Notebook 6.5

  • TrustyAI 0.3

  • Boto3 1.28

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.3

  • SciPy 1.11

  • Elyra 3.15

  • PyMongo 4.5

  • Pyodbc 4.0

  • Codeflare-SDK 0.12

  • Sklearn-onnx 1.15

  • Psycopg 3.1

  • MySQL Connector/Python 8.0

2023.1

  • Python 3.9

  • JupyterLab 3.5

  • Notebook 6.5

  • TrustyAI 0.3

  • Boto3 1.26

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.24

  • Pandas 1.5

  • Scikit-learn 1.2

  • SciPy 1.10

  • Elyra 3.15

HabanaAI

2023.2 (Recommended)

  • Python 3.8

  • Habana 1.10

  • JupyterLab 3.5

  • TensorFlow 2.12

  • Boto3 1.26

  • Kafka-Python 2.0

  • Kfp-tekton 1.5

  • Matplotlib 3.6

  • Numpy 1.23

  • Pandas 1.5

  • Scikit-learn 1.2

  • SciPy 1.10

  • PyTorch 2.0

  • Elyra 3.15

code-server

2023.2 (Recommended)

  • Python 3.9

  • Boto3 1.29

  • Kafka-Python 2.0

  • Matplotlib 3.8

  • Numpy 1.26

  • Pandas 2.1

  • Plotly 5.18

  • Scikit-learn 1.3

  • Scipy 1.11

  • Sklearn-onnx 1.15

  • Ipykernel 6.26

  • (code-server plugin) Python 2023.14.0

  • (code-server plugin) Jupyter 2023.3.100

RStudio Server

2023.2 (Recommended)

  • Python 3.9

  • R 4.3

CUDA - RStudio Server

2023.2 (Recommended)

  • Python 3.9

  • CUDA 11.8

  • R 4.3

Deployment size

Specifies the compute resources available on your notebook server.

Container size controls the number of CPUs, the amount of memory, and the minimum and maximum request capacity of the container.

Accelerators specifies the accelerators available on your notebook server.

Number of accelerators specifies the number of accelerators to use.

Important

Using accelerators is only supported with specific notebook images. For GPUs, only the PyTorch, TensorFlow, and CUDA notebook images are supported. For Habana Gaudi devices, only the HabanaAI notebook image is supported. In addition, you can only specify the number of accelerators required for your notebook server if accelerators are enabled on your cluster.

Environment variables

Specifies the name and value of variables to be set on the notebook server. Setting environment variables during server startup means that you do not need to define them in the body of your notebooks, or with the Jupyter command line interface. Some recommended environment variables are shown in the table.

Table 2. Recommended environment variables
Environment variable option Recommended variable names

AWS

  • AWS_ACCESS_KEY_ID specifies your Access Key ID for Amazon Web Services.

  • AWS_SECRET_ACCESS_KEY specifies your Secret access key for the account specified in AWS_ACCESS_KEY_ID.
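
For example, if you set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY when starting your notebook server, the Boto3 package (preinstalled in several notebook images) reads them from the environment automatically. The following is a minimal sketch; the bucket name is a placeholder, and AWS_S3_ENDPOINT is shown only as an example name for an endpoint variable that you might define yourself for S3-compatible storage.

    import os
    import boto3

    # Boto3 picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the
    # environment, so no credentials need to appear in the notebook itself.
    s3 = boto3.client(
        "s3",
        endpoint_url=os.environ.get("AWS_S3_ENDPOINT"),  # None for standard AWS S3
    )

    # List the objects in a bucket; replace "my-bucket" with your bucket name.
    response = s3.list_objects_v2(Bucket="my-bucket")
    for obj in response.get("Contents", []):
        print(obj["Key"])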

Tutorials for data scientists

To help you get started quickly, you can access learning resources for Open Data Hub and its supported applications.

Additional resources are available on the Resources tab of the Open Data Hub user interface.

Table 3. Quick start guides
Resource Name Description

Creating a Jupyter notebook

Create a Jupyter notebook in JupyterLab.

Deploying a sample Python application using Flask and OpenShift

Deploy your data science model out of a Jupyter notebook and into a Flask application to use as a development sandbox.

Table 4. How to guides
Resource Name Description

How to install Python packages on your notebook server

Learn how to install additional Python packages on your notebook server.

How to update notebook server settings

Learn how to update the settings or the notebook image on your notebook server.

How to use data from Amazon S3 buckets

Learn how to connect to data in S3 Storage using environment variables.

How to view installed packages on your notebook server

Learn how to see which packages are installed on your running notebook server.

Accessing tutorials

You can access learning resources for Open Data Hub and supported applications.

Prerequisites
  • Ensure that you have logged in to Open Data Hub.

  • You have logged in to the OpenShift Container Platform web console.

Procedure
  1. On the Open Data Hub home page, click Resources.

    The Resources page opens.

  2. Click Access tutorial on the relevant tile.

Verification
  • You can view and access the learning resources for Open Data Hub and supported applications.

Configuring your IDE

You can configure some notebook workbenches to get the most out of your data science work.

Configuring your code-server workbench

You can use extensions to streamline your workflow, add new languages, themes, and debuggers, and connect to additional services.

For more information on code-server, see code-server in GitHub.

Installing extensions with code-server

Prerequisites
  • You have logged in to Open Data Hub.

  • If you use specialized Open Data Hub groups, you are part of the user group or admin group (for example, odh-users or odh-admins) in OpenShift.

  • You have created a data science project that has a code-server workbench.

Procedure
  1. From the Open Data Hub dashboard, click Data Science Projects.

    The Data science projects page opens.

  2. Click the name of the project containing the code-server workbench you want to start.

    The Details page for the project opens.

  3. If the workbench is not running, click the toggle in the Status column for the relevant workbench to start it.

    The status of the workbench that you started changes from Stopped to Running.

  4. After the workbench has started, click Open to open the workbench.

  5. In the Activity Bar, click the Extensions icon. (The Extensions icon)

  6. Search for the name of the extension you want to install.

  7. Click Install to add the extension to your code-server environment.

    The extension you installed appears in the Browser - Installed list on the Extensions panel.

Extensions

See Open VSX Registry for available third-party extensions that you can consider installing.

Enabling services connected to Open Data Hub

You must enable SaaS-based services, such as Anaconda Professional Edition, before using them with Open Data Hub. On-cluster services are enabled automatically.

Typically, you can install or enable services connected to Open Data Hub by using one of the following methods:

  • Enabling the service from the Explore page on the Open Data Hub dashboard, as documented in the following procedure.

  • Installing the Operator for the service from OperatorHub. OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform (Installing from OperatorHub using the web console).

  • Installing the Operator for the service from Red Hat Marketplace (Install Operators).

  • Installing the service as an Operator to your OpenShift Container Platform cluster (Adding Operators to a cluster).

For some services (such as Jupyter), the service endpoint is available on the tile for the service on the Enabled page of Open Data Hub. Certain services cannot be accessed directly from their tiles, for example, OpenVINO and Anaconda provide notebook images for use in Jupyter and do not provide an endpoint link from their tile. Additionally, it may be useful to store these endpoint URLs as environment variables for easy reference in a notebook environment.
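
For example, if you define an environment variable for a service endpoint when starting your notebook server (SERVICE_ENDPOINT_URL is a hypothetical name), a notebook cell can read it instead of hard-coding the URL.

    import os

    # Read a service endpoint URL that was set as an environment variable when
    # the notebook server was started. The variable name is illustrative only.
    endpoint_url = os.environ.get("SERVICE_ENDPOINT_URL")
    if endpoint_url is None:
        raise RuntimeError("SERVICE_ENDPOINT_URL is not set on this notebook server")

    print(f"Using service endpoint: {endpoint_url}")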

Some independent software vendor (ISV) applications must be installed in specific namespaces. In these cases, the tile for the application in the Open Data Hub dashboard specifies the required namespace.

To help you get started quickly, you can access the service’s learning resources and documentation on the Resources page, or by clicking the relevant link on the tile for the service on the Enabled page.

Prerequisites
  • You have logged in to Open Data Hub.

  • Your administrator has installed or configured the service on your OpenShift Container Platform cluster.

Procedure
  1. On the Open Data Hub home page, click Explore.

    The Explore page opens.

  2. Click the tile of the service that you want to enable.

  3. Click Enable on the drawer for the service.

  4. If prompted, enter the service’s key and click Connect.

  5. Click Enable to confirm that you are enabling the service.

Verification
  • The service that you enabled appears on the Enabled page.

  • The service endpoint is displayed on the tile for the service on the Enabled page.

Disabling applications connected to Open Data Hub

You can disable applications and components so that they do not appear on the Open Data Hub dashboard when you no longer want to use them, for example, when data scientists no longer use an application or when the application license expires.

Disabling unused applications allows your data scientists to manually remove these application tiles from their Open Data Hub dashboard so that they can focus on the applications that they are most likely to use.

Important

Do not follow this procedure when disabling the following applications:

  • Anaconda Professional Edition. You cannot manually disable Anaconda Professional Edition. It is automatically disabled only when its license expires.

Prerequisites
  • You have logged in to the OpenShift Container Platform web console.

  • You are part of the cluster-admins user group in OpenShift Container Platform.

  • You have installed or configured the service on your OpenShift Container Platform cluster.

  • The application or component that you want to disable is enabled and appears on the Enabled page.

Procedure
  1. In the OpenShift Container Platform web console, switch to the Administrator perspective.

  2. Switch to the odh project.

  3. Click Operators → Installed Operators.

  4. Click on the Operator that you want to uninstall. You can enter a keyword into the Filter by name field to help you find the Operator faster.

  5. Delete any Operator resources or instances by using the tabs in the Operator interface.

    During installation, some Operators require the administrator to create resources or start process instances using tabs in the Operator interface. These must be deleted before the Operator can uninstall correctly.

  6. On the Operator Details page, click the Actions drop-down menu and select Uninstall Operator.

    An Uninstall Operator? dialog box is displayed.

  7. Select Uninstall to uninstall the Operator, Operator deployments, and pods. After this is complete, the Operator stops running and no longer receives updates.

Important

Removing an Operator does not remove any custom resource definitions or managed resources for the Operator. Custom resource definitions and managed resources still exist and must be cleaned up manually. Any applications deployed by your Operator and any configured off-cluster resources continue to run and must be cleaned up manually.

Verification
  • The Operator is uninstalled from its target clusters.

  • The Operator no longer appears on the Installed Operators page.

  • The disabled application is no longer available for your data scientists to use, and is marked as Disabled on the Enabled page of the Open Data Hub dashboard. This action may take a few minutes to occur following the removal of the Operator.

Removing disabled applications from Open Data Hub

After your administrator has disabled your unused applications, you can manually remove them from the Open Data Hub dashboard. Disabling and removing unused applications allows you to focus on the applications that you are most likely to use.

Prerequisites
  • Ensure that you have logged in to Open Data Hub.

  • You have logged in to the OpenShift Container Platform web console.

  • Your administrator has previously disabled the application that you want to remove.

Procedure
  1. In the Open Data Hub interface, click Enabled.

    The Enabled page opens. Disabled applications are denoted with Disabled on the tile for the application.

  2. Click Disabled on the tile for the application that you want to remove.

  3. Click the link to remove the application tile.

Verification
  • The tile for the disabled application no longer appears on the Enabled page.

Support requirements and limitations

Supported browsers

Open Data Hub supports the latest version of the following browsers:

  • Google Chrome

  • Mozilla Firefox

  • Safari

Supported services

Open Data Hub supports the following services:

Table 5. Supported services
Service Name Description

Anaconda Professional

Anaconda Professional is a popular open source package distribution and management experience that is optimized for commercial use.

IBM Watson Studio

IBM Watson Studio is a platform for embedding AI and machine learning into your business and creating custom models with your own data.

Intel® oneAPI AI Analytics Toolkit Container

The AI Kit is a set of AI software tools to accelerate end-to-end data science and analytics pipelines on Intel® architectures.

Jupyter

Jupyter is a multi-user version of the notebook designed for companies, classrooms, and research labs.

OpenVINO

OpenVINO is an open source toolkit to help optimize deep learning performance and deploy using an inference engine onto Intel hardware.

Pachyderm

Use Pachyderm’s data versioning, pipeline and lineage capabilities to automate the machine learning life cycle and optimize machine learning operations.

Supported packages

Notebook server images in Open Data Hub are installed with Python 3.9 by default.

You can install packages that are compatible with Python 3.9 on any notebook server that has the binaries required by that package.

You can install packages on a temporary basis by using the pip install command. You can also provide a list of packages to the pip install command using a requirements.txt file.

You must re-install these packages each time you start your notebook server.

You can remove packages by using the pip uninstall command.
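
For example, the following notebook cell commands install a single package for the current session, install everything listed in a requirements.txt file, and remove a package again. The package name altair is a placeholder; substitute the packages you need.

    # Install a single package for the current notebook server session
    !pip install altair

    # Install every package listed in a requirements.txt file
    !pip install -r requirements.txt

    # Remove a package; -y skips the confirmation prompt
    !pip uninstall -y altair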

Common questions

In addition to documentation, Red Hat provides a rich set of learning resources for Open Data Hub and supported applications.

On the Resources page of the Open Data Hub dashboard, you can use the category links to filter the resources for various stages of your data science workflow. For example, click the Model serving category to display resources that describe various methods of deploying models. Click All items to show the resources for all categories.

For the selected category, you can apply additional options to filter the available resources. For example, you can filter by type, such as how-to articles, quick starts, and tutorials. These resources provide answers to common questions.