Managing resources

As an Open Data Hub administrator, you can manage the following resources:

  • The dashboard interface, including the visibility of navigation menu options

  • Applications that show in the dashboard

  • Cluster resources to support compute-intensive data science work

  • Jupyter notebook servers

  • Data storage backup

Customizing the dashboard

The Open Data Hub dashboard provides features that are designed to work for most scenarios. These features are configured in the OdhDashboardConfig custom resource (CR) file.

To see a description of the options in the Open Data Hub dashboard configuration file, see Dashboard configuration options.

As an administrator, you can customize the interface of the dashboard, for example, to show or hide some of the dashboard navigation menu options. To change the default settings of the dashboard, edit the OdhDashboardConfig custom resource (CR) file as described in Editing the dashboard configuration file.

Editing the dashboard configuration file

As an administrator, you can customize the interface of the dashboard by editing the dashboard configuration file.

Prerequisites
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
  1. Log in to the OpenShift Container Platform console as a cluster administrator.

  2. In the Administrator perspective, click Home → API Explorer.

  3. In the search bar, enter OdhDashboardConfig to filter by kind.

  4. Click the OdhDashboardConfig custom resource (CR) to open the resource details page.

  5. Select the redhat-ods-applications project from the Project list.

  6. Click the Instances tab.

  7. Click the odh-dashboard-config instance to open the details page.

  8. Click the YAML tab. Here is an example OdhDashboardConfig file showing default values:

    apiVersion: opendatahub.io/v1alpha
    kind: OdhDashboardConfig
    metadata:
      name: odh-dashboard-config
    spec:
      dashboardConfig:
        enablement: true
        disableBYONImageStream: false
        disableClusterManager: false
        disableISVBadges: false
        disableInfo: false
        disableSupport: false
        disableTracking: true
        disableProjects: true
        disablePipelines: true
        disableModelServing: true
        disableProjectSharing: true
        disableCustomServingRuntimes: false
        disableAcceleratorProfiles: true
        modelMetricsNamespace: ''
        disablePerformanceMetrics: false
      notebookController:
        enabled: true
      notebookSizes:
        - name: Small
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
            requests:
              cpu: '1'
              memory: 1Gi
        - name: Medium
          resources:
            limits:
              cpu: '4'
              memory: 4Gi
            requests:
              cpu: '2'
              memory: 2Gi
        - name: Large
          resources:
            limits:
              cpu: '8'
              memory: 8Gi
            requests:
              cpu: '4'
              memory: 4Gi
      modelServerSizes:
        - name: Small
          resources:
            limits:
              cpu: '2'
              memory: 8Gi
            requests:
              cpu: '1'
              memory: 4Gi
        - name: Medium
          resources:
            limits:
              cpu: '8'
              memory: 10Gi
            requests:
              cpu: '4'
              memory: 8Gi
        - name: Large
          resources:
            limits:
              cpu: '10'
              memory: 20Gi
            requests:
              cpu: '6'
              memory: 16Gi
      groupsConfig:
        adminGroups: 'odh-admins'
        allowedGroups: 'system:authenticated'
      templateOrder:
        - 'ovms'
      templateDisablement:
        - 'ovms'
  9. Edit the values of the options that you want to change.

  10. Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster.

Verification

Log in to Open Data Hub and verify that your dashboard configuration changes are applied.
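
You can make the same kinds of changes from the command line with the OpenShift CLI (oc). The following commands are a minimal sketch: they assume that the OdhDashboardConfig custom resource is addressable as odhdashboardconfig and that the default odh-dashboard-config instance exists in the redhat-ods-applications project, as shown in the example above.

    # View the current dashboard configuration
    oc get odhdashboardconfig odh-dashboard-config \
      -n redhat-ods-applications -o yaml

    # Example: hide the Settings -> Cluster settings menu option
    oc patch odhdashboardconfig odh-dashboard-config \
      -n redhat-ods-applications --type=merge \
      -p '{"spec": {"dashboardConfig": {"disableClusterManager": true}}}'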

Dashboard configuration options

The Open Data Hub dashboard includes a set of core features enabled by default that are designed to work for most scenarios. Administrators can configure the Open Data Hub dashboard from the OdhDashboardConfig custom resource (CR) in OpenShift Container Platform.

Table 1. Dashboard feature configuration options

Feature

Default

Description

dashboardConfig: enablement

true

Enables admin users to add applications to the Open Data Hub dashboard Applications → Enabled page. To disable this ability, set the value to false.

dashboardConfig: disableInfo

false

On the Applications → Explore page, when a user clicks on an application tile, an information panel opens with more details about the application. To disable the information panel for all applications on the Applications → Explore page, set the value to true.

dashboardConfig: disableSupport

false

Shows the Support menu option when a user clicks the Help icon in the dashboard toolbar. To hide this menu option, set the value to true.

dashboardConfig: disableClusterManager

false

Shows the Settings → Cluster settings option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disableTracking

true

Allows Red Hat to collect data about Open Data Hub usage in your cluster. To enable data collection, set the value to false. You can also set this option in the Open Data Hub dashboard interface from the Settings → Cluster settings navigation menu.

dashboardConfig: disableBYONImageStream

false

Shows the Settings → Notebook images option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disableISVBadges

false

Shows the label on a tile that indicates whether the application is “Red Hat managed”, “Partner managed”, or “Self-managed”. To hide these labels, set the value to true.

dashboardConfig: disableUserManagement

false

Shows the Settings → User management option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disableProjects

false

Shows the Data Science Projects option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disablePipelines

false

Shows the Data Science Pipelines option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disableModelServing

false

Shows the Model Serving option in the dashboard navigation menu and in the list of components for the data science projects. To hide Model Serving from the dashboard navigation menu and from the list of components for data science projects, set the value to true.

dashboardConfig: disableProjectSharing

false

Allows users to share access to their data science projects with other users. To prevent users from sharing data science projects, set the value to true.

dashboardConfig: disableCustomServingRuntimes

false

Shows the Serving runtimes option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: disableKServe

false

Enables the ability to select KServe as a Serving Platform. To disable this ability, set the value to true.

dashboardConfig: disableModelMesh

false

Enables the ability to select ModelMesh as a Serving Platform. To disable this ability, set the value to true.

dashboardConfig: disableAcceleratorProfiles

false

Shows the Accelerator profiles option in the dashboard navigation menu. To hide this menu option, set the value to true.

dashboardConfig: modelMetricsNamespace

'' (empty string)

Specifies the namespace in which the Model Serving Metrics Prometheus Operator is installed.

dashboardConfig: disablePerformanceMetrics

false

Shows the Endpoint Performance tab on the Model Serving page. To hide this tab, set the value to true.

notebookController: enabled

true

Controls the Notebook Controller options, such as whether it is enabled in the dashboard and which parts are visible.

notebookSizes

Allows you to customize names and resources for notebooks. The Kubernetes-style sizes are shown in the dropdown menu that appears when spawning notebooks with the Notebook Controller. Note: These sizes must follow Kubernetes conventions. For example, requests must not exceed limits.

modelServerSizes

Allows you to customize names and resources for model servers.

groupsConfig

Controls access to dashboard features, such as the spawner for allowed users and the cluster settings UI for admin users.

templateOrder

Specifies the order of custom Serving Runtime templates. When the user creates a new template, it is added to this list.
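
For example, to grant dashboard administrator access to an additional OpenShift group, you can update groupsConfig from the command line. This is a sketch only: it assumes that adminGroups accepts a comma-separated list of group names and that a group named custom-odh-admins (a hypothetical name) already exists in your cluster.

    # Hypothetical example: keep the default admin group and add a second one
    oc patch odhdashboardconfig odh-dashboard-config \
      -n redhat-ods-applications --type=merge \
      -p '{"spec": {"groupsConfig": {"adminGroups": "odh-admins,custom-odh-admins"}}}'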

Managing applications that show in the dashboard

Adding an application to the dashboard

If you have installed an application in your OpenShift Container Platform cluster, you can add a tile for that application to the Open Data Hub dashboard (the Applications → Enabled page) to make it accessible for Open Data Hub users.

Prerequisites
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

  • The dashboard configuration enablement option is set to true (the default). Note that an admin user can disable this ability as described in Preventing users from adding applications to the dashboard.

Procedure
  1. Log in to the OpenShift Container Platform console as a cluster administrator.

  2. In the Administrator perspective, click Home → API Explorer.

  3. On the API Explorer page, search for the OdhApplication kind.

  4. Click the OdhApplication kind to open the resource details page.

  5. On the OdhApplication details page, select the redhat-ods-applications project from the Project list.

  6. Click the Instances tab.

  7. Click Create OdhApplication.

  8. On the Create OdhApplication page, copy the following code and paste it into the YAML editor.

    apiVersion: dashboard.opendatahub.io/v1
    kind: OdhApplication
    metadata:
      name: examplename
      namespace: redhat-ods-applications
      labels:
        app: odh-dashboard
        app.kubernetes.io/part-of: odh-dashboard
    spec:
      enable:
        validationConfigMap: examplename-enable
      img: >-
        <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
        <path d="path data" fill="#ee0000"/>
        </svg>
      getStartedLink: 'https://example.org/docs/quickstart.html'
      route: exampleroutename
      routeNamespace: examplenamespace
      displayName: Example Name
      kfdefApplications: []
      support: third party support
      csvName: ''
      provider: example
      docsLink: 'https://example.org/docs/index.html'
      quickStart: ''
      getStartedMarkDown: >-
        # Example
    
        Enter text for the information panel.
    
      description: >-
        Enter summary text for the tile.
      category: Self-managed | Partner managed | Red Hat managed
  9. Modify the parameters in the code for your application.

    Tip
    To see example YAML files, click Home → API Explorer, select OdhApplication, click the Instances tab, select an instance, and then click the YAML tab.
  10. Click Create. The application details page appears.

  11. Log in to Open Data Hub.

  12. In the left menu, click Applications → Explore.

  13. Locate the new tile for your application and click it.

  14. In the information pane for the application, click Enable.

Verification
  • In the left menu of the Open Data Hub dashboard, click Applications → Enabled and verify that your application is available.
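
You can also create the OdhApplication resource with the OpenShift CLI instead of the console YAML editor. This is a sketch under the assumption that you saved the manifest above to a file named my-odh-application.yaml (a hypothetical file name) and that the resource is addressable as odhapplications.

    # Create the application tile from a saved manifest
    oc apply -f my-odh-application.yaml -n redhat-ods-applications

    # Confirm that the resource exists
    oc get odhapplications -n redhat-ods-applications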

Preventing users from adding applications to the dashboard

By default, admin users are allowed to add applications to the Open Data Hub dashboard Applications → Enabled page.

You can disable the ability for admin users to add applications to the dashboard.

Note: The Jupyter tile is enabled by default. To disable it, see Hiding the default Jupyter application.

Prerequisite
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
  1. Log in to the OpenShift Container Platform console as a cluster administrator.

  2. Open the dashboard configuration file:

    1. In the Administrator perspective, click Home → API Explorer.

    2. In the search bar, enter OdhDashboardConfig to filter by kind.

    3. Click the OdhDashboardConfig custom resource (CR) to open the resource details page.

    4. Select the redhat-ods-applications project from the Project list.

    5. Click the Instances tab.

    6. Click the odh-dashboard-config instance to open the details page.

    7. Click the YAML tab.

  3. In the spec:dashboardConfig section, set the value of enablement to false to disable the ability for dashboard users to add applications to the dashboard.

  4. Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster.

Verification

Open the Open Data Hub dashboard Applications → Enabled page.
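
As a command-line alternative, the enablement option can be set directly on the dashboard configuration. A minimal sketch, assuming the default odh-dashboard-config instance:

    # Prevent users from adding applications to the dashboard
    oc patch odhdashboardconfig odh-dashboard-config \
      -n redhat-ods-applications --type=merge \
      -p '{"spec": {"dashboardConfig": {"enablement": false}}}'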

Showing or hiding information about enabled applications

If you have installed another application in your OpenShift Container Platform cluster, you can add a tile for that application to the Open Data Hub dashboard (the Applications → Enabled page) to make it accessible for Open Data Hub users.

Prerequisites
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
  1. Log in to the OpenShift Container Platform console as a cluster administrator.

  2. In the Administrator perspective, click Home → API Explorer.

  3. On the API Explorer page, search for the OdhApplication kind.

  4. Click the OdhApplication kind to open the resource details page.

  5. On the OdhApplication details page, select the redhat-ods-applications project from the Project list.

  6. Click the Instances tab.

  7. Click Create OdhApplication.

  8. On the Create OdhApplication page, copy the following code and paste it into the YAML editor.

    apiVersion: dashboard.opendatahub.io/v1
    kind: OdhApplication
    metadata:
      name: examplename
      namespace: redhat-ods-applications
      labels:
        app: odh-dashboard
        app.kubernetes.io/part-of: odh-dashboard
    spec:
      enable:
        validationConfigMap: examplename-enable
      img: >-
        <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
        <path d="path data" fill="#ee0000"/>
        </svg>
      getStartedLink: 'https://example.org/docs/quickstart.html'
      route: exampleroutename
      routeNamespace: examplenamespace
      displayName: Example Name
      kfdefApplications: []
      support: third party support
      csvName: ''
      provider: example
      docsLink: 'https://example.org/docs/index.html'
      quickStart: ''
      getStartedMarkDown: >-
        # Example
    
        Enter text for the information panel.
    
      description: >-
        Enter summary text for the tile.
      category: Self-managed | Partner managed | Red Hat managed
  9. Modify the parameters in the code for your application.

    Tip
    To see example YAML files, click Home → API Explorer, select OdhApplication, click the Instances tab, select an instance, and then click the YAML tab.
  10. Click Create. The application details page appears.

  11. Log in to Open Data Hub.

  12. In the left menu, click Applications → Explore.

  13. Locate the new tile for your application and click it.

  14. In the information pane for the application, click Enable.

Verification
  • In the left menu of the Open Data Hub dashboard, click Applications → Enabled and verify that your application is available.

Hiding the default Jupyter application

The Open Data Hub dashboard includes Jupyter as an enabled application by default.

To hide the Jupyter tile from the list of Enabled applications, edit the dashboard configuration file.

Prerequisite
  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
  1. Log in to the OpenShift Container Platform console as a cluster administrator.

  2. Open the dashboard configuration file:

    1. In the Administrator perspective, click Home → API Explorer.

    2. In the search bar, enter OdhDashboardConfig to filter by kind.

    3. Click the OdhDashboardConfig custom resource (CR) to open the resource details page.

    4. Select the redhat-ods-applications project from the Project list.

    5. Click the Instances tab.

    6. Click the odh-dashboard-config instance to open the details page.

    7. Click the YAML tab.

  3. In the spec:notebookController section, set the value of enabled to false to hide the Jupyter tile from the list of Enabled applications.

  4. Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster.

Verification

In the Open Data Hub dashboard, select Applications → Enabled. You should not see the Jupyter tile.
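
The same change can be made with the OpenShift CLI. A minimal sketch, assuming the default odh-dashboard-config instance in the redhat-ods-applications project:

    # Hide the Jupyter tile by disabling the notebook controller option in the dashboard
    oc patch odhdashboardconfig odh-dashboard-config \
      -n redhat-ods-applications --type=merge \
      -p '{"spec": {"notebookController": {"enabled": false}}}'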

Managing cluster resources

Configuring the default PVC size for your cluster

To configure how resources are claimed within your Open Data Hub cluster, you can change the default size of the cluster’s persistent volume claims (PVCs) to ensure that the requested storage matches your typical storage workflow. PVCs are requests for storage resources in your cluster and also act as claim checks to those resources.

Prerequisites
  • You have logged in to Open Data Hub.

Note
Changing the PVC setting restarts the Jupyter pod and makes Jupyter unavailable for up to 30 seconds. As a workaround, it is recommended that you perform this action outside of your organization’s typical working day.
Procedure
  1. From the Open Data Hub dashboard, click Settings → Cluster settings.

  2. Under PVC size, enter a new size in gibibytes. The minimum size is 1 GiB, and the maximum size is 16384 GiB.

  3. Click Save changes.

Verification
  • New PVCs are created with the default storage size that you configured.
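
To spot-check the result from the command line, you can list the PVCs in a data science project and their requested sizes. This sketch assumes a project named my-data-science-project, which is a hypothetical name.

    # List PVCs and their requested storage size in a project
    oc get pvc -n my-data-science-project \
      -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage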

Restoring the default PVC size for your cluster

You can restore the default size of your cluster’s persistent volume claims (PVCs) if you previously changed it.

Prerequisites
  • You have logged in to Open Data Hub.

  • You are part of the administrator group for Open Data Hub in OpenShift Container Platform.

Procedure
  1. From the Open Data Hub dashboard, click Settings → Cluster settings.

  2. Click Restore Default to restore the default PVC size of 20 GiB.

  3. Click Save changes.

Verification
  • New PVCs are created with the default storage size of 20 GiB.

Overview of accelerators

If you work with large data sets, you can use accelerators to optimize the performance of your data science models in Open Data Hub. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in Open Data Hub to assist your data scientists in the following tasks:

  • Natural language processing (NLP)

  • Inference

  • Training deep neural networks

  • Data cleansing and data processing

Open Data Hub supports the following accelerators:

  • NVIDIA graphics processing units (GPUs)

    • To use compute-heavy workloads in your models, you can enable NVIDIA graphics processing units (GPUs) in Open Data Hub.

    • To enable GPUs on OpenShift, you must install the NVIDIA GPU Operator.

  • Habana Gaudi devices (HPUs)

    • Habana, an Intel company, provides hardware accelerators intended for deep learning workloads. You can use the Habana libraries and software associated with Habana Gaudi devices available from your notebook.

    • Before you can successfully enable Habana Gaudi devices on Open Data Hub, you must install the necessary dependencies and version 1.10 of the HabanaAI Operator. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see HabanaAI Operator for OpenShift.

    • You can enable Habana Gaudi devices on-premises or with AWS DL1 compute nodes on an AWS instance.

Before you can use an accelerator in Open Data Hub, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for the accelerator in context. You can create an accelerator profile from the Settings → Accelerator profiles page on the Open Data Hub dashboard. If your deployment contains existing accelerators that already had associated accelerator profiles configured, an accelerator profile is automatically created after you upgrade to the latest version of Open Data Hub.
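
You can also create an accelerator profile by applying an AcceleratorProfile manifest directly. The following is a sketch for an NVIDIA GPU; the apiVersion, field names, and namespace shown here are assumptions based on the defaults described in this document, not a definitive schema.

    # Sketch of an AcceleratorProfile for an NVIDIA GPU (schema details assumed)
    oc apply -n redhat-ods-applications -f - <<'EOF'
    apiVersion: dashboard.opendatahub.io/v1
    kind: AcceleratorProfile
    metadata:
      name: nvidia-gpu
    spec:
      displayName: NVIDIA GPU
      enabled: true
      identifier: nvidia.com/gpu
      tolerations: []
    EOF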

Enabling GPU support in Open Data Hub

Optionally, to ensure that your data scientists can use compute-heavy workloads in their models, you can enable graphics processing units (GPUs) in Open Data Hub.

Prerequisites
  • You have logged in to your OpenShift Container Platform cluster.

  • You have the cluster-admin role in your OpenShift Container Platform cluster.

Procedure
  1. To enable GPU support on an OpenShift cluster, follow the instructions here: NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation.

  2. Delete the migration-gpu-status ConfigMap.

    1. In the OpenShift Container Platform web console, switch to the Administrator perspective.

    2. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap.

    3. Search for the migration-gpu-status ConfigMap.

    4. Click the action menu (⋮) and select Delete ConfigMap from the list.

      The Delete ConfigMap dialog appears.

    5. Inspect the dialog and confirm that you are deleting the correct ConfigMap.

    6. Click Delete.

  3. Restart the dashboard replica set. (For a command-line alternative to steps 2 and 3, see the example after this procedure.)

    1. In the OpenShift Container Platform web console, switch to the Administrator perspective.

    2. Click Workloads → Deployments.

    3. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment.

    4. Search for the rhods-dashboard deployment.

    5. Click the action menu (⋮) and select Restart Rollout from the list.

    6. Wait until the Status column indicates that all pods in the rollout have fully restarted.
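
As a command-line alternative to steps 2 and 3, you can delete the ConfigMap and restart the dashboard deployment with the OpenShift CLI. This sketch assumes the resources are in the redhat-ods-applications project, as described above.

    # Delete the migration-gpu-status ConfigMap
    oc delete configmap migration-gpu-status -n redhat-ods-applications

    # Restart the dashboard deployment and wait for the rollout to finish
    oc rollout restart deployment/rhods-dashboard -n redhat-ods-applications
    oc rollout status deployment/rhods-dashboard -n redhat-ods-applications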

Verification
  • The NVIDIA GPU Operator appears on the Operators → Installed Operators page in the OpenShift Container Platform web console.

  • The reset migration-gpu-status instance is present in the Instances tab on the AcceleratorProfile custom resource definition (CRD) details page.

After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerator profiles.

Enabling Habana Gaudi devices

Before you can use Habana Gaudi devices in Open Data Hub, you must install the necessary dependencies and deploy the HabanaAI Operator.

Prerequisites
  • You have logged in to OpenShift Container Platform.

  • You have the cluster-admin role in OpenShift Container Platform.

Procedure
  1. To enable Habana Gaudi devices in Open Data Hub, follow the instructions at HabanaAI Operator for OpenShift.

  2. From the Open Data Hub dashboard, click Settings → Accelerator profiles.

    The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.

  3. Click Create accelerator profile.

    The Create accelerator profile dialog opens.

  4. In the Name field, enter a name for the Habana Gaudi device.

  5. In the Identifier field, enter a unique string that identifies the Habana Gaudi device, for example, habana.ai/gaudi.

  6. Optional: In the Description field, enter a description for the Habana Gaudi device.

  7. To enable or disable the accelerator profile for the Habana Gaudi device immediately after creation, click the toggle in the Enable column.

  8. Optional: Add a toleration to schedule pods with matching taints. (An example of how the toleration appears in the underlying accelerator profile resource follows this procedure.)

    1. Click Add toleration.

      The Add toleration dialog opens.

    2. From the Operator list, select one of the following options:

      • Equal - The key/value/effect parameters must match. This is the default.

      • Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any.

    3. From the Effect list, select one of the following options:

      • None

      • NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.

      • PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.

      • NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.

    4. In the Key field, enter the toleration key habana.ai/gaudi. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    5. In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    6. In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.

      • Forever - Pods stay permanently bound to a node.

      • Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.

    7. Click Add.

  9. Click Create accelerator profile.
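
Equivalently, you could create the same profile from the command line. The following manifest is a sketch of how the profile might look for a Habana Gaudi device with an Exists/NoSchedule toleration; the apiVersion and field names are assumptions, consistent with the earlier sketch.

    # Sketch of an AcceleratorProfile for Habana Gaudi with a toleration (schema details assumed)
    oc apply -n redhat-ods-applications -f - <<'EOF'
    apiVersion: dashboard.opendatahub.io/v1
    kind: AcceleratorProfile
    metadata:
      name: habana-gaudi
    spec:
      displayName: Habana Gaudi
      enabled: true
      identifier: habana.ai/gaudi
      tolerations:
        - key: habana.ai/gaudi
          operator: Exists
          effect: NoSchedule
    EOF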

Verification
  • From the Administrator perspective, the following Operators appear on the Operators → Installed Operators page.

    • HabanaAI

    • Node Feature Discovery (NFD)

    • Kernel Module Management (KMM)

  • The Accelerator list displays the Habana Gaudi accelerator on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server.

  • The accelerator profile appears on the Accelerator profiles page.

  • The accelerator profile appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD).

Allocating additional resources to Open Data Hub users

As a cluster administrator, you can allocate additional resources to a cluster to support compute-intensive data science work. This support includes increasing the number of nodes in the cluster and changing the cluster’s allocated machine pool.

For more information about allocating additional resources to an OpenShift Container Platform cluster, see Manually scaling a compute machine set.

Managing Jupyter notebook servers

Accessing the Jupyter administration interface

You can use the Jupyter administration interface to control notebook servers in your Open Data Hub environment.

Prerequisite
  • You are part of the OpenShift Container Platform administrator group.

Procedure
  • To access the Jupyter administration interface from Open Data Hub, perform the following actions:

    1. In Open Data Hub, in the Applications section of the left menu, click Enabled.

    2. Locate the Jupyter tile and click Launch application.

    3. On the page that opens when you launch Jupyter, click the Administration tab.

      The Administration page opens.

  • To access the Jupyter administration interface from JupyterLab, perform the following actions:

    1. Click File → Hub Control Panel.

    2. On the page that opens in Open Data Hub, click the Administration tab.

      The Administration page opens.

Verification
  • You can see the Jupyter administration interface.

Starting notebook servers owned by other users

Administrators can start a notebook server for another existing user from the Jupyter administration interface.

Prerequisites
  • If you are using specialized Open Data Hub groups, you are part of the administrator group (for example, odh-admins). If you are not using specialized groups, you are part of the OpenShift Container Platform administrator group.

  • You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.

Procedure
  1. On the page that opens when you launch Jupyter, click the Administration tab.

  2. On the Administration tab, perform the following actions:

    1. In the Users section, locate the user whose notebook server you want to start.

    2. Click Start server beside the relevant user.

    3. Complete the Start a notebook server page.

    4. Optional: Select the Start server in current tab checkbox if necessary.

    5. Click Start server.

      After the server starts, you see one of the following behaviors:

      • If you previously selected the Start server in current tab checkbox, the JupyterLab interface opens in the current tab of your web browser.

      • If you did not previously select the Start server in current tab checkbox, the Starting server dialog box prompts you to open the server in a new browser tab or in the current tab.

        The JupyterLab interface opens according to your selection.

Verification
  • The JupyterLab interface opens.

Accessing notebook servers owned by other users

Administrators can access notebook servers that are owned by other users to correct configuration errors or to help them troubleshoot problems with their environment.

Prerequisites
  • If you are using specialized Open Data Hub groups, you are part of the administrator group (for example, odh-admins). If you are not using specialized groups, you are part of the OpenShift Container Platform administrator group.

  • You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.

Procedure
  1. On the page that opens when you launch Jupyter, click the Administration tab.

  2. On the Administration page, perform the following actions:

    1. In the Users section, locate the user that the notebook server belongs to.

    2. Click View server beside the relevant user.

    3. On the Notebook server control panel page, click Access notebook server.

Verification
  • The user’s notebook server opens in JupyterLab.

Stopping notebook servers owned by other users

Administrators can stop notebook servers that are owned by other users to reduce resource consumption on the cluster, or as part of removing a user and their resources from the cluster.

Prerequisites
  • If you are using specialized Open Data Hub groups, you are part of the administrator group (for example, odh-admins). If you are not using specialized groups, you are part of the OpenShift Container Platform administrator group.

  • You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.

  • The notebook server that you want to stop is running.

Procedure
  1. On the page that opens when you launch Jupyter, click the Administration tab.

  2. Stop one or more servers.

    • If you want to stop one or more specific servers, perform the following actions:

      1. In the Users section, locate the user that the notebook server belongs to.

      2. To stop the notebook server, perform one of the following actions:

        • Click the action menu (⋮) beside the relevant user and select Stop server.

        • Click View server beside the relevant user and then click Stop notebook server.

          The Stop server dialog box appears.

      3. Click Stop server.

    • If you want to stop all servers, perform the following actions:

      1. Click the Stop all servers button.

      2. Click OK to confirm stopping all servers.

Verification
  • The Stop server link beside each server changes to a Start server link when the notebook server has stopped.

Stopping idle notebooks

You can reduce resource usage in your Open Data Hub deployment by stopping notebook servers that have been idle (without logged in users) for a period of time. This is useful when resource demand in the cluster is high. By default, idle notebooks are not stopped after a specific time limit.

Note

If you have configured your cluster settings to disconnect all users from a cluster after a specified time limit, then this setting takes precedence over the idle notebook time limit. Users are logged out of the cluster when their session duration reaches the cluster-wide time limit.

Prerequisites
  • You have logged in to Open Data Hub.

  • You are part of the administrator group for Open Data Hub in OpenShift Container Platform.

Procedure
  1. From the Open Data Hub dashboard, click Settings → Cluster settings.

  2. Under Stop idle notebooks, select Stop idle notebooks after.

  3. Enter a time limit, in hours and minutes, for when idle notebooks are stopped.

  4. Click Save changes.

Verification
  • The notebook-controller-culler-config ConfigMap, located in the redhat-ods-applications project on the Workloads → ConfigMaps page, contains the following culling configuration settings:

    • ENABLE_CULLING: Specifies if the culling feature is enabled or disabled (this is false by default).

    • IDLENESS_CHECK_PERIOD: The polling frequency to check for a notebook’s last known activity (in minutes).

    • CULL_IDLE_TIME: The maximum allotted time to scale an inactive notebook to zero (in minutes).

  • Idle notebooks stop at the time limit that you set.
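
The culling settings listed above are stored as plain keys in the ConfigMap. You can inspect them with the OpenShift CLI; the values shown in the comments below are illustrative only and depend on the time limit that you set.

    # Inspect the culler configuration created by the dashboard
    oc get configmap notebook-controller-culler-config \
      -n redhat-ods-applications -o yaml

    # Illustrative data section (actual values depend on your settings):
    #   data:
    #     ENABLE_CULLING: "true"
    #     IDLENESS_CHECK_PERIOD: "1"
    #     CULL_IDLE_TIME: "60"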

Adding notebook pod tolerations

If you want to dedicate certain machine pools to only running notebook pods, you can allow notebook pods to be scheduled on specific nodes by adding a toleration. Taints and tolerations allow a node to control which pods should (or should not) be scheduled on them. For more information, see Understanding taints and tolerations.

This capability is useful if you want to make sure that notebook servers are placed on nodes that can handle their needs. By preventing other workloads from running on these specific nodes, you can ensure that the necessary resources are available to users who need to work with large notebook sizes.

Prerequisites
  • You have logged in to Open Data Hub.

  • You are part of the administrator group for Open Data Hub in OpenShift Container Platform.

  • You are familiar with OpenShift Container Platform taints and tolerations, as described in Understanding taints and tolerations.

Procedure
  1. From the Open Data Hub dashboard, click Settings → Cluster settings.

  2. Under Notebook pod tolerations, select Add a toleration to notebook pods to allow them to be scheduled to tainted nodes.

  3. In the Toleration key for notebook pods field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores. For example, notebooks-only.

  4. Click Save changes. The toleration key is applied to new notebook pods when they are created.

    For existing notebook pods, the toleration key is applied when the notebook pods are restarted. If you are using Jupyter, see Updating notebook server settings by restarting your server. If you are using a workbench in a data science project, see Starting a workbench.

Next step

In OpenShift Container Platform, add a matching taint key (with any value) to the machine pools that you want to dedicate to notebooks. For more information, see Controlling pod placement using node taints.
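
For example, you can taint a node in the dedicated machine pool with the same key that you entered in the dashboard. This sketch uses the notebooks-only example key from the procedure above; replace <node_name> with the name of a node in the machine pool.

    # Taint a node so that only pods with a matching toleration are scheduled on it
    oc adm taint nodes <node_name> notebooks-only=true:NoSchedule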

Verification
  1. In the OpenShift Container Platform console, for a pod that is running, click Workloads → Pods. Otherwise, for a pod that is stopped, click Workloads → StatefulSets.

  2. Search for your workbench pod name and then click the name to open the pod details page.

  3. Confirm that the assigned Node and Tolerations are correct.

Configuring a custom notebook image

You can configure custom notebook images that cater to your project’s specific requirements. From the Notebook images page, you can enable or disable a previously imported notebook image and create an accelerator profile as a recommended accelerator for existing notebook images.

Prerequisites
  • You have logged in to Open Data Hub.

  • Your custom notebook image exists in an image registry and is accessible.

  • You can access the Settings → Notebook images dashboard navigation menu option.

Procedure
  1. From the Open Data Hub dashboard, click Settings → Notebook images.

    The Notebook images page appears. Previously imported notebook images are displayed. To enable or disable a previously imported notebook image, on the row containing the relevant notebook image, click the toggle in the Enable column.

    Note

    If you have already configured an accelerator identifier for a notebook image, you can specify a recommended accelerator for the notebook image by creating an associated accelerator profile. To do this, click Create profile on the row containing the notebook image and complete the relevant fields. If the notebook image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile.

  2. Click Import new image. Alternatively, if no previously imported images were found, click Import image.

    The Import Notebook images dialog appears.

  3. In the Image location field, enter the URL of the repository containing the notebook image. For example: quay.io/my-repo/my-image:tag, quay.io/my-repo/my-image@sha256:xxxxxxxxxxxxx, or docker.io/my-repo/my-image:tag.

  4. In the Name field, enter an appropriate name for the notebook image.

  5. Optional: In the Description field, enter a description for the notebook image.

  6. Optional: From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the notebook image. If the notebook image contains only one accelerator identifier, the identifier name displays by default.

  7. Optional: Add software to the notebook image. After the import has completed, the software is added to the notebook image’s metadata and displayed on the Jupyter server creation page.

    1. Click the Software tab.

    2. Click the Add software button.

    3. Click the Edit icon.

    4. Enter the Software name.

    5. Enter the software Version.

    6. Click the Confirm icon to confirm your entry.

    7. To add additional software, click Add software, complete the relevant fields, and confirm your entry.

  8. Optional: Add packages to the notebook image. After the import has completed, the packages are added to the notebook image’s metadata and displayed on the Jupyter server creation page.

    1. Click the Packages tab.

    2. Click the Add package button.

    3. Click the Edit icon.

    4. Enter the Package name.

    5. Enter the package Version.

    6. Click the Confirm icon to confirm your entry.

    7. To add an additional package, click Add package, complete the relevant fields, and confirm your entry.

  9. Click Import.

Verification
  • The notebook image that you imported is displayed in the table on the Notebook images page.

  • Your custom notebook image is available for selection on the Start a notebook server page in Jupyter.

Backing up storage data

It is a best practice to back up the data on your persistent volume claims (PVCs) regularly.

Backing up your data is particularly important before you delete a user and before you uninstall Open Data Hub, as all PVCs are deleted when Open Data Hub is uninstalled.

See the documentation for your cluster platform for more information about backing up your PVCs.
