
Working with accelerators

Use accelerators, such as NVIDIA GPUs and Intel Gaudi AI accelerators, to optimize the performance of your end-to-end data science workflows.

Overview of accelerators

If you work with large data sets, you can use accelerators to optimize the performance of your data science models in Open Data Hub. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in Open Data Hub to assist your data scientists in the following tasks:

  • Natural language processing (NLP)

  • Inference

  • Training deep neural networks

  • Data cleansing and data processing

Open Data Hub supports the following accelerators:

  • NVIDIA graphics processing units (GPUs)

    • To use compute-heavy workloads in your models, you can enable NVIDIA graphics processing units (GPUs) in Open Data Hub.

    • To enable GPUs on OpenShift, you must install the NVIDIA GPU Operator.

  • Intel Gaudi AI accelerators

    • Intel provides hardware accelerators intended for deep learning workloads.

    • Before you can enable Intel Gaudi AI accelerators in Open Data Hub, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment.

    • A workbench image for Intel Gaudi accelerators is not included in Open Data Hub by default. Instead, you must create and configure a custom notebook to enable Intel Gaudi AI support.

    • You can enable Intel Gaudi AI accelerators on-premises or with AWS DL1 compute nodes on an AWS instance.

Before you can use an accelerator in Open Data Hub, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for each accelerator. You can create an accelerator profile from the Settings > Accelerator profiles page on the Open Data Hub dashboard. If your deployment contains existing accelerators that were already configured, an associated accelerator profile is created automatically after you upgrade to the latest version of Open Data Hub.

Enabling NVIDIA GPUs

Before you can use NVIDIA GPUs in Open Data Hub, you must install the NVIDIA GPU Operator.

Prerequisites
  • You have logged in to your OpenShift Container Platform cluster.

  • You have the cluster-admin role in your OpenShift Container Platform cluster.

Procedure
  1. To enable GPU support on an OpenShift cluster, follow the instructions in NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation.

  2. Delete the migration-gpu-status ConfigMap.

    1. In the OpenShift Container Platform web console, switch to the Administrator perspective.

    2. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap.

    3. Search for the migration-gpu-status ConfigMap.

    4. Click the action menu (⋮) and select Delete ConfigMap from the list.

      The Delete ConfigMap dialog appears.

    5. Inspect the dialog and confirm that you are deleting the correct ConfigMap.

    6. Click Delete.

  3. Restart the dashboard replicaset.

    1. In the OpenShift Container Platform web console, switch to the Administrator perspective.

    2. Click Workloads > Deployments.

    3. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment.

    4. Search for the rhods-dashboard deployment.

    5. Click the action menu (⋮) and select Restart Rollout from the list.

    6. Wait until the Status column indicates that all pods in the rollout have fully restarted.

Verification
  • The NVIDIA GPU Operator appears on the Operators > Installed Operators page in the OpenShift Container Platform web console.

  • The reset migration-gpu-status instance is present on the Instances tab on the AcceleratorProfile custom resource definition (CRD) details page.

After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerators.
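
For reference, a minimal accelerator profile for NVIDIA GPUs might look like the following sketch. It assumes the nvidia.com/gpu resource name advertised by the NVIDIA GPU Operator and assumes that your GPU nodes are tainted with the same key; the profile name, display name, and description are illustrative:

---
apiVersion: dashboard.opendatahub.io/v1alpha
kind: AcceleratorProfile
metadata:
  name: gpu-profile-nvidia          # illustrative name
spec:
  displayName: NVIDIA GPU
  description: NVIDIA GPU devices exposed by the NVIDIA GPU Operator
  enabled: true
  identifier: nvidia.com/gpu        # resource name advertised by the NVIDIA GPU Operator
  tolerations:
    - effect: NoSchedule
      key: nvidia.com/gpu           # assumes GPU nodes are tainted with this key
      operator: Exists
---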

Intel Gaudi AI Accelerator integration

To accelerate your high-performance deep learning (DL) models, you can integrate Intel Gaudi AI accelerators in Open Data Hub. This allows your data scientists to use Gaudi libraries and software associated with Intel Gaudi AI accelerators from their workbench. Before you can enable Intel Gaudi AI accelerators in Open Data Hub, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment. However, a workbench image for Intel Gaudi accelerators is not included in Open Data Hub by default. Instead, you must create and configure a custom notebook to enable Intel Gaudi AI support.

You can use Intel Gaudi AI accelerators in an Amazon EC2 DL1 instance on OpenShift. Therefore, your OpenShift platform must support EC2 DL1 instances. Before you can use your Intel Gaudi AI accelerators, you must enable them in your OpenShift environment and configure an accelerator profile for each device. When enabled and fully configured with a custom notebook, Intel Gaudi AI accelerators are available to your data scientists when they create a workbench instance or serve a model.

To identify the Intel Gaudi AI accelerators present in your deployment, use the lspci utility. For more information, see lspci(8) - Linux man page.

Important

If the lspci utility indicates that Intel Gaudi AI accelerators are present in your deployment, it does not necessarily mean that the devices are ready to use.

Working with accelerator profiles

To configure accelerators for your data scientists to use in Open Data Hub, you must create an associated accelerator profile. An accelerator profile is a custom resource, based on the AcceleratorProfile custom resource definition (CRD) on OpenShift, that defines the specification of the accelerator. You can create and manage accelerator profiles by selecting Settings > Accelerator profiles on the Open Data Hub dashboard.

For accelerators that are new to your deployment, you must manually configure an accelerator profile for each accelerator. If your deployment contains an accelerator before you upgrade, the associated accelerator profile remains after the upgrade. You can manage the accelerators that appear to your data scientists by assigning specific accelerator profiles to your custom notebook images. This example shows the code for a Habana Gaudi 1 accelerator profile:

---
apiVersion: dashboard.opendatahub.io/v1alpha
kind: AcceleratorProfile
metadata:
  name: hpu-profile-first-gen-gaudi
spec:
  displayName: Habana HPU - 1st Gen Gaudi
  description: First Generation Habana Gaudi device
  enabled: true
  identifier: habana.ai/gaudi
  tolerations:
    - effect: NoSchedule
      key: habana.ai/gaudi
      operator: Exists
---

The accelerator profile code appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD). For more information about accelerator profile attributes, see the following table:

Table 1. Accelerator profile attributes
Attribute     Type      Required   Description
displayName   String    Required   The display name of the accelerator profile.
description   String    Optional   Descriptive text defining the accelerator profile.
identifier    String    Required   A unique identifier defining the accelerator resource.
enabled       Boolean   Required   Determines if the accelerator is visible in Open Data Hub.
tolerations   Array     Optional   The tolerations that can apply to notebooks and serving runtimes that use the accelerator. For more information about the toleration attributes that Open Data Hub supports, see Toleration v1 core. See the example after this table.
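
For example, a tolerations entry that uses every attribute offered by the Add toleration dialog described later in this section, including a toleration period, might look like the following excerpt. This is a sketch only; the key, value, and timeout are illustrative, and tolerationSeconds applies only to the NoExecute effect:

spec:
  tolerations:
    - key: nvidia.com/gpu          # illustrative taint key
      operator: Equal
      value: "true"                # with Equal, the value must match the taint value
      effect: NoExecute
      tolerationSeconds: 300       # pods stay bound for 300 seconds after a matching NoExecute taint appears, then are evicted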

Creating an accelerator profile

To configure accelerators for your data scientists to use in Open Data Hub, you must create an associated accelerator profile.

Prerequisites
  • You have logged in to Open Data Hub.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Accelerator profiles.

    The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.

  2. Click Create accelerator profile.

    The Create accelerator profile dialog appears.

  3. In the Name field, enter a name for the accelerator profile.

  4. In the Identifier field, enter a unique string that identifies the hardware accelerator associated with the accelerator profile.

  5. Optional: In the Description field, enter a description for the accelerator profile.

  6. To enable or disable the accelerator profile immediately after creation, click the toggle in the Enable column.

  7. Optional: Add a toleration to schedule pods with matching taints.

    1. Click Add toleration.

      The Add toleration dialog opens.

    2. From the Operator list, select one of the following options:

      • Equal - The key/value/effect parameters must match. This is the default.

      • Exists - The key/effect parameters must match. You must leave the value parameter blank, which matches any value.

    3. From the Effect list, select one of the following options:

      • None

      • NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.

      • PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.

      • NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.

    4. In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    5. In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    6. In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.

      • Forever - Pods stay permanently bound to a node.

      • Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.

    7. Click Add.

  8. Click Create accelerator profile.

Verification
  • The accelerator profile appears on the Accelerator profiles page.

  • The Accelerator list appears on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server.

  • The accelerator profile appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD).

Updating an accelerator profile

You can update the existing accelerator profiles in your deployment. You might want to change important identifying information, such as the display name, the identifier, or the description.

Prerequisites
  • You have logged in to Open Data Hub.

  • The accelerator profile exists in your deployment.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Accelerator profiles.

    The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.

  2. Click the action menu (⋮) beside the accelerator profile that you want to update and select Edit from the list.

    The Edit accelerator profile dialog opens.

  3. In the Name field, update the accelerator profile name.

  4. In the Identifier field, update the unique string that identifies the hardware accelerator associated with the accelerator profile, if applicable.

  5. Optional: In the Description field, update the description of the accelerator profile.

  6. To enable or disable the accelerator profile, click the toggle in the Enable column.

  7. Optional: Add a toleration to schedule pods with matching taints.

    1. Click Add toleration.

      The Add toleration dialog opens.

    2. From the Operator list, select one of the following options:

      • Equal - The key/value/effect parameters must match. This is the default.

      • Exists - The key/effect parameters must match. You must leave the value parameter blank, which matches any value.

    3. From the Effect list, select one of the following options:

      • None

      • NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.

      • PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.

      • NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.

    4. In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    5. In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

    6. In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition.

      • Forever - Pods stay permanently bound to a node.

      • Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.

    7. Click Add.

  8. If your accelerator profile contains existing tolerations, you can edit them.

    1. Click the action menu (⋮) on the row containing the toleration that you want to edit and select Edit from the list.

    2. Complete the applicable fields to update the details of the toleration.

    3. Click Update.

  9. Click Update accelerator profile.

Verification
  • If your accelerator profile has new identifying information, this information appears in the Accelerator list on the Start a notebook server page.

Deleting an accelerator profile

To discard accelerator profiles that you no longer require, you can delete them so that they do not appear on the dashboard.

Prerequisites
  • You have logged in to Open Data Hub.

  • The accelerator profile that you want to delete exists in your deployment.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Accelerator profiles.

    The Accelerator profiles page appears, displaying existing accelerator profiles.

  2. Click the action menu (⋮) beside the accelerator profile that you want to delete and click Delete.

    The Delete accelerator profile dialog opens.

  3. Enter the name of the accelerator profile in the text field to confirm that you intend to delete it.

  4. Click Delete.

Verification
  • The accelerator profile no longer appears on the Accelerator profiles page.

Viewing accelerator profiles

If you have defined accelerator profiles for Open Data Hub, you can view, enable, and disable them from the Accelerator profiles page.

Prerequisites
  • You have logged in to Open Data Hub.

  • Your deployment contains existing accelerator profiles.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Accelerator profiles.

    The Accelerator profiles page appears, displaying existing accelerator profiles.

  2. Inspect the list of accelerator profiles. To enable or disable an accelerator profile, on the row containing the accelerator profile, click the toggle in the Enable column.

Verification
  • The Accelerator profiles page appears, displaying existing accelerator profiles.

Configuring a recommended accelerator for your notebook images

To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended tag to appear on the dashboard.

Prerequisites
  • You have logged in to OpenShift Container Platform.

  • You have the cluster-admin role in OpenShift Container Platform.

  • You have existing notebook images in your deployment.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Notebook images.

    The Notebook images page appears. Previously imported notebook images are displayed.

  2. Click the action menu (⋮) beside the notebook image that you want to update and select Edit from the list.

    The Update notebook image dialog opens.

  3. From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the notebook image. If the notebook image contains only one accelerator identifier, the identifier name displays by default.

  4. Click Update.

    Note

    If you have already configured an accelerator identifier for a notebook image, you can specify a recommended accelerator for the notebook image by creating an associated accelerator profile. To do this, click Create profile on the row containing the notebook image and complete the relevant fields. If the notebook image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile.

Verification
  • When your data scientists select an accelerator with a specific notebook image, a tag appears next to the corresponding accelerator indicating its compatibility.

Configuring a recommended accelerator for your serving runtimes

To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended accelerator tag for your serving runtimes.

Prerequisites
  • You have logged in to Open Data Hub.

  • If you use Open Data Hub groups, you are part of the admin group (for example, odh-admins) in OpenShift.

Procedure
  1. From the Open Data Hub dashboard, click Settings > Serving runtimes.

    The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled in your Open Data Hub deployment. By default, the OpenVINO Model Server runtime is pre-installed and enabled in Open Data Hub.

  2. To edit the custom runtime that you want to add the recommended accelerator tag to, click the action menu (⋮) and select Edit.

    A page with an embedded YAML editor opens.

    Note
    You cannot directly edit the OpenVINO Model Server runtime that is included in Open Data Hub by default. However, you can clone this runtime and edit the cloned version. You can then add the edited clone as a new, custom runtime. To do this, click the action menu (⋮) beside the OpenVINO Model Server and select Duplicate.
  3. In the editor, enter the YAML code to apply the opendatahub.io/recommended-accelerators annotation. The excerpt in this example shows the annotation to set a recommended tag for an NVIDIA GPU accelerator; a fuller sketch of where this excerpt sits in the runtime resource follows this procedure:

    metadata:
      annotations:
        opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  4. Click Update.
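
For context, the metadata excerpt above sits at the top of the serving runtime resource that you are editing. The following is a minimal sketch, assuming the KServe serving.kserve.io/v1alpha1 ServingRuntime API that Open Data Hub serving runtimes use; the runtime name is illustrative, and the rest of the duplicated runtime specification stays unchanged:

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: my-custom-runtime            # illustrative: the duplicated runtime you are editing
  annotations:
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
# ...the remainder of the runtime specification (spec, containers, and so on) stays as-is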

Verification
  • When your data scientists select an accelerator with a specific serving runtime, a tag appears next to the corresponding accelerator indicating its compatibility.