Example OdhDashboardConfig file showing the default values:

```yaml
apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config
spec:
  dashboardConfig:
    enablement: true
    disableBYONImageStream: false
    disableClusterManager: false
    disableISVBadges: false
    disableInfo: false
    disableSupport: false
    disableTracking: true
    disableProjects: true
    disablePipelines: true
    disableModelServing: true
    disableProjectSharing: true
    disableCustomServingRuntimes: false
    disableAcceleratorProfiles: true
    modelMetricsNamespace: ''
    disablePerformanceMetrics: false
  notebookController:
    enabled: true
  notebookSizes:
    - name: Small
      resources:
        limits:
          cpu: '2'
          memory: 2Gi
        requests:
          cpu: '1'
          memory: 1Gi
    - name: Medium
      resources:
        limits:
          cpu: '4'
          memory: 4Gi
        requests:
          cpu: '2'
          memory: 2Gi
    - name: Large
      resources:
        limits:
          cpu: '8'
          memory: 8Gi
        requests:
          cpu: '4'
          memory: 4Gi
  modelServerSizes:
    - name: Small
      resources:
        limits:
          cpu: '2'
          memory: 8Gi
        requests:
          cpu: '1'
          memory: 4Gi
    - name: Medium
      resources:
        limits:
          cpu: '8'
          memory: 10Gi
        requests:
          cpu: '4'
          memory: 8Gi
    - name: Large
      resources:
        limits:
          cpu: '10'
          memory: 20Gi
        requests:
          cpu: '6'
          memory: 16Gi
  groupsConfig:
    adminGroups: 'odh-admins'
    allowedGroups: 'system:authenticated'
  templateOrder:
    - 'ovms'
  templateDisablement:
    - 'ovms'
```
Managing resources
As an Open Data Hub administrator, you can manage the following resources:
- The dashboard interface, including the visibility of navigation menu options
- Applications that show in the dashboard
- Cluster resources to support compute-intensive data science work
- Jupyter notebook servers
- Data storage backup
Customizing the dashboard
The Open Data Hub dashboard provides features that are designed to work for most scenarios. These features are configured in the OdhDashboardConfig
custom resource (CR) file.
To see a description of the options in the Open Data Hub dashboard configuration file, see Dashboard configuration options.
As an administrator, you can customize the interface of the dashboard, for example to show or hide some of the dashboard navigation menu options. To change the default settings of the dashboard, edit the OdhDashboardConfig
custom resource (CR) file as described in Editing the dashboard configuration file.
Editing the dashboard configuration file
As an administrator, you can customize the interface of the dashboard by editing the dashboard configuration file.
Prerequisites
- You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
- Log in to the OpenShift Container Platform console as a cluster administrator.
- In the Administrator perspective, click Home → API Explorer.
- In the search bar, enter OdhDashboardConfig to filter by kind.
- Click the OdhDashboardConfig custom resource (CR) to open the resource details page.
- Select the redhat-ods-applications project from the Project list.
- Click the Instances tab.
- Click the odh-dashboard-config instance to open the details page.
- Click the YAML tab. An example OdhDashboardConfig file showing the default values appears at the beginning of this document.
- Edit the values of the options that you want to change.
- Click Save to apply your changes, and then click Reload to make sure that your changes are synced to the cluster.

Verification
Log in to Open Data Hub and verify that your dashboard configurations apply.
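For example, to hide the Model Serving and Data Science Pipelines navigation menu options, you change only the corresponding keys under spec.dashboardConfig. The following is a minimal sketch; all other fields keep their existing values:

```yaml
# Fragment of the odh-dashboard-config instance; only the edited keys are shown.
spec:
  dashboardConfig:
    disableModelServing: true   # hides the Model Serving menu option
    disablePipelines: true      # hides the Data Science Pipelines menu option
```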
Dashboard configuration options
The Open Data Hub dashboard includes a set of core features enabled by default that are designed to work for most scenarios. Administrators can configure the Open Data Hub dashboard from the OdhDashboardConfig
custom resource (CR) in OpenShift Container Platform.
Feature | Default | Description
------- | ------- | -----------
enablement | true | Enables admin users to add applications to the Open Data Hub dashboard Applications → Enabled page. To disable this ability, set the value to false.
disableInfo | false | On the Applications → Explore page, when a user clicks an application tile, an information panel opens with more details about the application. To disable the information panel for all applications on the Applications → Explore page, set the value to true.
disableSupport | false | Shows the Support menu option when a user clicks the Help icon in the dashboard toolbar. To hide this menu option, set the value to true.
disableClusterManager | false | Shows the Settings → Cluster settings option in the dashboard navigation menu. To hide this menu option, set the value to true.
disableTracking | true | Allows Red Hat to collect data about Open Data Hub usage in your cluster. To enable data collection, set the value to false.
disableBYONImageStream | false | Shows the Settings → Notebook images option in the dashboard navigation menu. To hide this menu option, set the value to true.
disableISVBadges | false | Shows the label on a tile that indicates whether the application is "Red Hat managed", "Partner managed", or "Self-managed". To hide these labels, set the value to true.
disableUserManagement | false | Shows the Settings → User management option in the dashboard navigation menu. To hide this menu option, set the value to true.
disableProjects | true | Shows the Data Science Projects option in the dashboard navigation menu. To hide this menu option, set the value to true.
disablePipelines | true | Shows the Data Science Pipelines option in the dashboard navigation menu. To hide this menu option, set the value to true.
disableModelServing | true | Shows the Model Serving option in the dashboard navigation menu and in the list of components for data science projects. To hide Model Serving from the dashboard navigation menu and from the list of components for data science projects, set the value to true.
disableProjectSharing | true | Allows users to share access to their data science projects with other users. To prevent users from sharing data science projects, set the value to true.
disableCustomServingRuntimes | false | Shows the Serving runtimes option in the dashboard navigation menu. To hide this menu option, set the value to true.
disableKServe | false | Enables the ability to select KServe as a serving platform. To disable this ability, set the value to true.
disableModelMesh | false | Enables the ability to select ModelMesh as a serving platform. To disable this ability, set the value to true.
disableAcceleratorProfiles | true | Shows the Settings → Accelerator profiles option in the dashboard navigation menu. To hide this menu option, set the value to true.
modelMetricsNamespace | '' | Specifies the namespace in which the Prometheus Operator for model serving metrics is installed.
disablePerformanceMetrics | false | Shows the Endpoint Performance tab on the Model Serving page. To hide this tab, set the value to true.
notebookController | enabled: true | Controls the Notebook Controller options, such as whether it is enabled in the dashboard and which parts are visible.
notebookSizes | See the example file | Allows you to customize names and resources for notebooks. The Kubernetes-style sizes are shown in the dropdown menu that appears when spawning notebooks with the Notebook Controller. Note: These sizes must follow conventions. For example, requests must be smaller than limits.
modelServerSizes | See the example file | Allows you to customize names and resources for model servers.
groupsConfig | See the example file | Controls access to dashboard features, such as the spawner for allowed users and the cluster settings UI for admin users.
templateOrder | See the example file | Specifies the order of custom Serving Runtime templates. When the user creates a new template, it is added to this list.
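As an illustration of the notebookSizes convention noted in the table (requests must be smaller than limits), an additional size can be appended to the spec.notebookSizes list. The X-Large name and values here are hypothetical:

```yaml
# Hypothetical extra entry for spec.notebookSizes; each request must not exceed its limit.
spec:
  notebookSizes:
    - name: X-Large
      resources:
        limits:
          cpu: '16'
          memory: 32Gi
        requests:
          cpu: '8'
          memory: 16Gi
```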
Managing applications that show in the dashboard
Adding an application to the dashboard
If you have installed an application in your OpenShift Container Platform cluster, you can add a tile for that application to the Open Data Hub dashboard (the Applications → Enabled page) to make it accessible for Open Data Hub users.
Prerequisites
- You have cluster administrator privileges for your OpenShift Container Platform cluster.
- The dashboard configuration enablement option is set to true (the default). Note that an admin user can disable this ability, as described in Preventing users from adding applications to the dashboard.

Procedure
- Log in to the OpenShift Container Platform console as a cluster administrator.
- In the Administrator perspective, click Home → API Explorer.
- On the API Explorer page, search for the OdhApplication kind.
- Click the OdhApplication kind to open the resource details page.
- On the OdhApplication details page, select the redhat-ods-applications project from the Project list.
- Click the Instances tab.
- Click Create OdhApplication.
- On the Create OdhApplication page, copy the following code and paste it into the YAML editor.
```yaml
apiVersion: dashboard.opendatahub.io/v1
kind: OdhApplication
metadata:
  name: examplename
  namespace: redhat-ods-applications
  labels:
    app: odh-dashboard
    app.kubernetes.io/part-of: odh-dashboard
spec:
  enable:
    validationConfigMap: examplename-enable
  img: >-
    <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
      <path d="path data" fill="#ee0000"/>
    </svg>
  getStartedLink: 'https://example.org/docs/quickstart.html'
  route: exampleroutename
  routeNamespace: examplenamespace
  displayName: Example Name
  kfdefApplications: []
  support: third party support
  csvName: ''
  provider: example
  docsLink: 'https://example.org/docs/index.html'
  quickStart: ''
  getStartedMarkDown: >-
    # Example

    Enter text for the information panel.
  description: >-
    Enter summary text for the tile.
  category: Self-managed | Partner managed | Red Hat managed
```
- Modify the parameters in the code for your application.
  Tip: To see example YAML files, click Home → API Explorer, select OdhApplication, click the Instances tab, select an instance, and then click the YAML tab.
- Click Create. The application details page appears.
Verification
- Log in to Open Data Hub.
- In the left menu, click Applications → Explore.
- Locate the new tile for your application and click it.
- In the information pane for the application, click Enable.
- In the left menu of the Open Data Hub dashboard, click Applications → Enabled and verify that your application is available.
Preventing users from adding applications to the dashboard
By default, admin users are allowed to add applications to the Open Data Hub dashboard Applications → Enabled page.
You can disable the ability for admin users to add applications to the dashboard.
Note: The Jupyter tile is enabled by default. To disable it, see Hiding the default Jupyter application.
Prerequisites
- You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
- Log in to the OpenShift Container Platform console as a cluster administrator.
- Open the dashboard configuration file:
  - In the Administrator perspective, click Home → API Explorer.
  - In the search bar, enter OdhDashboardConfig to filter by kind.
  - Click the OdhDashboardConfig custom resource (CR) to open the resource details page.
  - Select the redhat-ods-applications project from the Project list.
  - Click the Instances tab.
  - Click the odh-dashboard-config instance to open the details page.
  - Click the YAML tab.
- In the spec.dashboardConfig section, set the value of enablement to false to disable the ability for dashboard users to add applications to the dashboard.
- Click Save to apply your changes, and then click Reload to make sure that your changes are synced to the cluster.

Verification
Open the Open Data Hub dashboard Applications → Enabled page.
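The change described above amounts to the following fragment of the odh-dashboard-config instance; only the edited key is shown:

```yaml
spec:
  dashboardConfig:
    enablement: false   # admin users can no longer add applications to the dashboard
```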
Hiding the default Jupyter application
The Open Data Hub dashboard includes Jupyter as an enabled application by default.
To hide the Jupyter tile from the list of Enabled applications, edit the dashboard configuration file.
Prerequisites
- You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
- Log in to the OpenShift Container Platform console as a cluster administrator.
- Open the dashboard configuration file:
  - In the Administrator perspective, click Home → API Explorer.
  - In the search bar, enter OdhDashboardConfig to filter by kind.
  - Click the OdhDashboardConfig custom resource (CR) to open the resource details page.
  - Select the redhat-ods-applications project from the Project list.
  - Click the Instances tab.
  - Click the odh-dashboard-config instance to open the details page.
  - Click the YAML tab.
- In the spec.notebookController section, set the value of enabled to false to hide the Jupyter tile from the list of Enabled applications.
- Click Save to apply your changes, and then click Reload to make sure that your changes are synced to the cluster.

Verification
In the Open Data Hub dashboard, select Applications → Enabled. You should not see the Jupyter tile.
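After the edit described above, the spec.notebookController section looks like the following fragment; other fields remain unchanged:

```yaml
spec:
  notebookController:
    enabled: false   # hides the Jupyter tile on the Applications → Enabled page
```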
Managing cluster resources
Configuring the default PVC size for your cluster
To configure how resources are claimed within your Open Data Hub cluster, you can change the default size of the cluster’s persistent volume claim (PVC) to ensure that the storage requested matches your common storage workflow. PVCs are requests for resources in your cluster and also act as claim checks to the resource.
Prerequisites
- You have logged in to Open Data Hub.

Note: Changing the PVC setting restarts the Jupyter pod and makes Jupyter unavailable for up to 30 seconds. As a workaround, it is recommended that you perform this action outside of your organization’s typical working day.

Procedure
- From the Open Data Hub dashboard, click Settings → Cluster settings.
- Under PVC size, enter a new size in gibibytes. The minimum size is 1 GiB, and the maximum size is 16384 GiB.
- Click Save changes.

Verification
- New PVCs are created with the default storage size that you configured.
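The default PVC size set on this page is typically stored in the OdhDashboardConfig custom resource. The exact field location below is an assumption; verify it against the CR in your deployment:

```yaml
# Assumed location of the default PVC size in OdhDashboardConfig (verify in your cluster).
spec:
  notebookController:
    enabled: true
    pvcSize: 40Gi   # hypothetical new default of 40 GiB
```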
Restoring the default PVC size for your cluster
To change the size of resources utilized within your Open Data Hub cluster, you can restore the default size of your cluster’s persistent volume claim (PVC).
Prerequisites
- You have logged in to Open Data Hub.
- You are part of the administrator group for Open Data Hub in OpenShift Container Platform.

Procedure
- From the Open Data Hub dashboard, click Settings → Cluster settings.
- Click Restore Default to restore the default PVC size of 20 GiB.
- Click Save changes.

Verification
- New PVCs are created with the default storage size of 20 GiB.
Overview of accelerators
If you work with large data sets, you can use accelerators to optimize the performance of your data science models in Open Data Hub. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in Open Data Hub to assist your data scientists in the following tasks:
- Natural language processing (NLP)
- Inference
- Training deep neural networks
- Data cleansing and data processing

Open Data Hub supports the following accelerators:
- NVIDIA graphics processing units (GPUs)
  - To use compute-heavy workloads in your models, you can enable NVIDIA graphics processing units (GPUs) in Open Data Hub.
  - To enable GPUs on OpenShift, you must install the NVIDIA GPU Operator.
- Habana Gaudi devices (HPUs)
  - Habana, an Intel company, provides hardware accelerators intended for deep learning workloads. You can use the Habana libraries and software associated with Habana Gaudi devices available from your notebook.
  - Before you can successfully enable Habana Gaudi devices on Open Data Hub, you must install the necessary dependencies and version 1.10 of the HabanaAI Operator. For more information about how to enable your OpenShift environment for Habana Gaudi devices, see HabanaAI Operator for OpenShift.
  - You can enable Habana Gaudi devices on-premises or with AWS DL1 compute nodes on an AWS instance.
Before you can use an accelerator in Open Data Hub, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for the accelerator in context. You can create an accelerator profile from the Settings → Accelerator profiles page on the Open Data Hub dashboard. If your deployment contains existing accelerators that had associated accelerator profiles already configured, an accelerator profile is automatically created after you upgrade to the latest version of Open Data Hub.
Enabling GPU support in Open Data Hub
Optionally, to ensure that your data scientists can use compute-heavy workloads in their models, you can enable graphics processing units (GPUs) in Open Data Hub.
Prerequisites
- You have logged in to your OpenShift Container Platform cluster.
- You have the cluster-admin role in your OpenShift Container Platform cluster.

Procedure
- To enable GPU support on an OpenShift cluster, follow the instructions in NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation.
- Delete the migration-gpu-status ConfigMap:
  - In the OpenShift Container Platform web console, switch to the Administrator perspective.
  - Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap.
  - Search for the migration-gpu-status ConfigMap.
  - Click the action menu (⋮) and select Delete ConfigMap from the list.
    The Delete ConfigMap dialog appears.
  - Inspect the dialog and confirm that you are deleting the correct ConfigMap.
  - Click Delete.
- Restart the dashboard replica set:
  - In the OpenShift Container Platform web console, switch to the Administrator perspective.
  - Click Workloads → Deployments.
  - Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment.
  - Search for the rhods-dashboard deployment.
  - Click the action menu (⋮) and select Restart Rollout from the list.
  - Wait until the Status column indicates that all pods in the rollout have fully restarted.

Verification
- The NVIDIA GPU Operator appears on the Operators → Installed Operators page in the OpenShift Container Platform web console.
- The reset migration-gpu-status instance is present on the Instances tab of the AcceleratorProfile custom resource definition (CRD) details page.
After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerator profiles.
Enabling Habana Gaudi devices
Before you can use Habana Gaudi devices in Open Data Hub, you must install the necessary dependencies and deploy the HabanaAI Operator.
Prerequisites
- You have logged in to OpenShift Container Platform.
- You have the cluster-admin role in OpenShift Container Platform.

Procedure
- To enable Habana Gaudi devices in Open Data Hub, follow the instructions at HabanaAI Operator for OpenShift.
- From the Open Data Hub dashboard, click Settings → Accelerator profiles.
  The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column.
- Click Create accelerator profile.
  The Create accelerator profile dialog opens.
- In the Name field, enter a name for the Habana Gaudi device.
- In the Identifier field, enter a unique string that identifies the Habana Gaudi device, for example, habana.ai/gaudi.
- Optional: In the Description field, enter a description for the Habana Gaudi device.
- To enable or disable the accelerator profile for the Habana Gaudi device immediately after creation, click the toggle in the Enable column.
- Optional: Add a toleration to schedule pods with matching taints.
  - Click Add toleration.
    The Add toleration dialog opens.
  - From the Operator list, select one of the following options:
    - Equal - The key/value/effect parameters must match. This is the default.
    - Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any.
  - From the Effect list, select one of the following options:
    - None
    - NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.
    - PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain.
    - NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed.
  - In the Key field, enter the toleration key habana.ai/gaudi. The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.
  - In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.
  - In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition:
    - Forever - Pods stay permanently bound to a node.
    - Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition.
  - Click Add.
- Click Create accelerator profile.

Verification
- From the Administrator perspective, the following Operators appear on the Operators → Installed Operators page:
  - HabanaAI
  - Node Feature Discovery (NFD)
  - Kernel Module Management (KMM)
- The Accelerator list displays the Habana Gaudi accelerator on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server.
- The accelerator profile appears on the Accelerator profiles page.
- The accelerator profile appears on the Instances tab of the details page for the AcceleratorProfile custom resource definition (CRD).
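Creating a profile in the dialog results in an AcceleratorProfile custom resource. The following sketch shows roughly what a profile for a Habana Gaudi device might look like; the name is hypothetical and the field layout is based on the dashboard.opendatahub.io/v1 API, so compare it against an instance in your cluster:

```yaml
apiVersion: dashboard.opendatahub.io/v1
kind: AcceleratorProfile
metadata:
  name: habana-gaudi            # hypothetical name
  namespace: redhat-ods-applications
spec:
  displayName: Habana Gaudi
  enabled: true
  identifier: habana.ai/gaudi   # resource name requested by notebook pods
  tolerations:
    - key: habana.ai/gaudi
      operator: Exists
      effect: NoSchedule
```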
Allocating additional resources to Open Data Hub users
As a cluster administrator, you can allocate additional resources to a cluster to support compute-intensive data science work. This support includes increasing the number of nodes in the cluster and changing the cluster’s allocated machine pool.
For more information about allocating additional resources to an OpenShift Container Platform cluster, see Manually scaling a compute machine set.
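In practice, scaling a compute machine set comes down to raising its replica count. The machine set name below is hypothetical; list the real names in the openshift-machine-api namespace first:

```yaml
# Fragment of a MachineSet in the openshift-machine-api namespace (name is hypothetical).
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 3   # increase to add worker nodes for data science workloads
```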
Managing Jupyter notebook servers
Accessing the Jupyter administration interface
You can use the Jupyter administration interface to control notebook servers in your Open Data Hub environment.
Prerequisites
- You are part of the OpenShift Container Platform administrator group.

Procedure
- To access the Jupyter administration interface from Open Data Hub, perform the following actions:
  - In Open Data Hub, in the Applications section of the left menu, click Enabled.
  - Locate the Jupyter tile and click Launch application.
  - On the page that opens when you launch Jupyter, click the Administration tab.
    The Administration page opens.
- To access the Jupyter administration interface from JupyterLab, perform the following actions:
  - Click File → Hub Control Panel.
  - On the page that opens in Open Data Hub, click the Administration tab.
    The Administration page opens.

Verification
- You can see the Jupyter administration interface.
Starting notebook servers owned by other users
Administrators can start a notebook server for another existing user from the Jupyter administration interface.
Prerequisites
- You are part of the OpenShift Container Platform administrator group, which requires the cluster-admin role on OpenShift Container Platform. For more information, see Creating a cluster admin.
- You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.

Procedure
- On the page that opens when you launch Jupyter, click the Administration tab.
- On the Administration tab, perform the following actions:
  - In the Users section, locate the user whose notebook server you want to start.
  - Click Start server beside the relevant user.
  - Complete the Start a notebook server page.
  - Optional: Select the Start server in current tab checkbox if necessary.
  - Click Start server.
    After the server starts, you see one of the following behaviors:
    - If you previously selected the Start server in current tab checkbox, the JupyterLab interface opens in the current tab of your web browser.
    - If you did not previously select the Start server in current tab checkbox, the Starting server dialog box prompts you to open the server in a new browser tab or in the current tab.
    The JupyterLab interface opens according to your selection.

Verification
- The JupyterLab interface opens.
Accessing notebook servers owned by other users
Administrators can access notebook servers that are owned by other users to correct configuration errors or to help them troubleshoot problems with their environment.
Prerequisites
- You are part of the OpenShift Container Platform administrator group, which requires the cluster-admin role on OpenShift Container Platform. For more information, see Creating a cluster admin.
- You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.
- The notebook server that you want to access is running.

Procedure
- On the page that opens when you launch Jupyter, click the Administration tab.
- On the Administration page, perform the following actions:
  - In the Users section, locate the user that the notebook server belongs to.
  - Click View server beside the relevant user.
  - On the Notebook server control panel page, click Access notebook server.

Verification
- The user’s notebook server opens in JupyterLab.
Stopping notebook servers owned by other users
Administrators can stop notebook servers that are owned by other users to reduce resource consumption on the cluster, or as part of removing a user and their resources from the cluster.
Prerequisites
- If you are using specialized Open Data Hub groups, you are part of the administrator group (for example, odh-admins). If you are not using specialized groups, you are part of the OpenShift Container Platform administrator group.
- You have launched the Jupyter application, as described in Launching Jupyter and starting a notebook server.
- The notebook server that you want to stop is running.

Procedure
- On the page that opens when you launch Jupyter, click the Administration tab.
- Stop one or more servers.
  - If you want to stop one or more specific servers, perform the following actions:
    - In the Users section, locate the user that the notebook server belongs to.
    - To stop the notebook server, perform one of the following actions:
      - Click the action menu (⋮) beside the relevant user and select Stop server.
      - Click View server beside the relevant user and then click Stop notebook server.
      The Stop server dialog box appears.
    - Click Stop server.
  - If you want to stop all servers, perform the following actions:
    - Click the Stop all servers button.
    - Click OK to confirm stopping all servers.

Verification
- The Stop server link beside each server changes to a Start server link when the notebook server has stopped.
Stopping idle notebooks
You can reduce resource usage in your Open Data Hub deployment by stopping notebook servers that have been idle (without logged in users) for a period of time. This is useful when resource demand in the cluster is high. By default, idle notebooks are not stopped after a specific time limit.
Note: If you have configured your cluster settings to disconnect all users from a cluster after a specified time limit, then this setting takes precedence over the idle notebook time limit. Users are logged out of the cluster when their session duration reaches the cluster-wide time limit.
Prerequisites
- You have logged in to Open Data Hub.
- You are part of the administrator group for Open Data Hub in OpenShift Container Platform.

Procedure
- From the Open Data Hub dashboard, click Settings → Cluster settings.
- Under Stop idle notebooks, select Stop idle notebooks after.
- Enter a time limit, in hours and minutes, for when idle notebooks are stopped.
- Click Save changes.
Verification
- The notebook-controller-culler-config ConfigMap, located in the redhat-ods-applications project on the Workloads → ConfigMaps page, contains the following culling configuration settings:
  - ENABLE_CULLING: Specifies whether the culling feature is enabled or disabled (false by default).
  - IDLENESS_CHECK_PERIOD: The polling frequency, in minutes, for checking a notebook’s last known activity.
  - CULL_IDLE_TIME: The maximum time, in minutes, that a notebook can be inactive before it is scaled to zero.
- Idle notebooks stop at the time limit that you set.
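Put together, a culler configuration that stops notebooks after four hours of inactivity might look like the following ConfigMap; the specific values are examples only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: notebook-controller-culler-config
  namespace: redhat-ods-applications
data:
  ENABLE_CULLING: "true"       # turn the culling feature on
  IDLENESS_CHECK_PERIOD: "1"   # check notebook activity every minute (example value)
  CULL_IDLE_TIME: "240"        # stop notebooks idle for 240 minutes (example value)
```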
Adding notebook pod tolerations
If you want to dedicate certain machine pools to only running notebook pods, you can allow notebook pods to be scheduled on specific nodes by adding a toleration. Taints and tolerations allow a node to control which pods should (or should not) be scheduled on them. For more information, see Understanding taints and tolerations.
This capability is useful if you want to make sure that notebook servers are placed on nodes that can handle their needs. By preventing other workloads from running on these specific nodes, you can ensure that the necessary resources are available to users who need to work with large notebook sizes.
Prerequisites
- You have logged in to Open Data Hub.
- You are part of the administrator group for Open Data Hub in OpenShift Container Platform.
- You are familiar with OpenShift Container Platform taints and tolerations, as described in Understanding taints and tolerations.

Procedure
- From the Open Data Hub dashboard, click Settings → Cluster settings.
- Under Notebook pod tolerations, select Add a toleration to notebook pods to allow them to be scheduled to tainted nodes.
- In the Toleration key for notebook pods field, enter a toleration key, for example, notebooks-only. The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.
- Click Save changes. The toleration key is applied to new notebook pods when they are created.
  For existing notebook pods, the toleration key is applied when the notebook pods are restarted. If you are using Jupyter, see Updating notebook server settings by restarting your server. If you are using a workbench in a data science project, see Starting a workbench.
- In OpenShift Container Platform, add a matching taint key (with any value) to the machine pools that you want to dedicate to notebooks. For more information, see Controlling pod placement using node taints.

Verification
- In the OpenShift Container Platform console, for a pod that is running, click Workloads → Pods. For a pod that is stopped, click Workloads → StatefulSets.
- Search for your workbench pod name, and then click the name to open the pod details page.
- Confirm that the assigned Node and Tolerations are correct.
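For reference, the toleration that ends up on a notebook pod, together with a matching node taint, might look like the following sketch. The notebooks-only key is the example from the procedure above; the operator and effect values are assumptions to verify against your own pods:

```yaml
# Toleration as it might appear in a notebook pod spec (verify against your pods).
tolerations:
  - key: notebooks-only
    operator: Exists
    effect: NoSchedule
# A matching taint on a dedicated node could be, for example:
# taints:
#   - key: notebooks-only
#     value: "true"
#     effect: NoSchedule
```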
Configuring a custom notebook image
You can configure custom notebook images that cater to your project’s specific requirements. From the Notebook images page, you can enable or disable a previously imported notebook image and create an accelerator profile that is recommended for an existing notebook image.
Prerequisites
- You have logged in to Open Data Hub.
- Your custom notebook image exists in an image registry and is accessible.
- You can access the Settings → Notebook images dashboard navigation menu option.

Procedure
- From the Open Data Hub dashboard, click Settings → Notebook images.
  The Notebook images page appears, displaying previously imported notebook images. To enable or disable a previously imported notebook image, on the row containing the relevant notebook image, click the toggle in the Enable column.
  Note: If you have already configured an accelerator identifier for a notebook image, you can specify a recommended accelerator for the notebook image by creating an associated accelerator profile. To do this, click Create profile on the row containing the notebook image and complete the relevant fields. If the notebook image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile.
- Click Import new image. Alternatively, if no previously imported images were found, click Import image.
  The Import Notebook images dialog appears.
- In the Image location field, enter the URL of the repository containing the notebook image, for example, quay.io/my-repo/my-image:tag, quay.io/my-repo/my-image@sha256:xxxxxxxxxxxxx, or docker.io/my-repo/my-image:tag.
- In the Name field, enter an appropriate name for the notebook image.
- Optional: In the Description field, enter a description for the notebook image.
- Optional: From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the notebook image. If the notebook image contains only one accelerator identifier, the identifier name displays by default.
- Optional: Add software to the notebook image. After the import has completed, the software is added to the notebook image’s metadata and displayed on the Jupyter server creation page.
  - Click the Software tab.
  - Click the Add software button.
  - Click Edit.
  - Enter the Software name.
  - Enter the software Version.
  - Click Confirm to confirm your entry.
  - To add additional software, click Add software, complete the relevant fields, and confirm your entry.
- Optional: Add packages to the notebook image. After the import has completed, the packages are added to the notebook image’s metadata and displayed on the Jupyter server creation page.
  - Click the Packages tab.
  - Click the Add package button.
  - Click Edit.
  - Enter the Package name.
  - Enter the package Version.
  - Click Confirm to confirm your entry.
  - To add an additional package, click Add package, complete the relevant fields, and confirm your entry.
- Click Import.

Verification
- The notebook image that you imported is displayed in the table on the Notebook images page.
- Your custom notebook image is available for selection on the Start a notebook server page in Jupyter.
Backing up storage data
It is a best practice to back up the data on your persistent volume claims (PVCs) regularly.
Backing up your data is particularly important before you delete a user and before you uninstall Open Data Hub, as all PVCs are deleted when Open Data Hub is uninstalled.
See the documentation for your cluster platform for more information about backing up your PVCs.