Upgrading Open Data Hub
Overview of upgrading Open Data Hub
As a cluster administrator, you can configure either automatic or manual upgrades for the Open Data Hub Operator.
-
If you configure automatic upgrades, when a new version of the Open Data Hub Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
-
If you configure manual upgrades, when a new version of the Open Data Hub Operator is available, OLM creates an update request.
A cluster administrator must manually approve the update request to upgrade the Operator to the new version. See Manually approving a pending Operator upgrade for more information. A CLI sketch for checking and approving a pending update follows this list.
-
By default, the Open Data Hub Operator follows a sequential update process. This means that if there are several minor versions between the current version and the version that you plan to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the minor versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.
-
When you upgrade Open Data Hub, the upgrade process automatically uses the values of the previous version's DataScienceCluster object. After the upgrade, you should inspect the default DataScienceCluster object to check and, optionally, update the managementState status of the components.
Note: New components are not automatically added to the DataScienceCluster object during an upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
-
For any components that you update, Open Data Hub initiates a rollout so that all affected pods use the updated image.
-
Workbench images are integrated into the image stream during the upgrade and subsequently appear in the Open Data Hub dashboard.
Note: Workbench images are constructed externally; they are prebuilt images that are updated quarterly and do not change with every Open Data Hub upgrade.
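If you prefer to check and approve updates from the command line, the following sketch shows one possible approach. It assumes the Operator's Subscription is named opendatahub-operator and is installed in the openshift-operators namespace; adjust both values to match your cluster.
# Show whether the approval strategy is Automatic or Manual:
$ oc get subscription opendatahub-operator -n openshift-operators -o jsonpath='{.spec.installPlanApproval}'
# List install plans, then approve a pending one by name:
$ oc get installplan -n openshift-operators
$ oc patch installplan <install-plan-name> -n openshift-operators --type merge -p '{"spec":{"approved":true}}'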
Upgrading Open Data Hub version 2.0 to version 2.2 or later
You can upgrade the Open Data Hub Operator from version 2.0 or 2.1 to version 2.2 or later by using the OpenShift console.
For information about upgrading from version 1 to version 2, see Upgrading Open Data Hub version 1 to version 2.
Upgrading Open Data Hub involves the following tasks:
-
Reviewing and understanding the requirements for upgrading Open Data Hub version 2.
-
Upgrading the Open Data Hub Operator.
-
Reviewing the Open Data Hub components.
Requirements for upgrading Open Data Hub version 2
When upgrading Open Data Hub version 2.0 or 2.1 to version 2.2 or later, you must complete the following tasks.
Check the components in the DataScienceCluster object
When you upgrade to version 2.34, the upgrade process automatically uses the values from the DataScienceCluster object in the previous version.
After the upgrade, you should inspect the 2.34 DataScienceCluster object and optionally update the status of any components as described in Installing Open Data Hub components.
Note: New components are not automatically added to the DataScienceCluster object during an upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
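A quick way to perform this check is to print the spec.components section of the DataScienceCluster object from the CLI. This is a sketch that assumes a single instance; the resource is cluster-scoped, so no namespace is needed.
$ oc get datasciencecluster -o jsonpath='{.items[*].spec.components}'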
Note that Open Data Hub Operator versions 2.2 and later use an upgraded API version for a DataScienceCluster instance, resulting in the following differences.
|  | ODH 2.1 and earlier | ODH 2.2 and later |
|---|---|---|
| API version | v1alpha1 | v1 |
| Enable component | enabled: true | managementState: Managed |
| Disable component | enabled: false | managementState: Removed |
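For example, with the 2.2 and later API you can toggle a component by patching its managementState field, mirroring the $(oc get ...) pattern used elsewhere in this document. The dashboard component shown here is illustrative only.
$ oc patch $(oc get datasciencecluster -oname) --type merge -p '{"spec":{"components":{"dashboard":{"managementState":"Managed"}}}}'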
Migrate from embedded Kueue to Red Hat build of Kueue
The embedded Kueue component for managing distributed workloads is deprecated. Open Data Hub now uses the Red Hat build of Kueue Operator to provide enhanced workload scheduling for distributed training, workbench, and model serving workloads.
Before upgrading Open Data Hub, check if your environment is using the embedded Kueue component by verifying the spec.components.kueue.managementState field in the DataScienceCluster custom resource. If the field is set to Managed, you must complete the migration to the Red Hat build of Kueue Operator to avoid controller conflicts and ensure continued support for queue-based workloads.
This migration requires OpenShift Container Platform 4.18 or later. For more information, see Migrating to the Red Hat build of Kueue Operator.
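To read the field before you upgrade, you can use a jsonpath query such as the following sketch, which assumes a single DataScienceCluster instance:
$ oc get datasciencecluster -o jsonpath='{.items[*].spec.components.kueue.managementState}'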
Update workflows interacting with OdhDashboardConfig resource
Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift Container Platform groups (both administrators and non-administrators) that can access the Open Data Hub dashboard. Starting with Open Data Hub 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig, you must update them to reference the Auth resource instead.
|  | ODH 2.16 and earlier | ODH 2.17 and later |
|---|---|---|
| Resource | OdhDashboardConfig | Auth |
| Admin groups | spec.groupsConfig.adminGroups | spec.adminGroups |
| User groups | spec.groupsConfig.allowedGroups | spec.allowedGroups |
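As an illustration, a GitOps workflow that previously patched OdhDashboardConfig would now patch the Auth resource instead. The fully qualified resource name, the instance name auth, and the group name odh-admins below are assumptions; check your cluster for the actual values.
$ oc patch auth.services.platform.opendatahub.io auth --type merge -p '{"spec":{"adminGroups":["odh-admins"]}}'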
Verify Argo Workflows compatibility
If you use your own Argo Workflows instance for pipelines, verify that the installed version is compatible with this release of Open Data Hub. For details, see Supported Configurations.
Check the status of certificate management
You can use self-signed certificates in Open Data Hub.
After you upgrade, check the management status for Certificate Authority (CA) bundles as described in Understanding how Open Data Hub handles certificates.
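One way to check this from the CLI is to read the trustedCABundle management state from the DSCInitialization object; the field path below is an assumption based on the linked documentation.
$ oc get dscinitialization -o jsonpath='{.items[*].spec.trustedCABundle.managementState}'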
Upgrading the Open Data Hub Operator
Prerequisites
-
You have installed the Open Data Hub Operator.
-
You are using OpenShift Container Platform 4.16 or later.
-
Your OpenShift cluster has a minimum of 16 CPUs and 32GB of memory across all OpenShift worker nodes.
-
You can log in as a user with cluster-admin privileges.
Procedure
-
Log in to the OpenShift Container Platform web console as a user with cluster-admin privileges.
-
Select Operators → Installed Operators, and then click the Open Data Hub Operator.
-
Click the Subscription tab.
-
Under Update channel, click the pencil icon.
-
In the Change Subscription update channel dialog, select fast, and then click Save.
If you configured the Open Data Hub Operator with automatic update approval, the upgrade begins. If you configured the Operator with manual update approval, perform the actions in the next step.
-
To approve a manual update, perform these actions:
-
Next to Upgrade status, click 1 requires approval.
-
Click Preview InstallPlan.
-
Review the manual install plan, and then click Approve.
Verification
-
Select Operators → Installed Operators to verify that the Open Data Hub Operator is listed with the updated version number and Succeeded status. You can also verify this from the CLI, as shown in the sketch after this list.
-
Check the Open Data Hub components. For more information, see Installing Open Data Hub components.
-
Access the Open Data Hub dashboard.
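You can also confirm the upgrade from the CLI by checking the Operator's ClusterServiceVersion; the namespace is an assumption that depends on where you installed the Operator.
$ oc get csv -n openshift-operators | grep -i 'open data hub'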
Upgrading Open Data Hub version 1 to version 2
You can upgrade the Open Data Hub Operator from version 1 to version 2 by using the OpenShift console.
For information about upgrading from version 2.0, see Upgrading Open Data Hub version 2.0 to version 2.2 or later.
Upgrading Open Data Hub involves the following tasks:
-
Reviewing and understanding the requirements for upgrading Open Data Hub version 1.
-
Upgrading the Open Data Hub Operator.
-
Reviewing the Open Data Hub components.
Requirements for upgrading Open Data Hub version 1
This section describes the tasks that you should complete when upgrading Open Data Hub version 1 to version 2.
Note: After you install Open Data Hub 2.34, pipelines created with data science pipelines 1.0 continue to run, but are inaccessible from the Open Data Hub dashboard. If you are a current data science pipelines user, do not install Open Data Hub with data science pipelines 2.0 until you are ready to migrate to the new pipelines solution.
Check the components in the DataScienceCluster object
When you upgrade to version 2.34, the upgrade process automatically uses the values from the DataScienceCluster object in the previous version.
After the upgrade, you should inspect the 2.34 DataScienceCluster object and optionally update the status of any components as described in Installing Open Data Hub components.
Note: New components are not automatically added to the DataScienceCluster object during an upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry.
Recreate existing pipeline runs
When you upgrade to Open Data Hub 2.34, any existing pipeline runs that you created in version 1 continue to refer to the previous version’s image (as expected).
After upgrading, you must delete the pipeline runs (not the pipelines) and create new pipeline runs. The pipeline runs that you create in version 2.34 correctly refer to the version 2.34 image.
For more information about pipeline runs, see Managing pipeline runs.
Address KServe requirements
For KServe (single-model serving platform), you must meet these requirements:
-
Install dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Configuring your model serving platform.
-
After upgrading, you must inspect the default DataScienceCluster object and verify that the value of the managementState field for the kserve component is Managed.
-
In Open Data Hub version 1, the KServe component is a Limited Availability feature. If you enabled the kserve component and created models in version 1, then after you upgrade to version 2.34, you must update some Open Data Hub resources as follows (a verification sketch follows this list):
-
Log in to the OpenShift Container Platform console as a cluster administrator.
-
Update the DSC Initialization resource:
$ oc patch $(oc get dsci -A -oname) --type='json' -p='[{"op": "replace", "path": "/spec/serviceMesh/managementState", "value":"Unmanaged"}]'
-
Update the Data Science Cluster resource:
$ oc patch $(oc get dsc -A -oname) --type='json' -p='[{"op": "replace", "path": "/spec/components/kserve/serving/managementState", "value":"Unmanaged"}]'
-
Update the InferenceServices CRD:
$ oc patch crd inferenceservices.serving.kserve.io --type=json -p='[{"op": "remove", "path": "/spec/conversion"}]'
-
Optionally, restart the Operator pods.
-
-
If you deployed a model by using KServe in Open Data Hub version 1, when you upgrade to version 2.34 the model does not automatically appear in the Open Data Hub dashboard. To update the dashboard view, redeploy the model by using the Open Data Hub dashboard.
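After applying the preceding patches, you can spot-check that the fields were updated. This sketch mirrors the $(oc get ...) pattern used in the patch commands; both queries should print Unmanaged.
$ oc get $(oc get dsci -A -oname) -o jsonpath='{.spec.serviceMesh.managementState}'
$ oc get $(oc get dsc -A -oname) -o jsonpath='{.spec.components.kserve.serving.managementState}'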
Upgrading the Open Data Hub Operator
Prerequisites
-
You have installed the Open Data Hub Operator.
-
You are using OpenShift Container Platform 4.16 or later.
-
Your OpenShift cluster has a minimum of 16 CPUs and 32GB of memory across all OpenShift worker nodes.
-
You can log in as a user with cluster-admin privileges.
Procedure
-
Log in to the OpenShift Container Platform web console as a user with cluster-admin privileges.
-
Select Operators → Installed Operators, and then click the Open Data Hub Operator.
-
Click the Subscription tab.
-
Under Update channel, click the pencil icon.
-
In the Change Subscription update channel dialog, select fast, and then click Save.
If you configured the Open Data Hub Operator with automatic update approval, the upgrade begins. If you configured the Operator with manual update approval, perform the actions in the next step.
-
To approve a manual update, perform these actions:
-
Next to Upgrade status, click 1 requires approval.
-
Click Preview InstallPlan.
-
Review the manual install plan, and then click Approve.
Verification
-
Select Operators → Installed Operators to verify that the Open Data Hub Operator is listed with the updated version number and Succeeded status.
-
Check the Open Data Hub components. For more information, see Installing Open Data Hub components.
-
Access the Open Data Hub dashboard.
Installing Open Data Hub components
You can use the OpenShift web console to install specific components of Open Data Hub on your cluster when the Open Data Hub Operator is already installed on the cluster.
Prerequisites
-
You have installed the Open Data Hub Operator.
-
You can log in as a user with cluster-admin privileges.
-
If you want to use the trustyai component, you must configure workload monitoring as described in Monitoring models on the single-model serving platform.
-
If you want to use the RAG component, your infrastructure must support GPU-enabled instance types, for example, g4dn.xlarge on AWS.
-
If you want to use the kserve or modelmesh components, you must have already installed the Operator or Operators listed for the component in the table after these prerequisites. For information about installing an Operator, see Adding Operators to a cluster.
Important: The multi-model serving platform based on ModelMesh is deprecated. You can continue to deploy models on the multi-model serving platform, but it is recommended that you migrate to the single-model serving platform.
-
If you want to use kserve, you have selected a deployment mode. For more information, see About KServe deployment modes.
-
If you want to use the kueue component to manage workloads, you must install the Red Hat build of Kueue Operator before activating the Kueue integration. For more information, see Configuring workload management with Kueue.
| Component | Required Operators | Catalog |
|---|---|---|
| kserve | Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh Operator, Red Hat Authorino Operator | Red Hat |
| kueue | Red Hat build of Kueue Operator | Red Hat |
| [Deprecated] modelmesh | Prometheus Operator | Community |
| RAG (Llama Stack) | Llama Stack Operator, Node Feature Discovery Operator, NVIDIA GPU Operator | Red Hat |
Procedure
-
Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. If you are performing a developer installation on try.openshift.com, you can log in as the kubeadmin user.
-
Select Operators → Installed Operators, and then click the Open Data Hub Operator.
-
On the Operator details page, click the DSC Initialization tab, and then click Create DSCInitialization.
-
On the Create DSCInitialization page, configure the resource by using the YAML view.
-
If you are using a custom applications namespace, specify the namespace in the spec.applicationsNamespace field.
-
-
Click Create.
-
Wait until the status of the DSCInitialization is Ready.
-
Click the Data Science Cluster tab, and then click Create DataScienceCluster.
-
On the Create DataScienceCluster page, configure the resource by using the YAML view. A CLI alternative appears after this procedure.
-
In the spec.components section, for each component shown, set the value of the managementState field to Managed, Removed, or Unmanaged.
-
If you are using a custom workbench namespace, specify the namespace in the spec.workbenches.workbenchNamespace field.
-
-
Click Create.
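If you prefer to create the resource from the CLI instead of the console YAML view, the following is a minimal sketch of an equivalent oc apply command. The instance name default-dsc and the components shown are illustrative; list each component you need and set its managementState accordingly.
$ oc apply -f - <<EOF
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed    # enable the dashboard
    workbenches:
      managementState: Managed    # enable workbenches
    modelmeshserving:
      managementState: Removed    # keep the deprecated multi-model platform disabled
EOF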
Verification
-
Select Home → Projects, and then select the opendatahub project.
-
On the Project details page, click the Workloads tab and confirm that the Open Data Hub core components are running.
Note: In the Open Data Hub dashboard, users can view the list of the installed Open Data Hub components, their corresponding source (upstream) components, and the versions of the installed components, as described in Viewing installed Open Data Hub components.