
Upgrading Open Data Hub

Overview of upgrading Open Data Hub

As a cluster administrator, you can configure either automatic or manual upgrades for the Open Data Hub Operator.

  • If you configure automatic upgrades, when a new version of the Open Data Hub Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

  • If you configure manual upgrades, when a new version of the Open Data Hub Operator is available, OLM creates an update request.

    A cluster administrator must manually approve the update request to update the Operator to the new version. See Manually approving a pending Operator upgrade for more information about approving a pending Operator upgrade.

  • By default, the Open Data Hub Operator follows a sequential update process. This means that if there are several minor versions between the current version and the version that you plan to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the minor versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version.

  • When you upgrade Open Data Hub, the upgrade process automatically uses the values of the previous version’s DataScienceCluster object. After the upgrade, you should inspect the default DataScienceCluster object to check and optionally update the managementState status of the components.

  • For any components that you update, Open Data Hub initiates a rollout that affects all pods to use the updated image.

  • Notebook images are integrated into the image stream during the upgrade and subsequently appear in the Open Data Hub dashboard.

    Note
    Notebook images are constructed externally; they are prebuilt images that undergo quarterly changes and they do not change with every Open Data Hub upgrade.

Requirements for upgrading Open Data Hub

This section describes the tasks that you should complete when upgrading Open Data Hub.

Check the components in the DataScienceCluster object

When you upgrade to version 2, the upgrade process automatically uses the values from the DataScienceCluster object in the previous version.

After the upgrade, you should inspect the default DataScienceCluster object and optionally update the managementState of any components, as described in Installing Open Data Hub components.
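
If you prefer to check the component states from the command line, the following sketch shows one way to do it. It assumes the DataScienceCluster instance is named default-dsc (the default name used elsewhere in this guide) and uses the dashboard component as an example:

    # List DataScienceCluster instances and review the full component specification
    $ oc get dsc
    $ oc get dsc default-dsc -o yaml

    # Print the managementState of a single component, for example the dashboard
    $ oc get dsc default-dsc -o jsonpath='{.spec.components.dashboard.managementState}'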

Recreate existing pipeline runs

When you upgrade to version 2, any existing pipeline runs that you created in version 1 continue to refer to the previous version’s image (as expected).

You must delete the pipeline runs (not the pipelines) and create new pipeline runs. The pipeline runs that you create in version 2 correctly refer to the version 2 image.

For more information about pipeline runs, see Managing pipeline runs.

Address KServe requirements

For KServe (single-model serving platform), you must meet these requirements:

  • Install dependent Operators, including the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators. For more information, see Serving large models.

  • After the upgrade, you must inspect the default DataScienceCluster object and verify that the value of the managementState field for the kserve component is Managed.

  • In Open Data Hub version 1, the KServe component is a Limited Availability feature. If you enabled the kserve component and created models in version 1, then after you upgrade to version 2, you must update some Open Data Hub resources as follows:

    1. Log in as an admin user to the OpenShift Container Platform cluster where Open Data Hub 2 is installed:

      $ oc login
    2. Update the DSC Initialization resource:

      $ oc patch $(oc get dsci -A -oname) --type='json' -p='[{"op": "replace", "path": "/spec/serviceMesh/managementState", "value":"Unmanaged"}]'
    3. Update the Data Science Cluster resource:

      $ oc patch $(oc get dsc -A -oname) --type='json' -p='[{"op": "replace", "path": "/spec/components/kserve/serving/managementState", "value":"Unmanaged"}]'
    4. Update the InferenceServices CRD:

      $ oc patch crd inferenceservices.serving.kserve.io --type=json -p='[{"op": "remove", "path": "/spec/conversion"}]'
    5. Optionally, restart the Operator pod. (A sketch for verifying these patches appears after this list.)

  • If you deployed a model by using KServe in Open Data Hub version 1, when you upgrade to version 2 the model does not automatically appear in the Open Data Hub dashboard. To update the dashboard view, redeploy the model by using the Open Data Hub dashboard.
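
As a quick check that the patches from the preceding steps took effect, you can read back the fields that were modified. This is only a sketch and assumes a single DSCInitialization and DataScienceCluster instance on the cluster; both commands should print Unmanaged:

    # Value patched in the DSC Initialization resource
    $ oc get dsci -A -o jsonpath='{.items[*].spec.serviceMesh.managementState}'

    # Value patched in the Data Science Cluster resource
    $ oc get dsc -A -o jsonpath='{.items[*].spec.components.kserve.serving.managementState}'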

Upgrading Open Data Hub version 1 to version 2

You can upgrade the Open Data Hub Operator from version 1 to version 2 by using the OpenShift console. For information about installing the Open Data Hub Operator, see Installing Open Data Hub.

Upgrading Open Data Hub involves the following tasks:

  1. Upgrading the Open Data Hub Operator version 1.

  2. Installing Open Data Hub components.

  3. Accessing the Open Data Hub dashboard.

Upgrading the Open Data Hub Operator version 1

Prerequisites
  • You have installed version 1 of the Open Data Hub Operator.

  • You are using OpenShift Container Platform 4.13 or later.

  • Your OpenShift cluster has a minimum of 16 CPUs and 32GB of memory across all OpenShift worker nodes.

  • You can log in as a user with cluster-admin privileges.

Procedure
  1. Log in to the OpenShift Container Platform web console as a user with cluster-admin privileges.

  2. Select Operators → Installed Operators, and then click the 1.x version of the Open Data Hub Operator.

  3. Click the Subscription tab.

  4. Under Update channel, click the pencil icon.

  5. In the Change Subscription update channel dialog, select fast, and then click Save.

    If you configured the Open Data Hub Operator with automatic update approval, the upgrade begins. If you configured the Operator with manual update approval, perform the actions in the next step.

  6. To approve a manual update, perform these actions:

    1. Next to Upgrade status, click 1 requires approval.

    2. Click Preview InstallPlan.

    3. Review the manual install plan, and then click Approve. (An equivalent command-line approach is sketched after this procedure.)
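
If you prefer to approve the pending update from the command line, the following sketch shows the equivalent Operator Lifecycle Manager workflow. It assumes that the Operator subscription is in the openshift-operators namespace; replace <installplan-name> with the name of the plan that requires approval:

    # Find the pending install plan
    $ oc get installplan -n openshift-operators

    # Approve the install plan by name
    $ oc patch installplan <installplan-name> -n openshift-operators --type merge -p '{"spec":{"approved":true}}'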

Verification
  • Select Operators → Installed Operators to verify that the Open Data Hub Operator is listed with the 2.x version number and Succeeded status.

Next Steps
  • Install Open Data Hub components.

  • Access the Open Data Hub dashboard.

Installing Open Data Hub components

You can use the OpenShift web console to install specific components of Open Data Hub on your cluster when version 2 of the Open Data Hub Operator is already installed on the cluster.

Prerequisites
  • You have installed version 2 of the Open Data Hub Operator.

  • You can log in as a user with cluster-admin privileges.

  • If you want to use the trustyai component, you must enable user workload monitoring as described in Configuring monitoring for the multi-model serving platform.

  • If you want to use the kserve, modelmesh, or modelregistry components, you must have already installed the following Operator or Operators for the component. For information about installing an Operator, see Adding Operators to a cluster.

Table 1. Required Operators for components

Component       Required Operators                                                                                           Catalog
kserve          Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh Operator, Red Hat Authorino Operator   Red Hat
modelmesh       Prometheus Operator                                                                                          Community
modelregistry   Red Hat Authorino Operator, Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh Operator   Red Hat

NOTE: To use the model registry feature, you must install the required Operators in a specific order. For more information, see Configuring the model registry component.

Procedure
  1. Log in to your OpenShift Container Platform as a user with cluster-admin privileges. If you are performing a developer installation on try.openshift.com, you can log in as the kubeadmin user.

  2. Select Operators → Installed Operators, and then click the Open Data Hub Operator.

  3. On the Operator details page, click the DSC Initialization tab, and then click Create DSCInitialization.

  4. On the Create DSCInitialization page, configure by using Form view or YAML view. For general information about the supported components, see Tiered Components.

    • Configure by using Form view:

      1. In the Name field, enter a value.

      2. In the Components section, expand each component and set the managementState to Managed or Removed.

    • Configure by using YAML view:

      1. In the spec.components section, for each component shown, set the value of the managementState field to either Managed or Removed.

  5. Click Create.

  6. Wait until the status of the DSCInitialization is Ready.

  7. Click the Data Science Cluster tab, and then click Create DataScienceCluster.

  8. On the Create DataScienceCluster page, configure the DataScienceCluster by using Form view or YAML view. For general information about the supported components, see Tiered Components.

    • Configure by using Form view:

      1. In the Name field, enter a value.

      2. In the Components section, expand each component and set the managementState to Managed or Removed.

    • Configure by using YAML view:

      1. In the spec.components section, for each component shown, set the value of the managementState field to either Managed or Removed. (A minimal example fragment appears after this procedure.)

  9. Click Create.
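
The following fragment is a minimal sketch of what the spec.components section might look like in YAML view. The component names and states shown here are examples only; set each component to the state that you need:

    spec:
      components:
        dashboard:
          managementState: Managed        # install and manage the component
        workbenches:
          managementState: Managed
        datasciencepipelines:
          managementState: Managed
        kserve:
          managementState: Removed        # do not install the component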

Verification
  1. Select Home → Projects, and then select the opendatahub project.

  2. On the Project details page, click the Workloads tab and confirm that the Open Data Hub core components are running. For more information, see Tiered Components.

Next Step
  • Access the Open Data Hub dashboard.

Accessing the Open Data Hub dashboard

You can access and share the URL for your Open Data Hub dashboard with other users to let them log in and work on their models.

Prerequisites
  • You have installed the Open Data Hub Operator.

Procedure
  1. In the OpenShift web console, select Networking → Routes.

  2. On the Routes page, click the Project list and select the odh project. The page filters to only display routes in the odh project.

  3. In the Location column, copy the URL for the odh-dashboard route.

  4. Give this URL to your users to let them log in to the Open Data Hub dashboard.
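
Alternatively, you can retrieve the dashboard URL from the command line. This sketch assumes the route is named odh-dashboard in the odh project, as described in the previous steps:

    # Prints the host name; prefix it with https:// to form the dashboard URL
    $ oc get route odh-dashboard -n odh -o jsonpath='{.spec.host}'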

Verification
  • Confirm that you and your users can log in to the Open Data Hub dashboard by using the URL.

Upgrading Open Data Hub version 2.0 to version 2.2

You can upgrade the Open Data Hub Operator from version 2.0 or 2.1 to version 2.2 or later by using the OpenShift console. For information about upgrading from version 1 to version 2.2 or later, see Upgrading Open Data Hub version 1 to version 2. For information about installing the Open Data Hub Operator, see Installing the Open Data Hub Operator version 2.

Note

After you install Open Data Hub 2, pipelines created with data science pipelines 1.0 continue to run, but are inaccessible from the Open Data Hub dashboard. If you are a current data science pipelines user, do not install Open Data Hub with data science pipelines 2.0 until you are ready to migrate to the new pipelines solution.

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. Open Data Hub does not support direct customer usage of this installation of Argo Workflows.

If you upgrade to Open Data Hub 2.10.0 or later with data science pipelines enabled and your cluster has an Argo Workflows installation that was not installed by data science pipelines, Open Data Hub components are not upgraded. To complete the component upgrade, either disable data science pipelines or remove the separate installation of Argo Workflows from your cluster; the component upgrade then completes automatically.
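
One way to check whether a separate Argo Workflows installation exists on your cluster is to look for the Argo Workflows custom resource definitions and controllers. This is only a sketch; interpret the output in the context of your own cluster:

    # If this CRD exists but was not created by data science pipelines, a separate Argo Workflows installation is present
    $ oc get crd workflows.argoproj.io

    # Look for Argo Workflows controllers running outside the data science pipelines deployments
    $ oc get deployments -A | grep -i workflow-controller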

Note that Open Data Hub Operator versions 2.2 and later use an upgraded API version for a DataScienceCluster instance, resulting in the following differences.

Table 2. DataScienceCluster instance differences

                     ODH 2.1 and earlier                                 ODH 2.2 and later
API version          v1alpha1                                            v1
Enable component     .spec.components.[component_name].enabled: true     .spec.components.[component_name].managementState: Managed
Disable component    .spec.components.[component_name].enabled: false    .spec.components.[component_name].managementState: Removed
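
As a concrete illustration of the differences in Table 2, enabling a component changes as follows. The dashboard component is used here only as an example:

    # ODH 2.1 and earlier (API version v1alpha1)
    spec:
      components:
        dashboard:
          enabled: true

    # ODH 2.2 and later (API version v1)
    spec:
      components:
        dashboard:
          managementState: Managed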

Upgrading Open Data Hub involves the following tasks:

  1. Installing version 2.2 or later of Open Data Hub.

  2. If using self-signed certificates, adding a CA bundle.

Installing Open Data Hub version 2

You can install Open Data Hub version 2 on your OpenShift Container Platform cluster from the OpenShift web console. For information about upgrading the Open Data Hub Operator, see Upgrading Open Data Hub.

Installing Open Data Hub involves the following tasks:

  1. Installing the Open Data Hub Operator.

  2. Installing Open Data Hub components.

  3. Accessing the Open Data Hub dashboard.

Note

Version 2 of the Open Data Hub Operator represents an alpha release, accessible only on the fast channel. Later releases will change to the rolling channel when the Operator is more stable.

Note

After you install Open Data Hub 2, pipelines created with data science pipelines 1.0 continue to run, but are inaccessible from the Open Data Hub dashboard. If you are a current data science pipelines user, do not install Open Data Hub with data science pipelines 2.0 until you are ready to migrate to the new pipelines solution.

Important

Data science pipelines 2.0 contains an installation of Argo Workflows. Open Data Hub does not support direct customer usage of this installation of Argo Workflows. To install Open Data Hub 2.10.0 or later with data science pipelines, ensure that no separate installation of Argo Workflows exists on your cluster.

If you install Open Data Hub 2.10.0 or later with the datasciencepipelines component while there is an existing installation of Argo Workflows that is not installed by data science pipelines on your cluster, data science pipelines will be disabled after the installation completes.

To enable data science pipelines, remove the separate installation of Argo Workflows from your cluster. Data science pipelines will be enabled automatically.

Installing the Open Data Hub Operator version 2

Prerequisites
  • You are using OpenShift Container Platform 4.13 or later.

  • Your OpenShift cluster has a minimum of 16 CPUs and 32GB of memory across all OpenShift worker nodes.

  • You have cluster administrator privileges for your OpenShift Container Platform cluster.

Procedure
  1. Log in to your OpenShift Container Platform as a user with cluster-admin privileges. If you are performing a developer installation on try.openshift.com, you can log in as the kubeadmin user.

  2. Select Operators → OperatorHub.

  3. On the OperatorHub page, in the Filter by keyword field, enter Open Data Hub Operator.

  4. Click the Open Data Hub Operator tile.

  5. If the Show community Operator window opens, read the information and then click Continue.

  6. Read the information about the Operator and then click Install.

  7. On the Install Operator page, follow these steps:

    1. For Update channel, select fast.

    2. For Version, select the version of the Operator that you want to install.

    3. For Installation mode, leave All namespaces on the cluster (default) selected.

    4. For Installed Namespace, select the openshift-operators namespace.

    5. For Update approval, select automatic or manual updates.

      • Automatic: When a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator.

      • Manual: When a new version of the Operator is available, OLM notifies you with an update request that you must manually approve to upgrade the running instance of your Operator.

  8. Click Install. The installation might take a few minutes.
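
For reference, the choices in this procedure correspond to fields on the Operator's Subscription resource. The following manifest is an illustrative sketch only; the subscription name, package name, and catalog source shown here are assumptions and might differ on your cluster:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: opendatahub-operator            # assumed subscription name
      namespace: openshift-operators        # Installed Namespace selected in this procedure
    spec:
      channel: fast                         # Update channel selected in this procedure
      installPlanApproval: Automatic        # or Manual, matching the Update approval setting
      name: opendatahub-operator            # package name in the catalog (assumed)
      source: community-operators           # catalog source (assumed)
      sourceNamespace: openshift-marketplace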

Verification
  • Select Operators → Installed Operators to verify that the Open Data Hub Operator is listed with Succeeded status.

Next Step
  • Install Open Data Hub components.

Installing Open Data Hub components

You can use the OpenShift web console to install specific components of Open Data Hub on your cluster when version 2 of the Open Data Hub Operator is already installed on the cluster.

Prerequisites
  • You have installed version 2 of the Open Data Hub Operator.

  • You can log in as a user with cluster-admin privileges.

  • If you want to use the trustyai component, you must enable user workload monitoring as described in Configuring monitoring for the multi-model serving platform.

  • If you want to use the kserve, modelmesh, or modelregistry components, you must have already installed the following Operator or Operators for the component. For information about installing an Operator, see Adding Operators to a cluster.

Table 3. Required Operators for components

Component       Required Operators                                                                                           Catalog
kserve          Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh Operator, Red Hat Authorino Operator   Red Hat
modelmesh       Prometheus Operator                                                                                          Community
modelregistry   Red Hat Authorino Operator, Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh Operator   Red Hat

NOTE: To use the model registry feature, you must install the required Operators in a specific order. For more information, see Configuring the model registry component.

Procedure
  1. Log in to your OpenShift Container Platform as a user with cluster-admin privileges. If you are performing a developer installation on try.openshift.com, you can log in as the kubeadmin user.

  2. Select Operators → Installed Operators, and then click the Open Data Hub Operator.

  3. On the Operator details page, click the DSC Initialization tab, and then click Create DSCInitialization.

  4. On the Create DSCInitialization page, configure by using Form view or YAML view. For general information about the supported components, see Tiered Components.

    • Configure by using Form view:

      1. In the Name field, enter a value.

      2. In the Components section, expand each component and set the managementState to Managed or Removed.

    • Configure by using YAML view:

      1. In the spec.components section, for each component shown, set the value of the managementState field to either Managed or Removed.

  5. Click Create.

  6. Wait until the status of the DSCInitialization is Ready.

  7. Click the Data Science Cluster tab, and then click Create DataScienceCluster.

  8. On the Create DataScienceCluster page, configure the DataScienceCluster by using Form view or YAML view. For general information about the supported components, see Tiered Components.

    • Configure by using Form view:

      1. In the Name field, enter a value.

      2. In the Components section, expand each component and set the managementState to Managed or Removed.

    • Configure by using YAML view:

      1. In the spec.components section, for each component shown, set the value of the managementState field to either Managed or Removed.

  9. Click Create.

Verification
  1. Select Home → Projects, and then select the opendatahub project.

  2. On the Project details page, click the Workloads tab and confirm that the Open Data Hub core components are running. For more information, see Tiered Components.

Next Step
  • Access the Open Data Hub dashboard.

Installing the distributed workloads components

To use the distributed workloads feature in Open Data Hub, you must install several components.

Prerequisites
  • You have logged in to OpenShift Container Platform with the cluster-admin role and you can access the data science cluster.

  • You have installed Open Data Hub.

  • You have sufficient resources. In addition to the minimum Open Data Hub resources described in Installing the Open Data Hub Operator version 2, you need 1.6 vCPU and 2 GiB memory to deploy the distributed workloads infrastructure.

  • You have removed any previously installed instances of the CodeFlare Operator.

  • If you want to use graphics processing units (GPUs), you have enabled GPU support. This process includes installing the Node Feature Discovery Operator and the NVIDIA GPU Operator. For more information, see NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation.

  • If you want to use self-signed certificates, you have added them to a central Certificate Authority (CA) bundle as described in Understanding certificates in Open Data Hub. No additional configuration is necessary to use those certificates with distributed workloads. The centrally configured self-signed certificates are automatically available in the workload pods at the following mount points:

    • Cluster-wide CA bundle:

      /etc/pki/tls/certs/odh-trusted-ca-bundle.crt
      /etc/ssl/certs/odh-trusted-ca-bundle.crt
    • Custom CA bundle:

      /etc/pki/tls/certs/odh-ca-bundle.crt
      /etc/ssl/certs/odh-ca-bundle.crt
Procedure
  1. In the OpenShift Container Platform console, click Operators → Installed Operators.

  2. Search for the Open Data Hub Operator, and then click the Operator name to open the Operator details page.

  3. Click the Data Science Cluster tab.

  4. Click the default instance name (for example, default-dsc) to open the instance details page.

  5. Click the YAML tab to show the instance specifications.

  6. Enable the required distributed workloads components. In the spec.components section, set the managementState field correctly for the required components:

    • If you want to use the CodeFlare framework to tune models, enable the codeflare, kueue, and ray components.

    • If you want to use the Kubeflow Training Operator to tune models, enable the kueue and trainingoperator components.

    • The list of required components depends on whether the distributed workload is run from a pipeline, a notebook, or both, as shown in the following table. (A corresponding YAML sketch appears after this procedure.)

    Table 4. Components required for distributed workloads

    Component              Pipelines only   Notebooks only   Pipelines and notebooks
    codeflare              Managed          Managed          Managed
    dashboard              Managed          Managed          Managed
    datasciencepipelines   Managed          Removed          Managed
    kueue                  Managed          Managed          Managed
    ray                    Managed          Managed          Managed
    trainingoperator       Managed          Managed          Managed
    workbenches            Removed          Managed          Managed

  7. Click Save. After a short time, the components with a Managed state are ready.
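
For example, to run distributed workloads from both pipelines and notebooks, the spec.components section might look like the following sketch, which mirrors the "Pipelines and notebooks" column of Table 4:

    spec:
      components:
        codeflare:
          managementState: Managed
        dashboard:
          managementState: Managed
        datasciencepipelines:
          managementState: Managed
        kueue:
          managementState: Managed
        ray:
          managementState: Managed
        trainingoperator:
          managementState: Managed
        workbenches:
          managementState: Managed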

Verification

Check the status of the codeflare-operator-manager, kuberay-operator, and kueue-controller-manager pods, as follows:

  1. In the OpenShift Container Platform console, from the Project list, select odh.

  2. Click Workloads → Deployments.

  3. Search for the codeflare-operator-manager, kuberay-operator, and kueue-controller-manager deployments. In each case, check the status as follows:

    1. Click the deployment name to open the deployment details page.

    2. Click the Pods tab.

    3. Check the pod status.

      When the status of the codeflare-operator-manager-<pod-id>, kuberay-operator-<pod-id>, and kueue-controller-manager-<pod-id> pods is Running, the pods are ready to use.

    4. To see more information about each pod, click the pod name to open the pod details page, and then click the Logs tab.
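
You can perform the same check from the command line. The following sketch assumes that the components are installed in the odh project; each listed pod should report a Running status:

    $ oc get pods -n odh | grep -E 'codeflare-operator-manager|kuberay-operator|kueue-controller-manager'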

Next Step

Configure the distributed workloads feature as described in Managing distributed workloads.

Accessing the Open Data Hub dashboard

You can access and share the URL for your Open Data Hub dashboard with other users to let them log in and work on their models.

Prerequisites
  • You have installed the Open Data Hub Operator.

Procedure
  1. In the OpenShift web console, select Networking → Routes.

  2. On the Routes page, click the Project list and select the odh project. The page filters to only display routes in the odh project.

  3. In the Location column, copy the URL for the odh-dashboard route.

  4. Give this URL to your users to let them log in to the Open Data Hub dashboard.

Verification
  • Confirm that you and your users can log in to the Open Data Hub dashboard by using the URL.

Adding a CA bundle after upgrading

Open Data Hub provides support for using self-signed certificates. If you have upgraded Open Data Hub, you can add self-signed certificates to the Open Data Hub deployments and Data Science Projects in your cluster.

There are two ways to add a Certificate Authority (CA) bundle to Open Data Hub. You can use one or both of these methods:

  • For OpenShift Container Platform clusters that rely on self-signed certificates, you can add those self-signed certificates to a cluster-wide Certificate Authority (CA) bundle (ca-bundle.crt) and use the CA bundle in Open Data Hub.

  • You can use self-signed certificates in a custom CA bundle (odh-ca-bundle.crt) that is separate from the cluster-wide bundle.

For more information, see Understanding certificates in Open Data Hub.

Prerequisites
Procedure
  1. Log in to the OpenShift Container Platform as a cluster administrator.

  2. Click Operators → Installed Operators and then click the Open Data Hub Operator.

  3. Click the DSC Initialization tab.

  4. Click the default-dsci object.

  5. Click the YAML tab.

  6. Add the following to the spec section, setting the managementState field to Managed:

    spec:
      trustedCABundle:
        managementState: Managed
        customCABundle: ""
  7. If you want to use self-signed certificates added to a cluster-wide CA bundle, log in to the OpenShift Container Platform as a cluster administrator and follow the steps as described in Configuring the cluster-wide proxy during installation.

  8. If you want to use self-signed certificates in a custom CA bundle that is separate from the cluster-wide bundle, follow these steps:

    1. Add the custom certificate to the customCABundle field of the default-dsci object, as shown in the following example:

      spec:
        trustedCABundle:
          managementState: Managed
          customCABundle: |
            -----BEGIN CERTIFICATE-----
            examplebundle123
            -----END CERTIFICATE-----
    2. Click Save.

      The Open Data Hub Operator creates an odh-trusted-ca-bundle ConfigMap containing the certificates in all new and existing non-reserved namespaces.

Verification
  • If you are using a cluster-wide CA bundle, run the following command to verify that all non-reserved namespaces contain the odh-trusted-ca-bundle ConfigMap:

    $ oc get configmaps --all-namespaces -l app.kubernetes.io/part-of=opendatahub-operator | grep odh-trusted-ca-bundle
  • If you are using a custom CA bundle, run the following command to verify that a non-reserved namespace contains the odh-trusted-ca-bundle ConfigMap and that the ConfigMap contains your customCABundle value. In the following command, example-namespace is the non-reserved namespace and examplebundle123 is the customCABundle value.

    $ oc get configmap odh-trusted-ca-bundle -n example-namespace -o yaml | grep examplebundle123
