Installing Open Data Hub requires OpenShift Container Platform version 4.2+. Documentation for OpenShift is available from the official OpenShift documentation site. All screenshots and instructions are from OpenShift 4.5. For the purposes of this quick start, we used try.openshift.com on AWS. Tutorials have also been tested on CodeReady Containers (CRC) with cluster resources configured for 6 CPUs and 16 GB of RAM.
We will not install optional components such as Argo, Seldon, AI Library, or Kafka, to avoid consuming too many resources in case your cluster is small.
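If you are running on CRC, the resource settings above can be configured before starting the cluster. A minimal sketch, assuming a recent CRC release (memory is given in MiB):

```bash
# Allocate 6 CPUs and 16 GiB of RAM to the CRC virtual machine
crc config set cpus 6
crc config set memory 16384

# Start (or restart) the cluster so the new settings take effect
crc start
```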
Installing the Open Data Hub Operator
The Open Data Hub operator is available for deployment in the OpenShift OperatorHub as a Community Operator. You can install it from the OpenShift web console by following the steps below:
- From the OpenShift web console, log in as a user with `cluster-admin` privileges. For a developer installation from try.openshift.com (including AWS and CRC), the `kubeadmin` user will work.
- Create a new namespace named `odh` for your installation of Open Data Hub (a CLI equivalent is sketched after this list).
- Select the new namespace if not already selected.
- Find `Open Data Hub` in the `OperatorHub` for a list of operators available for deployment.
- Filter for `Open Data Hub` or look under `Big Data` for the `Open Data Hub` icon.
- Click the `Install` button and follow the installation instructions to install the Open Data Hub operator.
- The subscription creation view will offer a few options, including the Update Channel; keep the default selections.
- To view the status of the Open Data Hub operator installation, find the Open Data Hub Operator under `Installed Operators` (inside the namespace you created earlier). Once the STATUS field displays `InstallSucceeded`, you can proceed to create a new Open Data Hub deployment.
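If you prefer the terminal, the namespace creation and the status check can also be done with the `oc` CLI. A minimal sketch, assuming you are logged in as `kubeadmin`; the exact ClusterServiceVersion name will vary with the operator version:

```bash
# Create the namespace for the Open Data Hub installation
oc new-project odh

# After installing the operator from OperatorHub, watch its
# ClusterServiceVersion until the PHASE column shows "Succeeded"
oc get csv -n odh
```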
Create a New Open Data Hub Deployment
The Open Data Hub operator creates new Open Data Hub deployments and manages their components. Let’s create a new Open Data Hub deployment.
- Find the Open Data Hub Operator under `Installed Operators` (inside the namespace you created earlier).
- Click on the Open Data Hub Operator to bring up the details for the version that is currently installed.
- Click `Create Instance` to create a new deployment.
- Select the `YAML View` radio button to be presented with a YAML file to customize your deployment. Most of the components available in ODH have been removed, and for this tutorial we’ll leave them that way to make sure the components for JupyterHub and Spark fit within our cluster resource constraints.
Take note of some parameters:
```yaml
# ODH uses the KfDef manifest format to specify what components will be included in the deployment
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  # The name of your deployment
  name: opendatahub
# only the components listed in the `KfDef` resource will be deployed:
spec:
  applications:
    # REQUIRED: This contains all of the common options used by all ODH components
    - kustomizeConfig:
        repoRef:
          name: manifests
          path: odh-common
      name: odh-common
    # Deploy Radanalytics Spark Operator
    - kustomizeConfig:
        repoRef:
          name: manifests
          path: radanalyticsio/spark/cluster
      name: radanalyticsio-spark-cluster
    # Deploy Open Data Hub JupyterHub
    - kustomizeConfig:
        parameters:
          - name: s3_endpoint_url
            value: s3.odh.com
        repoRef:
          name: manifests
          path: jupyterhub/jupyterhub
      name: jupyterhub
    # Deploy additional Open Data Hub Jupyter notebooks
    - kustomizeConfig:
        overlays:
          - additional
        repoRef:
          name: manifests
          path: jupyterhub/notebook-images
      name: notebook-images
  # Reference to all of the git repo archives that contain component kustomize manifests
  repos:
    # Official Open Data Hub v0.9.0 component manifests repo
    # This shows that we will be deploying components from an archive of the odh-manifests repo tagged for v0.9.0
    - name: manifests
      uri: 'https://github.com/opendatahub-io/odh-manifests/tarball/v0.9.0'
  version: v0.9-branch-openshift
```
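If you would rather work from the terminal, the same KfDef can be applied with the `oc` CLI instead of the web console. A minimal sketch, assuming the YAML above has been saved to a file named `kfdef.yaml` (a hypothetical filename) and that the `odh` namespace exists:

```bash
# Apply the KfDef manifest; the operator watches for KfDef resources
# in the namespace and reconciles the listed components
oc apply -f kfdef.yaml -n odh
```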
- Update the `spec` of the resource to match the above and click `Create`. If you accepted the default name, this will trigger the creation of an Open Data Hub deployment named `opendatahub` with JupyterHub and Spark.
- Verify the installation by viewing the Open Data Hub tab within the operator details. You should see the `opendatahub` deployment listed.
- Verify the installation by viewing the project workloads. The JupyterHub and Spark Operator pods should be running; a CLI check is sketched below.
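To double-check from the command line, you can list the pods in the namespace. A sketch, assuming the deployment was created in the `odh` namespace; exact pod names will vary:

```bash
# List the pods created by the Open Data Hub deployment;
# JupyterHub, its database, and the Spark operator should reach Running
oc get pods -n odh
```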