The steps are also available in a tutorial video on the OpenShift YouTube channel.
Installing ODH requires OpenShift 3.11 or 4.x. Documentation for OpenShift can be found (here). All screenshots and instructions are from OpenShift 4.2. For the purposes of this quick start, we used try.openshift.com on AWS. The tutorials have also been tested on CodeReady Containers with 16GB of RAM.
We will not be installing optional components such as Argo, Seldon, AI Library, and Kafka. These components have additional prerequisites, detailed in the advanced installation, that must be installed before the Open Data Hub Operator if you intend to use them.
Installing the Open Data Hub Operator
The Open Data Hub operator is available in the OpenShift 4.x Community Operators catalog. You can install it from the OpenShift web UI by following the steps below:
- From the OpenShift console, log in as a user with `cluster-admin` privileges. For a developer installation from try.openshift.com (including AWS and CRC), the `kubeadmin` user will work.
- Create a new namespace for your installation of Open Data Hub.
- Select the new namespace if not already selected.
- Search for `Open Data Hub` in the `OperatorHub` for a list of community operators.
- Filter for `Open Data Hub` or look under `Big Data` for the icon for `Open Data Hub`.
- Click the `Install` button and follow the installation instructions to install the Open Data Hub operator.
- To view the status of the Open Data Hub operator installation, find the Open Data Hub Operator under `Installed Operators` (inside the namespace you created earlier). Once the STATUS field displays `InstallSucceeded`, you can proceed to create a new Open Data Hub deployment.
Create a New Open Data Hub Deployment
The Open Data Hub operator creates new Open Data Hub deployments and manages their components. Let’s create a new Open Data Hub deployment.
- Find the Open Data Hub Operator under `Installed Operators` (inside the namespace you created earlier).
- Click on the Open Data Hub Operator to bring up the details.
- Click `Create Instance` to create a new deployment.
- Here you’ll be presented with a YAML file to customize your deployment. Most options are disabled, and for this tutorial we’ll leave them that way, modifying only a few parameters so that the JupyterHub and Spark components fit within our cluster’s resource constraints. Take note of these parameters:
- the name of your deployment:

```yaml
metadata:
  name: example-opendatahub
```

- the deployed components, designated by `odh_deploy: true`:

```yaml
spec:
  aicoe-jupyterhub:
    odh_deploy: true
    # Set the Jupyter notebook pod to 1 CPU and 1Gi of memory
    notebook_cpu: 1
    notebook_memory: 1Gi
    # Disable creation of Spark worker nodes in the cluster
    spark_master_nodes: 1
    spark_worker_nodes: 0
    # Reduce the Spark master node to 1 CPU and 1Gi of memory
    spark_memory: 1Gi
    spark_cpu: 1
  spark-operator:
    odh_deploy: true
  monitoring:
    odh_deploy: false
```
- Leave the rest of the YAML intact and click `Create`. If you accepted the default name, this triggers an Open Data Hub deployment named `example-opendatahub` with JupyterHub and Spark.
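The same deployment can also be created from the command line by applying the custom resource directly. A minimal sketch, assuming the `OpenDataHub` kind at `opendatahub.io/v1alpha1` (check your installed operator's CRD for the exact API version) and the parameter values used above:

```yaml
apiVersion: opendatahub.io/v1alpha1
kind: OpenDataHub
metadata:
  name: example-opendatahub
spec:
  aicoe-jupyterhub:
    odh_deploy: true
    notebook_cpu: 1
    notebook_memory: 1Gi
    spark_master_nodes: 1
    spark_worker_nodes: 0
    spark_memory: 1Gi
    spark_cpu: 1
  spark-operator:
    odh_deploy: true
  monitoring:
    odh_deploy: false
```

Save this as a file (for example `opendatahub.yaml`) and apply it into the namespace you created earlier with `oc apply -f opendatahub.yaml -n <your-namespace>`.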
- Verify the installation by viewing the Open Data Hub tab within the operator details. You should see the new deployment listed.
- Verify the installation by viewing the project workloads. The JupyterHub and Spark pods should all be running (and Prometheus as well, if you enabled monitoring).