Quick Installation

Prerequisites

Installing Open Data Hub requires OpenShift Container Platform version 4.2+. Refer to the OpenShift documentation for details on OpenShift itself. All screenshots and instructions are from OpenShift 4.5. For the purposes of this quick start, we used try.openshift.com on AWS. Tutorials have also been tested on CodeReady Containers (CRC) with cluster resources configured for 6 CPUs and 16GB of RAM.

We will not be installing optional components such as Argo, Seldon, AI Library, or Kafka, to avoid using too many resources in case your cluster is small. (The sketch at the end of this page shows how to add an optional component back later.)

Installing the Open Data Hub Operator

The Open Data Hub operator is available for deployment in the OpenShift OperatorHub as a Community Operator. You can install it from the OpenShift web console by following the steps below:

  1. From the OpenShift web console, log in as a user with cluster-admin privileges. For a developer installation from try.openshift.com (including AWS and CRC), the kubeadmin user will work.
  2. Create a new namespace named ‘odh’ for your installation of Open Data Hub. (The manifest sketch after this list shows a declarative equivalent of steps 2 through 5.)
  3. Find Open Data Hub in the OperatorHub catalog.
    1. Select the new namespace if not already selected.
    2. Under Operators, select OperatorHub for a list of operators available for deployment.
    3. Filter for Open Data Hub, or look under the Big Data category for the Open Data Hub icon.
  4. Click the Install button and follow the installation instructions to install the Open Data Hub operator.
  5. The subscription creation view offers a few options, including the Update Channel; keep the Beta channel selected.
  6. To view the status of the Open Data Hub operator installation, find the Open Data Hub Operator under Operators -> Installed Operators (inside the namespace you created earlier). Once the STATUS field displays InstallSucceeded, you can proceed to create a new Open Data Hub deployment.
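
If you would rather script steps 2 through 5 than click through the console, the equivalent can be expressed as OLM manifests. The sketch below is an approximation: the package name, channel, and catalog source are assumptions based on the community catalog, so confirm them against the OperatorHub entry on your cluster.

    # Namespace for the Open Data Hub deployment (step 2)
    apiVersion: v1
    kind: Namespace
    metadata:
      name: odh
    ---
    # Subscription that asks OLM to install the operator (steps 3-5)
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: opendatahub-operator       # assumed package name; verify in OperatorHub
      namespace: openshift-operators   # default namespace for cluster-wide operators
    spec:
      channel: beta                    # the Beta channel selected in step 5
      name: opendatahub-operator
      source: community-operators      # assumed catalog source for community operators
      sourceNamespace: openshift-marketplace

Applying these two manifests with oc apply -f has the same effect as the console flow; OLM installs the operator and the STATUS field should reach InstallSucceeded as in step 6.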

Create a New Open Data Hub Deployment

The Open Data Hub operator creates new Open Data Hub deployments and manages their components. Let’s create a new Open Data Hub deployment.

  1. Find the Open Data Hub Operator under Installed Operators (inside the namespace you created earlier).

  2. Click on the Open Data Hub Operator to bring up the details for the version that is currently installed.

  3. Click Create Instance to create a new deployment.

  4. Select the YAML View radio button to edit the YAML file that defines your deployment. Most of the components available in ODH have already been removed from the manifest below; for this tutorial we’ll leave them out so that the JupyterHub and Spark components fit within our cluster’s resource constraints.

    Take note of some parameters:

       # ODH uses the KfDef manifest format to specify what components will be included in the deployment
       apiVersion: kfdef.apps.kubeflow.org/v1
       kind: KfDef
       metadata:
         # The name of your deployment
         name: opendatahub
       # only the components listed in the `KfDef` resource will be deployed:
       spec:
         applications:
           # REQUIRED: This contains all of the common options used by all ODH components
           - kustomizeConfig:
               repoRef:
                 name: manifests
                 path: odh-common
             name: odh-common
           # Deploy Radanalytics Spark Operator
           - kustomizeConfig:
               repoRef:
                 name: manifests
                 path: radanalyticsio/spark/cluster
             name: radanalyticsio-spark-cluster
           # Deploy Open Data Hub JupyterHub
           - kustomizeConfig:
               parameters:
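                  # s3_endpoint_url configures the S3 object-storage endpoint
                  # used by notebook environments; s3.odh.com below is a
                  # placeholder, so replace it with your own endpoint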
                 - name: s3_endpoint_url
                   value: s3.odh.com
               repoRef:
                 name: manifests
                 path: jupyterhub/jupyterhub
             name: jupyterhub
            # Deploy additional Open Data Hub Jupyter notebooks
           - kustomizeConfig:
               overlays:
                 - additional
               repoRef:
                 name: manifests
                 path: jupyterhub/notebook-images
             name: notebook-images
         # Reference to all of the git repo archives that contain component kustomize manifests
         repos:
           # Official Open Data Hub v0.9.0 component manifests repo
           # This shows that we will be deploying components from an archive of the odh-manifests repo tagged for v0.9.0
           - name: manifests
             uri: 'https://github.com/opendatahub-io/odh-manifests/tarball/v0.9.0'
         version: v0.9-branch-openshift
    
  5. Update the spec of the resource to match the above and click Create. If you accepted the default name, this will trigger the creation of an Open Data Hub deployment named opendatahub with JupyterHub and Spark.

  6. Verify the installation by viewing the Open Data Hub tab within the operator details. You should see opendatahub listed.

  7. Verify the installation by viewing the project workloads. JupyterHub and the Spark operator should be running.
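
If your cluster has resources to spare, each of the optional components we skipped earlier is just another entry in the KfDef’s spec.applications list. The entry below is an illustrative sketch: the component path is hypothetical, so look up the real kustomize path in the odh-manifests repo at the tag you pinned (v0.9.0 above) before adding it.

    # Illustrative only: confirm the component's kustomize path in odh-manifests v0.9.0
    - kustomizeConfig:
        repoRef:
          name: manifests
          path: kafka/cluster     # hypothetical path for a Kafka component
      name: odh-kafka

Edit the opendatahub KfDef resource (under the Open Data Hub tab in the operator details) and append the entry; the operator should then reconcile the new component into your deployment.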