Quick Installation

Prerequisites

Installing ODH requires OpenShift 3.11 or 4.x; see the OpenShift documentation for details. All screenshots and instructions below are from OpenShift 4.1. For the purposes of this quick start, we used try.openshift.com on AWS. The tutorials have also been tested on CodeReady Containers (CRC) with 16GB of RAM.

Installing the Open Data Hub Operator

The Open Data Hub operator is available in the OpenShift 4.x Community Operators section. You can install it from the OpenShift web console by following the steps below:

  1. From the OpenShift console, log in as a user with cluster-admin privileges. For a developer installation from try.openshift.com, including AWS and CRC, the kubeadmin user will work.
  2. Create a new namespace for your installation of Open Data Hub.
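     If you prefer the CLI, the namespace can be created with "oc new-project", or from a manifest like the following (the name odh is just an example; use whatever name you chose):

        # Example namespace manifest; the name "odh" is an assumption
        apiVersion: v1
        kind: Namespace
        metadata:
          name: odh

     Apply it with "oc apply -f namespace.yaml".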
  3. Find Open Data Hub in the OperatorHub catalog.
    1. Select the new namespace if not already selected.
    2. Under Catalog, select OperatorHub for a list of community operators.
    3. Filter for Open Data Hub or look under Big Data for the Open Data Hub icon.
  4. Click the Install button and follow the installation instructions to install the Open Data Hub operator.
  5. To view the status of the Open Data Hub operator installation, find the Open Data Hub Operator under Catalog -> Installed Operators (inside the namespace you created earlier). Once the STATUS field displays InstallSucceeded, you can proceed to create a new Open Data Hub deployment.
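     The same status check can be done from the CLI by listing ClusterServiceVersions (a sketch, assuming your namespace is named odh; substitute your own namespace):

        # The Open Data Hub operator's CSV should eventually
        # report phase "Succeeded" once installation completes.
        oc get csv -n odh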

Create a New Open Data Hub Deployment

The Open Data Hub operator creates new Open Data Hub deployments and manages their components. Let’s create a new Open Data Hub deployment.

  1. Find the Open Data Hub Operator under Installed Operators (inside the namespace you created earlier).

  2. Click on the Open Data Hub Operator to bring up the operator details.

  3. Click Create New to create a new deployment.

  4. Here you’ll be presented with a YAML file to customize your deployment. Most options are disabled, and for this tutorial we’ll leave them that way and stick with the defaults. Take note of some parameters:
    • the name of your deployment, example-opendatahub:
      metadata:
        name: example-opendatahub

    • the deployed components designated by odh_deploy:
      spec:
        aicoe-jupyterhub:
          odh_deploy: true
        spark-operator:
          odh_deploy: true

  5. Leave the YAML intact and click Create. If you accepted the default YAML, this will trigger an Open Data Hub deployment named example-opendatahub with JupyterHub and Spark.
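     For reference, the fragments from step 4 combine into a custom resource along these lines (the apiVersion and kind shown here are assumptions; the YAML presented by the console is authoritative):

        # Sketch of the default Open Data Hub custom resource;
        # apiVersion and kind are assumed, trust the console's YAML.
        apiVersion: opendatahub.io/v1alpha1
        kind: OpenDataHub
        metadata:
          name: example-opendatahub
        spec:
          aicoe-jupyterhub:
            odh_deploy: true
          spark-operator:
            odh_deploy: true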

  6. Verify the installation by viewing the Open Data Hub tab within the operator details. You should see example-opendatahub listed.

  7. Verify the installation by viewing the project status. JupyterHub, Spark, and Prometheus should all be running.
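     From the CLI, the same verification is a pod listing (a sketch, again assuming the namespace is named odh; the opendatahub resource name used below is an assumption):

        # JupyterHub, Spark, and Prometheus pods should all be Running
        oc get pods -n odh
        # List the Open Data Hub deployment itself
        oc get opendatahub -n odh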