Kubeflow cluster dashboard

Logging in to your Kubeflow cluster

Once your cluster is running, you will need to authenticate yourself to access the cluster dashboard. The login credentials to use are the same ones you use to sign in to Civo.

:::note Other user authentication

If a Kubeflow cluster administrator has given you access to a cluster they run, you will have a different mode of authentication. Ask your cluster administrator for details on how to log in to the cluster.

:::

Kubeflow has several components, each of which you can access from the dashboard.

Kubeflow dashboard home screen

Once logged in, the Home tab presents an overview of your recent Jupyter notebooks and pipelines, quick shortcuts to your notebooks, pipelines, and experiments, and links to the Civo-provided documentation.

The home page also includes your activities view, which shows all active machine learning-related processes and can help you troubleshoot your Kubeflow cluster.

The Kubeflow Dashboard

Notebooks

Notebooks are the primary development environment for Kubeflow.

The Notebooks tab gives an overview of your available notebook environments and allows you to manage your existing notebooks or create new ones from pre-provided images or a custom container.

From the Notebooks dashboard you can:

  • Create a new notebook
  • Query and filter your notebooks
  • Stop or start notebooks to free up resources on the Kubeflow cluster, or to recreate an environment that you've spun down
  • Delete your notebooks

Clicking on a notebook will show you all the logs, events, and resources associated with that particular notebook:

Notebook details
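
Behind the dashboard form, each notebook server is a Kubernetes custom resource. The sketch below, using the official Kubernetes Python client, shows roughly what creating one programmatically looks like; the namespace, notebook name, image, and resource requests are placeholder values, and it assumes the Notebook custom resource (`kubeflow.org/v1`) is installed on the cluster, which is the case for a Kubeflow deployment.

```python
from kubernetes import client, config

# Use local kubeconfig; inside the cluster you would use load_incluster_config().
config.load_kube_config()

# Placeholder Notebook resource roughly equivalent to the dashboard's "New Notebook" form.
notebook = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "Notebook",
    "metadata": {"name": "my-notebook", "namespace": "kubeflow-user"},  # placeholder names
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "my-notebook",
                    "image": "jupyter/scipy-notebook:latest",  # placeholder image
                    "resources": {"requests": {"cpu": "500m", "memory": "1Gi"}},
                }]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow-user",
    plural="notebooks", body=notebook,
)
```

In practice the dashboard form is the easier route; a sketch like this is mainly useful if you want to automate notebook creation.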

Tensorboards

TensorBoard is a popular tool for visualizing machine learning models and experiments. Kubeflow allows you to host TensorBoard instances and create these dashboards from within the cluster.
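
A TensorBoard instance only visualizes logs that your training code writes somewhere it can read. The sketch below, assuming PyTorch is available in your notebook image, writes a few scalar values to a log directory on a mounted volume (the path is a placeholder) that a Kubeflow TensorBoard instance could then be pointed at.

```python
from torch.utils.tensorboard import SummaryWriter

# Placeholder path on a volume mounted into the notebook.
writer = SummaryWriter(log_dir="/home/jovyan/logs/demo-run")

for step in range(100):
    fake_loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("loss/train", fake_loss, step)

writer.close()
```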

Volumes

A volume, in general terms, is a directory that is accessible to containers running inside the underlying Kubernetes cluster. This tab of the dashboard allows you to create and view Kubernetes volumes.
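
Creating a volume from this tab results in a PersistentVolumeClaim on the cluster. For reference, here is a minimal sketch of the equivalent request using the Kubernetes Python client; the name, namespace, and size are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder claim comparable to what the Volumes tab creates for you.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="workspace-vol", namespace="kubeflow-user"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="kubeflow-user", body=pvc
)
```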

Experiments (AutoML)

Experiments in Kubeflow are powered by Katib, an open-source tool for automated machine learning tasks such as hyperparameter tuning and neural architecture search. Finding the right values for hyperparameters usually depends on prior experience and experimentation, which is exactly the kind of work Katib is suited to.
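
To make the idea concrete, here is a toy random search over a learning rate in plain Python. It only illustrates the trial-and-evaluate loop that Katib automates and runs at scale on the cluster; `train_and_score` is a stand-in for your real training job.

```python
import random

def train_and_score(lr: float) -> float:
    # Stand-in objective: pretend 0.01 is the ideal learning rate.
    return 1.0 - abs(lr - 0.01)

best_lr, best_score = None, float("-inf")
for _ in range(20):
    lr = 10 ** random.uniform(-4, -1)  # sample a learning rate between 1e-4 and 1e-1
    score = train_and_score(lr)
    if score > best_score:
        best_lr, best_score = lr, score

print(f"best lr={best_lr:.5f}, score={best_score:.3f}")
```

Katib runs each trial as a job on the cluster, tracks the objective metric, and supports smarter search algorithms than random sampling.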

Pipelines

Pipelines are one of the most popular components of Kubeflow.

Kubeflow Pipelines allows you to create, manage, and deploy end-to-end, scalable machine learning workflows. It functions as an orchestrator for machine learning pipelines, simplifying both experimentation and deployment.
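
As a rough sketch of what a pipeline looks like in code, assuming the KFP v2 SDK (`pip install kfp`), the example below defines two small components, chains them into a pipeline, and compiles it to a YAML file that you can upload through the Pipelines tab.

```python
from kfp import dsl, compiler

@dsl.component
def preprocess(message: str) -> str:
    # Trivial "preprocessing" step for illustration.
    return message.upper()

@dsl.component
def train(message: str) -> str:
    # Trivial "training" step that consumes the previous step's output.
    return f"trained on: {message}"

@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(message: str = "hello kubeflow"):
    prep = preprocess(message=message)
    train(message=prep.output)

# Compile to a package you can upload via the dashboard.
compiler.Compiler().compile(hello_pipeline, package_path="hello_pipeline.yaml")
```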

Experiments (KFP)

This tab in the Kubeflow dashboard shows you past experiments you ran with Kubeflow Pipelines and also allows you to run or trigger new ones.
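
Experiments and runs can also be created from the KFP SDK. The sketch below assumes the compiled pipeline from the previous example and that it is run from a notebook inside the same Kubeflow cluster, where `kfp.Client()` can usually pick up the in-cluster endpoint and credentials; from outside the cluster you would pass the host and authentication details explicitly.

```python
import kfp

client = kfp.Client()  # in-cluster defaults inside a Kubeflow notebook (assumption)

# Group related runs under an experiment, then trigger a run of the compiled pipeline.
client.create_experiment(name="dashboard-demo")
client.create_run_from_pipeline_package(
    "hello_pipeline.yaml",
    arguments={"message": "hello from the sdk"},
    experiment_name="dashboard-demo",
)
```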

Runs

This tab in the Kubeflow dashboard shows you past Kubeflow pipeline runs.
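
The same run history is available from the SDK. A minimal sketch, reusing the client from the previous example; the exact fields on the response objects vary between KFP SDK versions, so the raw objects are printed here.

```python
import kfp

client = kfp.Client()
for run in client.list_runs(page_size=5).runs or []:
    print(run)
```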

Recurring Runs

Recurring runs allow you to schedule Kubeflow pipeline runs at a specific time or interval. This tab also shows the recurring runs you have already scheduled.
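
A recurring run can likewise be scheduled from the SDK. The sketch below assumes the experiment and compiled pipeline from the earlier examples; the cron expression is illustrative.

```python
import kfp

client = kfp.Client()
experiment = client.get_experiment(experiment_name="dashboard-demo")

client.create_recurring_run(
    experiment_id=experiment.experiment_id,  # `.id` on older SDK versions
    job_name="nightly-hello",
    cron_expression="0 0 2 * * *",  # every day at 02:00
    pipeline_package_path="hello_pipeline.yaml",
)
```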

Artifacts

Kubeflow pipelines can produce output artifacts such as trained models or the best set of hyperparameters. This tab shows you all the artifacts generated by your previously run Kubeflow pipelines.
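
Artifacts come from components that declare outputs. A minimal sketch, assuming the KFP v2 SDK: anything written to `model.path` or logged to `metrics` in a component like this is recorded and surfaces under the Artifacts tab once the pipeline has run.

```python
from kfp import dsl
from kfp.dsl import Metrics, Model, Output

@dsl.component
def train_model(model: Output[Model], metrics: Output[Metrics]):
    # Write the "model" artifact (placeholder contents for illustration).
    with open(model.path, "w") as f:
        f.write("pretend these are model weights")
    # Log a metric that shows up alongside the run's artifacts.
    metrics.log_metric("accuracy", 0.93)
```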

Executions

This tab shows you a history of executions: the individual steps that Kubeflow Pipelines ran under the hood for your previous runs.

Training Operators

Training Operators allow you to train your machine learning models in a distributed fashion in Kubeflow. At the moment, training operators support TensorFlow, PyTorch, MXNet, and XGBoost jobs.
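
A distributed training job is submitted as a custom resource that the training operator then manages. Below is a rough sketch of a PyTorchJob (`kubeflow.org/v1`) submitted with the Kubernetes Python client; the namespace, image, command, and replica counts are placeholders for your own training code.

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder container spec reused for the master and workers.
container = {
    "name": "pytorch",  # PyTorchJob expects the container to be named "pytorch"
    "image": "my-registry/train:latest",  # placeholder image
    "command": ["python", "train.py"],
}

pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "demo-train", "namespace": "kubeflow-user"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [container]}},
            },
            "Worker": {
                "replicas": 2,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [container]}},
            },
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow-user",
    plural="pytorchjobs", body=pytorch_job,
)
```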

With these components, Kubeflow can be used across the entire machine learning workflow.