

JupyterHub is a multi-user platform for running Jupyter notebooks. This guide describes how to deploy JupyterHub on a Kubernetes cluster at the Berlin de.NBI site.

Setup cloud project

First, the cloud environment must be set up. We assume that a Kubernetes cluster is available for your project; see this tutorial to set up a Kubernetes cluster via Kubermatic at the Berlin de.NBI site. We will use a dedicated node (deploy-node) to communicate with the Kubernetes cluster, which requires kubectl and helm3 to be installed on it. Depending on your cloud setup, this can be your local computer or a VM within your OpenStack project. Here, we use a VM in the de.NBI cloud running Ubuntu 22.10. See here for creating a single VM in your cloud project.


  1. We need to install kubectl. The latest stable binary can be downloaded from the official Kubernetes release server:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
  2. Then we make our kubeconfig file (kubeconfig-XXXXXX) available to kubectl. It is good practice to restrict access to this file:
mkdir -p ~/.kube
cp kubeconfig-XXXXXX ~/.kube/config
chmod 400 ~/.kube/config
  3. To check that the cluster is reachable, we can e.g. display all nodes by typing:
kubectl get nodes
  4. We will use a separate namespace (jhub) to deploy the JupyterHub platform:
kubectl create namespace jhub
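The namespace can also be created declaratively, which fits a GitOps-style workflow where all cluster objects live in version-controlled manifests; a minimal sketch:

```yaml
# Declarative equivalent of `kubectl create namespace jhub`;
# apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: jhub
```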


  • Helm is required to install JupyterHub. We install Helm version 3 on the deploy-node:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm list -A
  • We can now add the JupyterHub Helm chart repository:
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update


Shared storage

The JupyterHub in this example will include extra storage that can be accessed by all users, which is useful for sharing files with everyone. Here we use NFS, since it can be easily provisioned via OpenStack; to create an NFS share at the de.NBI Berlin site, see here. If you use a different cloud setup, other storage options can also be integrated; see for example here for the different types of volumes in Kubernetes. In the example below a 1000 GiB NFS share is used. To use NFS in the de.NBI cloud, the two fields server and path have to be filled in. The content below needs to be stored as data-pv.yaml on the deploy-node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
spec:
  capacity:
    storage: 1000Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: # TODO: set URL of the NFS server
    path: # TODO: set path to the NFS share
  mountOptions:
    - nfsvers=3
  • The next command registers the NFS share as a PersistentVolume in Kubernetes:
kubectl apply --namespace jhub -f data-pv.yaml
  • To mount the share into JupyterHub, we also need a PersistentVolumeClaim. The content below needs to be stored in the file data-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1000Gi
  • Finally, the next command creates the volume claim in Kubernetes:
kubectl apply --namespace jhub -f data-pvc.yaml
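If the cluster contains several PersistentVolumes, the claim can be pinned explicitly to the NFS volume via the standard spec.volumeName field; a sketch of the extended data-pvc.yaml (volumeName is an optional refinement, not part of the original setup):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: data   # bind explicitly to the PV named "data"
  resources:
    requests:
      storage: 1000Gi
```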


Floating IP

At the Berlin site, a floating IP with public access needs to be assigned specifically to your project. Please ask the Berlin cloud admin team by email for the provision of a public floating IP. The floating IP and the corresponding public IP address used in this example are placeholders; replace them with the addresses assigned to your project.

DNS & encryption

To access the portal from the internet via HTTPS, we also need a DNS entry and a TLS certificate. In some cases, a DNS entry for the domain can be requested for a de.NBI project at the Berlin site; please contact the site cloud admins for information on your specific project. The URL used in this example is a placeholder. Next, we need a certificate for encrypted communication; it can, for example, be issued by Let's Encrypt. The key file and the certificate file need to be stored on the deploy-node. Both files are stored as a secret in Kubernetes so that we can use them in the JupyterHub deployment. The name of the secret is jupyter-tls.

kubectl create secret --namespace=jhub tls jupyter-tls --key=$PATH_TO_KEY --cert=$PATH_TO_CERT
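To test the TLS flow before the real Let's Encrypt certificate is available, a temporary self-signed key/certificate pair can be generated with openssl; the domain jupyterhub.example.org below is a placeholder, not the real project URL:

```shell
# Generate a short-lived self-signed key/certificate pair for testing only;
# replace with your real Let's Encrypt files for production use.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 30 \
  -subj "/CN=jupyterhub.example.org"

# Store it under the same secret name used in the deployment:
# kubectl create secret --namespace=jhub tls jupyter-tls --key=tls.key --cert=tls.crt
```

Browsers will warn about the self-signed certificate, but this is sufficient to verify that the secret and the proxy configuration work end to end.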


Authentication

We recommend GitHub for user authentication: it is easy to configure, users are easy to manage, and it is directly supported by JupyterHub. Any other OIDC provider, such as LifeScience AAI, can be used here as well. See here for a full tutorial on how to create an OAuth2 app for GitHub. In summary, we first create a GitHub organization, then create an OAuth app for it, entering the URL of our JupyterHub platform and the callback URL (the platform URL followed by the path /hub/oauth_callback). After that, adding users to the GitHub organization is sufficient to authorize them to use the JupyterHub platform.


Deployment

  • To start the deployment, we need to create a config file (config.yaml). Several fields must be filled in with the values that were set earlier in this tutorial.
proxy:
  https:
    enabled: true
    hosts:
      - # TODO: add URL of the portal
    type: secret
    secret:
      name: jupyter-tls
  service:
    loadBalancerIP: # TODO: add the floating IP address

debug:
  enabled: false

cull:
  enabled: true

singleuser:
  image:
    name: jupyter/datascience-notebook
    tag: latest
  storage:
    capacity: 1Gi
    dynamic:
      storageClass: cinder-csi
    extraVolumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-pvc
    extraVolumeMounts:
      - name: data
        mountPath: /home/jovyan/data
        readOnly: true
  profileList:
    - display_name: "Standard"
      description: "4 CPUs + 32 GB RAM"
      kubespawner_override:
        cpu_limit: 4
        cpu_guarantee: 4
        mem_limit: "32G"
        mem_guarantee: "32G"

hub:
  config:
    Authenticator:
      admin_users:
        - # TODO: add your GitHub username or the username of any other user who shall be an admin of the platform
    GitHubOAuthenticator:
      client_id: # TODO: add your GitHub OAuth client id
      client_secret: # TODO: add your GitHub OAuth client secret
      oauth_callback_url: # TODO: add the callback URL (portal URL followed by /hub/oauth_callback)
      allowed_organizations:
        - # TODO: add the name of your GitHub organization
      scope:
        - read:org
    JupyterHub:
      authenticator_class: github
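The profile list can offer users a choice of several resource sizes. A hypothetical second entry (name and numbers are illustrative only) could be appended under singleuser.profileList:

```yaml
    # Additional entry under singleuser.profileList:
    - display_name: "Large"
      description: "8 CPUs + 64 GB RAM"
      kubespawner_override:
        cpu_limit: 8
        cpu_guarantee: 8
        mem_limit: "64G"
        mem_guarantee: "64G"
```

Users then pick a profile from a dropdown when their notebook server is started.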
  • Finally, we can create the deployment using the command:
helm upgrade --cleanup-on-fail \
  --install release jupyterhub/jupyterhub \
  --namespace jhub \
  --version=2.0.0 \
  --values config.yaml

If you have any questions or run into issues, please contact the Berlin cloud admin team.