Using SLATE to Enable Remote Desktop in Open OnDemand
Open OnDemand is a web application enabling easy access to high-performance computing resources. Through a plugin system, Open OnDemand provides many different ways to interact with these resources. Most simply, OnDemand can launch a shell to remote resources in one's web browser. Additionally, OnDemand provides several ways of submitting batch jobs and launching interactive computing sessions. It can also serve as a portal to computationally expensive software running on remote HPC nodes; for example, users can launch remote Jupyter Notebook or MATLAB instances.
The SLATE platform provides a simple way to deploy this application with a batch job scheduler in a containerized environment, with remote desktop and shell access to any desired backend compute resources.
This tutorial requires that you can install a basic OnDemand instance using SLATE as described here.
It is assumed that you already have access to a SLATE-registered Kubernetes cluster, and that you have installed and configured the SLATE command line interface. If not, instructions can be found at SLATE Quickstart.
The remote desktop application requires that autofs be available on the cluster you are installing on. The official Linux man pages provide more information here.
On backend resources, you must be able to install NFS/autofs, enable host-based authentication, and connect to an organizational LDAP. More information about host-based authentication can be found here.
First, obtain a configuration file for the Open OnDemand application with the SLATE client:
```bash
slate app get-conf open-ondemand > ood.yaml
```
This will save a local copy of the OnDemand configuration as a .yaml file. We will modify this configuration and eventually deploy Open OnDemand with it.
With your preferred text editor, open this configuration file and follow the instructions below.
To set up remote desktop access, we must configure the LinuxHost adapter, a simplified resource manager built from several component programs. By enabling resource management, you can configure interactive apps and manage sessions remotely.
To do this, set `enableHostAdapter` to `true` and fill in each cluster definition. If you're not sure what a field should be, leave it at the default for now. Most failures in connecting to backend resources are due to errors in these definitions.
After creating an entry for each backend resource you'd like to connect to, ensure that the `host_regex` field below captures all of the provided hostnames.
```yaml
- cluster:
    name: "Node1"
    host: "node1.example.net"
    enableHostAdapter: true
    job:
      ssh_hosts: "node1.example.net"
      site_timeout: 14400
      singularity_bin: /bin/singularity
      singularity_bindpath: /etc,/media,/mnt,/opt,/run,/srv,/usr,/var,/home
      singularity_image: /opt/centos7.sif  # Something like centos_7.6.sif
      tmux_bin: /usr/bin/tmux
      basic_script:
        - '#!/bin/bash'
        - 'set -x'
        - 'export XDG_RUNTIME_DIR=$(mktemp -d)'
        - '%s'
      vnc_script:
        - '#!/bin/bash'
        - 'set -x'
        - 'export PATH="/opt/TurboVNC/bin:$PATH"'
        - 'export WEBSOCKIFY_CMD="/usr/bin/websockify"'
        - 'export XDG_RUNTIME_DIR=$(mktemp -d)'
        - '%s'
      set_host: "$(hostname)"
- cluster:
    name: "Node2"
    ...
```
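For example, if your backend hosts were node1.example.net and node2.example.net, a pattern like the following (an illustrative value, not a chart default) would capture both:

```yaml
host_regex: 'node[12]\.example\.net'
```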
Here you must once again set `enableHostAdapter` to `true` and fill in the following entries. To find the group ID of your ssh_keys group, run `grep ssh_keys /etc/group`.
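The output looks something like the line below (the GID shown is illustrative); the third colon-separated field is the value to use for `ssh_keys_GID`:

```
ssh_keys:x:993:
```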
```yaml
enableHostAdapter: true        # Enable LinuxHost adapter
advanced:
  desktop: "mate"              # Desktop of your choice (mate, xfce, or gnome)
  node_selector_label: "ood"   # Matching node_selector_label (see next step)
  ssh_keys_GID: 993            # ssh_keys group ID
The chart must be installed on a properly configured node. On a multi-node cluster, it is necessary to set a node label called 'application' on the desired node, then match that label in the `values.yaml` file. If all nodes are properly configured, you may leave this field blank.
```bash
kubectl label nodes <node-name> application=ood
```
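To verify the label was applied, filter nodes on it:

```bash
kubectl get nodes -l application=ood
```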
Pods are ephemeral, so keys from the host system should be passed into the container using a secret. This ensures trust is not broken when pods are replaced. The following script generates a secret containing the host keys on the OnDemand server.
```bash
#!/bin/bash
# Generate a Kubernetes secret containing the host SSH keys.
read -p "Please enter a name for your secret: " secretName
if [ -z "$secretName" ]; then
  echo "Please enter a non-empty secret name" && exit 1
fi
read -p "Please select a namespace for your secret (default slate-group-slate-dev): " nameSpace
if [ -z "$nameSpace" ]; then
  nameSpace=slate-group-slate-dev
fi
# Build the kubectl command, adding each host key as a file entry
command="kubectl create secret generic $secretName -n $nameSpace"
for i in /etc/ssh/ssh_host_*; do
  command="$command --from-file=$i"
done
echo "$command"
$command
```
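To confirm the keys were stored, inspect the secret, substituting the name and namespace you entered above:

```bash
kubectl describe secret <secret_name> -n <namespace>
```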
In the SLATE configuration file, ensure that the `secret_name` field and `host_key` names match the secret you generated. If you're not sure what your host_key names are, list the contents of `/etc/ssh`.
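For example, on the OnDemand server:

```bash
ls /etc/ssh/ssh_host_*
```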
```yaml
# Provide names for each host key stored in your secret
secret_name: "ssh-key-secret"
host_keys:
  - host_key:
      name: "ssh_host_ecdsa_key"
  - host_key:
      name: "ssh_host_ecdsa_key.pub"
  - ...
```
Resource management for OnDemand also requires a distributed filesystem. This chart currently supports autofs.
```yaml
# Filesystem distribution
autofs: true
fileSharing:
  nfs_shares:
    - 'slate1 -rw slate.example.net:/export/mdist/slate1'
    - 'slate2 -rw slate.example.net:/export/mdist/slate2'
    - '...'
```
Now that Open OnDemand is ready to be deployed using `slate app install`, we should access each of our backend clusters and ensure that they are ready to establish a connection.
To enable resource management, you must install components of the LinuxHost adapter on each backend cluster.
To get a basic CentOS 7 image, run the following command, and ensure the resulting image is placed at the path given in the `singularity_image` field in the cluster definitions above.
```bash
singularity pull docker://centos:7
```
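The pull writes an image file (named `centos_7.sif` by recent Singularity versions) to the current directory; move it to the path used in your cluster definition. The paths below follow the earlier example and should be adapted to your setup:

```bash
# Match the destination to your singularity_image setting
mv centos_7.sif /opt/centos7.sif
```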
To establish a remote desktop connection, ports 5800+n, 5900+n, and 6000+n need to be open for each display number n, as well as port 22 for SSH and ports 20000 and above for websocket connections.
The easiest way to do this is to add a global iptables rule or a firewalld exception:
```bash
sudo iptables -A INPUT -s xxx.xxx.xxx.xxx/32 -j ACCEPT
sudo firewall-cmd --zone=trusted --add-source=xxx.xxx.xxx.xxx/32
```
Filesystem distribution must also be configured on the backend clusters so that user data persists between the OnDemand server and the backend resources.
To configure filesystem distribution, install `autofs`, then configure the `auto.master` file to mount NFS shares from an `auto.map` file. This map file's shares and mount points should be consistent with the entries in the SLATE app configuration described above. When everything is set up, run `systemctl enable nfs autofs` to ensure both services run at system startup.
```
slate1 -rw slate.example.net:/export/mdist/slate1
slate2 -rw slate.example.net:/export/mdist/slate2
...
```
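The matching `auto.master` entry might look like the following; the `/home` mount point and the map file's location are assumptions to adapt for your site:

```
/home /etc/auto.map --timeout=300
```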
The LinuxHost Adapter requires passwordless SSH for all users, which is most easily configured by establishing host-level trust (hostBasedAuthentication).
To do this, run the `ssh-keyscan` command below using the public IP address of the OnDemand host. This will populate the `ssh_known_hosts` file with the OnDemand server's public host keys.
For more detailed information, see the links in Prerequisites.
```bash
ssh-keyscan [ONDEMAND_HOST_PUBLIC_IP] >> /etc/ssh/ssh_known_hosts
```
Next, add an entry to `/etc/ssh/shosts.equiv` with the hostname of the OnDemand server, like so:

```
node1.example.net
node2.example.net
...
```
And in the `/etc/ssh/sshd_config` file, change the following entries from:

```
#HostbasedAuthentication no
#IgnoreRhosts yes
```

to:

```
HostbasedAuthentication yes
IgnoreRhosts no
```
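Then restart the SSH daemon so the new settings take effect:

```bash
sudo systemctl restart sshd
```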
Finally, ensure that the host keys in `/etc/ssh` have the correct permissions:

```
-rw-r-----. 1 root ssh_keys  227 Jan 1 2000 ssh_host_ecdsa_key
-rw-r--r--. 1 root root      162 Jan 1 2000 ssh_host_ecdsa_key.pub
-rw-r-----. 1 root ssh_keys  387 Jan 1 2000 ssh_host_ed25519_key
-rw-r--r--. 1 root root       82 Jan 1 2000 ssh_host_ed25519_key.pub
-rw-r-----. 1 root ssh_keys 1675 Jan 1 2000 ssh_host_rsa_key
-rw-r--r--. 1 root root      382 Jan 1 2000 ssh_host_rsa_key.pub
```
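If the private keys look different, commands along these lines would set the expected ownership and mode (this assumes the ssh_keys group already exists):

```bash
# Private keys should be root:ssh_keys with mode 640
sudo chgrp ssh_keys /etc/ssh/ssh_host_*_key
sudo chmod 640 /etc/ssh/ssh_host_*_key
```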
And that `ssh-keysign` has the correct permissions. It is typically located at `/usr/libexec/openssh`, though the location varies with distro:

```
---x--s--x. 1 root ssh_keys 5760 Jan 1 2000 ssh-keysign
```
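The listing above corresponds to mode 2111 (setgid plus execute); if yours differs, the following should reproduce it (adjust the path for your distribution):

```bash
sudo chgrp ssh_keys /usr/libexec/openssh/ssh-keysign
sudo chmod 2111 /usr/libexec/openssh/ssh-keysign
```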
To install the application using SLATE, run:
```bash
slate app install open-ondemand --group <group_name> --cluster <cluster_name> --conf /path/to/ood.yaml
```
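You can check on the new instance with the SLATE client; assuming the standard client commands, something like:

```bash
slate instance list --group <group_name>
```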
After a short while, your SLATE OnDemand application should be live at `<slate_instance_tag>.ondemand.<slate_cluster_name>.slateci.net`. Note that `<slate_instance_tag>` is the `instance` parameter specified in the `values.yaml` file, not the randomly assigned SLATE instance ID.
Navigate to this URL with any web browser, and you will be directed to a Keycloak login page. A successful login will then direct you to the Open OnDemand portal home page. Navigating to the shell access menu within the portal should allow you to launch in-browser shells to the previously specified backend compute resources.
Test User Setup
This Open OnDemand chart supports the creation of temporary test users for validating application functionality without the complexity of connecting to external LDAP and Kerberos servers. To add test users, navigate to the `testUsers` section of the configuration file and add the following YAML for each user you would like to add:
```yaml
- user:
    name: <username_here>
    group: <test_group>
    groupID: <1000+n>
    tempPassword: <temporary_password_here>
```
The following table lists the configurable parameters of the Open OnDemand application.
| Parameter | Description |
|---|---|
| `instance` | String to differentiate SLATE experiment instances. |
| … | The number of replicas to create. |
| … | The volume provisioner from which to request the Keycloak backing volume. |
| … | The amount of storage to request for the volume. |
| … | Set up LDAP automatically based on the following values. |
| … | URL to access LDAP at. |
| … | Import LDAP users to Keycloak. |
| … | Kerberos realm to connect to. |
| … | Kerberos server principal. |
| … | Use Kerberos for password authentication. |
| … | Writes additional debug logs if enabled. |
| `name` | Name of cluster to appear in the portal. |
| `host` | Hostname of cluster to connect to. |
| `job` | Configure remote desktop functionality. |
| `ssh_hosts` | Full hostname of the login node. |
| `singularity_bin` | Location of the singularity binary. |
| `singularity_bindpath` | Directories accessible during VNC sessions. |
| `singularity_image` | Location of the singularity image. |
| `tmux_bin` | Location of the tmux binary. |
| `basic_script` | Basic desktop startup script. |
| `vnc_script` | VNC session startup script. |
| `set_host` | Hostname passed from the remote node back to OnDemand. |
| `host_regex` | Regular expression to capture hostnames. |
| `enableHostAdapter` | Enable resource management and interactive apps. |
| `desktop` | Desktop environment (mate, xfce, or gnome). |
| `node_selector_label` | Matching node label for a preferred node. |
| `ssh_keys_GID` | Group ID of the ssh_keys group. |
| `secret_name` | Name of the secret holding host keys. |
| `host_keys` | Names of the stored keys. |
| `autofs` | Mount home directories using autofs. |
| `nfs_shares` | A map file with shares to be mounted by autofs. |
| `testUsers` | Unprivileged users for testing login to OnDemand. |