SLATE blog

How to Use JupyterLab to Submit Jobs to the Open Science Grid with SLATE

Our previous post JupyterLab and HTCondor with SLATE described deploying an HTCondor pool onto a SLATE-registered Kubernetes cluster, with job submission provided by a JupyterLab application. But what if you just want a JupyterLab instance capable of submitting jobs to the Open Science Grid?


Using Kubernetes Network Policies for SLATE applications

A major priority of SLATE is ensuring that our clusters and applications are secure. To better secure the applications, there is an ongoing effort to ensure that they all have built-in Network Policies. These allow a user or site administrator to limit exactly who has access to a given application.


Implementing Kubernetes Network Policies for SLATE applications

Security remains a major focus for the SLATE team and its collaborators. The team is configuring every application offered in the SLATE stable catalog with Kubernetes Network Policy hooks in its Helm deployment chart. As an application developer, building this functionality into your application will make many site administrators much more comfortable with your application being used on their clusters.
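To illustrate the idea, here is a minimal Kubernetes NetworkPolicy of the kind such a Helm chart hook might render. The names and labels (`example-app`, `jupyter-client`, port 8080) are hypothetical placeholders, not taken from any actual SLATE chart:

```yaml
# Hypothetical policy: only pods labeled app=jupyter-client in the same
# namespace may reach example-app's pods, and only on TCP port 8080.
# All other ingress traffic to those pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-app-policy
spec:
  podSelector:
    matchLabels:
      app: example-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: jupyter-client
      ports:
        - protocol: TCP
          port: 8080
```

In a Helm chart, the selector labels and allowed ports would typically be exposed as values, so a site administrator can tighten or widen access at deployment time without editing templates.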


SLATE Quarterly Updates

Welcome to another round of SLATE project updates! It’s been quite some time since our last update, so this post will be especially long. On the plus side, we have a lot of cool and interesting new things to tell you about!


JupyterLab and HTCondor with SLATE

JupyterLab is a great tool for data analysis, visualization, machine learning, and much more. Its web notebook interface lets users run code interactively and iterate on changes quickly. Often users need to scale up their work, which requires submitting jobs from the notebook to a backend cluster. We show how this can be done with HTCondor using SLATE.
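For readers unfamiliar with HTCondor, submission revolves around a submit description file. The sketch below is a generic example with hypothetical file names (`analyze.py`, `input.dat`); it is not taken from the post itself:

```
# example.sub -- hypothetical HTCondor submit description
executable = analyze.py
arguments  = input.dat
output     = job.$(ClusterId).out
error      = job.$(ClusterId).err
log        = job.$(ClusterId).log
queue 1
```

A job like this would be submitted with `condor_submit example.sub`; from a notebook, the same submission can be driven through HTCondor's Python bindings.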


Deploying and Testing an OSG Hosted CE via SLATE

An OSG Compute Element (CE) is an application that allows a site to contribute HPC or HTC compute resources to the Open Science Grid. The CE is responsible for receiving jobs from the grid and routing them to your local cluster(s). Jobs from the Open Science Grid are preemptible, and can be configured to run only when resources would have otherwise been idle. Resource providers can use OSG to backfill their cluster(s) to efficiently utilize resources and contribute to the shared national cyberinfrastructure.

The simplest way to start contributing resources to the OSG, for many sites, is via the “Hosted” CE. In the hosted case, installation and setup of the Compute Element is done by the OSG team, usually on a machine outside of your cluster, and uses standard OpenSSH as a transport for submitting jobs to your resources. With SLATE, we have simplified Hosted CE installation and made a shared operations model possible. Now the Compute Element can be hosted on your Kubernetes infrastructure on-prem and cooperatively managed by OSG and your local team. In this article, we’ll go through the steps for connecting a SLURM cluster to the Open Science Grid with our newly stabilized OSG Hosted CE Helm Chart.
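As a rough sketch of what the SLATE workflow looks like, deployment of a catalog application such as the Hosted CE follows the usual fetch-configure-install pattern of the SLATE client. The group and cluster names below are placeholders, and the exact configuration values needed for a SLURM site are covered in the full article:

```shell
# Fetch the chart's default configuration for editing
slate app get-conf osg-hosted-ce > osg-hosted-ce.yaml

# ... edit osg-hosted-ce.yaml with your site's details ...

# Install onto your SLATE-registered cluster (placeholder names)
slate app install osg-hosted-ce --group <your-group> \
  --cluster <your-cluster> --conf osg-hosted-ce.yaml
```

Because the instance runs on your own Kubernetes infrastructure, both your local team and OSG operators can inspect and manage it, which is what enables the shared operations model described above.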

