The SLATE Federation is tied together by a custom component called the SLATE API Server. It provides a uniform interface through which all SLATE users can request, use, and manage resources participating in the federation. It also enforces access control rules, so resource providers can regulate which SLATE groups may run services on their cluster and which services those groups may run. SLATE provides both the SLATE Command Line Client and the SLATE Dashboard for interacting with this API.
The NRP-Controller is a tool that enables Kubernetes clusters to join a federation with limited permissions. An external entity (such as the SLATE federation) can be granted access to only part of a Kubernetes cluster. This allows multi-tenancy, since SLATE can coexist with other uses of the same cluster, and it lets local cluster administrators retain full control of their clusters while participating in SLATE.
Docker is the primary container runtime used in SLATE. Our clusters run Docker containers orchestrated by Kubernetes, with the Docker Engine providing the operating-system-level abstractions that make containers possible. As a world leader in containerization with a strong open-source community, Docker was a natural choice for running containers in SLATE.
Singularity can be used in place of Docker as an alternative container runtime for Kubernetes. It adds security properties and features that have made it popular in many environments, including HPC. To learn more about running a Kubernetes cluster with Singularity, check out our recent blog post: Setting up a Kubernetes Cluster based on Singularity.
Kubernetes provides the underlying container orchestration for SLATE. Kubernetes, often abbreviated K8s, is the leading open-source system for deploying, managing, and automating containers. It abstracts container workloads into Deployments and Services that can be connected through a simple internal networking model. The SLATE client provides granular, secure access to the Kubernetes control plane and allows many geographically distributed clusters to be managed at once.
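As a generic illustration of that abstraction (the names below are hypothetical, not an actual SLATE application), a minimal Deployment and Service pair shows how a workload is described declaratively and then reached over the cluster's internal network:

```yaml
# A minimal Deployment running two replicas of a web server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name for illustration only
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
# A Service exposing the Deployment inside the cluster; other pods can
# reach it at the stable in-cluster DNS name "demo-app" on port 80,
# regardless of which nodes the replicas land on.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
```

The Service's label selector is what ties the two objects together: traffic sent to the Service is load-balanced across whichever pods currently carry the `app: demo-app` label.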
Helm is used to manage the SLATE applications housed in our application catalog. Often described as the package manager for Kubernetes, Helm bundles the multiple Kubernetes specs required to run a single application into a complete package called a chart. Its powerful templating engine lets us build a great deal of customizability into our applications, making Helm essential for tying our container technologies together.
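As a sketch of how that templating works (a generic chart layout, not an actual chart from the SLATE catalog), a chart pairs user-tunable defaults in `values.yaml` with templates that Helm renders at install time:

```yaml
# values.yaml -- user-tunable defaults for the chart
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml -- Helm substitutes the values when the
# chart is installed; {{ .Release.Name }} is the name chosen at install.
# apiVersion: apps/v1
# kind: Deployment
# metadata:
#   name: {{ .Release.Name }}
# spec:
#   replicas: {{ .Values.replicaCount }}
#   template:
#     spec:
#       containers:
#       - name: web
#         image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

A user can override any default at install time, e.g. `helm install my-release ./mychart --set replicaCount=3`, which is the mechanism SLATE applications use to expose site- and group-specific configuration.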
High-performance networking in SLATE is made possible by the ScienceDMZ network model. A ScienceDMZ is a segment of an institution's network whose equipment configuration and security policies are optimized for scientific computing. Developed by engineers at ESnet, the model includes dedicated data transfer systems, performance measurement utilities, and a specialized network architecture. SLATE clusters are designed to operate within the host institution's ScienceDMZ.
Our Console and authorization backend were developed using the Globus Modern Research Data Portal.
The SLATE platform relies on CILogon for integrated identity and access management.