Using Apptainer/Singularity

Overview

Apptainer/Singularity containers package applications so they can be run reproducibly on a variety of host systems. Apptainer can import most Docker containers without issue and runs as a user application, requiring no administrative privileges.

To use Apptainer on Rescale, image files (*.sif) must be built ahead of time in an Apptainer/Singularity build environment. For details, see the Apptainer Quick Start guide: https://apptainer.org/user-docs/master/quick_start.html.
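If you do not already have an image, one common way to produce a .sif file is to build it from a definition file. The definition below is a minimal illustrative sketch (the base image, package, and file names are assumptions, not Rescale-specific requirements):

```shell
# lolcow.def - a minimal example definition file (illustrative)
#
# Bootstrap: docker   -> pull the base image from Docker Hub
# %post               -> commands run once at build time
# %runscript          -> command executed by "apptainer run" or ./image.sif
#
# Bootstrap: docker
# From: ubuntu:22.04
#
# %post
#     apt-get update && apt-get install -y cowsay
#
# %runscript
#     exec /usr/games/cowsay "$@"

# Build the image (requires an Apptainer build environment, typically
# with root or fakeroot privileges):
apptainer build lolcow.sif lolcow.def
```

The resulting lolcow.sif can then be uploaded as an input file for a Rescale job.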

Once the image is created, create a new job and select the Bring Your Own Singularity Container software tile.


Apptainer/Singularity in Basic/DOE Mode

Containers that run in Rescale's Basic or DOE modes should use an exec command or a run script that returns control once the workload completes.

Example: exec

singularity pull library://lolcow
singularity exec lolcow_latest.sif cowsay moo

Example: run

singularity pull library://lolcow
singularity run lolcow_latest.sif

or

singularity pull library://lolcow
./lolcow_latest.sif

Apptainer/Singularity in End-to-End Desktop

If the containerized application provides a GUI, you can run that interface on Rescale's End-to-End Desktop. Rescale machine images include an X Windows server, which Apptainer/Singularity can use once access is granted with the xhost command.

To launch a GUI from a container, run the xhost + command before the command that launches the Apptainer/Singularity application.

xhost +
singularity exec <my_gui_application.sif> <path to gui command>

Additional Notes

As of version 2.3, Singularity supports containers that use GPUs to run CUDA applications, making it a useful choice for packaged deep learning jobs.

singularity exec --nv docker://rescale/tf-cnn-benchmarks:1.3.0 python /tf_cnn_benchmarks/tf_cnn_benchmarks.py --model resnet50 --batch_size 64 --gpus

The --nv flag in the command line above instructs Singularity to pass the host GPU interface through to the container, enabling CUDA applications to run inside it. This particular example runs the TensorFlow CNN benchmarks in a container on one or more GPUs.
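Before launching a full GPU workload, it can help to confirm that the container actually sees the host GPUs. A quick check (assuming the NVIDIA driver is installed on the host and my_image.sif is a placeholder for your own image) is:

```shell
# Run nvidia-smi inside the container with GPU passthrough enabled.
# If --nv is working, this lists the host's GPUs; without --nv it
# typically fails because the driver libraries are not mounted.
singularity exec --nv my_image.sif nvidia-smi
```

If this command lists the expected GPUs, the --nv passthrough is configured correctly and CUDA applications inside the container should be able to use them.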