Scientific Linux from CVMFS with Singularity


Singularity is a tool that enables users to work in any environment by loading prepared containers (Singularity images). This makes it possible to run SLC6-based software from CVMFS on Ubuntu (or any other OS with Singularity installed).

SLC6/Centos7 CVMFS Image setup with Singularity (updated May 17, 2019, GD)

ATLAS provides regularly updated images on /cvmfs for several Linux flavors used by ATLAS, such as SLC6 and Centos7. Just execute:

 source /project/etpsw/Common/bin/

This will set up the functions "slc6" and "centos7", which allow you to either invoke a new shell within an slc6/centos7 Singularity container or execute commands within that container. You can put this line in your .bashrc (or the startup config for the shell of your choice) to have these functions available every time you open a new terminal.

Invoke Shell inside SLC6/Centos7 Environment

In order to open a shell inside the slc6/centos7 environment you simply execute:

 slc6

or

 centos7

This will invoke a shell within the Singularity container. The command automatically tries to identify the shell you ran it from and uses that one if it is available inside the container, falling back to bash otherwise.

If you want to explicitly use a different shell inside the container from the one you normally use in a terminal, you can simply specify it as an additional argument. For example, for zsh you would do this:

 slc6 zsh

Note that this will obviously not work if the respective shell is not installed in the singularity image (bash and zsh usually work, others may not).

Note: If you use bash, a reasonable prompt (the one you also get on the Ubuntu host machine) is set up by default. This should provide a seamless transition from the host (Ubuntu) environment to the slc6 environment in the singularity container, even if the user doesn't have a custom bashrc yet.

Execute a Command inside SLC6/Centos7 Environment

If you want to run a command inside the slc6 environment you just specify it as an additional argument:

 slc6 cat /etc/issue
 centos7 cat /etc/issue

If you want to do some more extensive processing, it makes sense to put whatever you want to do into a shell script and pass its execution as the additional argument (make sure the script is executable):

 slc6 <shell_script>
 centos7 <shell_script>
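Such a script is just an ordinary executable shell script; a minimal sketch is shown below (the file name my_job.sh and its contents are hypothetical placeholders, run e.g. via "slc6 ./my_job.sh"):

```shell
#!/bin/bash
# my_job.sh -- hypothetical wrapper script for use with "slc6 ./my_job.sh"
set -e  # abort on the first failing command

# Print which OS release the script is actually running on
echo "Running on: $(head -n1 /etc/issue 2>/dev/null)"

# ... your environment setup and actual commands would go here ...
```

Remember to mark it executable first, e.g. with "chmod +x my_job.sh".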

Usage within Slurm Scripts

In order to use Singularity in Slurm jobs you just have to source the setup script in the Slurm batch submission script. You also have to be careful about how commands are handed over to slc6/centos7 within a script.

This does not work:

 source /project/etpsw/Common/bin/
 slc6
 # ATLAS setup
 export ATLAS_LOCAL_ROOT_BASE=/cvmfs/
 echo "source ${ATLAS_LOCAL_ROOT_BASE}/user/"
 source ${ATLAS_LOCAL_ROOT_BASE}/user/ -q

The lines following the bare slc6 call are not executed inside the slc6 environment but in the standard host environment.

Instead, put the commands in an extra script and hand the script over to slc6:

source /project/etpsw/Common/bin/
slc6 ./
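For reference, a complete Slurm submission script using this pattern might look like the sketch below (the job name, resources, and the script name run_analysis.sh are hypothetical placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=slc6-job        # hypothetical job name
#SBATCH --time=01:00:00            # hypothetical wall time
#SBATCH --output=slc6-job.%j.log   # hypothetical log file name

# Source the helper functions inside the batch script itself
source /project/etpsw/Common/bin/

# Hand the actual work over as an executable script
slc6 ./run_analysis.sh             # run_analysis.sh is a placeholder
```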

Or hand the commands over as a here-document:

 source /project/etpsw/Common/bin/
 slc6 <<EOF
 # ATLAS setup
 export ATLAS_LOCAL_ROOT_BASE=/cvmfs/
 echo "source ${ATLAS_LOCAL_ROOT_BASE}/user/"
 source ${ATLAS_LOCAL_ROOT_BASE}/user/ -q
 EOF

See More Info on Heredocs
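Independent of the container helpers, the here-document mechanism itself is plain shell: everything between <<EOF and the closing EOF line is fed to the command's standard input. A minimal self-contained illustration:

```shell
# Feed a multi-line block of commands to a fresh bash via a here-document;
# quoting the delimiter ('EOF') prevents variable expansion in the outer shell
bash <<'EOF'
msg="this runs in the child shell"
echo "$msg"
EOF
```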

Additional Information

Some additional information for the interested reader. This is based on Singularity version 2.2.1; details may change in newer versions.

Open a Shell inside an Image/Container

Open a shell inside a singularity image or container:

 singularity shell <singularity_image>

The shell is sh by default, so you might want to switch to your favourite one instead. You can also use the "--shell" option to specify directly on the singularity command which shell to use:

 singularity shell --shell /bin/bash <singularity_image>

Execute a Command

You can also just pass a command to singularity to be executed inside the image/container:

 singularity exec <singularity_image> cat /etc/issue

This can also be an executable shell script that contains the full chain of commands you want to execute:

 singularity exec <singularity_image> <shell_script>

Create new Singularity Images

In order to make a new singularity image it's most convenient to use a bootstrap file which holds the instructions for singularity to follow. You can find one for an slc6 image here:


It pulls a docker container from dockerhub as a base environment and installs some software on top of that. You'll find that the base docker container is very bare bones, so it's quite possible that you'll find some base functionality that is missing which should really be there (if you do, send some feedback to Thomas Maier). In order to build a new singularity image you simply execute these two commands:

 sudo singularity create --size 2048 cern-slc6.img
 sudo singularity bootstrap cern-slc6.img cern-slc6.def
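For orientation, a Singularity 2.x bootstrap definition pulling a Docker base image might look like the minimal sketch below (the base image tag and the packages in %post are example choices, not the contents of the actual cern-slc6.def):

```
Bootstrap: docker
From: centos:6

%post
    # Commands here run inside the new image at build time;
    # install whichever basic tools you find missing (examples only)
    yum -y install which tar wget
```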

Note that you have to have sudo rights to be able to generate and bootstrap new singularity images. On your etp working machine you most likely don't have these privileges, so you'll have to use some other means if you want to generate your own images (laptops are prime contenders here).

Making Persistent Changes in Images as User

In general, singularity is very strict when it comes to privileges when making changes to an image. For example, you have to be root to make a new image, but running a singularity image on any given host will always be done with the privileges that are provided to you on that host (usually you're a user without rights to change anything that is not owned by you). However, there is a way to have a private image that can be manipulated to make persistent changes as a user (e.g. installing some code). You can find an slc6 template image here:


This image contains a folder "/user", which is owned by root but whose permissions are set to rwx for all users. This means any user can create new directories/files inside this folder (which are then owned by that user). You can make persistent changes inside the image if you open it with this command:

 singularity shell --writable <private_image>

If you omit the "--writable" option, any changes you make inside "/user" won't be persistent, meaning they are discarded when you close the singularity shell and won't be there the next time you open a new one.

NOTE: you must first make your own copy of this image in order to manipulate it; whatever you want to persistently change must be owned by you.

You can also pass the "--writable" option to "singularity exec":

 singularity exec --writable <private_image> touch /user/file

Keep in mind that the image only has a limited size, as defined upon its creation. The size of the template above is ~3 GB, of which ~900 MB are used for system binaries, leaving you with ~2 GB for whatever you want to put inside the image. Sizeable files that don't necessarily have to live inside the image (like output files from analysis code) should be put on one of the "/project/etp*" partitions instead, which are bind-mounted inside the image automatically (assuming you run the image on a machine inside the etp cluster).

If you want to share an image you've manipulated with others, you should make sure that they're able to make changes to whatever part of "/user" that you want them to be able to modify. You can just set the permissions for a given folder to rwx for every user, like this:

 chmod -R a+rwx <folder>
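This works the same on any host, no container involved; a quick self-contained illustration using a throwaway folder name:

```shell
# Create a small directory tree, open it up recursively, and check the result
mkdir -p demo_share/sub
touch demo_share/sub/file.txt
chmod -R a+rwx demo_share
stat -c '%a' demo_share/sub/file.txt   # prints 777
rm -rf demo_share                      # clean up the throwaway tree
```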

Note that the "-R" option makes sure that all sub-folders and files are also set to the same permissions (which you probably want). If you want to get really fancy, you could also change ownership of the folder (and its sub-folders/files) to the user you are sharing the image with:

 chown -R <uid> <folder>

The uid can be obtained by executing this command on any machine inside the etp cluster:

 id -u <username>

However, the caveat is that executing "chown" requires sudo privileges, meaning you have to invoke the singularity shell with sudo (something you probably have to do on your own laptop). If you want to keep the image as portable as possible, e.g. for usage on C2PAP or (eventually) LRZ, you probably just want to stick to changing permissions for the respective parts in "/user". In other environments you most likely won't have the same UID and GID associations, so trying to adapt the UID to a specific user on the etp cluster is meaningless for other systems.

Links for Reference

Singularity website with documentation

Documentation by HPC @ NIH