Micro-services
Scientific computing tasks
Follow these steps:
# Connect to the interactive machine using SSH
$ ssh -l user cca.in2p3.fr
# Activate the Singularity 3.3.0 environment
$ ccenv singularity 3.3.0
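To confirm that the environment is active, you can ask Singularity for its version (a quick sanity check; the exact output string may differ between releases):
$ singularity --version
singularity version 3.3.0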
From Singularity Hub:
$ singularity pull shub://vsoch/hello-world
From Docker Hub:
$ singularity pull docker://godlovedc/lolcow
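By default, pull names the resulting file after the image. With Singularity 3.x you can also give the destination file explicitly as the first argument (the filename below is just an example):
$ singularity pull lolcow.sif docker://godlovedc/lolcow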
$ singularity shell hello-world_latest.sif
Singularity: Invoking an interactive shell within container...
Singularity hello-world_latest.sif:~> cat /etc/issue
Ubuntu 14.04.6 LTS \n \l
To quit:
CTRL-d or exit
# Back on the host:
$ cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
$ singularity exec hello-world_latest.sif cat /etc/issue
Ubuntu 14.04.6 LTS
$ singularity exec hello-world_latest.sif ls /
anaconda-post.log etc lib64 mnt root singularity tmp
bin home lost+found opt run srv usr
dev lib media proc sbin sys var
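Besides shell and exec, an image that defines a runscript can be launched directly with run; the hello-world image ships such a script (its output is omitted here):
$ singularity run hello-world_latest.sif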
user1@hst:~$ singularity shell hello-world_latest.sif
Singularity hello-world_latest.sif:~> whoami
user1
Singularity hello-world_latest.sif:~> id
uid=1000(user1) gid=1000(user1) groups=1000(user1),4(adm),[…]
→ The container is instantiated with the permissions of the user spawning it.
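To see this in practice: a file created from inside the container belongs to the calling user on the host (your home directory is mounted by default; the filename and timestamp below are illustrative):
$ singularity exec hello-world_latest.sif touch ~/created_inside.txt
$ ls -l ~/created_inside.txt
-rw-r--r-- 1 user1 user1 0 Jan 1 00:00 /home/user1/created_inside.txt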
user1@hst:~$ singularity shell hello-world_latest.sif
Singularity hello-world_latest.sif:~> pwd
/home/user1
test@hst:/home/user1$ singularity shell hello-world_latest.sif
Singularity hello-world_latest.sif:~> pwd
/home/test
Singularity hello-world_latest.sif:~> ls /home/user1
ls: cannot access /home/user1: No such file or directory
This requires the --bind (or -B) option:
→ -B src:dst
$ ls ~/tests
test.py
$ singularity shell -B ~/tests:/mnt hello-world_latest.sif
Singularity hello-world_latest.sif:~> ls /mnt
test.py
No need to specify dst if src == dst:
$ cat /mnt/test.txt
This is a test
$ singularity shell -B /mnt hello-world_latest.sif
Singularity hello-world_latest.sif:~> cat /mnt/test.txt
This is a test
Mounting several directories at once:
$ singularity shell -B /mnt,~/tests:/mnt1 hello-world_latest.sif
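The same bind specifications can also be passed through the SINGULARITY_BINDPATH environment variable, which is convenient in job scripts (equivalent to the -B flags above):
$ export SINGULARITY_BINDPATH="/mnt,$HOME/tests:/mnt1"
$ singularity shell hello-world_latest.sif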
Exercise: create a file on the host, then read it from inside the container (one possible solution follows the expected output).
$ ccenv singularity 3.3.0-rc.1
$ cd ~
$ echo "It's alive!" > test.txt
$ singularity exec -B [...] hello-world_latest.sif cat [...]/test.txt
It's alive!
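One possible solution, binding your home directory onto /mnt inside the container (the mount point is an arbitrary choice):
$ singularity exec -B $HOME:/mnt hello-world_latest.sif cat /mnt/test.txt
It's alive!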
Storage | Contents                               | Path
CVMFS   | Image storage provided by the CC-IN2P3 | /cvmfs/singularity.in2p3.fr/images/
PBS     | Your own images                        | your home directory under /pbs
The images in CVMFS are organised by usage (HPC/HTC, CPU/GPU, ...) and are maintained by the CC-IN2P3.
In /pbs, you manage your own images.
How to upload an image to the CC-IN2P3:
$ scp mycontainer.sif formationX@cca.in2p3.fr:
How to submit a batch job using Singularity:
-- hello.sh --
#!/bin/bash
/bin/singularity exec \
    /cvmfs/singularity.in2p3.fr/images/HTC/ubuntu/ubuntu1804-CC3D.simg \
    /bin/hostname
-- Wrapper submission --
$ qsub -q long /pbs/home/john/hello.sh
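Once submitted, the job can be followed with the usual Grid Engine commands (shown here as a reminder; see the CC-IN2P3 documentation for the full set):
$ qstat -u $USER      # list your pending and running jobs
$ qacct -j <job_id>   # accounting details once the job has finished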
Singularity 3.X from an interactive machine:
# List the Singularity versions available @ CC-IN2P3
$ ccenv singularity --list
Software:      Version:
singularity:   - 3.0.3
               - 3.1.1
               - 3.3.0
# Activate the desired version
$ ccenv singularity 3.3.0
From an interactive machine:
# List the software available @ CC-IN2P3
$ ccenv --list
Software:
- anaconda
- cctools
- cmake
# List the available versions for a specific software
$ ccenv <software> --list
# Set up a specific version of a given software
$ ccenv <software> <version>
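For example, to inspect and activate a given anaconda release (the version number is purely illustrative):
$ ccenv anaconda --list
$ ccenv anaconda 5.3.1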
Singularity 3.X on a worker node, i.e. from the batch system:
source /pbs/software/centos-7-x86_64/singularity/ccenv.[c]sh 3.3.0
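A minimal job script combining this with the earlier hello.sh example might look as follows (bash variant of the ccenv script; image path taken from above):
-- job.sh (illustrative) --
#!/bin/bash
# Activate Singularity on the worker node, then run a command inside a CVMFS image
source /pbs/software/centos-7-x86_64/singularity/ccenv.sh 3.3.0
singularity exec \
    /cvmfs/singularity.in2p3.fr/images/HTC/ubuntu/ubuntu1804-CC3D.simg \
    /bin/hostname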
Submitting a job on the GPU farm?
You can find all the available images, as well as the compatible Deep Learning Python modules, here:
https://gitlab.in2p3.fr/ccin2p3-support/c3/hpc/gpu
Submission command (from an interactive machine):
$ qsub -l sps=1,GPU=<nb_gpus>,GPUtype=<K80|V100> -q <queue> -pe multicores_gpu 4 \
      -o <output_path> -e <error_path> -V <path_to>/batch_launcher.sh
batch_launcher.sh (on the worker node)
#!/bin/bash
/bin/singularity exec --nv --bind /sps:/sps --bind /pbs:/pbs <image_path> <path_to>/start.sh
start.sh (executed inside the Singularity container on the worker node)
#!/bin/bash
source <path_to_python_env> activate <env>
python <path_to>/program.py
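Putting the three pieces together, a concrete submission could look like this (GPU count and log paths are illustrative; pick the queue from the GitLab page above):
$ qsub -l sps=1,GPU=1,GPUtype=V100 -q <queue> -pe multicores_gpu 4 \
      -o ~/logs/out.log -e ~/logs/err.log -V ~/batch_launcher.sh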