Apptainer is available in the version shown in the table below. Since it is provided as a package that comes with the operating system, there is no need to set up the environment with module files.
Version | Module File Name | System A | System B/C | System G | Cloud System | Note |
---|---|---|---|---|---|---|
1.1.9 | none | + | + | + | + | Former name: Singularity |
+ : Available for all users - : Not available
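Because Apptainer is installed as an OS package, the command can be used as soon as you log in. For reference, the installed version can be confirmed as follows (example output; the version shown may differ from the table above):
$ apptainer --version
apptainer version 1.1.9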
Create a job script (test.sh) to run a program (a.out) that performs hybrid parallelism, using a container image called ubuntu.sif.
#!/bin/bash
#============ Slurm Options
#SBATCH -p gr19999b # Specify the job queue (partition). The name of the queue to be submitted needs to be changed accordingly.
#SBATCH -t 1:00:00 # Specify the elapsed time (example: to specify one hour).
#SBATCH --rsc p=2:c=40 # Specify the required resources (example: hybrid parallel using 2 processes and 40 cores).
#SBATCH -o %x.%j.out # Specify the standard output file of the job. %x is replaced by the job name and %j by the job ID.
#============ Shell Script ============
set -x
## When you do not use GPU
srun apptainer exec --bind `pwd`,/opt/system,/usr/lib64 --env LD_LIBRARY_PATH=/usr/lib64 ubuntu.sif ./a.out
## When you use GPU (Adding --nv makes it GPU-compatible.)
srun apptainer exec --nv --bind `pwd`,/opt/system,/usr/lib64 --env LD_LIBRARY_PATH=/usr/lib64 ubuntu.sif ./a.out
Option | Meaning | Example |
---|---|---|
--bind MOUNT_DIR | Specify the directory to mount. | --bind /LARGE0,/LARGE1,/opt/system,/usr/lib64 |
--env ENVIRONMENT | Specify environment variables. | --env LD_LIBRARY_PATH=/usr/lib64 |
--nv | Add when you use GPU. | --nv |
Submit the job using the job script created in step 1.
$ sbatch test.sh
Submitted batch job {jobid}
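While the job is waiting or running, its status can be checked with the standard Slurm squeue command (a general example; display options may vary):
$ squeue -u $USER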
Confirm the execution results.
$ cat test.sh.{jobid}.out
================ Slurm Info ================
DATE = 2023-03-08T14:38:10+09:00
PARTITION = gr19999d
JOB_ID = {jobid}
JOB_NAME = test.sh
NNODES = 1
RSC_OPT = p=1:t=8:c=8:m=8G
============================================
 _________
< hello!! >
 ---------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Here, we use the Ubuntu 22.04 Docker image as the base and create a container image with Python 3 and TensorFlow installed.
Create a recipe file to generate the container and save it as ubuntu22_tensorflow.def.
Bootstrap: docker
From: ubuntu:22.04
%post
apt-get update
apt-get install -y python3 python3-pip
pip3 install tensorflow
Execute a build of the container image using the recipe file (ubuntu22_tensorflow.def) generated in step 1. The container image is saved as ubuntu22_tensorflow.sif.
$ apptainer build --fakeroot {Container image storage location} {Recipe file for generating the container image}
(Example)
$ apptainer build --fakeroot ~/ubuntu22_tensorflow.sif ubuntu22_tensorflow.def
INFO: Starting build...
Getting image source signatures
(snip)
INFO: Creating SIF file...
INFO: Build complete: ubuntu22_tensorflow.sif
Operation Check
## Start the container.
$ apptainer shell ~/ubuntu22_tensorflow.sif
## Launch Python 3
Apptainer> python3
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
## Check the operation of Tensorflow
>>> import tensorflow as tf
>>> tf.__version__
'2.11.0'
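The same check can also be run non-interactively with apptainer exec (an illustrative one-liner; it simply prints the installed TensorFlow version):
$ apptainer exec ~/ubuntu22_tensorflow.sif python3 -c 'import tensorflow as tf; print(tf.__version__)'
2.11.0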
Move to the directory where the Apptainer container image is stored.
$ cd container_image
Download a container image from DockerHub and build it.
$ apptainer build lolcow.sif docker://godlovedc/lolcow
INFO: Starting build...
Getting image source signatures
(snip)
INFO: Creating SIF file...
INFO: Build complete: lolcow.sif
Confirm that the container image has been created.
$ ls lolcow.sif
lolcow.sif
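As a simple operation check, the image's default runscript can be executed with apptainer run (for this lolcow image it prints a random fortune drawn by cowsay, so the output differs on each run):
$ apptainer run lolcow.sif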
This is an example of operations using OpenFOAM, which is available on DockerHub. The tutorial data that comes with OpenFOAM is used for execution. This is only an example of confirmed operation, so please make the necessary preparations and adjustments depending on what you want to implement.
Go to the directory where the Apptainer container image is stored.
$ cd container_image
Download the container image from DockerHub and build it.
$ apptainer build openfoam10-paraview510.sif docker://openfoam/openfoam10-paraview510
(Omitted)
INFO: Creating SIF file...
INFO: Build complete: openfoam10-paraview510.sif
Copy the test data.
$ cp -a /opt/system/app/openfoam/10/intel-2022.3-impi-2022.3/OpenFOAM-10/tutorials/basic/scalarTransportFoam/pitzDaily .
$ cd pitzDaily
$ mkdir -p resources/blockMesh
$ cp -a /opt/system/app/openfoam/10/intel-2022.3-impi-2022.3/OpenFOAM-10/tutorials/resources/blockMesh/pitzDaily resources/blockMesh/
Save a system/decomposeParDict file for MPI parallelism (here the case is split into 2 subdomains, matching the 2 processes requested in the job script below) with the following contents.
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  10
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    format      ascii;
    class       dictionary;
    note        "mesh decomposition control dictionary";
    object      decomposeParDict;
}

numberOfSubdomains 2;

method          scotch;
#!/bin/bash
#============ SBATCH Directives =======
#SBATCH -p gr19999b # Please specify the available queues
#SBATCH -t 1:00:00 # Specify an elapsed time limit of 1 hour
#SBATCH --rsc p=2:c=1 # Resource request for 2 processes (2 processes x 1 core)
#SBATCH -o %x.%A.out # Standard Output Destination
#============ Shell Script ============
# Overview :
# By making the OpenMPI installed on the host side visible inside the container as well,
# OpenFOAM is run in parallel through the container with an MPI implementation that supports InfiniBand and Slurm.
#
#
# Notes on the apptainer options
# --bind :
# /opt/system/app # To use openmpi installed on the host side, mount it on the container side.
# /usr/lib64:/usr/lib64/host # Mount the directory that contains the required libraries to run the host application (openmpi in this case) on the container side.
# # To avoid conflicts with local libraries in containers, mount them by specifying /usr/lib64/host as the mount destination.
# --env
# LD_LIBRARY_PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/lib:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib64/host
# # Since OpenMPI is bundled within the OpenFOAM container, /opt/system/app/openmpi/4.0.5/gnu-8.5.0/lib
# # is set first in order to give priority to the MPI on the host side.
# # The second and third entries set the Ubuntu local library paths so that the applications in the container work properly.
# # The fourth entry, /usr/lib64/host, is set last so that the InfiniBand-related libraries on which OpenMPI depends are found.
# # Consideration is required to avoid library conflicts depending on the host and container situation.
#
# PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/bin:/opt/openfoam10/bin:/usr/bin:/bin \
# # Add to PATH the directories containing the commands to be recognized on the command line. Not needed if you use absolute paths.
# # In this example, the OpenFOAM directory and /usr/bin, /bin in the container are set, and the OpenMPI on the host side is added to the top of the PATH so that it is given top priority.
#
# Execute blockMesh with 1 process
srun -n 1 apptainer exec \
--bind /opt/system/app,/usr/lib64:/usr/lib64/host \
--env LD_LIBRARY_PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/lib:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib64/host \
--env PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/bin:/opt/openfoam10/bin:/usr/bin:/bin \
../openfoam10-paraview510.sif \
foamExec blockMesh -dict resources/blockMesh/pitzDaily
# Execute decomposePar with 1 process
srun -n 1 apptainer exec \
--bind /opt/system/app,/usr/lib64:/usr/lib64/host \
--env LD_LIBRARY_PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/lib:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib64/host \
--env PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/bin:/opt/openfoam10/bin:/usr/bin:/bin \
../openfoam10-paraview510.sif \
foamExec decomposePar
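The script above only generates the mesh and decomposes the case into two subdomains; the parallel solver run itself is not included. A minimal sketch of that additional step, assuming the scalarTransportFoam solver used by the copied pitzDaily tutorial and the same --bind/--env settings as above, could be appended to the script as follows:
# Execute scalarTransportFoam with 2 processes (sketch; assumes the case was decomposed into 2 subdomains)
srun -n 2 apptainer exec \
 --bind /opt/system/app,/usr/lib64:/usr/lib64/host \
 --env LD_LIBRARY_PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/lib:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib64/host \
 --env PATH=/opt/system/app/openmpi/4.0.5/gnu-8.5.0/bin:/opt/openfoam10/bin:/usr/bin:/bin \
 ../openfoam10-paraview510.sif \
 foamExec scalarTransportFoam -parallel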
$ sbatch jobscript.sh