OpenMPI Library

Version   System A   System B   System C
3.1.4        ―          +          +
2.0.1        ―          +          +

+ : Available for all users, ― : Not available for use

Version   Compiler   Modulefile Name
3.1.4     Intel      openmpi/3.1.4_intel-18.0
"         GNU        openmpi/3.1.4_gnu-4.8
2.0.1     Intel      openmpi/2.0.1_intel-17.0
"         PGI        openmpi/2.0.1_pgi-16.10
"         GNU        openmpi/2.0.1_gnu-4.8

The OpenMPI Library is built separately for the Intel, PGI, and GNU compilers. To set up the environment for the compiler you wish to use, execute the module commands as below:

(When using the Intel compiler)
$ module switch PrgEnv-(the current env.) PrgEnv-intel
$ module load openmpi/3.1.4_intel-18.0
(When using the PGI compiler)
$ module switch PrgEnv-(the current env.) PrgEnv-pgi
$ module load openmpi/2.0.1_pgi-16.10
(When using the GNU compiler)
$ module switch PrgEnv-(the current env.) PrgEnv-gnu
$ module load openmpi/3.1.4_gnu-4.8

For details on the module commands, see Modules.

The threading support level provided by the OpenMPI Library is MPI_THREAD_SINGLE. Note that MPI functions may be called from only ONE thread.

Compile commands

Language   Command   Usage
Fortran    mpif90    mpif90 [sequence_of_options] sequence_of_files
C          mpicc     mpicc [sequence_of_options] sequence_of_files
C++        mpic++    mpic++ [sequence_of_options] sequence_of_files

Examples of Compiling

$ mpif90 -O3 sample_mpi.f90

To run a compiled MPI program, use the mpiexec command. In interactive mode, place the mpiexec command after the tssrun command.

In both interactive mode and batch mode, specify the number of parallel processes as the argument p of the -A option, and specify the same number as the argument of the -n option of the mpiexec command. In addition, when running the program in combination with thread-level parallelism such as OpenMP, specify the number of processes to run per node with the -npernode option.
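To see how these numbers fit together, the following sketch (a hypothetical helper for illustration, not part of the system's tooling) computes the node and core layout implied by the -A resource values and the -npernode argument:

```python
def mpi_layout(p, t, npernode):
    """Derive the layout implied by -A p=<p>:t=<t> and mpiexec -npernode.

    p        : total number of MPI processes (-A p=, same value as mpiexec -n)
    t        : threads per process (-A t=; the cores-per-process value c
               is typically set to the same number)
    npernode : MPI processes placed on each node (mpiexec -npernode)
    """
    assert p % npernode == 0, "p must be divisible by npernode"
    nodes = p // npernode              # nodes needed to hold all processes
    cores_per_node = npernode * t      # cores used on each node
    return {"nodes": nodes,
            "cores_per_node": cores_per_node,
            "total_cores": p * t}

# The hybrid example below: -A p=4:t=8:c=8 with mpiexec -n 4 -npernode 2
print(mpi_layout(p=4, t=8, npernode=2))
# → {'nodes': 2, 'cores_per_node': 16, 'total_cores': 32}
```

That is, 4 MPI processes at 2 per node occupy 2 nodes, and each node runs 2 × 8 = 16 OpenMP threads.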

For details, see Examples.

Example (in interactive mode)

$ tssrun -A p=2 mpiexec -n 2 ./a.out

For details on interactive mode, see Interactive Processing.

  • Running with 8-fold parallelization

    $ tssrun -A p=8 mpiexec -n 8 ./a.out
  • Running in combination with thread parallelization (MPI 4-fold parallelization, OpenMP 8-fold parallelization)

    $ tssrun -A p=4:t=8:c=8 mpiexec -n 4 -npernode 2 ./a.out

For details of batch mode, see Batch Processing (For System B and C).

  • Running with 8-fold parallelization

    $ cat sample.sh
    #!/bin/bash
    #QSUB -q gr19999b
    #QSUB -ug gr19999
    #QSUB -W 5:00
    #QSUB -A p=8
    mpiexec -n $QSUB_PROCS -npernode $QSUB_PPN ./a.out
    $ qsub sample.sh
  • Running in combination with thread parallelization (MPI 4-fold parallelization, OpenMP 8-fold parallelization)

    $ cat sample.sh
    #!/bin/bash
    #QSUB -q gr19999b
    #QSUB -ug gr19999
    #QSUB -W 5:00
    #QSUB -A p=4:t=8:c=8:m=4G
    #QSUB -o %J.out
    mpiexec -n $QSUB_PROCS -npernode $QSUB_PPN ./a.out
    $ qsub sample.sh
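When many similar jobs are submitted, the #QSUB header above can also be assembled programmatically. The sketch below is a hypothetical convenience helper (not provided by the system) that reproduces the structure of the batch scripts shown above from the resource parameters:

```python
def make_job_script(queue, group, walltime, p, t=None, c=None, mem=None):
    """Build a minimal batch script with #QSUB directives as used above.

    queue/group/walltime map to -q, -ug, and -W; p, t, c, and mem are
    joined into the -A resource string (e.g. p=4:t=8:c=8:m=4G).
    """
    resource = f"p={p}"
    if t is not None:
        resource += f":t={t}"
    if c is not None:
        resource += f":c={c}"
    if mem is not None:
        resource += f":m={mem}"
    lines = [
        "#!/bin/bash",
        f"#QSUB -q {queue}",
        f"#QSUB -ug {group}",
        f"#QSUB -W {walltime}",
        f"#QSUB -A {resource}",
        "mpiexec -n $QSUB_PROCS -npernode $QSUB_PPN ./a.out",
    ]
    return "\n".join(lines) + "\n"

# Reproduce the hybrid MPI/OpenMP script from the second example
print(make_job_script("gr19999b", "gr19999", "5:00", p=4, t=8, c=8, mem="4G"))
```

The generated text can be written to a file and submitted with qsub as shown above.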


Copyright © Academic Center for Computing and Media Studies, Kyoto University, All Rights Reserved.