Platform for Scientific Computing

WR Cluster Usage



Users have NIS accounts that are valid on all cluster nodes. Passwords can be changed with the passwd command; it may take some time (up to several minutes) until such a change is seen by all nodes. Please be aware that the account names are the same as your university account names, but the accounts themselves (including passwords) are separate.

File Systems

Each node (server as well as cluster nodes) has its own operating system on a local disc. Certain shared directory subtrees are exported via NFS to all cluster nodes. This includes user data (e.g. $HOME = /home/username) as well as commonly used application software (e.g. /usr/local).

The /tmp directory is guaranteed on all nodes to be located on a node-local (and fast) filesystem. Within a batch job, the environment variable $TMPDIR contains the name of a job-private fast local directory (somewhere in /tmp on a node; see the additional description in the Temporary Files section below). If possible, use this dynamically set environment variable for temporary files that are used only within one job run. The /scratch directory can be used for larger amounts of temporary data that must remain available longer than one batch job run. This directory is shared between all nodes, and access to it is slow. Please be aware that data on /tmp filesystems may be deleted without notice, and that there is no backup for the scratch filesystem!
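As a sketch, a job script fragment that uses the job-private directory when the batch system provides $TMPDIR and falls back to /tmp otherwise (the data file name is a placeholder):

```shell
#!/bin/sh
# Use the job-private fast directory when the batch system sets $TMPDIR;
# fall back to /tmp otherwise (scratch.dat is a placeholder file name).
WORKDIR=${TMPDIR:-/tmp}
echo "temporary files go to $WORKDIR"
dd if=/dev/zero of="$WORKDIR/scratch.dat" bs=1024 count=16 2>/dev/null
# ... run your program against $WORKDIR/scratch.dat ...
rm -f "$WORKDIR/scratch.dat"
```

Data written below $TMPDIR is removed automatically when the job finishes, so explicit cleanup as in the last line is only needed for the /tmp fallback.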
mount point  located  purpose                    shared on all nodes  daily backup  default soft quota
/            local    operating system           no                   no            -
/home        server   user data                  yes                  yes           80 GB
/usr/local   server   application software       yes                  yes           40 GB
/tmp         local    node-local temporary data  no                   no            10 GB
/scratch     server   shared temporary data      yes                  no            5 TB

We have established quotas. Users can check their own quota with the command quota -s. The output shows internal device names, which can be translated to user filesystems as follows:
internal name       filesystem
/dev/md0            / (including /tmp)
/dev/md1            /usr/local
/dev/md2            /home
wr1:/raid1/scratch  /scratch
The maximum number of files per filesystem is by default restricted to 1 million (soft limit) / 2 million (hard limit). For the /scratch filesystem, the limits are 5 million / 6 million. If you need more file space than the default quota allows, please contact the system administrator.

File and Directory Names

Do not use spaces or umlauts in file or directory names, e.g. when copying files from an MS Windows system. Otherwise you will run into trouble retrieving result files from the batch system (result code 2).

Software Packages

Besides a set of standard software packages, users can extend their package list with additional software packages or package versions. This is done by the users themselves using the module command with one of several subcommands. A software environment managed this way is called a module. Loading a module usually means that the search paths for commands, libraries etc. are extended internally.


Important subcommands are module avail (list all available modules), module load (load a module), module unload (unload a module) and module whatis (show a short description of a module). A module may exist in several versions, and the user can work with one specific version of choice. If no version is specified during the load, a default version is used. It is good practice to always use the default version of a module, even though the concrete version behind the default may change over time. Most modules are backward compatible, so no problems should arise in this case, and you always get the most up-to-date version of a module.

Example: Instead of

user@wr0: module load gcc/6.2.0
just use

user@wr0: module load gcc


user@wr0: module avail

---------------------------------------------- /usr/local/modules/modulesfiles ----------------------------------------------
acml-mp/5.3.1           gnuplot/5.0.0           intel-inspector/2015    octave/4.0.0            pgi/15.10
acml-sp/5.3.1           gnuplot/default         intel-inspector/default octave/default          pgi/15.7
binutils/2.25           grace/5.1.23            intel-vtune/2013        ompp/0.8.3              pgi/default
binutils/default        hwloc/1.10.0            intel-vtune/2015        ompp/0.8.5              R/3.0.2
cmake/          hwloc/1.11.0            intel-vtune/2016        ompp/default            R/3.2.3
cmake/3.4.1             hwloc/1.11.2            intel-vtune/default     opencl/intel            R/default
cmake/default           hwloc/1.9               java/7                  opencl/nvidia           sage/6.1.1
cuda/5.5                hwloc/1.9.1             java/8                  opencl/nvidia-6.5       sage/6.9
cuda/6.0                hwloc/default           java/default            opencl/nvidia-7.0       sage/default
cuda/6.5                intel-advisor/2013      likwid/3.1.2            opencl/nvidia-7.5       scalasca/2.0
cuda/7.0                intel-advisor/2015      likwid/4.0.1            openmpi/gnu             scilab/5.5.2
cuda/7.5                intel-advisor/default   likwid/default          openmpi/intel           scilab/default
cuda/default            intel-icc/2013-32       matlab/default          papi/5.4.0              solaris-studio/12.3
gcc/4.8.2               intel-icc/2013-64       matlab/R2015b           papi/5.4.1              texlive/2014
gcc/4.9.0               intel-icc/2015          metis/5.1.0-gcc-32      papi/default            texlive/2015
gcc/4.9.1               intel-icc/2015-32       metis/5.1.0-gcc-64      pgi/14.1                texlive/default
gcc/4.9.2               intel-icc/2015-64       metis/5.1.0-icc-32      pgi/14.10
gcc/5.2.0               intel-icc/2016          metis/5.1.0-icc-64      pgi/14.6
gcc/5.3.0               intel-icc/default       mpe/intel               pgi/14.7
gcc/default             intel-inspector/2013    octave/3.8.0            pgi/15.1

---------------------------------------------- /usr/share/Modules/modulefiles -----------------------------------------------
dot         module-cvs  module-info modules     null        use.own

----------------------------------------------------- /etc/modulefiles ------------------------------------------------------
compat-openmpi-psm-x86_64 compat-openmpi-x86_64     openmpi-x86_64

user@wr0: module whatis gcc/4.8.2
gcc/4.8.2            : GNU compiler suite version 4.8.2

# check current compiler version
user@wr0: gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)

# load version 4.8.2
user@wr0: module load gcc/4.8.2
user@wr0: gcc --version
gcc (GCC) 4.8.2

# unload version 4.8.2
user@wr0: module unload gcc/4.8.2
user@wr0: gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)

Available Modules

A list of software packages (possibly with several versions) that are handled using the module command:
name purpose
acml-[mp|sp] AMD ACML library (multi- / single-processor version)
binutils GNU binutils
cmake CMake system
cuda CUDA development and runtime environment
gcc GNU compiler suite
gnuplot plot program
grace 2D plotting
hwloc detect hardware properties
intel-advisor Intel advisor
intel-icc Intel Compiler environment
intel-inspector Intel Inspector
intel-vtune Intel VTune
java Oracle Java environment
likwid development tools
matlab Matlab mathematical software with toolboxes
metis graph partitioning package
mpe special MPI environment
octave GNU octave
ompp OpenMP tool
opencl OpenCL
openmpi OpenMPI environment
papi Papi performance counter library
pgi PGI compiler suite
R Mathematical R system
sage Mathematical sage environment
scalasca performance tool
scilab Mathematical software package
solaris-studio Oracle Solaris Studio development environment
texlive TeX distribution

Initial Module Environment Setup

If you always need the same modules, you may include the load commands in your .bash_profile (executed once per session) or .bashrc (executed once per shell) file in your home directory. Example $HOME/.bashrc file:

module load intel-icc openmpi/intel

Modules with MPI

If you use an MPI program that spans more than one node during execution, you must load these modules in your .bashrc file. There is no other way! A module load xyz in a job script does not work!

Running a Batch Job

We use torque/maui as the batch system, and we ask you to use it for all your work on all cluster nodes other than wr0. torque has a command line interface and a graphical interface for the most frequently used functions. The ssh environment must be set up as described in the ssh section; it is already set up for all new accounts.

Specify a Job

A batch job script is a shell script that is submitted to and started by the batch system. In a batch script you specify all actions to be performed in your job, either sequentially or in parallel.

Sequential Job

An example of such a batch script /home/user/ is:

# start sequential program (./myprog.exe is a hypothetical executable name)
./myprog.exe
# change directory and execute another sequential program
cd subdir
./myprog2.exe

OpenMP Job

An example of such a batch script /home/user/ is:

# set the number of threads via the standard OpenMP environment variable
OMP_NUM_THREADS=16
export OMP_NUM_THREADS
# start OpenMP program (./omp_prog.exe is a hypothetical executable name)
./omp_prog.exe


MPI Job

An example of such a batch script /home/user/ is:

# load the OpenMPI environment
module load openmpi

# dynamically determine the number of MPI processes we have in use
NCORES=`cat $PBS_NODEFILE | wc -l`
# start here your MPI program on the nodes/cores we got assigned from the batch system
mpirun -np $NCORES -machinefile $PBS_NODEFILE myprog.exe
It is important that for MPI jobs you use the PBS_NODEFILE environment variable, which the batch system sets dynamically for you at runtime. This file contains the names of the nodes that were assigned to your job.
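The process and node counts can be derived from the machinefile with standard tools; a sketch using a hypothetical machinefile content in place of the real $PBS_NODEFILE:

```shell
#!/bin/sh
# Hypothetical machinefile content: 2 nodes with 2 assigned cores each
printf 'wr10\nwr10\nwr11\nwr11\n' > nodefile.txt
# one line per assigned core -> number of MPI processes
NCORES=$(wc -l < nodefile.txt)
# unique node names -> number of nodes
NNODES=$(sort -u nodefile.txt | wc -l)
echo "$NCORES processes on $NNODES nodes"
rm -f nodefile.txt
```

Inside a real job, replace nodefile.txt with $PBS_NODEFILE and skip the printf line.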

Environment Variables and Modules

The batch system defines certain environment variables that you may use in your batch job script.

variable name  purpose
PBS_O_HOME     home directory
PBS_O_WORKDIR  working directory where the job was submitted
PBS_NODEFILE   name of a file with a list of all assigned nodes (one line per processor)
PBS_JOBID      job ID given to the job
TMPDIR         name of a job-private directory under /tmp that is created at job startup and deleted after the job finishes

Example batch job including the use of PBS_O_WORKDIR, which will likely be needed in most job scripts:

# change to the directory where the batch job was submitted
cd $PBS_O_WORKDIR

# determine the number of MPI processes we have from the machinefile
NCORES=`cat $PBS_NODEFILE | wc -l`
# start here your MPI program
mpirun -np $NCORES -machinefile $PBS_NODEFILE myprog.exe >>outfile
Be aware that your loaded modules are not exported to nodes other than the MPI master node if an MPI program execution spans multiple nodes. You must load the modules in your .bashrc file; see section Modules with MPI.

Submit a Job

To submit a batch job, use the qsub command followed by the job script. Submitting a job must include a specification of the resources you need (at least the number of nodes and the maximum run time). There are two ways to specify the requested resources: inside the batch script itself (most convenient) or as additional command line parameters.

Resource Specification in the Job Script

This is the preferred method. You can include special comments in the job script that specify the resources you request. Here is an example in which the job asks for 2 nodes with 16 processors each, a maximum of 4 GB memory per node, and 1 hour and 30 minutes of runtime in the job queue mpi.
# the following special lines starting with #PBS tell the batch system what resources you need
#PBS -q mpi
#PBS -l nodes=2:ppn=16
#PBS -l walltime=01:30:00
#PBS -l vmem=4GB

module load openmpi
# following the actions to be performed by the job
# start here your MPI program on the nodes we got assigned from the batch system
NCORES=`cat $PBS_NODEFILE | wc -l`
mpirun -np $NCORES -machinefile $PBS_NODEFILE myprog.exe
Parameters in #PBS lines may also be written comma-separated in one line, e.g.
#PBS -q mpi -l nodes=3:ppn=16,walltime=01:30:00
The batch job can then be submitted with:

user@wr0: qsub

Resource Specification in the Command Line

Alternatively, you may specify the resources on the command line during job submission. The following job runs at most 1 hour and 30 minutes and requests 32 processors (2 nodes with 16 processors each) with at most 4 GB memory per process in the job queue mpi:

user@wr0: qsub -q mpi -l nodes=2:ppn=16,walltime=01:30:00 -l vmem=4GB
Please be aware that you have to specify the number of nodes (up to 10 in our cluster) and additionally the number of cores per node (e.g. up to 16 cores per node for MPI jobs). The product of these two numbers is the total number of CPU resources you get. If you want to use a node exclusively, you must request all resources of that node (all cores). This is highly encouraged if you want to measure program run times. The batch system recognises several options, among them:
option                     default  purpose
#PBS -N name               -        assign the job a name
#PBS -q queue_name         default  submit to a certain queue
#PBS -l nodes=x[:ppn=y]    -        requested number of nodes and cores per node
#PBS -l walltime=hh:mm:ss  -        the maximum wall time the job can run
#PBS -l vmem=4gb           1 GB     request 4 GB for all spawned processes (replace 4 by your memory requirement)
#PBS -V                    -        export all environment variables to the job; necessary if you use MPI on multiple nodes and have loaded modules
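As a quick check of the nodes × cores product described above, a sketch in shell arithmetic for the example request nodes=2:ppn=16:

```shell
#!/bin/sh
# total CPU slots for the request nodes=2:ppn=16
NODES=2
PPN=16
TOTAL=$((NODES * PPN))
echo "total CPU slots: $TOTAL"   # prints: total CPU slots: 32
```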

Resource Limits

As part of a job submission, you can request main memory above the default 1 GB. Be aware that not all of the main memory listed in the hardware overview table can be allocated for your job: the operating system needs some memory for itself, memory is pinned for efficient communication with a GPU, etc. For example, on a system with 128 GB main memory only 120 GB may be available for a job. Therefore, specify resource requests that fit your job's needs and do not request the maximum available resources of a node.

Check Job Status

After submission you may check the status of your jobs (or of all jobs) with several commands, depending on the amount of information you want.
  1. You can view the batch status of all batch jobs in a web browser (link). The page gets updated periodically.
  2. You can show the status of all jobs in a shell window with showq .
    user@wr0: showq
    ACTIVE JOBS--------------------
    40                 user1       Running    14     1:02:56  Wed Aug 19 09:54:16
    41                 user1       Running    14     1:07:23  Wed Aug 19 09:58:43
    11                 user1       Running    14     2:55:21  Sun Aug 16 12:46:41
    12                 user2       Running    14     2:57:37  Sun Aug 16 12:48:57
        11 Active Jobs     152 of  172 Processors Active (88.37%)
                            11 of   11 Nodes Active      (100.00%)
    IDLE JOBS----------------------
    JOBNAME            USERNAME      STATE  PROC     WCLIMIT            QUEUETIME
    42                 user1          Idle    14     1:10:00  Wed Aug 19 08:38:19
    43                 user1          Idle    14     1:10:00  Wed Aug 19 08:38:20
    44                 user1          Idle    14     1:10:00  Wed Aug 19 08:38:21
    3 Idle Jobs
    BLOCKED JOBS----------------
    JOBNAME            USERNAME      STATE  PROC     WCLIMIT            QUEUETIME
    23                 user1          Idle    12     7:40:00  Mon Aug 17 08:59:13
    46                 user1          Idle    12     7:40:00  Wed Aug 19 08:38:31
    Total Jobs: 16   Active Jobs: 11   Idle Jobs: 3   Blocked Jobs: 2
  3. qstat shows a brief status overview of your jobs only.
    user@wr0: qstat
    Job id                    Name             User            Time Use S Queue
    ------------------------- ---------------- --------------- -------- - -----
    371.wr0             user1           00:00:00 R default
    374.wr0             user1           00:00:00 C default
    380.wr0             user2                  0 Q default
    The column named S gives the status of your job (R=running, C=completed, Q=queued).
  4. More detailed information for one queued job is given by the command tracejob your_job_number. For the example shown above: tracejob 374

Get Results

Output to stdout / stderr of your program is redirected to two files that you find, after the job has finished, in the directory from which you submitted the job. These files are named x.on and x.en, respectively, where x is the job name (the batch script name) and n is the job ID.

user@wr0: ls -l
-rw------- 1 user fb02    316 Mar  9 07:27
-rw------- 1 user fb02  11484 Mar  9 07:27


Batch Queues

We have established several queues with different behaviour and restrictions (see the output of qstat -q for a list of available queues). With each queue certain policies are associated (maximum number of jobs in the queue, maximum runtime per job, scheduling priority, physical memory, special hardware features). Without any queue specification, your job is queued in the default queue and scheduled on (nearly) any node in the cluster. If you want to submit your job to a non-default queue (which is what you normally want), specify -q queue_name either on the command line or in your job script.

user@wr0: qsub -q mpi -l nodes=1:ppn=16,walltime=15:00:00,vmem=4GB


There are certain limits associated with each batch queue. A user cannot have more than 2,000 jobs in any queue. Some of the limits are displayed with the command qstat -q:

user@wr0: qstat -q

server: wr0

Queue            Memory CPU Time Walltime Node  Run Que Lm  State
---------------- ------ -------- -------- ----  --- --- --  -----
hpc2               --      --    72:00:00   --    6   3 --   E R
wr5                --      --    72:00:00   --    1   0 --   E R
wr4                --      --    72:00:00   --    1   0 --   E R
default            --      --    72:00:00   --   12   0 --   E R
mpi                --      --    72:00:00   --    9   0 --   E R
hpc1               --      --    72:00:00   --    2   3 --   E R
wr7                --      --    72:00:00   --    1   0 --   E R
interactive        --      --    01:00:00   --    0   0 --   E R
wr3                --      --    72:00:00   --    1   2 --   E R
hpc                --      --    72:00:00   --   12   0 --   E R
wr8                --      --    72:00:00   --    1   0 --   E R
wr6                --      --    72:00:00   --    0   0 --   E R
                                               ----- -----
                                                  46     8
In this example, jobs in the batch queue named mpi have a limit of 72 hours wallclock time. Please be aware that a job cannot be started on a node if the job asks for (almost) all the physical memory of that node (e.g., due to OS memory reservations).
queue name  maximum time per job  usable memory        default virt. memory/process  nodes used
default     72 hours              (dependent on node)  1 GB                          any node
mpi         72 hours              15 GB                1 GB                          wr10 - wr19
hpc         72 hours              120 GB               1 GB                          wr20 - wr42
hpc1        72 hours              120 GB               1 GB                          wr20 - wr27
hpc2        72 hours              120 GB               1 GB                          wr28 - wr40
wr4         72 hours              750 GB               1 GB                          wr4
wr5         72 hours              120 GB               1 GB                          wr5
wr6         72 hours              60 GB                1 GB                          wr6
wr7         72 hours              10 GB                1 GB                          wr7
wr8         72 hours              30 GB                1 GB                          wr8

Job Priority

Every job gets a priority that may change over time. The batch system favors jobs with higher priorities in its scheduling decisions. The priority of a job depends on several factors.

Temporary Files

torque defines an environment variable $TMPDIR containing the name of a temporary directory (with fast access) that should be used for temporary file storage within a job's scope. The directory is created at job start and deleted when the job finishes. Example of how to use the environment variable within a program:

char *basedir = getenv("TMPDIR");   /* needs <stdlib.h> and <stdio.h> */
if (basedir != NULL) {
    char *filename = "test.dat";
    char allname[1024];
    snprintf(allname, sizeof(allname), "%s/%s", basedir, filename);
    FILE *f = fopen(allname, "w");
    /* ... write temporary data ... */
}

Interactive Usage

In rare cases (e.g., parallel visualization) there may be a need to use a cluster node interactively. To do this, reserve a cluster node through the batch system with the command qsub -I -q interactive. As soon as the requested resource is no longer allocated by another user, the batch system allocates that resource for you exclusively, performs an ssh login on the allocated node, and presents you a shell on that node. After that, you may work interactively on that node up to the requested time. You leave the node (and the reservation) as you normally leave a shell, with exit. If no node matching your request is available, the request blocks until an appropriate node becomes free. If you need support for graphical output, add the option -X. If you need a specific node, change the request as follows (here for node wr42): qsub -I -q interactive -l nodes=wr42:ppn=48.

user@wr0:  qsub -I -q interactive -l nodes=1:ppn=16
qsub: waiting for job to start
qsub: job ready

Have a lot of fun...
Directory: /home/user
So 9. Aug 13:36:28 CEST 2009
user@wr7:~>:  exit 

qsub: job completed

Special Cases

Command Line Interface Summary

command               purpose
qsub shell-script     submit a batch job that executes the commands in shell-script
qdel job-number       delete the batch job with number job-number from the queue
qstat                 show batch queue (only your part)
showq                 show batch queue (all jobs)
showstart job-number  show earliest start time of your job with number job-number

Interactive Development

To speed up development cycles, you can use some nodes interactively by logging in from wr0 via ssh to one of these nodes. The nodes are:


All main development tools are available. Among them are compilers (C, C++, Java, Fortran) and parallel programming environments (OpenMP, MPI, CUDA, OpenCL, OpenACC). Application software is in the responsibility of users.
compiler       name       module command         documentation  safe optimization  debug option  compiler feedback            version
GNU C          cc / gcc   -                      man gcc        -O2                -g            -ftree-vectorizer-verbose=2  --version
Intel C        icc        module load intel-icc  man icc        -O2                -g            -vec-report=2 (or higher)    --version
PGI C          pgcc       module load pgi        man pgcc       -O2                -g            -Minfo=vec                   --version
GNU C++        g++        -                      man g++        -O2                -g            -ftree-vectorizer-verbose=2  --version
Intel C++      icpc       module load intel-icc  man icpc       -O2                -g            -vec-report=2 (or higher)    --version
PGI C++        pgc++      module load pgi        man pgc++      -O2                -g            -Minfo=vec                   --version
GNU Fortran    gfortran   -                      man gfortran   -O2                -g            -ftree-vectorizer-verbose=2  --version
Intel Fortran  ifort      module load intel-icc  man ifort      -O2                -g            -vec-report=2 (or higher)    --version
PGI Fortran    pgfortran  module load pgi        man pgfortran  -O2                -g            -Minfo=vec                   --version
Oracle Java    javac      module load java       -              -O                 -g            n.a.                         -version


On wr7, additionally, the whole PGI compiler infrastructure with compilers and the profiler pgprof is installed. Documentation is available under /usr/local/PGI/. The tool infrastructure can be used only on wr7, but the generated code may be executed on all nodes. Exception: if you use the accelerator functionality of the PGI compiler, the code can be executed only on nodes with a GPU.

Base Software

The following base software is installed:

Intel MKL

The Intel Math Kernel Library (MKL) is installed at /usr/local/Intel/current/mkl. It should be used preferably on Intel-based systems, but it also works on AMD systems. The library contains basic mathematical functions (BLAS, LAPACK, FFT, ...). See the documentation at /usr/local/Intel/doc for details. If you use any of the Intel compilers, just add the flag -mkl as a compiler and linker flag. Otherwise, check this page for the appropriate version and corresponding flags. You need to load the module intel-icc to use this library. Example for a Makefile:

CC      = icc
CFLAGS  = -mkl
LDLIBS  = -mkl
By default MKL uses all available cores. You can restrict this number with the environment variable MKL_NUM_THREADS, e.g.
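A sketch of such a restriction, limiting MKL to 4 threads for subsequent program runs (the program name in the comment is a placeholder):

```shell
#!/bin/sh
# limit MKL's internal threading to 4 threads for subsequent program runs
MKL_NUM_THREADS=4
export MKL_NUM_THREADS
echo "MKL_NUM_THREADS=$MKL_NUM_THREADS"
# ./myprog.exe   (placeholder: start your MKL-linked program here)
```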


AMD Core Math Library (ACML)

The AMD Core Math Library (ACML) is installed at /usr/local/acml. It should be used preferably on AMD-based systems, but works also on Intel systems. The library contains basic mathematical functions (BLAS, LAPACK, FFT,...). See the documentation at /usr/local/acml/Doc for details. You need to load the module acml to use this library. Example for Makefile:

CC    = cc
CFLAGS= -I/usr/local/acml/ifort64/include
LINKER= ifort -nofor-main
LDLIBS= -L/usr/local/acml/ifort64/lib -lacml -lm
The above example is for the sequential library. If you want to use all (or some of the) available cores on a node you can simply compile and link with the multicore version of the library without any source code changes. Example for Makefile:

CC= cc
CFLAGS= -I/usr/local/acml/ifort64_mp/include
LINKER= ifort -nofor-main
LDLIBS= -L/usr/local/acml/ifort64_mp/lib -lacml_mp -lm
Example C code:

#include <acml.h>          /* AMD Core Math Library */

static void matmul(int n, double a[n][n], double b[n][n], double c[n][n])
{
  char trans = 'T';
  double one = 1.0;
  double zero = 0.0;

  /* call BLAS DGEMM: a = one * b * c + zero * a */
  dgemm(trans, trans, n, n, n, one, (double *)b, n, (double *)c, n, zero,
        (double *)a, n);
}

Parallel Programming

There are different approaches for parallel programming today: shared memory parallel programming based on OpenMP, distributed memory programming based on MPI, and GPGPU computing based on CUDA, OpenCL, OpenACC or OpenMP 4.x.


compiler              name               module command         documentation    version
GNU OpenMP C/C++      gcc/g++ -fopenmp   -                      man gcc / g++    --version
Intel OpenMP C/C++    icc/icpc -openmp   module load intel-icc  man icc / icpc   --version
PGI OpenMP C/C++      pgcc/pgCC -mp      module load pgi        man pgcc / pgCC  --version
Intel OpenMP Fortran  ifort -openmp      module load intel-icc  man ifort        --version
GNU OpenMP Fortran    gfortran -fopenmp  -                      man gfortran     --version
PGI OpenMP Fortran    pgfortran -mp      module load pgi        man pgfortran    --version

Example: Compile and run an OpenMP C file:

module load intel-icc
icc -openmp -O2 t.c


compiler                         name      module command             documentation  version
MPI C (based on gcc)             mpicc     module load openmpi/gnu    see gcc        --version
MPI C++ (based on gcc)           mpic++    module load openmpi/gnu    see g++        --version
MPI Fortran (based on gfortran)  mpif90    module load openmpi/gnu    see gfortran   --version
MPI C (based on icc)             mpiicc    module load openmpi/intel  see icc        --version
MPI C++ (based on icpc)          mpiicpc   module load openmpi/intel  see icpc       --version
MPI Fortran (based on ifort)     mpiifort  module load openmpi/intel  see ifort      --version

Which MPI compilers are used can be influenced through the module command: with module load openmpi/gnu you use the GNU compiler environment (gcc, g++, gfortran), and with module load openmpi/intel the Intel compiler environment (icc, icpc, ifort). Be aware that even after module load openmpi/intel, the MPI compiler names mpicc etc. are still mapped to the GNU compilers. To use an Intel compiler you need to use Intel's own wrapper names, i.e., mpiicc, mpiicpc, mpiifort.

All options discussed in the compiler section also apply here, e.g. optimization.

Example: Compile an MPI C file and generate optimised code:

module load openmpi/intel
mpicc -O2 t.c

The MPI implementation we use (OpenMPI) has options to influence the communication medium used. Within one node, MPI processes can communicate through shared memory, Infiniband, or Ethernet with TCP/IP; between nodes, Infiniband or Ethernet with TCP/IP is possible. OpenMPI usually chooses the most appropriate medium, so normally you don't need to specify anything. But if you want to choose a specific (and applicable) medium, you may specify it in the call to mpirun through the --mca btl specifier: mpirun --mca btl communication-channels ..., where communication-channels is a comma-separated list of communication media. Possible values are: sm for shared memory, openib for Infiniband, and tcp for Ethernet. The last specifier must be self.


mpirun --mca btl tcp,self -np 4 -machinefile mfile mpi.exe

OpenCL and CUDA

The nodes wr7, wr20-wr27 and wr5 have an NVIDIA Tesla card installed (M2050, K20m, K80). Program development should be done interactively on wr7 (i.e. ssh wr7), as all necessary drivers are installed there locally. Production runs on any Tesla card should be done using the batch queues wr7 (wr7), hpc (wr20-wr27) or wr5 (wr5). Use module load cuda to load the CUDA environment. Use module load opencl/nvidia or module load opencl/intel to load the OpenCL environment, for NVIDIA GPUs or Intel processors, respectively. With both modules, the standard environment variables CPATH for include files and LIBRARY_PATH for libraries are set accordingly, to be used e.g. in a makefile.

To compile an OpenCL program on a node with the appropriate software environment installed proceed as follows:

module load opencl
cc opencltest.c -lOpenCL
To compile a CUDA project use the following Makefile template:

# defines
CC              = cc
CUDA_CC         = nvcc
LDLIBS          = -lcudart

# default rules based on suffices
#       C
%.o: %.c
        $(CC) -c $(CFLAGS) -o $@ $<

#       CUDA
%.o: %.cu
        $(CUDA_CC) -c $(CUDA_CFLAGS) -o $@ $<

myprogram.exe: myprogram.o kernel.o
        $(CC) -o $@ $^ $(LDLIBS)
Here the CUDA kernel and host part are in a file kernel.cu and the non-CUDA part of your program is in a file myprogram.c.


Directive-based GPU programming is available through the PGI compiler. See /usr/local/PGI/doc for documentation. Use wr7 only interactively to compile such programs. The generated code can be executed on wr7 and wr20-wr27. Important: by default, the PGI compiler generates debug code that is in general very slow. If you want fast code, add the nodebug option. Example:

module load pgi
pgcc -acc -ta=nvidia,cc3.5,nodebug openacctest.c

OpenMP 4.0

Directive-based accelerator programming is available through the OpenMP 4.0 target directives in the Intel compiler. Use wr6 only interactively to compile and run such programs.


See this document .

Resource Requirements

If you want to find out the memory requirements of a non-MPI job, use:

/usr/bin/time -f "%M KB" command
which prints out the peak memory consumption in kilobytes of the command execution.

Usage Examples

Sequential C program

C-program named test.c

#include <stdio.h>
int main(int argc, char **argv)
{
    return 0;
}


CC     = cc

#default rules
%.o: %.c
        $(CC) $(CFLAGS) -c $<
%.exe: %.o
        $(CC) -o $@ $< $(LDLIBS)

default:: test.exe

Batch script

# 1 node with 1 core used, 2 minutes maximum runtime
#PBS -q mpi
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:02:00
#PBS -l vmem=1GB

# change to submit directory (with executable)
cd $PBS_O_WORKDIR
# execute sequential program
./test.exe

OpenMP C program

C-program named test_openmp.c

#include <omp.h>
#include <stdio.h>
int main(int argc, char **argv)
{
#pragma omp parallel
    printf("hello from thread %d\n", omp_get_thread_num());
    return 0;
}


CC     = icc -openmp

#default rules
%.o: %.c
        $(CC) $(CFLAGS) -c $<
%.exe: %.o
        $(CC) -o $@ $< $(LDLIBS)

default:: test_openmp.exe

Batch script

# 1 node with 16 cores used, 2 minutes maximum runtime
#PBS -q mpi
#PBS -l nodes=1:ppn=16
#PBS -l walltime=00:02:00
#PBS -l vmem=1GB

# change to submit directory (with executable)
cd $PBS_O_WORKDIR
# execute parallel OpenMP program with 16 threads
OMP_NUM_THREADS=16 ./test_openmp.exe

MPI C program

C-program named test_mpi.c :

#include <mpi.h>
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}


CC     = mpicc

#default rules
%.o: %.c
        $(CC) $(CFLAGS) -c $<
%.exe: %.o
        $(CC) -o $@ $< $(LDLIBS)

default:: test_mpi.exe

Batch script

# 4 nodes with 16 cores used (total of 64 MPI processes), 2 minutes maximum runtime
#PBS -q mpi
#PBS -l nodes=4:ppn=16
#PBS -l walltime=00:02:00
#PBS -l vmem=1GB

# load OpenMPI environment
module load openmpi/gnu

# change to submit directory (with executable)
cd $PBS_O_WORKDIR

# not really necessary, but as an example: determine the number of different nodes involved
NNODES=`uniq $PBS_NODEFILE | wc -l`
# not really necessary, but as an example: determine the number of processes per node
PPN=$(sort $PBS_NODEFILE | uniq -c | tail -n 1 | awk '{print $1}')

# determine number of MPI processes we have from the machinefile
NCORES=`cat $PBS_NODEFILE | wc -l`
# execute parallel MPI program
mpirun -np $NCORES -machinefile $PBS_NODEFILE test_mpi.exe

OpenCL C program (dynamic compiling)

C OpenCL host program named vectorproduct.c:

#include <CL/opencl.h>
int main(int argc, char **argv)
{
    // your OpenCL driver program comes here
    return 0;
}

OpenCL kernel:

__kernel void test( __global const float *x,
                    __global const float *y,
                    __global float *result)
{
    // your OpenCL kernel code comes here
}

# the source is simple: $(PROGRAM).c is the source file
PROGRAM         = vectorproduct.exe
SRC_HOST        := $(PROGRAM:.exe=.c)
OBJ_HOST        := $(SRC_HOST:.c=.o)

# normal C compiler
CC              = cc
CFLAGS          = -I/usr/local/cuda/include

# Linker flags
LD              = cc -fPIC
LDLIBS          = -lOpenCL

# common rules

%.o: %.c
        $(CC) $(CFLAGS) -c -o $@ $<
%.exe: %.o
        $(LD) $(LDFLAGS) -o $@ $< $(LDLIBS)


.PHONY: default compile clean

default:: compile

compile:: $(OBJ_HOST)
        $(LD) $(LDFLAGS) -o $(PROGRAM) $(OBJ_HOST) $(LDLIBS)

run:: $(PROGRAM)
        ./$(PROGRAM)

clean::
        -rm -f *.o *.exe


Batch script

# wr7 (full), 2 minutes maximum runtime
#PBS -q wr7
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:02:00
#PBS -l vmem=1GB

# change to submit directory (with executable)
cd $PBS_O_WORKDIR
# start program
./vectorproduct.exe

Setting Up ssh

The ssh environment must be set up properly to use the batch system. This setup is already done on every fresh account. Please do not change it! The following example session shows the steps needed to set up the correct environment for the batch system again (removing all your previous ssh settings):
user@wr0: rm -rf ~/.ssh
user@wr0: mkdir ~/.ssh
user@wr0: ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_dsa):RETURN
Enter passphrase (empty for no passphrase):RETURN
Enter same passphrase again:RETURN
Your identification has been saved in /home/user/.ssh/id_dsa.
Your public key has been saved in /home/user/.ssh/id_dsa.pub.
The key fingerprint is:
27:5e:be:3e:26:aa:9f:8a:8c:2e:d9:01:c1:60:7b:a6 user@wr0
user@wr0: cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys


For some of the application programs installed here, a brief description of how to use them follows.


Matlab

Besides the basic Matlab program, several Matlab toolboxes are installed.

Using Matlab interactively

To run Matlab interactively on wr0, do the following:
user@wr0: module load matlab
user@wr0: matlab
This starts the Matlab shell. If you logged in from an X-server-capable computer and used ssh -Y to log in to wr0, the graphical desktop appears on your computer instead of the text interface (see here for details of X-server usage).

Using Matlab with the Batch System

Inside your batch job start Matlab without display:
    module load matlab
    matlab -nodisplay -nosplash -nodesktop -r "m-file"
where m-file is the name of your Matlab script (file m-file.m), given without the .m suffix.
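Putting the pieces together, a complete batch script for a Matlab run could look as follows. This is a sketch: the script name mytest, the queue name, and the resource limits are placeholder assumptions modeled on the examples above, not prescribed values.

```shell
# 1 node with 1 core used, 10 minutes maximum runtime (adjust to your job)
#PBS -q mpi
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
#PBS -l vmem=4GB

# change to submit directory (where mytest.m resides)
cd $PBS_O_WORKDIR

# run the Matlab script mytest.m without any graphical output
module load matlab
matlab -nodisplay -nosplash -nodesktop -r "mytest"
```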

Pitfalls Using Matlab

Matlab is very sensitive to memory allocation / administration.


OpenFOAM

As there are several groups of OpenFOAM users, we try to bring them together to coordinate the installation of one (or several) OpenFOAM versions. Please contact us if you are interested.

X11 applications

X11 applications are possible only on wr0. To use X11 applications that open a display on your local X-server (e.g. xterm, ...), you need to redirect the X11 output to your local X11 server and allow another computer to open a window on your computer.
  1. The easiest way to enable this is to login to the WR-cluster with ssh and use the ssh option -Y (or with older ssh versions also -X ) that enables X11 tunneling through your ssh connection. If your login path goes over multiple computers please be sure to use the -Y option for every intermediate host on the path.
    user@another_host:  ssh -Y wr0
    On your local computer (i.e. where the X-server is running) you must allow wr0 to open a window. Execute on your local computer in a shell: xhost +wr0 (or xhost + to disable access control entirely)
  2. Another possibility is to set the DISPLAY variable on the cluster and to allow other computers (i.e. the WR cluster) to open a window on your local X-server.
    Please be aware that newer X-server versions by default do not listen on TCP/IP ports but only on Unix domain sockets, and therefore this second method usually does not work.
You can test your X11 setup by executing xterm in an ssh shell window on wr0. A window with a shell on wr0 must pop up on your local computer.