The main PBSPro commands are:
qsub – for job submission
qstat – for querying the status of jobs and queues
qdel – for terminating a job
qalter – for altering the attributes of a job
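A typical job lifecycle with these commands might look as follows; the job script my_job.sh and the job ID 1234.lotte are illustrative placeholders:

qsub -q f2800 my_job.sh                  # submit the job script to the queue f2800
qstat -u otto                            # list your jobs and their current status
qalter -l walltime=02:00:00 1234.lotte   # change an attribute (here: the walltime) of the still queued job
qdel 1234.lotte                          # terminate the job / remove it from the queue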
pbsnodes mach2 – shows the resources currently configured in the workload manager PBSPro:

otto@lotte:~> pbsnodes mach2
.....
    resources_available.arch = linux_cpuset
    resources_available.host = mach2
    resources_available.mem = 20347869088kb
    resources_available.ncpus = 1716
.....
Hence, MACH-2 is able to cope with a job submitted in the following way:

otto@lotte:~> qsub -I -q f2800 -l select=mem=20347869088kb

and allows you to use some 19,405 gigabytes of RAM in a single Linux process address space!
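The same large-memory request can also be placed in a non-interactive batch script; a minimal sketch, in which the job name and the program ./my_bigmem_app are illustrative placeholders:

#!/bin/bash
#PBS -N bigmem_test
#PBS -q f2800
#PBS -l select=mem=20347869088kb
cd $PBS_O_WORKDIR    # change to the directory the job was submitted from
./my_bigmem_app      # placeholder for your large-memory application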
In the following example, the user otto is logged in to MACH-2's frontend; in the project working directory ~/mach2-home/MPI_ex1/ he will compile and run the sample MPI program example-compute-pi.c by Argonne National Laboratory. The job is submitted to the queue f2800, featuring "TurboBoost" (the dodeka-core CPUs run at their maximum clock frequency):

otto@lotte:~> cd ~/mach2-home/; mkdir MPI_ex1; cd MPI_ex1/
otto@lotte:~/mach2-home/MPI_ex1> qsub -I -N test_ex_MPI -q f2800 -l select=ncpus=4
qsub: waiting for job 1633.lotte to start
qsub: job 1633.lotte ready
*****
SGI UV 3000 series BIOS 01/15/2015
SUSE Linux Enterprise Server 12 SP2 Kernel Release: 4.4.103-92.56-default
Wed Feb 14 16:30:18 CET 2018
model name : Intel(R) Xeon(R) CPU E5-4650 v3 @ 2.10GHz
Architecture : x86_64
cpu MHz : 2800.000
cache size : 30720 KB (Last Level)
Total Number of Sockets : 144
Total Number of Cores : 1728 (12 per socket)
Hyperthreading : OFF
HUB Version: UVHub 3.0
Number of Hubs: 144
Number of connected Hubs: 144
Number of connected NUMAlink ports: 1984
PBS Queue name: f2800. Frequency selected 2800 MHz.
CPUSET Path: /dev/cpuset/PBSPro/1633.lotte/cpuset.cpus
CPUSET CPU Range: 204-215
*****
Directory: /localhome/edvz/otto
Wed Feb 14 16:30:19 CET 2018
otto@mach2:~> cd $PBS_O_WORKDIR
otto@mach2:~/mach2-home/MPI_ex1> cp /apps/local/src/Bsp-MPI/example-compute-pi.c .
otto@mach2:~/mach2-home/MPI_ex1> module load intelcompiler
Module for Intel Parallel Studio XE, icc, icpc, and ifort version 2018.0.033 loaded.
Nota Bene: $MPI*_C{C,XX} defined (to be used in commands mpi{cc,cxx} of mpt).
otto@mach2:~/mach2-home/MPI_ex1> module load mpt
otto@mach2:~/mach2-home/MPI_ex1> module list
Currently Loaded Modulefiles:
1) uvstats/1.1.0+135
2) intelcompiler/parallel_studio_xe_2018.0.033
3) mpt/2.16
otto@mach2:~/mach2-home/MPI_ex1> which icc
/opt/intel/parallel_studio_xe_2018.0.033/compilers_and_libraries_2018/linux/bin/intel64/icc
otto@mach2:~/mach2-home/MPI_ex1> which mpiexec
/opt/hpe/hpc/mpt/mpt-2.16/bin/mpiexec
otto@mach2:~/mach2-home/MPI_ex1> \
mpicc -O2 -xHOST -o ex-compute-pi.exe example-compute-pi.c -lm
otto@mach2:~/mach2-home/MPI_ex1> mpiexec -perhost $NCPUS ./ex-compute-pi.exe
Process 0 of 4 is on mach2
Process 1 of 4 is on mach2
Process 2 of 4 is on mach2
Process 3 of 4 is on mach2
pi is approximately 3.1415926544231270, Error is 0.0000000008333338
wall clock time = 0.000115
otto@mach2:~/mach2-home/MPI_ex1> exit
logout
qsub: job 1633.lotte completed
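The interactive session above can also be cast into a batch job. A minimal sketch of a possible job script, reusing the job name, queue, resource request, and toolchain from the session (site defaults such as walltime limits are not shown), could read:

#!/bin/bash
#PBS -N test_ex_MPI
#PBS -q f2800
#PBS -l select=ncpus=4
# load the same toolchain as in the interactive session
module load intelcompiler
module load mpt
# run in the directory the job was submitted from (here: ~/mach2-home/MPI_ex1/)
cd $PBS_O_WORKDIR
# the executable is assumed to have been built beforehand with:
#   mpicc -O2 -xHOST -o ex-compute-pi.exe example-compute-pi.c -lm
mpiexec -perhost $NCPUS ./ex-compute-pi.exe

Such a script would be submitted with qsub (without the -I flag); stdout and stderr end up in the usual PBS .o/.e files in the submission directory.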
MACH-2 is meant to be used as a compute server only. The built-in filesystems shall not serve as a file archive. Mindful users take care of data management themselves: they save their data away (i.e. transfer it to the storage facilities at their home department) or clean up used storage space as soon and as frequently as possible. On MACH-2, there is no backup mechanism in place; hence: caveat emptor! Should the storage subsystem break due to a serious failure, system operation might have to restart from tabula rasa (i.e. from scratch). Users are therefore asked to inspect the files stored underneath $HOME ("~/mach2-home/") and $SCRATCHDIR ("~/mach2-scratch/") on a regular basis and to free up storage space as often and as early as possible. Thanks!
Available Filesystems:
~/mach2-home/ – leads to your $HOME; the file container for your program code and input data
~/mach2-scratch/ – leads to your $SCRATCHDIR; the place to store bulk data
/tmp2/ – (accessible on MACH-2 only) the entry point to an SSD-based filesystem for demanding (i.e. I/O-intensive) temporary file data management
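A housekeeping routine along the following lines keeps the MACH-2 filesystems lean; the remote host storage.mydept.example.org, the user name, and the directory names are placeholders for your home department's storage facility and your own data:

# place demanding temporary data on the node-local SSD filesystem
export TMPDIR=/tmp2/$USER    # assumes a per-user subdirectory under /tmp2/ may be created
mkdir -p "$TMPDIR"
# transfer finished results from scratch back to your home department's storage
rsync -av ~/mach2-scratch/myproject/results/ user@storage.mydept.example.org:/archive/myproject/
# ... and then free up the space on MACH-2
rm -rf ~/mach2-scratch/myproject/results/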