Glamdring, the Astrophysics compute cluster



Glamdring is a Linux-based computing cluster consisting of 94 dual-CPU, multi-core Intel machines for people in Astro who need large amounts of CPU time. That gives us a total of 1,200 CPU cores on which to run programs, whether MPI-based, OpenMP-based, or ordinary "serial" programs.

You can connect via SSH or RDP.
If you want to run a "normal" (i.e. non-MPI, non-OpenMP) program lots of times with different parameters (a lot of you do exactly that), then I have a program which can make that pretty easy - "multirun". Please email me for more info; it's easier than converting your current "normal" program to MPI.
There are various scientific libraries and bits of software installed, but you can always ask for whatever you want on there.
You can get an account if you're in the Astro department and your supervisor says you can have one, so get in touch with them or contact Jonathan Patterson (jop@astro.ox.ac.uk) directly to set up an account.
Below is a guide on how to use the cluster.


HARDWARE

The cluster lives at the Begbroke Science Park, and the OS used is CentOS (very similar to RedHat).
Each node is connected by standard gigabit ethernet for normal network tasks like NFS (over which your home directories are connected), and for MPI messages if you're running a parallel MPI program.

FILE STORAGE

We've got 24 TB of disk storage kept in a RAID 6 array for your home directories. This is backed up twice a month to another storage array kept in a different building, and there's also a daily incremental backup. So if you need a copy of a file as it was on any day in the last week or so, I can usually pull it out of the backup for you.
Aside from that, there are various other storage areas owned by different groups within Astro, plus the 262 TB distributed filesystem /extraspace, which anybody can use. This is the ideal area for heavy disk IO, since the files are held on multiple compute nodes, which allows parallel filesystem access. However, it is not backed up, so keep your important files in your home directory.

COMPILING

We've got two sets of compilers. First, the GNU compilers (gcc, gfortran, etc.): typing gcc, gfortran, etc. gets you version 4.4.7, the one that comes as standard with the operating system.
Second, the Intel compilers, version 15, which you can use with "module load intel-compilers" (see the Modules section below). Try to use these in preference to the GNU ones as they will produce faster code, though the GNU ones are more standard and many things will compile more easily with them. It's also worth giving either compiler the -O3 flag to turn on optimisation.
If you're compiling an MPI program, the following commands will call the compiler, linking in the necessary MPI libraries:
mpicc, mpicxx, mpif90.
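For example, a minimal build might look like the following (a sketch only - the source file name is made up, and see the MPI libraries section below for which MPI module to load):
module load mpi                  # pick up an MPI library first
mpicc -O3 myCode.c -o myCode     # compile and link a C MPI program
An equivalent Fortran build would use mpif90 in place of mpicc.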

LIBRARIES


There are lots of libraries/programs available. We use the "Environment Modules" software to manage which ones you want to be available, so:
module avail lists what software modules are available
module list shows which ones you have loaded
module load moduleName loads a module,
eg. "module load intel-compilers" selects the intel compilers
"module load fftw/2.1.5" would load version 2.1.5 of the fftw libraries, and just "module load fftw" would load the default version (3.3.4)
module unload moduleName removes a module
module initadd/initrm/initlist adds/removes/lists the modules which are loaded each time you log in. So if you use something all the time, add it with initadd.
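For example, a typical setup session (the versions here are just illustrative - check "module avail" for what's current) might look like:
module avail                     # list what's installed
module load intel-compilers      # pick up the Intel compilers
module load fftw/2.1.5           # and a specific fftw version
module list                      # check what's now loaded
module initadd intel-compilers   # load it automatically at every login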

The MKL libraries are available, for fast BLAS and LAPACK routines (module load intel-compilers). These libraries live in $MKLROOT, and Intel's online link-line advisor can help you choose which ones to link against.
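As a rough illustration (the source file name is made up; for anything more complicated, do use the link-line advisor), building a Fortran program against MKL with the Intel compilers can be as simple as:
module load intel-compilers
ifort -O3 -mkl=sequential myCode.f90 -o myCode    # -mkl=parallel links the threaded MKL instead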
Numerous other libraries are available - run "module avail" to list them. Have a look in there to see if the library you want is available; if it isn't, I can install it.
Some of the software we have available is: AIPS, ATLAS, CFITSIO, FFTW, HDF5, GSL, IDL, MATHEMATICA, MATLAB, OBIT, PYTHON

Feel free to email me (jop@astro.ox.ac.uk) if you have any problems compiling.

MPI libraries


MPI is "Message Passing Interface" - library functions for sending & receiving messages between processes. Typically, programs use these to exchange information about which process should be doing what to what - distributing the work.
There are 2 kinds of MPI libraries we have - normal mpi for most of the compute nodes (which use a gigabit ethernet network) and mpi-berg (which uses the ~ 30x faster infiniband network on the Berg queue). So for the berg queue you should use one of the mpi-berg modules, eg. mpi-berg/2.2-intel15 if you're using the intel compilers version 15.
For all non-berg queues, please use one of the normal mpi modules, eg. mpi or mpi/3.2-intel15.
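For example (a sketch - check "module avail" for the exact names and versions), building for the berg queue with the Intel compilers might look like:
module load intel-compilers
module load mpi-berg/2.2-intel15
mpif90 -O3 myCode.f90 -o myCode
For the other queues, load mpi (or e.g. mpi/3.2-intel15) instead of mpi-berg.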

THE JOB QUEUE

We use the Slurm queueing system, with some extra queueing/display software I've written which runs on top of it, in an effort to make sure everyone gets a fair share of time on the cluster. The queueing on Glamdring is a little complicated because most machines are reserved for the people who bought them. In essence, you can use any "spare time" on those machines, but your job will be stopped (in the case of MPI programs) or paused (non-MPI programs which aren't using huge amounts of memory) if the owners need them. Paused jobs may then move to another machine which is free and continue running again automatically.
If you are part of the cmb or berg groups, then you can use any of the cmb or berg nodes - the two groups share compute nodes. However, if your job does not use a lot of MPI communication, please use the cmb nodes in preference - the berg ones have a much faster network connection, which is needed for a lot of the berg simulations.
There are seven queues you can submit jobs to: "default", "cmb", "blackhole", "cbass", "berg", "berg2" and "planet". The method below shows how to submit to the default queue (which anybody can use, and where your jobs will not be stopped), but you can submit to the other queues with the "-q" option to addqueue, eg. addqueue -q cmb
The berg compute nodes have dual InfiniBand connecting them, so if your job needs fast MPI communication, please use that queue.
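For example (the program name and runtime estimate are made up; the -c and -n options are explained in the SUBMITTING YOUR JOB section below):
addqueue -q berg -c "12 hours" -n 2x8 ./myMpiSimulation
would submit a 16-process MPI job to the berg queue.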

MEMORY


Jobs are allocated 0.1 GB of RAM unless you ask for more with the -m flag to addqueue (see below). If you exceed the memory limit, you'll get a message in the job's output file to say so, though sometimes it will just say "Killed.". You can check how much memory it used with the q command.

SUBMITTING YOUR JOB TO THE QUEUE


You can use q -tw 1 to get a list of the compute nodes and how many cores/GB of memory each currently has free. This tells you what kind of job it's possible to run right now.
The basic form of the submission command is:
addqueue -c "runtimeEstimate or comment" ./myProgramName myParameters
You can just type "addqueue" to see all the options, but here are some common ones.
eg.
addqueue -c "2 days" -n 2x8 ./doSomeAnalysis
will submit your program "doSomeAnalysis" to the default queue, requesting 2 compute nodes with 8 cores on each (so 16 processes are started). You can also use the runtimeEstimate to add your own comments about the job - parameter numbers, maybe, to keep track of which job is which.
addqueue -c "1 week" -m 3 ./doSomeAnalysis
would run a single-process (non-MPI) job with 3GB RAM allocated to it
OR
addqueue -c "1 week" -n 31 ./doSomeAnalysis
would submit a 31-core job without specifying how many cores to use on each compute node, so they'll be allocated wherever the queue sees fit. How the cores are spread out only really matters if your job does a lot of MPI communication, in which case you'll want as many of them as possible on each compute node, because MPI traffic that has to go over the ethernet is quite slow.
Once you've submitted it you'll see the job number which was allocated to it.
OpenMP
If you are using OpenMP then, to make sure your program runs happily alongside others, you should use the -s option for addqueue, as below. This will set the OMP_NUM_THREADS environment variable so that OpenMP uses the correct number of threads.
addqueue -s -c "test" -n 1x8 myProgram myProgramArguments
which would tell OpenMP to use 8 cores.
If in doubt, feel free to ask me.

Mathematica, Matlab
addqueue -n 1x3 -l -s /usr/local/shared/mathematica/10.1/bin/math -script scriptname.m
This would reserve 3 cores on the same compute node and run just one instance (-s) of Mathematica at reduced priority (-l), telling it to run your script scriptname.m.
With Matlab specifically, I've seen cases where using parfor, or letting Matlab use all the available cores, actually slows the code down, or at least doesn't run any faster. So it's worth trying the -singleCompThread option to matlab. If the code runs at the same speed with it as without it, then please use it; otherwise Matlab will use more cores than it needs to. Let me know if you'd like some help with this.
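As a sketch (the script name, memory request and core count are just for illustration, and I'm assuming the matlab command is on your PATH - use the full path to the matlab binary if it isn't):
addqueue -s -c "matlab run" -n 1x1 -m 4 matlab -singleCompThread -nodisplay -nosplash -r "myScript; exit"
would run your script myScript.m on a single core with 4GB of RAM.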

CHECKING YOUR JOB

The command q will list your jobs in the queue.


If you have X-windows forwarding enabled, or are using an RDP connection to hydra, then you will get a GUI showing your jobs.
There are various buttons to play with, and the main list shows all sorts of job info, including current usage stats - how many cores, how much memory, how much disk IO, etc. each job is using right now.


You can then double-click on a job to show the job's output file, or right-click to get various options, eg. show job statistics, cancel job, etc
If you select "show job stats", then you will get a window showing CPU, MEMORY, DISKIO usage, and you can click on the icons there to graph these over time. You can use this to check that your job ran efficiently, had enough memory, etc.

If you don't have X-windows forwarding on, or you type "q -t", then you'll get a text-only version.
You can check the output of your job (stdout, stderr, or "what would have been printed to the screen") by typing:
q -s jobNumber
or (text-only)
showoutput jobNumber
You can stop your job with scancel jobnumber at any time if you want, or via the q GUI interface.
You can also do scancel -u myUserName to cancel all your jobs.
If your job runs longer than you estimated, please update your estimated runtime with the comment command, eg.
comment jobNumber "New comment"
Feel free to log in to the compute node running your job to see how it's doing. Check which compute nodes it's running on, and then (to look at node 12, for example):

ssh comp12
top
exit

If your job sits in the queue waiting for a long time

You can use the command q -tw jobNumber to show what your job is asking for, which compute nodes match that spec, and how many of them are currently free. This is a good thing to run if your job hasn't started after many minutes, to see whether you're asking for too many resources (and so would have to wait a really long time for the job to start), or to have a guess at when it may start running. You can also run it with a job number of 1 to see what resources are free on the nodes, perhaps to figure out how best to fit a job into the cluster before you submit it.
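For example (12345 is just an illustrative job number):
q -tw 12345    # what is job 12345 asking for, and what currently matches?
q -tw 1        # what's free on the cluster right now?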

So that's the basic info for Glamdring. In practice, people have problems compiling and running, and things break sometimes, so please feel free to email me (jonathan.patterson@physics.ox.ac.uk) if you have any questions or think something is wrong.

Categories: Astrophysics