Installation on ARC Leeds, UK
Iris van Zelst ([email protected]) & Jamie Ward ([email protected]) & Maeve Murphy Quinlan ([email protected]).
ARC (Advanced Research Computing) provides computing resources to researchers at the University of Leeds, United Kingdom. There are several systems to choose from, as each cluster has a life span of only four to five years. Currently, you can use ARC3 and ARC4; ARC4 is the newest cluster and therefore the one with the longest remaining lifetime. The installation procedure described here is therefore specific to ARC4, with additional information for ARC3 where required.
You can request an account by filling in the HPC application form: https://arc.leeds.ac.uk/apply/getting-an-account/. You need your university username and password to apply.
Using the ASPECT container with Singularity
This option is ideal if you do not want to install aspect from scratch, but instead want to use it quickly. It is less flexible if you want to make changes to the code. In the aspect manual, this option is referred to as 'docker', but ARC4 does not have Docker, so Singularity is used instead. These instructions and the Singularity option were provided by Ollie Clark from ARC IT.
To use the aspect container, type
singularity pull docker://geodynamics/aspect
singularity shell -B /nobackup:/nobackup aspect.simg
You will then be inside the ASPECT Docker container, with /nobackup on ARC bound to /nobackup in the container and ASPECT itself available in /home/dealii/aspect. You can then submit a job running on a Singularity container to the ARC queues using a job script similar to:
#$ -pe smp 24
#$ -l h_vmem=2G
module load singularity
singularity exec -B /nobackup:/nobackup aspect.simg /home/dealii/aspect/aspect [options] [aspect file]
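For completeness, a fuller version of that script, in the same style as the batch scripts further down this page, might look like the following. This is only a sketch: the 24-core shared-memory environment, the one-hour runtime, and the 2 GB memory request are illustrative values that you should adjust to your own run.
# Run from the current directory and with current environment
#$ -cwd -V
# Ask for some time (hh:mm:ss, max of 48:00:00)
#$ -l h_rt=01:00:00
# Ask for 24 cores and 2 GB of memory
#$ -pe smp 24
#$ -l h_vmem=2G
# Run ASPECT inside the container
module load singularity
singularity exec -B /nobackup:/nobackup aspect.simg /home/dealii/aspect/aspect [options] [aspect file]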
Installing the libraries
Since aspect, and particularly the libraries on which aspect depends, are too large for your home directory (maximum 10 GB), you need to install aspect and its libraries on the /nobackup file system. From your home folder, go to /nobackup by typing:
cd /nobackup
and create your own folder
mkdir <USERNAME>
In this folder, you can now do whatever you want. Since I wanted to keep my source code files and actual runs separate, I also made a "home" and a "work" folder within this <USERNAME> folder.
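For example (this layout is just a suggestion; use whatever structure suits you):
mkdir /nobackup/<USERNAME>/home
mkdir /nobackup/<USERNAME>/work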
Since this /nobackup folder has no backup (the clue is in the name) and files are deleted after 90 days, it is a bit of a problem to store your source code here. The workaround recommended by IT is to manually touch the entire directory whenever you get an e-mail warning that your files are about to expire. You can do this from your home directory by typing the following:
find /nobackup/<USERNAME>/ -exec touch -h {} \;
find /nobackup/<USERNAME>/ -exec touch {} \;
However, in my opinion, this is still a risky way of storing your source code files. What if you are on holiday and forget to touch the files in time? Therefore, I recommend setting up a cron job (ask me for details; it would be bad practice to share this publicly, since the IT people did not mention this option).
Now, check if you are in the correct folder:
pwd
/nobackup/<USERNAME>/home
We will use candi to install all the necessary libraries. Make a dedicated libraries folder in which we can do the installation
mkdir libraries
and go into this folder
cd libraries
pwd
/nobackup/<USERNAME>/home/libraries
Download candi by running
git clone https://github.com/dealii/candi
Load the necessary modules:
module add openmpi/3.1.4
module add mkl/2019.0
module add intel/19.0.4
module add cmake/3.10.0
For ARC3 users, the modules differ and some are not loaded automatically when logging in:
module swap openmpi/2.0.2 openmpi/2.1.3
module add mkl/2018.2
module swap intel/17.0.1 intel/18.0.2
module add cmake/3.7.2
Also add these lines to your ~/.bashrc file to ensure the correct modules are always loaded.
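On ARC4, the lines to append to ~/.bashrc are simply the module commands above:
module add openmpi/3.1.4
module add mkl/2019.0
module add intel/19.0.4
module add cmake/3.10.0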
To set the correct environment variables for openmpi, type the following:
export CC=mpicc; export CXX=mpicxx; export FC=mpif90; export FF=mpif77
We now need to change the configuration of candi to make it aware that we are using MKL. Go into the candi folder and open the configuration file candi.cfg. Modify lines 90 and 91 from
(old)
MKL=OFF
# MKL_DIR=
to
(new; this is what it should be)
MKL=ON
MKL_DIR=${MKLROOT}/lib/intel64
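If you prefer to make this change from the command line, something along these lines should work (a sketch; it assumes the two lines in your copy of candi.cfg still read MKL=OFF and # MKL_DIR=):
sed -i 's/^MKL=OFF/MKL=ON/' candi.cfg
sed -i 's|^# *MKL_DIR=.*|MKL_DIR=${MKLROOT}/lib/intel64|' candi.cfg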
Check you are in the correct folder
pwd
/nobackup/<USERNAME>/home/libraries/candi
and execute candi by running
./candi.sh -j<N> -p /nobackup/<USERNAME>/home/libraries/
You can substitute <N> with the number of cores you want to use for compiling and installing these libraries. From experience: use more than one, because it takes ages otherwise.
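For example, to compile with four cores:
./candi.sh -j4 -p /nobackup/<USERNAME>/home/libraries/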
When the installation of dealii is complete, you can add the newly installed packages to your environment configuration by adding the following line to your ~/.bashrc file:
source /nobackup/<USERNAME>/home/libraries/configuration/enable.sh
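If you prefer, you can append it from the command line:
echo 'source /nobackup/<USERNAME>/home/libraries/configuration/enable.sh' >> ~/.bashrc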
At this point, you can test your installation of dealii, if you want, by running the example in /nobackup/<USERNAME>/home/libraries/deal.II-v9.1.1/examples/step-32. In this folder, type cmake . && make to compile the code, then run it with mpirun -n 2 ./step-32. If you get output, you're good! You could also check some of the vtu files to see if the output is reasonable (i.e., not NaN). You don't have to run the example to completion; that would take a long time and doesn't add anything.
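Put together, the test looks like this:
cd /nobackup/<USERNAME>/home/libraries/deal.II-v9.1.1/examples/step-32
cmake . && make
mpirun -n 2 ./step-32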
Troubleshooting the candi/dealii/trilinos installation
A recent update of candi has caused errors when installing dealii and the trilinos package (discussed here on the aspect forums). This issue may now be resolved with new updates to candi, but if you run into problems you might want to follow these additional steps.
Use a different version of candi and change branch:
git clone https://github.com/tjhei/candi.git
cd candi
git checkout trilinos_mkl
Edit the trilinos.package file in candi/deal.II-toolchain/packages to change the version from 12-18-1 to 12-10-1. This can be done by commenting/uncommenting lines with the appropriate version and checksum.
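As a rough sketch of that edit (the exact variable names and checksum values depend on your candi checkout, so treat this only as an illustration of which lines to comment and uncomment):
#VERSION=12-18-1
#CHECKSUM=<checksum shipped with candi for 12-18-1>
VERSION=12-10-1
CHECKSUM=<checksum shipped with candi for 12-10-1>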
Edit the candi.cfg file to comment out the petsc and slepc packages.
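Assuming your candi.cfg builds up its package list line by line (as recent candi versions do), commenting out the petsc and slepc entries would look roughly like this:
#PACKAGES="${PACKAGES} once:petsc"
#PACKAGES="${PACKAGES} once:slepc"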
Edit the lines for MKL and MKL_DIR as described for the normal install. When testing your installation of dealii, the file path will be /nobackup/<USERNAME>/home/libraries/deal.II-v9.2.0/examples/step-32 instead of /nobackup/<USERNAME>/home/libraries/deal.II-v9.1.1/examples/step-32.
Installing Aspect
Okay, so now we finally get to the point: installing aspect! First, you need to download aspect or your fork of the aspect code (recommended if you want to develop or contribute to the development of the aspect code in any way). Go to your home folder on /nobackup. To download the development version of aspect directly, you can type
git clone https://github.com/geodynamics/aspect.git
Go into the newly created aspect folder and open the file CMakeLists.txt. Change lines 123 and 124 from
(old)
SET(WORLD_BUILDER_SOURCE_DIR "" CACHE PATH "Provide an external World Builder directory to be compiled with ASPECT. \
If the path is not not provided or the World Builder is not found in the provided location, the version in the contrib folder is is used.")
to
(new; this is what it should be)
SET(WORLD_BUILDER_SOURCE_DIR "" CACHE PATH "Provide an external World Builder directory to be compiled with ASPECT. If the path is not not provided or the World Builder is not found in the provided location, the version in the contrib folder is is used.")
We have to remove this line break because the version of cmake on ARC does not handle this kind of backslash-continued string.
Now make a new build directory in which we will build the aspect executable:
mkdir build; cd build
To configure aspect, simply type
cmake ..
You can now compile aspect into an executable by typing
make -j<N>
(where <N> is again the number of cores you would like to use for this). Hoorah! You have now compiled aspect!
You can now run aspect directly in the build folder by typing
./aspect <INPUTFILE>
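For example, assuming the convection-box cookbook input file sits at cookbooks/convection-box.prm in your aspect checkout (the same file used in the batch script below), you could run it from the build folder with:
./aspect ../cookbooks/convection-box.prm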
However, you probably don't want to run aspect in the build folder and you may want to use the resources of the cluster. This is, after all, why you have gone through the trouble of installing it on ARC in the first place.
Below is an example of a submission script run.sh to run the first cookbook described in the manual, 'convection in a 2D box', on 2 cores. You can run this script from any folder in which you would like to get your output (take care that the file paths are correct):
# Run from the current directory and with current environment
#$ -cwd -V
# Ask for some time (hh:mm:ss max of 48:00:00)
#$ -l h_rt=00:15:00
# Ask for some cores (that know how to talk to each other; in this case, ask for 2 cores)
#$ -pe ib 2
# Run the job
mpirun /nobackup/<USERNAME>/home/aspect/build/aspect /nobackup/<USERNAME>/home/aspect/cookbooks/convection-box.prm
Alternatively, you can ask for nodes instead of cores. In this case, remove the line about the cores, and replace it with
#$ -l nodes=1
to request one node. Each node on ARC4 has 40 cores and a maximum memory of 192 GB. You can specify the amount of memory by typing:
#$ -l h_vmem=2G
In this example, you request 2 GB.
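Putting these pieces together, a node-based version of run.sh might look like the following sketch (the runtime and memory values are illustrative; adjust them to your model):
# Run from the current directory and with current environment
#$ -cwd -V
# Ask for some time (hh:mm:ss, max of 48:00:00)
#$ -l h_rt=01:00:00
# Ask for one full node (40 cores on ARC4)
#$ -l nodes=1
# Ask for 2 GB of memory
#$ -l h_vmem=2G
# Run the job
mpirun /nobackup/<USERNAME>/home/aspect/build/aspect /nobackup/<USERNAME>/home/aspect/cookbooks/convection-box.prm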
To submit this job to the batch system, type:
qsub run.sh
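You can check on the job afterwards with the standard SGE status command:
qstat -u <USERNAME>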
If you have any additional questions on installing or running aspect on ARC in Leeds, please e-mail [email protected].