
Thinking

Quick Start Guide


Overview of the system
Thinking consists of a total of 144 nodes, each with two 10-core Intel Xeon E5-2680 v2 (Ivy Bridge) processors, giving 20 cores per node.
Table 1 shows the hardware details of the new cluster in comparison to the VIC3 system:

                            VIC3                          Thinking
Total nodes                 112/86/96                     112/32
Processor type              Harpertown/Nehalem/Westmere   Ivy Bridge
Base clock speed            2.66 GHz                      2.8 GHz
Cores per node              8/8/12                        20
Total cores                 2,736                         2,880
Memory per node (GB)        8/24/24 (+4x72 +2x144)        112x64/32x128
Memory per core (GB)        1/3/2 (9, 18)                 3.2/6.4
Peak performance (TF)       28.9                          64.5
Network                     Infiniband QDR fat tree       Infiniband QDR 2:1
Cache (L1 KB/L2 KB/L3 MB)   4x(32i+32d)/2x6MB/12          10x(32i+32d)/10x256/25
                            4x(32i+32d)/4x256/8
                            6x(32i+32d)/6x256/12
Total L3 cache per core     3MB/2MB/2MB                   2.5 MB

Table 1. Hardware overview

Connecting to Thinking
All users with an active VSC account on VIC3 can connect to the login node of Thinking with
the same credentials using the command:
$ ssh vscXXXXX@login.hpc.kuleuven.be

Users without an active VSC account on VIC3 who wish to test the cluster during the open pilot
period can request an account and introductory credits through the webpage:
https://account.vscentrum.be/req/auth.cgi
Take into account that the site is only accessible from within the KU Leuven domain, so the page
won't load from, e.g., home, unless you use a VPN or a proxy.

Accessing your Data


All global storage areas available on VIC3 are also available on the new cluster, so no data
migration is needed. Table 2 summarizes the available storage areas and their characteristics:

Name                           Variable                          Type   Access   Backup   Default quota
/user/leuven/30X/vsc30XXX      $VSC_HOME                         NFS    Global   YES      3 GB
/data/leuven/30X/vsc30XXX      $VSC_DATA                         NFS    Global   YES      25 GB
/scratch/leuven/30X/vsc30XXX   $VSC_SCRATCH, $VSC_SCRATCH_SITE   GPFS   Global   NO       25 GB
/node_scratch                  $VSC_SCRATCH_NODE                 ext4   Local    NO       250 GB

Table 2. Storage areas overview

$VSC_HOME: A regular home directory which contains all files that a user might need to log on
to the system, and small 'utility' scripts/programs/source code/.... The capacity that can be used
is restricted by quota, and this directory should not be used for I/O-intensive programs. Regular
backups are performed.

$VSC_DATA: A data directory which can be used to store programs and their results. Regular
backups are performed. This area should not be used for I/O-intensive programs. There is a
default quota of 25 GB, but it can be enlarged. You can find more information about the price
and conditions here: https://icts.kuleuven.be/sc/HPC

$VSC_SCRATCH/$VSC_SCRATCH_SITE: On each cluster you have access to a scratch directory
that is shared by all nodes of the cluster. This directory is also accessible from the login nodes,
so it is available while your jobs run and after they finish. No backups are made of this area,
and files can be removed automatically after 21 days.

$VSC_SCRATCH_NODE: A scratch space local to each compute node. Thus, on each node this
directory points to a different physical location, and the content is only accessible from that
particular node, and only during the runtime of your job.
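
A minimal sketch of how these variables can be combined in a job script to stage I/O-intensive
work on the node-local scratch (the program and file names below are placeholders):

# copy the input from the global data area to the node-local scratch
cp $VSC_DATA/input.dat $VSC_SCRATCH_NODE/
cd $VSC_SCRATCH_NODE
# run the program on the fast local disk
./myprogram input.dat > output.dat
# copy the results back before the job ends; the node-local scratch is not accessible afterwards
cp output.dat $VSC_DATA/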

Software
All applications on Thinking are installed in a separate directory, and the application directory of VIC3 is
not accessible from Thinking. A number of the software applications available on VIC3 have already been
recompiled and are available on Thinking, normally in the latest available version.
The modules software manager tool is available on Thinking, but note that the naming
scheme for the software packages has changed. The new naming scheme is as follows:
PackageName/version-ToolchainName-ToolchainVersion
where PackageName is the official name of the software, preserving its upper- and lower-case letters.
Thus, in some cases the names of the packages have been changed to make them compliant with the
new scheme. One example of the change is the Python package:
On Thinking:
$ module av Python
-------------- /apps/leuven/thinking/2014a/modules/all --------------
Python/2.7.6-foss-2014a    Python/2.7.6-intel-2014a    Python/3.2.5-foss-2014a

On VIC3:
$ module av python
---------------- /apps/leuven/etc/modulefiles/ -----------------------
python/2.5.2          python/2.6.0          python/2.6.6
python/2.5.2-scipy    python/2.6.4-scipy    python/2.6.6-scipy
python/2.7.1          python/2.7.2_simple   python/3.2.3

TIP: Revise your batch scripts and your .bashrc to ensure the appropriate software package names are used.
Always use the complete name of the package (name and version) and do not rely on defaults.
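
For example, to load Python in a batch script under the new scheme (a sketch; pick the toolchain
variant you actually need):

$ module load Python/2.7.6-intel-2014a

instead of the old VIC3-style name (e.g., python/2.7.1).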

Table 3 shows a list of some of the available software on Thinking grouped by category:

Biosciences   Chemistry          Mathematics   Others
BEAST         GAMESS             JAGS          Abaqus
BEDTools      Gromacs            Matlab        Ansys
BWA           NAMD               Octave        Comsol
MrBayes       NWChem             SAS
SAMTools      Quantum ESPRESSO
              Siesta

Table 3. Available software

To obtain a complete list of the software you can execute:


$ module available
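
Because the list is long, it can be filtered (a usage sketch; the module command prints its listing
on standard error):

$ module av 2>&1 | grep -i python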
If you need additional software installed, please send a request to: hpcinfo@kuleuven.be

Compiling and Running your Code


Several compilers and libraries are available on Thinking, as well as two toolchain flavours: intel (based on
Intel software components) and foss (based on free and open-source software).
A toolchain is a collection of tools to build (HPC) software consistently. It consists of:
- compilers for C/C++ and Fortran
- a communications library (MPI)
- mathematical libraries (linear algebra, FFT)

Toolchains are versioned and refreshed twice a year. All software available on the cluster is rebuilt when
a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of
their definition, followed by either a or b, e.g., 2014a. Note that the software components are not
necessarily the most recent releases; rather they are selected for stability and reliability. Table 4
summarizes the toolchains available on Thinking and their components:

                  Intel compilers               Open source
Name              intel                         foss
Version           2014a                         2014a
Compilers         Intel compilers               GNU compilers
                  (icc, icpc, ifort)            (gcc, g++, gfortran)
MPI library       Intel MPI                     OpenMPI
Math libraries    Intel MKL                     OpenBLAS, LAPACK,
                                                FFTW, ScaLAPACK

Table 4. Toolchains on Thinking

TIP: Recompile your codes before using them on Thinking, check the results of the recompiled codes
before starting production runs, and use the available toolchains for compiling whenever possible.
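
A minimal sketch of building a code with one of the toolchains (the source file names are placeholders;
the MPI wrapper name assumes the standard Intel MPI compiler wrappers):

$ module load intel/2014a
$ ifort -O2 -xHost myprogram.f90 -o myprogram          # serial Fortran build with the Intel compiler
$ mpiicc -O2 -xHost mympiprogram.c -o mympiprogram     # MPI C build with the Intel MPI wrapper

With the foss toolchain the equivalent builds would use gfortran and mpicc.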

Running Jobs
Torque/Moab is used for scheduling jobs on Thinking, so the same commands and scripts used on VIC3
will work on Thinking. During the open pilot phase, all projects active on VIC3 will also be activated on
Thinking for testing purposes, so they can be used in your scripts; however, the credits used will not be
charged. Once VIC3 is turned off, all projects and remaining credits will be migrated to Thinking.
None of the queues available on VIC3 exist on Thinking. A new queue naming scheme based on
resources has been established for simplicity.
The available queues on Thinking are q1h, q24h, q72h and q7d. However, we strongly recommend that,
instead of specifying queue names in your batch scripts, you use the PBS -l option to define your needs.
Some useful -l options are:
Resource usage:
-l walltime=4:30:00   (the job will last 4 h 30 min)
-l nodes=2:ppn=20     (the job needs 2 nodes and 20 cores per node)
-l mem=40gb           (the job requests 40 GB of memory, summed over all processes)
-l pmem=4gb           (the job requests 4 GB of memory per core)

Other node features (processor type, accelerators, etc.):
-l nodes=1:feature

An example of a job submitted using resource requests could be:

$ qsub -l nodes=10:ppn=20:ivybridge,walltime=1:00:00,pmem=4gb myprogram

This job requests 10 nodes with 20 cores each that have Ivy Bridge processors, a walltime of 1 hour
and 4 GB of memory per core.
TIP: Revise your batch scripts to remove references to the queue names of VIC3 and modify them to be
based on resources instead of queue names.
For MPI parallel jobs it is recommended to request full nodes (i.e. 20 cores per node) even when fewer are
actually used, to ensure that there is no performance loss due to incorrect process placement.
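
A minimal sketch of a complete batch script based on resource requests (the job name, project account,
toolchain and program name are placeholders):

#!/bin/bash -l
#PBS -N myjob                       # job name (placeholder)
#PBS -l nodes=2:ppn=20:ivybridge    # 2 full Ivy Bridge nodes, 20 cores each
#PBS -l walltime=4:30:00            # 4 h 30 min of wall time
#PBS -l pmem=4gb                    # 4 GB of memory per core
#PBS -A myproject                   # project to charge the credits to (placeholder)

cd $PBS_O_WORKDIR                   # start in the directory the job was submitted from
module load foss/2014a              # load the toolchain the program was built with (example)
mpirun ./myprogram                  # run the (placeholder) MPI program on the requested cores

Submit it with '$ qsub jobscript.pbs' after adapting it to your own application.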
