Libraries

Under this area you will find information about available libraries on HPC2N’s systems, as well as some information about accessing and using them. The libraries on HPC2N systems include parallel communication libraries (MPI) as well as various mathematical libraries, including MKL.

To access the libraries you need to load a module (and often prerequisites). Some are part of Compiler Toolchains.

Caveat and info

The list is NOT complete. Log in to Kebnekaise and run the command

ml spider

to get a full list.

If there is a library you need and it is not installed, you can either install it yourself or ask (mail support at support@hpc2n.umu.se) if it can be installed on Kebnekaise.

Most of the software on Kebnekaise, including the libraries, is accessed through the module system. You can read more about using modules on our ‘The modules system’ page. When you are looking for a specific library, try running either

module avail

or

ml spider

to see if it is installed.

You can also try with

ml spider LIBRARY

to see if the library named LIBRARY is installed as a module.

For information about versions, log in to the cluster (Kebnekaise) and run

ml spider LIBRARY

where LIBRARY is the name of the library in question.

Newer versions of the provided libraries will be installed regularly. However, if you need a new version quickly, please send an email to support@hpc2n.umu.se.

Important

Many libraries are available as part of a compiler toolchain.

Some important examples (see Compilers and Compiler Toolchains for more):

  • foss: GCC, OpenMPI, OpenBLAS/LAPACK, FlexiBLAS, FFTW, ScaLAPACK
  • iimpi: icc, ifort, IntelMPI
  • imkl: icc, ifort, IntelMKL
  • intel: icc, ifort, IntelMPI, IntelMKL
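
To use one of these toolchains you just load its module. A small sketch (the version below is only an example; list the installed ones with ml avail):

ml foss/2023b     # load the toolchain (example version)
ml                # list everything the toolchain pulled in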

Build environment

Using the MPI and math libraries available through a compiler toolchain on its own is possible, but requires a fair bit of manual work: figuring out which paths to add to -I or -L for include files and libraries, and so on.

To make life as a software builder easier there is a special module available, buildenv, that can be loaded on top of any toolchain. If it is missing for some toolchain, send a mail to support@hpc2n.umu.se and let us know.

This module defines a large number of environment variables with the relevant settings for the toolchain in use. Among other things it sets CC, CXX, F90, FC, MPICC, MPICXX, MPIF90, CFLAGS, and FFLAGS.

To see all of them, load a toolchain and do

ml show buildenv

There is some more information about buildenv in the Build environment section under compilers.
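
As a small sketch of how buildenv can be used on the command line (the toolchain version is only an example; pick one that is actually installed):

ml foss/2023b                            # load a compiler toolchain (example version)
ml buildenv                              # defines CC, MPICC, CFLAGS, LIBBLAS, ...
$CC $CFLAGS -o myprog myprog.c $LIBBLAS  # serial program, linked with the toolchain's BLAS
$MPICC $CFLAGS -o mympiprog mympiprog.c  # MPI program, built with the toolchain's MPI wrapper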

MPI Libraries

Message Passing Interface (MPI) is a standardized and portable message-passing system designed to function on a wide variety of parallel computing architectures. Several implementations exist, among others OpenMPI and Intel MPI.

A number of compiler toolchains at HPC2N have OpenMPI and Intel MPI installed. See the section Compilers and Compiler Toolchains for more information.

To access the MPI libraries, first load the relevant compiler toolchain and then use the appropriate mpi wrapper command:

Language     Command (GCC)    Command (Intel)
Fortran 77   mpif77           mpiifort
Fortran 90   mpif90           mpiifort
Fortran 95   mpif90           mpiifort
C            mpicc            mpiicc
C++          mpicxx           mpiicpc

To run an MPI program, you need to load the relevant Compiler Toolchain used when compiling your software (and possibly one of the site-installed software modules you are using, like GROMACS or LAMMPS):

ml <compiler toolchain module>

and then start your program with

srun <program>

There is more information about the different MPI implementations in the sections below.

Warning

You must always use srun or mpirun to run an MPI program in your batch job, unless the program handles the parallelization itself.
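
A minimal batch script could look like the sketch below. The project ID, toolchain version, task count, and program name are placeholders; adjust them to your own project and program:

#!/bin/bash
#SBATCH -A hpc2nXXXX-YYY     # your project ID (placeholder)
#SBATCH -n 8                 # number of MPI tasks
#SBATCH --time=00:10:00      # requested walltime

ml purge > /dev/null 2>&1    # start from a clean environment
ml foss/2023b                # the toolchain the program was built with (example version)

srun ./myprog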

Intel MPI

Intel MPI (impi) is a high performance MPI implementation that can run on multiple cluster interconnects chosen by the user at runtime. Part of several compiler toolchains.

OpenMPI

Open MPI is an open source combination of technologies from several other MPI projects. Part of several compiler toolchains.


Math Libraries

Most of these are loaded as part of compiler toolchains.

NOTE: below are some examples of how to link with the library in question. In all the examples the executable is named PROGRAM by using -o PROGRAM. If you leave this out, your executable will be named a.out.

BLACS

The BLACS (Basic Linear Algebra Communication Subprograms) project aims to create a linear algebra oriented message passing interface that may be implemented efficiently and uniformly across a large range of distributed memory platforms.

As of ScaLAPACK version 2, BLACS is now included in the ScaLAPACK library.

BLAS

BLAS is available in the form of FlexiBLAS, OpenBLAS, or Intel MKL; see the respective sections below for more information.

Linking with FlexiBLAS

FlexiBLAS - A BLAS and LAPACK wrapper library with runtime exchangeable backends.

You can load it as a module after first loading a suitable GCC module (ml spider FlexiBLAS for more information). It is also available as part of several compiler toolchain versions. Remember, you can always see which other modules are included in a toolchain with ml show TOOLCHAIN/VERSION.

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lflexiblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lflexiblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lflexiblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lflexiblas -lgfortran

Or, after loading the buildenv module, use the environment variable to link with: $LIBBLAS.
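
For example, a sketch assuming a FlexiBLAS-providing toolchain and buildenv have been loaded (file names are placeholders):

gcc -o PROGRAM PROGRAM.c $LIBBLAS    # link against the toolchain's BLAS via the buildenv variable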

Linking with OpenBLAS

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

Load it either as a module after first loading a suitable GCC module (ml spider OpenBLAS for more information), or as part of several compiler toolchain versions. Remember, you can always see which other modules are included in a toolchain with ml show TOOLCHAIN/VERSION.

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lopenblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lopenblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lopenblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lopenblas -lgfortran

Or, after loading the buildenv module, use the environment variable to link with: $LIBBLAS.

Eigen

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

Using Eigen

First load the “Eigen” module. To see which versions of Eigen are available use ml spider eigen. Remember to also load the needed prerequisites for the version (listed when you do ml spider Eigen/VERSION for the version you want).

You can find the Eigen library files under the $EBROOTEIGEN/lib directory after the module has been loaded.

There is a getting started guide and other documentation on the Eigen homepage.
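
Since Eigen is a header-only library, it is usually enough to point the compiler at its include directory. A sketch with placeholder file names (the exact include layout can differ between versions, so check the directories under $EBROOTEIGEN):

g++ -I$EBROOTEIGEN/include -o myprog myprog.cpp    # compile against the Eigen headers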

ELPA

Eigenvalue SoLvers for Petaflop-Applications.

The publicly available ELPA library provides highly efficient and highly scalable direct eigensolvers for symmetric matrices. Though especially designed for PetaFlop/s applications solving large problems on massively parallel supercomputers, the ELPA eigensolvers have also proven to be very efficient for smaller matrices.

To see which versions of ELPA are available use:

ml spider elpa

Remember to load any prerequisites before loading the ELPA module. Use ml spider ELPA/VERSION for each version of ELPA to see the prerequisites.

You can find the libraries that can be linked with in $EBROOTELPA/lib when the module has been loaded. In addition, there is a USERS_GUIDE.md file with information about how to use ELPA. It can be found in $EBROOTELPA/share/doc/elpa.
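
A linking sketch with placeholder file names (the exact library name and the include directory layout vary between ELPA versions, so check the contents of $EBROOTELPA first):

gcc -o myprog myprog.c -I$EBROOTELPA/include -L$EBROOTELPA/lib -lelpa    # adjust paths/library name to your ELPA version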

  • External info: ELPA

FFTW

A fast, free C FFT library; includes real-complex, multidimensional, and parallel Fourier
transforms. Part of several compiler toolchains.

The available versions of FFTW are all 3.x. Most have MPI support. See ml spider FFTW to see which are serial and which are MPI.

Linking with FFTW

To use FFTW, you should load it as part of a compiler toolchain. The available modules are foss and fosscuda. Do ml av to see which versions you can load.

Use these commands to compile and link with FFTW3

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lfftw3 -lm
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lfftw3 -lm
C gcc -o PROGRAM PROGRAM.c -lfftw3 -lm
C++ g++ -o PROGRAM PROGRAM.cc -lfftw3 -lm

Or use $LIBFFT -lm to link with ($LIBFFT_MT -lm for threaded). This requires you to load the buildenv module after loading the compiler toolchain.

In addition, you can use Intel MKL if you are using the Intel compilers.

  • External info: FFTW

FlexiBLAS

FlexiBLAS is a wrapper library that enables the exchange of the BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage) implementation used in an executable without recompiling or re-linking it.

Linking with FlexiBLAS

FlexiBLAS is available as part of several compiler toolchains. Remember, you can always see which other modules are included in a toolchain with ml show TOOLCHAIN/VERSION.

You can also load it as a module after first loading a suitable GCC module (ml spider FlexiBLAS for more information).

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lflexiblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lflexiblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lflexiblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lflexiblas -lgfortran

Or, after loading the buildenv module, use the environment variable to link with: $LIBBLAS.
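
The backend can then be chosen at run time through the FLEXIBLAS environment variable, without relinking the program. A sketch (the backend name must match one of those reported by the flexiblas helper tool):

flexiblas list                 # show the backends FlexiBLAS knows about
FLEXIBLAS=OPENBLAS ./myprog    # run the program with that backend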

GSL

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.

It is free software under the GNU General Public License.

Using GSL

First load the module. To see which versions of GSL are available use ml spider GSL. Then do ml spider GSL/VERSION for the VERSION you would like to load, in order to see the prerequisites that are needed to load the GSL module.

The GSL libraries can be found in $EBROOTGSL/lib after the module has been loaded, if you need to update yourself on their names.

After loading, you can get some information about GSL from the command man gsl.
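
A typical GSL link line looks like the sketch below (file names are placeholders); -lgslcblas provides the CBLAS routines GSL needs:

gcc -o myprog myprog.c -lgsl -lgslcblas -lm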

  • External info: GSL

Intel MKL

Intel® Math Kernel Library (Intel® MKL) is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more. Part of several compiler toolchains.

The Intel MKL libraries contain:

  • ScaLAPACK
  • LAPACK
  • Sparse Solver
  • BLAS
  • Sparse BLAS
  • PBLAS
  • GMP
  • FFTs
  • BLACS
  • VSL
  • VML

Linking with MKL libraries

To use the MKL libraries, first load one of the following compiler toolchain modules:

  • gomkl: GCC, OpenMPI, IntelMKL
  • imkl: icc, ifort, IntelMKL
  • intel: icc, ifort, IntelMPI, IntelMKL
  • intelcuda: intel, CUDA

in a suitable version (check with ml spider for the relevant compiler toolchain).

To use MKL correctly it is vital to have read the documentation. To find the correct way of linking, take a look at the official Intel MKL documentation.

Using the buildenv module, the common blas/lapack/scalapack/fftw libraries are available in the following environment variables, just like when using a non-MKL capable toolchain:

  • LIBBLAS
  • LIBLAPACK
  • LIBSCALAPACK
  • LIBFFT

Threaded versions are available from the corresponding environment variables with “_MT” appended (for example LIBBLAS_MT).

Read the section about buildenv for more information.
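
As a sketch of the environment-variable approach (assuming an MKL-capable toolchain and buildenv have been loaded; file names are placeholders):

$CC -o PROGRAM PROGRAM.c $LIBLAPACK       # sequential MKL
$CC -o PROGRAM PROGRAM.c $LIBLAPACK_MT    # threaded MKL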

There are too many libraries in MKL to show a complete list of combinations. We refer you to the official MKL documentation for examples and support@hpc2n.umu.se for help.

LAPACK

LAPACK (Linear Algebra PACKage) is a software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and generalized Schur decomposition.

In addition, related computations are provided, such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision. LAPACK is written in Fortran 77.

Included with OpenBLAS.

Part of several compiler toolchains.

Linking with LAPACK (and BLAS)

The Fortran-based LAPACK library is included with the BLAS modules. To use it you must load the BLAS module you want, as well as its prerequisite compiler (toolchain).

Load a suitable version of a toolchain containing LAPACK/BLAS. See compiler toolchains for this.

Use the following commands to compile and link.

OpenBLAS

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lopenblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lopenblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lopenblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lopenblas -lgfortran

FlexiBLAS

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lflexiblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lflexiblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lflexiblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lflexiblas -lgfortran

Or use the environment variable $LIBLAPACK to link with. This requires you to load the buildenv module after loading the compiler toolchain.

You can also use the Intel MKL version of LAPACK.

Libint

The Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory.

Using Libint

To see which versions of Libint are available, and how to load it and any dependencies, use ml spider libint and then ml spider libint/VERSION for the specific VERSION you are interested in.

When the module and its prerequisites have been loaded, you can use the environment variable $EBROOTLIBINT to find the binaries and libraries for Libint.

There is some information about Libint and how to use it on the Libint Homepage. There is a brief Libint Programmers Manual here.

Libxc

Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.

Using Libxc

To see which versions of Libxc are available, and how to load it and any dependencies, use ml spider libxc and then use ml spider to check a specific version to see how to load it and the prerequisites.

When the module has been loaded, you can use the environment variable $EBROOTLIBXC to find the binaries and libraries for Libxc.
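
A linking sketch with placeholder file names:

gcc -o myprog myprog.c -I$EBROOTLIBXC/include -L$EBROOTLIBXC/lib -lxc    # link against the Libxc library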

There is a Libxc manual here.

LIBXSMM

LIBXSMM is a library for small dense and small sparse matrix-matrix multiplications targeting Intel Architecture (x86).

Using Libxsmm

To see which versions of Libxsmm are available, and how to load it and any dependencies, use ml spider libxsmm and then use ml spider libxsmm/VERSION to check a specific VERSION to see how to load it.

When the module has been loaded, you can use the environment variable $EBROOTLIBXSMM to find the binaries and libraries for Libxsmm.

There is some Libxsmm documentation here.

MAGMA

The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current Multicore+GPU systems.

Using MAGMA

To access MAGMA, you first need to load the module and its prerequisites. Do ml spider magma to see the available modules and then do ml spider magma/VERSION for the specific version you are interested in to see which prerequisites you need to load first.

When you have loaded a MAGMA module, you can see the available libraries by looking where the environment variable $EBROOTMAGMA points.

There are some examples of using MAGMA here.

METIS

METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.

Linking with METIS

First load the METIS module. Do ml spider METIS to see versions. Remember to load the prerequisite compiler suite or toolchain!

Use these commands to compile and link with METIS

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lmetis
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lmetis
C gcc -o PROGRAM PROGRAM.c -lmetis
C++ g++ -o PROGRAM PROGRAM.cc -lmetis

MPFR

The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.

Using MPFR

To see which versions of MPFR are available, and how to load it and any dependencies, use ml spider mpfr and then do ml spider on a specific version to see how to load it.

When the module has been loaded, you can use the environment variable $EBROOTMPFR to find the binaries and libraries for MPFR.
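
MPFR builds on GMP, so a typical link line pulls in both libraries (file names are placeholders):

gcc -o myprog myprog.c -lmpfr -lgmp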

The MPFR Reference Guide is here (for the newest version).

  • External info: MPFR

OpenBLAS

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version. Part of several compiler toolchains.

Using OpenBLAS

Load it either as a module after first loading a suitable GCC module (ml spider OpenBLAS for more information), or as part of several compiler toolchain versions. Remember, you can always see which other modules are included in a toolchain with ml show TOOLCHAIN/VERSION.

Language Command
Fortran 77 gfortran -o PROGRAM PROGRAM.f -lopenblas -lgfortran
Fortran 90 gfortran -o PROGRAM PROGRAM.f90 -lopenblas -lgfortran
C gcc -o PROGRAM PROGRAM.c -lopenblas -lgfortran
C++ g++ -o PROGRAM PROGRAM.cc -lopenblas -lgfortran

Or, after loading the buildenv module, use the environment variable to link with: $LIBBLAS.

ParMETIS

ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices.

Using ParMETIS

To see which versions of ParMETIS are available, and how to load it and any dependencies, use ml spider parmetis and then use ml spider on a specific version to see how to load it.

When the module has been loaded, you can use the environment variable $EBROOTPARMETIS to find the binaries and libraries for ParMETIS.

ScaLAPACK

ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing computers and networks of workstations supporting PVM and/or MPI. It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Part of several compiler toolchains.

Since the usage of ScaLAPACK depends on LAPACK, it involves multiple libraries.

NOTE: As of version 2, ScaLAPACK includes BLACS. This means that it is tightly coupled to the MPI implementation used to build it. In order to use this library, a compiler and the corresponding MPI libraries need to be loaded first, as well as ScaLAPACK, LAPACK and BLAS for that compiler. This is easily accomplished by loading a suitable compiler toolchain module.

Linking with ScaLAPACK, BLAS, and LAPACK

You can load either the foss or the fosscuda toolchain. See the section compiler toolchains for this.

In addition, you can use Intel MKL if you are using the Intel compilers (there is also the gomkl toolchain with GCC, OpenMPI, and Intel MKL).

After loading a suitable compiler toolchain module, use the following commands to compile and link with ScaLAPACK.

Toolchain versions with OpenBLAS

Language Command
Fortran 77 mpifort -o PROGRAM PROGRAM.f -lscalapack -lopenblas -lgfortran
Fortran 90 mpifort -o PROGRAM PROGRAM.f90 -lscalapack -lopenblas -lgfortran
C mpicc -o PROGRAM PROGRAM.c -lscalapack -lopenblas -lgfortran
C++ mpicxx -o PROGRAM PROGRAM.cc -lscalapack -lopenblas -lgfortran

Toolchain versions with FlexiBLAS

Language Command
Fortran 77 mpifort -o PROGRAM PROGRAM.f -lscalapack -lflexiblas -lgfortran
Fortran 90 mpifort -o PROGRAM PROGRAM.f90 -lscalapack -lflexiblas -lgfortran
C mpicc -o PROGRAM PROGRAM.c -lscalapack -lflexiblas -lgfortran
C++ mpicxx -o PROGRAM PROGRAM.cc -lscalapack -lflexiblas -lgfortran

Or use the environment variable $LIBSCALAPACK to link with. This requires you to load the buildenv module after loading the compiler toolchain.

SCOTCH

Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.

Using SCOTCH

To see which versions of SCOTCH are available, and how to load it and its dependencies, use ml spider scotch and then ml spider SCOTCH/VERSION for a specific VERSION to see how it is loaded (which prerequisites need to be loaded first).

When the module has been loaded, you can use the environment variable $EBROOTSCOTCH to find the binaries and libraries for SCOTCH.

The text files for the user manual on how to use SCOTCH can be found in $EBROOTSCOTCH/man/man1/ when the module is loaded. You can also copy the content of that directory and then run “make” to generate a full user manual.

ML/DL

Note

If you do not find what you are looking for here, also take a look at the Python modules section.

cuTENSOR

The cuTENSOR Library is a GPU-accelerated tensor linear algebra library providing tensor contraction, reduction and elementwise operations.

Using cuTENSOR

You first need to load the module. To see the versions available, do ml spider cuTENSOR. The module can then be loaded directly.

You can see the libraries in $EBROOTCUTENSOR/lib/<VERSION>.

There is a user guide at the official cuTENSOR page.

Horovod

Horovod is a distributed training framework for TensorFlow.

Using Horovod

You first need to load a Horovod module in order to use it. You can see the available versions with the command ml spider Horovod. Then do ml spider Horovod/<VERSION> to see which prerequisites to load before loading the Horovod module.

When the module is loaded, you can see the binaries and libraries from $EBROOTHOROVOD.

Run horovodrun --help to see the available options.
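
A minimal launch sketch (the script name and process count are placeholders; in a batch job you would request matching resources from the batch system):

horovodrun -np 4 python train_script.py    # start 4 Horovod processes running the training script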


Data science

Note

If you do not find what you are looking for here, also take a look at the Python modules section.

GeoPandas

GeoPandas is a project to add support for geographic data to pandas objects. It currently implements GeoSeries and GeoDataFrame types which are subclasses of pandas.Series and pandas.DataFrame respectively. GeoPandas objects can act on shapely geometry objects and perform geometric operations.

Using GeoPandas

First you need to load the GeoPandas module. To see which versions exist, do ml spider geopandas and then ml spider geopandas/<VERSION> to see which prerequisites need to be loaded first.

You can use $EBROOTGEOPANDAS/lib to see the available libraries.


Data formats

HDF5

HDF5 is a data model, library, and file format for storing and managing data.

Using HDF5

You need to first load the HDF5 module. To see the available versions, do ml spider HDF5 and then ml spider HDF5/<VERSION> to see how to load the specific version of the module (which prerequisites to load first).

When the module is loaded, you can use $EBROOTHDF5 to find the binaries and libraries available.
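
HDF5 also ships compiler wrappers that add the correct include and library paths for you; a sketch with placeholder file names:

h5cc -o myprog myprog.c       # C program using HDF5
h5fc -o myprog myprog.f90     # Fortran program using HDF5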

netCDF

NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

Using netCDF

To see which versions of NetCDF are available, and how to load it and any dependencies, use ml spider netcdf and then do ml spider on a specific version to see how to load it.

When the module has been loaded, you can use the environment variable $EBROOTNETCDF to find the binaries and libraries for NetCDF.
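
A linking sketch for the C interface, with placeholder file names:

gcc -o myprog myprog.c -I$EBROOTNETCDF/include -L$EBROOTNETCDF/lib -lnetcdf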

There is some information about NetCDF and how to use it on the NetCDF documentation page.

There is also a Parallel netCDF available, for parallel I/O access. The module is called PnetCDF.


OTHER

CUDA

CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

NOTE: CUDA libraries can be used with either GCC or Intel compilers. In addition, the NVIDIA CUDA compiler driver nvcc is installed.

Using CUDA

CUDA can be loaded on its own, and it is also part of some compiler toolchains.

After you have loaded a CUDA module (on its own or as part of a compiler toolchain), you compile and link with CUDA like this:

Fortran calling CUDA functions (two steps, see below):
  1) nvcc -c CUDAPROGRAM.cu
  2) gfortran -lcudart -lcuda PROGRAM.f90 CUDAPROGRAM.o

C / C++ with CUDA:
  GCC, OpenMPI:      mpicc CUDAPROGRAM.cu -lcuda -lcudart
  Intel, Intel MPI:  mpiicc CUDAPROGRAM.cu -lcuda -lcudart
  NVCC:              nvcc CUDAPROGRAM.cu

You can add other flags, like for instance -o my-binary to name the output differently than the standard a.out.

NOTE: CUDA functions can be called directly from Fortran programs:

  1. First use the nvcc compiler to create an object file from the .cu file.
  2. Then compile the Fortran code together with the object file from the .cu file.
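
Putting the pieces together, a sketch with placeholder file names that also names the output binary:

nvcc -c mykernels.cu                                          # 1) compile the CUDA part to an object file
gfortran -o myprog myprog.f90 mykernels.o -lcudart -lcuda     # 2) compile the Fortran code and link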

numactl

The numactl program allows you to run your application program on specific CPUs and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.

Using numactl

You first need to load the module. Use ml spider numactl to see available versions, then use ml spider numactl/<VERSION> to see how to load a specific version (what prerequisites to load first).

When the module is loaded, you can find the binaries and libraries from $EBROOTNUMACTL.

With a numactl module loaded, you can also get help with

  • man numactl
  • numactl --help or just numactl with no options
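
A usage sketch (node numbers and program name are placeholders; inspect the topology first with numactl --hardware):

numactl --hardware                             # show the NUMA nodes of the machine
numactl --cpunodebind=0 --membind=0 ./myprog   # run on the CPUs and memory of NUMA node 0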

rgdal

Provides bindings to the ‘Geospatial’ Data Abstraction Library (‘GDAL’) (>= 1.11.4 and <= 2.5.0) and access to projection/transformation operations from the ‘PROJ.4’ library.

Using rgdal

You need to load the rgdal module before using it. First do ml spider rgdal to see which versions exist, then do ml spider rgdal/<VERSION> for the version you are interested in, in order to see which prerequisites to load before loading the rgdal module.

When you have loaded the module, you can find docs, help, libs, etc. in $EBROOTRGDAL/rgdal.

SIONlib

SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.

Using SIONlib

To see which versions of SIONlib are available, and how to load it and any dependencies, use ml spider sionlib and then do ml spider on a specific version to see how to load it.

When the module has been loaded, you can use the environment variable $EBROOTSIONLIB to find the binaries and libraries for SIONlib.

There is some documentation for SIONlib here.

StarPU

A Unified Runtime System for Heterogeneous Multicore Architectures, StarPU is a task programming library for hybrid architectures.

The application provides algorithms and constraints:

  • CPU/GPU implementations of tasks
  • A graph of tasks, using either StarPU’s high-level GCC plugin pragmas, StarPU’s rich C API, or OpenMP pragmas.

StarPU handles run-time concerns:

  • Task dependencies
  • Optimized heterogeneous scheduling
  • Optimized data transfers and replication between main memory and discrete memories
  • Optimized cluster communications

Rather than handling low-level issues, programmers can concentrate on algorithmic concerns!

Using StarPU

In order to use StarPU, you first need to load the module. Do ml spider starpu to see which versions are available and then do ml spider starpu/<VERSION> in order to see which prerequisites you need to load first.

When the module has been loaded you can use $EBROOTSTARPU to find binaries and libraries.

We have versions available for normal compute nodes and for GPUs. The tests and examples have not been built.

There is a manual for using StarPU here: StarPU Handbook

In the directory $EBROOTSTARPU/easybuild/ you will find the .eb file that was used to build the module. In there, under 'configopts' you will find the options the module was built with.

Variant Built with
StarPU/1.2.2-fast --enable-blas-lib=mkl --enable-maxcpus=288 --enable-maxcudadev=8 --enable-fast
StarPU/1.2.2-fxt --enable-blas-lib=mkl --enable-maxcpus=288 --enable-maxcudadev=8 --with-fxt

They can both be loaded with or without GPU capability. It is easiest to load the corresponding intel version instead of the separate icc, ifort, and impi modules.

Additional info: