These notes describe how to build and install HPCToolkit and hpcviewer and their prerequisites with Spack. HPCToolkit proper (hpcrun, hpcstruct and hpcprof) is used to measure and analyze an application’s performance and then produce a database for hpcviewer. HPCToolkit is supported on the following platforms. IBM Blue Gene is no longer supported.
We provide binary distributions for hpcviewer and hpctraceviewer on Linux (x86_64, ppc64/le and aarch64), Windows (x86_64) and MacOS (x86_64, M1 and M2). HPCToolkit databases are platform-independent and it is common to run hpcrun on one machine and then view the results on another machine.
We build HPCToolkit and its prerequisite libraries from source.
HPCToolkit has some 20-25 base prerequisites (more for cuda or rocm) and we now use spack to build them. It is possible to use spack to install all of hpctoolkit, or to build just the prerequisites and then build hpctoolkit with the traditional configure ; make ; make install method from autotools. Developers will probably want to run configure and make manually, but both methods are supported.
These notes are written mostly from the view of using spack to build hpctoolkit and its dependencies. If you are a more experienced spack user, especially if you want to use spack to build hpctoolkit plus several other packages, then you will want to adapt these directions to your own needs.
Spack documentation is available at:
The current status of using Spack for HPCToolkit is at:
Last revised: July 11, 2023.
Building HPCToolkit requires the following prerequisites.
Hpcviewer and hpctraceviewer require Java 11 or later. Spack can install Java, if needed. On Linux, the viewers also require GTK+ version 3.20 or later.
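If a suitable JDK is already installed on your system, you can point spack at it instead of rebuilding one. As a sketch (the version and prefix below are placeholders for your local install), an entry in packages.yaml might look like:

```yaml
packages:
  openjdk:
    externals:
    - spec: openjdk@11.0.17
      prefix: /usr/lib/jvm/java-11-openjdk
    buildable: False
```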
Spack uses a special notation for specifying the version, variants, compilers and dependencies when describing how to build a package. This combination of version, variants, etc is called a ’spec’ and is used both on the command line and in config files.
spack info <package> shows the available versions and variants for a package. In most cases, spaces are optional between elements of a spec. For example:
boost@1.77.0
hpctoolkit @develop
papi @=6.0.0
"libiberty@2.40"
Note: foo@2.1 includes all versions beginning with 2.1, including 2.1, 2.1.0, 2.1.4, 2.1.9.9.9, 2.1.stable, etc. If you want exactly version 2.1, then use the notation foo@=2.1 to differentiate 2.1 from 2.1.*.
Note: dotted version numbers with exactly two fields that end with a 0 (@1.10, @2.120, etc) should be quoted so that yaml does not treat the version @2.40 as the floating point number 2.4.
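You can reproduce the underlying issue outside of spack: any parser that reads a bare 2.40 as a numeric scalar normalizes it and drops the trailing zero. Plain python's float() shows the same behavior:

```shell
# A bare 2.40 parsed as a number normalizes to 2.4, losing the version's meaning.
python3 -c 'print(float("2.40"))'     # prints 2.4
# Quoted, it stays a string and the trailing zero survives.
python3 -c 'print("2.40")'            # prints 2.40
```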
- (dash) and ~ (tilde) both mean 'off'. Use dash after a space and tilde after a non-space. For example:
elfutils+bzip2~nls
elfutils +bzip2 -nls
elfutils@0.186 +bzip2~nls
dyninst+openmp build_type=RelWithDebInfo
xerces-c@3.2.2 transcoder=iconv
hpctoolkit@develop %gcc@8.5.0
amg2013 cflags='-O2 -mavx512pf'
hpctoolkit@develop ^dyninst@12.1.0+openmp
The arch element combines the platform, operating system and processor family, for example: linux-rhel8-x86_64.
Normally, a system has only one arch type and you don’t need to specify this. However, for systems with separate front and back-end types, the default is the back end. For example, if you wanted to build for the front end on Cray, then you could use something like this.
python@3.7.4 arch=cray-sles15-x86_64 boost os=fe
Now that spack has implemented microarchitecture targets (haswell, ivybridge, etc), you can use ’target’ to build for a generic x86_64 or a specific CPU type. For example:
amg2013 target=x86_64
lulesh target=ivybridge
You can use spack arch to display the generic, top-level families and the micro-arch targets.
spack arch --known-targets
The following command gives a summary of spack spec syntax.
spack help --spec
When writing a spec (for spack spec, install, etc), spack will fully resolve all possible choices for the package and all of its dependencies and create a unique hash value for that exact configuration. This process is called 'concretization.' To see how spack would concretize a spec, use spack spec.
spack spec hpctoolkit@develop ^elfutils@0.187 ^boost@1.77.0
Spack is available via git clone from GitHub. This includes the core
spack machinery and recipes for building over 7,000 packages (and
growing). You should also clone HPCToolkit for the packages.yaml file which is used to configure the spack build.
Note: spack is on GitHub, but hpctoolkit has moved to GitLab.
git clone https://github.com/spack/spack.git
git clone https://gitlab.com/hpctoolkit/hpctoolkit.git
After cloning, add the spack/bin directory to your PATH, or else source the spack setup-env script.
(bash)  . /path/to/spack/share/spack/setup-env.sh
(csh)   setenv SPACK_ROOT /path/to/spack/root
        source $SPACK_ROOT/share/spack/setup-env.csh
It suffices to add spack/bin to your PATH (or even symlink the spack launch script). Sourcing the setup-env script adds extra support for modules built by spack.
config.yaml is the top-level spack config file. This specifies the directory layout for installed files and the top-level spack parameters.
By default, spack installs packages inside the spack repository at spack/opt/spack. To use another location, set the root field under install_tree in config.yaml. Normally, you will want to set this.
config:
  install_tree:
    root: /path/to/top-level/install/directory
There are a few other fields that you may want to set for your local system. These are all in config.yaml.
build_stage – the location where spack builds packages (default is in /tmp).
source_cache – where spack stores downloaded source tar files.
connect_timeout – some download sites, especially sourceforge, are often slow to connect. If you find that connections are timing out, try increasing this time to 30 or 60 seconds (default is 10 seconds).
url_fetch_method – by default, spack uses a python library (urllib) to fetch source files. If you have trouble downloading files, try changing this to curl.
build_jobs – by default, spack uses all available hardware threads for parallel make, up to a limit of 16. If you want to use a different number, then set this.
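Putting these together, a config.yaml that adjusts all of these fields might look like the following sketch (all paths and values are illustrative placeholders, not recommendations):

```yaml
config:
  install_tree:
    root: /path/to/top-level/install/directory
  build_stage:
  - /path/to/fast/scratch/spack-stage
  source_cache: /path/to/source-cache
  connect_timeout: 30
  url_fetch_method: curl
  build_jobs: 8
```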
The default config.yaml file is in the spack repository at spack/etc/spack/defaults. The simplest solution is to copy this file one directory up and then edit the copy (don't edit the default file directly).
cd spack/etc/spack
cp defaults/config.yaml .
vi config.yaml
Alternatively, you could put this file in a separate directory, outside of the spack repository, and then use -C/--config-scope dir on the spack command line. (The -C option goes before the spack command name.) This is useful if you maintain multiple config files for different machines.
spack -C dir install ...
Note: if you put config.yaml in spack/etc/spack, then it will apply to every spack command for that repository (and you won't forget). Putting it in a separate directory is more flexible because you can support multiple configurations from the same repository. But then you must use -C dir with every spack command or else you will get inconsistent results.
You can view the current configuration and see where each entry comes from with spack config.
spack [-C dir] config get config
spack [-C dir] config blame config
See the spack docs on ‘Configuration Files’ and ‘Basic Settings’.
Spack supports creating module files, but does not install them by default. If you want to install module files, then you need to edit modules.yaml to specify which type of modules to use (TCL or Lmod) and the install path.
modules:
  default:
    roots:
      # normally, need only one of these
      tcl: /path/to/top-level/tcl-module/directory
      lmod: /path/to/top-level/lmod-module/directory
    enable:
    - tcl (or lmod)
Also, for hpctoolkit, you should turn off autoload for dependencies. By default, autoload loads the modules for hpctoolkit's dependencies, but hpctoolkit does not need this and loading them may interfere with an application's dependencies. You should do this for both tcl and lmod modules.
modules:
  default:
    tcl:
      hpctoolkit:
        autoload: none
      all:
        autoload: direct
The packages.yaml file specifies the versions and variants for the packages that spack installs and serves as a common reference point for HPCToolkit's prerequisites. This file also specifies the paths or modules for system build tools (cmake, python, etc) to avoid rebuilding them. Put this file in the same directory as config.yaml. A sample packages.yaml file is available in the spack directory of the hpctoolkit repository.
There are two main sections to packages.yaml. The first specifies the versions and variants for hpctoolkit's prereqs. By default, spack will choose the latest version of each package (plus any constraints from hpctoolkit's package.py file). In most cases, this will work, but not always. If you need to specify a different version or variant, then set this in packages.yaml. For example:
packages:
  elfutils:
    version: [0.189]
    variants: ~nls
Note: the versions and variants specified in hpctoolkit's package.py file are hard constraints and should not be changed. Variants in packages.yaml are preferences that may be modified for your local system. (But don't report a bug until you have first tried the versions from the packages.yaml that we supply.)
The other sections in packages.yaml specify paths or modules for other packages and system build tools. Building hpctoolkit's prerequisites requires cmake 3.14 or later, perl 5.x and python 3.8 or later. There are three ways to satisfy these requirements: a system-installed version (eg, in /usr), a pre-built module, or building from scratch.
By default, spack will rebuild these from scratch, even if your local version is perfectly fine. If you already have an installed version and prefer to use that instead, then you can specify this in packages.yaml.
The easiest way to use a pre-built package is to let spack find the package itself. Make sure the program is on your PATH and run spack external find. For example, to search for cmake, use:
spack external find cmake
This does not work for every spack package, but it does work with cmake, perl and python. Note: spack puts these entries in the packages.yaml file in the .spack subdirectory of your home directory.
You can also add these entries manually to packages.yaml. For example, this entry says that cmake 3.7.2 is available from module CMake/3.7.2. buildable: False is optional and means that spack must find a matching external spec or else fail the build.
cmake:
  externals:
  - spec: cmake@3.7.2
    modules:
    - CMake/3.7.2
  buildable: False
This example says that python2 and python3 are both available in /usr/bin. Note that the prefix entry is the parent directory of bin, not the bin directory itself.
python:
  externals:
  - spec: python@2.7.18
    prefix: /usr
  - spec: python@3.6.8
    prefix: /usr
Note: as a special rule for python, use the package name python, even though the program name is python2 or python3.
Warning: it is OK to use spack externals for build utilities that exist on your system (cmake, perl, python). However, we strongly recommend rebuilding all prereq packages that link code into hpctoolkit (dyninst, elfutils, etc).
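As a sketch of this advice (the versions and prefixes are placeholders for your local installs): register only the build utilities as externals, and leave the linked prerequisites with no externals entries so spack builds them from source.

```yaml
packages:
  cmake:
    externals:
    - spec: cmake@3.20.2
      prefix: /usr
  perl:
    externals:
    - spec: perl@5.32.1
      prefix: /usr
  # no externals for dyninst, elfutils, etc -- spack builds these,
  # so the code linked into hpctoolkit matches the rest of the build
```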
Spack implements a hierarchy of micro-architecture targets, where ’target’ is a specific architecture (eg, haswell, ivybridge, etc) instead of a generic family (x86_64, ppc64le or aarch64). This allows the compiler to optimize code for the specific target.
You will notice this choice in two main places: the 'spack spec' and the path for the install directory. For example, linux-rhel7-x86_64 might become linux-rhel7-broadwell. You can use spack arch to see the list of generic families and micro-architecture targets.
spack arch --known-targets
If you prefer a generic install, you can use the target option to specify a generic family (x86_64, ppc64le or aarch64) instead of a micro-architecture target. This would be useful for a shared install that needs to work across multiple machines with different micro-arch types. For example:
spack install hpctoolkit ... target=x86_64
You can also specify preferences for target, compilers and providers in the all: section of packages.yaml. Note: these are only preferences; they can be overridden on the command line.
packages:
  all:
    target: [x86_64]
    compiler: [gcc@9.3.0]
    providers:
      mpi: [openmpi]
See the spack docs on ’Build Customization’ and ’Specs and Dependencies’.
It is important to understand that specifications in packages.yaml are only preferences, not requirements. There are other choices that spack ranks higher. In particular, spack will prefer to reuse an existing package that doesn't conform to packages.yaml rather than rebuild a newer version.
For example, suppose you previously installed hpctoolkit with dyninst 12.1.0. Then, some months later, you update your spack repo and want to install a new hpctoolkit with dyninst 12.3.0. By default, spack will prefer to reuse the old 12.1.0 rather than rebuild the new version.
The solution is to use require: to force spack to build the new version.
packages:
  dyninst:
    require: "@12.3.0"
Note: require: is a full spec (so include @ for the version) and supersedes both version and variants. require: should be a singleton spec (not a list) and should be quoted.
By default, spack install uses --reuse, which prefers reusing an already installed package. You can change this with --fresh, which prefers to rebuild the latest version of a package. But --reuse and --fresh apply to all package versions. The advantage of require: is that you can selectively choose the version and variants on a package-by-package basis.
There are two extensions to require: that are sometimes useful. any_of requires one or more from a list of specs, and one_of requires exactly one from a list of specs. For example,
packages:
  boost:
    require:
    - one_of: ["@1.75.0", "@1.77.0"]
  elfutils:
    require:
    - any_of: ["+bzip2", "+xz"]
You can require the target, compiler or providers in packages.yaml as follows. Recall that the field for require: is a spec in quotes.
packages:
  all:
    require: "%gcc@9.3.0 target=x86_64"
  mpi:
    require: "mpich@4.0"
The ’concretizer’ is the part of spack that converts a partial spec into a full spec with values for the version and variants of every package in the spec plus all dependencies. The new concretizer for spack (clingo) is a third-party python library for solving answer-set logic problems (eg, satisfiability). Normally, this only needs to be set up once per machine, the first time you run spack.
The easiest way to install clingo is to use spack's pre-built libraries. These are available for Linux (x86_64, ppc64le, aarch64) and MacOS/Darwin (x86_64) for python 3.7 or later. The MacOS version also requires MacOS 10.13 or later and the Xcode developer package (for python and other programs).
By default, spack will automatically install (bootstrap) clingo the first time you run a command that uses it (spec or solve). However, if this fails or you want to verify the steps yourself, then follow these steps.
In config.yaml, set concretizer to clingo.
config:
  concretizer: clingo
Spack needs at least one compiler configured (see below). If this is your first time running spack on this machine, then use compiler find to detect a compiler. Finally, use spack solve to trigger bootstrapping.
spack compiler list    (to display known compilers)
spack compiler find    (to add a compiler, if needed)
spack solve zlib
==> Bootstrapping clingo from pre-built binaries
...
zlib@1.2.11%gcc@8.4.1+optimize+pic+shared arch=linux-rhel8-zen
Spack stores the clingo bootstrap files in ~/.spack/bootstrap. You can check on the status of these files or clean (reset) them with the find or clean commands.
spack find -b     (displays the status of the bootstrap files)
spack clean -b    (erases the current bootstrap files)
If the binary bootstrap fails, then try the solve step with debugging turned on.
spack -d solve zlib
If the binary bootstrap fails or if your system is not supported, then you will need to let spack build clingo from source. Reset spack-install to true and rerun spack solve zlib. This requires a compiler with support for C++14 and takes maybe 30-45 minutes to install all the packages.
Building HPCToolkit requires GNU gcc/g++ version 8.x or later with C++17 support. By default, spack uses the latest available version of gcc, but you can specify a different compiler, if one is available.
Spack uses a separate file, compilers.yaml, to store information about available compilers. This file is normally in your home directory at ~/.spack/platform, where 'platform' is normally 'linux' (or else 'cray').
The first time you use spack, or after adding a new compiler, you should run spack compiler find to have spack search your system for available compilers. If a compiler is provided as a module, then you should load the module before running find. Normally, you only need to run find once, unless you want to add or delete a compiler. You can also run spack compiler list and spack compiler info to see what compilers spack knows about.
For example, on one power8 system running RedHat 7.3, /usr/bin/gcc is
version 4.8.5, but gcc 8.3.0 is available as module GCC/8.3.0
.
module load GCC/8.3.0
spack compiler find
==> Added 2 new compilers to /home/krentel/.spack/linux/compilers.yaml
    gcc@8.3.0  gcc@4.8.5
==> Compilers are defined in the following files:
    /home/krentel/.spack/linux/compilers.yaml

spack compiler list
==> Available compilers
-- gcc rhel7-ppc64le --------------------------------------------
gcc@8.3.0  gcc@4.8.5

spack compiler info gcc@8.3
gcc@8.3.0:
    paths:
        cc = /opt/apps/software/Core/GCCcore/8.3.0/bin/gcc
        cxx = /opt/apps/software/Core/GCCcore/8.3.0/bin/g++
        f77 = /opt/apps/software/Core/GCCcore/8.3.0/bin/gfortran
        fc = /opt/apps/software/Core/GCCcore/8.3.0/bin/gfortran
    modules = ['GCC/8.3.0']
    operating system = rhel7
Note: for compilers from modules, spack does not fill in the modules: field in the compilers.yaml file. You need to do this manually. In the above example, after running find, I edited compilers.yaml to add GCC/8.3.0 to the modules: field as below. This is important to how spack manipulates the build environment.
- compiler:
    modules: [GCC/8.3.0]
    operating_system: rhel7
    spec: gcc@8.3.0
    ...
Spack uses % syntax to specify the build compiler and @ syntax to specify the version. For example, suppose you had gcc versions 8.5.0, 9.3.0 and 10.2.0 available and you wanted to use 9.3.0. You could write this as:
spack install package %gcc@9.3.0
You can also set the choice of compiler in the all: section of packages.yaml.
packages:
  all:
    compiler: [gcc@9.3.0]
See the spack docs on ‘Compiler Configuration’.
Spack uses Python for two things. First, to run the Spack scripts written in Python, and second, to use as a dependency for other spack packages. These do not have to be the same python version or install.
Currently, Spack requires at a minimum Python 3.7 to run spack at all. But 3.7 is deprecated and support for it will be removed in a few months. So, the best thing to do is to upgrade to Python 3.8 or later now.
If python 3.8 or later is not available on your system, then your options to install it are: (1) load a module for a later version, (2) use yum or apt to install a later version, (3) ask your sysadmin to install a later version, or (4) as a last resort, compile a later version from source.
If a later python is available on your system but not first in your PATH, or is under a different name, you can set the environment variable SPACK_PYTHON to the python3 binary. For example, suppose /usr/bin/python3 is too old, but python 3.8 is available as /usr/bin/python3.8; then you could use:
export SPACK_PYTHON=/usr/bin/python3.8
If set, SPACK_PYTHON is the path to the Python interpreter used to run Spack.
First, set up your config.yaml, modules.yaml, packages.yaml and compilers.yaml files as above and edit them for your system. You can see how spack will build hpctoolkit with spack spec.
spack spec hpctoolkit
Then, the “one button” method uses spack to install everything.
spack install hpctoolkit
Tip: spack fetch is somewhat fragile and sometimes has transient problems downloading files. You can use spack fetch -D to pre-fetch all of the tar files and resolve any downloading problems before starting the full install.
spack fetch -D hpctoolkit
The manual method uses spack to build hpctoolkit's prerequisites and then uses the traditional autotools configure && make && make install to install hpctoolkit. This method is primarily for developers who want to compile hpctoolkit, edit the source code and recompile, etc. This method is also useful if you need some configure option that is not available through spack.
First, use spack to build hpctoolkit's prerequisites as above. You can either build some version of hpctoolkit as before (which will pull in the prerequisites), or else use --only dependencies to avoid building hpctoolkit itself.
spack install --only dependencies hpctoolkit
Then, configure and build hpctoolkit as follows. Hpctoolkit uses automake and so allows for parallel make.
configure \
    --prefix=/path/to/hpctoolkit/install/prefix \
    --with-spack=/path/to/spack/install_tree/linux-centos9-x86_64/gcc-11.3.1 \
    ...
make -j <num>
make install
The argument to --with-spack should be the directory containing all of the individual package directories, normally two directories down from the top-level install_tree and named by the platform and compiler. This option replaces the old --with-externals.
The following are other options that may be useful. For the full list of options, see configure -h.
--enable-all-static – build hpcprof-mpi statically linked for the compute nodes.
--enable-develop – compile with optimization turned off for debugging.
--with-package=path – specify the install prefix for some prerequisite package, mostly for developers who want to use a custom, non-spack version of some package.
MPICXX=compiler – specify the MPI C++ compiler for hpcprof-mpi; may be a compiler name or full path.
Note: if your spack install tree has multiple versions or variants for the same package, then --with-spack will select the one with the most recent directory time stamp (and issue a warning). If this is not what you want, then you will need to specify the correct version with a --with-package option.
Beginning with the 2020.03.01 version, HPCToolkit supports profiling CUDA binaries (NVIDIA only). For best results, use CUDA version 10.1 or later and Dyninst 10.1 or later. Note: in addition to a CUDA installation, you also need the CUDA system drivers installed. This normally requires root access and is outside the scope of spack.
For a spack install with CUDA, use the +cuda variant.
spack install hpctoolkit +cuda
For a manual install, either download and install CUDA or use an existing module, and then use the --with-cuda configure option.
configure \
    --prefix=/path/to/hpctoolkit/install/prefix \
    --with-spack=/path/to/spack/install/dir \
    --with-cuda=/path/to/cuda/install/prefix \
    ...
If you installed CUDA with spack in the same directory as the rest of the prerequisites, then the --with-spack option should find it automatically (but check the summary at the end of the configure output). If you are using CUDA from a separate system module, then you will need the --with-cuda option.
HPCToolkit supports profiling Intel GPUs through the Intel Level Zero and Intel GTPin interfaces. For basic support (start and stop times for GPU kernels), add the +level_zero variant. For advanced support inside the GPU kernel, also add the +gtpin variant. But we recommend always compiling with gtpin and then deciding at runtime which options to use.
spack install hpctoolkit +level_zero +gtpin
GTPin requires the oneapi-igc package, which is an external-only spack package, normally installed in /usr. You should add this manually with a spack externals entry (it's currently not searchable) and let spack build the rest. For example:
packages:
  oneapi-igc:
    externals:
    - spec: oneapi-igc@1.0.10409
      prefix: /usr
For an autotools build, use the options:
configure \
    --with-level0=/path/to/oneapi-level-zero/prefix \
    --with-gtpin=/path/to/intel-gtpin/prefix \
    --with-igc=/usr (or oneapi-igc prefix) \
    ...
HPCToolkit supports profiling AMD GPU binaries through the HIP/ROCM interface, and beginning with version 2022.04.15, we support building hpctoolkit plus rocm with a fully integrated spack build. We require ROCM 5.x or later, and the ROCM version should match the version the application uses. This is still somewhat fluid and subject to change.
There are two ways to build HPCToolkit plus ROCM with spack.
HPCToolkit uses four ROCM prerequisites (hip, hsa-rocr-dev, roctracer-dev and rocprofiler-dev). If you have AMD's all-in-one ROCM package installed in /opt, then specify all four prereqs in packages.yaml. For example, if ROCM 5.0.0 is installed at /opt/rocm-5.0.0, then you would use:
packages:
  hip:
    externals:
    - spec: hip@5.0.0
      prefix: /opt/rocm-5.0.0
  hsa-rocr-dev:
    externals:
    - spec: hsa-rocr-dev@5.0.0
      prefix: /opt/rocm-5.0.0
  roctracer-dev:
    externals:
    - spec: roctracer-dev@5.0.0
      prefix: /opt/rocm-5.0.0
  rocprofiler-dev:
    externals:
    - spec: rocprofiler-dev@5.0.0
      prefix: /opt/rocm-5.0.0
Currently, with AMD's directory layout, the hip and hsa-rocr-dev prefixes could be specified either as /opt/rocm-5.0.0 or /opt/rocm-5.0.0/hip (and /opt/rocm-5.0.0/hsa). But roctracer-dev and rocprofiler-dev require /opt/rocm-5.0.0. Also, the rocm packages do not support spack external find. But all this is fluid and subject to change.
Alternatively, if ROCM is not installed in /opt/rocm, or if you want to build a different version, then omit the externals definitions in packages.yaml (but be prepared for spack to build an extra 80-90 packages). In either case, install hpctoolkit with:
spack install hpctoolkit +rocm ...
For developers building with autotools, use the following configure options. If /opt/rocm is available, then use the --with-rocm option. Otherwise, use the other four options.
configure \
    --with-rocm=/opt/rocm \    (for all-in-one /opt/rocm)
    --with-rocm-hip=/path/to/hip/prefix \
    --with-rocm-hsa=/path/to/hsa-rocr-dev/prefix \
    --with-rocm-tracer=/path/to/roctracer-dev/prefix \
    --with-rocm-profiler=/path/to/rocprofiler-dev/prefix \
    ...
You may mix the all-in-one option with the individual package options. The rule is that the specific overrides the general.
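For example, a sketch of mixing the two (the paths are placeholders): the all-in-one --with-rocm covers everything except roctracer-dev, which the more specific option overrides with a custom build.

```
configure \
    --with-rocm=/opt/rocm \
    --with-rocm-tracer=/path/to/custom/roctracer-dev/prefix \
    ...
```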
For all three GPU types, an application can access the GPU through the native interface (CUDA, ROCM, Level Zero) or through the OpenCL interface. To add support for OpenCL, add the +opencl variant in addition to the native interface. We recommend adding opencl support for all GPU types. For example, with CUDA:
spack install hpctoolkit +cuda +opencl
For an autotools build, use the --with-opencl option.
configure \
    --with-cuda=/path/to/cuda/prefix \
    --with-opencl=/path/to/opencl-c-headers/prefix \
    ...
HPCToolkit always supports profiling MPI applications. For hpctoolkit, the spack variant +mpi is for building hpcprof-mpi, the MPI version of hpcprof. If you want to build hpcprof-mpi, then you need to supply an installation of MPI.
spack install hpctoolkit +mpi
Normally, for systems with compute nodes, you should use an existing MPI module that was built for the correct interconnect for your system and add this to packages.yaml. The MPI module should be built with the same version of GNU gcc/g++ used to build hpctoolkit (to keep the C++ libraries in sync). For example,
packages:
  mpich:
    externals:
    - spec: mpich@4.0
      modules:
      - mpich/4.0
HPCToolkit can access the hardware performance counters with either PAPI (default) or Perfmon (libpfm4). PAPI runs on top of the perfmon library and uses its own, internal (but slightly out of date) copy of perfmon. So, building with +papi allows accessing the counters with either PAPI or perfmon events. If you want to disable PAPI and use the latest Perfmon instead, then build hpctoolkit with ~papi.
spack install hpctoolkit ~papi
Beginning with the 2023 release, HPCToolkit can profile Python scripts and attribute samples to python source functions instead of the python interpreter. This requires Python 3.10 or later; it need not be the same python used to run the spack scripts, but it should be the same python used to run the application.
spack install hpctoolkit +python
When building with autotools, use the --enable-python argument with the path to the python-config command.
configure \
    --enable-python=/path/to/python-config \
    ...
There are two ways to build hpcprof-mpi on Cray systems, depending on how old the system is and what MPI wrapper is available. Newer Crays have an mpicxx wrapper from the cray-mpich module (but it may not be on your PATH). Older Crays use the CC wrapper from the craype module.
On either type of system, start by switching to the PrgEnv-gnu module and unload the Darshan module if it exists. Darshan is a profiling tool that monitors an application's use of I/O, but it conflicts with hpctoolkit.
module swap PrgEnv-cray PrgEnv-gnu
module unload darshan
Next, we need the front-end GCC compiler that is compatible with the MPI compiler. The gcc compiler should use the front-end operating system type (sles, not cnl) and should be version 8.x or later (preferably 9.x or later). The cc and cxx compilers should be gcc and g++, not the cc and CC wrappers, and the modules should include at least PrgEnv-gnu and gcc.
For example, I have the following on Crusher at ORNL in my compilers.yaml file (your versions may differ). Note that spack may report the front-end arch type as either cray or linux.
compilers:
- compiler:
    spec: gcc@11.2.0
    paths:
      cc: /opt/cray/pe/gcc/11.2.0/bin/gcc
      cxx: /opt/cray/pe/gcc/11.2.0/bin/g++
      f77: /opt/cray/pe/gcc/11.2.0/bin/gfortran
      fc: /opt/cray/pe/gcc/11.2.0/bin/gfortran
    modules:
    - PrgEnv-gnu/8.3.3
    - gcc/11.2.0
    operating_system: sles15
    target: x86_64
    ...
New Cray
The preferred method for newer Crays is using the +mpi option and the cray-mpich module. This requires the mpicxx wrapper, although it won't be on your PATH. Look in the $MPICH_DIR or $CRAY_MPICH_DIR directory for the mpicxx wrapper. For example on Crusher, this is at the following path; your path may be different.
/opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/bin/mpicxx
If this is available, then add a spack externals entry for cray-mpich and the mpi virtual package to packages.yaml. For example, I used this entry on Crusher; your versions may be different (put the specs in quotes).
packages:
  mpi:
    require: "cray-mpich@8.1.17"
  cray-mpich:
    externals:
    - spec: "cray-mpich@8.1.17"
      prefix: /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1
      modules:
      - cray-mpich/8.1.17
Then, build with +mpi for the front-end arch type (with arch or os). If the front and back-end arch types are the same, then you don't need to specify that. For example,
spack install hpctoolkit +mpi os=fe (or arch=cray-sles15-x86_64)
Cray's use of modules is complex and requires several modules to be loaded at compile time. You will likely find that the above recipe fails with undefined references to symbols from libraries provided by one or more modules. For example,
/usr/bin/ld: warning: libfabric.so.1, needed by /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/lib/libmpi_gnu_91.so,
not found (try using -rpath or -rpath-link)
/usr/bin/ld: /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/lib/libmpi_gnu_91.so: undefined reference to `fi_strerror@FABRIC_1.0'
There are two solutions. One, you could search the failing build log to identify the missing modules and add them to the compiler entry. This may require several modules. For example on Crusher, I added these modules to the compiler entry and then the build succeeded.
modules:
- PrgEnv-gnu/8.3.3
- gcc/11.2.0
- craype/2.7.16
- cray-mpich/8.1.17
- libfabric/1.15.0.0
The other solution is to load the PrgEnv-gnu and related modules and then install hpctoolkit with the --dirty flag. Note: only the final hpctoolkit package needs --dirty. For example,
spack install --only dependencies hpctoolkit +mpi os=fe
spack install --dirty hpctoolkit +mpi os=fe
Note: some very new Cray systems (eg, Sunspot at ANL) have PrgEnv-gnu but use a different MPI module than cray-mpich. On such a system, continue to add the extra modules to the compilers entry, but use a spack externals entry for the other MPI module.
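As a sketch, such an externals entry would follow the same pattern as the cray-mpich entry above; the package name other-mpich, its version and its prefix below are purely hypothetical placeholders, so substitute whatever MPI module and spack package your system actually provides.

```yaml
# Hypothetical sketch: "other-mpich" and all versions/paths are placeholders,
# not real module names -- use the MPI module your system provides.
packages:
  mpi:
    require: "other-mpich@1.0"
  other-mpich:
    externals:
    - spec: "other-mpich@1.0"
      prefix: /opt/other/mpich/1.0
      modules:
      - other-mpich/1.0
```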
Old Cray

Some older Cray systems (eg, Theta at ANL) don’t have the mpicxx wrapper. In this case, it’s necessary to use the +cray option. This option tells hpctoolkit’s configure to search for the older CC wrapper.
Prepare the PrgEnv-gnu and GCC compiler the same as with a newer Cray, then build hpctoolkit with the +cray option, again for the front-end arch type.

spack install hpctoolkit +cray os=fe
As with new Crays, you will likely need to add extra modules to the compiler entry or else build with --dirty.
Autotools

For building with autotools, use the MPICXX configure variable to specify the MPI compiler, either CC or mpicxx. Note: the --enable-all-static option is no longer used.

configure \
    --prefix=/path/to/install/prefix \
    --with-spack=/path/to/linux-sles15-x86_64/gcc-11.2.0 \
    MPICXX=CC    (or /path/to/mpicxx)
Old HPCToolkit Versions

As of October 2022, we now always build hpcprof-mpi on Cray dynamically, and the +all-static option no longer exists. However, for old versions of hpctoolkit up to 2022.05.15, it is possible to build hpcprof-mpi either statically or dynamically, depending on what your system supports. (Hpcprof-mpi is disabled for the 2022.10.01 release.)
If your Cray supports it and CC builds static binaries, then you can build hpcprof-mpi statically with the +cray and +cray-static options.

spack install hpctoolkit @2022.05.15 +cray +cray-static os=fe
The +cray-static option only applies with +cray (using the CC wrapper) and only for versions up to 2022.05.15.
Since the 2020.12 release, the HPCToolkit GUI interface provides both profile and trace views in a single application, i.e. hpcviewer. Prior to that, each view was a separate program: hpcviewer to analyze the profile database, and hpctraceviewer to display the traces.
We provide binary distributions for hpcviewer on Linux (x86_64, ppc64le and aarch64), Windows (x86_64) and MacOS (x86_64, M1 and M2). HPCToolkit databases are platform-independent and it is common to run hpcrun on one machine and then view the results on another machine.
Starting with the 2021.01 release, the viewer requires Java 11 or later, plus GTK+ 3.20 or later on Linux.
The spack install is available on Linux x86_64, ppc64le (power 8 and 9) and aarch64 ARM, and also on MacOS x86_64. This installs hpcviewer and includes the Java prerequisite.
For the current viewers, use openjdk with the most recent version of Java 11 for all platforms. Currently, this is the default, but if not, then you can add an explicit openjdk dependency. You can check this with spack info and spack spec.

spack info openjdk
spack install hpcviewer
spack install hpcviewer ^openjdk @11.0.12_7    (if needed)
Note: to run the viewer on MacOS, you can either open the Finder, click your way to the hpcviewer.app directory and double-click on the hpcviewer icon, or else use spack load hpcviewer to put hpcviewer on your PATH.
Binary distributions of the viewers for all supported platforms are available at:
On Linux, download the linux.gtk version of hpcviewer (and also hpctraceviewer for older versions), unpack the tar files and run the install scripts (for both viewers) with the path to the desired install prefix.

./install /path/to/install/directory
On Windows and MacOS, download the win32 or macosx.cocoa versions and unpack the zip or dmg files in the desired directory. Due to Apple’s security precautions, on MacOS you may need to use curl or wget instead of a web browser.
Note: the manual install uses the existing system version of Java (or one of several versions with modules), whereas the spack install includes the java prerequisite. That is, the spack install is self-contained and does not need to change the system java.
Some systems may have compilers that are too old for building HPCToolkit or other packages. For example, RedHat 7.x comes with gcc 4.8.x which is very old. If your system doesn’t already have modules for later compilers, then you may need to build a new compiler yourself.
First, pick a directory in which to install the modules and make subdirectories for the spack packages and module files. In this example, I’m using /opt/spack/Modules as my top-level directory and subdirectories packages and modules. Edit config.yaml and modules.yaml to add these paths.
config:
  install_tree:
    root: /opt/spack/Modules/packages

modules:
  default:
    roots:
      # just use one of these module_roots
      tcl: /opt/spack/Modules/modules
      lmod: /opt/spack/Modules/modules
Determine if your system uses TCL (environment) or Lmod modules. Normally, the module command is a shell function; TCL modules use modulecmd and Lmod modules eval LMOD_CMD. Edit modules.yaml to enable the module type for your system. Again, you only need one of these (and don’t use dotkit unless you work at LLNL).
modules:
  default:
    enable:
    - tcl
    - lmod
Then, choose a version of gcc and use the default compiler (normally /usr/bin/gcc) to build the newer version. Currently, gcc 10.x or 11.x is a good choice that builds robustly and has enough modern features, but is not too new to cause problems for some packages. For example,

spack install gcc@10.4.0
Note: it is not necessary to rebuild the new compiler with itself.
After building a new compiler, you need to tell spack how to find it. First, use module use and module load to load the module. For TCL modules, the module files are in a subdirectory of module_roots named after the system architecture. For example,

module use /opt/spack/Modules/modules/linux-rhel7-x86_64
module load gcc-8.4.0-gcc-4.8.5-qemsqrc
For Lmod modules, the module directory is one level below that and the module names are a little different.
module use /opt/spack/Modules/modules/linux-rhel7-x86_64/Core
module load gcc/8.4.0-dan4vbm
For both TCL and Lmod modules, it’s best to put the module use command in your shell’s startup scripts so that module avail and module load will know where to find them. After loading the module, run spack compiler find.

$ spack compiler find
==> Added 1 new compiler to /home/krentel/.spack/linux/compilers.yaml
    gcc@8.4.0
Finally, always check the new entry in compilers.yaml and add the name of the module to the modules: field.
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules:
    - gcc-8.4.0-gcc-4.8.5-qemsqrc
    operating_system: rhel7
    paths:
      cc: /opt/spack/Modules/packages/linux-rhel7-x86_64/gcc-4.8.5/gcc-8.4.0-qemsqrcwkk52f6neef4kg5wvoucsroif/bin/gcc
      cxx: /opt/spack/Modules/packages/linux-rhel7-x86_64/gcc-4.8.5/gcc-8.4.0-qemsqrcwkk52f6neef4kg5wvoucsroif/bin/g++
      f77: /opt/spack/Modules/packages/linux-rhel7-x86_64/gcc-4.8.5/gcc-8.4.0-qemsqrcwkk52f6neef4kg5wvoucsroif/bin/gfortran
      fc: /opt/spack/Modules/packages/linux-rhel7-x86_64/gcc-4.8.5/gcc-8.4.0-qemsqrcwkk52f6neef4kg5wvoucsroif/bin/gfortran
    spec: gcc@8.4.0
    target: x86_64
Note: as long as the spack packages and modules directories remain intact and you don’t remove the compilers.yaml entry, this compiler will always be available from within spack. You can also use this compiler outside of spack with module load. If you want to make this your default compiler for all spack builds, then you can specify it in packages.yaml. For example,
packages:
  all:
    compiler: [gcc@8.4.0]
Also, when using the compiler from within spack, it doesn’t matter if you have the module loaded or not. Spack will erase your environment and re-add the appropriate modules automatically.
If your system does not support modules, then you will have to add them. If you have root access, the easiest solution is to install a system package for modules. If not, then use spack to install the environment-modules package. Source the bash or csh script in the init directory to add the module function to your environment. For example,
spack install environment-modules
cd /path/to/environment-modules-5.3.0-ism7cdy4xverxywj27jvjstqwk5oxe2v/init

(bash)  . ./bash
(csh)   source ./csh
Again, add the setup command to your shell’s startup scripts.
A spack mirror allows you to download and save a source tar file in advance. This is useful if your system is behind a firewall, or if you need to manually agree to a license, or if you just don’t want to keep downloading the same file over and over.
A mirror has a simple directory structure and is easy to set up. Create a top-level directory with subdirectories named after the spack packages and copy the tar files into their package’s subdirectory. For example,
my-mirror/
    boost/
        boost-1.66.0.tar.bz2         (from boost_1_66_0.tar.bz2)
        boost-1.70.0.tar.bz2
    dyninst/
        dyninst-10.1.0.tar.gz        (from git checkout)
    ibm-java/
        ibm-java-8.0.5.30.None       (from ibm-java-sdk-8.0-5.30-ppc64le-archive.bin)
    intel-xed/
        intel-xed-2019.03.01.tar.gz  (from git checkout)
        mbuild-2019.03.01.tar.gz     (resource from git checkout)
    jdk/
        jdk-1.8.0_202.tar.gz         (from jdk-8u202-linux-x64.tar.gz)
Note: the names of the files in the spack mirror always follow the same, specific format, regardless of the actual name of the tar file. Here, version is the spack name for the version (from spack info), and extension is the same extension as the tar file (tar.gz, tar.bz2, etc) or else None for other types of files.

<package-name>-<version>.<extension>
For example, the boost 1.66.0 tar file is actually named boost_1_66_0.tar.bz2 but is stored in the mirror as boost-1.66.0.tar.bz2, and jdk-8u202-linux-x64.tar.gz is renamed to jdk-1.8.0_202.tar.gz.
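The renaming scheme above can be sketched as a few shell commands. The touch commands below create empty placeholder files in a scratch directory purely to illustrate the layout; with a real mirror you would cp your downloaded tar files instead.

```shell
# Illustrative only: build the mirror layout in a scratch directory.
# "touch" creates empty placeholders standing in for the real tar files.
mkdir -p my-mirror/boost my-mirror/jdk

# boost_1_66_0.tar.bz2 is stored under its spack name:
touch my-mirror/boost/boost-1.66.0.tar.bz2

# jdk-8u202-linux-x64.tar.gz is stored under spack's version 1.8.0_202:
touch my-mirror/jdk/jdk-1.8.0_202.tar.gz

find my-mirror -type f | sort
# prints:
#   my-mirror/boost/boost-1.66.0.tar.bz2
#   my-mirror/jdk/jdk-1.8.0_202.tar.gz
```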
For packages that use a snapshot from a git repository (tag or commit hash), clone the repository, checkout the desired version, and make a gzipped tar file. (You should exclude the .git subdirectory.) But note that spack refuses to use a cached file for the head of a branch because it is a moving target.
Finally, after creating the mirror directory, add it to spack with spack mirror add. For example,

spack mirror add my-mirror file:///home/krentel/spack/my-mirror
spack mirror list
    my-mirror    file:///home/krentel/spack/my-mirror
Note: by default, spack stores downloaded files inside the spack repository at spack/var/spack/cache. This directory is a full spack mirror, so instead of creating a separate directory tree, you could just copy the files into the cache directory. This is useful when spack fetch has trouble downloading a file. If you can download the file manually, or copy it from another machine, then just rename the file as above and copy it into the spack file cache.
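As a sketch, staging a manually downloaded file into the cache looks like the following; the commands use a scratch copy of the per-package cache layout and a placeholder file, so substitute your real spack checkout and the actual downloaded tar file.

```shell
# Illustrative only: mimic spack's per-package cache layout in a scratch dir.
# Replace "spack/var/spack/cache" with the cache in your real spack checkout,
# and use the actual downloaded file instead of the touch placeholder.
mkdir -p spack/var/spack/cache/m4
touch m4-1.4.18.tar.gz                         # stands in for the manual download
cp m4-1.4.18.tar.gz spack/var/spack/cache/m4/  # named <package>-<version>.<ext>
```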
For more information on mirrors, see:
Spack is somewhat fragile for how it downloads tar files and will often fail for transitory network problems. This is especially true for packages with many dependencies. For example:
==> Installing m4
==> Searching for binary cache of m4
==> No binary for m4 found: installing from source
curl: (6) Could not resolve host: ftp.wayne.edu; Name or service not known
==> Fetching https://ftpmirror.gnu.org/m4/m4-1.4.18.tar.gz
==> Fetching from https://ftpmirror.gnu.org/m4/m4-1.4.18.tar.gz failed.
==> Error: FetchError: All fetchers failed for m4-1.4.18-vorbvkcjfac43b7vuswsvnm6xe7w7or5
There are two workarounds. First, assuming the problem is temporary, simply wait 10 minutes or an hour and try again. Second, you could manually download the file(s) by some other means and copy them to spack’s cache directory spack/var/spack/cache/<package> or to a spack mirror.
Normally, HPCToolkit should build and work correctly with the latest version for all of its dependencies. But sometimes a new release will change something and break the build. This has happened a couple times where a new release of Boost has broken the build for Dyninst. Or, maybe the latest version of gcc/g++ disallows some usage and breaks the build.
The solution is to use packages.yaml to specify an earlier version until the rest of the code adapts to the change.
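For example, a packages.yaml fragment pinning boost to an earlier release might look like the following; the version number here is purely illustrative, so pin whichever version your build actually needs.

```yaml
# Illustrative sketch: pin boost to a known-good earlier version
# (the version number is a placeholder, not a recommendation)
packages:
  boost:
    version: [1.72.0]
```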
Spack is quite aggressive about compiling with a clean environment and will unload modules unless they are specifically required by some config file (compilers.yaml or packages.yaml). This can result in a situation where you think some compiler or build tool is available from your environment, but spack removes it during the build.
In this example, I am using modules for GCC/8.3.0 and CMake/3.8.2. Spack finds the gcc 8.3.0 compiler and I added cmake@3.8.2 to packages.yaml, but I failed to add the modules: field for gcc 8.3.0 in compilers.yaml. As a result, the build fails with:
cmake: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by cmake)
cmake: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by cmake)
cmake: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by cmake)
==> Error: ProcessError: Command exited with status 1:
The problem is that cmake 3.8.2 was built with g++ 8.3.0, but spack is running cmake without the GCC/8.3.0 libraries, so the build fails as above. One way to confirm this is to rerun spack install --dirty, which then succeeds. The --dirty option tells spack not to unload your modules. Whenever the build fails with a mismatched library as above, and especially when --dirty fixes the problem, this is a clear sign that spack is missing a module during the build.
Although --dirty may make the build succeed, there should be no case where this is necessary. The correct solution is to fill in the modules: field in compilers.yaml or some other config file. See the section on Compilers above.
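For this example, the fix would be to add the CMake-era GCC module to the gcc 8.3.0 compiler entry. A minimal sketch, assuming the module names from the example above (the other fields of the compiler entry, such as paths: and operating_system:, stay as spack wrote them):

```yaml
# Sketch: add the missing module to the gcc 8.3.0 compiler entry.
# Only the modules: field changes; all other fields remain as generated.
- compiler:
    spec: gcc@8.3.0
    modules:
    - GCC/8.3.0
```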
Copyright © HPCToolkit Project a Series of LF Projects, LLC For web site terms of use, trademark policy and other project policies please see https://lfprojects.org.