HPCToolkit is a collection of performance analysis tools for node-based performance analysis.
It has been designed around the following principles:
- Be language independent.
- Avoid code instrumentation.
- Avoid blind spots.
- Provide context for understanding layered and object-oriented software.
- Support multiple performance measures to prevent myopic interpretation.
- Display user-defined derived performance metrics for effective analysis.
- Take a top-down approach to performance analysis.
- Use hierarchical aggregation to mitigate approximate attribution.
- Ensure that measurement and analysis can scale to very large programs and executions.
More detailed explanations of these design principles are available
in papers on the HPCToolkit website at hpctoolkit.org.
A typical performance analysis session consists of:
- Measuring execution costs.
hpcrun(1) uses statistical sampling to collect, with low overhead and high accuracy,
a set of call path profiles,
i.e., measurements of hardware resource consumption (costs) together with the call paths at which consumption occurred.
For statically linked applications, hpclink(1) serves the same purpose.
- Analyzing source code structure.
hpcstruct(1) discovers static program structure, such as procedures and loop nests,
in the binary code of the application's executable, its shared libraries, and its compiled GPU binaries.
It takes into account optimizing compiler transformations such as restructuring of procedures and loops
for inlining, software pipelining, multicore parallelization, and offloading to GPUs.
- Attributing measured costs to source code structure.
hpcprof(1) overlays call path profiles with static program structure information
to attribute measured costs incurred by the optimized object code
to meaningful source code constructs such as procedures, loop nests, and individual lines of code.
The result of attribution is an experiment database
stored in a file system directory.
- Visualizing attributed costs in source code or timeline views.
hpcviewer(1) presents the resulting experiment databases.
Its source view displays measurements in outline form,
each entry attributing costs to a source code construct by line number
and linked to a display of the corresponding application source code.
Its trace view displays measurements as a two-dimensional timeline,
with execution progress along the horizontal axis
and the application's parallel threads along the vertical axis.
The visualization step may be done interactively with either view on a personal computer,
even if the application must run in batch on a large computing cluster.
To this end, experiment databases are self-contained and relocatable,
even containing a copy of the application source code,
and hpcviewer is platform-independent (via Eclipse RCP)
and lightweight enough for good interactive performance on a laptop.
Assume we have an application called zoo
whose source code is located in path-to-zoo.
First compile and link your application normally with full optimization
and as much debugging information as possible.
Typically this involves compiler options such as -O3 -g.
(See the HPCToolkit documentation for options for specific compilers.)
Then perform the following steps.
Profile with hpcrun(1)
Assume you wish to measure two different sets of resources,
which will require two measurement runs.
hpcrun(1) always collects the data needed for hpcviewer's source view,
but if you want to use traces you must add the -t (trace)
option to collect the additional data.
hpcrun -t <event-set-1> zoo
hpcrun -t <event-set-2> zoo
hpcrun(1) by default puts its results into a single measurement directory,
so the two sets of measurements are combined automatically.
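As a concrete sketch of the two runs, the commands below use illustrative event names and sampling periods; the events available on your system differ, and hpcrun's event-listing option can enumerate them:

```shell
# Illustrative only: event names and periods are assumptions, not a
# prescription; consult hpcrun's event listing for your system.
# First run: sample CPU time, with tracing enabled (-t).
hpcrun -t -e CPUTIME@5000 ./zoo
# Second run: sample hardware counters for cycles and instructions.
hpcrun -t -e cycles -e instructions ./zoo
```

Both runs deposit their profiles into the same hpctoolkit-zoo-measurements directory, which is what allows the later hpcprof step to combine them.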
Analyze binaries using hpcstruct(1)
Run hpcstruct(1) to discover the program structure of the program
and the shared libraries and GPU binaries it used during the run.
Although hpcstruct(1) has a number of advanced options, it is typically run with none.
The measurements directory is passed as the last argument.
By default the generated structure files are put into subdirectories of the measurements directory.
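Following the description above, the typical invocation is a single command with the measurements directory as its only argument:

```shell
# Discover program structure for the executable, shared libraries,
# and GPU binaries recorded in the measurements directory.
hpcstruct hpctoolkit-zoo-measurements
```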
Create an experiment database using hpcprof(1)
(The version of hpcprof(1) used should match
the version of hpcrun(1) used for measurement.)
Use the -I
option to specify the location of zoo's source code.
The measurement directory is specified as the last argument.
By default the generated experiment database is named hpctoolkit-zoo-database.
hpcprof -I path-to-zoo/+ hpctoolkit-zoo-measurements
Visualize using hpcviewer(1)
hpcviewer(1) displays the experiment database in either source or timeline view,
on any machine to which you have copied the database.
In hpcviewer you may also view "derived metrics",
i.e., combinations of measured metrics that are computed on the fly.
See The hpcviewer User Interface
Guide for more information.
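A minimal launch, assuming the default database directory name produced by the hpcprof step above:

```shell
# Open the experiment database in the hpcviewer GUI; this works on
# any machine to which the database directory has been copied.
hpcviewer hpctoolkit-zoo-database
```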
- © 2002-2022, Rice University.
- See README.License.
Email: hpctoolkit-forum =at= rice.edu