For decades, scientific discovery has been fueled by investments in High Performance
Computing (HPC). Whether in research fields such as climate modeling, astrophysics, and
genomics, or in commercial endeavors such as engineering, oil exploration, and
pharmaceuticals, HPC has been deployed to help scientists and engineers reach their next
insights and innovations sooner, and to make the once-impossible possible.
While the need for scientific advancement has not waned, the computing tools that comprise
HPC have evolved over multiple generations. Monolithic supercomputers gave way to
modular clusters of servers, open-source Linux displaced proprietary versions of Unix, and
cloud computing emerged to complement on-premises data centers. Applications themselves
have also evolved: “Big Data” analytics introduced a new dimension of data-driven
computing. HPC has been changed by all these trends, yet the need for HPC has persisted.