A Framework for Multi-Execution Performance Tuning

Karen L. Karavanic
Barton P. Miller

Abstract

This paper describes the design and prototype implementation of a performance tool that answers performance questions spanning multiple program executions from all stages of an application's lifespan. We use the scientific paradigm of experimentation as the basis for designing an Experiment Management environment for parallel performance. In our model, information from all experiments for one application, including the components of the code executed, the execution environment, and the performance data collected, is gathered into a Program Space. Our Experiment Management tool enables exploration of this space through a simple naming mechanism, a selection and query facility, and a set of visualizations. A key component of this work is the ability to automatically describe the differences between two runs of a program: both the structural differences (differences in program source code and in the resources used at runtime) and the performance variation (how the resources were used, and how that usage changed from one run to the next).
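
To make the two kinds of run-to-run comparison concrete, here is a minimal Python sketch, not the paper's implementation: the Experiment record, the resource names, and the metric names are all hypothetical illustrations of the structural differences and performance variation described above.

    from dataclasses import dataclass, field

    @dataclass
    class Experiment:
        # Hypothetical record: the paper's Program Space stores richer
        # structure (code versions, execution environment, performance data).
        resources: set[str]                    # e.g. {"/Code/solver.c", "/Machine/node1"}
        metrics: dict[str, float] = field(default_factory=dict)

    def structural_difference(a: Experiment, b: Experiment) -> dict[str, set[str]]:
        # Resources present in one run but not the other.
        return {"only_in_a": a.resources - b.resources,
                "only_in_b": b.resources - a.resources}

    def performance_variation(a: Experiment, b: Experiment) -> dict[str, float]:
        # Change in each metric measured in both runs (b relative to a).
        shared = a.metrics.keys() & b.metrics.keys()
        return {m: b.metrics[m] - a.metrics[m] for m in shared}

    run1 = Experiment({"/Code/solver.c", "/Machine/node1"}, {"cpu_time": 120.0})
    run2 = Experiment({"/Code/solver.c", "/Machine/node2"}, {"cpu_time": 95.0})
    print(structural_difference(run1, run2))   # node1 vs. node2
    print(performance_variation(run1, run2))   # {'cpu_time': -25.0}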

We present a new approach to automated performance diagnosis that incorporates knowledge from previous runs of the same application. The result is a performance tool that learns from each diagnostic run, adapting its search strategy to reach useful diagnoses more quickly. We show performance gains of up to 98% from incorporating historical knowledge into the Performance Consultant's search strategy. These results demonstrate the utility of our approach for repeated performance diagnosis of similar program runs, a common scenario when tuning parallel applications, and show that gathering and storing historical application data can be successfully applied to automating performance diagnosis.
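
The search-strategy adaptation can be illustrated in the same hedged spirit. The sketch below reorders a flat list of diagnostic hypotheses so that those confirmed as bottlenecks in prior runs are tested first; the function name, the hypothesis labels, and the flat-list interface are illustrative assumptions, since the actual Performance Consultant searches a hierarchy of hypotheses rather than a simple list.

    import heapq

    def prioritized_hypotheses(hypotheses, prior_findings):
        # Test hypotheses confirmed most often in past runs first; break
        # ties by original order. The flat list is a simplification of the
        # Performance Consultant's hierarchical search.
        heap = [(-prior_findings.get(h, 0), i, h) for i, h in enumerate(hypotheses)]
        heapq.heapify(heap)
        while heap:
            _, _, h = heapq.heappop(heap)
            yield h

    hyps = ["ExcessiveSyncWaiting", "ExcessiveIOBlocking", "CPUBound"]
    history = {"ExcessiveIOBlocking": 3, "CPUBound": 1}   # confirmations in prior runs
    print(list(prioritized_hypotheses(hyps, history)))
    # ['ExcessiveIOBlocking', 'CPUBound', 'ExcessiveSyncWaiting']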
