
Performance Tools 1.2 Overview

These tools are meant to simplify the task of performance data collection, measurement, post-processing and visualization. We provide tools for batch file creation, automated post-processing of trace files, comparison of solver outcomes and resource times, termination of solvers at a resource time limit, and computation of performance profiles. All output files are in HTML webpage format for portability. Users also have the option of creating text files of the results.

An overview of each of the tools is given below.

Try our PAVER Server for automated performance analysis and batch file creation.

Note: Performance Tools 1.x has been superseded by PAVER 2.




Download Performance Tools 1.2:

The download includes the Performance Tools utilities described below, as well as sample data files, in which the solvers have been renamed to generic A, B, and C to hide proprietary data, and several support utilities (not meant as stand-alone utilities).




Batch File Creation Using crbatch.gms

The Performance Tools batch file creation utility is useful for creating batch files of GAMS runs over collections of test models. Users can specify models from LINLib by origin, size, type (LP or MIP), or any combination of these. If the models to be used are not from LINLib, batch files can be created by supplying a text file containing the model names. Finally, users can specify solvers and GAMS command line options as in a regular GAMS run.

The routine crbatch.gms is useful in particular for creating trace files in order to obtain performance data over large collections of models.

Details on using crbatch.gms are found in the section Creating automated batch runs.
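
As an illustration of the kind of batch file such a utility produces, the Python sketch below writes one GAMS call per model/solver pair and requests a trace file via the GAMS trace and traceopt command line parameters. The model names, solver names, and output file name are placeholders, not the actual crbatch.gms interface.

    # Hedged sketch: emit a batch file of GAMS runs that write trace files.
    # Model and solver names are placeholders; crbatch.gms itself derives them
    # from LINLib selections or from a user-supplied text file of model names.
    models  = ["trnsport", "blend", "prodmix"]   # assumed model names
    solvers = ["A", "B"]                         # generic solver names, as in the sample data

    with open("runall.bat", "w") as bat:
        for solver in solvers:
            for model in models:
                # trace= and traceopt= are the GAMS command line parameters
                # used to collect performance (trace) data for each run.
                bat.write(f"gams {model}.gms lp={solver} reslim=3600 "
                          f"trace={solver}.trc traceopt=1\n")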




Running All Performance Tools Automatically Using pprocess.gms

The routine pprocess.gms runs all Performance Tools automatically on a set of trace files. It simplifies the task of post-processing by running every Performance Tools utility for every combination of two trace files.

For example, suppose you have results using CONOPT1, CONOPT2, and CONOPT3 on an instance of the COPS models. It would be quite cumbersome to run all Performance Tools on all combinations of solvers, i.e. (CONOPT1, CONOPT2), (CONOPT1, CONOPT3), and (CONOPT2, CONOPT3), for every tool. This utility automates the task and summarizes the results in a single file.

Output is in the form of an HTML file, where each square entry is a link to the associated models.

Input consists of two or more (up to 8) trace files, where each trace file contains model run results for a single solver.

A sample pprocess.gms output is available. Details on using pprocess.gms are found in the section Running all Performance Tools automatically.
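
The pairwise bookkeeping that pprocess.gms automates can be pictured with a short Python sketch: given up to eight trace files, one per solver, it enumerates every unordered pair of solvers, each of which is one comparison that the individual tools would otherwise have to be run on by hand. The file names are illustrative only.

    from itertools import combinations

    # One trace file per solver (up to 8); names are illustrative placeholders.
    trace_files = ["conopt1.trc", "conopt2.trc", "conopt3.trc"]

    # Every unordered pair of trace files is one solver-vs-solver comparison
    # on which the individual Performance Tools (square, restime, ...) are run.
    for trc_a, trc_b in combinations(trace_files, 2):
        print(f"compare {trc_a} vs {trc_b}")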




Comparing Solver Outcomes Using square.gms

The routine square.gms compares the outcomes of two solvers on a given model collection. Possible outcomes are optimal, infeasible, unbounded, interrupt (time or iteration) or fail. The results are displayed in a square, where each square element shows the number of models with that pair of solver outcomes. Output is in the form of an HTML file, where each square entry is a link to the associated models.

Input consists of two trace files, where each trace file contains model run results for a single solver.

A sample square.gms output is available. Details on using square.gms are found in the section Comparing solver outcomes of two solvers.
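
The idea behind the outcome square can be sketched in a few lines of Python: read the outcome each of the two solvers reports for every model and count how often each pair of outcomes occurs. The two-column file layout assumed here is a simplification of the real GAMS trace format.

    import csv
    from collections import Counter

    def read_outcomes(trace_file):
        """Map model name -> outcome (optimal, infeasible, unbounded, interrupt, fail).
        Assumes a simplified two-column CSV; real trace files carry many more fields."""
        with open(trace_file, newline="") as f:
            return {row[0]: row[1] for row in csv.reader(f)}

    def outcome_square(trace_a, trace_b):
        a, b = read_outcomes(trace_a), read_outcomes(trace_b)
        square = Counter()
        for model in set(a) & set(b):            # models present in both trace files
            square[(a[model], b[model])] += 1
        return square    # square[(outcome_a, outcome_b)] = number of models with that pair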




Comparing Solver Resource Times Using restime.gms

Users can also do cross comparisons of the resource times used by solvers. The utility creates a table showing, for two solvers, the number of models solved in the same amount of time, faster by one of them, or much faster by one of them. In addition, if one solver was able to obtain a solution and the other was not, the first solver is considered infinitely faster.

The thresholds for faster and much faster are 10% and 50%, respectively. That is, if the resource times of two solvers are within 10% of each other, they are considered the same; if one solver performs between 10% and 50% faster than the other, it is considered faster; and if it performs more than 50% faster, it is considered much faster. If different thresholds are desired, they can be specified on input.

The resource time utility creates two output files in HTML format (with an optional text file format). The first compares only those models that were solved optimally by both solvers. The second compares all models listed in the trace files.

Input consists of two trace files, where each trace file contains model run results for a single solver.

A sample restimelp.gms output is available. Details on using restime.gms are found in the section Comparing solver resource times of two solvers.
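
Below is a minimal Python sketch of one reading of the classification rule, in which "p% faster" is measured relative to the slower solver's time; the actual restime.gms may define the ratio differently, and the function is illustrative only.

    def classify(time_a, time_b, same=0.10, much=0.50):
        """Classify solver A's resource time against solver B's.
        A None time means the solver failed to obtain a solution.
        Thresholds default to 10% (same) and 50% (much faster)."""
        if time_a is None and time_b is None:
            return "both failed"
        if time_a is None:
            return "B infinitely faster"
        if time_b is None:
            return "A infinitely faster"
        fast, slow, who = (time_a, time_b, "A") if time_a <= time_b else (time_b, time_a, "B")
        saving = (slow - fast) / slow        # relative saving of the faster solver
        if saving <= same:
            return "same"
        return f"{who} much faster" if saving > much else f"{who} faster"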




Terminating Solvers at Resource Time Limit Using schulz.gms

Sometimes experimental solvers do not terminate within the prespecified time limit, or at all. Running batches of models with such a solver (e.g. for performance testing) requires frequent attention to terminate the hanging process. This small GAMS/AWK program scans the list of processes and checks whether the time (elapsed time, CPU time, ...) exceeds the preset limit. If the limit is exceeded, schulz sends a terminate signal to the process. If the process still does not terminate and remains in the list of processes, schulz sends a more effective signal.

Windows users need two freeware utilities: PsList and PsKill. Both are available for free from Sysinternals for non-commercial use. Download them and place them in your GAMS system directory.

UNIX users do not need to download anything, as equivalent utilities already exist on UNIX systems.

Download:

Details on using schulz.gms are found in the section Termination routine for ensuring solvers terminate at resource time limit.
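
The watchdog idea can be pictured with a hedged Python sketch using the third-party psutil package as a cross-platform stand-in for PsList/PsKill or the UNIX process tools; schulz.gms itself is a GAMS/AWK program, and the process name and time limit below are placeholders.

    import time
    import psutil    # third-party; stand-in for PsList/PsKill or ps/kill

    TIME_LIMIT = 3600          # seconds; placeholder resource limit
    TARGET     = "gamscmex"    # placeholder name of the solver/GAMS process

    def watchdog():
        while True:
            for proc in psutil.process_iter(["name", "create_time"]):
                if proc.info["name"] != TARGET:
                    continue
                elapsed = time.time() - proc.info["create_time"]
                if elapsed > TIME_LIMIT:
                    proc.terminate()             # polite terminate signal first
                    try:
                        proc.wait(timeout=30)
                    except psutil.TimeoutExpired:
                        proc.kill()              # the "more effective" signal
            time.sleep(60)                       # rescan the process list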




Performance Profiles for Solver Comparisons Using pprofile.gms

Performance profiles provide an effective means to compare the performance of several solvers at once, eliminating some of the bias inherent in simpler comparisons, in which a small number of problems may unduly influence the results.

Initial performance comparisons of this kind were developed in [1] by Billups, Dirkse, and Ferris. In [2], Dolan and Moré expanded this approach to compare solver performance using performance profiles. In their approach, the performance profile of a solver is a (cumulative) distribution function for a performance metric; in particular, they use the ratio of a given solver's resource time to the best time over all solvers.

The profile computation routine is pprofile.gms. The user inputs different trace files, where each trace file contains information for one solver only; a maximum of 8 trace files can be used. The GAMS routine creates a text file containing a table of the performance profiles, which can be plotted with a variety of software packages, such as Gnuplot, Excel or others.

Under Windows, users can make use of the plotting routine Gnuplotxy created by Bruce McCarl and Uwe Schneider. We also provide a performance profile plotting routine called plotprof.gms, which automates profile computation and plotting in one step. Examples are given below for users to download.

Download:

References:

[1] S. C. Billups, S. P. Dirkse, and M. C. Ferris. A comparison of large scale mixed complementarity problem solvers. Computational Optimization and Applications, 7 (1997).

[2] E. D. Dolan and J. J. Moré. Benchmarking optimization software with performance profiles. Mathematical Programming, 91 (2002), pp. 201-213.

Details on using pprofile.gms and performance profile interpretation are found in the section Performance Profiles for solvers using pprofile.gms.
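
To make the ratio-to-best construction concrete, the following Python sketch computes performance profiles from a small in-memory table of resource times (one list per solver, aligned by model index). The actual pprofile.gms reads GAMS trace files rather than such a table, and the data shown is purely illustrative.

    def performance_profiles(times, tau_grid):
        """times: dict solver -> list of resource times aligned by model index
        (None = solver failed on that model). For each solver, return the fraction
        of models whose ratio r = time / best time is at most tau, for each tau
        in tau_grid, i.e. the cumulative distribution of the performance metric."""
        solvers = list(times)
        n_models = len(times[solvers[0]])
        ratios = {s: [] for s in solvers}
        for p in range(n_models):
            valid = [times[s][p] for s in solvers if times[s][p] is not None]
            best = min(valid) if valid else None
            for s in solvers:
                t = times[s][p]
                ratios[s].append(t / best if (t is not None and best) else float("inf"))
        return {s: [sum(r <= tau for r in ratios[s]) / n_models for tau in tau_grid]
                for s in solvers}

    # Illustrative data: three solvers (A, B, C), four models, times in seconds.
    profiles = performance_profiles(
        {"A": [1.0, 5.0, 2.0, None], "B": [2.0, 4.0, 2.0, 8.0], "C": [1.5, 6.0, None, 7.0]},
        tau_grid=[1, 2, 4, 8],
    )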
