== SPO600 Algorithm Selection Lab ==

Don't compare absolute results between different machines; however, ''do'' compare the relative performance of the various algorithms on the ''same'' machine.
 
=== Important! ===
 
The hardest and most critical part of this lab is separating the performance of the volume scaling code from the rest of the code, which exists only to set up the test of the scaling code. The volume scaling code runs ''very'' quickly, and its run time is dwarfed by the rest of the program.
 
You '''must''':
* Control variables in your test environment.
** What else is the machine doing while you are testing?
** Who else is logged in to the machine?
** What background operations are being performed?
** How does your login on the machine affect performance (e.g., network activity)?
* Isolate the performance of the volume scaling code. There are two practical approaches:
** Subtract the performance of the dummy version of the program from that of each of the other versions, or
** Add code to the program to measure and report just the performance of the volume-scaling code.
* Repeat the tests multiple times to ensure that the results you are getting are consistent, valid, and accurately reflect the performance of the volume scaling code.
** Make sure you are performing enough calculation to give a useful result -- adjust the SAMPLES value in <code>vol.h</code> to a sufficiently high value
** Discard outliers (unusually high or low results)
** Average the results.
** Take some measure of the amount of variation of your results (e.g., tolerance limits or standard deviation).
=== Benchmarking ===
