SPO600 Baseline Builds and Benchmarking Lab
Revision as of 13:40, 11 January 2016 by Chris Tyler (talk | contribs)
Lab 2
Prerequisites
You must have working accounts on the SPO600 Servers or your own Fedora system.
As a Group
- Set up your pod (see note above).
- Select one of these software packages:
- Apache httpd
- Nginx http server
- MySQL server
- Python
- Perl
- PHP
- Obtain the software (via git or another version control system where applicable, or by downloading the appropriate archive/tarball).
- Do a baseline build. You may need to install build dependencies.
- Decide what you're going to benchmark and how you're going to benchmark it. Some programs come with test suites, test harnesses, or exercisers (dummy clients) that are appropriate for benchmarking; in other cases you may need to create your own test harness or exerciser program/script. In some cases you will want to measure execution time; in others, some measure of performance such as throughput. Make sure you control the appropriate factors, test for repeatability, and document the benchmark conditions so that the results can be reliably reproduced in the future. Most of these programs are complex, and different aspects or features could be benchmarked (e.g., static content via http, static content via https, or CGI content under Apache httpd) - select one clear area for examination.
- Execute your benchmarking plan and record the results. Along with your results, record everything that may affect them: the version of the operating system, libraries, and toolchain (compiler, linker, etc.); the system specifications and configuration; the version of the software you built and benchmarked; and so forth. These commands may be useful:
lshw
free
cat /proc/cpuinfo
cat /etc/*release*
rpm -qa
hdinfo
cat /proc/mdstat
pvs
vgs
lvs
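The repeat-run timing described above can be sketched as a small script. This is only a sketch under assumptions: the workload here is a placeholder `sleep`, which you would replace with the real exerciser or client for the package you selected (e.g., a scripted series of http requests), and the output file name is made up.

```shell
#!/usr/bin/env bash
# Sketch of a simple repeat-run timing harness.
# WORKLOAD is a placeholder (an assumption) -- substitute the real
# exerciser for your chosen package.
WORKLOAD="sleep 0.1"
RUNS=5
OUT=benchmark-results.txt

# Record the benchmark conditions alongside the results so the run
# can be reproduced later.
{
  date
  uname -a
  cat /etc/*release* 2>/dev/null
} > "$OUT"

# Time each run with sub-second resolution (GNU date's %N).
for i in $(seq "$RUNS"); do
  start=$(date +%s.%N)
  $WORKLOAD
  end=$(date +%s.%N)
  awk -v s="$start" -v e="$end" -v i="$i" \
      'BEGIN { printf "run %d: %.3f s\n", i, e - s }' >> "$OUT"
done

cat "$OUT"
```

Doing several runs in one script makes it easy to spot warm-up effects (the first run is often slower) and gives you multiple samples to check for repeatability.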
Individual Work
- Complete any of the tasks not completed by the group during the class.
- Analyze the results. Look for repeatable, consistent results. Understand the limitations of the benchmark results you obtained.
- Blog your benchmark configuration (system, build options, toolchain versions), your results, your analysis of the results, and your experience doing this lab, including things that you learned and unanswered questions that have come up.
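When checking for repeatable results, a low standard deviation relative to the mean (the coefficient of variation) is one quick test. A sketch, assuming the timings have been collected one value per line in a text file - the file name and sample values below are made up for illustration, not real results:

```shell
# Compute mean, standard deviation, and coefficient of variation for a
# column of timing values (seconds), one per line.
printf '%s\n' 2.31 2.29 2.35 2.30 2.33 > times.txt   # example data only

awk '{ sum += $1; sumsq += $1 * $1; n++ }
     END {
       mean = sum / n
       sd = sqrt(sumsq / n - mean * mean)
       printf "n=%d mean=%.3f sd=%.3f cv=%.1f%%\n", n, mean, sd, (sd / mean) * 100
     }' times.txt
```

A coefficient of variation of a few percent or less generally suggests a repeatable benchmark; large scatter means uncontrolled factors (other load on the machine, caching effects, network variability) need to be tracked down before the numbers can be trusted.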