SPO600 Baseline Builds and Benchmarking Lab
Revision as of 09:36, 22 January 2015
Lab 2
Prerequisites
You must have working accounts on the SPO600 Servers or your own Fedora system.
As a Group
- Set up your pod (see note above).
- Select one of these software packages:
- Apache httpd
- Nginx http server
- MySQL server
- Python
- Perl
- PHP
- Obtain the software (via git or other version control system if necessary, or by downloading the appropriate archive/tarball).
- Do a baseline build. You may need to install build dependencies.
- Decide what you're going to benchmark and how you're going to do the benchmarking. Some programs may come with test suites, test harnesses, or exercisers (dummy clients) that are appropriate for benchmarking, while in other cases you may need to create your own test harness or exerciser program/script. Make sure you control the appropriate factors, test for repeatability, and document the benchmark conditions so that the results can be reliably reproduced in the future. Most of these programs are complex, and different aspects or features of the program could be benchmarked (e.g., static content via http, static content via https, or CGI content under Apache httpd) - select one clear area for examination.
- Execute your benchmarking plan and record the results.
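The execution step above benefits from a scripted driver so every run is performed identically. The sketch below is one minimal approach, not a required harness: it runs a workload command a fixed number of times and records wall-clock times to a file so the runs can later be checked for repeatability. `BENCH_CMD`, `RUNS`, and `OUT` are hypothetical placeholders; substitute your own exerciser (for example, an `ab` or `curl` invocation against your test server). Note that `date +%s%N` assumes GNU coreutils, as found on Fedora.

```shell
#!/bin/sh
# Hypothetical benchmark driver (sketch): run a workload repeatedly and
# record per-run wall-clock times so results can be compared across runs.
BENCH_CMD="${BENCH_CMD:-true}"     # placeholder workload -- replace with your exerciser
RUNS="${RUNS:-5}"                  # number of repetitions
OUT="${OUT:-results.txt}"          # where timings are recorded

: > "$OUT"                         # truncate any previous results
i=1
while [ "$i" -le "$RUNS" ]; do
    start=$(date +%s%N)            # nanoseconds since epoch (GNU date)
    $BENCH_CMD
    end=$(date +%s%N)
    # record elapsed time in milliseconds
    echo "run $i: $(( (end - start) / 1000000 )) ms" >> "$OUT"
    i=$((i + 1))
done
```

Running the same script, unchanged, for each build configuration helps keep the benchmark conditions controlled and documented.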
Individual Work
- Complete any of the tasks not completed by the group during the class.
- Analyze the results. Look for repeatable, consistent results. Understand the limitations of the benchmark results you obtained.
- Blog your benchmark configuration (system, build options, toolchain versions), your results, your analysis of the results, and your experience doing this lab, including things that you learned and unanswered questions that have come up.
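When analyzing repeated runs for consistency, simple summary statistics are often enough: the mean gives the central result, and the coefficient of variation (standard deviation as a percentage of the mean) gives a quick repeatability check. This is a sketch using only the Python standard library; the sample timings are made-up values for illustration, not real benchmark data.

```python
import statistics

def summarize(times_ms):
    """Return mean, sample std dev, and coefficient of variation (%)
    for a list of repeated benchmark timings."""
    mean = statistics.mean(times_ms)
    stdev = statistics.stdev(times_ms) if len(times_ms) > 1 else 0.0
    cv = (stdev / mean) * 100 if mean else 0.0
    return mean, stdev, cv

# Example with made-up timings from five hypothetical runs (milliseconds)
runs = [812.0, 798.5, 820.1, 805.3, 809.9]
mean, stdev, cv = summarize(runs)
print(f"mean={mean:.1f} ms  stdev={stdev:.1f}  cv={cv:.2f}%")
```

A coefficient of variation of a few percent or less suggests the runs are repeatable; a large or erratic CV suggests an uncontrolled factor (caching, other load on the machine, network variability) that should be identified before drawing conclusions.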