CDOT Wiki
SPO600 Baseline Builds and Benchmarking Lab

2,697 bytes added, 02:27, 22 January 2015
[[Category:SPO600 Labs]]{{Chris Tyler Draft}}{{Admon/lab|Purpose of this Lab|In this lab, you will do a baseline build of a software package and benchmark its performance.}}
== Lab 2 ==
=== Prerequisites ===
You must have an account on the system "Australia". Your professor will have created this account based on the [http://zenit.senecac.on.ca/wiki/index.php/SSH#Using_Public_Keys_with_SSH SSH public keys] which you should have previously sent. You will also need working accounts on the [[SPO600 Servers]] or on [[SPO600 Host Setup|your own Fedora system]].
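If you have not yet set up key-based SSH access, the general shape of the client-side configuration is sketched below. This is illustrative only: the hostname, user name, and key path are placeholders, not the real course values; follow the linked SSH page for the actual procedure.

```
# ~/.ssh/config fragment (sketch - replace every placeholder value)
Host australia
    HostName australia.example.invalid   # placeholder - use the real server hostname
    User yourSenecaId                    # placeholder - your assigned account name
    IdentityFile ~/.ssh/id_rsa           # the key whose public half you sent in
```

With an entry like this in place, <code>ssh australia</code> will connect using your key instead of a password.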
== As a Group ==

''UNDER CONSTRUCTION!''

{{Admon/note|Work in a Group of 2-6|This part of the lab should be performed in class in a group of 2-6 students. Use one of the [[ALC]] screens and set up one person as the "driver" (the person using the keyboard and screen), working on your choice of [[SPO600 Servers|Australia or Red]] or [[SPO600 Host Setup|the Fedora system of one of the group members]]. The rest of the group can then discuss the project and give instructions to the driver. You should rearrange the POD tables and chairs into the most convenient arrangement for your group. You may switch drivers (and/or the device being used) if agreed by the group. Take advantage of the different skills present in your group - for example, you may have someone with system administration skills, another with scripting skills, another with strong C programming skills, and yet another with good analysis skills.}}

# Set up your pod (see the note above).
# Select one of these software packages:
#* Apache httpd
#* Nginx http server
#* MySQL server
#* Python
#* Perl
#* PHP
# Obtain the software (via git or another version control system if necessary, or by downloading the appropriate archive/tarball).
# Do a baseline build. You may need to install build dependencies.
# Decide what you're going to benchmark and how you're going to do the benchmarking. Some programs may come with test suites, test harnesses, or exercisers (dummy clients) that are appropriate for benchmarking, while in other cases you may need to create your own test harness or exerciser program/script. Make sure you control the appropriate factors, test for repeatability, and document the benchmark conditions so that the results can be reliably reproduced in the future.
#: Most of these programs are complex, and different aspects or features of the program could be benchmarked (e.g., static content via http, static content via https, or CGI content under Apache httpd) - select one clear area for examination.
# Execute your benchmarking plan and record the results.

{{Admon/tip|Share the Wealth|Make sure that each member of the group has access to the files the group was working on before the end of class (e.g., put them in a folder with world-readable permissions, post them on a public URL, or mail them to each member of the group). It would also be a good idea to share contact information within the group.}}

== Individual Work ==

# Complete any of the tasks not completed by the group during the class.
# Analyze the results. Look for repeatable, consistent results. Understand the limitations of the benchmark results you obtained.
# Blog your results, your analysis of the results, and your experience doing this lab, including things that you learned and unanswered questions that have come up.
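The "execute your plan, record the results, and check repeatability" steps above can be sketched as a small shell harness. This is illustrative only, not part of the lab materials: <code>bench</code> is a hypothetical helper name, <code>true</code> stands in for whatever exerciser you build for your chosen package, and <code>date +%s%N</code> assumes GNU coreutils (as on Fedora).

```shell
#!/bin/sh
# Minimal repeatability harness (a sketch, not an official lab script).
# Runs a command N times, printing the wall-clock time of each run in ms,
# so that run-to-run variation can be inspected and summarized.

bench() {
    runs=$1; shift
    i=1
    while [ "$i" -le "$runs" ]; do
        start=$(date +%s%N)        # nanoseconds since epoch (GNU date)
        "$@" > /dev/null 2>&1      # the workload under test
        end=$(date +%s%N)
        echo $(( (end - start) / 1000000 ))   # elapsed milliseconds
        i=$((i + 1))
    done
}

# Example: 5 runs of a trivial command, summarized with mean and
# standard deviation. Replace 'true' with your package's exerciser.
bench 5 true | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean = sum / n
        sd = sqrt(sumsq / n - mean * mean)
        printf "n=%d mean=%.1f ms sd=%.2f ms\n", n, mean, sd
    }'
```

Recording per-run times rather than a single total is what lets you judge repeatability: a large standard deviation relative to the mean suggests an uncontrolled factor (caching, other load on the machine, network variation) that should be documented or eliminated before the numbers are trusted.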