Firefox Performance Testing: A Python framework for Windows
Project Name
Firefox Performance Testing: A Python framework for Windows
Project Description
The goal of this project is to:
- get the current framework up and running to help work with others
- get the framework running in an automated fashion
- help with the creation and execution of new tests
- work to upgrade the framework to work with a mozilla graph server
- work with the mozilla community and contribute to an open source project
From this project, you will:
- learn python
- learn about white box testing methodologies
- work with an open source community
- more generally learn about the functioning of QA in an open source community
This will benefit you in the future: when presented with a new program, you'll be able to give an idea of how to approach testing, provide adequate coverage, and offer some metric of program stability and functionality.
Note: This is NOT the typical mundane black box testing
Project Leader(s)
Project Contributor(s)
Ben Hearsum (bhearsum)
- Set up the VM for performance testing
- Helped with the debugging process for report.py, run_tests.py and ts.py
Tom Aratyn (mystic)
- Introduced Closures in Python
Project Details
This is different from Tinderbox. Two major differences are:
- First, it doesn't build, it just runs the performance test given a path to the executable. This is helpful if you're testing the performance of an extension or a build from another server. (You could build on a fast server, and then run performance tests on a machine with low memory).
- Second, it measures performance characteristics while it's running the pageload tests: you can track CPU speed, memory usage, or any of the other counters listed here.
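As an illustration of the counter-sampling idea (not the framework's actual code), the sketch below polls a single Windows performance counter through the pywin32 PDH bindings, written in the Python 2 style of the era; the counter path, sample count, and interval are assumptions.

  # Illustrative only: poll a Windows performance counter while a test runs.
  # Requires pywin32; the counter path, sample count and interval are assumptions.
  import time
  import win32pdh

  def sample_counter(counter_path=r"\Memory\Available Bytes",
                     samples=5, interval=1.0):
      query = win32pdh.OpenQuery()
      counter = win32pdh.AddCounter(query, counter_path)
      win32pdh.CollectQueryData(query)   # prime the query (rate counters need two samples)
      values = []
      for _ in range(samples):
          time.sleep(interval)
          win32pdh.CollectQueryData(query)
          _, value = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_DOUBLE)
          values.append(value)
      win32pdh.CloseQuery(query)
      return values

  if __name__ == "__main__":
      print sample_counter()

In practice this kind of sampling would run alongside the pageload cycle; the stub only demonstrates the PDH calls involved.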
Project News
Saturday, September 23, 2006
Performance tests didn't run successfully.
- There weren't any results generated in the extension_perf_testing\base_profile and extension_perf_reports folders.
- Output after the performance tests were run:
  Traceback (most recent call last):
    File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 129, in ?
      test_file(sys.argv[i])
    File "C:\proj\mozilla\testing\performance\win32\run_tests.py", line 122, in test_file
      TP_RESOLUTION)
    File "C:\proj\mozilla\testing\performance\win32\report.py", line 152, in GenerateReport
      mean = mean / len(ts_times[i])
  ZeroDivisionError: integer division or modulo by zero
Sunday, September 24, 2006
Worked on further understanding the approach to testing with the Python framework.
Monday, September 25, 2006
elichak will be working with Alice on a resolution to get results generated in the extension_perf_testing\base_profile and extension_perf_reports folders.
Friday, September 29, 2006
elichak re-configured the environment of the machine to run the tests again. Cleaned up old files to do a clean test. Reinstalled Cygwin (replaced Make 3.80 with Make 3.81) and updated the testing files through CVS.
Sunday, October 1, 2006
Alice has successfully run the tests. The Zero Division Error didn't occur again after she updated her test files, and results were generated in the extension_perf_testing\base_profile and extension_perf_reports folders. elichak attempted to run the test with Alice's code, but the Zero Division Error still occurred on her machine.
Wednesday, October 4, 2006
Elichak consulted Robcee about the Zero Division Error and he suggested a few things, such as debugging the script. Elichak found that the value of ts_time in report.py is empty but couldn't determine why it is never assigned. According to Alice, she didn't debug the scripts and only had to update the files to make them work.
Friday, October 6, 2006
Ben set up the VM for elichak to run her performance testing in that environment.
Wednesday, October 11, 2006
- elichak configured the environment in the VM for her testing. The tests still gave the same results as before:
- Zero Division Error at lines 122 and 129 in run_tests.py and line 153 in report.py
- 2 files in the extension_perf_reports dir are generated but there are no graphs
- elichak also changed the TS_NUM_RUNS, TP_NUM_CYCLES, TP_RESOLUTION values to 1 in run_tests.py to shorten the cycles of the performance testing for the purpose of debugging the scripts.
- The error occurs in report.py because ts_times[i] is empty (no ts results were collected), so the division after this loop fails (a defensive guard is sketched at the end of this list):
  for ts_time in ts_times[i]:
      mean += float(ts_time)
  mean = mean / len(ts_times[i])
- Ben assisted elichak with the debugging process. elichak and Ben hacked deeper down into the scripts.
- We suspect that whatever prevents ts_time values from being collected is in ffprocess.py: RunProcessAndWaitForOutput always returns None at line 232:
  return (None, True)
- Further debugging by elichak is in process
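For reference, a defensive guard like the hypothetical sketch below would report the empty list instead of raising ZeroDivisionError. It is not the fix that was eventually adopted (see October 12); it only illustrates the failing spot.

  # Hypothetical guard: compute per-run means but skip and flag empty result
  # lists instead of letting len(ts_times[i]) == 0 raise ZeroDivisionError.
  def safe_means(ts_times):
      means = []
      for i in range(len(ts_times)):
          if not ts_times[i]:
              print "No ts results collected for run %d" % i
              means.append(None)
              continue
          total = 0.0
          for ts_time in ts_times[i]:
              total += float(ts_time)
          means.append(total / len(ts_times[i]))
      return means

  if __name__ == "__main__":
      print safe_means([["1200", "1180"], []])  # second run has no results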
Thursday, October 12, 2006
Work completed
The Zero Division Error is solved. It turned out to be a configuration problem: the documentation for setting up the environment is rather subtle and needs a rework.
Solution
The contents of C:\proj\mozilla\testing\performance\win32\base_profile must also be copied into the C:\extension_perf_testing\base_profile directory.
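A small helper along these lines could automate that copy. The two paths come from the note above; the function name and the overwrite behaviour are our own assumptions.

  # Hypothetical helper: mirror the base_profile contents into the test directory.
  import os
  import shutil

  SRC = r"C:\proj\mozilla\testing\performance\win32\base_profile"
  DST = r"C:\extension_perf_testing\base_profile"

  def sync_base_profile(src=SRC, dst=DST):
      if not os.path.isdir(dst):
          os.makedirs(dst)
      for name in os.listdir(src):
          src_path = os.path.join(src, name)
          dst_path = os.path.join(dst, name)
          if os.path.isdir(src_path):
              if os.path.isdir(dst_path):
                  shutil.rmtree(dst_path)      # replace stale copies
              shutil.copytree(src_path, dst_path)
          else:
              shutil.copy2(src_path, dst_path)

  if __name__ == "__main__":
      sync_base_profile()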
All work for this project is done on the VM, hera.senecac.on.ca
Work in progress
- Trying out a few things in the framework to decide which direction to take: building new tests, improving existing ones, strengthening the framework itself, or porting it to other OSes
- Revise the Firefox Performance Testing documentation
Friday, 20 Oct 2006
Last week, elichak decided to work on automating the environment setup and the performance testing. The setup is currently scattered across many manual steps and is tedious for a developer to do by hand.
The automation will entail:
Generating directories, dropping files into the right directories, installing libraries, providing options to configure the performance tests, and so on. A rough sketch of such an entry point follows.
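This hypothetical driver is only an illustration: the option names, default directories, and behaviour are assumptions, not the framework's real interface.

  # Illustrative automation entry point: create the working directories and
  # expose a couple of run options.  Option names and paths are assumptions.
  import os
  from optparse import OptionParser

  DEFAULT_DIRS = [r"C:\extension_perf_testing\base_profile",
                  r"C:\extension_perf_reports"]

  def main():
      parser = OptionParser()
      parser.add_option("--firefox", dest="firefox",
                        help="path to the firefox.exe build under test")
      parser.add_option("--cycles", dest="cycles", type="int", default=5,
                        help="number of pageload cycles to run")
      options, args = parser.parse_args()

      for path in DEFAULT_DIRS:
          if not os.path.isdir(path):
              os.makedirs(path)
              print "created", path

      print "would run %d cycle(s) against %s" % (options.cycles, options.firefox)

  if __name__ == "__main__":
      main()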
Tuesday, 31 Oct 2006
Alice and Liz had a meeting and established what needs to be done:
- ease configuration of python framework
- too many config files to edit
- have to know whole framework to configure it
- not flexible
- tedious
- too many directories to create
- too many extra libraries to load
- a lot of dependencies!
- things have to be copied to special directories
- bad configurations don't cause errors!
How do we fix this?
- configuration checker
- yaml file validator
- paths.py validator
- checking the paths for existence
- notify the user if a path doesn't exist and ask whether they want it created (see the sketch after this list)
- checking if the directories have contents
- next steps
- get all the configuration in one place!
- paths.py, config.yml, constants
- have to run both ts and tp at the same time
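A first cut of such a configuration/path checker might look like the sketch below; the directory list, prompts, and function name are assumptions rather than existing framework code.

  # Hypothetical configuration checker: verify that the directories named in
  # paths.py exist, offer to create missing ones, and warn about empty ones.
  import os

  REQUIRED_DIRS = [r"C:\extension_perf_testing\base_profile",
                   r"C:\extension_perf_reports"]

  def check_paths(dirs=REQUIRED_DIRS):
      for path in dirs:
          if not os.path.isdir(path):
              answer = raw_input("%s does not exist. Create it? [y/N] " % path)
              if answer.lower().startswith("y"):
                  os.makedirs(path)
              else:
                  print "Warning: %s is missing; tests may fail." % path
          elif not os.listdir(path):
              print "Warning: %s exists but is empty." % path

  if __name__ == "__main__":
      check_paths()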
Tuesday, 21 Nov 2006
For the past three weeks, elichak has been working on the code base of the Performance Testing Framework.
Configuration Checker
Elichak: run_tests.py needed a lot of work.
YAML file validator
Problem
The YAML file validator was weak: it only checked for certain items in the file and would crash if those items were missing or had no value.
Solution
Changed the validator to check that each item exists before storing its value. If one of the items doesn't exist, the program terminates and lets the user know that something is wrong with the YAML file.
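A minimal sketch of that stricter check, assuming PyYAML and purely illustrative key names:

  # Sketch of a stricter YAML check: confirm each required key exists and has a
  # value before it is used, and exit with a clear message otherwise.
  import sys
  import yaml

  REQUIRED_KEYS = ["firefox", "init_url"]   # illustrative key names only

  def load_config(path):
      f = open(path)
      config = yaml.load(f)
      f.close()
      for key in REQUIRED_KEYS:
          if key not in config or config[key] in (None, ""):
              print "Error: '%s' is missing or empty in %s" % (key, path)
              sys.exit(1)
      return config

  if __name__ == "__main__":
      print load_config("config.yaml")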
Reference
Project Events
Bon Echo Community Test Day
- Friday, October 06, 2006, from 7am - 5pm PDT