Delta debugging framework

10,100 bytes added, 14:16, 17 April 2013
m
Reverted edits by Saoagent (Talk) to last revision by Reed
([[#top|↑ top]])
[http://en.wikipedia.org/wiki/Delta_Debugging Delta debugging] is an automated approach to debugging that isolates failures systematically. Given a failing test that can be mechanically verified (including a browser crash), delta debugging is a way of automatically isolating the change that introduced the failure. For developers, a scenario like this happens all too often: a developer codes a piece of functionality that works. Then, over a period of time, multiple changes are made to the program source files, and that piece of functionality stops working. The cause of the regression could be any of the changes made since the functionality was last known to work. To isolate the cause of the regression, the developer begins the debugging process. Debugging is generally a manual process in which the developer must walk through the code while trying to keep track of variables and function calls. There are debuggers that can help keep track of variables and the call stack, watch certain blocks of code, and execute the code step by step; however, debugging is still mainly a manual process.

Written in Perl, given
# that the source code is located in an SVN repository (support for CVS in the future),
# a test case that can automatically verify whether or not a piece of functionality of a program works, and
# a way to automatically build the program from the source code (if needed),
the delta debugging framework aims to automatically isolate the failure-inducing changes to the source code that caused a regression.
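As an illustration of what "a test case that can automatically verify" means here, consider a minimal mechanical check, similar in spirit to the project's later ''binaryTest.pl'' file-existence test. This is an illustrative Python sketch, not part of the Perl framework; the log file name and greeting string are invented for the example:

```python
import os

def run_test(log_path):
    """A mechanically verifiable test: return "pass" if the program's log
    file exists and contains the expected greeting, "fail" otherwise.

    A real test could just as well check an exit code, a build result,
    or detect a crash -- anything that needs no human judgment.
    """
    if not os.path.exists(log_path):
        return "fail"
    with open(log_path) as f:
        return "pass" if "Hello, world!" in f.read() else "fail"
```

Because the verdict is computed mechanically, the framework can run such a test hundreds of times while it narrows down the failure-inducing change.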
== Project License ==

Written in Perl, given a test case that can automatically verify whether or not a piece of functionality of a program works, the delta debugging framework aims to automatically isolate the failure-inducing changes to the source code that caused a regression.

Copyright (C) 2006 Richard Chu, Aditya Nanda Kuswanto, Dean William Woodside

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

[http://www.opensource.org/licenses/gpl-license.php The GNU General Public License (GPL) Version 2, June 1991]

Contact information about the authors of the delta debugging framework can be found on our individual profile pages.
== Project Contributor(s) ==
''Name(s) of people casually working on the project, or who have contributed significant help. Include links to personal pages within wiki. <br />NOTE: only Project Leader(s) should add names here. You '''can’t''' add your own name to the Contributor list.''
 
[[User:Reed|Reed Loden]] - Set up the CVS repository for us with the web front-end. Provided direction in querying Bonsai (a means to extract the output in XML).<br>
[[User:Elichak|Liz Chak]] - Documentation of the subroutines in the svn.pl and makewrapper.pl source files.
 
== Project Source Repository ==
 
([[#top|&uarr; top]])
 
Assuming you have [http://subversion.tigris.org/ SVN], the project's source can be obtained via SVN using the following command:
 
svn checkout svn://cdot.senecac.on.ca/deltadbg
 
The source can also be obtained at the following links:
* [http://matrix.senecac.on.ca/~rchu2/ddf/ddf.zip Delta Debugging Framework.zip]
* [http://matrix.senecac.on.ca/~rchu2/ddf/ddf.tar.bz2 Delta Debugging Framework.tar.bz2]
 
The test cases can be obtained via SVN using the following command:
 
svn checkout svn://cdot.senecac.on.ca/deltatest
 
The test cases can also be obtained at the following links:
* [http://matrix.senecac.on.ca/~rchu2/ddf/deltatest.zip DeltaTest.zip]
* [http://matrix.senecac.on.ca/~rchu2/ddf/deltatest.tar.bz2 DeltaTest.tar.bz2]
== Project Details ==
Now that we are aware of the different concepts that must be taken into account with regard to delta debugging, the next section outlines some facts and assumptions being made, and attempts to define the vision and process of the delta debugging framework.
== Project Facts and Assumptions ==
([[#top|&uarr; top]])
 
'''Project Facts:'''
# The source tree for the Mozilla project is HUGE, with many different source file types (C++, JS, XUL, etc.) in many different directories.
# The developer has a test case that can be used to indicate whether the test passes, fails, or is indeterminate.
# The developer will NOT know the date/version of the last known good version.
# Bonsai is a tool that can produce a list of differences between versions of a source file. (Bonsai's functionality has not been examined closely yet, but it will have to be, as it may be a key component of the framework.)
 
 
'''Possible Vision of the Delta Debugging Framework''':
 
(subject to change based on stakeholder consultation/feedback, feasibility study)
 
# Since the last time a developer executed a test case that passed, the developer has modified some source files. The source files may be of the same or mixed type, in the same directory or different directories. It shouldn't matter: the framework should be source-type and location agnostic. Upon executing the test case again, the result is now a failure. The developer panics. It's only days before the deadline to submit bug patches before the source tree is closed for release, and the bug is a blocker. The developer doesn't want to be shamed for delaying the release, and the source code is too complex to find the bug in time, so what should they do? Use the delta debugging framework, that's what. How, you may ask? Well, keep reading to find out. <small>* scenario may vary.</small>
# The delta debugging framework may require the developer to input one piece of information: the test case/function that used to pass but now fails. It will be used to determine whether the source files with progressive changes pass or fail the test.
# Once the developer has input this piece of information, the framework will use Bonsai to query the source tree and compile a list of all the changes to the source files since a certain point in time.
# (If there were a method of determining change dependencies so as to eliminate the possibility of inconsistencies, it would be applied in this step. One possible way of reducing the possibility of inconsistencies is to logically group changes by location or check-in time.)
# This step is where the delta debugging algorithm comes into play. The algorithm should basically:
## Recursively and incrementally remove changes from the source code containing the regression.
## Recompile the source tree.
## Execute the test case. There may be 3 outcomes:
### The test case passes. We know that the failure-inducing change(s) are in the change(s) that were removed.
### The test case fails. We know that the failure-inducing change(s) are not exclusively in the change(s) that were removed. I say "not exclusively" because of the concept of Interference (described above).
### The test case is indeterminate. There were some inconsistencies.
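The remove-rebuild-retest loop above is essentially the delta debugging minimization algorithm (''ddmin'') from the Zeller and Hildebrandt paper referenced elsewhere on this page. A rough Python sketch of the idea (illustrative only; the framework itself is written in Perl, `run_test` stands in for the real rebuild-and-test step, and the "indeterminate" outcome is simply treated as a non-failure here):

```python
def split(items, n):
    """Split `items` into n contiguous chunks of near-equal size."""
    k, m = divmod(len(items), n)
    return [items[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def ddmin(changes, run_test):
    """Minimize a failure-inducing set of changes (ddmin sketch).

    `run_test(subset)` rebuilds the program with only `subset` applied
    and returns "pass", "fail", or "indeterminate".
    Assumes run_test(changes) == "fail" for the full set.
    """
    n = 2  # current granularity: number of chunks
    while len(changes) >= 2:
        subsets = split(changes, n)
        reduced = False
        # 1. Does some chunk alone still make the test fail?
        for subset in subsets:
            if subset and run_test(subset) == "fail":
                changes, n, reduced = subset, 2, True
                break
        # 2. Does removing some chunk keep the test failing?
        if not reduced:
            for subset in subsets:
                complement = [c for c in changes if c not in subset]
                if complement and run_test(complement) == "fail":
                    changes, n, reduced = complement, max(n - 1, 2), True
                    break
        # 3. Otherwise refine the granularity, or stop at single changes.
        if not reduced:
            if n >= len(changes):
                break
            n = min(len(changes), n * 2)
    return changes
```

Each iteration either shrinks the failing set or doubles the granularity, so the number of rebuilds stays far below trying every subset.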
 
== Project Flowchart ==
'''Updated Delta Debugging Flowchart'''

[[Image:Dd_flowchart1.PNG]]

Here are some thoughts regarding the flowchart:
# The whole process revolves around a certain '''Test''', which must be passed to complete the process. It is assumed that the source code passed this test before, but no longer does due to recent changes to the tree. The framework will locate these changes.
# The '''Test''' is a versatile module and can be adapted to accept any tests the user may use.
# When the initial test fails, the framework first attempts to locate which changeset causes this failure. This is done by "going back through time": retrieving the trees from previous revisions and running each tree through the same test. The idea is to locate the latest revision where the test passes successfully.
# Once this revision is identified, the framework will extract the '''diff''', the difference between the two revisions.
# The framework will then use this '''diff''' to break the difference down into possibilities (e.g. directory, file, etc.) and isolate the cause of the failure.
# Once this is done, the framework will deliver the cause of the failure in a report for the user, and the operation is finished.
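The "going back through time" step is, in effect, a search over the revision history. If the history has a single pass-to-fail transition, it can even be a binary search rather than a linear walk back. An illustrative Python sketch (function names are invented; `test_revision` stands in for the framework's checkout-build-test pipeline):

```python
def find_last_good_revision(good_rev, bad_rev, test_revision):
    """Binary search for the last revision that passes the test.

    `test_revision(rev)` returns "pass" or "fail" for a given revision.
    Assumes test_revision(good_rev) == "pass", test_revision(bad_rev) == "fail",
    and a single pass-to-fail transition somewhere between them.
    """
    lo, hi = good_rev, bad_rev
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test_revision(mid) == "pass":
            lo = mid  # still good; the regression landed later
        else:
            hi = mid  # already failing; the regression is here or earlier
    return lo  # last known good revision; hi is the first failing one
```

With a binary search, checking a span of N revisions costs only about log2(N) build-and-test cycles instead of N.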
== Project Test Cases ==
([[#top|&uarr; top]])
The test cases used in this project are located in the [[Delta_debugging_testcases|Delta Debugging Testcases page]].

== Project Roadmap ==
([[#top|&uarr; top]])
[[Delta Debugging Framework Roadmap|Delta Debugging Framework Roadmap]]

The page outlines our current vision of the delta debugging framework and a roadmap of the work that needs to be completed to accomplish our vision. This roadmap may be subject to change and/or expanded as our vision expands through feedback and requests from others, our own ideas, etc.

== Partial Class Diagram ==
([[#top|&uarr; top]])
Most of the classes in blue exist in the source repository. The classes in pale yellow are classes that won't be completed in the first release.

[[Image:Dd_partialclassdiagram2.PNG]]
== Project Task List ==
([[#top|&uarr; top]])
<table style="width: 100%;" class="standard-table" border="1" cellpadding="1" cellspacing="1">
<tr> <th>Task</th> <th>Description</th> <th>Assigned to</th> <th>Status</th> </tr>
<tr> <td colspan="4"><strong>Change set / Change</strong> </td> </tr>
<tr> <td>Retrieval of Change / Change set</td> <td>The Granularity concept. A single revision may consist of hundreds or thousands of lines of code that were changed, yet only a couple of lines of the change may be responsible for the regression. Thus, there must be a method to break the change into smaller, manageable chunks. The different types of chunks we may break a changeset into are: Revision, Directories, Files, Code Blocks, and Lines.</td> <td>[[User:RichardChu|Richard Chu]]</td> <td>Work in progress. Currently can retrieve change sets of type Revision and File. Need to complete retrieval of Directory, Code Block, and Line of Code change sets.</td> </tr>
<tr> <td>Application of Change / Change set</td> <td>OK. Change sets can be retrieved. Now what? You must be able to apply a change or change set or subset of a change set to the source tree. Your mission is to figure out how to do that.</td> <td>[[User:RichardChu|Richard Chu]]</td> <td>Not started.</td> </tr>
<tr> <td colspan="4"><strong>GNU Make</strong> ([http://www.gnu.org/software/make/ http://www.gnu.org/software/make/]) </td> </tr>
<tr> <td>Wrapper around the GNU make utility</td> <td>Mozilla uses the GNU make utility to build their source tree. Your mission is to make a wrapper around the GNU make utility so that the make command can be programmatically called to build the source tree.</td> <td>[[User:RichardChu|Richard Chu]]</td> <td>Work in progress. Initial wrapper created: ''makewrapper.pl''. Requires thorough test case (''maketest.pl'' needs more test cases).</td> </tr>
<tr> <td colspan="4"><strong>Subversion (SVN) Repository</strong> ([http://subversion.tigris.org/ http://subversion.tigris.org/], [http://svnbook.red-bean.com/nightly/en/index.html http://svnbook.red-bean.com/nightly/en/index.html]) </td> </tr>
<tr> <td>Wrapper around the necessary SVN commands</td> <td>For the automated debugging to work, we may need to automatically modify the working copy by reverting to a different revision or updating certain directories and files. It may also need to know the differences between revisions and changesets.</td> <td>[[User:RichardChu|Richard Chu]]</td> <td>Work in progress. Initial wrapper created: ''svn.pl''. Currently has subroutines for ''commit'', ''update'', ''diff'', and ''checkout'' commands. May need to wrap other SVN commands. Requires thorough test case (''svntest.pl'' needs more test cases).</td> </tr>
<tr> <td>Query SVN repository for differences between two revisions</td> <td>Your mission is to find out the relevant commands that can return the differences between two revisions, the meta-data that is kept with each revision, how differences between two revisions are stored and formatted, and how this data can be parsed into a usable form for our project (wrapper?).</td> <td>[[User:RichardChu|Richard Chu]]</td> <td>Work in progress.</td> </tr>
<tr> <td colspan="4"><strong>CVS/Mozilla Bonsai</strong> ([http://www.mozilla.org/bonsai.html http://www.mozilla.org/bonsai.html], [http://cvsbook.red-bean.com/OSDevWithCVS_3E.pdf CVS Book])<br />You can do these tasks by trying to interpret the Bonsai source code yourself, or preferably by finding a person who has intimate knowledge of the Bonsai source code and asking them. </td> </tr>
<tr> <td>Query CVS via Bonsai for checkins</td> <td>You can use Bonsai to search for checkins made within a certain time frame, within a certain directory, made by a certain developer, etc. Your mission is to find the relevant source files, functions, variables, etc. that drive this functionality.</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr> <td>Results of querying CVS via Bonsai for checkins</td> <td>Bonsai obviously returns results from the query. The question is how? Your mission is to find the relevant source files, functions, variables, etc. that are used to return and store results. What you need to find out is what type of data is returned and how the results are formatted.</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr bgcolor="#F0F0F0"> <td>Query CVS via Bonsai for version history</td> <td>For each file in the CVS repository, Bonsai has the ability to list a history of versions, from the first to the latest. Your mission is to find the relevant source files, functions, variables, etc. that are used to obtain the version history for a certain source file.</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr bgcolor="#F0F0F0"> <td>Results of querying CVS via Bonsai for version history</td> <td>Your mission is to find out the relevant source files, functions, variables, etc. that are used to return and store the results of searching for a file's version history. What data is returned and how is the data stored/formatted?</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr> <td>Query CVS via Bonsai for differences between versions</td> <td>Using Bonsai, a user can see the differences between two different versions of a source file. Your mission is to find out the relevant source files, functions, variables, etc. that are used to find the differences between two different versions of a source file.</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr> <td>Results of querying CVS via Bonsai for differences between versions</td> <td>Using Bonsai, a user can see the differences between two different versions of a source file. How are the results returned? How are they formatted? Your mission is to find out the relevant source files, functions, variables, etc. that are used to return the results of the query.</td> <td>TBD.</td> <td>Not started.</td> </tr>
<tr> <td colspan="4"><strong>Implementation of Delta Debugging & Driver</strong> ([http://www.st.cs.uni-sb.de/papers/tse2002/ Simplifying and Isolating Failure-Inducing Input, Zeller and Hildebrandt, 2002]) </td> </tr>
<tr> <td>General Algorithm</td> <td>The general delta debugging algorithm implementation. For details, see [http://www.st.cs.uni-sb.de/papers/tse2002/ http://www.st.cs.uni-sb.de/papers/tse2002/].</td> <td>[[User:dwwoodsi|Dean Woodside]]</td> <td>Work in progress, check SVN repository from time to time.</td> </tr>
<tr> <td>Minimizing Algorithm</td> <td>The minimizing delta debugging algorithm implementation. For details, see [http://www.st.cs.uni-sb.de/papers/tse2002/ http://www.st.cs.uni-sb.de/papers/tse2002/].</td> <td>[[User:dwwoodsi|Dean Woodside]]</td> <td>Not started.</td> </tr>
<tr> <td>Schema Definition of Driver Data</td> <td>A simple XSD which defines the driving test type (e.g. user interaction, program input), the minimal set of circumstances (scenario) to reproduce the failure, and the expected outcome after automating the circumstances.</td> <td>[[User:dwwoodsi|Dean Woodside]]</td> <td>Work in progress, check SVN repository from time to time.</td> </tr>
<tr> <td>Choosing a Record/Replay Facility</td> <td>In the case of scenarios that require user interaction (namely, mouse actions), the framework will require a record/replay facility that will record user interaction the first time through and then replay it later during the automation.</td> <td>[[User:dwwoodsi|Dean Woodside]]</td> <td>Found some Windows tools in James Whittaker's famed [http://www.howtobreaksoftware.com/ How to Break Software]. Checking these out for their appropriateness. Tending to think we might have to roll our own (can't script existing ones well).</td> </tr>
</table>

== Project News ==
([[#top|&uarr; top]])
''This is where your regular updates will go. In these you should discuss the status or your work, your interactions with other members of the community (e.g., Seneca and Mozilla), problems you have encountered, etc. Put detailed technical information into the Project Details page (i.e., update it as you go), and save this section for news about participation in the project.''

=== Dec. 22, 2006 ===
I haven't posted an update in a while. So what's been done? I finally had some time to do a second round of testing & debugging of the delta debugging framework. And guess what? It ''seems'' to work now. The problem? A combination of logical errors when applying and unapplying changes in the framework and a bad test case. Go figure. However, before I get ahead of myself and officially tag and release the delta debugging framework as version 0.1, I would like to test it out on another test program. Hopefully, this can be done this weekend. And if all goes well, version 0.1 will be officially released before the end of the year.

=== Dec. 13, 2006 ===
Created the [[Delta_debugging_testcases|Delta Debugging Testcases]] page to discuss the nature of the test cases created to test the algorithm. Included in the page are the 2 testcases created so far, the '''HelloWorld''' binary test and the '''Sudoku''' test. Both tests can be found in the '''deltatest''' svn repository. The repository can be checked out using this command:
<pre>svn checkout svn://cdot.senecac.on.ca/deltatest</pre>
Exactly 12 days before Christmas, the delta debugging framework has been released under the [http://www.opensource.org/licenses/gpl-license.php GPL Version 2] License.

Unfortunately, we haven't had the time to test the delta debugger much since Dec. 09, 2006 because of exams and other school work. Planning to spend some time this weekend to test the delta debugger and figure out why it currently seems to not be able to find the minimal set of failure-inducing directories/files (whether it's because of an unreliable test case or a logical error in the program).

A roadmap of our vision of the direction the project will be heading in the future will be created and posted soon.

=== Dec. 11, 2006 ===
Uploaded a testcase for the '''HelloWorld''' binary at the '''deltatest''' svn. The test simulates the error that may occur when compilation fails due to a syntax error. The exalted HelloWorld program is located in the HelloWorld directory, while the test definition is in HelloTestCase1.pm. The algorithm detects the failed test and reverts the affected file to the version where the test passes.

Note for the future: improve user feedback functions!

=== Dec. 10, 2006 ===
Where is the CVS/Bonsai work heading? Here is a breakdown of the past 3-4 weeks:
* Initially was going for a straight wrapper around CVS ala the style Richard used for SVN.
* Tried to find some functionality within Bonsai that could make it easier.
* Talked to Reed Loden; he set up a repository for us to try with. Thanks Reed!
* Thought that there may be some additional (read: unpublished) tools that could be worked with. Got in contact with some of the "Project Participants" (listed on [http://www.mozilla.org/projects/bonsai/ http://www.mozilla.org/projects/bonsai/]). Was told the person in particular wasn't a contributor (just submitted a bug report). They in turn pointed me to [irc://irc.mozilla.org/#mozwebtools #mozwebtools].
* Lurked on [irc://irc.mozilla.org/#mozwebtools #mozwebtools] for a few weeks. Talked to 'justdave' about Bonsai. Reed Loden chimed up and informed me that Bonsai can output to XML using ?xml=1 on the query (score! thanks again).
* Researched some Perl parsing utilities. Trying out XML::LibXML for DOM-style parsing.
* Hopefully wrap something up by Wednesday. Failing that, might just go with a simple CVS wrapper of some sort.

=== Dec. 09, 2006 ===
What has been done since last week?
* Got a test program and uploaded it to svn://cdot.senecac.on.ca/deltatest. The pristine working version is revision 4. The latest committed copy is revision 8. The regressive code was committed somewhere in between.
* Started testing the delta debugging framework.

The results of the testing?

'''Finding the minimal set / last known good revision'''

Works. The delta debugger correctly reverts to a previous revision, builds the source code, and runs the test case. The test case returns the proper results on whether or not it passes or fails. The delta debugger correctly stops at revision 4 - the last known good version.

'''Finding the minimal failure-inducing set of directories'''

Indeterminate. There is only 1 directory in the repository, so that directory should be returned as the minimal failure-inducing set of directories. Does it return it? Yes and no.

The delta debugger correctly applies all of the changes within that directory. And I think it correctly builds the source tree and runs the test case. However, the return code of the test case is not as expected. I expect the test case to report that the test fails; however, it reports that it passes. Thus, the delta debugger returns no directories as failure inducing.

However, if I force the test case to return the expected result, then the delta debugger correctly returns the directory as the failure-inducing one.

I suspect (or at least hope) that the indeterminate result of finding the failure-inducing set of directories is because of a possibly unreliable or inconsistent test case. However, I can not be sure until I rule out the test case as the problem.

'''Finding the minimal failure-inducing set of files'''

Indeterminate. There are multiple source files in the repository. Does it return the correct failure-inducing source file? I don't know. I have the same suspicions for this as for the directory changeset.

Based on the testing, it seems to be able to cycle through every combination of changes in the changeset, apply the combination of changes, build the source code, and run the test case. The test case just seems to not report the correct test results.
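The "cycle through every combination" behaviour described here can be sketched with a combinations helper, much as the framework uses Math::Combinatorics. A hypothetical Python equivalent (itertools plays the role of Math::Combinatorics, and `run_test` stands in for the apply-build-test cycle):

```python
from itertools import combinations

def minimal_failing_subset(changes, run_test):
    """Exhaustively search for the smallest failure-inducing subset.

    Tries subsets in order of increasing size, so the first failing
    subset found is minimal by construction. This is exponential in
    the worst case, which is why a ddmin-style splitting strategy is
    preferable for large changesets.
    """
    for size in range(1, len(changes) + 1):
        for subset in combinations(changes, size):
            if run_test(list(subset)) == "fail":
                return list(subset)
    return []  # the test never failed; no failure-inducing subset found
```

Note that an unreliable test case, as suspected above, defeats this search outright: if `run_test` never reports "fail", the function returns an empty set, which matches the "returns no directories as failure inducing" symptom.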
=== Dec. 03, 2006 ===
Committed some updates to the SVN repository.
* The test framework. There are a couple of files to the framework: Test.pl, TestCase.pl, TestSuite.pl, TestResult.pl, TestRunner.pl. It is loosely based off of the design of the JUnit framework. Why such an elaborate design just for letting users define the test case that determines whether or not a piece of functionality works? For a few reasons that I may be adamant about:
# To use the delta debugging framework, the user should not have to touch the DeltaDebugger.pl file to define the tests and how to run them. Using the testing framework, this can be done by subclassing the TestCase.pl class and overriding the run() subroutine.
# For the delta debugger to work, it needs to know whether the test case passes or fails. Using the test framework, I hope to control the possible return codes of the tests to either pass or fail only.
* testtest.pl that tests the functionality of the test framework.
* Updates to DeltaDebugger.pl to make use of the test framework.

Crunch time. One week left. The high priority tasks that still need to be done:
# Acquisition of a program we could use to test the delta debugging framework. See [[#How_to_Get_Involved|How To Get Involved]] for more info.
# Test and debug the delta debugging framework.

=== Nov. 26, 2006 ===
Committed some updates to the SVN repository.
* Updated the delta debugging algorithm module. I didn't realize this yesterday, but the algorithm to find the minimal set of failure-inducing files (and continually splitting code block and line of code changes, if those changeset types ever get done) is the same (with minor modifications) as the algorithm that can find the minimal set of failure-inducing directories. Thus I generalized that algorithm to remove the directory changeset specific code so that it will work with all other types of changesets.
* Removed the debugging/test related code from the source files.

CVS Repository Setup (thanks to [[user:reed|Reed Loden!]]): '''hera.senecac.on.ca/deltatest'''
* [http://hera.senecac.on.ca:43080/viewvc.cgi/?root=deltatest ViewVC Web Repository Browser]
* If you want commit access for whatever reason, email one of the project members.

'''Milestone:'''
* Even though the test framework is incomplete, I think we can go ahead and begin the initial testing of the delta debugger on a real regressive program, as I think we are ready. Coincidentally, exactly 2 months after the first project news posting on Sept. 26, 2006.

=== Nov. 25, 2006 ===
I haven't posted an update in a while, so here goes. What's been done since then? Committed some updates to the SVN repository.
* Modified the Changeset hierarchy of classes. Added a getChange() subroutine that takes an index and retrieves the change from the changeset. Also modified the getChangeset() subroutine to optionally take an array of directories/files to limit the search scope to within the directories/files passed in. These changes are possibly dangerously untested.
* Committed the DeltaDebugger.pl file. This file houses the actual delta debugging algorithm. It requires three user-defined pieces of information: a Build object, an RCS object, and the automated test cases. Currently, it can theoretically find the failure-inducing revision, and the minimal failure-inducing set of directories.
* Committed a DeltaDebuggerTest.pl file. It just tests the correctness of the theoretical algorithm.

In the works:
* Continue working on the delta debugging algorithm. Need to be able to find the minimal failure-inducing set of files.
* Test framework. Allow users to plug in test cases/suites without touching the DeltaDebugger.pl module.

The deadline for a version 0.01 release is looming. 1-2 weeks left to get this done. What needs to be done to accomplish this?
* Finish everything that is in the works real soon.
* Need a test program that we could use and upload to our test SVN repository to test the delta debugging framework. Ideally, the test program will meet the following requirements:
# Has source files that span multiple directories wide and deep, yet is small enough that the delta debugging can be done in a short amount of time so that all aspects of the delta debugger can be tested.
# Has a regression, or can easily be modified so that some functionality will stop working.
# Has an automated test case that tests the regressive functionality.
* Put theory into practice. So far the delta debugging algorithm has not been tested on a real program. The correctness of the algorithm has only been confirmed in theory. We need to test the algorithm in a production environment real soon.

=== Nov. 19, 2006 ===
The earlier crash case we had (see the update directly below) was a non-regressive bug--there was no former build that worked with it. Going to use [https://bugzilla.mozilla.org/show_bug.cgi?id=325377 Bug #325377] instead. Having difficulty identifying when it was first introduced--the information in the bug report doesn't seem to be quite accurate.

Using the nightly builds as archived at [http://archive.mozilla.org/pub/mozilla/nightly/ http://archive.mozilla.org/pub/mozilla/nightly/] to narrow it down. Fortunately this crash is easily automated and does not require user interaction.

=== Nov. 18, 2006 ===
* <strike>Found a suitable crash case thanks to the people of [irc://irc.mozilla.org#qa #qa] (in particular, asqueella and Aleksej). For full details on the bug, see [https://bugzilla.mozilla.org/show_bug.cgi?id=354300 Bug #354300] -- '''unresolved'''.</strike>
* Talked to Reed Loden on IRC. He will be setting up a CVS repository for us sometime this coming week (Tuesday at earliest).

=== Nov. 17, 2006 ===
Committed some updates to the SVN repository.
* Changed the applyChanges subroutine to take an array of indices instead of a scalar index.
* Added an unapplyChanges subroutine to the Changeset classes.
* [http://www.cpan.org/modules/by-module/Math/Math-Combinatorics-0.08.readme Math::Combinatorics], shamelessly stolen from [http://www.cpan.org/modules/by-module/Math/Math-Combinatorics-0.08.tar.gz here]. This module is used in the Delta Debugging Algorithm module to help find the minimal failure-inducing changeset.

In the pipeline:
* Delta Debugging Algorithm partially complete. Unthoroughly tested, though it can theoretically find the directories that contain the failure-inducing changes.
* Test cases and samples we may be able to use to test the algorithm.

Uploaded files into the '''scen1''' directory, containing the test module for '''binaryTest'''. The test is ready to be used in the algorithm. The directory contains:
* '''binaryTest.pl''' - test to detect the existence of a file.
* '''helloWorld.pl''' - enough said!
* '''binaryTestCaller.pl''' - runs '''helloWorld.pl''', pipes the result to '''hello.log''', and has '''binaryTest.pl''' attempt to detect it.
This is the working version of the code, labeled '''revision 12'''. Now I have to find a way to wreck it........

=== Nov. 14, 2006 ===
The development of the testing system for the framework is in the works. The first scenario revolves around a test called '''BinaryExist''', which has been shamelessly ripped from the Tinderbox script. All this test does is check whether a given file exists in the system. While this test can aspire to great things, right now it's doing a simple thing, like checking whether its client Hello World program is doing what it's supposed to. Initial testing reveals that this test has potential. Will be uploaded to the SVN soon.

=== Nov. 05, 2006 ===
I didn't know where else to put this, so I'm putting this here. While searching around for the elusive Mozilla tests that are run in Tinderbox, I found [http://wiki.mozilla.org/SoftwareTesting:Scratchpad this gem]. All of the tests are apparently located in the ''mozilla/testing'' directory and can be checked out using this command while at the ''mozilla'' directory:

 cvs update -d testing

The tests we would most likely be interested in are located in the ''tinderbox-standalone-tests'' subdirectory. Based on a quick scan of the perl files there, the ''test-mozilla.pl'' file in that directory seems to drive the tests located in the ''Tests'' subdirectory, which seems to contain a lot of performance tests. The arguments that the test subroutines receive (such as build directory and binary name) seem to come from the ''Settings.pm'' file located in the ''Util'' directory.

== Project References ==
([[#top|&uarr; top]])
[http://programming.newsforge.com/article.pl?sid=05/06/30/1549248&from=rss NewsForge: An Introduction to Delta Debugging]

Delta debugging simplifies the debugging process by automating the process of breaking the program down into smaller chunks called deltas. This technique is useful in three circumstances:
* An error occurs due to user inputs (e.g. keypress, file I/O). Delta debugging is used to eliminate user actions irrelevant to the nature of the error and pinpoint the cause of the error.
* An error occurs due to recent changes to the code. In this situation, deltas are retrieved from the net differences between both versions of the code.
* A multithreading environment. Delta debugging can track down the exact order of operations originating from multiple threads that caused the error.
We need to focus on one of these circumstances. Judging from the project description, we should work on the second case, while perhaps opening the door to future expansion.

== Points of Confusion ==
([[#top|&uarr; top]])
When I get confused, I draw diagrams.

[[Image:Dd_partialclassdiagram.PNG]]

'''The Clear: Seemingly Straightforward'''

The RCS tree is straightforward. It will encapsulate the data and operations related to the revision control system. SVN wraps the operations of the SVN revision control system, CVS will wrap the operations of the CVS revision control system, etc.

The Build tree is straightforward. It wraps the build tool used to build the source tree.

'''The Blurry: Current Points of Confusion'''

RCS's can remember the changes (deltas) that occurred in previous versions of a file, the history of changes that occur between revisions, etc.

A Changeset and its subclasses will encapsulate the idea of a set of changes. A set of changes could be broken down into various categories such as a specific revision, a list of directories, a list of files, a list of blocks of code, and finally a line of code.

A Change and its subclasses encapsulate the idea of a single change. A change can be a change made within a directory, a change made within a file, a change made to a block of code, or a change to a line.

A ChangesetFactory is supposed to return a change set based on the type of change set requested. To get the requested change set, one needs to know the type of revision control system (SVN, CVS, other, etc.) and/or the data required to connect to it. So there obviously needs to be a link between RCS and ChangesetFactory/Changeset. The question is how? What is the proper/best way to link them together? One way is to pass in an RCS object to the ChangesetFactory, which would then pass that object to the appropriate Changeset subclass. I don't like that solution, but it's the simplest.

Also, the method to get a change set for SVN may be different from CVS. So there may be a Changeset hierarchy for SVN and another one for CVS. I don't like the idea of that at all. There must be another way.
'''The Blind: Future Points of Confusions'''Attempts to run the tests have so far been unsuccessful. If someone (hint hint) could figure out how to run these tests and how these tests work that would be great.
* Applying a change in a changeset. Should the Changeset subclasses be able to do that? Are they the information expert? They know about the changes. Should they know how to apply them? How would we go abouts applying a subset of changes in a changeset? For example, there may have been changes in 10 different directories, how would we apply the changes from say 4 of the 10 directories and not the others?
* Connecting all 3 hierarchies together. Need to be able to connect to SVN, need to be able to get and apply changes, need to be able to build the source tree.
* The actual delta debugging algorithm.
But that's all for the future=== Oct.31/Nov. 01, 2006 ===
== Project News ==Committed some updates to the SVN repository.* Added file DirectoryChangeset.pl. This file gets a list of directories changed since the revision number passed in.* Added file DirectoryChange.pl. This file encapsulates the idea of a changed directory, sort of. Basically holds revision number and path of directory.
UPDATE:* I didn't feel tired so I added an applyChange([[#top|&uarr; top]])subroutine to the Changeset classes and ChangesetFactory class. This allows the user to apply a change (specified by the index passed in to the subroutine) in a changeset.
Up Next:* Based on how much time I predict I will have left to work on this, I don''This is where your regular updates t think I will go. In these you should discuss have enough time to do the status Change/Changeset classes for Codeblock or your work, your interactions with other members of the community (e.gLine.Therefore, Seneca it's time to skip ahead and Mozilla), problems you have encountered, etcwork on the application of changes. Put detailed technical information into Should the Project Details page (i.e., update it as you go), and save this section for news about participation user be able to pass in an array of indices of changes to apply in a changeset? Or is just allowing the user to give one index good enough? We may find that out soon enough when we try to implement the projectdelta debugging algorithm.''
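The Changeset/ChangesetFactory questions discussed above can be sketched concretely. The sketch below is in Python purely for illustration (the project itself is written in Perl), and every class and method name is a hypothetical stand-in mirroring the design notes: the factory is handed an RCS object and passes it to the appropriate Changeset subclass, and change application takes an array of indices so that a subset of changes (say, 4 of 10 changed directories) can be applied.

```python
class SVNChangeset:
    """Holds the set of changes between two revisions of an SVN tree."""

    def __init__(self, rcs, changes):
        self.rcs = rcs          # RCS object used to apply/unapply changes
        self.changes = changes  # e.g. a list of changed directories or files

    def apply_changes(self, indices):
        """Apply only the changes at the given indices (an array, not a scalar)."""
        return [self.rcs.apply(self.changes[i]) for i in indices]


class ChangesetFactory:
    """Returns a changeset appropriate to the revision control system."""

    @staticmethod
    def create(rcs, changes):
        if rcs.kind == "svn":
            return SVNChangeset(rcs, changes)
        raise NotImplementedError("only SVN is sketched here")


class FakeRCS:
    """Stand-in for a real RCS wrapper, just to make the sketch runnable."""
    kind = "svn"

    def apply(self, change):
        return "applied " + change


cs = ChangesetFactory.create(FakeRCS(), ["dir1", "dir2", "dir3", "dir4"])
print(cs.apply_changes([0, 2]))  # apply the changes from 2 of the 4 directories
```

Passing the RCS object through the factory keeps the Changeset subclasses ignorant of how the connection was established, which is one possible answer to the linking question raised above.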
I read through your documentation here, and it is looking good. I also spoke to Shaver by phone this morning, and we chatted briefly about this project. He suggests that you start your work by looking for a suitable '''Crash Case''', one that happens reliably. Then you need to look at what would be necessary in order to bisect the change set (e.g., [http://www.mozilla.org/bonsai.html bonsai] data) in order to get closer to the change that introduced the bug. Shaver suggested that robc (Rob Campbell) might be a good person to help you brainstorm on this.
 
== How to Get Involved ==
 
We need a test program that we could use and upload to our test SVN repository to test the delta debugging framework. Ideally, the test program will meet the following requirements:
# Has source files that span multiple directories wide and deep yet be small enough that the delta debugging can be done in a short amount of time so that all aspects of the delta debugger can be tested.
# Has a regression. Or can easily be modified so that some functionality will stop working.
# Has an automated test case that tests the regressive functionality.
If you don't have a program that meets the first requirement, we could also use test programs that simply have multiple source files. The key is that the program has more than one source file; programs contained in a single source file are useless to us.
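The third requirement (an automated test case) can be illustrated with a minimal sketch. This is written in Python for illustration only (the framework itself is Perl), and the command being tested is a hypothetical stand-in: the essential point is that the test decides pass or fail mechanically, with no human intervention, so the framework can run it over and over against different change sets.

```python
import subprocess

def functionality_works():
    """Run the program under test and check its behaviour mechanically."""
    # "echo hello" stands in for the real program under test; a real test
    # would run the freshly built binary and inspect its output, return
    # code, or a log file it produces.
    result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
    return result.returncode == 0 and "hello" in result.stdout

# A real test script would convert this boolean into an exit code
# (e.g. 0 = pass, nonzero = fail) so the delta debugging framework can
# interpret the result without parsing any prose.
print("PASS" if functionality_works() else "FAIL")
```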
 
If you have a program that meets these requirements, and you want to contribute to this project, then holla.
 
 
<hr />
 
 
If you are looking for an easy way in which to contribute to this project, you can jump in by writing one or more tests for the test suite. This does not require that you learn about the delta debugging inner-workings or structure.
 
Basic Advice:
* You '''must''' be able to automate the test--no human intervention is allowed.
* Possible test types include:
*: '''Crashing'''
*:: Can you crash the program with a minimal collection of circumstances (steps) that are easily reproducible? (In other words, can you write a script so that this happens in a controlled manner?)
*: '''Performance-related'''
*:: Is there a threshold for unacceptable consumption of time and/or space that is reason for concern?
*: '''Program hanging'''
*:: Does the program hang? Will it occur in a certain functionality of the software that is possible to isolate (reproduce) through scripted means?
*: '''Unexpected return codes'''
*:: What is a normal return code for the program? What is considered unexpected? Script a series of actions and pass the return code up to the test framework.
* Each test will fit into the test framework (which, at this point, still has to be designed). The tests must follow a few rules (again, undecided at this point).
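As one example of the test types listed above, a "program hanging" test can be automated with a time limit. This is a hedged sketch in Python (the framework itself is Perl, and the exact rules for fitting into the test framework are still undecided, as noted above): a command that fails to finish within the limit is treated as hung and therefore as a failing test.

```python
import subprocess

def runs_within(cmd, seconds):
    """Return True if cmd finishes successfully within the time limit."""
    try:
        completed = subprocess.run(cmd, timeout=seconds)
        return completed.returncode == 0
    except subprocess.TimeoutExpired:
        # The child is killed and the test reports a hang
        # (or unacceptably slow execution).
        return False

# Example: a sleep that exceeds the limit is reported as a hang.
print(runs_within(["sleep", "0.1"], 5))   # True
print(runs_within(["sleep", "5"], 0.5))   # False
```

The same pattern covers the performance-related type: pick a threshold for unacceptable time consumption and fail the test when it is exceeded.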
 
Please check back in a few days. Expect some templates and samples up shortly to help get you going. <u>The currently listed test types are subject to change.</u>
 
 
==Future of the Project==
Here are some of the ideas related to the continuation of this project. Included are some personal ideas of the team members, tasks to reach the overall objective (a working, robust, Delta Debugging Framework for Mozilla), and additional features/functionality that would enhance the framework. This is subject to change, and a project roadmap will be written in the near future.
 
===CVS Support via Bonsai===
For the exploration into Bonsai and to see where it is/was heading, please view the [[delta debugging framework bonsai direction|Bonsai Direction]]. It is likely that a workable solution could be produced utilizing some of the details found in the link. This functionality would be particularly useful to Mozilla as this [Bonsai] is the technology they currently use.
 
===Enhancement of the Algorithm===
Richard's algorithm can be further enhanced using a binary-search-like approach over the revision range from the current revision all the way back to when the regression was first noticed (or, alternatively, to when the crash case was last known to have worked). Currently it works in a sequential manner, testing all previous revisions in order.
 
:'''More Granularity'''
:For this course, Richard's algorithm supported changes down to the file level. In the future, it could go as far as evaluating changes to individual lines of code.
 
===Fleshed Out Test Suite Design===
The test suite test types should be further fleshed out and individual tests gathered (no participation from the class was possible due to time constraints; the test suite design wasn't fully explored and documented). Test suites could be put together for each major Mozilla.org project (Firefox, Thunderbird, Sunbird, Bugzilla, etc.).
 
===More Crash Cases===
More crash cases need to be found in order to test the project successfully.
 
===Unit Tests===
A debugging framework, more so than other projects, should have its code quality tested and scrutinized heavily.
 
===Code Review===
Perhaps some manual code audits could be performed by outside contributors in the future.