Delta debugging framework

([[#top|↑ top]])
[http://en.wikipedia.org/wiki/Delta_Debugging Delta debugging] is an automated approach to debugging that isolates failures systematically. Given a failing test that can be mechanically verified (including a browser crash), delta debugging is a way of automatically isolating the change that introduced the failure. For developers, a scenario like this happens all too often: a developer codes a piece of functionality that works. Then, over a period of time, multiple changes are made to the program source files, and that piece of functionality stops working. The cause of the regression could be any of the changes made since the time the functionality was last known to work. To isolate the cause of the regression, the developer begins the debugging process. Generally, debugging is a manual process where the developer must walk through the code while trying to keep track of variables and function calls. Sure, there are debuggers that can help you keep track of variables and the call stack, watch certain blocks of code, and execute the code step by step; however, debugging is still mainly a manual process.

Written in Perl, given
# that the source code is located in an SVN repository (support for CVS in the future),
# a test case that can automatically verify whether or not a piece of functionality of a program works, and
# a way to automatically build the program from the source code (if needed),
the delta debugging framework aims to automatically isolate the failure-inducing changes to the source code that caused a regression.
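To make the second requirement concrete, here is a hedged sketch (in Python, for illustration only; the project itself is written in Perl) of the kind of automated test case the framework expects: something that exercises one piece of functionality and reports the outcome mechanically. The command and expected output below are hypothetical, not part of the framework.

```python
import subprocess

def passes(cmd, expected="Hello"):
    """Return True when the program under test exits 0 and prints the
    expected text. A thin wrapper script would call this and exit with
    status 0 on pass / 1 on fail, so the delta debugging framework can
    verify the functionality without human involvement."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
    except OSError:
        # Program missing or unbuildable counts as a failure here.
        return False
    return result.returncode == 0 and expected in result.stdout

# e.g. passes(["./hello"]) -> True only while the greeting still works
```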
== Project License ==
([[#top|&uarr; top]])

Written in Perl, given a test case that can automatically verify whether or not a piece of functionality of a program works, the delta debugging framework aims to automatically isolate the failure-inducing changes to the source code that caused a regression.

Copyright (C) 2006 Richard Chu, Aditya Nanda Kuswanto, Dean William Woodside

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

[http://www.opensource.org/licenses/gpl-license.php The GNU General Public License (GPL) Version 2, June 1991]

Contact information about the authors of the delta debugging framework can be found on our individual profile pages.
== Project Contributor(s) ==
 
[[User:Reed|Reed Loden]] - Set up the CVS repository for us with the web front-end. Provided direction in querying Bonsai (a means to extract the output in XML).<br>
[[User:Elichak|Liz Chak]] - Documentation of the subroutines in the svn.pl and makewrapper.pl source files.
 
== Project Source Repository ==
 
([[#top|&uarr; top]])
 
Assuming you have [http://subversion.tigris.org/ SVN], the project's source can be obtained via SVN using the following command:
 
svn checkout svn://cdot.senecac.on.ca/deltadbg
 
The source can also be obtained at the following links:
* [http://matrix.senecac.on.ca/~rchu2/ddf/ddf.zip Delta Debugging Framework.zip]
* [http://matrix.senecac.on.ca/~rchu2/ddf/ddf.tar.bz2 Delta Debugging Framework.tar.bz2]
 
The test cases can be obtained via SVN using the following command:
 
svn checkout svn://cdot.senecac.on.ca/deltatest
 
The test cases can also be obtained at the following links:
* [http://matrix.senecac.on.ca/~rchu2/ddf/deltatest.zip DeltaTest.zip]
* [http://matrix.senecac.on.ca/~rchu2/ddf/deltatest.tar.bz2 DeltaTest.tar.bz2]
== Project Details ==
Now that we are aware of the different concepts that we must take into account with regard to delta debugging, the next section will outline some facts and assumptions that are being made, and attempt to define the vision and process of the delta debugging framework.
== Project Facts and Assumptions ==
([[#top|&uarr; top]])
 
'''Project Facts:'''
# The source tree for the Mozilla project is HUGE, with many different source file types (C++, JS, XUL, etc.) in many different directories.
# The developer has a test case that can be used to indicate whether the test passes, fails, or is indeterminate.
# The developer will NOT know the date/version of the last known good version.
# Bonsai is a tool that can produce a list of differences between versions of a source file. (Bonsai's functionality has not been examined closely yet, but will have to be, as it may be a key component of the framework.)
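The second fact means the framework must distinguish three outcomes rather than two. A minimal sketch of that three-valued classification (illustrative Python, not the project's Perl test framework; the exit-code convention of 0 = pass, 1 = fail, anything else = indeterminate is an assumption):

```python
from enum import Enum
import subprocess

class Outcome(Enum):
    PASS = "pass"
    FAIL = "fail"
    INDETERMINATE = "indeterminate"  # e.g. build broken, test could not run

def classify(cmd):
    """Map a test script's exit status onto the three outcomes the
    framework must distinguish. The 0/1/other convention is assumed
    here for illustration."""
    try:
        code = subprocess.run(cmd).returncode
    except OSError:
        return Outcome.INDETERMINATE
    if code == 0:
        return Outcome.PASS
    if code == 1:
        return Outcome.FAIL
    return Outcome.INDETERMINATE
```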
 
 
'''Possible Vision of the Delta Debugging Framework''':
 
(subject to change based on stakeholder consultation/feedback, feasibility study)
 
# Since the last time a developer executed a test case that passed, the developer modified some source files. The source files may be of the same type or mixed types, in the same directory or different directories. It shouldn't matter: the framework should be source type and location agnostic. Upon executing the test case again, the result is now a failure. The developer panics. It's only days before the deadline to submit bug patches before the source tree is supposed to be closed for release, and the bug is a blocker. The developer doesn't want to be shamed for delaying the release, and the source code is too complex to find the bug in time, so what should they do? Use the delta debugging framework, that's what! How, you may ask? Keep reading to find out. <small>* scenario may vary.</small>
# The delta debugging framework may require the developer to input one piece of information: the test case/function that used to pass but now fails. It will be used to determine whether the source files with progressive changes pass/fail the test.
# Once the developer has inputted this piece of information, the framework will use Bonsai to query the source tree and compile a list of all the changes to the source files since a certain point in time.
# (If there was a method of determining change dependencies so as to eliminate the possibility of inconsistencies, it would be done in this step. One possible way of reducing the possibility of inconsistencies is to logically group changes by location or check in time.)
# This step would be where the delta debugging algorithm would come into play. The algorithm should basically:
## Recursively, incrementally remove changes from the source code with the regression.
## Recompile the source tree.
## Execute the test case. There may be 3 outcomes:
### The test case passes. We know that the failure-inducing change(s) are in the change(s) that were removed.
### The test case fails. We know that the failure-inducing change(s) are not exclusively in the change(s) that were removed. I say not exclusively because of the concept of interference, where only a combination of changes, rather than any single change, causes the failure.
### The test case is indeterminate. There were some inconsistencies.
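The remove/recompile/re-test loop above is essentially Zeller's ddmin minimization. The sketch below is an illustrative Python version, not the project's DeltaDebugger.pl: the caller-supplied `fails()` predicate stands in for the apply-changes/rebuild/run-test step, and the indeterminate outcome is ignored for simplicity.

```python
def ddmin(changes, fails):
    """Minimize a failure-inducing change set (ddmin-style sketch).

    `changes` is a list of change identifiers; `fails(subset)` applies
    only `subset` to the last known good tree, rebuilds, runs the test
    case, and returns True when the test still fails. Returns a
    1-minimal failing subset.
    """
    assert fails(changes), "the full change set must reproduce the failure"
    n = 2  # number of chunks to split the current change set into
    while len(changes) >= 2:
        chunk = max(1, len(changes) // n)
        subsets = [changes[i:i + chunk] for i in range(0, len(changes), chunk)]
        for subset in subsets:
            complement = [c for c in changes if c not in subset]
            if fails(subset):                     # failure isolated to this chunk
                changes, n = subset, 2
                break
            if len(subsets) > 2 and fails(complement):
                changes, n = complement, max(n - 1, 2)
                break
        else:                                     # no progress: refine granularity
            if n >= len(changes):
                break
            n = min(len(changes), n * 2)
    return changes
```

With `fails` defined as "apply the subset, build, run the test case", the returned subset is 1-minimal: removing any single remaining change makes the failure disappear.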
 
== Project Flowchart ==
'''Updated Delta Debugging Flowchart'''

[[Image:Dd_flowchart1.PNG]]

Here are some thoughts regarding the flowchart:
# The whole process revolves around a certain '''Test''', which must be passed to complete the process. It is assumed that the source code passed this test before, but not anymore due to recent changes to the tree. The framework will locate these changes.
# The '''Test''' is a versatile module and can be adapted to accept any tests the user may use.
# When the initial test fails, the framework first attempts to locate which changeset causes this failure. This is done by "going back through time", retrieving the trees from previous revisions, and running each tree through the same test. The idea is to locate the latest revision where the test passes successfully.
# Once this revision is identified, the framework will extract the '''diff''', the difference between the two revisions.
# The framework will then use this '''diff''' to break down the difference possibilities (e.g. directory, file, etc.) and isolate the cause of the failure.
# Once this is done, the framework will deliver the cause of the failure in a report for the user and the operation is finished.

== Project Test Cases ==

The test cases used in this project are located in the [[Delta_debugging_testcases|Delta Debugging Testcases page]].

== Project Roadmap ==
([[#top|&uarr; top]])
[[Delta Debugging Framework Roadmap|Delta Debugging Framework Roadmap]]
The page outlines our current vision of the delta debugging framework and a roadmap of the work that needs to be completed to accomplish our vision. This roadmap may be subject to change and/or be expanded as our vision evolves through feedback and requests from others, our own ideas, etc.

== Project Partial Class Diagram ==
([[#top|&uarr; top]])
Most of the classes in blue exist in the source repository. The classes in pale yellow are classes that won't be completed in the first release.
[[Image:Dd_partialclassdiagram2.PNG]]
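As a rough illustration of the hierarchies in the diagram (names inferred from the Points of Confusion section; the real code is Perl, and every class body here is a stub, not the project's implementation):

```python
class RCS:
    """Base class for revision control system wrappers."""
    def diff(self, rev_a, rev_b):
        raise NotImplementedError

class SVN(RCS):
    """Wraps the svn commands (cf. svn.pl); stubbed for illustration."""
    def diff(self, rev_a, rev_b):
        return f"svn diff -r {rev_a}:{rev_b}"  # command that would be run

class Build:
    """Wraps the build tool used to build the tree (cf. makewrapper.pl)."""
    def build(self):
        return "make"  # command that would be run

class Changeset:
    """A set of changes at some granularity (revision, directory, file...)."""
    def __init__(self, changes):
        self.changes = list(changes)
    def get_change(self, index):
        return self.changes[index]

class ChangesetFactory:
    """Returns a changeset appropriate to the RCS in use; one proposed
    design passes the RCS object in (see Points of Confusion)."""
    @staticmethod
    def create(rcs, changes):
        return Changeset(changes)
```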
== Project Task List ==
([[#top|&uarr; top]])
<table style="width: 100%;" class="standard-table" border="1" cellpadding="1" cellspacing="1">
<tr>
<th>Task</th>
<th>Description</th>
<th>Assigned to</th>
<th>Status</th>
</tr>
<tr>
<td colspan="4"><strong>Change set / Change</strong>
</td>
</tr>
<tr>
<td>Retrieval of Change / Change set</td>
<td>The Granularity concept. A single revision may consist of hundreds or thousands of lines of code that were changed, yet only a couple of lines of the change may be responsible for the regression. Thus, there must be a method to break the change into smaller manageable chunks. The different types of chunks we may break a changeset into are: Revision, Directories, Files, Code Blocks, and Lines.</td>
<td>[[User:RichardChu|Richard Chu]]</td>
<td>Work in progress. Currently can retrieve change sets of type Revision, Directory, and File. Need to complete retrieval of Code Block and Line of Code change sets.<br />Requires thorough test suite (ChangesetTest.pl needs more test cases).</td>
</tr>
<tr>
<td>Application of Change / Change set</td>
<td>OK. Change sets can be retrieved. Now what? You must be able to apply a change or change set or subset of a change set to the source tree. Your mission is to figure out how to do that.</td>
<td>[[User:RichardChu|Richard Chu]]</td>
<td>Work in progress. Can apply a change (specified by index passed in) from a Revision, Directory, and File Changeset. Do we want to be able to pass in an array of indices and apply the changes associated with those indices? Requires some thought.<br />Requires thorough test suite (ChangesetTest.pl needs more test cases).</td>
</tr>
<tr>
<td colspan="4"><strong>GNU Make</strong> ([http://www.gnu.org/software/make/ http://www.gnu.org/software/make/])</td>
</tr>
<tr>
<td>Wrapper around the GNU make utility</td>
<td>Mozilla uses the GNU make utility to build their source tree. Your mission is to make a wrapper around the GNU make utility so that the make command can be programmatically called to build the source tree.</td>
<td>[[User:RichardChu|Richard Chu]]</td>
<td>Work in progress. Initial wrapper created: ''makewrapper.pl''. Requires thorough test case (''maketest.pl'' needs more test cases).</td>
</tr>
<tr>
<td colspan="4"><strong>Subversion (SVN) Repository</strong> ([http://subversion.tigris.org/ http://subversion.tigris.org/], [http://svnbook.red-bean.com/nightly/en/index.html http://svnbook.red-bean.com/nightly/en/index.html])</td>
</tr>
<tr>
<td>Wrapper around the necessary SVN commands</td>
<td>For the automated debugging to work, we may need to automatically modify the working copy by reverting to a different revision or updating certain directories and files. It may also need to know the differences between revisions and changesets.</td>
<td>[[User:RichardChu|Richard Chu]]</td>
<td>Work in progress. Initial wrapper created: ''svn.pl''. Currently has subroutines for ''commit'', ''update'', ''diff'', and ''checkout'' commands. May need to wrap other SVN commands. Requires thorough test case (''svntest.pl'' needs more test cases).</td>
</tr>
<tr>
<td>Query SVN repository for differences between two revisions</td>
<td>Your mission is to find out the relevant commands that can return the differences between two revisions, the meta-data that is kept with each revision, how differences between two revisions are stored and formatted, and how this data can be parsed into a usable form for our project (wrapper?).</td>
<td>[[User:RichardChu|Richard Chu]]</td>
<td>Work in progress.</td>
</tr>
<tr>
<td colspan="4"><strong>CVS/Mozilla Bonsai</strong> ([http://www.mozilla.org/bonsai.html http://www.mozilla.org/bonsai.html], [http://cvsbook.red-bean.com/OSDevWithCVS_3E.pdf CVS Book])<br />You can do these tasks by trying to interpret the Bonsai source code yourself, or preferably by finding a person who has intimate knowledge of the Bonsai source code and asking them.
</td>
</tr>
<tr>
<td>Query CVS via Bonsai for checkins</td>
<td>You can use Bonsai to search for the checkins made within a certain time frame, within a certain directory, made by a certain developer, etc. Your mission is to find the relevant source files, functions, variables, etc. that drive this functionality.</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr>
<td>Results of querying CVS via Bonsai for checkins</td>
<td>Bonsai obviously returns results from the query. The question is how? Your mission is to find the relevant source files, functions, variables, etc. that are used to return and store results. What you need to find out is what type of data is returned and how the results are formatted.</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr bgcolor="#F0F0F0">
<td>Query CVS via Bonsai for version history</td>
<td>For each file in the CVS repository, Bonsai has the ability to list a history of versions, from the first to the latest. Your mission is to find out the relevant source files, functions, variables, etc. that are used to obtain the version history for a certain source file.</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr bgcolor="#F0F0F0">
<td>Results of querying CVS via Bonsai for version history</td>
<td>Your mission is to find out the relevant source files, functions, variables, etc. that are used to return and store the results of searching for a file's version history. What data is returned and how is the data stored/formatted?</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr>
<td>Query CVS via Bonsai for differences between versions</td>
<td>Using Bonsai, a user can see the differences between two different versions of a source file. Your mission is to find out the relevant source files, functions, variables, etc. that are used to find the differences between two different versions of a source file.</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr>
<td>Results of querying CVS via Bonsai for differences between versions</td>
<td>Using Bonsai, a user can see the differences between two different versions of a source file. How are the results returned? How are they formatted? Your mission is to find out the relevant source files, functions, variables, etc. that are used to return the results of the query.</td>
<td>TBD.</td>
<td>Not started.</td>
</tr>
<tr>
<td colspan="4"><strong>Test Case(s)</strong> ([http://www.mozilla.org/tinderbox.html Tinderbox])</td>
</tr>
<tr>
<td>Creation / Extrapolation of Test Case(s)</td>
<td>We need test cases that can return whether or not the test passes or fails. Tinderbox has a couple of tests that are executed after the source is built. Extrapolate those tests from the Tinderbox source code so that we can use them in this project. We also need a test case that can pass/fail consistently so that we can test the delta debugger.</td>
<td>[[User:Ankuswan|Aditya Nanda Kuswanto]]</td>
<td>Work in progress. Found the tests! Now need to figure out how to run them and how they work.</td>
</tr>
<tr>
<td colspan="4"><strong>Implementation of Delta Debugging Algorithm</strong> ([http://www.infosun.fmi.uni-passau.de/st/papers/tr-99-01/ Yesterday, my program worked. Today, it does not. Why?])</td>
</tr>
<tr>
<td>The Algorithm</td>
<td>The delta debugging algorithm. Drives the framework to retrieve change sets, apply changes, build the source tree, and run test case(s) to find the minimal set of failure inducing changes. The intersection of all other parts of the framework that makes them work together. Ideally, it should be abstract enough for easy extensibility with little impact.</td>
<td>[[User:dwwoodsi|Dean Woodside]]</td>
<td>Work in progress. Check the SVN repository from time to time.</td>
</tr>
</table>

== Project References ==
([[#top|&uarr; top]])

[http://programming.newsforge.com/article.pl?sid=05/06/30/1549248&from=rss NewsForge: An Introduction to Delta Debugging]

Delta debugging simplifies the debugging process by automating it and continually splitting the program into smaller chunks called deltas. This technique is useful in three circumstances:
* The error occurs due to user input (e.g. keypresses, file I/O). Delta debugging is used to eliminate user actions irrelevant to the nature of the error and pinpoint its cause.
* The error occurs due to recent changes to the code. In this situation, deltas are retrieved from the net differences between the two versions of the code.
* Multithreading environments. Delta debugging can track down the exact order of operations originating from multiple threads that caused the error.

== Points of Confusion ==
([[#top|&uarr; top]])

[[Bonsai issue]] -- '''unresolved'''

When I get confused, I draw diagrams.

[[Image:Dd_partialclassdiagram.PNG]]

'''The Clear: Seemingly Straightforward'''

The RCS tree is straightforward. It will encapsulate the data and operations related to the revision control system. SVN wraps the operations of the SVN revision control system, CVS will wrap the operations of the CVS revision control system, etc.

The Build tree is straightforward. It wraps the build tool used to build the source tree.

'''The Blurry: Current Points of Confusion'''

RCSs can remember the changes (deltas) that occurred in previous versions of a file, the history of changes that occur between revisions, etc.

A Changeset and its subclasses will encapsulate the idea of a set of changes. A set of changes could be broken down into various categories such as a specific revision, a list of directories, a list of files, a list of blocks of code, and finally a line of code.

A Change and its subclasses encapsulate the idea of a single change. A change can be a change made within a directory, a change made within a file, a change made to a block of code, or a change to a line.

A ChangesetFactory is supposed to return a change set based on the type of change set requested. To get the requested change set, one needs to know the type of revision control system (SVN, CVS, other, etc.) and/or the data required to connect to it. So there obviously needs to be a link between RCS and ChangesetFactory/Changeset. The question is how? What is the proper/best way to link them together? One way is to pass in an RCS object to the ChangesetFactory which would then pass that object to the appropriate Changeset subclass. I don't like that solution but it's the simplest.

Also, the method to get a change set for SVN may be different from CVS. So there may be a Changeset hierarchy for SVN and another one for CVS. I don't like the idea of that at all. There must be another way.

'''The Blind: Future Points of Confusion'''

In the pipeline:
* Applying a change in a changeset. Should the Changeset subclasses be able to do that? Are they the information expert? They know about the changes. Should they know how to apply them? How would we go about applying a subset of changes in a changeset? For example, there may have been changes in 10 different directories; how would we apply the changes from, say, 4 of the 10 directories and not the others?
* Connecting all 3 hierarchies together. Need to be able to connect to SVN, need to be able to get and apply changes, need to be able to build the source tree.
* The actual delta debugging algorithm.

But that's all in the future.

== Project News ==
([[#top|&uarr; top]])

=== Dec. 22, 2006 ===
I haven't posted an update in a while. So what's been done?

I finally had some time to do a second round of testing & debugging of the delta debugging framework. And guess what? It ''seems'' to work now. The problem? A combination of logical errors when applying and unapplying changes in the framework and a bad test case. Go figure.

However, before I get ahead of myself and officially tag and release the delta debugging framework as version 0.1, I would like to test it out on another test program. Hopefully, this can be done this weekend. And if all goes well, version 0.1 will be officially released before the end of the year.

=== Dec. 13, 2006 ===
Created the [[Delta_debugging_testcases|Delta Debugging Testcases]] page to discuss the nature of the test cases created to test the algorithm. Included in the page are the 2 testcases created so far, the '''HelloWorld''' binary test and the '''Sudoku''' test. Both tests can be found in the '''deltatest''' svn repository. The repository can be checked out using this command:
<pre> svn checkout svn://cdot.senecac.on.ca/deltatest </pre>
Exactly 12 days before Christmas, the delta debugging framework has been released under the [http://www.opensource.org/licenses/gpl-license.php GPL Version 2] License.

Unfortunately, we haven't had the time to test the delta debugger much since Dec. 09, 2006 because of exams and other school work. Planning to spend some time this weekend to test the delta debugger and figure out why it currently seems to not be able to find the minimal set of failure inducing directories/files (whether it's because of an unreliable test case or a logical error in the program).

A roadmap of our vision of the direction the project will be heading in the future will be created and posted soon.

=== Dec. 11, 2006 ===
Uploaded a testcase for the '''HelloWorld''' binary at the '''deltatest''' svn. The test simulates the error that may occur when compilation fails due to a syntax error. The exalted HelloWorld program is located in the HelloWorld directory, while the test definition is at HelloTestCase1.pm. The algorithm detects the failed test and reverts the affected file to the version where the test passes.

Note for the future: improve user feedback functions!

=== Dec. 10, 2006 ===
Where is the CVS/Bonsai work heading? Here is a breakdown of the past 3-4 weeks:
* Initially was going for a straight wrapper around CVS ala the style Richard used for SVN.
* Tried to find some functionality within Bonsai that could make it easier.
* Talked to Reed Loden, he set up a repository for us to try with. Thanks Reed!
* Thought that there may be some additional (read: unpublished) tools that could be worked with. Got in contact with some of the "Project Participants" listed on [http://www.mozilla.org/projects/bonsai/]. Was told the person in particular wasn't a contributor (just submitted a bug report). They in turn pointed me to [irc://irc.mozilla.org/#mozwebtools #mozwebtools].
* Lurked on [irc://irc.mozilla.org/#mozwebtools #mozwebtools] for a few weeks. Talked to 'justdave' about Bonsai. Reed Loden chimed up and informed me that Bonsai can output to XML using ?xml=1 on the query (score! thanks again).
* Researched some Perl parsing utilities. Trying out XML::LibXML for DOM-style parsing.
* Hopefully wrap something up by Wednesday. Failing that, might just go with a simple CVS wrapper of some sort.

=== Dec. 09, 2006 ===
What has been done since last week?
* Got a test program and uploaded it to svn://cdot.senecac.on.ca/deltatest. The pristine working version is revision 4. The latest committed copy is revision 8. The regressive code was committed somewhere in between.
* Started testing the delta debugging framework.

The results of the testing?

'''Finding the minimal revision set/last known good revision'''

Works. The delta debugger correctly reverts to a previous revision, builds the source code, and runs the test case. The test case returns the proper results on whether or not it passes or fails. The delta debugger correctly stops at revision 4 - the last known good version.

'''Finding the minimal failure-inducing set of directories'''

Indeterminate. There is only 1 directory in the source repository, so that directory should be returned as the minimal failure inducing set of directories. Does it return it? Yes and no.

The delta debugger correctly applies all of the changes within that directory. And I think it correctly builds the source tree and runs the test case. However, the return code of the test case is not as expected. I expect the test case to report that the test fails; however, it reports that it passes. Thus, the delta debugger returns no directories as failure inducing.

However, if I force the test case to return the expected result, then the delta debugger correctly returns the directory as the failure-inducing one.

I suspect (or at least hope) that the indeterminate results of finding the minimal set of failure inducing directories are because of a possibly unreliable or inconsistent test case. However, I can not be sure until I rule out the test case as the problem.

'''Finding the minimal failure-inducing set of files'''

Indeterminate. There are multiple source files in the repository. Does it return the correct failure-inducing source file? I don't know. I have the same suspicions for this as for the directory changeset.

Based on the testing, it seems to be able to cycle through every combination of changes in the changeset, apply the combination of changes, build the source code, and run the test case. The test case just seems to not report the correct test results.

=== Dec. 03, 2006 ===
Committed some updates to the SVN repository.
* The test framework. There are a couple of files to the framework: Test.pl, TestCase.pl, TestSuite.pl, TestResult.pl, TestRunner.pl. It is loosely based off of the design of the JUnit framework. Why such an elaborate design just for letting users define the test case that determines whether or not a piece of functionality works? For a few reasons that I may be adamant about:
*# To use the delta debugging framework, the user should not have to touch the DeltaDebugger.pl file to define the tests and how to run them. Using the testing framework, this can be done by subclassing the TestCase.pl class and overriding the run() subroutine.
*# For the delta debugger to work, it needs to know whether the test case passes or fails. Using the test framework, I hope to control the possible return codes of the tests to either pass or fail only.
* testtest.pl, which tests the functionality of the test framework.
* Updates to DeltaDebugger.pl to make use of the test framework.

Crunch time. One week left. The high priority tasks that still need to be done:
# Acquisition of a program we could use to test the delta debugging framework. See [[#How_to_Get_Involved|How To Get Involved]] for more info.
# Test and debug the delta debugging framework.

=== Nov. 26, 2006 ===
Committed some updates to the SVN repository.
* Updated the delta debugging algorithm module. I didn't realize this yesterday, but the algorithm to find the minimal set of failure inducing files (and code block and line of code changes, if those changeset types ever get done) is the same (with minor modifications) as the algorithm that can find the minimal set of failure inducing directories. Thus I generalized that algorithm to remove the directory changeset specific code so that it will work with all other types of changesets.
* Removed the debugging/test related code from the source files.

CVS Repository Setup (thanks to [[user:reed|Reed Loden]]!): '''hera.senecac.on.ca/deltatest'''
* [http://hera.senecac.on.ca:43080/viewvc.cgi/?root=deltatest ViewVC Web Repository Browser]
* If you want commit access for whatever reason, email one of the project members.

'''Milestone:'''
* Even though the test framework is incomplete, I think we can go ahead and begin the initial testing of the delta debugger on a real regressive program, as I think we are ready. Coincidentally, exactly 2 months after the first project news posting on Sept. 26, 2006.

=== Nov. 25, 2006 ===
I haven't posted an update in a while so here goes. What's been done since then?

Committed some updates to the SVN repository.
* Modified the Changeset hierarchy of classes. Added a getChange() subroutine that takes an index and retrieves the change from the changeset. Also modified the getChangeset() subroutine to optionally take an array of directories/files to limit the search scope to within the directories/files passed in. These changes are possibly dangerously untested.
* Committed the DeltaDebugger.pl file. This file houses the actual delta debugging algorithm. It requires three user-defined pieces of information: a Build object, an RCS object, and the automated test cases. Currently, it can theoretically find the failure inducing revision and the minimal failure inducing set of directories.
* Committed a DeltaDebuggerTest.pl file. It just tests the correctness of the theory.

In the works:
* Continue working on the delta debugging algorithm. Need to be able to find the minimal failure inducing set of files.
* Test framework. Allow users to plug in test cases/suites without touching the DeltaDebugger.pl module.

The deadline for a version 0.01 release is looming. 1-2 weeks left to get this done. What needs to be done to accomplish this?
* Finish everything that is in the works real soon.
* Need a test program that we could use and upload to our test SVN repository to test the delta debugging framework. Ideally, the test program will meet the following requirements:
*# Has source files that span multiple directories wide and deep, yet is small enough that the delta debugging can be done in a short amount of time so that all aspects of the delta debugger can be tested.
*# Has a regression. Or can easily be modified so that some functionality will stop working.
*# Has an automated test case that tests the regressive functionality.
* Put theory into practice. So far the delta debugging algorithm has not been tested on a real program. The correctness of the algorithm has only been confirmed in theory. We need to test the algorithm in a production environment real soon.

=== Nov. 19, 2006 ===
The earlier crash case we had (see the update directly below) was a non-regressive bug--there was no former build that worked with it.

Going to use [https://bugzilla.mozilla.org/show_bug.cgi?id=325377 Bug #325377] instead. Having difficulty identifying when it was first introduced--the information in the bug report doesn't seem to be quite accurate. Using the nightly builds as archived at [http://archive.mozilla.org/pub/mozilla/nightly/ http://archive.mozilla.org/pub/mozilla/nightly/] to narrow it down.

Fortunately this crash is easily automated and does not require user interaction.

=== Nov. 18, 2006 ===
* <strike>Found a suitable crash case thanks to the people of [irc://irc.mozilla.org#qa #qa] (in particular, asqueella and Aleksej). For full details on the bug, see [https://bugzilla.mozilla.org/show_bug.cgi?id=354300 Bug #354300].</strike>
* Talked to Reed Loden on IRC. He will be setting up a CVS repository for us sometime this coming week (Tuesday at earliest).

=== Nov. 17, 2006 ===
Committed some updates to the SVN repository.
* Changed the applyChanges subroutine to take an array of indices instead of a scalar index.
* Added an unapplyChanges subroutine to the Changeset classes.
* [http://www.cpan.org/modules/by-module/Math/Math-Combinatorics-0.08.readme Math::Combinatorics], shamelessly stolen from [http://www.cpan.org/modules/by-module/Math/Math-Combinatorics-0.08.tar.gz here]. This module is used in the Delta Debugging Algorithm module to help find the minimal failure-inducing changeset.
* Delta Debugging Algorithm partially complete. Unthoroughly tested, though it can theoretically find the directories that contain the failure inducing changes.
* Test cases and samples we can use to test the source tree.

Uploaded files into the '''scen1''' directory, containing the test module for '''binaryTest'''. The test is ready to be used in the algorithm. The directory contains:
* '''binaryTest.pl''' - test to detect the existence of a file.
* '''helloWorld.pl''' - enough said!
* '''binaryTestCaller.pl''' - runs '''helloWorld.pl''', pipes the result to '''hello.log''', and has '''binaryTest.pl''' attempt to detect it.
This is the working version of the code, labeled '''revision 12'''. Now I have to find a way to wreck it........

=== Nov. 14, 2006 ===
The development of a testing system for the framework is in the works. The first scenario revolves around a test called '''BinaryExist''', which has been shamelessly ripped from the Tinderbox script. All this test does is check whether a given file exists in the system. While this test can aspire to great things, right now it's doing a simple thing, like checking whether its client Hello World program is doing what it's supposed to. Initial testing reveals that this test has potential. Will be uploaded to the SVN soon.
Attempts to run the tests have so far been unsuccessful. If someone (hint hint) could figure out how to run these tests and how they work, that would be great.
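The BinaryExist scenario described above (a caller runs the Hello World client, pipes its output to a log, and a second test checks that the log now exists) can be sketched as follows. Note that the framework itself is written in Perl; this is only an illustrative Python sketch, and the file name <code>hello.log</code> mirrors the one used in '''scen1''' while everything else (function names, interfaces) is hypothetical:

```python
import subprocess
import sys
from pathlib import Path

LOG = Path("hello.log")  # log name borrowed from scen1; purely illustrative


def run_and_log(cmd):
    """Run the program under test and pipe its stdout into the log file,
    mimicking what binaryTestCaller.pl does with helloWorld.pl."""
    with LOG.open("w") as log:
        subprocess.run(cmd, stdout=log, check=False)


def binary_exist(path):
    """The whole BinaryExist test: pass iff the expected file exists."""
    return Path(path).exists()

# Hypothetical driver: run the client, then report pass/fail, e.g.
#   run_and_log([sys.executable, "helloWorld.py"])
#   passed = binary_exist(LOG)
```

A driver script would call <code>run_and_log(...)</code> and then turn <code>binary_exist(LOG)</code> into a pass/fail exit code for the framework.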
 
=== Oct. 31/Nov. 01, 2006 ===
I read through your documentation here, and it is looking good. I also spoke to Shaver by phone this morning, and we chatted briefly about this project. He suggests that you start your work by looking for a suitable '''Crash Case''', one that happens reliably. Then you need to look at what would be necessary to bisect the change set (e.g., [http://www.mozilla.org/bonsai.html bonsai] data) in order to get closer to the change that introduced the bug. Shaver suggested that robc (Rob Campbell) might be a good person to help you brainstorm on this.
 
== How to Get Involved ==
 
We need a test program that we could use and upload to our test SVN repository to test the delta debugging framework. Ideally, the test program will meet the following requirements:
# Has source files that span multiple directories wide and deep yet be small enough that the delta debugging can be done in a short amount of time so that all aspects of the delta debugger can be tested.
# Has a regression. Or can easily be modified so that some functionality will stop working.
# Has an automated test case that tests the regressive functionality.
If you don't have a program that meets the first requirement, we could also use test programs that have multiple source files. The key is that the program has more than one source file; programs contained in only one source file are useless to us.
 
If you have a program that meets these requirements, and you want to contribute to this project, then holla.
 
 
<hr />
 
 
If you are looking for an easy way in which to contribute to this project, you can jump in by writing one or more tests for the test suite. This does not require that you learn about the delta debugging inner-workings or structure.
 
Basic Advice:
* You '''must''' be able to automate the test--no human intervention is allowed.
* Possible test types include:
*: '''Crashing'''
*:: Can you crash the program with a minimal collection of circumstances (steps) that are easily reproducible? (In other words, can you write a script so that the crash happens in a controlled manner?)
*: '''Performance-related'''
*:: Is there a threshold for unacceptable consumption of time and/or space that is reason for concern?
*: '''Program hanging'''
*:: Does the program hang? Will it occur in a certain functionality of the software that is possible to isolate (reproduce) through scripted means?
*: '''Unexpected return codes'''
*:: What is a normal return code for the program? What is considered unexpected? Script a series of actions and pass the return code up to the test framework.
* Each test will fit into the test framework (which, at this point, still has to be designed). The tests must follow a few rules (again, undecided at this point).
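As a concrete illustration of the "unexpected return codes" test type (which, with a timeout, also doubles as a crude hang detector), here is a sketch. The framework is written in Perl and its test interface is still undesigned, so this Python function and its signature are assumptions, not the real API:

```python
import subprocess
import sys


def return_code_test(cmd, expected=0, timeout=60):
    """Run the scripted actions; pass only when the program exits with
    the code we consider normal. A timeout treats a hang as a failure."""
    try:
        proc = subprocess.run(cmd, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # program hung: count it as a failing test
    return proc.returncode == expected

# Hypothetical usage: a test that expects exit code 3 would call
#   return_code_test([sys.executable, "some_action.py"], expected=3)
```

The test framework would collect the boolean result (or an exit code derived from it) from each such script.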
 
Please check back in a few days. Expect some templates and samples up shortly to help get you going. <u>The currently listed test types are subject to change.</u>
 
 
==Future of the Project==
Here are some of the ideas related to the continuation of this project. Included are some personal ideas of the team members, tasks to reach the overall objective (a working, robust, Delta Debugging Framework for Mozilla), and additional features/functionality that would enhance the framework. This is subject to change, and a project roadmap will be written in the near future.
 
===CVS Support via Bonsai===
For the exploration into Bonsai and to see where it is/was heading, please view the [[delta debugging framework bonsai direction|Bonsai Direction]]. It is likely that a workable solution could be produced utilizing some of the details found in the link. This functionality would be particularly useful to Mozilla as this [Bonsai] is the technology they currently use.
 
===Enhancement of the Algorithm===
Richard's great algorithm can be further enhanced using a binary search-like approach over the revision range, from the current revision all the way back to when the regression was first noticed (or, alternatively, to when the crash case was last known to work). Currently it works in a sequential manner, testing all previous revisions in order.
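The binary search idea can be sketched roughly as follows. This is an illustrative Python sketch (the framework itself is Perl); <code>fails</code> stands in for the whole check-out/build/run-test step, and we assume the oldest revision in the range passes while the newest fails:

```python
def bisect_revisions(revisions, fails):
    """Binary search over an ordered revision list.

    Assumes revisions[0] is good and revisions[-1] is bad, and that
    fails(rev) builds/tests that revision, returning True on failure.
    Returns the first failing revision using O(log n) tests instead of
    testing every revision sequentially."""
    lo, hi = 0, len(revisions) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(revisions[mid]):
            hi = mid  # failure already present: look earlier
        else:
            lo = mid  # still good: regression came later
    return revisions[hi]
```

This mirrors what tools like <code>git bisect</code> later popularized: each test halves the remaining revision range.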
 
:'''More Granularity'''
:For this course, Richard's algorithm supported changes down to the file level. In the future, it could go as far as evaluating changes at the level of individual lines of code.
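Whatever the granularity (directories, files, or eventually lines), the core search the framework performs with Math::Combinatorics is over subsets of changes. The following is a naive Python sketch in that combinatorial spirit, not the actual Perl implementation: it tries ever-larger combinations and returns the first, hence smallest, subset that still reproduces the failure. Here <code>fails(subset)</code> is a hypothetical callback that applies the subset to a clean tree, rebuilds, and runs the test:

```python
from itertools import combinations


def minimal_failing_subset(changes, fails):
    """Exhaustively search for a minimal failure-inducing changeset.

    Exponential in len(changes), so only practical for small change
    sets; a real delta debugging algorithm (ddmin) narrows the set far
    more efficiently."""
    for k in range(1, len(changes) + 1):
        for subset in combinations(changes, k):
            if fails(subset):
                return list(subset)  # smallest subset found first
    return None  # failure could not be reproduced from these changes
```

For example, if only the combination of changes "b" and "d" triggers the crash, the search returns exactly those two and the developer can ignore the rest.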
 
===Fleshed Out Test Suite Design===
The test suite test types should be further fleshed out and individual tests gathered (no participation from the class was possible due to time constraints; the test suite design wasn't fully explored and documented). Test suites could be put together for each major Mozilla.org project (Firefox, Thunderbird, Sunbird, Bugzilla, etc.).
 
===More Crash Cases===
More crash cases need to be found in order to test the project successfully.
 
===Unit Tests===
A debugging framework, more so than other projects, should have its code quality tested and scrutinized heavily.
 
===Code Review===
Perhaps some manual code audits could be performed by outside contributors in the future.