=Processing.js Game Paper=

=First Draft=

==Introduction==

Game delivery in a webpage has typically required some sort of plug-in. However, due to security concerns and general wariness toward plug-ins, they are not the most effective means of delivering content. Furthermore, there are platforms where a plug-in does not exist or cannot exist. Even Flash, one of the most ubiquitous visual environments, is not available on every platform. The only real solution for web delivery of rich graphics is to integrate it into native browser technology.

The HTML <canvas> element allows the programmatic delivery of graphics in a web page without plug-ins. With its inclusion in the soon-to-be-released IE 9, the <canvas> element now represents a means to deliver graphical content in all the major browsers. The typical way to interact with the canvas is to use JavaScript, but for artists, educators, and other people less familiar with JavaScript, learning to do this can be a barrier to entry.

The Processing language, introduced by Ben Fry and Casey Reas, is a simple and elegant language for data visualization that is already used by artists, educators, and commercial media to deliver rich graphical content called sketches. There is a large body of work around the world that has been developed using Processing. However, Processing was originally developed in Java, and delivering Processing sketches on a webpage therefore required that the user install a Java plug-in. Furthermore, the sketches themselves were self-contained items as opposed to being part of a web page. That is, the elements of the Document Object Model (DOM) of a webpage could not interact with a sketch, or vice versa. Thus, while it was possible to deliver visual content, it would be difficult to create Processing sketches that take full advantage of modern web services such as Flickr or Twitter.

Processing.js is an open source, cross-browser JavaScript port of the Processing language. It uses the canvas element for rendering and does not require any plug-ins. However, Processing.js is more than just a Processing parser written in JavaScript. It also enables the embedding of other web technologies into Processing sketches. This extension allows for a new set of visualizations that were previously not possible. Processing.js seamlessly integrates web technologies with the Processing language to provide an accessible framework for multimedia web applications.

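For instance, a sketch can be bound to a canvas directly from JavaScript. The following is a minimal sketch of that pattern; the element id and the inlined sketch source are illustrative, and it assumes processing.js has already been loaded in the page:

<pre>
// Assumes the page contains <canvas id="sketch"></canvas> and that
// processing.js is loaded. The sketch source would normally live in a
// separate .pde file; it is inlined here for brevity.
var source = "void setup() { size(200, 200); }" +
             "void draw() { background(0); ellipse(mouseX, mouseY, 20, 20); }";
var canvas = document.getElementById("sketch");
var p = new Processing(canvas, source); // parses and runs the sketch
</pre>
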
==Background==

The Processing.js project was started by John Resig, who wanted to utilize the HTML5 canvas element and take advantage of the Java-based Processing language. It took about seven months to get a working version, consisting of 5000 lines of code, but it was not a complete port of the Processing language. The project, like other open source products, was released with the hope that a developer community would converge around it and contribute to development. In September 2009, we began the work to complete the port to JavaScript. In order to facilitate an architecture for participation, the source code had to be readily available, and the inner workings of the project and the missing functionality had to be publicized. To this end, the source code was made publicly available on GitHub, and an issue tracking system was used to manage the large number of issues that needed to be resolved in order to complete the port. A review process was set up to ensure that submitted code was of sufficient quality.

From its inception, Processing.js was designed to be more than just a rewrite of the Java functions provided by Processing in JavaScript. John Resig wrote the original Processing.js parser to scan a Processing sketch for hints of Java code and convert that code to JavaScript. However, if the parser encountered JavaScript code, it would leave that code intact. This method allowed not only for the conversion of existing Processing code to JavaScript but also for the injection of JavaScript into Processing sketches. By allowing JavaScript to remain intact within a Processing sketch, Java and JavaScript code can exist together without any need to declare which language is being used. Old sketches written for Processing will work, and new sketches written for Processing.js can contain not only Processing code but can also make use of JavaScript to interact with other elements of the webpage.

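A small illustration of this mingling (the code is a sketch, for illustration only; the JavaScript lines would not run under the native Java-based Processing, which is why the parser's leave-it-intact behaviour matters):

<pre>
// Processing-style code and plain JavaScript, side by side in one sketch.
void setup() {
  size(200, 200);
  // The next two lines are ordinary JavaScript; the parser leaves them intact.
  var title = document.title;
  println("running inside the page: " + title);
}
</pre>
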
==JavaScript==

When the original Processing language (also known as P5) was first developed, Java was supposed to become the language of the web, while JavaScript was a little toy language that many did not take seriously. However, as the web matured, JavaScript became the language of the web, yet many of the misconceptions about it still persist. /*cite javascript the good parts here*/ With recent developments in JavaScript technology, JavaScript is now fast enough to handle the demands of realtime interactive web graphics.

Processing.js is more than just a Processing parser re-written in JavaScript. It is designed in a way that connects the Processing language (also known as P5) with web technologies such as JavaScript, the HTML5 canvas element, jQuery, and various web services. Furthermore, Processing.js is built in such a way as to allow easy integration of new technologies as they emerge. It is designed to be fast and to take advantage of recent JavaScript developments to ensure that the platform is responsive.

While syntactically JavaScript and Java are fairly similar, there are some fundamental differences that have made this conversion challenging. The first is that we wanted to do the conversion dynamically, in real time. The code produced by the converter needed to be fully object oriented, and we had to provide support for all native Java functions and objects that are supported by Processing. We also had to take into account the differences between working with web resources and local resources. Furthermore, we had to consider how to handle some fundamental differences between Java and JavaScript, such as typed vs. typeless variables, function overloading, and variable name overloading.

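As an example of the overloading problem, two Java overloads must collapse into a single JavaScript function that chooses a branch at run time. The sketch below is illustrative, not actual parser output, and the helper name fillImpl is hypothetical:

<pre>
// Java's fill(gray) and fill(r, g, b) collapse into one JavaScript
// function; the branch is chosen by arguments.length at run time.
function fill(r, g, b) {
  if (arguments.length === 1) {
    // the one-argument (gray) overload
    return fillImpl(r, r, r);
  }
  // the three-argument (r, g, b) overload
  return fillImpl(r, g, b);
}
</pre>
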
The original code for Processing.js used regular expressions to convert Java into JavaScript wherever it was encountered. It did this by scanning for hints of Java code within the entire sketch and then replacing the Java code with its JavaScript equivalent. Due to the difference in how Java and JavaScript access object properties from methods inside an object, the with statement was used as a simple solution to avoid having to prepend all function calls with "this." or "Processing.". However, the use of the with statement also meant that the generated JavaScript would fall off trace /*cite trace paper here... do we need to talk about trace in the background section???*/, making the code run slower than it needed to in some browsers. Later, this method of scanning the entire sketch was replaced by the creation of an abstract syntax tree that broke the code into smaller pieces, and the regular expressions were applied to each piece. This made it easier to apply the regular expressions correctly without accidentally converting code that was already working. It also made it easier to create proper inheritance structures and attach properties and methods to the correct object in the hierarchy chain, as smaller pieces of code were being converted at any one time.

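To make the trade-off concrete, the two code shapes look roughly like this (illustrative):

<pre>
// Early approach: a with block lets bare Processing calls resolve
// against the library object, but with blocks defeat tracing JITs.
with (processing) {
  background(0);
  rect(10, 10, 50, 50);
}

// Later approach: the parser prepends the owning object to each call.
processing.background(0);
processing.rect(10, 10, 50, 50);
</pre>
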
==Browser Unification==

One important feature provided by Processing.js is that it hides the differences between browsers. Web standards are often loosely defined, and thus variations can exist. These variations exist not only between different browser vendors but even between versions of the same browser on different platforms. Something as simple as key events can vary widely between browsers. Processing.js hides a large number of these differences from the user by creating a unified method of handling events. Regardless of the browser or platform, events within Processing.js are handled the same way.

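As a result, a sketch-level handler such as the one below behaves the same everywhere, with the event differences absorbed by the library (a minimal sketch):

<pre>
// The same handler fires consistently across browsers and platforms.
void keyPressed() {
  if (key == CODED && keyCode == UP) {
    println("up arrow pressed");
  }
}
</pre>
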
Different browser makers are also at various stages of implementing newer technologies. For example, WebGL provides typed arrays, which are much faster than traditional JavaScript arrays. While these typed arrays are implemented for WebGL, they can also be used outside of that context and can provide a tremendous speed improvement. However, not every browser supports WebGL at this time, so a fallback to regular JavaScript arrays is necessary when the feature does not exist.

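A simplified sketch of such a fallback (the library's actual detection code is shown later in these notes):

<pre>
// Use the fast typed array when the browser provides it; otherwise
// fall back to a plain JavaScript array. Simplified for illustration.
var count = 1000; // example vertex count
var vertices = (typeof Float32Array !== "undefined") ?
               new Float32Array(count * 3) :
               new Array(count * 3);
</pre>
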
By hiding these differences between browser makers from the user, Processing.js provides a means for game developers to make games without worrying about the differences between browsers. If a feature exists that can make the rendering smoother and faster, Processing.js will make use of it to increase performance. If it does not, a fallback mechanism allows the sketch to still run.

==3D support==

The introduction of the <canvas> tag into the HTML5 specification allowed Processing to be ported to JavaScript, thus enabling users to run 2D sketches within the browser without additional plug-ins. At the time porting began, there was no plug-in free method of delivering 3D content, which limited Processing.js to its 2D functions. WebGL, a JavaScript API based on OpenGL ES 2.0, is now being implemented by Firefox, Chrome and Safari, and it has become a viable candidate for use in Processing.js to render 3D sketches. Additionally, since WebGL closely matches the OpenGL used by Processing, porting the 3D Processing functions was relatively straightforward.

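A minimal 3D sketch then looks much like its 2D counterpart, with a renderer argument selecting the WebGL-backed path (assuming a WebGL-capable browser):

<pre>
// size(..., OPENGL) selects the WebGL-backed 3D renderer.
void setup() {
  size(300, 300, OPENGL);
}

void draw() {
  background(0);
  lights();                        // uses the lighting shader path
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);
  box(100);                        // a lit shape
}
</pre>
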
===Differences between OpenGL and WebGL===

The porting of Processing (which uses OpenGL /*1.x?? if it was opengl 2.0 it would have been even easier right?*/) was simplified because the WebGL interface is similar to that of OpenGL, but there are a number of differences between the interfaces. The single largest difference between WebGL and OpenGL 1.x is that, like OpenGL ES 2.0, the fixed-function pipeline has been removed. Because of this, user-defined vertex and fragment shaders were necessary for lighting operations. Since some shapes in Processing are lit and others are not, multiple shaders were written: one shader exists for lit objects such as boxes and spheres, and another, less complex shader was written for unlit objects such as lines and points.

The following vertex and fragment shaders are used for rendering unlit shapes specified with begin/end function calls.

vertex shader:
<pre>
"varying vec4 vFrontColor;" +

"attribute vec3 aVertex;" +
"attribute vec4 aColor;" +

"uniform mat4 uView;" +
"uniform mat4 uProjection;" +

"void main(void) {" +
"  vFrontColor = aColor;" +
"  gl_Position = uProjection * uView * vec4(aVertex, 1.0);" +
"}";
</pre>
fragment shader:
<pre>
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +

"varying vec4 vFrontColor;" +

"void main(void){" +
"  gl_FragColor = vFrontColor;" +
"}";
</pre>

===Typed Arrays===

Performance is always a concern when rendering 3D content, so a faster alternative was needed to JavaScript's inherently slow array types. Because of this, typed arrays were incorporated into pre-release versions of WebGL-capable browsers. Unlike regular arrays, which can contain values of different types such as strings, numbers and objects, typed arrays can contain only one type and cannot be dynamically resized. Some of these types include Float32Array, Int32Array, Uint16Array and Uint8Array. These types provide a significant performance increase when manipulating arrays.

<table border="1">
<tr>
<td>Operation</td>
<td>Array</td>
<td>Float32Array</td>
</tr>
<tr>
<td>Write</td>
<td>8947</td>
<td>1455</td>
</tr>
<tr>
<td>Read</td>
<td>1948</td>
<td>1109</td>
</tr>
<tr>
<td>Loop-copy</td>
<td>&gt;10,000</td>
<td>1969</td>
</tr>
<tr>
<td>Slice-copy</td>
<td>1125</td>
<td>503</td>
</tr>
</table>

Benchmark times on Win7 64-bit, 4GB RAM, dual-core 1.30GHz Intel U7300. Source: Alistair MacDonald, [http://weblog.bocoup.com/javascript-typed-arrays JavaScript Typed Arrays].

Because typed arrays are currently only available in pre-release browsers, they cannot yet be used in 2D sketches. Once they are implemented in release browsers, a significant amount of the Processing.js code base can make use of these structures, increasing performance throughout the library. /* andor, mike said its in... is it???*/

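A micro-benchmark in the spirit of the table above can be written in a few lines of JavaScript (illustrative; absolute numbers depend heavily on browser and hardware):

<pre>
// Time n writes into a plain Array vs. a Float32Array.
function timeWrites(arr, n) {
  var start = new Date().getTime();
  for (var i = 0; i < n; i++) {
    arr[i % arr.length] = i;
  }
  return new Date().getTime() - start;
}

var plainMs = timeWrites(new Array(1000), 5000000);
var typedMs = (typeof Float32Array !== "undefined") ?
              timeWrites(new Float32Array(1000), 5000000) : -1;
</pre>
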
==Conclusion==

==References==

=Notes=

==Introduction==

Data visualization in a webpage, beyond images, has typically required some sort of plug-in. However, due to security concerns and general wariness toward plug-ins, they are not the most effective means of delivering content. Furthermore, there are platforms where a plug-in does not exist or cannot exist. Even Flash, one of the most ubiquitous visual environments, is not available on every platform. The only real solution for web delivery of rich graphics is to integrate it into native browser technology.

The HTML <canvas> element allows the programmatic delivery of graphics in a web page without plug-ins. With its inclusion in the soon-to-be-released IE 9, the <canvas> element now represents a means to deliver graphical content in all the major browsers. The typical way to draw within a canvas is to use JavaScript, but for artists, educators, and other people less familiar with JavaScript, learning to do this can be a barrier to entry.

The Processing language, introduced by Ben Fry and Casey Reas, is a simple and elegant language for data visualization that is already used by artists, educators, and commercial media to deliver rich graphical content called sketches. There is a large body of work around the world that has been developed using Processing. However, this work is largely not delivered through web pages, because Processing was originally developed in Java, and delivering Processing sketches therefore required that the user install a Java plug-in. Furthermore, the sketches themselves were self-contained items as opposed to being part of a web page. That is, the elements of the Document Object Model (DOM) of a webpage could not interact with a sketch, or vice versa. Thus, while it was possible to deliver visual content, it would be difficult to create Processing sketches that take full advantage of modern web services such as Flickr or Twitter.

Processing.js is an open source, cross-browser JavaScript port of the Processing language. It uses the canvas element for rendering and does not require any plug-ins. However, Processing.js is more than just a Processing parser written in JavaScript. It also enables the embedding of other web technologies into Processing sketches. This extension allows for a new set of visualizations that were previously not possible. Processing.js seamlessly integrates web technologies with the Processing language to provide an accessible framework for multimedia web applications.

==Background==

The Processing.js project was started by John Resig, who wanted to utilize the HTML5 canvas element and take advantage of the Java-based Processing language. It took about seven months to get a working version, consisting of 5000 lines of code, but it was not a complete port of the Processing language. The project, like other open source products, was released with the hope that a developer community would converge around it and contribute to development.

"The Mozilla experience however, suggests that proprietary products may not be well-suited to distributed development if they have tightly-coupled architectures. There is a need to create an “architecture for participation,” one that promotes ease of understanding by limiting module size, and ease of contribution " - (MacCormack, Rusnak and Baldwin 2004).
 +
 
 +
In September 2009, the work to complete the Processing port to JavaScript began. In order to facilitate an architecture for participation, a number of things needed to happen. First and foremost, the source code had to be readily available. Secondly, the inner workings of the project and the missing functionality had to be publicized and a dialog started. To this end, the source code was made publicly available on GitHub, and an issue tracking system was used to manage the large number of issues that needed to be resolved in order to complete the port. A review process was set up to ensure that submitted code was of sufficient quality.

==DOM Integration?? (need a better header)==

Processing.js is more than just a Processing parser written in JavaScript. It is designed in a way that connects the Processing language (also known as P5) with web technologies such as JavaScript, the HTML5 canvas element, jQuery, and various web services. Furthermore, Processing.js is built in such a way as to allow easy integration of new technologies as they emerge.

The original Processing language is Java-based. To run a Processing sketch in a web page, the Java code has to be completely converted into JavaScript. While syntactically JavaScript and Java are fairly similar, there are some fundamental differences that have made this conversion challenging. The first is that we wanted to do the conversion dynamically, in real time. The code produced by the converter needed to be fully object oriented, and we had to provide support for all native Java functions and objects (such as Strings) that are supported by Processing. We also had to take into account the differences between working with web resources and local resources. Furthermore, we had to consider how to handle some fundamental differences between Java and JavaScript, such as typed vs. typeless variables, function overloading, and variable name overloading.

From its inception, Processing.js was designed to be more than just a rewrite of the Java functions provided by Processing in JavaScript. John Resig wrote the original Processing.js parser to scan a Processing sketch for hints of Java code and convert that code to JavaScript. However, if the parser encountered JavaScript code, it would leave that code intact. This method allowed not only for the conversion of existing Processing code to JavaScript but also for the injection of JavaScript into Processing sketches. This simple idea means that, within a Processing sketch, Java and JavaScript code can exist together without any need to declare which language is being used.

==3D support==

The introduction of the <canvas> tag into the HTML5 specification allowed Processing to be ported to JavaScript, thus enabling users to run 2D sketches within the browser without additional plug-ins. At the time porting began, there was no plug-in free method of delivering 3D content, which limited Processing.js to its 2D functions. WebGL, a JavaScript API based on OpenGL ES 2.0, is now being implemented by Firefox, Chrome and Safari, and it is now a viable candidate for use in Processing.js to render 3D sketches. Additionally, since WebGL closely matches the OpenGL used by Processing, it substantially aided the porting process.

===Differences===

The porting of Processing (which uses OpenGL) was simplified because the WebGL interface is similar to that of OpenGL, but there are a number of differences between the interfaces. Arguably, the single largest difference between WebGL and OpenGL is that, like OpenGL ES 2.0, the fixed-function pipeline has been removed. Because of this, not all Processing source code could be ported directly. Instead, user-defined vertex and fragment shaders had to be written for lighting operations. Since some shapes in Processing are lit and others are not, multiple shaders were written: one shader exists for lit objects such as boxes and spheres, and another, less complex shader was written for unlit objects such as lines and points.

The following vertex and fragment shaders are used for rendering unlit shapes specified with begin/end function calls.

vertex shader:
<pre>
"varying vec4 vFrontColor;" +

"attribute vec3 aVertex;" +
"attribute vec4 aColor;" +

"uniform mat4 uView;" +
"uniform mat4 uProjection;" +

"void main(void) {" +
"  vFrontColor = aColor;" +
"  gl_Position = uProjection * uView * vec4(aVertex, 1.0);" +
"}";
</pre>
fragment shader:
<pre>
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +

"varying vec4 vFrontColor;" +

"void main(void){" +
"  gl_FragColor = vFrontColor;" +
"}";
</pre>

Examining the shaders reveals some of the idiosyncrasies of WebGL. The gl_Color keyword is considered invalid; instead, users must create their own varying vector. Furthermore, a preprocessor statement setting float types to use high precision is also required. These are examples of changes to the specification which were introduced over time.

===Typed Arrays===

Performance is always a concern when rendering 3D content, so a faster alternative was needed to JavaScript's inherently slow array types. Because of this, typed arrays were incorporated into pre-release versions of WebGL-capable browsers. Unlike regular arrays, which can contain values of different types such as strings, numbers and objects, typed arrays can contain only one type and cannot be dynamically resized. Some of these types include Float32Array, Int32Array, Uint16Array and Uint8Array. These types provide a significant performance increase when manipulating arrays.

<table border="1">
<tr>
<td>Operation</td>
<td>Array</td>
<td>Float32Array</td>
</tr>
<tr>
<td>Write</td>
<td>8947</td>
<td>1455</td>
</tr>
<tr>
<td>Read</td>
<td>1948</td>
<td>1109</td>
</tr>
<tr>
<td>Loop-copy</td>
<td>&gt;10,000</td>
<td>1969</td>
</tr>
<tr>
<td>Slice-copy</td>
<td>1125</td>
<td>503</td>
</tr>
</table>

Benchmark times on Win7 64-bit, 4GB RAM, dual-core 1.30GHz Intel U7300. Source: Alistair MacDonald, [http://weblog.bocoup.com/javascript-typed-arrays JavaScript Typed Arrays].

Because typed arrays are currently only available in pre-release browsers, they cannot yet be used in 2D sketches. Once they are implemented in release browsers, a significant amount of the Processing.js code base can make use of these structures, increasing performance throughout the library.

===Specification Changes and Browser Inconsistencies===

As the specification is concurrently implemented in different browsers, several inconsistencies between browsers have appeared. These range from minor issues, such as Minefield and Chrome/Chromium returning "function" while WebKit returns "object" when the type of a typed array is queried, to differences in the way WebGL's readPixels() function is implemented. That function isn't used extensively in the library itself, but it is used in the Processing.js reference testing framework.

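For reference, the call the tests depend on has this shape (a sketch, assuming gl is an already-created WebGL context and the eventually standardized form of the interface):

<pre>
// Read the rendered canvas back into a typed array for pixel-accurate
// comparison against a reference image. Assumes gl is a WebGL context.
var pixels = new Uint8Array(canvas.width * canvas.height * 4);
gl.readPixels(0, 0, canvas.width, canvas.height,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);
</pre>
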
===Problems===

WebGL provides a close match to OpenGL for incorporating 3D into Processing.js, but it does present some issues when porting code. There are interface differences, changes to the interface are common, and some functionality, such as point smoothing, isn't available at all.

==Browser Unification==

One important feature provided by Processing.js is that it hides the differences between browsers. Web standards are often loosely defined, and thus variations can exist. These variations exist not only between different browser vendors but even between versions of the same browser on different platforms. Something as simple as key events can vary widely between browsers. Processing.js hides all these intricacies from the user, keeping things simple for content creators.

/*Above this line is our final draft, below this line is the original writeups*/

/* ToDo: Rewrite as game paper, conclusion, references, demos, video editing*/

/* Mike and Andor... so does pjs use typed arrays for 2D if available? or just 3D?*/

One thing the web is known for is innovation. This is the case for Processing.js and many of the browsers on which the library is used. With innovation come differences in implementation. Each browser handles key strokes and other web events differently. This is due to a somewhat lenient standardization that mostly just ensures that certain events exist; it does not prevent browser vendors from customizing and creating their own unique events, since doing so would stifle innovation.

Developers need to make sure that their creation handles the necessary differences between all browsers. We ensured that this was done for Processing.js so that the functionality of the Processing language would be easily accessible on the open web. Processing.js does not only handle events; it takes those events and standardizes them to copy (or at the very least imitate) proper Processing behaviour. One of the biggest pieces of code in Processing.js that we worked on to unify the browsers involves key events.

Handling key events was a difficult task, because not only were there different browsers, but the functionality of those browsers varied across operating systems. We found glitches wherein Google Chrome was doing something entirely different on an Apple OS X system compared to Google Chrome on Ubuntu Linux. We opted for feature detection to handle specific bugs such as the aforementioned. It was the appropriate move compared to browser detection, which would have been more complicated and less manageable. Browser detection involves obtaining a specific string or phrase from the browser. However, this method is dangerous because we can never really predict what the extracted string will say: one version may say one thing, but the next update from the browser vendor may change the string entirely. If relied upon, it would break whole sections of code. Feature detection may still break if a feature is removed in the next update, but the advantage is that only that specific feature breaks, and the breakage can be easily pinpointed.

Key event feature detection turned out to be a daunting task. Generally, this wouldn't be so tough: it would involve just returning or modifying the key given by the stroke and the browser. With Processing, it involves running user-written functions when a key is pressed, held, or released. So we had to adapt the browser key strokes to run those functions when needed. This adaptation involved making sure that the keys were fired and re-fired properly, and it took a lot of testing and manipulating using a Processing IDE.

(figure/image of w3c keycode/charcode app comparing chrome and firefox, using the same key (a) - http://www.w3.org/2002/09/tests/keys.html)

As seen above (in Figure …), keyCode under the keypress column on Firefox fires a 0, whereas the same row and column on Chrome gives a 97, like the charCode. Re-firing of keys also differs: Chrome re-fires both the keydown and keypress events, while Firefox only re-fires the keypress. Manually adjusting and testing this was definitely a task. In the end, we managed to replicate the key strokes of Processing across different browsers while maintaining browser accessibility for artists and developers.

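A heavily simplified sketch of the kind of normalization involved (the real code handles many more cases and events):

<pre>
// Prefer charCode where the browser supplies a non-zero value on
// keypress (Chrome), otherwise fall back to keyCode (Firefox).
function charCodeFromEvent(e) {
  return (e.charCode) ? e.charCode : e.keyCode;
}
</pre>
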
Keys are not the only area where we've worked to ensure browser accessibility. Another example is the newly implemented typed arrays for JavaScript.

<pre>
// Typed Arrays: fallback to WebGL arrays or Native JS arrays if unavailable
function setupTypedArray(name, fallback) {
  // check if TypedArray exists
  // typeof on Minefield and Chrome return function, typeof on Webkit returns object.
  if (typeof this[name] !== "function" && typeof this[name] !== "object") {
    // nope.. check if WebGLArray exists
    if (typeof this[fallback] === "function") {
      this[name] = this[fallback];
    } else {
      // nope.. set as Native JS array
      this[name] = function(obj) {
        if (obj instanceof Array) {
          return obj;
        } else if (typeof obj === "number") {
          return new Array(obj);
        }
      };
    }
  }
}
</pre>

The code above shows feature detection for typed arrays. As seen in the comments, Minefield/Firefox and Chrome return "function" for the typeof of the object, while WebKit returns "object". In new technologies like this, and WebGL as another example, standardization is very new and limited, so browsers have a lot of wiggle room to customize. We, as developers of Processing.js, code it so that when other developers use our library, they do not have to worry about the differences and quirks of different browsers.

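The function would then be invoked once per array type. A hypothetical call, assuming the pre-standard WebGL array names, might look like:

<pre>
// Hypothetical usage: alias the standard name to the older WebGL name
// when only the latter exists, else to the plain-array shim above.
setupTypedArray("Float32Array", "WebGLFloatArray");
setupTypedArray("Uint8Array", "WebGLUnsignedByteArray");
</pre>
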
Resources:

http://www.w3.org/2002/09/tests/keys.html

http://www.quirksmode.org/

/*Above this line is our final draft, below this line is the original writeups*/

We could have done a straight-up JavaScript port of the Processing language, but that would mean all sketches written in Processing would need to be rewritten in JavaScript. This way, all previous Processing sketches can simply be dropped into the web, and they will work. We took this one step further, allowing both languages to mingle as one. When we parse the Java into JavaScript, we don't break previously existing JavaScript; this means you can add JavaScript right into the Java without having to declare that you are doing so. We simply ignore the JavaScript we encounter while parsing the Java, leaving it intact. Not only do we allow mingling of the two languages, which is unique and powerful in itself, but we also allow sketches to be written in pure JavaScript. The advantage of this is that we had a huge library of work to test against and draw from right from the beginning.

John Resig, the mastermind behind jQuery, is also the mastermind behind Processing.js. His initial work was to use regexes to scan the sketch source code for hints of Java, replace them with JavaScript, and leave all JavaScript intact. He started by taking a previously existing Processing sketch, adding functional support to make that one sketch work, and continued this one sketch at a time, creating missing functions as needed. He took advantage of the pre-existing library of sketches, so for each sketch he explicitly supported, he would be that much closer to implicitly supporting other sketches.

“In development I worked in a backwards manner. Instead of building the API up from the ground - I worked from the top, down, implementing enough of the API to get individual demos working.” - http://ejohn.org/blog/processingjs/

Scott Downe's work was mostly related to fixing bugs and removing the dangerous JavaScript with statement. Fixing bugs was a good place to start learning the code and getting his feet wet. The first bug he fixed was to make sure potential code contained in strings was not parsed. This was initially accomplished by masking all strings with keys and storing their values before the code was parsed, then restoring the unchanged strings via their keys after parsing. Other, smaller bugs were fixed until it became apparent that the use of the with statement meant we would fall off trace and wouldn't reach our full speed potential. The with statement was being used in two places: around the whole sketch, to load in the Processing library, and around internal functions, to load in method calls. We had to do this because of the differences in how Java and JavaScript call and access their object properties. JavaScript accesses all properties within an object with a dot, whether from inside or outside the object, whereas Java only needs a dot when a property is accessed from outside the object. Using with meant we could contain all Processing functions inside an object and not have to change how they are called inside the Java. This was the easiest and fastest way to do it, but it needed to be changed. Removing with meant prepending the owning object to all calls to the API and to internal object properties. So we needed to store a list of the existing properties for both the API and created objects, and when the parser finds a match, it prepends either "Processing" or "this" to the property. This worked, but was fragile; we were still using regexes, and applying them to the whole of the source, meaning each new regex risked matching code that was similar but different, potentially breaking previously working code we did not intend to touch. Despite working, this was a hack and a maintenance nightmare. We needed something better.

Notmasteryet rewrote the parser to convert the sketch into an abstract syntax tree, an abstract tree representation of blocks of code. By doing this, blocks can be precisely parsed without the worry of breaking, or parsing unintended things in unexpected ways. Regex is still used for each part, but it is now confined to specifically targeted, smaller chunks of code instead of the whole thing. This makes maintaining the code much easier, makes object inheritance easier, and makes JavaScript code included in a sketch more stable. In fact, since the abstract syntax tree's inclusion, we have found new bugs in the parser to be pretty much nonexistent.

Each of the above people contributed to object inheritance in some form or another, but I want to specifically touch on the challenges of inheritance. Object inheritance was much easier using with, because we could easily add the inherited properties to an object and, when a method was called, not worry about where it was being called from. Once with was removed, we had to maintain this data internally and be able to prepend the right object to the right method calls. This got significantly more complicated when you consider where things may be called from, including super constructors, and super methods calling methods from their parents, potentially chaining calls in the correct order. Because we have to store all created classes' methods at the time of parsing, we don't yet know whether another class will use one as a super class, so all classes and their properties must be stored, so that later we can prepend the correct object to the correct calls in a complex chain of limitless inherited calls. This was buggy and fragile code that took a while to get right, but Notmasteryet's work helped a ton in this area, and it is something we are quite proud of.

/* future work or things to watch if using pjs below*/

Some of the differences between Java and JavaScript presented unique challenges, some of which are still unsolved. At the time of parsing, we treat the code as pure text, so we cannot validate any of the data referenced in the code. When an image is to be loaded in the code, the client now has to download that image from the server; this is a problem native Processing does not have. An image may therefore not be available when needed, and fetching that data at parse time is not reliable: we need to know about it before we parse. We solved this by adding a directive at the top of the code that defines all images that need to be preloaded, so we can parse the directive first, then convert the code to JavaScript, then run it, safely knowing images will be ready to use at run time. Java supports overloading, in that its functions are uniquely identified by their name, return type, and parameters, which together make up a function's signature. ( - source this ) JavaScript only uses the function name as its signature, presenting another unique problem. We can merge all overloaded functions into one and check the number of arguments passed in to know which block to call. This check happens at run time, not at compile time as Java would do it. However, we currently do not reliably check the types of the arguments passed in, so this will break if a function has two versions, the first accepting a single string as its only argument and the second accepting a single number as its only argument. Similarly, if we have a variable using the same name as a function, called variable name overloading, it will break in the same way. This is because Java considers these to be different things, while JavaScript considers a function to be a variable of a different type sharing the same name.

“In order to support this there would have to be considerable overhead - and it's generally not a good practice to begin with.” - http://ejohn.org/blog/processingjs/

Another interesting difference stems from Java being a typed language and JavaScript being typeless. Java requires casting in most cases, whereas in JavaScript we can simply throw the cast away for all literal variable types. The problem arises when the type is something like a double or a char, which in JavaScript are simply a number or a string. ( source this? ) We solved this for chars with a custom char class. It solved a lot of the issues we were having, but it is not perfect, as it does not solve all issues in all cases. Some other types, like double and byte, will require more overhead and will not be possible without complete type tracking.

==Demos==

===Image manipulation===

Processing.js includes full support for pixel and color manipulation of images on the canvas element. Images can be resized, tinted, blended, copied, or have filters and masks applied to them. Images can also be manipulated at the pixel level, allowing for any level of image manipulation required. Images can also be created and filled from pieces of other images or from the current canvas content, or have their pixels filled dynamically. This functionality allows images to be created from external data that is passed into the Processing sketch and visualized through code.

(figures: copying pieces of an image; blending regions of an image with different modes; different types of filters applied to an image; resizing an image)

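A small sketch of pixel-level manipulation (illustrative; it assumes photo.jpg is preloaded with the @pjs directive described in the next section):

<pre>
/* @pjs preload="photo.jpg"; */
void setup() {
  size(200, 200);
  PImage img = loadImage("photo.jpg");
  img.loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    color c = img.pixels[i];
    // invert each pixel as a simple example of per-pixel access
    img.pixels[i] = color(255 - red(c), 255 - green(c), 255 - blue(c));
  }
  img.updatePixels();
  image(img, 0, 0);
}
</pre>
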
===Pjs directives===

In order for Processing.js to closely match the functionality of the native Processing language, some custom flags had to be created to make the library behave like the native language. Pjs directives are a set of commands, embedded in a multiline comment at the top of a sketch, that control a few aspects of how the sketch will work. Placing the directives in a multiline comment preserves backwards compatibility with native Processing, so sketches written for Processing.js can still run on the native Java-based Processing platform. There are currently three Processing.js directives. These directives add the ability to preload images before the sketch begins to run, and to toggle transparent backgrounds and anti-aliasing of lines.

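For example, the preload directive looks like this (a sketch; the other directives follow the same comment-embedded syntax):

<pre>
/* @pjs preload="logo.png"; */

void setup() {
  size(200, 200);
}

void draw() {
  image(loadImage("logo.png"), 0, 0); // safe: fetched before setup() ran
}
</pre>
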
==Community and Collaboration==

Society has a vital interest in encouraging and rewarding innovation. Presently, there are two major models characterizing how this may be done: the “private investment” model and the “collective action” model (von Hippel and von Krogh 2003). Von Hippel and von Krogh go on to say that the private investment model assumes private returns to the innovator resulting from private goods and an efficient rule of intellectual property protection, whereas the collective action model assumes collaboration from multiple innovators resulting in a public good that can be accessed by anyone.

The phenomenon of open source software development illustrates that, in order to solve a shared or personal technical problem, users program and reveal their innovations without getting private returns from selling the software. The source code of open source software is made freely available so that users can access, modify, and redistribute it (Shuo 2010). Open source projects are released under the terms and requirements of certain licenses.

The Processing.js project was started by one individual who wanted to utilize the HTML5 canvas element and take advantage of the Java-based Processing language. It took about seven months to get a working version of the project released, consisting of 5000 lines of code. However, the part of the project that allowed for dynamic conversion of code written in the Processing language to JavaScript, referred to as the parser, was limiting. Moreover, the release contained a lot of gaps, as some of the functionality was not yet supported (Resig 2008).

The project, like other open source products, was released with the hope that a developer community would converge around it and contribute to development. The Mozilla experience, however, suggests that proprietary products may not be well-suited to distributed development if they have tightly-coupled architectures: there is a need to create an “architecture for participation,” one that promotes ease of understanding by limiting module size, and ease of contribution (MacCormack, Rusnak and Baldwin 2004). In order to facilitate an architecture for participation, a number of things needed to happen. First and foremost, the source code must be readily available. Secondly, the inner workings of the project and the missing functionality must be publicized and a dialog started.

A Git repository was started to allow contributors and users easy access to the project's source code. Git is an extremely fast, efficient, distributed version control system ideal for the collaborative development of software. The repository is hosted by GitHub, which provides an online way of collaborating with others and forking repositories (GitHub Social Coding 2010). GitHub makes Open Source's fork-and-extend legal capability a practical reality (Walsh 2009). This promotes a pressure-free environment where any contributor can alter the code of their own repository without worrying about their coding style or syntax.

To raise awareness and encourage dialog, both a project website and an online discussion channel were created. The website consisted of tutorials that allowed novice users to quickly pick up the project, demonstrations of previous Java Processing examples that had been ported to Processing.js, and a list of features that were not yet supported. Furthermore, an Internet Relay Chat (IRC) channel was created to allow for general discussion of the project, as well as a Google Group to facilitate discussion for those unfamiliar with IRC.

The project grew and attracted numerous contributors. However, as Behlendorf (1999) stated, “essential to the health of an open-source project is that the project have sufficient momentum to be able to evolve and respond to new challenges. Nothing is static in the software world, and each major component requires maintenance and new enhancements continually”. To support the growth of the project, Lighthouse, an online issue tracking system, was put in place. Lighthouse allows anyone to create tickets related to the project. A ticket may have many purposes, including reporting a bug in the current code, requesting a new feature, or simply starting a discussion. A major advantage of using Lighthouse is the ability to plan milestones and allow users to see which features and bug fixes will be available in the next release, not to mention the tracking of past discussions, from which novice users and new contributors can learn. Of course, an issue tracking system is not all the project needed to succeed. In September of 2009, ten students from Canada's Seneca College joined the project with the hope of releasing a 1.0 version – the project's first stable release. The introduction of new contributors was vital to the health of the project. As identified by Liu et al. (2010), a high turnover rate of developers is common in an open source project, but it also proves to be very challenging. With a dedicated team that included a release engineer, it became possible to have frequent releases of the project and an up-to-date project repository. However, it also brought to life another well-known problem often found in open source projects: bad code quality.

A 2008 study by Koch and Neumann that analyzed the impact on quality and design associated with the number of contributors and the amount of their work yielded the following conclusion: “We identify the number of commits, the number of distinct programmers, and the active time as factors of influence which have a negative effect on quality. In particular, complexity and size are negatively influenced by these process metrics. Furthermore a high concentration of added work fosters bad quality.” To ensure that all code patches met the coding standards and passed various tests, a two-step review process was put in place. The first step was a peer review that could be performed by virtually anyone, but was usually performed by another contributor. The second step was a super review that was performed only by contributors who had the appropriate status. In order to perform super reviews, a contributor must have a combination of the following: advanced JavaScript knowledge, thorough knowledge of the project and its components, and the ability to identify potential problems. In addition to this process, each release was thoroughly tested on all platforms and all supported browsers.

In December of 2010 the first stable version of Processing.js was released. Included in the release were over 1,000 bug fixes, features, and under-the-hood improvements. At the time, the project had twenty-six recorded code contributors, eleven of whom had super-reviewer status, at least twenty users logged in to the IRC channel at any given time, 608 members in the Google Group, and 99 forks of its repository.

==Scalable Vector Graphic Support==

Processing.js supports the two major systems for representing graphics: raster graphics and vector graphics. Raster graphics are images represented by an array of pixels, where each pixel is either an RGB value or an index into a list of colors. This series of pixels, or bitmap, is often stored in a compressed format such as JPEG, GIF, or PNG. Vector graphics, however, are objects rather than a series of pixels. They work by describing the grid points at which lines or curves are to be drawn. Some people describe vector graphics as a set of instructions for a drawing, while bitmap (raster) graphics are points of color in specific places (Eisenberg 2002). Vector graphics have a significant advantage over raster graphics in that they are scalable: they can be scaled to any size without loss of image quality. SVG, which stands for Scalable Vector Graphics, is a language that describes 2D graphics (straight lines or curves), expressed in mathematical relations, in XML. Processing.js supports basic SVG shapes, path parsing, transformations and style, as well as shape reusability.

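Using an SVG file from a sketch then looks roughly as follows (illustrative; assumes drawing.svg is served alongside the page):

<pre>
PShape s;

void setup() {
  size(200, 200);
  s = loadShape("drawing.svg"); // parse the SVG into a reusable shape
}

void draw() {
  background(255);
  shape(s, 25, 25, 150, 150);   // drawn at any size without quality loss
}
</pre>
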
Basic SVG shapes include the line, circle, ellipse, rectangle, polygon and polyline. As mentioned above, the SVG language provides instructions for drawing each shape. The attributes of the circle include the center x-coordinate, the center y-coordinate, and the radius. The coordinate (0, 0) represents the upper left corner of the picture; y-coordinates increase as you move vertically downwards, and x-coordinates increase as you move horizontally to the right.

Paths represent the outline of a shape, which can be filled, stroked, used as a clipping path, or any combination of the three. A path is described using the concept of a current point. In an analogy with drawing on paper, the current point can be thought of as the location of the pen: the position of the pen can be changed, and the outline of a shape (open or closed) can be traced by dragging the pen in either straight lines or curves. Paths represent the geometry of the outline of an object, defined in terms of moveto (set a new current point), lineto (draw a straight line), curveto (draw a curve using a cubic Bézier), arc (elliptical or circular arc) and closepath (close the current shape by drawing a line to the last moveto) elements. Compound paths (i.e., a path with multiple subpaths) are possible, to allow effects such as "donut holes" in objects (Paths 2010). Table 1.1 illustrates the different commands that can appear inside a path. Uppercase commands use absolute coordinates; lowercase commands use relative coordinates.

Path commands:

<table border="1">
<tr>
<td>Command</td>
<td>Arguments</td>
<td>Effect</td>
</tr>
<tr>
<td>M, m</td>
<td>x y</td>
<td>Move to the given coordinates.</td>
</tr>
<tr>
<td>L, l</td>
<td>x y</td>
<td>Draw a line to the given coordinates.</td>
</tr>
<tr>
<td>H, h</td>
<td>x</td>
<td>Draw a horizontal line to the given x-coordinate.</td>
</tr>
<tr>
<td>V, v</td>
<td>y</td>
<td>Draw a vertical line to the given y-coordinate.</td>
</tr>
<tr>
<td>A, a</td>
<td>rx ry x-axis-rotation large-arc sweep x y</td>
<td>Draw an elliptical arc from the current point to (x, y). The points are on an ellipse with x-radius rx and y-radius ry. The ellipse is rotated x-axis-rotation degrees. If the arc is less than 180 degrees, large-arc is zero; if greater than 180 degrees, large-arc is one. If the arc is to be drawn in the positive direction, sweep is one; otherwise it is zero.</td>
</tr>
<tr>
<td>Q, q</td>
<td>x1 y1 x y</td>
<td>Draw a quadratic Bézier curve from the current point to (x, y) using control point (x1, y1).</td>
</tr>
<tr>
<td>T, t</td>
<td>x y</td>
<td>Draw a quadratic Bézier curve from the current point to (x, y). The control point will be the reflection of the previous Q command's control point. If there is no previous curve, the current point will be used as the control point.</td>
</tr>
<tr>
<td>C, c</td>
<td>x1 y1 x2 y2 x y</td>
<td>Draw a cubic Bézier curve from the current point to (x, y), using (x1, y1) as the control point for the beginning of the curve and (x2, y2) as the control point for the end of the curve.</td>
</tr>
<tr>
<td>S, s</td>
<td>x2 y2 x y</td>
<td>Draw a cubic Bézier curve from the current point to (x, y), using (x2, y2) as the control point for this new endpoint. The first control point will be the reflection of the previous C command's ending control point. If there is no previous curve, the current point will be used as the first control point.</td>
</tr>
</table>

Table 1.1 Source: (Eisenberg 2002)

Transformations and styles can be applied to all elements in the SVG language. In order to change the placing of a particular shape a transformation can be applied. Moreover, to change a shape’s look a style attribute can be applied. Processingjs supports six transformations: matrix, translate, scale, rotate, skewX, and skewY. A matrix transformation specifies a transformation in the form of a transformation matrix of six values. Translate moves the shape to the x and y values provided. Scale increases or decreases the size of the shape. The rotate transformation rotates the shape either by its coordinates. You may supply multipleorigin or by a specific point. SkewX skews all x-coordinates by a specified angle. Visually, this makes vertical lines appear at an angle. Lastly, skewY skews all y-coordinates by a specified angle. This makes horizontal lines appear to be at an angle. One can apply multiple transformations to any shape. Styles that can be applied include opacity, fill, fill opacity, stroke, stroke weight, and stroke opacity.
Processing.js' class structure enables shape reusability. Each shape or group of shapes has its own properties and can be recreated without the underlying SVG source.
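
As a sketch of what this enables (assuming Processing's loadShape/shape API as exposed by Processing.js for SVG files; "logo.svg" is a hypothetical file name):

<pre>
// Load an SVG once, then reuse the resulting shape object at several
// positions and sizes without touching the underlying SVG markup again.
var logo;
function setup() {
  size(400, 200);
  logo = loadShape("logo.svg");
}
function draw() {
  shape(logo, 10, 10, 80, 80);     // original proportions, small
  shape(logo, 150, 10, 160, 160);  // same shape, scaled up
}
</pre>
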
Bibliography

Eisenberg, David J. SVG Essentials. Sebastopol: O'Reilly & Associates, Inc., 2002.

"Paths." SVG 1.1 (Second Edition). June 22, 2010. http://www.w3.org/TR/SVG/paths.html#Introduction (accessed Dec 2010).

==DOM integration==

Merging technologies

Processing.js helps merge multiple new and emerging HTML5 technologies together to make design and production for the web easier.  Processing.js connects the processing language with web technologies such as WebGL, JavaScript, and the HTML5 canvas element.  More importantly the library is built in such a way as to allow new technologies to be added in at a later date and for the scope of the library to change as new technologies evolve.  In the future, other technologies such as 3D audio, controller inputs, and HTML5 video integration could be added to the library to allow Processing sketches to integrate with them.

-WebGL integration example paragraph-

Image manipulation

Processing.js includes full support for pixel and color manipulation of images on the canvas element. Images can be resized, tinted, blended, copied, or have filters and masks applied to them. Images can also be manipulated at the pixel level, allowing for any degree of image manipulation required. Images can be created and filled from pieces of other images, from the current canvas content, or by filling their pixels dynamically. This functionality allows images to be created from external data that is passed into the Processing sketch and visualized through code, as sketched after the figure captions below.

* copying pieces of an image
* blending regions of an image with different modes
* different types of filters applied to an image
* resizing an image

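A hedged sketch of the image API described above (loadImage, tint, image, filter and get are standard Processing calls; "photo.jpg" is a hypothetical file, which in Processing.js would normally be named in a preload directive):

<pre>
// Load an image, draw it tinted, blur the canvas, then read a pixel back.
var img;
function setup() {
  size(320, 240);
  img = loadImage("photo.jpg");
  tint(255, 127);        // draw the image at half opacity
  image(img, 0, 0);
  filter(BLUR, 2);       // blur what has been drawn to the canvas
  var c = get(10, 10);   // pixel-level read of the canvas
}
</pre>
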
Pjs directives

In order for Processing.js to closely match the functionality of the native Processing language, some custom flags had to be created to make the library behave like the native language. Pjs directives are a set of commands, embedded in a multiline comment at the top of the sketch, that control a few aspects of how the sketch will work. Placing the directives in a multiline comment preserves backwards compatibility with native Processing, so that sketches written for Processing.js can still be run on the native Java-based Processing platform. There are currently three Processing.js directives. These directives add the ability to preload images before the sketch begins to run, and to toggle transparent backgrounds and anti-aliasing of lines.
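
For example, a sketch might begin with a directive block like the following (a hedged example: the directive names shown follow the three features described above, and "moon.jpg" is a hypothetical image):

<pre>
/* @pjs preload="moon.jpg"; transparent=true; crisp=true; */
// Because the directives sit inside an ordinary multiline comment, the
// native Java-based Processing environment simply ignores them.
function setup() {
  size(200, 200);
}
</pre>
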
==WebGL section==

[images here]

Andor Salga

'''WebGL Introduction'''<br />
The introduction of the <canvas> tag into the HTML5 specification allowed Processing to be ported to JavaScript, thus enabling users to run 2D sketches within the browser without additional plug-ins. At the time when porting began, there still was no plug-in free method of delivering 3D content. This limited Processing.js to 2D until WebGL was introduced. Once WebGL was implemented on pre-release versions of Firefox, Safari and Chrome, it became a viable candidate for use in Processing.js to render 3D sketches.  Additionally, since WebGL closely matches OpenGL which is used by Processing, it substantially aided the porting process.

WebGL first began as an experimental add-on for Firefox developed at Mozilla. It was later adopted by the Khronos Group, who manage the OpenGL specifications. It is a JavaScript API which provides a subset of the functionality of OpenGL ES 2.0. The interface is relatively simple, yet it still provides enough functionality to emulate almost all of Processing's 3D functions. WebGL continues to go through interface changes and revisions.
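
For context, this is how a script of the era would obtain and use a WebGL context (a hedged sketch: the canvas id is hypothetical, and pre-release browsers exposed the context under the vendor name "experimental-webgl"):

<pre>
// Obtain a 3D rendering context from a canvas element and clear it.
var canvas = document.getElementById("sketch");
var gl = canvas.getContext("experimental-webgl");
gl.clearColor(0.0, 0.0, 0.0, 1.0);  // opaque black
gl.clear(gl.COLOR_BUFFER_BIT);      // clear the color buffer
</pre>
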
'''Differences'''<br />
The porting of Processing (which uses OpenGL) was simplified because the WebGL interface is similar to that of OpenGL, but there are a number of differences between the interfaces. Arguably, the single largest difference between WebGL and OpenGL is that, as in OpenGL ES 2.0, the fixed-function pipeline has been removed. Because of this, not all Processing source code could be ported directly. Instead, user-defined vertex and fragment shaders had to be written for lighting operations. Since some shapes in Processing aren't lit, a few shaders were written: one shader exists for lit objects such as boxes and spheres, and another, less complex shader was written for unlit objects such as lines and points.

The following shaders are used for rendering unlit shapes specified with begin/end function calls.

vertex shader:
<pre>
"varying vec4 vFrontColor;" +
"attribute vec3 aVertex;" +
"attribute vec4 aColor;" +
"uniform mat4 uView;" +
"uniform mat4 uProjection;" +

"void main(void) {" +
"  vFrontColor = aColor;" +
"  gl_Position = uProjection * uView * vec4(aVertex, 1.0);" +
"}";
</pre>

fragment shader:
<pre>
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +

"varying vec4 vFrontColor;" +

"void main(void){" +
"  gl_FragColor = vFrontColor;" +
"}";
</pre>

Examining the shaders reveals some of the idiosyncrasies of WebGL. The gl_Color keyword is considered invalid; instead, users must create their own varying vector. Furthermore, a preprocessor statement to set float types to use high precision is also required. These are some examples of specification changes which were introduced over time.

'''Typed Arrays'''<br />
Performance is always a concern when rendering 3D content, so it was necessary to create a faster alternative to JavaScript's inherently slow array types. Because of this, typed arrays were incorporated into pre-release versions of WebGL browsers. Unlike regular arrays, which can contain mixed types such as strings, numbers and objects, typed arrays can only contain one type and cannot be dynamically resized. Some of these types include Float32Array, Int32Array, Uint16Array and Uint8Array. These types provide a significant performance increase when manipulating arrays.
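
A small example of the difference in kind (standard typed-array usage; the values are arbitrary):

<pre>
// A regular array may hold mixed types and can grow; a typed array holds
// one numeric type in a fixed-length buffer, which is what makes it fast.
var plain = [1, "two", { three: 3 }];  // resizable, mixed types
var fast = new Float32Array(3);        // fixed length, 32-bit floats only
fast[0] = 1.5;
fast[1] = 2.5;
fast[2] = fast[0] + fast[1];           // 4
</pre>
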
<table border="1">
<tr>
<td>Operation</td>
<td>Array</td>
<td>Float32Array</td>
</tr>
<tr>
<td>Write</td>
<td>8947</td>
<td>1455</td>
</tr>
<tr>
<td>Read</td>
<td>1948</td>
<td>1109</td>
</tr>
<tr>
<td>Loop-copy</td>
<td>&gt;10,000</td>
<td>1969</td>
</tr>
<tr>
<td>Slice-Copy</td>
<td>1125</td>
<td>503</td>
</tr>
</table>

Win7 64-bit, 4GB RAM, dual-core 1.30GHz Intel U7300 (citation needed)

Alistair MacDonald, [http://weblog.bocoup.com/javascript-typed-arrays link]

Because typed arrays are only available for pre-release browsers, they cannot currently be used in 2D sketches. Once they become implemented in browsers, a significant amount of the Processing.js code base can make use of these structures, increasing performance throughout the library.

'''Specification Changes and Browser Inconsistencies'''<br />
As the specification is concurrently implemented in different browsers, several inconsistencies between browsers have appeared. These range from minor issues, such as Minefield and Chrome/Chromium returning "function" while WebKit returns "object" when the type of a typed array is queried, to differences in the way WebGL's readPixels() function is implemented. readPixels() isn't used extensively in the library itself, but it is used in the Processing.js reference testing framework.

'''Problems'''<br />
WebGL provides a close match to OpenGL for incorporating 3D into Processing.js, but it does present some issues when porting code. There are interface differences, changes to the interface are common, and some functionality, such as point smoothing, isn't available at all.

==Js and Processing integration==

Processing is Java based, and in order to make it work on the web, it has to be completely converted into JavaScript. Syntactically, JavaScript and Java are actually quite similar, and work like this has been done before (for example, ports of a Java NES emulator to a JavaScript NES emulator). Our unique challenges were that we had to do the conversion dynamically, be fully object oriented, support all native Java functions that are supported by Processing, and handle all web-specific differences, such as images having to be preloaded before we can start processing the code, casting typeless variables, function overloading, and variable name overloading.


From its inception, Processing.js was designed to be more than just a rewrite of the Java functions provided by Processing in JavaScript. John Resig wrote the original Processing.js parser to scan a Processing sketch for hints of Java code and convert that code to JavaScript. However, if the parser encountered JavaScript code, it would leave the code intact. This method allowed not only for the conversion of existing Processing code to JavaScript but also for the injection of JavaScript into Processing sketches. By allowing JavaScript to exist within a Processing sketch intact, Java and JavaScript code can exist together without any need to declare the language you are using. Old sketches written for Processing will work, while new sketches written for Processing.js can not only contain Processing code but can also make use of JavaScript to interact with other elements of the webpage.
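
A hedged illustration of such a mixed-language sketch (the helper object and its values are hypothetical; frameCount, width and rect are standard Processing API):

<pre>
// The JavaScript object literal is left intact by the parser, while the
// Java-style draw() below it is converted to JavaScript.
var helper = { speed: 2 };  // plain JavaScript, untouched
void draw() {               // Java-style Processing code, converted
  rect((frameCount * helper.speed) % width, 40, 20, 20);
}
</pre>
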

JavaScript

When the original Processing language, also known as P5, was first developed, Java was supposed to become the language of the web, while JavaScript was a little toy language that many did not take seriously. However, as the web matured, JavaScript became the language of the web, but many of the misconceptions about it still persist. /*cite javascript the good parts here*/ With recent developments in JavaScript technology, JavaScript is now fast enough to handle the demands of realtime interactive web graphics.

Processing.js is more than just a Processing parser rewritten in JavaScript. It is designed in a way that connects the Processing language (also known as P5) with web technologies such as JavaScript, the HTML5 canvas element, jQuery, and various web services. Furthermore, Processing.js is built in such a way as to allow easy integration of new technologies as they emerge. It is designed to be fast and to take advantage of recent JavaScript developments to ensure that the platform is responsive.

While syntactically JavaScript and Java are fairly similar, there are some fundamental differences that have made this conversion challenging. The first is that we wanted to do the conversion dynamically, in real time. The code produced by the converter needed to be fully object oriented, and we had to provide support for all native Java functions and objects that are supported by Processing. We also had to take into account the differences between working with web resources and local resources. Furthermore, we had to consider how to handle some fundamental differences between Java and JavaScript, such as typed vs. typeless variables, function overloading, and variable name overloading.

The original code for Processing.js used regular expressions to convert Java into JavaScript as it was encountered. It did this by scanning for hints of Java code within the entire sketch and then replacing the Java code with its JavaScript equivalent. Due to the difference in how Java and JavaScript access object properties from methods inside an object, the with statement was used as a simple solution to avoid having to prepend all function calls with "this." or "Processing.". However, the use of the with statement also meant that the generated JavaScript would fall off trace /*cite trace paper here... do we need to talk about trace in the back ground section???*/, making the code run slower than it needed to in some browsers. Later, this method of scanning the entire sketch was replaced by the creation of an abstract syntax tree that broke the code into smaller pieces. Each piece then had the regular expressions applied to change it. This made it easier to apply the regular expressions correctly without accidentally converting code that was already working. It also made it easier to create proper inheritance structures and attach properties and methods to the correct object in the hierarchy chain, as smaller pieces of code were being converted at any one time.
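
As a rough illustration of the conversion (illustrative only, not the parser's actual output):

<pre>
// Original Java-style Processing source:
//   int x = 10;
//   void draw() { ellipse(x, 50, 20, 20); }
// After conversion, the API call is qualified explicitly rather than
// resolved through a with statement:
var x = 10;
this.draw = function() {
  this.ellipse(x, 50, 20, 20);
};
</pre>
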

Browser Unification

One important feature provided by Processing.js is that it hides the differences between browsers. Web standards are often loosely defined, and thus variations can exist. These variations not only exist between different browser vendors but can even exist between versions of the same browser on different platforms. Something as simple as key events can vary widely between browsers. Processing.js hides a large number of these differences from the user by creating a unified method of handling events. Regardless of the browser/platform, the functions for handling events within Processing.js are handled the same way.

Different browser makers are also at various stages of implementation for newer technologies. For example, WebGL provides typed arrays, which are much faster than traditional JavaScript arrays. While these typed arrays are implemented for WebGL, they can also be used outside of that context and can provide a tremendous speed improvement. However, not every browser supports WebGL at this time, so a fallback to regular JavaScript arrays is necessary if the feature does not exist.

By hiding these differences between browsers from the user, Processing.js provides a means for game developers to make games without worrying about the differences between browsers. If a feature exists that can make rendering smoother and faster, Processing.js will make use of it to increase performance. If it does not exist, a fallback mechanism allows the sketch to still run.


Conclusion

References

Notes



/*Above this line is our final draft, below this line is the original writeups*/

/* ToDo: Rewrite as game paper, conclusion, references, demos, video editing*/


/* Mike an Andor...so does pjs use typed arrays for 2D if available? or just 3D?*/

One thing the web is known for is innovation. This is the case for Processing.js and many of the browsers on which the library is used. With innovation come differences in implementation. Each browser handles key strokes and other web events differently. This is due to a somewhat lenient standardization that mostly just ensures that certain events exist; it does not prevent browser vendors from customizing and creating their own unique events, since preventing that would stifle innovation.

Developers need to make sure that their creation handles the necessary differences for all browsers. We ensured that this was done for Processing.js so that the functionality of the Processing language is easily accessible for the open web. Processing.js does not only handle events; it takes those events and standardizes them to copy (or at the very least imitate) the behaviour of a proper Processing compilation. One of the biggest pieces of code in Processing.js that we worked on to unify the browsers involves key events.

Handling key events was a difficult task because not only were there different browsers, but the functionality of those browsers varied with different operating systems. We found glitches wherein Google Chrome on an Apple OS X system did something entirely different from Google Chrome on a Linux Ubuntu system. We opted for feature detection to handle specific bugs such as these. It was the appropriate choice compared to browser detection, which would have been less manageable and more complicated. Browser detection involves obtaining a specific string or phrase from the browser. However, this method is dangerous because we can never really predict what the extracted string will say. One version may say one thing, but the next update from the browser vendor may change the string entirely; if relied upon, it would break whole sections of code. Feature detection may still break if the feature is removed in a later update, but the advantage is that only that specific feature within the code breaks, and the breakage can be easily pinpointed.

Key event feature detection turned out to be a daunting task. Generally this wouldn't be so tough: it would involve just returning or modifying the key given by the stroke and the browser. With Processing, it involves running user-written functions when a key is pressed, held or released. So we had to adapt the browser key strokes to run those functions when needed. This adaptation involved making sure that the keys were fired and re-fired properly, and it took a lot of testing and manipulation using a Processing IDE.

(figure/image of w3c keycode/charcode app comparing chrome and firefox, using the same key (a) - http://www.w3.org/2002/09/tests/keys.html)

As seen above (in Figure …), keyCode under the keypress column on Firefox fires a 0, whereas the same row and column on Chrome gives a 97, like the charCode. Re-firing of keys also differs: Chrome likes to re-fire both the keydown and keypress events, while Firefox only re-fires the keypress. Manually adjusting and testing this was definitely a task. In the end, we managed to replicate the key strokes of Processing across different browsers while maintaining browser accessibility for artists and developers.

Keys are not the only code we've worked on to ensure browser accessibility. Another example is the newly implemented typed arrays for JavaScript.

// Typed Arrays: fallback to WebGL arrays or Native JS arrays if unavailable

 function setupTypedArray(name, fallback) {
   // check if TypedArray exists
   // typeof on Minefield and Chrome return function, typeof on Webkit returns object.
   if (typeof this[name] !== "function" && typeof this[name] !== "object") {
     // nope.. check if WebGLArray exists
     if (typeof this[fallback] === "function") {
       this[name] = this[fallback];
     } else {
       // nope.. set as Native JS array
       this[name] = function(obj) {
         if (obj instanceof Array) {
           return obj;
         } else if (typeof obj === "number") {
           return new Array(obj);
         }
       };
     }
   }
 }

The code above shows feature detection for typed arrays. As seen in the comments, Minefield/Firefox and Chrome return "function" for the typeof of the object, while WebKit returns "object". In new technologies like this and WebGL, standardization is very new and limited, so browsers have lots of wiggle room to customize. We, as developers of Processing.js, code it so that when other developers use our library, they do not have to worry about the differences and quirks of different browsers.


Resources: http://www.w3.org/2002/09/tests/keys.html http://www.quirksmode.org/



We could have done a straightforward JavaScript port of the Processing language, but that would mean all sketches written in Processing would need to be rewritten in JavaScript. This way, all previous Processing sketches can simply be dropped into the web, and they will work. We took this one step further, allowing both languages to mingle as one. When we parse the Java into JavaScript, we don't break previously existing JavaScript; this means you can add JavaScript right into the Java, without having to declare that you are doing so. We simply ignore the JavaScript we encounter while parsing the Java, leaving it intact. Not only does this allow mingling of the two languages, which is unique and powerful in itself, but it also allows for sketches to be written in pure JavaScript. The advantage of this is that we had a huge library of work to test and draw from right from the beginning.

John Resig, the mastermind behind jQuery, is also the mastermind behind Processing.js. His initial work was to use regexes to scan the sketch source code for hints of Java, replace them with JavaScript, and leave all JavaScript intact. He started by taking a previously existing Processing sketch and adding functional support to make that one sketch work, proceeding one sketch at a time and creating missing functions as needed. He took advantage of the pre-existing library of sketches, so for each sketch he explicitly supported, he would be that much closer to implicitly supporting other sketches.

“In development I worked in a backwards manner. Instead of building the API up from the ground - I worked from the top, down, implementing enough of the API to get individual demos working.” -http://ejohn.org/blog/processingjs/

Scott Downe's work was mostly related to fixing bugs and removing the dangerous JavaScript with statement. Fixing bugs was a good place to start learning the code and getting his feet wet. The first bug he fixed was to make sure potential code contained in strings was not parsed. This was initially accomplished by masking all strings with a key, storing their values before the code was parsed, and later restoring the unchanged strings via their keys after parsing. Other, smaller bugs were fixed until it became apparent that the use of with meant we would fall off trace and wouldn't reach our full speed potential. With was being used in two places: first around the whole of the sketch, to load in the whole of the Processing library, and second to load in method calls from internal function use. We had to do this because of the differences in how Java and JavaScript call and access their object properties. JavaScript accesses all properties within the object itself, separated with a dot, whether from inside or outside the object, whereas Java only needs a dot when a property is accessed from outside the object. Using with meant we could contain all Processing functions inside an object and not have to change how they are called inside the Java. This was the easiest and fastest way to do it, but it needed to be changed. Removing with meant prepending the Processing object to all calls to the API and internal object properties. So we needed to store a list of the existing properties for both the API and created objects, and when the parser finds a match, prepend the right object, either "Processing" or "this", to the property. This worked, but was fragile; we were still using regexes, and applying them to the whole of the source, meaning each new regex we added was a danger to code that was similar but different, potentially breaking previously working code we did not intend to touch. Despite working, this was a hack and a maintenance nightmare. We needed something better.

Notmasteryet rewrote the parser to convert the sketch into an abstract syntax tree, which is an abstract tree representation of blocks of code. By doing this, blocks can be precisely parsed without the worry of breaking or parsing unintended things in an unexpected way. Regexes are still used for each part, but they are now contained to specifically targeted, smaller chunks of code instead of the whole thing. This makes maintaining the code much easier, makes object inheritance easier, and makes JavaScript code included in the sketch more stable. In fact, since the abstract syntax tree's inclusion, we have found new bugs in the parser to be pretty much non-existent.

Each of the above people contributed object inheritance in some form or another, but the challenges in inheritance deserve a specific mention. Object inheritance was much easier using with, because we could easily add the inherited properties to an object and, when they were called, not worry about where they were called from. With with removed, we had to maintain this data internally and be able to prepend the right object to the right method calls. This got significantly more complicated when you consider where things may be called from, including super constructors and super methods calling methods from their parents, potentially chaining calls in the correct order. Because we have to store all created classes' methods at the time of parsing, we don't yet know whether another class will use one as a super class, so all classes and their properties must be stored so that later we can prepend the correct object to the correct calls in a complex chain of limitless inherited calls. This was buggy and fragile code that took a while to get right, but Notmasteryet's work helped a ton in this area, and it is something we are quite proud of.


/* future work or things to watch if using pjs below*/

Some of the differences between Java and JavaScript presented unique challenges, some of which are still unsolved. Because at the time of parsing we are treating the code as pure text, we cannot validate any of the data referenced in the code. When an image is to be loaded in the code, the client now has to download that image from the server; this is a unique problem that Processing does not have. It means an image may not be available when needed, and getting that data directly from the source at parse time is not reliable; we would need to know about it before we parse. We solved this by adding a directive at the top of the code that defines all images needing to be preloaded, so we can parse the directive first, then convert the code to JavaScript, then run it, safely knowing images will be ready to use at run time. Java supports overloading, in that its functions are uniquely identified by their name, return type, and parameters, which together make up a function's signature. ( - source this ) JavaScript only holds the function name as its signature, presenting another unique problem. We can check the number of parameters in a function, merge all overloaded functions into one, and check the number of arguments passed in to know which block to call, as sketched in the example below. This check happens at run time, not at compile time as Java would do it. However, we currently do not reliably check the type of the arguments passed in, so this will break if a function has two versions, the first accepting a single string as its only argument and the second accepting a single number as its only argument. Similarly, if we have a variable using the same name as a function, called variable name overloading, we will break in the same way. This is because Java would consider these different things, while JavaScript considers a function to be a variable of a different type sharing the same space.

“In order to support this there would have to be considerable overhead - and it's generally not a good practice to begin with.” -http://ejohn.org/blog/processingjs/
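
A minimal sketch of the run-time dispatch just described (illustrative only, not the parser's actual output; the bodies are placeholders):

<pre>
// Two Java overloads merged into a single JavaScript function that
// dispatches on arguments.length at run time.
function f(a, b) {
  if (arguments.length === 1) {
    return a * 2;   // body of the one-argument version
  } else {
    return a + b;   // body of the two-argument version
  }
}

f(3);     // 6  -> one-argument version
f(3, 4);  // 7  -> two-argument version
</pre>
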

Another interesting difference stems from Java being a typed language and JavaScript being typeless. Java requires casting in most cases, whereas with JavaScript we can simply throw the cast away for all literal variable types. The problem is when the type is something like a double or a char, which in JavaScript is simply a number or a string. ( source this? ) We solved this for chars with a custom char class, sketched below; it solved a lot of the issues we were having, but it is not perfect, as it does not solve all issues in all cases. Some other types like double and byte will require more overhead and will not be possible without complete type tracking.
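
A minimal sketch of such a char wrapper (hypothetical and simplified; the real class handles more cases):

<pre>
// Store the character code so that comparisons and arithmetic behave more
// like Java's char than like a one-letter JavaScript string.
function Char(c) {
  this.code = (typeof c === "string") ? c.charCodeAt(0) : c;
}
Char.prototype.valueOf = function() { return this.code; };
Char.prototype.toString = function() { return String.fromCharCode(this.code); };

var a = new Char("a");
var b = new Char(a + 1);  // valueOf() makes a + 1 evaluate to 98
b.toString();             // "b"
</pre>
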


community and collaboration

Society has a vital interest in encouraging and rewarding innovation. Presently, there are two major models characterizing how this may be done: the "private investment" model and the "collective action" model (von Hippel and von Krogh 2003). Von Hippel and von Krogh go on to say that the private investment model assumes private returns to the innovator resulting from private goods and efficient rules of intellectual property protection, whereas the collective action model assumes collaboration among multiple innovators resulting in a public good that can be accessed by anyone.


The phenomenon of open source software development illustrates that in order to solve a shared or personal technical problem, users program and reveal their innovations without getting private returns from selling the software. The source code of open source software is made freely available so that users can access, modify, and redistribute it (Shuo July 2010). Open source projects are released under the terms and requirements of certain licenses.


The Processing.js project was started by one individual who wanted to utilize the HTML5 canvas element and take advantage of the Java-based Processing language. It took about seven months to release a working version of the project, consisting of 5000 lines of code. However, the part of the project that allowed for dynamic conversion of code written in the Processing language to JavaScript, referred to as the parser, was limiting. Moreover, the release contained a lot of gaps, as some of the functionality was not yet supported (Resig 2008).


The project, similarly to other open source products, was released with the hope that a developer community will converge around it and contribute to development. The Mozilla experience however, suggests that proprietary products may not be well-suited to distributed development if they have tightly-coupled architectures. There is a need to create an “architecture for participation,” one that promotes ease of understanding by limiting module size, and ease of contribution (MacCormack, Rusnak and Baldwin 2004). In order to facilitate an architecture for participation a number of things needed to happen. First and foremost the source code must be readily available. Secondly, the inner workings of the project and the missing functionality must be publicized and a dialog started.


A Git repository was started to allow contributors and users easy access to the project's source code. Git is an extremely fast, efficient, distributed version control system ideal for the collaborative development of software. The repository is hosted by GitHub, which provides an online way of collaborating with others and forking repositories (GitHub Social Coding 2010). GitHub makes open source's fork-and-extend legal capability a practical reality (Walsh 2009). This promotes a pressure-free environment where any contributor can alter the code of their own repository without worrying about their coding style or syntax.


To raise awareness and encourage dialog, both a project website and an online discussion channel were created. The website consisted of tutorials that allowed novice users to quickly pick up the project, demonstrations of previous Java Processing examples that had been ported to Processing.js, and a list of features that were not yet supported. Furthermore, an Internet Relay Chat (IRC) channel was created to allow for general discussions on the project, as well as a Google Group to facilitate discussions for those unfamiliar with IRC.


The project grew and attracted numerous contributors. However, as Behlendorf (1999) stated, "essential to the health of an open-source project is that the project have sufficient momentum to be able to evolve and respond to new challenges. Nothing is static in the software world, and each major component requires maintenance and new enhancements continually". To support the growth of the project, Lighthouse, an online issue tracking system, was put in place. Lighthouse allows anyone to create tickets related to the project. A ticket may serve many purposes, including reporting a bug in the current code, requesting a new feature, or simply starting a discussion. A major advantage of using Lighthouse is the ability to plan milestones and allow users to see which features and bug fixes will be available in the next release, not to mention the record of past discussions that novice users and new contributors can learn from. Of course, an issue tracking system is not all the project needed to succeed. In September of 2009, ten students from Canada's Seneca College joined the project with the hope of releasing a 1.0 version, the project's first stable release. The introduction of new contributors was vital to the health of the project. As identified by Liu et al. (2010), a high turnover rate of developers is common in an open source project, but it also proves to be very challenging. With a dedicated team that included a release engineer, it became possible to have frequent releases of the project and an up-to-date project repository. However, it also brought to light another well-known problem often found in open source projects: bad code quality.


A 2008 study by Koch and Neumann, which analyzed the impact on quality and design of the number of contributors and the amount of their work, reached the following conclusion: "We identify the number of commits, the number of distinct programmers, and the active time as factors of influence which have a negative effect on quality. In particular, complexity and size are negatively influenced by these process metrics. Furthermore a high concentration of added work fosters bad quality." To ensure that all code patches met the coding standards and passed various tests, a two-step review process was put in place. The first step was a peer review that could be performed by virtually anyone, but was usually performed by another contributor. The second step was a super review that could be performed only by contributors who had earned the appropriate status. To perform super reviews, a contributor needed a combination of advanced JavaScript knowledge, thorough knowledge of the project and its components, and the ability to identify potential problems. In addition to this process, each release was thoroughly tested on all platforms and all supported browsers.


In December of 2010 the first stable version of Processing.js was released. The release included over 1,000 bug fixes, features, and under-the-hood improvements. At the time the project had twenty-six recorded code contributors, eleven of whom had super-reviewer status; at least twenty users were logged in to the IRC channel at any given time; the Google Group had 608 members; and the repository had 99 forks.

Scalable Vector Graphics Support

Processing.js supports the two major systems for representing graphics: raster and vector. Raster graphics are images represented by an array of pixels, where each pixel is either an RGB value or an index into a list of colors. This series of pixels, or bitmap, is often stored in a compressed format such as JPEG, GIF, or PNG. Vector graphics, by contrast, are objects rather than a series of pixels: they describe the grid points at which lines or curves are to be drawn. Some people describe vector graphics as a set of instructions for a drawing, while bitmap (raster) graphics are points of color in specific places (Eisenberg 2002). Vector graphics have a significant advantage over raster graphics in that they are scalable: they can be drawn at any size without loss of image quality. SVG, which stands for Scalable Vector Graphics, is an XML language that describes 2D graphics (straight lines and curves) expressed as mathematical relations. Processing.js supports basic SVG shapes, path parsing, transformations and style, as well as shape reusability.


Basic SVG shapes include the line, circle, ellipse, rectangle, polygon, and polyline. As mentioned above, the SVG language provides instructions for drawing each shape. The attributes of a circle, for example, are its center x-coordinate, center y-coordinate, and radius. The coordinate (0, 0) represents the upper left corner of the picture; y-coordinates increase as you move vertically downwards, and x-coordinates increase as you move horizontally to the right.
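For illustration, a minimal pure-JavaScript sketch (the file name and canvas id are hypothetical) that loads an SVG file and draws it using the SVG support described here:

function sketchProc(p) {
  var bot;
  p.setup = function() {
    p.size(200, 200);
    bot = p.loadShape("bot.svg");    // parse the SVG into a reusable shape
  };
  p.draw = function() {
    p.background(255);
    p.shape(bot, 10, 10, 80, 80);    // draw it at (10, 10), scaled to 80 by 80
  };
}
new Processing(document.getElementById("sketch"), sketchProc);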


Paths represent the outline of a shape which can be filled, stroked, used as a clipping path, or any combination of the three. A path is described using the concept of a current point. In an analogy with drawing on paper, the current point can be thought of as the location of the pen. The position of the pen can be changed, and the outline of a shape (open or closed) can be traced by dragging the pen in either straight lines or curves. Paths represent the geometry of the outline of an object, defined in terms of moveto (set a new current point), lineto (draw a straight line), curveto (draw a curve using a cubic Bézier), arc (elliptical or circular arc) and closepath (close the current shape by drawing a line to the last moveto) elements. Compound paths (i.e., a path with multiple subpaths) are possible to allow effects such as "donut holes" in objects (Paths 2010). Table 1.1 illustrates the different commands represented inside a path. Uppercase commands use absolute coordinates and lowercase commands use relative coordinates.


Table 1.1: Path commands

Command | Arguments | Effect
M, m | x y | Move to the given coordinates.
L, l | x y | Draw a line to the given coordinates.
H, h | x | Draw a horizontal line to the given x-coordinate.
V, v | y | Draw a vertical line to the given y-coordinate.
A, a | rx ry x-axis-rotation large-arc sweep x y | Draw an elliptical arc from the current point to (x, y). The points are on an ellipse with x-radius rx and y-radius ry. The ellipse is rotated x-axis-rotation degrees. If the arc is less than 180 degrees, large-arc is zero; if greater than 180 degrees, large-arc is one. If the arc is to be drawn in the positive direction, sweep is one; otherwise it is zero.
Q, q | x1 y1 x y | Draw a quadratic Bézier curve from the current point to (x, y) using control point (x1, y1).
T, t | x y | Draw a quadratic Bézier curve from the current point to (x, y). The control point is the reflection of the previous Q command's control point; if there is no previous curve, the current point is used as the control point.
C, c | x1 y1 x2 y2 x y | Draw a cubic Bézier curve from the current point to (x, y), using (x1, y1) as the control point for the beginning of the curve and (x2, y2) as the control point for the endpoint.
S, s | x2 y2 x y | Draw a cubic Bézier curve from the current point to (x, y), using (x2, y2) as the control point for the endpoint. The first control point is the reflection of the previous C command's ending control point; if there is no previous curve, the current point is used as the first control point.

Table 1.1 Source: (Eisenberg 2002)
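As a worked example of the commands in Table 1.1, the following path data (shown as a JavaScript string, with made-up coordinates) moves the pen, draws a quadratic Bézier curve, and continues it smoothly with a T command:

// M 10 80         move the pen to (10, 80)
// Q 95 10 180 80  quadratic Bézier to (180, 80) with control point (95, 10)
// T 350 80        smooth continuation to (350, 80); the control point is the
//                 reflection of (95, 10) about (180, 80)
var d = "M 10 80 Q 95 10 180 80 T 350 80";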


Transformations and styles can be applied to all elements in the SVG language. To change the placement of a particular shape, a transformation can be applied; to change a shape's look, a style attribute can be applied. Processing.js supports six transformations: matrix, translate, scale, rotate, skewX, and skewY. A matrix transformation specifies a transformation as a transformation matrix of six values. Translate moves the shape by the x and y values provided. Scale increases or decreases the size of the shape. The rotate transformation rotates the shape either about its origin or about a specified point. SkewX skews all x-coordinates by a specified angle, which visually makes vertical lines appear at an angle. Lastly, skewY skews all y-coordinates by a specified angle, making horizontal lines appear at an angle. Multiple transformations can be applied to any shape. Supported styles include opacity, fill, fill opacity, stroke, stroke weight, and stroke opacity.
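For illustration, the following SVG markup (embedded in a JavaScript string, with made-up values; this is SVG syntax, not Processing.js API) combines several of the transformations and styles listed above:

var svg =
  '<rect width="60" height="30"' +
  ' transform="translate(40, 40) rotate(45) skewX(10)"' +
  ' style="fill: #3366cc; fill-opacity: 0.5;' +
  ' stroke: black; stroke-width: 2; stroke-opacity: 0.8;"/>';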


Processing.js' class structure enables shape reusability. Each shape or group of shapes has its own properties and can be recreated without re-parsing the underlying SVG source.






Bibliography


Eisenberg, David J. SVG Essentials. Sebastopol: O'Reilly & Associates, Inc., 2002.

"Paths." SVG 1.1 (Second Edition). June 22 , 2010. http://www.w3.org/TR/SVG/paths.html#Introduction (accessed Dec 2010).

DOM integration

Merging technologies

Processing.js helps merge multiple new and emerging HTML5 technologies to make design and production for the web easier. It connects the Processing language with web technologies such as WebGL, JavaScript, and the HTML5 canvas element. More importantly, the library is built to allow new technologies to be added later and the scope of the library to change as new technologies evolve. In the future, technologies such as 3D audio, controller input, and HTML5 video integration could be added to the library, allowing Processing sketches to integrate with them.

-WebGL integration example paragraph-

Image manipulation

Processing.js includes full support for pixel and color manipulation of images on the canvas element. Images can be resized, tinted, blended, copied, or have filters and masks applied to them. Images can also be manipulated at the pixel level, allowing for any degree of image manipulation required. Images can be created and filled from pieces of other images, from the current canvas content, or have their pixels filled dynamically. This functionality allows images to be created from external data that is passed into the Processing sketch and visualized through code. A short sketch follows the list below, illustrating several of these operations:

 copying pieces of an image
blending regions of an image with different modes
different types of filters applied to an image
resizing an image
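A minimal pure-JavaScript sketch of some of the operations above (the image file name and canvas id are hypothetical):

function sketchProc(p) {
  var img;
  p.setup = function() {
    p.size(200, 200);
    p.frameRate(10);
    img = p.loadImage("photo.jpg");  // loads asynchronously; see the preload directive below
  };
  p.draw = function() {
    p.image(img, 0, 0, 100, 100);                 // draw the image, resized
    p.tint(255, 0, 0);                            // tint the next image red
    p.image(img, 100, 0, 100, 100);
    p.noTint();
    p.copy(img, 0, 0, 50, 50, 0, 100, 100, 100);  // copy and scale a region
  };
}
new Processing(document.getElementById("sketch"), sketchProc);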

Pjs directives

In order for Processing.js to closely match the functionality of native Processing, some custom flags had to be created to make the library behave like the native language. Pjs directives are a set of commands, embedded in a multiline comment at the top of a sketch, that control a few aspects of how the sketch will work. Placing the directives in a multiline comment keeps sketches backwards compatible with native Processing, so that sketches written for Processing.js can still run on the native Java Processing platform. There are currently three Processing.js directives; they add the ability to preload images before the sketch begins to run, and to toggle transparent backgrounds and anti-aliasing of lines.
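For example, a sketch using the preload directive (file name hypothetical) guarantees its image is fetched before the code runs; because the directive lives in an ordinary comment, native Processing simply ignores it:

/* @pjs preload="flower.jpg"; */

PImage img;

void setup() {
  size(200, 200);
  img = loadImage("flower.jpg");  // already fetched by the preload directive
  noLoop();
}

void draw() {
  image(img, 0, 0);
}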

WebGL section

[images here]

Andor Salga

WebGL Introduction
The introduction of the <canvas> tag into the HTML5 specification allowed Processing to be ported to JavaScript, enabling users to run 2D sketches within the browser without additional plug-ins. At the time porting began, there was still no plug-in-free method of delivering 3D content, which limited Processing.js to 2D until WebGL was introduced. Once WebGL was implemented in pre-release versions of Firefox, Safari and Chrome, it became a viable candidate for use in Processing.js to render 3D sketches. Additionally, since WebGL closely matches the OpenGL used by Processing, it substantially aided the porting process.

WebGL first began as an experimental add-on for Firefox developed at Mozilla. It was later adopted by the Khronos Group, which manages the OpenGL specifications. It is a JavaScript API that provides a subset of the functionality of OpenGL ES 2.0. The interface is relatively simple, yet it provides enough functionality to emulate almost all of Processing's 3D functions. WebGL continues to go through interface changes and revisions.
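Obtaining the context from JavaScript is straightforward; at the time of writing, pre-release browsers expose it under an experimental name (the canvas id here is hypothetical):

var canvas = document.getElementById("sketch");
var gl = canvas.getContext("experimental-webgl");  // null if WebGL is unavailable
// gl then exposes the OpenGL ES 2.0-like API: gl.createShader(), gl.drawArrays(), ...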

Differences
Porting Processing (which uses OpenGL) was simplified because the WebGL interface is similar to that of OpenGL, but there are a number of differences between the interfaces. Arguably the single largest difference is that, as in OpenGL ES 2.0, the fixed-function pipeline has been removed. Because of this, not all Processing source code could be ported directly; instead, user-defined vertex and fragment shaders had to be written for lighting operations. Since some shapes in Processing aren't lit, a few shaders were written: one for lit objects such as boxes and spheres, and another, less complex shader for unlit objects such as lines and points.

The following shaders are used for rendering unlit shapes specified with begin/end function calls.

vertex shader:

"varying vec4 vFrontColor;" +
"attribute vec3 aVertex;" +
"attribute vec4 aColor;" +
"uniform mat4 uView;" +
"uniform mat4 uProjection;" +
"void main(void) {" +
"  vFrontColor = aColor;" +
"  gl_Position = uProjection * uView * vec4(aVertex, 1.0);" +
"}";

fragment shader:

"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +

"varying vec4 vFrontColor;" +
"void main(void){" +
"  gl_FragColor = vFrontColor;" +
"}";

Examining the shaders reveals some of the idiosyncrasies of WebGL. The gl_Color keyword is considered invalid; instead, users must create their own varying vector. Furthermore, a preprocessor statement setting float types to high precision is required. These are examples of changes to the specification which were introduced over time.

Typed Arrays
Performance is always a concern when rendering 3D content, so it was necessary to create a faster alternative to JavaScript's inherently slow array types. Because of this, typed arrays were incorporated into the pre-release versions of WebGL-enabled browsers. Unlike regular arrays, which can contain elements of different types such as strings, numbers, and objects, typed arrays can contain only a single type and cannot be dynamically resized. These types include Float32Array, Int32Array, Uint16Array, and Uint8Array. They provide a significant performance increase when manipulating arrays.

Operation | Array | Float32Array
Write | 8947 | 1455
Read | 1948 | 1109
Loop-copy | >10,000 | 1969
Slice-copy | 1125 | 503

Benchmark by Alistair MacDonald; Win7 64-bit, 4 GB RAM, dual-core 1.30 GHz Intel U7300 (citation needed)

Because typed arrays are currently only available in pre-release browsers, they cannot yet be used in 2D sketches. Once they are implemented in release browsers, a significant amount of the Processing.js code base can make use of these structures, increasing performance throughout the library.
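A brief sketch of the difference in use:

var generic = [];                    // may hold strings, numbers, objects; resizable
var verts = new Float32Array(4096);  // holds only 32-bit floats; fixed length
for (var i = 0; i < verts.length; i++) {
  verts[i] = i * 0.5;                // numeric writes into a flat buffer
}
// verts.push(1.0) would fail: typed arrays cannot be dynamically resized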

Specification Changes and Browser Inconsistencies
As the specification is concurrently implemented in different browsers, several inconsistencies between browsers have appeared. These range from minor issues, such as Minefield and Chrome/Chromium returning "function" while WebKit returns "object" when the type of a typed array is queried, to differences in the way WebGL's readPixels() function is implemented. readPixels() isn't used extensively in the library itself, but it is used in the Processing.js reference testing framework.
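The typed-array inconsistency can be reproduced with a single expression:

typeof Float32Array;  // "function" in Minefield and Chrome/Chromium,
                      // "object" in WebKit (at the time of writing)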

Problems
WebGL provides a close match to OpenGL for incorporating 3D into Processing.js, but it does present some issues when porting code: there are interface differences, changes to the interface are common, and some functionality, such as point smoothing, isn't available at all.

JavaScript and Processing integration

Processing is Java based, and in order to make it work on the web it has to be completely converted into JavaScript. Syntactically, JavaScript and Java are actually quite similar, and work like this has been done before (for example, porting a Java NES emulator to JavaScript). Our unique challenges were that we had to do the conversion dynamically, be fully object oriented, support all native Java functions that are supported by Processing, and handle web-specific differences, such as images having to be preloaded before the code runs, casting typeless variables, function overloading, and variable name overloading.

We could have done a straightforward JavaScript port of the Processing language, but that would mean all sketches written in Processing would need to be rewritten in JavaScript. This way, all previous Processing sketches can simply be dropped into the web and they will work. We took this one step further, allowing both languages to mingle as one. When we parse the Java into JavaScript, we don't break pre-existing JavaScript; this means you can add JavaScript right into the Java without having to declare that you are doing so. We simply ignore the JavaScript we encounter while parsing the Java, leaving it intact. This mingling of the two languages is unique and powerful in itself, and it also allows sketches to be written in pure JavaScript. The advantage of this is that we had a huge library of work to test and draw from right from the beginning.
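A small illustration of the mingling described above (hypothetical sketch code); the parser converts the Processing syntax while leaving the embedded JavaScript intact:

void setup() {
  size(200, 200);
  var greeting = "hello";  // plain JavaScript, left untouched by the parser
  println(greeting);       // Processing API call
}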

John Resig, the mastermind behind jQuery, is also the mastermind behind Processing.js. His initial approach was to use regular expressions to scan the sketch source code for hints of Java, replace them with JavaScript, and leave all JavaScript intact. He started by taking a previously existing Processing sketch, adding enough functional support to make that one sketch work, and repeating this one sketch at a time, creating missing functions as needed. He took advantage of the pre-existing library of sketches, so for each sketch he explicitly supported, he would be that much closer to implicitly supporting other sketches.

“In development I worked in a backwards manner. Instead of building the API up from the ground - I worked from the top, down, implementing enough of the API to get individual demos working.” -http://ejohn.org/blog/processingjs/

Scott Downe's work was mostly related to fixing bugs and removing the dangerous JavaScript with statement. Fixing bugs was a good place to start learning the code and getting his feet wet. The first bug he fixed was to make sure potential code contained in strings was not parsed. This was accomplished by masking all strings with keys, storing their values before the code was parsed, and replacing the unchanged strings via their keys after parsing. Other, smaller bugs were fixed until it became apparent that the use of with meant we would fall off trace and would not reach our full speed potential. With was being used in two places: around the whole of the sketch, to load in the Processing library, and to load in method calls for internal function use. We had to do this because of the differences in how Java and JavaScript access object properties: JavaScript accesses all properties within an object with a dot, from inside or outside the object, whereas Java only needs a dot when a property is accessed from outside the object. Using with meant we could contain all Processing functions inside an object and not have to change how they are called inside the Java. This was the easiest and fastest way to do it, but it needed to be changed. Removing with meant prepending the Processing object to all calls to the API and to internal object properties. So we needed to store a list of the existing properties for both the API and created objects, and when the parser finds a match, prepend the owner, either "Processing" or "this", to the property. This worked, but was fragile; we were still using regular expressions, applied to the whole of the source, meaning each new regex we added risked parsing code that was similar but different, potentially breaking previously working code we did not intend to touch. Despite working, this was a hack and a maintenance nightmare. We needed something better.
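A simplified illustration (hypothetical names) of the transformation: originally the whole sketch ran inside a with block, and after the change the parser prepends the owning object to every call:

// Before: properties resolve through the scope chain; convenient, but falls off trace
with (processing) {
  fill(255, 0, 0);
  rect(10, 10, 50, 50);
}

// After: the parser rewrites each API call with an explicit receiver
processing.fill(255, 0, 0);
processing.rect(10, 10, 50, 50);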

Notmasteryet rewrote the parser to convert the sketch into an abstract syntax tree, an abstract tree representation of blocks of code. By doing this, blocks can be parsed precisely, without the worry of breaking or parsing unintended things in unexpected ways. Regular expressions are still used for each part, but they are now confined to specifically targeted, smaller chunks of code instead of the whole source. This makes the code much easier to maintain, makes object inheritance easier, and makes JavaScript code included in a sketch more stable. In fact, since the abstract syntax tree's inclusion, new bugs in the parser have been nearly nonexistent.

Each of the above people contributed to object inheritance in some form, but the challenges of inheritance deserve specific mention. Object inheritance was much easier using with, because we could simply add the inherited properties to an object and, when they were called, not worry about where they were called from. With with removed, we had to maintain this data internally and prepend the right object to the right method calls. This gets significantly more complicated when you consider where things may be called from, including super constructors and super methods calling methods from their parent, potentially chaining calls in the correct order. Because we have to store all of a class's methods at the time of parsing, we don't yet know whether another class will use it as a super class, so all classes and their properties must be stored so that later we can prepend the correct object to the correct calls in a complex chain of limitless inherited calls. This was buggy and fragile code that took a while to get right, but Notmasteryet's work helped greatly in this area, and it is something we are quite proud of.

Some of the differences between Java and JavaScript presented unique challenges, some of which are still unsolved. Because at the time of parsing we are treating the code as pure text, we cannot validate any of the data referenced in the code. When an image is to be loaded, the client now has to download that image from the server, a problem Processing itself does not have. This means an image may not be available when needed, and getting that data directly from the source at parse time is not reliable; we need to know about it before we parse. We solved this by adding a directive at the top of the code that defines all images that need to be preloaded, so we can parse the directive first, then convert the code to JavaScript, then run it, safely knowing images will be ready to use at run time. Java supports overloading: its functions are uniquely identified by their name, return type, and parameters, which together make up a function's signature (citation needed). JavaScript uses only the function name as its signature, presenting another unique problem. We can merge all overloaded functions into one and check the number of arguments passed in to know which block to call. This check happens at run time, not at compile time as Java would do it. However, we currently do not reliably check the types of the arguments passed in, so this breaks if a function has two versions distinguished only by type, the first accepting a single string as its only argument and the second accepting a single number. Similarly, if a variable uses the same name as a function, called variable name overloading, we break in the same way. This is because Java considers these different things, whereas JavaScript considers a function to be a variable of a different type sharing the same space.
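A simplified sketch (hypothetical names) of how overloaded Java methods merge into one JavaScript function that dispatches on argument count:

// Java: void show(int x)        { ... }
//       void show(int x, int y) { ... }
function show(x, y) {
  if (arguments.length === 2) {
    // body of show(int, int)
  } else {
    // body of show(int); a call like show("text") also lands here,
    // because argument count alone cannot distinguish parameter types
  }
}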

“In order to support this there would have to be considerable overhead - and it's generally not a good practice to begin with.” -http://ejohn.org/blog/processingjs/

Another interesting difference stems from Java being a typed language and JavaScript being typeless. Java requires casting in most cases, whereas with JavaScript we can simply throw the cast away for all literal variable types. The problem arises when the type is something like a double or a char, which in JavaScript are simply a number or a string (citation needed). We solved this for chars with a custom char class. It resolved a lot of the issues we were having, but it is not perfect, since it does not solve all issues in all cases. Other types like double and byte would require more overhead and will not be possible without complete type tracking.
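A minimal sketch of the idea behind the custom char class (hypothetical implementation; the real one differs), showing why a plain string falls short:

function Char(c) {
  this.code = c.charCodeAt(0);  // store the numeric code, as Java's char does
}
Char.prototype.toString = function() {
  return String.fromCharCode(this.code);
};

// Java:       'A' + 1  ->  66 (arithmetic on the character code)
// JavaScript: "A" + 1  ->  "A1" (string concatenation)
var ch = new Char("A");
console.log(ch.code + 1);       // 66, matching Java's behavior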