FSOSS 2013 Report: Powered by ARM and Processing.js
Open Source and Accessible Technology Paving the Way for a Gaming Renaissance
Introduction
It should come as no surprise to anyone who has followed my progression through the BSD program: I am absolutely fanatical about video games. Since I was young, I have been captivated by the possibilities inherent in the creation of virtual worlds, and impressed by the artistry of combining music, art, architecture, writing, theatre and code into a single product. Truly, at their best, video games can be pinnacles of human expression and creativity. They are responsible for my interest in programming, and they are what inspired me to pursue it as a career. I believe that with better tools and frameworks, many of my peers will finally be able to realize their creative aspirations in ways that traditional media (film, animation, fiction, etc.) could never hope to match.
For the past couple of decades, however, the development of high quality interactive entertainment products has been difficult, to say the least. Back in the ‘70s and ‘80s, it was possible for small groups of programmers and artists to work on projects for a few months and still produce work that was best-in-class and highly creative. They worked out of passion, and generally shared their projects in what we would now recognize as open source methodologies: publishing source code in hobbyist magazines, and distributing shareware floppy disks among friends. They were less concerned with making money, and more interested in exploring an exciting new avenue for creative expression. Through the ‘90s, however, the costs and scale of development ballooned. What was once a garage hobbyist’s playground became a multi-million dollar industry, where competition between studios necessitated the use of closed source technologies to provide market advantages. The advent of consoles brought platforms that were cheap for consumers but closed and gated behind steep licensing buy-ins for developers, and a fledgling art form was slowly inundated with greed and secrecy. Instead of collectively working to advance the state of the art, a risk-averse status quo led to an eventual decline in the creativity of commercial triple-A products, culminating in the last console generation: a period in which it seemed like every other game was a derivative first-person shooter or an unimaginative sequel to an established franchise. Without a change in direction and ideology, game development will stagnate even further.
Therein lies my fascination with large open source projects. They are the realization of the early computer hobbyist landscape at a scale previously unimaginable, unhindered by geographical or cultural constraints. Anyone in the world can become aware of a project and start hacking away on it in whatever way they see fit. Don’t like the direction your community is headed? Fork the project and start down another path, attracting like-minded individuals to aid you in your endeavours. Constant, unimpeded growth and progression becomes not only possible, but par for the course.
It is with these thoughts in mind that I attended Seneca’s Free Software and Open Source Symposium, hoping to find examples of teams working not for personal profit, but to enable others to produce projects not possible in isolation. I wanted clear-cut examples of a desire to provide the creative people of recent generations with the hardware and software they need to realize their visions, particularly with a focus on interactive entertainment.
For this paper, I’ll be comparing the attitudes of two tracks of lecturers toward open source, and evaluating how their opinions reflect those of the wider open source community, with an emphasis on how these ideologies will affect game developers and other media authors (my intended area of expertise).
Powered by ARM
The first lecture in my comparison was presented by Andrew Greene, a research assistant at Seneca’s Centre for Development of Open Technology (CDOT) currently experimenting with ARM-based technologies (such as Seneca’s Raspberry Pi cluster), and Christopher Markieta, a CDOT researcher working with the OSTEP (Open Source Technology for Emerging Platforms) team on their projects’ networking and backup infrastructure, as well as on their Pidora development efforts (Pidora being a remix of Fedora built to run on the Raspberry Pi’s ARM-based hardware).
Obviously, Andrew and Chris are heavily invested in software development for ARM processors, and expressed an interest in clearing up any misconceptions symposium-goers might have had about the platform. ARM processors are processors built on the ARM family of architectures, all of which follow a reduced instruction set computing (RISC) design. While these designs are not as flexible as traditional x86 designs, the smaller instruction sets they do support are optimized for speed and efficiency, which in turn allows processor manufacturers to use fewer transistors in each chip.
Consumer electronics manufacturers gravitate toward these chips because fewer transistors per chip lowers the cost of manufacturing (fewer defects per production run), the die size of the processor, the heat it produces, and the energy used per calculation: a perfect storm of features that has made them quite popular in the small device market. Almost every mobile phone, fan-less tablet, and mobile gaming system has an ARM processor at its core. The relative simplicity of these architectures also allows for faster iteration; while Intel may take up to two years to release an architecture revision (not necessarily a new architecture; every other release is typically a die-shrink of the previous one), ARM licensees usually leave less than six months between chip revisions (such as Qualcomm’s transition from the Snapdragon S4 to the S4 Pro to the 600 to the 800, all in under two years). This faster iteration has produced drastic improvements that have brought ARM partners close to Intel and AMD in performance per watt, and they will almost certainly reach (at least) parity with these powerhouses in the next year or so.
Consumers gravitate towards these devices for their portability, battery life, and ease of use; for day-to-day computing needs (gaming, media, internet browsing, and communication), ARM processors have been more than satisfactory for the average user. The ubiquitous inclusion of touch screens in ARM mobile devices is the primary contributor to their ease of use, and appeals especially to artists. Content creation (video, 3D modelling, photo editing, etc.) can now be done efficiently and easily on these mobile platforms.
Andrew and Chris went on to explain that ARM is finally producing 64-bit architectures, which will allow licensees to make significant inroads into the server market, traditionally dominated by much more expensive x86 designs. ARM servers are highly power efficient and scalable, which should make large-scale application hosting (game servers, anyone?) significantly easier to afford and implement.
The Take-Away:
The average user is transitioning away from traditional computing platforms (x86-64 desktops and laptops) toward cheaper and more power efficient ARM mobile platforms. A lower cost of entry should allow even developing countries to afford reasonably powerful computing platforms. An expanded audience will allow niche media to attract enough interest to remain profitable, which lowers the financial risk of producing it; in turn, media will no longer be forced to appeal to the lowest common denominator. This should go a long way toward reducing the stagnation I mentioned earlier.
What’s the open source connection? If the open source community adequately engages these emerging markets (by providing server frameworks, content authoring tools, development tools, etc.) we will start to see material produced by segments of the public previously under-represented. I’m not looking at this solely from a media production perspective; the number of contributors to open source development will inevitably increase with the number of computer users, and contributors from different cultural backgrounds will bring with them perspectives and computing needs which will further advance the state of the art.
Processing.js
The second lecture in my comparison was presented by two more CDOT researchers, Dylan Segna and Andrei Kopytov. Their work (conveniently) focuses on the development of Processing.js, a JavaScript implementation of the Processing environment: a beginner-friendly programming language and IDE, built in Java, designed to facilitate the production of visual artwork through code. Processing is meant to be easy to learn and simplifies many aspects of visual computing, letting artists get started with programming while retaining the gratification that comes from immediate visual feedback.
Processing.js essentially provides input listening (for both mouse and keyboard) and graphics drawing in an HTML5 canvas element, replacing the need for a dedicated Java or Flash plugin to run web-based animations and games. In fact, they presented a Mario clone developed by the eminent (:P) Pomax! Up until that point, I hadn’t realized that Processing.js was capable of the level of performance necessary for something timing-dependent like a platformer. It was interesting to see that it was nearly mature in this regard, as I had always assumed it simply wasn’t fast enough to draw frames for a game in which milliseconds of input delay could make it unplayable (or at least unenjoyable). 3D graphics functionality is apparently also in the pipeline, but wasn’t quite ready for demonstration (I’m assuming the frame rate is abysmal at present).
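To make that concrete, here is a minimal sketch of my own (not something shown in the talk) written in the Processing language; Processing.js loads it and draws it into a canvas element, with mouse input handled by the built-in mouseX/mouseY variables and the mousePressed() callback. The file name and markup are just illustrative of the usual setup, where the library is loaded on the page and pointed at the sketch source:

    <!-- index.html: load Processing.js and point a canvas at the sketch file -->
    <script src="processing.js"></script>
    <canvas data-processing-sources="follow.pde"></canvas>

    // follow.pde: a circle that follows the mouse and changes shade when clicked
    boolean clicked = false;

    void setup() {
      size(400, 300);                    // canvas dimensions in pixels
    }

    void draw() {
      background(255);                   // clear the frame
      fill(clicked ? 200 : 50);          // shade toggled by mousePressed()
      ellipse(mouseX, mouseY, 30, 30);   // built-in mouse position variables
    }

    void mousePressed() {
      clicked = !clicked;                // keyboard input is handled the same way via keyPressed()
    }

Keyboard handling works through keyPressed() and the key variable in the same fashion, which is what makes simple game input so approachable for beginners.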
Dylan and Andrei briefly described the benefits and drawbacks of using Processing.js for web projects.
The key benefit was Processing.js’s use of and integration with web standards. The second game demo that Dylan and Andrei presented, which was their own original work, used the Processing.js environment to run the game loop and draw its frames, and ordinary CSS, HTML and JavaScript to power a simple GUI overlay that was updated less frequently. The GUI overlay could still receive data and instructions from the Processing.js window, however, which was an interesting example of a situation in which it could be more useful than Flash. Of course, at this point any canvas-based construct is more useful than Flash :P The point remains that Processing.js can work in concert with jQuery and whatever other JavaScript libraries the developer wishes to use. In addition, like any other web application, changes made by the developer can be viewed almost instantly with a page refresh; no recompiling required :P
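As a rough sketch of that overlay pattern (the element ids and function names here are my own, not from the demo): Processing.js exposes running sketches to the surrounding page, commonly through Processing.getInstanceById(), so an ordinary HTML/JavaScript HUD can call functions defined inside the sketch, and the sketch can call global JavaScript functions to push updates the other way:

    // hud.js: hypothetical glue between a plain HTML overlay and a running sketch.
    // Assumes the sketch's canvas has id="game" and the sketch defines pauseGame().
    document.getElementById("pause-button").onclick = function () {
      // look the instance up lazily, in case the sketch hasn't finished loading yet
      var sketch = Processing.getInstanceById("game");
      if (sketch) {
        sketch.pauseGame();    // call a function declared inside the .pde sketch
      }
    };

    // Called from inside the sketch whenever the score changes, so the HTML
    // overlay only repaints when it actually needs to.
    function updateScore(points) {
      document.getElementById("score").textContent = points;
    }

Because the overlay is just DOM elements, jQuery or any other library can style and animate it independently of the game loop.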
Implementing a Processing.js instance had three major drawbacks, however: there is no audio support without pulling in a separate library (which instantly makes it less ideal for game development), accessing variables outside of the Processing.js instance is incredibly difficult (if not impossible), and browser-based debugging efforts have so far been thwarted by some strange behaviours of the instance (for example, errors reporting a line number thousands of lines past the end of the file).
Very interesting nonetheless, and seeing the functionality that Dylan and Andrei were able to eke out of HTML5’s canvas element was inspiring.
The Take-Away:
Open source projects are working to create hardware-independent computing and development environments which will simplify development and content creation, allowing more people to realize their visions and improving the quality of content across the web. There is a measurable desire to enable creative people to use computers to their fullest, without letting too many implementation details get in their way.
Comparing the Open Source Attitudes of the Lecturers
Since both tracks were presented by current or former Seneca College students who have worked or are working as research assistants at CDOT, I think it’s safe to say that they were all enthusiastic about the prospects of open source development.
Andrew and Chris seemed excited about the chance to create large-scale server applications on affordable hardware. I’d imagine that working within the constraints of hardware as limited as the Raspberry Pi cluster presents interesting engineering problems that only an open source developer would ever be asked to deal with, but that might yield system design improvements useful to larger businesses. Both of them seemed interested in the possibilities that future ARM hardware would provide, and pleasantly surprised by the number of attendees who were familiar with the architecture.
Dylan and Andrei were similarly excited about their work, and had in fact hosted a Processing.js workshop the day before. They projected a sense of inclusiveness and helpfulness which was refreshing to see, and seemed very willing to teach. They reassured the audience many times that Processing.js was easy to work with, as long as you were patient and willing to ask questions of the community when you ran into problems.
The impression that I had of all of the speakers was that they were very passionate and enthusiastic about being able to work with open source software, and had no reservations about sharing their work with the community. In fact, they seemed quite happy to.
And This Is All Important Because…?
Low cost, accessible media consumption platforms now exist, thanks to the inroads made by the ARM processor architecture. These are platforms with the power to generate worlds on par with entertainment products from what I consider the Golden Age of game development: the late PlayStation 2 and early Xbox 360/PlayStation 3 era. The achievable production quality is high enough to provide a triple-A experience, but development costs are low enough to allow for an incredible array of projects that are, very importantly, NOT AVERSE TO TAKING RISKS! People are once again experimenting with game mechanics and presentation styles. This kind of platform is no longer restricted to dedicated, limited-use hardware; basically every cellphone or tablet going forward will have ample processing and graphics horsepower. Access to a larger potential audience (users of ARM-based communication devices) with powerful devices has made the hardware barrier negligible. Only the storefront approval processes for Android and iOS remain an issue. Luckily…
Open source web technology frameworks are being rapidly developed by a large pool of talented developers. Processing.js is a relatively simple example of a recent paradigm: developers are moving away from the dedicated plug-ins of yesteryear toward web standards, such as HTML5 and JavaScript, to reproduce the functionality they require at near-native performance levels. Though not mentioned in any of the FSOSS tracks, many teams are working on high-end 3D game engines built on frameworks that rely solely on a modern web browser. As these projects improve, and as content-authoring tools are developed that streamline the process of creating content for these engines (something like a browser-based UDK clone for a game engine compiled with Mozilla’s Emscripten would be wonderful!), more people will have a chance to create the interactive projects they envision without having to worry about which platform to target and other finicky details.
In short, open source will contribute to a wider pool of content which will invariably have more of the unique gems so sorely needed by all of the creative industries.
My Thoughts on Open Source
I’m excited to continue contributing to open source projects, and I’m invigorated by the passion and selflessness exhibited by those who are active in the community. I believe that the open availability of cheap or free content tools and content delivery platforms will be a net benefit to their respective industries; genius that was squandered in the past due to financial constraints will finally have the means and technology to contribute new and meaningful work.
I’ve always had a soft spot for open source development projects (oh, to think of all the time I’ve wasted away using open source game console emulators :P ), and FSOSS has at the very least reinforced my commitment to contributing to these projects. I hope that I’ll eventually be able to cultivate a good reputation in the community, and have a GitHub portfolio that’s a little less pathetic :P