How CoccyxJS Uses RequireJS To Define Its Own Dependencies But Can Also Be Used From The Global Namespace

RequireJS is a great asset when you want to write highly modularized applications, but using RequireJS to write highly modularized JavaScript libraries that are meant to be shared with others presents challenges. The fact is that, for whatever reasons, not everyone is enamored with RequireJS; some developers simply won't use your library if it means they also have to use RequireJS in their own applications. I think it is telling that many of today's most popular libraries do not use RequireJS internally to define their own dependencies and modules, precisely because they don't want to discourage other developers from using their libraries by imposing RequireJS as a dependency.

The ultimate solution to this problem would be a library that is itself built using RequireJS, defining its own modules and dependencies with the RequireJS define method, but that can also be used in the old "global" fashion that most libraries still adopt and users like. It is easy to draw a comparison to jQuery, which is both AMD compliant and usable from the global namespace, and say, "just do it the way jQuery does." The problem with that is that although jQuery is AMD compliant, it doesn't actually use RequireJS at all to define its own modules and inter-module dependencies.

As you can guess, I struggled with this very issue myself while developing CoccyxJS v0.6, which will be released very soon. I wanted to use AMD modules to break CoccyxJS up into its logical components and have those components declare their own dependencies upon other CoccyxJS components, such as define('views', ['application', 'helpers'], function(app, helpers){...}); for example, but I also didn't want to discourage the adoption of my library just because I chose to implement it using RequireJS. After some thought it dawned on me that it should be possible to have my cake and eat it too: to use RequireJS in a way that would make CoccyxJS a compliant AMD module, usable by applications that are also using RequireJS, and usable by applications that access it from the global namespace as well.

The solution I employed is actually quite simple, and I offer it up to anyone else who is considering building a library with RequireJS but doesn't want to discourage its adoption.

First and foremost, CoccyxJS is built as one would build a library intended to be used from the global namespace. The difference, though, is that it also defines the interdependencies between its modules using the RequireJS define method. To make this work, so that the library can be used both as an AMD module and from the global namespace, you never use module dependencies as arguments to define's callback functions and you never define your modules by returning objects from those callback functions.

For example, in CoccyxJS, the views module is defined as define('views', ['application', 'helpers'], function(){...}). The views module declares its dependency upon both the application module and the helpers module, but it doesn't use them as arguments in the callback function. By declaring its dependency on these 2 modules, the callback function won't be called until they are loaded and resolved. Internally, anything that is exposed by these two modules is exposed in the old-fashioned global way, by tacking it onto the single global variable, Coccyx. By the time the callback function is called, everything that the views module needs from the 2 other modules has already been placed in the Coccyx namespace. For example, the helpers module, in total, is shown below:

define('helpers', [], function(){
    'use strict';

    var Coccyx = window.Coccyx = window.Coccyx || {};

    Coccyx.helpers = {
        //Returns true if string s1 contains the character s2, otherwise returns false.
        contains: function(s1, s2){
            var i, len;
            if(typeof s1 === 'string'){
                for(i = 0, len = s1.length; i < len; i++){
                    if(s1[i] === s2) {
                        return true;
                    }
                }
            }
            return false;
        },
        //Returns a deep copy of object o (via a JSON round-trip, so only JSON-safe values survive).
        deepCopy: function(o){
            return JSON.parse(JSON.stringify(o));
        },
        //Pass one or more objects as the source objects whose properties are to be copied to the target object.
        extend: function(targetObj){
            var len = arguments.length - 1,
                property, i;
            for(i = 1; i <= len; i++){
                var src = arguments[i];
                for(property in src){
                    if(src.hasOwnProperty(property)){
                        targetObj[property] = src[property];
                    }
                }
            }
            return targetObj;
        },
        //For each matching property name, replaces target's value with source's value.
        replace: function(target, source){
            for(var prop in target){
                if(target.hasOwnProperty(prop) && source.hasOwnProperty(prop)){
                    target[prop] = source[prop];
                }
            }
            //0.6.0 Return target.
            return target;
        }
    };

});

Within the views module, when it needs to call helpers.extend(), it does so by calling Coccyx.helpers.extend(), which is where the helpers module placed it.
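
To make that concrete, here is a minimal sketch of the pattern (illustrative only, not the actual views module source; extendView is a made-up method name):

define('views', ['application', 'helpers'], function(){
    'use strict';

    //No callback arguments - by the time this runs, the dependencies
    //have already tacked themselves onto the single global.
    var Coccyx = window.Coccyx = window.Coccyx || {};

    Coccyx.views = {
        extendView: function(target, source){
            //Reach dependencies through the global, not through arguments.
            return Coccyx.helpers.extend(target, source);
        }
    };

});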

I use Grunt to concatenate all the modules that make up the CoccyxJS library into a single file. The last file concatenated, amd.js, looks like this:

define('coccyx', ['application', 'helpers', 'router', 'history', 'models', 'collections', 'views', 'eventer', 'ajax'], function () {
    'use strict';
    return window.Coccyx;
});

The amd.js file is ultimately responsible for turning CoccyxJS into a compliant AMD module. It declares its dependencies on all the submodules and returns window.Coccyx, so an AMD loader receives the very same object that global users access from the window.

Now, in order to use CoccyxJS from the global namespace, you just need to add this little JavaScript shim before the script tag that loads CoccyxJS:

//A shim that mocks the RequireJS define() method so that Coccyx.js can be used without RequireJS.
//Calls the RequireJS define callback function, allowing Coccyx.js modules to "define" themselves.
//IMPORTANT - include this script before Coccyx.js.
(function(){
    'use strict';
    window.define = function define(){
        (arguments[arguments.length - 1])();
    };
}());
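
To make the dual usage concrete, here is a small sketch of both consumption styles (the module name 'coccyx' matches the define call in amd.js above; the script file names are my own assumptions):

//AMD consumers - RequireJS hands you the object that amd.js returns.
require(['coccyx'], function(Coccyx){
    var copy = Coccyx.helpers.deepCopy({a: 1});
});

//Global consumers - load the define() shim first, then the concatenated
//library, e.g. <script src="defineshim.js"></script> followed by
//<script src="coccyx.js"></script>, and the library is simply there:
var copy2 = window.Coccyx.helpers.deepCopy({a: 1});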

And there you have it. By using this technique when creating your own libraries, you will be able to satisfy developers who want to use RequireJS and those who don't. Your library gains all the benefits of using RequireJS to build itself from individual AMD modules, ultimately defining itself as a compliant AMD module, while also remaining usable from the global namespace.

I really like RequireJS, but I don't want to alienate potential adopters who might otherwise choose CoccyxJS for their own development efforts were it not for a RequireJS dependency. By using this technique, I feel like I am able to have my cake and eat it too, and able to pass that cake on to the users. "Let them eat cake," that's what I say 🙂

:wq

The Tools Of My Trade – Sublime Text 3 & Tern

In an article I wrote a while back, I expressed what every developer knows (or should know), which is the importance of choosing the right development tools. In that article I also stated my appreciation for WebStorm, which is a lightweight but full-featured JavaScript/Web IDE from JetBrains. Now I'd like to touch on another subject that I raised in that article but never expanded upon, which is using a standalone text editor for JavaScript/Web development.

My favorite text editor of all time is Vim. It is dear to my heart and almost as old as I am (not quite, lol). I use it for all sorts of editing but I've never liked using it for editing code. Call me muscle-memory challenged, but I want my code editing tools to provide me with intuitive code completion. WebStorm shines in this area because it actually indexes all the symbols it finds in your project files. Editors, for the most part, don't do that. But… enter Sublime Text 3 and Tern.

Sublime Text 3, while still in beta, is stable and even faster than its predecessor. Most importantly, though, it now indexes the symbols in your project's files and uses the index to support project-wide symbol searching. This is a vast improvement over Sublime Text 2. But wait, that's not all. Then there's Tern. For those of you who might not have heard of Tern, all I can say is go read up on it. It is an incredible asset when coding JavaScript, and there are now two Tern plugins for Sublime. Sublime Text 3 along with Tern makes for a near perfect stand-alone text editor for JavaScript development.

I say near perfect because Sublime Text 3 still cannot format HTML and code to save its life, and its attempt at Vim support via Vintage mode leaves me wanting more – a lot more. While Vintage mode is better than nothing at all, I desperately wish that Sublime's developer would set about matching the Vim emulation found in WebStorm. No, I am begging him – please make Vintage mode equal to or better than WebStorm's. I can't promise him fame or fortune, but I can promise him that he would earn a special place in every Vim lover's heart. Oh, and if he could also make Sublime format HTML and JavaScript in a sane manner, that would be greatly appreciated as well :).

But even with its shortcomings and challenges, there is, in my opinion, no other stand-alone editor that comes close to Sublime Text 3, especially when teamed up with Tern. I love it, and to prove my love I purchased a license a few months back to show my support and appreciation. I encourage everyone to do the same, especially if you have been using Sublime regularly.

As a side note to all this love and admiration for Sublime Text 3 and Tern, I am a little concerned about the number of Sublime Text plugins that are still not compatible with Sublime Text 3 or not yet discoverable through the package manager. I hope the pace will pick up and that Sublime's thriving ecosystem will continue to enrich what is already an outstanding editor.

Zombie Events, How To Kill Them And How To Avoid Them

If you do any kind of front-end development, I'd say that eventually you will run into Zombie events. If you haven't, then let me explain what they are:

Zombie events are events that are raised for elements that have been removed from the DOM and they are the result of using event delegation for event handlers.

Simply put, event delegation is the practice of attaching event handlers to DOM elements that are parents of the actual DOM elements raising the events (the event targets), as opposed to attaching event handlers directly to the elements that raise the events.

Event Delegation – An Example

As an example, suppose we have a list of links. Instead of attaching a click event handler to each li element, we attach a single click event handler to a parent of the li elements: the ul tag, a div element serving as a container, or even the body element itself. That, in a nutshell, is what event delegation is.

First, the markup

<div id="page_container">
    <ul>
        <li><a href='#'><span class='somelist'>some text</span></a></li>
        <li><a href='#'><span class='somelist'>some text</span></a></li>
        <li><a href='#'><span class='somelist'>some text</span></a></li>
    </ul>
</div>

and then the JavaScript

(function ( window, $ ) {
    'use strict';
    $( function () {
        $( '#page_container' ).on( 'click', function ( event ) {
            // we are only interested in the event if it was raised by a span element with a class of 'somelist'
            var $eventTarget = $( event.target );
            if ( $eventTarget.attr( 'class' ) === 'somelist' ) {
                window.alert( $eventTarget.attr( 'class' ) );
            }
        } );
    } );
}( window, jQuery ));

In the example markup above, a list is created consisting of 3 anchor tags, each of which contains a span tag with a class of 'somelist', and in the JavaScript example code above, all click events are delegated to the div whose id is 'page_container'. When a click event is raised, it bubbles up the DOM, and when it reaches the parent container with an id of 'page_container' the event callback function is called. In the event callback function, since we are only concerned with click events raised by span elements with a class of 'somelist', we filter for those events and ignore the rest.

The Advantage Of Using Event Delegation

Delegated events have the advantage that they can process events from descendant elements that are added to the document at a later time, which is a common need in dynamic pages, particularly single page applications, where DOM elements are added and removed in response to the user's interaction with the page.
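
For instance, building on the example above, a list item appended long after the handler was attached still triggers it:

//No rebinding needed - the delegated handler on #page_container
//fires for descendant elements added later:
$( '#page_container ul' ).append(
    "<li><a href='#'><span class='somelist'>added later</span></a></li>"
);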

The Disadvantage Of Using Event Delegation Is Zombie Events

While the benefits of using event delegation far outweigh its disadvantages, you do have to ensure that when elements are removed from the page, any delegated events associated with those elements are removed as well. If they aren't, the result will be Zombie events.
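
For example, here is the gist of the cleanup this implies, as a minimal jQuery sketch (the '.somelist' event namespace is my own convention, not part of the example above):

//Attach the delegated handler with a namespace so it can be removed precisely.
$( '#page_container' ).on( 'click.somelist', function ( event ) {
    //... handle the event ...
} );

//Later, when tearing down the list, remove the delegated handler too;
//otherwise clicks elsewhere in #page_container can still reach it.
$( '#page_container' ).off( 'click.somelist' );
$( '#page_container ul' ).remove();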

Avoiding Zombie Events & Killing Them If You Have Them

To kill Zombie events you first have to know how they are created. Once you understand that, getting rid of the little buggers is easy. If you’d like to learn all about Zombie events, how they are created and how you can eliminate them, then I suggest you head on over to my repo on GitHub where you can do just that!

And That’s A Wrap!

As always, feel free to leave comments and feedback. I always enjoy hearing from you.

:q

How You Create Your Objects Does Matter, But Maybe Not Why You Think

JavaScript (ES5) can accommodate constructing objects and object hierarchies using the pseudo classical, prototypal and composition/modular approaches. I've often wondered what the ramifications of these methods are in terms of browser performance and resources. In order to find out, I created a simple hierarchy consisting of a person and an employee, with person at the root of the hierarchy and employee inheriting from person. I then modeled this hierarchy using each of the 3 methods, ran each method through a benchmark that consisted of creating 1,000,000 object instances, and recorded the time to completion.

My expectations before actually running the benchmarks were that, of the 3 methods, pseudo classical would be the fastest, followed by composition, and then, last but not least, prototypal. The reasoning behind my assumptions was:

  1. Pseudo classical should be the fastest because 'new' is baked into JavaScript, and so I imagined that every JavaScript virtual machine would optimize the crap out of it. Secondly, pseudo classical doesn't need to copy own ('hasown') properties as my prototypal implementation does (see the source code of my implementations on GitHub).
  2. Prototypal should be the slowest because 2 objects are needed to create an instance: the prototype on which to base the new object, and a second object that must be iterated over so that its own properties can be copied to the new object.
  3. And finally composition, which I just assumed would always fall somewhere in the middle of the others. (A sketch of all 3 styles follows this list.)
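
For reference, here is a minimal sketch of the 3 styles as I mean them (the names are illustrative; the actual benchmark code is in the GitHub repo):

//Pseudo classical - a constructor function conjoined with 'new'.
function Person( name ) {
    this.name = name;
}
Person.prototype.greet = function () {
    return 'Hi, ' + this.name;
};
var p1 = new Person( 'Ann' );

//Prototypal - Object.create from a prototype object, then copy own properties.
var personProto = {
    greet: function () {
        return 'Hi, ' + this.name;
    }
};
function createPerson( props ) {
    var obj = Object.create( personProto ),
        key;
    for ( key in props ) {
        if ( props.hasOwnProperty( key ) ) {
            obj[key] = props[key];
        }
    }
    return obj;
}
var p2 = createPerson( { name: 'Ben' } );

//Composition/modular - a factory that closes over its state and assembles the object.
function person( name ) {
    return {
        greet: function () {
            return 'Hi, ' + name;
        }
    };
}
var p3 = person( 'Cay' );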

The benchmark tests I created do not use the DOM at all; they merely create the intended objects and do not retain them in memory. My goal was to make these tests similar to putting a car on a dynamometer, putting the transmission into neutral, and flooring the gas pedal.

Creating 1,000,000 Objects – The Benchmark Results

I was kind of surprised, actually, at the results. Not only were my assumptions not always right, but it is clear that the 3 browsers I tested on my Mac (the latest versions of Chrome, Safari and Firefox as of February 27, 2013) are not all created equal when it comes to creating lots of objects… not even by a long shot.

[Screenshot: Chrome browser console output]

[Screenshot: Firefox browser console output]

[Screenshot: Safari browser console output]

When Speed Counts, Think About How You Approach Object Creation

What is clear from my test results, though, is that Chrome's virtual machine is by far the fastest of the 3 browsers when asked to create millions of objects. On average, when using pseudo classical/new, it is almost 60 times faster than Safari and almost 28 times faster than Firefox. Things even out somewhat when using prototypal, but when it comes to composition, Chrome once again blows the doors off the others, being almost 10 times faster.

As my tests have shown, if you are creating millions of objects then you need to think about your approach, and if time is really critical then maybe you should consider targeting each browser with a specific approach.

Get It On GitHub And Run The Tests 

My repo on GitHub contains all the code I used to test. There are no other dependencies. I also included a nice little web page for initiating the tests and displaying the output, though in order to view nanosecond-based results you will have to look in the browser console.

I’m looking forward to hearing from you all, especially I’m interested in your reactions to the test results and even about the code that I wrote for benchmarking. Please feel free to leave your comments here.

Oh, one last note: I didn't run these tests on Windows or Linux, so if someone would like to do that and comment on their findings here, it would be greatly appreciated. I am very curious about the effect the OS has on the test results, and about the results for the latest versions of IE.

:q

Prototypal versus Pseudo Classical – The Debate

In JavaScript circles there is no more contentious debate than the one revolving around prototypal versus pseudo classical inheritance; cantankerous discussions about politics can appear civil by comparison. Proponents on either side of the aisle are quick to deride those whose views differ from their own. The 'purists', as I call them, are quick to point out that there are no classes in JavaScript and that objects beget objects, period! On the other hand, 'classical style' developers (those who very likely came to JavaScript after having used a traditional class-based programming language, such as C++, Java, C#, et al.) are quick to point out that JavaScript does support object creation via its new operator conjoined with constructor functions, which provides the semantic sugar needed to mimic traditional class-based languages.

Part of the problem, I believe, lies in the terms either side uses to bolster its views. For instance, when speaking on the subject, proponents on both sides of the debate often say 'classical inheritance' instead of using the correct term, which is 'pseudo classical inheritance'. In this regard, I believe even Douglas Crockford refers to the conjoined use of the new operator with a constructor function as pseudo classical inheritance and not as classical inheritance. Including the word pseudo dramatically changes the meaning and removes any confusion: it isn't really classical at all, but rather a fake, pretend form of classical inheritance.

Furthering the confusion is the frequent misquoting of Douglas Crockford on this subject. While Mr. Crockford classifies the conjoined use of the new operator with a constructor function as one of JavaScript's bad parts, it is important to understand his motivation. JavaScript at this time (here I am referring to ECMAScript 5) cannot prevent you from calling a constructor function without the new operator, which can result in polluting the global namespace. Knowing this should remind us all to use static analysis tools such as JSLint and JSHint and to declare 'use strict' in all our functions.
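
A quick sketch of the hazard he is warning about (illustrative code of mine, not Crockford's):

function Person( name ) {
    this.name = name; //without 'new', 'this' is the global object in sloppy mode
}
var p = Person( 'Ann' );    //oops - forgot 'new'
console.log( window.name ); //'Ann' - the global namespace was just polluted
console.log( p );           //undefined

function StrictPerson( name ) {
    'use strict';
    this.name = name; //without 'new', 'this' is undefined here
}
var sp = StrictPerson( 'Ann' ); //throws a TypeError instead of polluting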

So what, then, is the real issue here, and why do I really not care that much about it? There are certain developers who are passionately opinionated on this subject, who feel they must coerce everyone into accepting their opinions as gospel, and heaven help you if you disagree with them. In development, as in life, we cannot escape the fact that there will always be those who want to draw us into their own views of correctness, so beware.

JavaScript is an incredibly expressive and dynamic language. It allows us to create objects in many different ways and I believe that is one of JavaScript’s greatest strengths. It is therefore a testament to the versatility of JavaScript that the language allows those who favor one form of object creation over the other to use their favored approach.

When coding JavaScript I use both pseudo classical and prototypal approaches to creating objects, though I admittedly favor shallow object hierarchies and composition(*1) over deeply rooted inheritance. Mr. Crockford seems to favor this technique as well. In essence, composition avoids deeply rooted object hierarchies while still allowing you to create semantically rich and expressive objects. JavaScript makes implementing composition incredibly easy, and I will cover it in depth in a future article.
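
Until then, here is a tiny taste of what I mean (an illustrative sketch only):

//Compose small capability objects onto a host object:
var canSpeak = { speak: function () { return this.name + ' speaks'; } };
var canWalk = { walk: function () { return this.name + ' walks'; } };

function compose( target ) {
    var i, prop, src;
    for ( i = 1; i < arguments.length; i++ ) {
        src = arguments[i];
        for ( prop in src ) {
            if ( src.hasOwnProperty( prop ) ) {
                target[prop] = src[prop];
            }
        }
    }
    return target;
}

var employee = compose( { name: 'Ann' }, canSpeak, canWalk );
employee.speak(); //'Ann speaks'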

The best advice I can give to budding JavaScript developers is the following:

  • Avoid getting caught up in philosophical debates. That isn’t to say you shouldn’t have opinions. Just don’t get caught up in them. Instead focus on perfecting your craft and code, code, code.
  • Learn how to create objects using all the varied ways that JavaScript supports — prototypal, pseudo classical and composition.
  • Understand their good and bad parts — they have both.
  • Use linting all the time. Today there is no excuse for not using it. If your editor or IDE doesn’t support real time linting then dump it and find one that does.
  • Use strict. Unless you are dealing with legacy code any new code you develop should include ‘use strict’ in every function.
  • Last but not least, let your own educated opinions guide you – emphasis on ‘educated’.

(*1) To be more precise, I often create objects combining prototypal inheritance, the modular pattern and composition.

:q

A Much Better Way To Do Callbacks

$.Deferred and Promise

The traditional way of dealing with callbacks in JavaScript can often yield unreadable and hard-to-maintain code. jQuery's Deferred ($.Deferred), a chainable utility object first introduced in v1.5, can be used to register multiple callbacks into callback queues, invoke callback queues, and relay the success or failure state of any synchronous or asynchronous function. Using $.Deferred and Promise can eliminate many of the negatives often associated with cascading callback functions.
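
To give you a taste of the difference before you dive into the examples, here is a contrived sketch (the getData*/getMoreData functions are hypothetical stand-ins, and this assumes jQuery 1.8+, where .then returns a new chained promise):

//The traditional, nested-callback pyramid:
getData( function ( a ) {
    getMoreData( a, function ( b ) {
        getEvenMoreData( b, function ( c ) {
            console.log( c );
        } );
    } );
} );

//The same kind of flow with $.Deferred reads linearly:
function getDataAsync() {
    var deferred = $.Deferred();
    setTimeout( function () { deferred.resolve( 42 ); }, 100 );
    return deferred.promise();
}

getDataAsync()
    .then( function ( value ) { return value + 1; } )
    .done( function ( result ) { console.log( result ); } ) //43
    .fail( function ( err ) { console.error( err ); } );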

Rather than writing a full tutorial, I thought it would be better to provide you with real working examples using $.Deferred and Promise that cover various use cases. The code in my repo is well-documented, simple, working code that you can clone from GitHub at github.com/jeffschwartz/jQueryDeferred and explore on your own using just your browser and a simple code editor.

If you have not yet been exposed to $.Deferred and Promise, I recommend you read the 2 articles mentioned below as well as jQuery's own documentation, located at http://api.jquery.com/category/deferred-object/. These 3 resources should provide you with a solid foundation in $.Deferred and Promise, which will make following my code much easier.

  1. The article by Jeremy Chone, at http://www.html5rocks.com/en/tutorials/async/deferred/
  2. The article by Julian Aubourg and Addy Osmani, at http://msdn.microsoft.com/en-us/magazine/gg723713.aspx

Includes 4 Examples and a Test Suite

The code comes with 4 examples covering 4 use cases, which are:

  1. Example #1 – calling getData1 to obtain an integer using Promise.done & Promise.fail
  2. Example #2 – calling getData2 to obtain an integer using $.when and Promise.then
  3. Example #3 – calling getData3, getData4 in parallel to obtain 2 integers using $.when and then summing the 2 returned integer values using Promise.done
  4. Example #4 – calling getData5, getData6 in parallel to obtain 2 integers using $.when and then calling getSum using Promise.then to sum the 2 returned integer values and obtaining the result using Promise.done

In addition to the 4 examples, a full test suite using QUnit is also included, with 7 tests, one for each mock API function. To run the tests, open the test/qunit.html file in your browser.
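
If QUnit is new to you, a test of roughly this shape (a hypothetical example of mine, not copied from the repo) looks like this:

//A QUnit 1.x style asynchronous test:
asyncTest( 'getData1 resolves with an integer', function () {
    getData1().done( function ( value ) {
        strictEqual( typeof value, 'number', 'resolved with a number' );
        start();
    } );
} );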

Comments are appreciated as always.

:q

Under The Hood

I’m naturally curious. That is why I read and study so much and it is also why I am passionate about programming. When I’m working with a new library I’m constantly asking myself “how’s it doing that?”, and so I spend a lot of time looking under the hood, so to speak.

This article is all about me looking under the hood and reporting what I find there. This isn't going to be a static document. Rather than writing a new article for each discovery, I'll keep appending new things here as I learn them, so check back every once in a while if you are at all curious about the newest thing I've discovered and what my 'take away' from it is.

Node Uses Both Prototypal and Pseudo Classical Styles – published 2012/10/27

In an Express.js based Node application, you typically see the following boilerplate code in app.js:

http.createServer(app).listen(app.get('port'), function(){
  console.log("Express server listening on port " + app.get('port'));
});

As tempting as it may be to just take it for granted that the code above sets up the listener for requests arriving on the specified port, I decided to dig a little deeper into the source code and perhaps learn a thing or two. What follows are some of the little tidbits of goodness that I’ve been able to spy.

From the boilerplate code above, for instance, the reference that createServer returns points to an instance of Server, an object whose prototype inherits from net.Server's prototype. How's it doing that? To answer that, we have to look under the hood of the http module (http.js), beginning with its createServer method.

function Server(requestListener) {
  if (!(this instanceof Server)) return new Server(requestListener);
  net.Server.call(this, { allowHalfOpen: true });

  if (requestListener) {
    this.addListener('request', requestListener);
  }

  // Similar option to this. Too lazy to write my own docs.
  // http://www.squid-cache.org/Doc/config/half_closed_clients/
  // http://wiki.squid-cache.org/SquidFaq/InnerWorkings#What_is_a_half-closed_filedescriptor.3F
  this.httpAllowHalfOpen = false;

  this.addListener('connection', connectionListener);
}
util.inherits(Server, net.Server);

exports.Server = Server;

exports.createServer = function(requestListener) {
  return new Server(requestListener);
};

See the call to util.inherits(Server, net.Server) near the bottom of the listing above? Here's the code from the util module (util.js), and guess what it is doing:

exports.inherits = function(ctor, superCtor) {
  ctor.super_ = superCtor;
  ctor.prototype = Object.create(superCtor.prototype, {
    constructor: {
      value: ctor,
      enumerable: false,
      writable: true,
      configurable: true
    }
  });
};

In the body of inherits above, we see that it calls the Object.create method, passing it a reference to net.Server's prototype. Object.create returns an object whose own prototype is net.Server's prototype, and that object is assigned to http.Server's prototype property. So though http.Server doesn't itself have a listen method, it now inherits one prototypically. If you are still confused, just think of net.Server's prototype serving as http.Server's super prototype. I was going to say its superclass, but I don't want to confuse you even more than you might already be.
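
You can convince yourself of the resulting chain with a few lines at the Node REPL (an illustrative check of mine, not from Node's source):

var http = require('http');
var net = require('net');

var server = http.createServer();

console.log( server instanceof http.Server ); //true
console.log( server instanceof net.Server );  //true - inherited prototypically
console.log( typeof server.listen );          //'function' - found via net.Server's prototype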

And the takeaway of this is

For me what’s worth noting here is that Node’s modules use both prototypal style and pseudo classical style in a  complimentary manner, thereby producing a utility whose sum is greater than what using either one alone might provide. This confirms to me, at least, that I should no longer have to pick one way or the other of modeling object hierarchies in JavaScript and that instead I should be thinking in more dynamic terms of using both styles, just as the code in Node’s modules are.

:q