Friday, October 18, 2013

Reusable javascript modules with Bower

In a previous post I used require.js to download and optimize javascript files.
Here I'll introduce a useful tool to manage your javascript and css assets. But first of all, let's talk about module definition in Javascript.

Universal module definition

At the moment there is no standard way to define a module (we are waiting for EcmaScript 6), but you can easily define your module in a way that is compatible with AMD, CommonJS and a regular script import. Just pick one of these snippets. I recommend using one of them because declaring objects in the global namespace is not a good practice, and libraries like require.js can help you load and optimize your scripts.

Bower

Bower is a package manager designed for the web. You can install it using:

npm install -g bower

And then start downloading your first package (jquery for example):

bower install jquery

You'll find jquery in the "jquery" folder inside the "bower_components" folder. Bower packages are versioned, so you can (and should) specify which version to install:
bower install jquery#1.10.2
If this package has dependencies (other bower packages, of course) they will be installed in the same "bower_components" folder. For those who are using node.js: be aware that this is quite different from how npm works! Here all the dependencies are resolved and flattened into the same folder.
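For example, installing a hypothetical "mypackage" that depends on lodash produces a flat layout like this (no nesting, unlike npm):

```
bower_components/
    jquery/
    lodash/        <- dependency of mypackage, installed at the top level
    mypackage/
```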

Bower package

A bower package is, in fact, just a folder with a bower.json file:
{
  "name": "mypackage",
  "version": "0.0.1",
  "main": [
     "script.js",
     "style.css"
  ],
  "homepage": "https://github.com/User/myproject",
  "authors": [
    "Donald Duck <donald.duck@gmail.com>"
  ],
  "description": "an example package",
  "keywords": [
    "js",
    "css"
  ],
  "license": "private",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "lodash": "~1.3.1"
  }
}
Let's see the most important fields:

name: the name of the package

version: the version of the package. It should match the corresponding git tag.

main: the main files of your project. For example, the jquery repository contains a lot of stuff (unbuilt files, tests, fixtures etc.) but the main file is just jquery.js.

dependencies: an object mapping the bower packages needed to run this one to their versions. You should always specify the version, using the syntax shown above.
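For example (package names and versions here are illustrative), you can pin an exact version, accept patch releases with "~", or even point to a git endpoint:

```json
"dependencies": {
  "jquery": "1.10.2",
  "lodash": "~1.3.1",
  "mypackage": "git://github.com/user/mypackage.git#0.0.1"
}
```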

Bower repository

The published packages are just git tags with a bower.json. The bower command, however, uses a central repository to store the metadata (name, url of the repository, keywords etc.). This repository is at https://bower.herokuapp.com/. If you have a public project you can publish it on the central repository using:
bower register mypackage https://github.com/user/mypackage.git
The available versions will be taken from the tags on your repo.

Running your own repository

Let's say you are working on a project and you don't want to share every package you are working on.
You can easily set up your own bower registry and configure bower to use it by putting a ".bowerrc" file in your working directory (or in your home):
{
    "registry": {
        "search": ["http://bower.myserver.com", "https://bower.herokuapp.com/"],
        "register": "http://bower.myserver.com"
    }
}

This instructs bower to download packages from your server or the official one (the first has the priority), and to register the packages on your server only.

Automate everything !

Using grunt-bower-task you can create a build step that installs all the dependencies required by your bower.json and copies all the files defined in "main" into a folder. If you use require.js you can also use grunt-bower-requirejs to build the require.js configuration automatically.

I think this is enough to start working with bower. Enjoy!




Thursday, July 25, 2013

Server side modules with Architect

After my last notes on a client side modular architecture using require.js I'll talk about server side modules using Architect.

Modules

In my opinion every application (not only the bigger ones) needs to be divided into interdependent modules. This helps a lot when testing and extending your application.
In node.js applications, you will see and use this pattern very often:

//import a module
var Database = require('./db');

// initialize a module using some options 
var db = new Database('http://my.db.connection');

// initializing a module injecting a dependency 
var usermanager = require('./usermanager')(db);

But what if you want to use these modules in other applications? In that case you should make an independent npm package for each module, of course. And in the main application you would initialize each package in the proper order. This is repetitive and tedious; furthermore, when the number of packages starts to grow, initializing each module in the proper order can become an issue.

From modules to plugins

Architect uses a simple declarative syntax to describe the interconnections between packages. It also starts the system in the correct order, initializing each package just once.
I recommend reading the documentation on github. It's very clear and detailed.

Express.js and Architect

I have set up a simple express.js/architect boilerplate to show how to use Architect to make an extensible express.js application.


I hope this will be useful...

P.S.
A friend of mine suggested using a git subtree for each package. Nice idea!


Thursday, July 18, 2013

Client side modules with require.js


In this post I will show you how to use Require.js to split a project into simple and manageable modules.

CommonJS and AMD

I really didn't want to dive into the differences between these 2 module systems, but I think it's very important to be clear about which module system we are talking about.

CommonJS is the module system used by node.js. You define a module like this (foobar.js):

module.exports = {
    foo: 'bar'
};

And load a module doing this:

var foobar = require('./foobar');
console.log(foobar.foo); // prints bar

You can use CommonJS modules in the browser using browserify.
If you usually work with Javascript in the browser you will notice two issues:
  • the exports object would be overwritten every time by different modules
  • it uses a synchronous approach
As a matter of fact it cannot work in the browser without a build step (and this is exactly browserify's job).

AMD instead is designed from the ground up to work in the browser.

That said, I am not advocating either of these systems. They are both very useful even though they use different approaches.

I started by explaining CommonJS because, unfortunately, both systems use a function called "require".
Now that you can't be fooled by this anymore, let's go on.

What is an AMD module

An AMD module must be contained in a single file and is encapsulated by the global function define.
"define" takes two parameters: an array of dependencies and the actual module code.

//module3.js
define(['module1', 'module2'], function(module1, module2) {
    'use strict';

    var namespace = {};
    
    return namespace;
});

In this example I have defined a module called "module3". This module needs module1 and module2 to run.
The return value of define will be returned if another module requires module3.
module1 and module2 are resolved by loading module1.js and module2.js (both AMD modules) with AJAX.
The job of require.js is basically to resolve the dependency tree and to make sure that every module is run just once.

A module has a pair of interesting properties:

  • It is loaded the first time it is required by another module.
  • Not a single variable is added to the global namespace. For this reason you can even use different versions of the same library if you need to.

Bootstrap and configuration

After defining modules you will need to bootstrap your application (configure and load the first module). To do this, add to your page a script tag with require.js and the url of the bootstrap script (main.js):

<script data-main="/js/main" src="js/vendor/require.js"></script>

After require.js is loaded, it will load the "main.js" bootstrap via AJAX. From this point on, every script is loaded asynchronously, and the DOMContentLoaded event (the jquery ready event) is fired independently of the script loading.

Main.js is made of 2 parts. The first one is the require.js configuration:

require.config({
  baseUrl: "js/",
  paths: {
    jquery: 'vendor/jquery-1.9.1',
    underscore: 'vendor/underscore',
    backbone: 'vendor/backbone',
  },
  shim: {
      underscore: {
          exports: "_"
      },
      backbone: {
          deps: ['underscore', 'jquery'],
          exports: 'Backbone'
      },
      'jquery.bootstrap': {
          deps: ['jquery'],
          exports: '$'
      }
    }
});

Here are the most important parameters:
baseUrl: this is the path used to resolve modules. So if you require "module1", it will be loaded from "js/module1.js".

paths: very useful to define scripts that live somewhere else. When you require "jquery", the script "js/vendor/jquery-1.9.1.js" will be loaded.

Shims

Until now I assumed that every script is an AMD module, but this is not always true.
The require.js shim option allows you to automatically wrap a non-AMD script in an AMD define.

"deps" are dependencies to be injected and "exports" is the value to be returned:

For example your backbone.js will become:

define(['underscore', 'jquery'], function (_, $){

..actual backbone.js source ...

return Backbone;

});

The trick works even when you have to add attributes to an existing object (like defining a jquery plugin):

define(['jquery'], function ($){

$.fn.myJqueryPlugin = function (){
};

return $;

});

This is the case of Twitter bootstrap.

require

The second part of main.js is the actual bootstrap. It loads and executes the first module.

require([
  // Load our app module and pass it to our definition function
  'app',
], function(App){
  // The "app" dependency is passed in as "App"
  App.start();
});
The app module will start a loading chain that pulls in the entire application's scripts.

The useful text plugin

The text plugin allows you to add text files as dependencies, which is very useful for client side templating.
Just add "text" to the path config:

require.config({
  ...
  paths: {
     ...
    text: 'vendor/text'

and you can load your templates:

define(['underscore', "text!templates/template.html"], function (_, template_html){
    var template = _.template(template_html);
    ...
});

You can also write your own plugin as described here.

Script optimization

Using a lot of modules has an obvious drawback: the time spent loading them in the browser.
Require.js comes with a useful tool for optimizing scripts: r.js. It analyzes, concatenates and minifies all the dependencies into one single script.

I warmly recommend using a build step to perform this operation automatically.

I use grunt and a grunt plugin to automate everything.

Installing grunt and the require.js optimizer plugin

Grunt is split into 2 different modules, "grunt-cli" and "grunt", plus a series of plugins. "grunt-cli" can be installed globally:

npm install -g grunt-cli
grunt and the plugins should be installed locally, pinning a release version. This allows each project to use different versions and plugins.
npm install grunt --save-dev

npm install grunt-contrib-requirejs --save-dev

The save-dev option adds the modules to package.json under the "devDependencies" key, using the latest release:
...
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-requirejs": "~0.4.1"
  },
...

You can also do this manually and launch "npm install".

Grunt configuration

Grunt needs a configuration file called Gruntfile.js. This is an example from a past project of mine:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    requirejs: {
      compile: {
        options: {
          mainConfigFile: "static/js/main.js",
          baseUrl: "static/js",
          name: "main",
          paths: {
            'socketio': 'empty:',
            'backboneio': 'empty:'
          },
          out: "static/js/main-built.js"
        }
      }
    }
  });

  // Load the plugin that provides the "uglify" task.
  grunt.loadNpmTasks('grunt-contrib-requirejs');
  // Default task(s).
  grunt.registerTask('default', ['requirejs']);

};

In the "requirejs" configuration I have:
mainConfigFile: the path of the bootstrap script
baseUrl: the same as configured in the main.js configuration
name: the name of the main script
paths: in this section I list a pair of scripts that should not be included in the optimized output. These 2 files are served directly by my node module. You can do the same for scripts served through a CDN or an external service.
out: the output file

"registerTask" allows me to launch the whole optimization step with the "grunt" command (with no options).

At the end of the task I can load my new optimized script using:

<script data-main="/js/main-built" src="js/vendor/require.js"></script>
I think that's all. Stay tuned for the next one!

Edit: If you liked this you'll probably be interested in this other blog post on require.js edge cases.

Monday, July 1, 2013

Writing a real time single page application - server side

In the first part I highlighted how to take care of the frontend of a real time web app. Now I'll explain how to approach the backend. As usual I will not dive into details; follow the links if you are looking for in-depth explanations.

Server side

I wrote the server side of my application using node.js and a very lightweight framework called express.js.
The most important feature of this framework is the middleware system.
This is a middleware:

function middleware(req, res, next){
...
}

A middleware is a sort of Russian doll: you can put a middleware inside another middleware (via the "next" argument).

A middleware can basically:
  • call the next middleware in the chain ( next() )
  • send the response ( res.send() )
  • change the input (req)
  • decorate or overwrite methods of the response (res)

You can use a middleware for:
  • authenticate and get user information
  • route to a specific middleware using the URL (req.url) and the method (req.method)
  • add a specific header to the HTTP response
  • etc.

Using middleware is very common because it allows you to build simpler, reusable components.
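To make the mechanism concrete, here is a toy version of such a chain (a sketch, not express's actual implementation; the two middleware bodies are made up):

```javascript
// Each middleware receives req, res and a next() that invokes the
// following middleware in the list.
function runChain(middlewares, req, res) {
    (function next(i) {
        if (middlewares[i]) {
            middlewares[i](req, res, function () { next(i + 1); });
        }
    })(0);
}

// an "authentication" middleware followed by a "route" middleware
var req = { url: '/' }, res = {};
runChain([
    function (req, res, next) { req.user = 'donald'; next(); },   // enrich the input
    function (req, res, next) { res.body = 'hello ' + req.user; } // produce the output
], req, res);
```

A middleware that never calls next() (like the second one here) terminates the chain, which is exactly how routing handlers behave.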

Express.js already provides a lot of middleware, but I used passportjs to get a complete solution for authentication.
In this example I will store users in couchdb using nano:

var express = require('express'), // these requires were implicit in the original snippet
    http = require('http'),
    flash = require('connect-flash'),
    config = require('./config'), // I used a config.json to store configuration parameters
    db_url = config.db_protocol + "://" + config.db_user + ":" + config.db_password + "@" + config.db_url,
    nano = require('nano')(db_url),
    userdb = nano.db.use(config.db_name), // the couchdb database holding the users ("db_name" is assumed to be in config.json)
    setupAuth = require('./auth'),
    MemoryStore = express.session.MemoryStore,
    sessionStore = new MemoryStore(), // passport will store the user id in this session
    passport = setupAuth(userdb);

var app = express(),
    server = http.createServer(app);
    
// configure Express
app.configure(function() {
    app.set('views', __dirname + '/views');
    app.set('view engine', 'ejs');
    app.use(express.logger());
    app.use(express.cookieParser());
    app.use(express.bodyParser());
    app.use(express.methodOverride());
    app.use(express.session({ store: sessionStore, secret: config.session_secret, key: config.session_key }))
    // Initialize Passport!  Also use passport.session() middleware, to support
    // persistent login sessions (recommended).
    app.use(flash());
    app.use(passport.initialize());
    app.use(passport.session());
    app.use(app.router);
    app.use(express.static(__dirname + '/' + config.static_dir));
});

var ensureAuthenticated = function(req, res, next) {
    if (req.isAuthenticated()) {
        return next();
    }
    res.redirect('/login');
};

// I check if a user is authenticated before accessing this URL
app.get('/', ensureAuthenticated, function(req, res){
  res.render('index', { user: req.user});
});

// login
app.get('/login', function(req, res){
    res.render('login', { user: req.user });
});

app.post('/login', 
    passport.authenticate('local', { failureRedirect: '/login', failureFlash: true }),
function(req, res) {
    res.redirect('/');
});

app.get('/logout', function(req, res){
  req.logout();
  res.redirect('/login');
});

server.listen(config.http_port);

The auth module contains functions used by passport:

var passport = require('passport'),
    LocalStrategy = require('passport-local').Strategy,
    crypto = require('crypto');


module.exports = function (userdb){
    var getUserFromId = function(id, done) {
        userdb.get(id, { revs_info: false }, function(err, body) {
            if (!err){
                return done(null, body);
            }
            else {
                return done(null, false, { message: 'Invalid credentials' });
            }
        });
    };

    passport.getUserFromId = getUserFromId;
    
    passport.serializeUser(function(user, done) {
        done(null, user._id);
    });
        
    passport.deserializeUser(getUserFromId);

    passport.use(new LocalStrategy(
        function(username, password, done) {
            var shasum = crypto.createHash('sha1').update(password),
                key = [username, shasum.digest('hex')];

            // this view's keys are [username, password]
            // the password is hashed of course
            userdb.view("user", "login", { keys: [key] }, function (err, body){
                if (!err) {
                    if (body.rows.length){
                        // logged in !!!
                        return done(null, body.rows[0].value);
                    }
                    else {
                        return done(null, false, { message: 'Invalid credentials' });
                    }
                }
                else {
                    console.error(err);
                    return done(null, false, { message: 'Db error, try later' });
                }
            });        

        })
    );

    
    return passport;
};

This example is explained in the passport documentation.
If you look carefully you will notice that the getUserFromId function is called every time to fetch the complete user object from the database (couchdb in this case).
This is not optimal, and it's better to cache users for some time. I used this nice memoization module:

    var memoize = require('memoizee');
    
    // cache this function for optimal performance (2 minutes)
    // https://npmjs.org/package/memoizee
    getUserFromId = memoize(getUserFromId, { maxAge: 120000,  async: true});
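For the curious, this is roughly what such an async memoizer does under the hood (a simplified sketch, not memoizee's actual code; among other things it ignores concurrent in-flight calls):

```javascript
// Cache the callback arguments per id and replay them until maxAge
// milliseconds have passed, at which point the real function is
// called again.
function memoizeAsync(fn, maxAge) {
    var cache = {};
    return function (id, done) {
        var hit = cache[id];
        if (hit && Date.now() - hit.at < maxAge) {
            return done.apply(null, hit.args); // replay the cached result
        }
        fn(id, function () {
            cache[id] = { at: Date.now(), args: arguments };
            done.apply(null, arguments);
        });
    };
}
```

With a 2 minute maxAge, repeated lookups of the same user hit the cache instead of couchdb.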

Backbone.io in the server

At this point I will define a backbone.io backend (as explained here: https://github.com/scttnlsn/backbone.io)

var items_backend = backboneio.createBackend();

A backend is very similar to an express.js middleware.

var backendMiddleware1 = function(req, res, next) {
   ...
};

items_backend.use(backendMiddleware1);
items_backend.use(backendMiddleware2);

Once the backend is defined, I connect it:

var io = backboneio.listen(server, {items: items_backend});

The io object returned by the listen method is a socket.io object.

When a websocket connects for the first time it performs a basic handshake. This phase can be used to perform the passport authentication.

var cookie = require('cookie'),
    cookiesig = require('cookie-signature');
// socket io authorization
io.set('authorization', function (data, accept) {
    if (data.headers.cookie) {
        data.cookie = cookie.parse(data.headers.cookie);

        // cookies are signed for better security:
        //
        // s:value.signature
        //
        // "s:" is a prefix marking signed cookies
        // value is the cookie value (the session id here)
        // signature is an hmac of the value
        // this way the client cannot change the cookie value
        // without invalidating the signature
        if (data.cookie[session_key].indexOf('s:') === 0){
            data.sessionID = cookiesig.unsign(data.cookie[session_key].slice(2), session_secret);
        }
        else {
            data.sessionID = data.cookie[session_key];
        }
        
        // (literally) get the session data from the session store
        sessionStore.get(data.sessionID, function (err, session) {
            
            if (err || !session) {
                // if we cannot grab a session, turn down the connection
                accept('Cannot get sessionid', false);
            } else {
                // save the session data and accept the connection
                data.session = session;
                if("passport" in session && "user" in session.passport){
                    passport.getUserFromId(session.passport.user, function (err, user, message){
                        if(err || !user){
                            accept('Cannot find user', false);
                        }
                        else {
                            try{
                                data.user = user;

                                accept(null, true);
                            }
                            catch (e){
                                accept('Error: ' + e.toString(), false);
                            }

                        }
                        
                    });
                }
                else {
                    accept('Session does not contain userid', false);
                }
            }
        });
    } else {
       return accept('No cookie transmitted.', false);
    }
});

The tricky part here is to extract the session id from the (signed) cookie. The passport and sessionStore objects are the same ones defined before for normal authentication.
The backend authentication middleware can get the user through the req.socket.handshake object:

var authMiddleware = function(req, res, next) {
    var user = req.socket.handshake.user;

    if (!user){
        next(new Error('Unauthorized'));
    }
    else {
        req.user = user;
        next();
    }
};

Backend events and channels

When a backend changes something, backbone.io automatically broadcasts the change to every connected node (and triggers the events I talked about before).
You often need to notify only a subset of clients. For this reason you can define channels: every change will be notified only to the clients connected to a given channel.
The channel can be defined client side, but I added a useful feature to define it server side, during the handshake.

There is also the case where you need to detect when couchdb has been changed by another application.
For these cases I recommend using my fork of backbone.io because it supports channels.

Database and flow control with promises

The last piece of the application is talking to the database. In my application I used couchdb, but which database you use is really not important.
In the first paragraph I underlined that a single resource operation can cause many operations in the backend. For this reason it is very important to use a smarter way to control the flow. I have chosen promises.

Promises are a standard pattern for managing asynchronous tasks. I used this library: https://github.com/kriskowal/q
The advantage of promises is avoiding the "pyramid of doom" of nested callbacks:

step1(function (value1) {
    step2(value1, function(value2) {
        step3(value2, function(value3) {
            step4(value3, function(value4) {
                // Do something with value4
            });
        });
    });
});

And transforming it into something more manageable:

Q.fcall(step1)
.then(step2)
.then(step3)
.then(step4)
.then(function (value4) {
    // Do something with value4
}, function (error) {
    // Handle any error from step1 through step4
})
.done();

With promises, managing errors is very easy.
This is an example using nano:

var getUser = function (id) {
    var deferred = Q.defer();

    userdb.get(id, {}, function(err, body) {
        if (err) {
            deferred.reject(new Error('Not found'));
        }
        else {
            deferred.resolve(body);
        }
    });
    return deferred.promise;
};

var getGroup = function (user) {
    var deferred = Q.defer();

    userdb.get(user.groupid, {}, function(err, body) {
        if (err) {
            deferred.reject(new Error('Not found'));
        }
        else {
            deferred.resolve(body);
        }
    });
    return deferred.promise;
};

function getGroupFromUserId(id, callback){
    getUser(id)
    .then(getGroup)
    .then(function (group){
        callback(group);
    })
    .done();
}

Backbone.io and databases

Backbone.io has some ready-to-use backends and it is quite easy to write your own following the examples (I added the couchdb backend).

The end?

That's the end of this whirlwind tour. I hope someone will find it useful.

Friday, June 28, 2013

Writing a real time single page application - client side

Introduction

In this two-part guide I will explain how to use backbone.js, node.js and socket.io (and many other libraries) to write a single page/real time application.
The guide's goal is to explain a system, so I will not dive into the details of every component; I suggest you follow the links to get more information.
The topic is quite hot, so let's get started!

First step: defining resources

This part is the most boring, but it is very important.
In my application I model every client/server exchange using REST principles (even though I will eventually use websockets instead of the HTTP protocol).

Following these principles I need to define resources.
A resource is a well defined piece of information

Every resource should:
  • have an id (the URL)
  • be atomic (every exchange will be stateless)
A very important detail here is that resources don't need to match the backend models (database schemas) exactly. For example, a resource can be the result of a join between tables.
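For example, a hypothetical /users/42 resource could merge a row from a users table with the name of the group it belongs to:

```json
{
  "id": 42,
  "name": "Donald Duck",
  "group": "administrators"
}
```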

I will leave it to the node.js backend to manage the differences between the actual db models and the resources.
The backend can update more than one model in a reliable way (and, if needed, roll back the changes), while it's not so reliable to do this directly from the client.

From this point on, when I talk about models in the client I am talking about HTTP resources.

Client side

For the front end I will use backbone.js. This library's task is basically to organize homogeneous models (they are resources!) into collections and keep these data synchronized with the UI (view) and the backend.
This is a model:

var Item = Backbone.Model.extend({
    defaults: {
       "name": "",
       "description": ""
    },
    validate: function(attrs, options) {
        // return a string if the attributes are wrong
    }
});

This is a simple collection:

var Items = Backbone.Collection.extend({
    model: Item,
    initialize: function (){
        //...
    }
});

This collection contains a group of Items.
A backbone collection usually has a url attribute that is used to download data from the server. So I'll rewrite the last collection:

var Items = Backbone.Collection.extend({
    url: "/items",
    model: Item,
    initialize: function (){
        //...
    }
});

Backbone.js uses AJAX by default to download and update collections. It uses the standard HTTP methods: GET POST PUT DELETE (optionally PATCH if supported).
If the collection's url is "/items", each model's url will be "/items/model_id".

Backbone.IO

One of the nicest features of Backbone.js is that you can change the way your data is persisted (by overwriting the Backbone.sync function).
In this example I will use backbone.io. This component sends data to the backend using a websocket or another available socket-like transport (it uses socket.io under the hood).

Backbone.io.connect();

var Items = Backbone.Collection.extend({
    backend: 'items',
    model: Item,
    initialize: function (){
        this.bindBackend();        
        //...
    }
});

Backbone.io.connect is used to initialize the websockets. I replaced the url attribute with a backend identifier.
I have also called "bindBackend". This method connects backbone.io events to the collection events: when the server broadcasts that something has changed, the collection triggers the event and the views are refreshed.

Views

Backbone.js's main feature is being unopinionated. Views, for example, are mostly boilerplate code. For this reason you often need to build a higher level framework on top of it.
It is usually a good idea to use something like Marionette.
Anyway, I tried to build something simpler. This is a model's view:

var ModelView = Backbone.View.extend({
    initialize: function (){
        this.initialEvents();
    },
    initialEvents: function (){
        if (this.model){
          this.listenTo(this.model, "change", this.render);
        }
    },
    serialize: function (){
        var attrs = _.clone(this.model.attributes);
        attrs.id = attrs[this.model.idAttribute];
        return attrs;
    },
    render: function (){
        this.preRender();
        this.$el.html(this.template(this.serialize()));
        this.postRender();
        return this;
    },
    preRender: function (){
        // this runs before the rendering
    },
    postRender: function (){
        // this runs after the rendering
    }
});

I also built a collection's view:

var CollectionView = Backbone.View.extend({
    contentSelector: '.append-item-here',
    // this is the class where I append the model views
    viewOptions: {},
    initialize: function (){
        this.children = {}; // this will contain the models views
        this.initialEvents();
    },
    initialEvents: function (){
        if (this.collection){
          this.listenTo(this.collection, "add", this.addChildView);
          this.listenTo(this.collection, "remove", this.removeChildView);
          this.listenTo(this.collection, "reset", this.render);
          this.listenTo(this.collection, "sort", this.render);
        }
    },   
    render: function (){
        var that = this;
        this.preRender();

        // I remove the old model views
        _.each(this.children, function (view){
            view.remove();
        });

        this.children = {};

        // I rebuild the model views
        this.collection.chain()
        .each(function(item, index){
            // every model view has a reference to its collection view
            var options = _.extend({index: index, parentView: that}, that.viewOptions);
            that.addChildView(item, that.collection, options);
        });

        this.postRender();
        return this;
    },
    // I can use this to filter out some views if necessary
    filterView: function (model){
        return true;
    },
    addChildView: function (item, collection, options){
        // add a view to this.children trying to respect the sorting order
        var index;
        if (!this.filterView(item)){
            return;
        }
        options.model = item;
        if (options && options.index){
            index = options.index;
        }
        else {
            index = collection.chain().filter(this.filterView, this).indexOf(item).value();
            if (index === -1){
                index = 0;
            }
        }

        var view = new this.itemView(options);
        view.render();

        this.children[item.cid] = view;
        this.addToView(view, index);
        
    },
    removeChildView: function (item, collection, options){
        this.children[item.cid].remove();
        delete this.children[item.cid];
    },

    addToView: function (view, index){
        // append a model view to the collection view in the correct sorting order
        var $el = this.$el.find(this.contentSelector),
            $children = $el.children();
        if (index >= $children.length ){
            $el.append(view.el);
        }
        else {
            $children.eq(index).before(view.el);
        }
    },
    preRender: function (){
        // this runs before the rendering
    },
    postRender: function (){
        // this runs after the rendering
    }

});

Extending these base views I can write a very simple item view:

var ItemView = ModelView.extend({
    template: _.template(html_item),
    tagName: 'div',
    events: {
        //...
    }
});

and, of course, a very simple collection view:

var ItemsView =  CollectionView.extend({
    itemView: ItemView,
    el: $("#items"),
    events: {
        //...
    }
});

Note that I am using the Underscore template engine (_.template).
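As a quick reminder of how these templates work, _.template compiles a template string into a function that fills in the `<%= ... %>` placeholders. Here is a hand-rolled sketch of just that interpolation step (the template string and data are made up for illustration; real Underscore templates also support `<% %>` evaluation and escaping):

```javascript
// a minimal sketch of what _.template does with <%= ... %> placeholders
function interpolate(templateString, data) {
  return templateString.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return data[key];
  });
}

var html = interpolate('<span class="item"><%= title %></span>', { title: 'item 1' });
// html === '<span class="item">item 1</span>'
```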

Routing and bootstrapping

Backbone uses a router object to keep the state of your application. The state is represented by the current URL.

var Router = Backbone.Router.extend({

    routes: {
        "item/:id":        "changeitem",  // #item/id
    },
    changeitem: function(id) {
        // this is called when the URL is changed
    }
});
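Internally, Backbone converts each route pattern into a regular expression that is matched against the URL fragment to extract the parameters. A simplified sketch of that conversion (the real implementation also handles splats, optional parts and escaping):

```javascript
// turn a pattern like "item/:id" into a regex with one capture group per :param
function routeToRegExp(route) {
  return new RegExp('^' + route.replace(/:\w+/g, '([^/]+)') + '$');
}

var re = routeToRegExp('item/:id');
var match = 'item/42'.match(re);
// match[1] === '42', so the router calls changeitem('42')
```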

Like other Backbone primitives, you can react to a URL change by handling an event (inside a view, for example):

var ItemsView =  CollectionView.extend({
    itemView: ItemView,
    el: $("#items"),
    initialize: function (){
        this.listenTo(router, 'route:changeitem', this.update);
    },
    update: function (){
        // ...
    },
    events: {
        //...
    }
});

When every component is in place you can instantiate everything and bootstrap the application.

var router = new Router(); // router instance
 
var items = new Items(); // collection instance
var itemsView = new ItemsView({collection: items}); // collection view instance
    
Backbone.history.start(); // this initializes the router

items.fetch(); // this performs the initial load of the collection from the server

The fetch method is not your best option for the initial load of your models: it costs an extra HTTP request on startup. It's better to bootstrap the models inline in the page.
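A common way to load models inline is to have the server render the initial data into the page and seed the collection with reset instead of calling fetch. A minimal sketch, with the collection stubbed as a plain object and a made-up bootstrapItems global, so the snippet stays self-contained:

```javascript
// hypothetical: the server emitted <script>window.bootstrapItems = [...]</script>
var window = { bootstrapItems: [{ id: 1, title: 'first' }, { id: 2, title: 'second' }] };

// stub standing in for a Backbone.Collection: reset() replaces all models at once
var items = {
  models: [],
  reset: function (data) { this.models = data.slice(); }
};

// no extra HTTP round trip on startup: seed the collection from the inlined data
items.reset(window.bootstrapItems);
// items.models.length === 2
```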


I hope everything is clear. Next part: the server side!

Friday, April 26, 2013

Objects in HTML5 canvas (part 3: interacting with images)

This article is the third of a series of simple recipes that explains how to interact with objects in HTML5 canvas.
  • part 1 (Drawing shapes)
  • part 2 (Interacting with objects)
In this part we will solve the last problem: how to interact with objects with irregular shapes.

The simple answer is that you don't have to. You can wrap any image inside a rectangular box and detect where the image is transparent.

Introducing offscreen canvas

An offscreen canvas is a canvas that lives outside the DOM. It's very useful for two reasons:
  • you can do complex drawing once and reuse the result with a single drawImage call
  • you can use it to check where an image is transparent

Let's draw

First of all I start with an array of images (I used inline data-URI images, but you could use image URLs):

var objs = [
    {name:'firefox',
     src:'data:image/png;base64,iVBORw0KGgoAAAA.....',
     x:30,
     y:50,
     angle:Math.PI/2,
     },
    {name:'opera',
     src:'data:image/png;base64,iVBORw0KGgoAAAA...',
     x:40,
     y:30,
     angle:Math.PI/4,
     },
    {name:'chrome',
     src:'data:image/png;base64,iVBORw0...',
     x:60,
     y:100,
     angle:-Math.PI/4,
     }
];

Then I draw each image onto an offscreen canvas, and then the offscreen canvas onto the main canvas:

var i, obj;
ctx.clearRect(0, 0, canvas.width, canvas.height);
for (i = 0;i < objs.length;i++){
    obj = objs[i];
    (function (obj){
        var img = new Image();   // Creating a new img element
        img.onload = function(){
            // It's very important to wait until the image loads!!!
            // otherwise I have no clue of the real size of the image 
            ctx.save();

            ctx.translate(obj.x, obj.y);
            ctx.rotate(obj.angle);
            obj.width = img.width;
            obj.height = img.height;
            //I attach the offscreen canvas to the object
            obj.canvas = document.createElement("canvas");
            obj.canvas.width = img.width;
            obj.canvas.height = img.height;
            obj.ctx = obj.canvas.getContext('2d');
            // I draw the image on a off-screen canvas
            obj.ctx.drawImage(img, 0 , 0);
            // I draw the off-screen canvas on the main canvas
            ctx.drawImage(obj.canvas, -img.width/2 , -img.height/2);
            
            ctx.restore();
        };
        img.src = obj.src; // Set source path
    }(objs[i]));
}

Pay attention! I have applied the transformations to the main canvas, not to each off-screen canvas.

Detecting mouse clicks

Let's finish by detecting which shape the user is pointing at. As usual I check each object in reverse order and transform the pointer coordinates. Then I retrieve the pixel at the resulting coordinates from the offscreen canvas and test its color: if the alpha value is high enough, the pointer is on the image.

    // I get a single pixel
    imageData = obj.ctx.getImageData(p.x+(obj.width/2), p.y+(obj.height/2), 1, 1);
    // imageData contains r,g,b,a
    if(imageData.data[3] > 50){ // 50 is the threshold
        log.innerHTML = 'clicked on ' + obj.name;
        return;
    }            

I have taken a pixel from the canvas using getImageData. This function returns an ImageData object whose data array contains 4 values for each pixel (just one pixel in my case).
The values are Red, Green, Blue and Alpha (transparency), each between 0 and 255.
I set a threshold of 50 on alpha.
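The same alpha test can be tried on a plain pixel buffer: getImageData(...).data is a Uint8ClampedArray with exactly this 4-bytes-per-pixel RGBA layout (the pixel value here is made up for illustration):

```javascript
// one RGBA pixel as getImageData would return it: [r, g, b, a]
var data = new Uint8ClampedArray([255, 0, 0, 200]); // a mostly opaque red pixel
var alpha = data[3];
var hit = alpha > 50; // the threshold used above: true, so this counts as a click
```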
This is the whole code:

canvas.addEventListener('click', function (evt){
    var rect = canvas.getBoundingClientRect(),
        posx = evt.clientX - rect.left,
        posy = evt.clientY - rect.top,
        i = objs.length,
        obj, p, imageData;
        for (; i > 0; i--){
            obj = objs[i-1];
            // translate coordinates
            p = new Point(posx, posy);
            p.translate(-obj.x, -obj.y);
            p.rotate(-obj.angle);

            imageData = obj.ctx.getImageData(p.x+(obj.width/2), p.y+(obj.height/2), 1, 1);

            if(imageData.data[3] > 50){
                log.innerHTML = 'clicked on ' + obj.name;
                return;
            }            
            
        }
        log.innerHTML = '';

}, false);


And this is the result:


I hope this is helpful!

Friday, April 19, 2013

Objects in HTML5 canvas (part 2: interacting with objects)

In the previous part I showed how to draw simple shapes on the canvas.

If you need to interact with a specific object in the canvas you have to face a problem.

Using the DOM you can click on a node and the browser manages the whole interaction (it triggers the event, puts the DOM node in event.target, etc.).

But, as explained before, once you have drawn an object on a canvas it becomes part of the canvas. So the question is: how do we detect when a user clicks on a specific shape?

Getting canvas related coordinates

The first obstacle to overcome is finding where the user clicked on the canvas. When a user clicks you get the mouse coordinates relative to the page, so you must subtract the canvas position:

canvas.addEventListener('click', function (evt){
    var rect = canvas.getBoundingClientRect(),
        posx = evt.clientX - rect.left,
        posy = evt.clientY - rect.top;
    ...

It's not too complicated, is it?

Checking in reverse order

In the first part of this tutorial I put the objects to draw in an array.

var objs = [
    {name:'shape2',
     color:'green',
     x:30,
     y:50,
     angle:Math.PI/2,
     width: 40,
     height: 50
     },
    {name:'shape1',
     color:'red',
     x:40,
     y:30,
     angle:Math.PI/4,
     width: 30,
     height: 40
     }
];

Each shape is then drawn in order, each one over the previous.
For this reason I have to check the shapes in reverse order, from the object in the foreground to the one in the background.

        for (i = objs.length - 1; i >= 0; i--){
            obj = objs[i];
            ...

Pointer transformations

Now I have pointer coordinates and shape transformations:

    {name:'shape1',
     color:'red',
     x:40, // translation x
     y:30, // translation y
     angle:Math.PI/4, // rotation
     width: 30,
     height: 40
     }

How to detect if pointer coordinates are inside this shape?

My solution is a bit imaginative but it seems to work well.
I apply transformations to the pointer coordinates: the opposite of those I applied to the object before drawing it on the canvas. Then I check whether the new pointer coordinates fall inside the object's shape, pretending the shape was drawn around (0, 0).

// pointer transformation
p = new Point(posx, posy);
p.translate(-obj.x, -obj.y);
p.rotate(-obj.angle);

if (p.x > -(obj.width / 2) && p.x < obj.width / 2 && p.y > - (obj.height / 2) && p.y < obj.height / 2){
    log.innerHTML = 'clicked on ' + obj.name;
    return;
}            


I wrote this simple object to help with transformations (see part 1 for an explanation on transformations):

var Point = function (x, y){
    this.x = x;
    this.y = y;
};

Point.prototype.rotate = function (angle){
    var x, y;
    x = this.x*Math.cos(angle) - this.y * Math.sin(angle);
    y = this.x*Math.sin(angle) + this.y * Math.cos(angle);
    this.x = x;
    this.y = y;
    return this;
};

Point.prototype.translate = function (x, y){
    this.x = this.x + x;
    this.y = this.y + y;
    return this;
};
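A quick sanity check of the Point helper: rotating the point (1, 0) by 90 degrees (π/2) should land on (0, 1), up to floating-point error. The snippet repeats the Point definition so it runs standalone:

```javascript
var Point = function (x, y) {
    this.x = x;
    this.y = y;
};

Point.prototype.rotate = function (angle) {
    // standard 2D rotation around the origin
    var x = this.x * Math.cos(angle) - this.y * Math.sin(angle),
        y = this.x * Math.sin(angle) + this.y * Math.cos(angle);
    this.x = x;
    this.y = y;
    return this;
};

Point.prototype.translate = function (x, y) {
    this.x += x;
    this.y += y;
    return this;
};

var p = new Point(1, 0).rotate(Math.PI / 2);
// p.x is ~0 and p.y is ~1 (cos(π/2) is not exactly 0 in floating point)
```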

This is the whole code:

canvas.addEventListener('click', function (evt){
    var rect = canvas.getBoundingClientRect(),
        posx = evt.clientX - rect.left,
        posy = evt.clientY - rect.top,
        i, obj,p;
        // I get the position of the pointer
        // relative to the canvas
        for (i = objs.length - 1; i >= 0; i--){ // cycling in inverse order
            obj = objs[i];
            // pointer transformation
            p = new Point(posx, posy);
            p.translate(-obj.x, -obj.y);
            p.rotate(-obj.angle);

            if (p.x > -(obj.width / 2) && p.x < obj.width / 2 && p.y > - (obj.height / 2) && p.y < obj.height / 2){
                log.innerHTML = 'clicked on ' + obj.name;
                return;
            }            
            
        }
        log.innerHTML = '';

}, false);

This is the result (try to click on a shape):
In the next part I'll add finer control over images using off-screen canvases. Stay tuned!

Friday, April 12, 2013

Objects in HTML5 canvas (part 1: drawing shapes)

Canvas is an HTML5 element which can be used to draw graphics in the browser. Its API is very low level: it has no notion of shapes or paths after they are drawn.
It contains just a two-dimensional matrix of pixels.

This little guide shows how to use this API effectively to draw shapes and control them with the mouse.

In this simple example we are going to draw some rectangular shapes.

Save the shapes to draw

The first thing to do is to save the shapes inside an array:

var objs = [
    {name:'shape2',
     color:'green',
     x:30,
     y:50,
     angle:Math.PI/2,
     width: 40,
     height: 50
     },
    {name:'shape1',
     color:'red',
     x:40,
     y:30,
     angle:Math.PI/4,
     width: 30,
     height: 40
     }
];
Objects inside the array will be drawn one after another, with the first object in the background and the last object in the foreground.

var i, obj;
ctx.clearRect(0, 0, canvas.width, canvas.height); //empty the canvas
for (i = 0;i < objs.length;i++){
    obj = objs[i];
    ctx.save(); // save context

    ctx.translate(obj.x, obj.y); // apply transformations (I'll explain later)

    ctx.rotate(obj.angle);

    ctx.fillStyle = obj.color; // draw
    ctx.fillRect(
            -(obj.width / 2),
            -(obj.height / 2),
            obj.width,
            obj.height
        );
    
    ctx.restore(); // restore the context saved

}
The canvas coordinates start at (0,0) in the upper left corner, but you usually don't need to calculate the position of each shape. The canvas API lets you move the axes instead, and then draw each shape around the point (0,0).

Introducing transformations

Transformations move the X and Y axes in space. In the two-dimensional world there are three main types of transformations:

  • translation
  • skew
  • scale

They are usually represented with a matrix:

x   a1,b1,c1
y   a2,b2,c2
1    0,0,1

that expresses these three equations:

x' = a1x + b1y + c1
y' = a2x + b2y + c2
1 = 0x + 0y + 1

These equations transform the original (x, y) coordinates in a new pair of coordinates (x', y').
In two-dimensional transformations the last equation is an identity (it is always true).
If you pay enough attention you will notice that you can define a matrix that doesn't change the original coordinates. This is called the identity matrix:

x   1,0,0
y   0,1,0
1   0,0,1

x' = 1x + 0y + 0 = x
y' = 0x + 1y + 0 = y
1 = 0x + 0y + 1
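These equations are easy to check in code. A small helper that applies a 2D affine matrix [a1, b1, c1; a2, b2, c2] to a point (the third row is the fixed identity row, so it is left out):

```javascript
// m = [a1, b1, c1, a2, b2, c2], flattened row by row
function transform(m, x, y) {
    return {
        x: m[0] * x + m[1] * y + m[2],
        y: m[3] * x + m[4] * y + m[5]
    };
}

var identity = transform([1, 0, 0, 0, 1, 0], 3, 4);      // {x: 3, y: 4}
var translated = transform([1, 0, 10, 0, 1, 20], 3, 4);  // {x: 13, y: 24}
var scaled = transform([2, 0, 0, 0, 3, 0], 3, 4);        // {x: 6, y: 12}
```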

Now, starting with the identity matrix, I'll try to explain the transformations (I am not a mathematician, so forgive me if the explanation is not formally rigorous).

t1 and t2 are translations (along the x and y axes respectively). They move the axes left/right and up/down.
A translation of 0 means no translation.
In fact:

x   1,0,t1
y   0,1,t2
1   0,0,1

x' = 1x + 0y + t1 = x + t1
y' = 0x + 1y + t2 = y + t2
1 = 0x + 0y + 1

s1 and s2 mean scaling: s1 scales the x axis while s2 scales the y axis. A factor of 1 means no scaling.

x   s1,0,0
y   0,s2,0
1   0,0,1

x' = s1x + 0y + 0 = s1*x
y' = 0x + s2y + 0 = s2*y
1 = 0x + 0y + 1

sk1 and sk2 skew the x and y axes respectively:

x   1,sk1,0
y   sk2,1,0
1   0,0,1

x' = 1x + sk1y + 0 = x + sk1*y
y' = sk2x + 1y + 0 = y + sk2*x
1 = 0x + 0y + 1

Now the question is: what about rotation?

The rotation is a combination of two transformations: skew and scale.
The formula is

x   cos(angle),-sin(angle),0
y   sin(angle),cos(angle),0
1   0,0,1

x' = x * cos(angle) - y * sin(angle)
y' = x * sin(angle) + y * cos(angle)

The order in which you apply the transformations is very important: if we first rotate and then translate, we are translating along the new rotated axes.

As a rule of thumb, we usually need to:
  • translate
  • rotate
  • scale

Back to canvas

Canvas has a complete API for transformations. 
You can get more information on MDN.
In the previous example we applied the translate and rotate transformations.

These functions change the state of the drawing context and affect all subsequent drawing operations. So, in order to apply different transformations to each shape, we need to reset the drawing context every time: that's what save and restore do.
Otherwise each transformation would be applied on top of the previous one.
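save() and restore() behave like a stack of context states. A toy model of that behavior (the real context state also tracks styles, the clipping region, and more):

```javascript
// a stripped-down sketch of the save/restore state stack
var ctx = {
    state: { translated: false },
    _stack: [],
    save: function () { this._stack.push(Object.assign({}, this.state)); },
    restore: function () { this.state = this._stack.pop(); }
};

ctx.save();                   // remember the clean state
ctx.state.translated = true;  // apply a transformation for one shape
ctx.restore();                // back to the clean state for the next shape
// ctx.state.translated === false
```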


In the next part I'll show how to interact with objects.

Friday, January 11, 2013

Backbone.occamsrazor.js

Despite the awful name, this is one of the nicest experiments I have ever done (at least in Javascript :-) ).
I used occamsrazor.js to enable heterogeneous collections in backbone.js.

What?

The Backbone.js API is based on a RESTful architecture. This architecture is built on "resources" and "verbs". Resources are atomic units of information identified by URLs. Verbs form a uniform interface for performing basic operations on resources (GET, PUT, POST, DELETE).
Three of these verbs operate on a single resource (GET, PUT and DELETE). POST operates on a special resource called a "collection": a POST request on a collection creates a new resource. Backbone.js also simplifies the work by allowing you to issue a GET request to a collection, which is very useful to fetch all the models it contains.
All of these client/server exchanges use plain JSON objects. These objects are just a bunch of attributes put together, without any notion of the original model.
In order to transform this bunch of attributes into a full-fledged object (with methods, a prototype, etc.) the Backbone collection makes a simple assumption: all the objects contained in a collection must be of the same type (model).

A positive side effect: models and views are now plugins!

occamsrazor.js enables collections formed by different models. As a (positive) side effect, models and model views become plugins. I wrapped the result in a simple Backbone enhancement.

Models and collections

Let's start with a simple empty collection:
    var shapesCollection = new Backbone.Occamsrazor.Collection();
With a classic Backbone collection you would define the model of the objects it contains.
With Backbone.Occamsrazor.Collection, instead, you set what kinds of models it can contain. But first of all let's define some validators:
    var hasWidth = function (obj){
            if (obj instanceof Backbone.Model){
                return obj.has('width');
            }
            return 'width' in obj;
        },
        hasHeight = function (obj){
            if (obj instanceof Backbone.Model){
                return obj.has('height');
            }
            return 'height' in obj;
        },
        hasRadius = function (obj){
            if (obj instanceof Backbone.Model){
                return obj.has('radius');
            }
            return 'radius' in obj;
        },
        hasWidthHeight = occamsrazor.chain(hasWidth, hasHeight);
These validators take an object and return a positive score if the object has a certain feature.
In this case it is convenient to validate both plain objects and Backbone models.
You can find further explanations about validators in the occamsrazor.js documentation.
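Stripped of the Backbone.Model branch, the validators are just duck-typing checks on plain objects. A self-contained sketch of the same idea:

```javascript
// simplified versions of the validators above, for plain objects only
var hasWidth = function (obj) { return 'width' in obj; };
var hasHeight = function (obj) { return 'height' in obj; };
var hasRadius = function (obj) { return 'radius' in obj; };

var rectangle = { width: 10, height: 5 };
var circle = { radius: 3 };

var rectangleIsBoth = hasWidth(rectangle) && hasHeight(rectangle); // true
var circleHasWidth = hasWidth(circle);                             // false
```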
    shapesCollection.model.addConstructor(hasWidth, Backbone.Model.extend({
        getArea: function (){
            var w = this.get('width');
            return w*w;
        }
    }));

    shapesCollection.model.addConstructor(hasWidthHeight, Backbone.Model.extend({
        getArea: function (){
            var w = this.get('width'),
                h = this.get('height');
            return w*h;
        }
    }));

    shapesCollection.model.addConstructor(hasRadius, Backbone.Model.extend({
        getArea: function (){
            var r = this.get('radius');
            return Math.round(Math.PI * r * r);
        }
    }));
From now on the collection can work with three kinds of models transparently:
    shapesCollection.add([{width: 10, height: 5}, {width: 10}, {radius: 3}]);
    
    console.log(shapesCollection.at(0).getArea()); // 50
    console.log(shapesCollection.at(1).getArea()); // 100
    console.log(shapesCollection.at(2).getArea()); // 28

Views

If you use a heterogeneous collection you will surely need something analogous for the views: you will probably need a different view for each model. This works almost the same way as for models. First create a collection view:
    var shapesView = new Backbone.Occamsrazor.CollectionView({collection: shapesCollection, el: $('#myid')});
And then add the views:
    shapesView.itemView.addConstructor([null, hasRadius], Backbone.Occamsrazor.ItemView.extend({
        tagName:  'div',
        render: function (){
            this.$el.html('The area of the circle is ' + this.model.getArea());
            return this;
            
        }
    }));

    shapesView.itemView.addConstructor([null, hasWidth], Backbone.Occamsrazor.ItemView.extend({
        tagName:  'div',
        render: function (){
            this.$el.html('The area of the square is ' + this.model.getArea());
            return this;
        }
    }));

    shapesView.itemView.addConstructor([null, hasWidthHeight], Backbone.Occamsrazor.ItemView.extend({
        tagName:  'div',
        render: function (){
            this.$el.html('The area of the rectangle is ' + this.model.getArea());
            return this;
        }
    }));
You should notice that I used Backbone.Occamsrazor.ItemView as the view constructor function. This is nearly identical to Backbone.View: the only difference is in the way the model is passed to the constructor.
This emphasizes the fact that you must pass the model as an argument, and it allows occamsrazor to pick the right view for that model.

I have prepared a simple demonstration of the potential of this library here. It is the classic todomvc application by Addy Osmani. The interesting part is how simple it is to add plugins that enhance the application without touching the original code.

I hope this makes it clear how occamsrazor.js can be helpful!