Monday, April 6, 2015

Agile: how to embrace change

The Agile Manifesto gives us a better way of developing software, valuing:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

This overview says a lot from a project perspective, but what about the developer's perspective? How is it possible to adapt to continuous change?
Writing a good piece of code, fully tested, requires a lot of time and effort. Throwing it into the bin is not only a waste, it is also very bad for morale.

Luckily enough there are a few tricks that can solve most of these problems. Nothing special, just common sense, but still things worth saying:

Automation, automation, automation

OK, you are writing a piece of software and you already know that you are going to rewrite part of it, probably more than once. Automate all the tedious tasks; most of the time it is a pretty good investment, especially at the beginning of the project.
I can safely assume you can automate:

  • building
  • testing
  • releasing
  • deploying

All of this can take a while to set up; the key point is that you can reuse the same tools across many projects.
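As an illustration only (gulp is just one possible task runner; the package names and globs below are placeholders for whatever your project uses), building and testing can each become a one-line command:

var gulp = require('gulp');
var uglify = require('gulp-uglify');
var mocha = require('gulp-mocha');

// "build": bundle/minify the sources into dist/
gulp.task('build', function () {
    return gulp.src('src/**/*.js')
        .pipe(uglify())
        .pipe(gulp.dest('dist'));
});

// "test": run the test suite
gulp.task('test', function () {
    return gulp.src('test/**/*.js', {read: false})
        .pipe(mocha());
});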

Divide and conquer

The most common mistake in approaching the design of a piece of software is looking at it as a single entity. A better approach is to split it into simple, configurable libraries/modules that can be assembled at the last minute to get what you want.
This approach minimizes waste and enforces the separation of concerns. It is also an overall win in maintainability.
A good library should always solve a common problem. A very specific problem does not deserve a library of its own. What happens if your beautifully crafted, business-specific library needs to be thrown away because the requirements change?
Instead, make the business requirement a configurable option of your generic library.
There is also a huge bonus: generic libraries can be easily reused across many projects.
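A tiny, hypothetical sketch of the idea: keep the library generic and move the business-specific detail into an option.

// Hypothetical example: a generic price formatter.
// The customer-specific details (currency, decimals) become configuration
// instead of being baked into a "customer X" library.
function priceFormatter(options) {
    options = options || {};
    var currency = options.currency || "EUR";
    var decimals = options.decimals !== undefined ? options.decimals : 2;
    return function (amount) {
        return currency + " " + amount.toFixed(decimals);
    };
}

// The same library is reused by two different customers.
var formatForA = priceFormatter({ currency: "GBP" });
var formatForB = priceFormatter({ currency: "USD", decimals: 0 });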

Documentation

Writing good documentation improves the thing that matters most: your code should be reused more than once. Good documentation should contain the purpose, a design explanation, installation instructions, the API and some examples. These are particularly important.
In this case documentation is more important than working software. Bugs can be fixed; missing or outdated documentation means your library can't be used by anyone, not even you after a month.

The power of open source

A fully documented and tested generic library is a big investment, and it is often valuable for other people as well. Why not share it with others?
It can enrich the ecosystem of the platform that you are using, and you can also get valuable feedback.

This method, applied: a true story

The first example that comes to my mind is several years old. I was working in a company where one of the most successful products was a solution for building online catalogs.
When I got hired, my first task was the third attempt at building this software. The first two attempts were far too tailored to the previous customers to be reused for the next ones.
The main problem was embedding specific customer requirements in the project; for example, the design of the product page was something every customer wanted to be different.

The business input was: "I want a catalog product", way too generic and misleading from the developer's point of view. There was, though, a certain amount of loosely coupled, common functionality. The real point was to identify these functionalities and build each of them as a separate product.
This is a classic case in which developers should not simply do what they are told; they should use their expertise to build a system that fulfils the requirements.
The product page, for example, was never part of the product, but I developed a system to build a page out of subcomponents, all of them configurable.

After a few attempts we managed to have a very flexible collection of products for building catalog applications. By the time I left the company it had been reused several times ...

Bottom line

It is possible to be agile, to foresee and to adapt; after all, it is our job.


Wednesday, February 25, 2015

High performance animations (a few tricks)

The web browser is capable of very smooth animations, but there are a few gotchas. Here is what you should know.

jQuery

With jQuery, animations went mainstream. But jQuery.animate is possibly the least efficient way to animate elements nowadays.
It can still be useful on old browsers or in specific cases, but it is better not to use it on mobile browsers.

CSS transitions and animations

Native animations usually run faster than JavaScript ones, so it is quite obvious to use them when possible. There are plenty of tutorials on how to use them, so google for it!
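For instance, a simple native transition (class names and timings here are just placeholders) might look like this:

/* Hypothetical example: fade and slide an element in by toggling a class.
   The animation runs natively, without any JavaScript timer. */
.panel {
    opacity: 0;
    transform: translateX(-20px);
    transition: opacity 0.3s ease-out, transform 0.3s ease-out;
}

.panel.is-visible {
    opacity: 1;
    transform: translateX(0);
}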

Composite layer css properties

When you change a CSS property you trigger some operations in the browser. These are:

  • recalculating sizes and positions (layout)
  • redrawing elements on the screen (paint)
  • composite all elements together (composite)

The topmost operation triggers the ones below it. Furthermore, the more elements are involved, the worse the animation performs.
You can visualize and debug this process with the Timeline panel inside the Chrome developer tools.
The trick here is to use CSS properties that trigger only the composite step. These are opacity and transform. This article contains what you need to know.
Sadly, using these properties is not enough to get the performance jump: you should also trigger the creation of a new compositing layer using these CSS rules:

.will-change{
    transform: translate3d(0, 0, 0); 
    perspective: 1000;
    backface-visibility: hidden;
}

For better support you can add these browser prefixes:

.will-change{
    -webkit-transform: translate3d(0, 0, 0); /*old chrome and safari*/
    -o-transform: translate3d(0, 0, 0); /*old opera*/
    -moz-transform: translate3d(0, 0, 0); /*old FF*/
    -ms-transform: translate3d(0, 0, 0); /*IE9*/
    transform: translate3d(0, 0, 0);
    -webkit-perspective: 1000;
    -o-perspective: 1000; /*old opera*/
    perspective: 1000;
    -webkit-backface-visibility: hidden;
    -o-backface-visibility: hidden; /*old opera*/
    backface-visibility: hidden;
}

Doing this, opacity and transform are managed by the GPU when possible.
This can be a bit awkward, and for this reason browser vendors have created a new CSS rule: will-change.
It lets the browser know that an element is going to change, so the element can be put inside its own compositing layer (it is not yet widely available so, sadly, for now it is better to stick with the hack above).
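Once it is widely supported, the whole hack above can be replaced by a single declaration (a minimal sketch):

.will-change {
    /* hint to the browser: transform and opacity are going to be animated */
    will-change: transform, opacity;
}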

Request Animation Frame


The way JS animations work is by changing a numeric CSS property over time. For a long time the only way to schedule an event in JS was setTimeout (and its brother setInterval). As I mentioned, they are still used by jQuery.
A while ago browser vendors introduced "requestAnimationFrame". It is a much better way to do it, as it executes a piece of code right before the page refresh (approximately 60 times a second).
This is a polyfill for any browser (in the worst case it falls back to setTimeout).
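As a minimal sketch (the element id, distance and duration are placeholders), a time-based animation loop with requestAnimationFrame looks like this:

// Move a hypothetical #box element 200px to the right over 500ms.
var box = document.getElementById('box');
var duration = 500;
var start = null;

function step(timestamp) {
    if (start === null) start = timestamp;
    var progress = Math.min((timestamp - start) / duration, 1);
    // transform only triggers the composite step
    box.style.transform = 'translateX(' + (progress * 200) + 'px)';
    if (progress < 1) {
        window.requestAnimationFrame(step);
    }
}

window.requestAnimationFrame(step);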

If your animation depends on user interactions, it can be a good idea to throttle the changes to the CSS using requestAnimationFrame. In this example I am using a queue with a simple policy that keeps only the last function, discarding the others.

// policy: keep only the most recent function, discard the others
function lastOne(q){
    return q.length ? [q.pop()] : [];
}

function getRenderingQueue(policy){
    var queue = [];
    var isRunning = false;
    policy = policy || function (q){return q;}; // default policy: keep everything

    // runs right before the next repaint
    var render = function (){
        var f;
        queue = policy(queue);
        isRunning = false;
        while (f = queue.shift()){
            f();
        }
    };
    return {
        empty: function (){
            queue = [];
        },
        push: function (func){
            queue.push(func);
            // schedule at most one render per frame
            if (!isRunning){
                isRunning = true;
                window.requestAnimationFrame(render);
            }
        }
    };
}


var renderingQueue = getRenderingQueue(lastOne);

renderingQueue.push(function (){
    //changing a piece of CSS
});


Depending on your application you can decide to use a different policy.
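For instance, a policy that keeps only the first queued function and discards the rest is just as easy to plug in:

// Alternative policy: keep only the first queued function, discard the others.
function firstOne(q){
    return q.length ? [q[0]] : [];
}

var throttledQueue = getRenderingQueue(firstOne);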

This is it, I'll soon put this in context.

Friday, January 30, 2015

urlwalker.js, a context-aware router for Express

Urlwalker.js is a router middleware for Connect/Express.js.

It is not a substitute for the default Express.js router: it works together with it, trying to get an object from a fragment of URL (it literally walks the URL segment by segment, hence the name). You can then use this object as the model in the function called by the default Express.js router.
This process is called URL traversal. The concept is not by any means original: I took the inspiration from other web frameworks such as Zope and Pyramid.

Confused? Let's take a step back.

URL, model and view

Using REST principles, it seems natural to map a URL to a hierarchy of objects:

  • http://www.example.com/roald_dahl/the_chocolate_factory

This URL represents a relation between two objects: the author (roald_dahl) and one of his books (the_chocolate_factory). The latter is the model used by the view function. Let's put this together using Express.js:

app.get("/:author/:book", function (req, res){
    // getting the book object
    // doing something with the object
    // return the result
});
The original "Expressjs" way to get the model is to do it directly inside the function (like the previous example) or (better) using app.param. But it is not flexible enough for managing a deeply arbitrary nested structure.
Furthermore, I believe it can be useful to split the URL into two different parts: the first part gets an object and the second one transforms that object:

  • http://www.example.com/roald_dahl/the_chocolate_factory/index.json
  • http://www.example.com/roald_dahl/the_chocolate_factory/index.html

Both of these URLs point to the same object but return different representations of it.
Urlwalker.js follows this convention.

How to use it


The middleware is initialized with a function and a "root" object.

var traversal = require('urlwalkerjs').traversal;
var traversal_middleware = traversal(function (obj, segment, cb){
    return cb({ ... new obj ... })
    // or
    return cb(); // end of traversing
},root_object);

Then you can use it as a normal middleware and add the regular routing:

app.use(traversal_middleware);

app.get('index.json', function(req, res) {
  res.send(req.context);
});

The routing process starts with an object. I call it the "root object" and it is the second argument passed to the middleware. It can be anything, even undefined.
The function (the first argument of the middleware) is invoked for every URL segment. The first time it is invoked with the first segment and the root object, and it returns an object. The second time it is called with the second segment and the object returned previously. The process is repeated until it can't find a match. Then the last object is put in "req.context" and control is passed to the next middleware.
For this URL:

  • http://www.example.com/roald_dahl/the_chocolate_factory/index.json

The function is invoked twice:

  • from the root object and the segment "roald_dahl" I get an author object
  • from the author object and "the_chocolate_factory" I get a book object

Then the Express.js handler is called with the book object inside req.context.
To clarify the process, I have added an example here.
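To make the process concrete, here is a hand-written traversal function for the authors/books example (the data shape is my own assumption, not something mandated by urlwalker.js):

// Root object: a catalog of authors, each with a collection of books.
var data = {
    authors: {
        roald_dahl: {
            name: "Roald Dahl",
            books: {
                the_chocolate_factory: { title: "Charlie and the Chocolate Factory", year: 1964 }
            }
        }
    }
};

// Walk one segment: given the current object and a URL segment, return the next object.
function getObject(obj, segment, cb){
    if (obj.authors && obj.authors[segment]) return cb(obj.authors[segment]);
    if (obj.books && obj.books[segment]) return cb(obj.books[segment]);
    return cb(); // no match: the traversal stops here
}

app.use(traversal(getObject, data));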

An example with occamsrazor.js


Defining this function by hand, with such a complex behaviour, can be difficult and not very flexible.
For this reason you can use occamsrazor.js to dynamically add new behaviours to the function (see example 2).
So it becomes:

var getObject = occamsrazor();
var has_authors = occamsrazor.validator().has("authors");
var has_books = occamsrazor.validator().has("books");

getObject.add(null, function (obj, id, cb){
    return cb(); // this will match if nothing else matches
});

getObject.add(has_authors, function (obj, id, cb){
    return cb(obj.authors[id]);
});

getObject.add(has_books, function (obj, id, cb){
    return cb(obj.books[id]);
});

var traversal_middleware = traversal(getObject, data);
app.use(traversal_middleware);
app.get('index.json', function(req, res) {
  res.send(req.context);
});

At the beginning it might seem a bit cumbersome, until you realize how easily you can extend the behaviour:

var has_year = occamsrazor.validator().has("year");

getObject.add(has_year, function (obj, id, cb){
    return cb(obj.year[id]);
});

Plugin all the things

But why stop here? Why not get the view with a similar mechanism (example 3)? Let's replace the Express.js routing completely with this:

...
var view = require('urlwalkerjs').view;
var getView = occamsrazor();

var view_middleware = view(getView);

getView.add(null, function (url, method, context, req, res, next){
    next(); // this will match if nothing else matches
});

getView.add(["/index", "GET", has_books], function (url, method, context, req, res, next){
  res.send('this is the author name: ' + req.context.name);
});

getView.add(["/index", "GET", has_authors], function (url, method, context, req, res, next){
  res.send('these are the authors available: ' + Object.keys(req.context.authors));
});

getView.add(["/index", "GET", has_title], function (url, method, context, req, res, next){
  res.send('Book: ' + req.context.title + " - " + req.context.year);
});

app.use(view_middleware);

A plugin architecture is very helpful, even if you don't need actual plugins at all. It allows you to apply the open/closed principle and extend your application safely.