Monday, April 6, 2015

Agile, how to embrace the change

The agile manifesto gives us a better way of developing software, valuing:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

This overview tells a lot from a project perspective, but what about the developer's perspective? How is it possible to adapt to continuous change?
Writing a good piece of code, fully tested, requires a lot of time and effort. Throwing it into the bin is not only a waste, it is also very bad for morale.

Luckily enough there are a few tricks that can solve most of the problems. Not a big deal, just common sense, but still things worth saying:

Automation, automation, automation

Ok, you are writing a piece of software and you already know that you are going to rewrite part of it, probably more than once. Automate all the tedious tasks: most of the time it is a pretty good investment, especially at the beginning of the project.
I can safely assume you can automate:

  • building
  • testing
  • releasing
  • deploying 
All this can take a while to set up; the key point is that you can reuse these setups across many projects.
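As a minimal sketch (assuming a Node.js project built with grunt; the task names and paths are just placeholders), a reusable build configuration can be as small as this:

module.exports = function (grunt) {
    grunt.initConfig({
        // minify all the sources into a single file
        uglify: {
            dist: { files: { 'dist/app.min.js': ['src/**/*.js'] } }
        },
        // rebuild automatically whenever a source file changes
        watch: {
            scripts: { files: ['src/**/*.js'], tasks: ['default'] }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');

    // "grunt" runs the build, "grunt watch" keeps rebuilding on change
    grunt.registerTask('default', ['uglify']);
};

The same file (or an equivalent set of npm scripts) can be copied almost unchanged from one project to the next.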

Divide and conquer

The common mistake in approaching the design of a piece of software is looking at it as a single entity. The best approach instead is to split it into simple and configurable libraries/modules that can be assembled at the last minute to get what you want.
This approach minimizes waste and enforces the separation of concerns. It is also an overall win in maintainability.
A good library should always solve a common problem. A very specific problem does not deserve to be taken into consideration: what happens if your beautifully crafted, business-specific library needs to be thrown away because the requirements change?
Instead, make the business requirement a configurable option of your generic library.
There is also a huge bonus: generic libraries can be easily reused across many projects.
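For example (a hypothetical sketch, all the names are made up): instead of hard-coding a business rule into the library, expose it as an option.

// generic library: it knows nothing about the business
function createPriceFormatter(options) {
    options = options || {};
    var currency = options.currency || 'EUR';
    var transform = options.transform || function (price) { return price; };

    return function (price) {
        return currency + ' ' + transform(price).toFixed(2);
    };
}

// business-specific part: it lives in the configuration, not in the library
var formatDiscountedPrice = createPriceFormatter({
    currency: 'GBP',
    transform: function (price) { return price * 0.8; } // 20% discount
});

When the discount requirement changes (or disappears), only the configuration is thrown away, not the library.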

Documentation

Writing good documentation improves the thing that matters most: your code being reused more than once. Good documentation should contain the purpose, a design explanation, installation instructions, the API and some examples. These last are particularly important.
In this case documentation is more important than working software: bugs can be fixed, but missing or outdated documentation means your library can't be used by anyone, not even by you after a month.

The open source power

A fully documented and tested generic library is a big investment, and it is often valuable enough for other people as well. Why not share it with others?
It can enrich the ecosystem of the platform you are using, and you can also get valuable feedback.

This method, applied: a true story

The first example that comes to my mind is several years old. I was working in a company where one of the most successful products was a solution for building online catalogs.
When I got hired, my first task was the third attempt to build this software. The first two attempts were way too tailored to the previous customers to be reused for the next ones.
The main problem was embedding specific customer requirements in the project; for example, the design of the product page was something every customer wanted to be different.

The business input was: "I want a catalog product", way too generic and misleading from the developer's point of view. There were, though, a certain number of loosely coupled, common functionalities. The real point was to identify these functionalities and build each of them as a separate product.
This is a classic case in which developers should not just do what they are told; they should use their expertise to build a system that fulfils the requirements.
The product page, for example, was never part of the product, but I developed a system to build a page out of subcomponents, all of them configurable.

After a few attempts we managed to have a very flexible collection of products for building catalog applications. By the time I left the company it had been used several times ...

Bottom line

It is possible to be agile, to foresee and to adapt; after all, it is our job.


Wednesday, February 25, 2015

High performance animations (a few tricks)

The web browser is capable of very smooth animations, but there are a few gotchas. Here is what you should know.

jQuery

With jQuery, animations went mainstream, but jQuery.animate is possibly the least efficient way to animate elements nowadays.
It can still be useful on old browsers or in specific cases, but it is better not to use it on mobile browsers.

CSS transitions and animations

Native animations usually run faster than JS ones, so it is quite obvious to use them when possible. There are plenty of tutorials on how to use them, so google for them!

Composite layer css properties

When you change a CSS property you trigger some operations in the browser. These are:

  • recalculating sizes and positions (layout)
  • redrawing elements on the screen (paint)
  • composite all elements together (composite)

The topmost operation triggers the ones below it. Furthermore, the more elements are involved, the worse the animation performs.
You can visualize and debug this process in the useful Timeline panel (inside the Chrome developer tools).
The trick here is to use CSS properties that trigger only the composite step: these are opacity and transform. This article contains what you need to know.
Sadly, using these is not enough to get the performance jump: you should also trigger the creation of a new composite layer using these CSS rules:

.will-change{
    transform: translate3d(0, 0, 0);
    perspective: 1000px;
    backface-visibility: hidden;
}

For a better support you can add these browser prefixes:

.will-change{
    -webkit-transform: translate3d(0, 0, 0); /*old chrome and safari*/
    -o-transform: translate3d(0, 0, 0); /*old opera*/
    -moz-transform: translate3d(0, 0, 0); /*old FF*/
    -ms-transform: translate3d(0, 0, 0); /*IE9*/
    transform: translate3d(0, 0, 0);
    -webkit-perspective: 1000px;
    -o-perspective: 1000px; /*old opera*/
    perspective: 1000px;
    -webkit-backface-visibility: hidden;
    -o-backface-visibility: hidden; /*old opera*/
    backface-visibility: hidden;
}

With this in place, opacity and transform are managed by the GPU when possible.
This can be a bit awkward, and for this reason browser vendors have created a new CSS property: will-change.
It lets the browser know that an element is going to change, so it can be put inside a compositing layer (it is not yet widely available, so sadly, for now, it is better to stick with the hack).
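For reference, this is how it can be driven from JS once the support is there (a minimal sketch, assuming a browser that already implements will-change; the element is hypothetical):

var el = document.querySelector('.animated');

// hint the browser just before animating...
el.style.willChange = 'transform, opacity';

// ...and drop the hint when the animation is over
el.addEventListener('transitionend', function (){
    el.style.willChange = 'auto';
});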

Request Animation Frame


The way JS animations work is by changing a numeric CSS property over time. For a long time the only way to schedule an event in JS was setTimeout (and its brother setInterval); as I mentioned, they are still used by jQuery.
A while ago browser vendors introduced "requestAnimationFrame". It is a much better way to do it, as it executes a piece of code right before the page refresh (approximately 60 times a second).
This is a polyfill for any browser (in the worst case it falls back to setTimeout).
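A minimal sketch of an animation loop built on requestAnimationFrame (the element and the values are hypothetical):

var box = document.querySelector('.box');
var start = null;

function step(timestamp){
    if (start === null) start = timestamp;
    // progress goes from 0 to 1 over one second
    var progress = Math.min((timestamp - start) / 1000, 1);
    box.style.transform = 'translateX(' + (progress * 200) + 'px)';
    if (progress < 1) window.requestAnimationFrame(step);
}

window.requestAnimationFrame(step);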

If your animation depends on user interactions, it can be a good idea to throttle the changes to the CSS using requestAnimationFrame. In this example I am using a queue with a simple policy that keeps only the last function, discarding the others.

function lastOne(q){
    return q.length ? [q.pop()] : [];
}

function getRenderingQueue(policy){
    var queue = [];
    var isRunning = false;
    policy = policy || function (q){return q;}; // default policy: keep everything

    var render = function (){
        var f;
        queue = policy(queue); // filter the queued functions with the policy
        isRunning = false;
        while (f = queue.shift()){
            f(); // run what is left, all within the same frame
        }
    };
    return {
        empty: function (){
            queue = [];
        },
        push: function (func){
            queue.push(func);
            if (!isRunning){
                isRunning = true;
                window.requestAnimationFrame(render);
            }
        }
    }
}


var renderingQueue = getRenderingQueue(lastOne);

renderingQueue.push(function (){
    //changing a piece of CSS
});


Depending on your application, you can decide to use a different policy.
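For example, a hypothetical policy that keeps only the first queued function per frame and discards the rest:

function firstOne(q){
    return q.length ? [q.shift()] : [];
}

var renderingQueue = getRenderingQueue(firstOne);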

This is it, I'll soon put this in context.

Friday, January 30, 2015

urlwalker.js, a context aware router for express

Urlwalker.js is a router middleware for Connect/Express.js.

It is not a substitute for the default Express.js router; it works together with the latter, trying to get an object from a fragment of the URL (it literally walks the URL segment by segment, hence the name). You can then use this object as the model in the function called by the default Express.js router.
This process is called URL traversal. The concept is not by any means original: I took the inspiration from other web frameworks such as Zope and Pyramid.

Confused? Let's take a step back

URL, model and view

Using REST principles, it seems natural to map a URL to a hierarchy of objects:

  • http://www.example.com/roald_dahl/the_chocolate_factory

This URL represents a relation between two objects: the author (roald_dahl) and one of his books (the_chocolate_factory). The latter is the model used by the function. Let's put this together using express.js:

app.get("/:author/:book", function (req, res){
    // getting the book object
    // doing something with the object
    // return the result
});
The original "Expressjs" way to get the model is to do it directly inside the function (like the previous example) or (better) using app.param. But it is not flexible enough for managing a deeply arbitrary nested structure.
Furthermore I believe it can be useful to split the URL in two different parts. The first part is for getting an object and the second one to transform the object:

  • http://www.example.com/roald_dahl/the_chocolate_factory/index.json
  • http://www.example.com/roald_dahl/the_chocolate_factory/index.html

Both of these URLs point to the same object but return different representations of that object.
Urlwalker.js follows this convention.

How to use it


The middleware is initialized with a function and a "root" object.

var traversal = require('urlwalkerjs').traversal;
var traversal_middleware = traversal(function (obj, segment, cb){
    return cb({ ... new obj ... })
    // or
    return cb(); // end of traversing
},root_object);

Then you can use it as a normal middleware and add the regular routing:

app.use(traversal_middleware);

app.get('index.json', function(req, res) {
  res.send(req.context);
});

The routing process starts with an object. I call it the "root object" and it is the second argument passed to the middleware. It can be anything, even undefined.
The function (the first argument of the middleware) is invoked for every URL segment. The first time it is invoked with the first segment and the root object, and it returns an object. The second time it is called with the second segment and the object returned previously. The process is repeated until it can't find a match. Then it puts the last object in "req.context" and passes control to the next middleware.
For this URL:

  • http://www.example.com/roald_dahl/the_chocolate_factory/index.json

The function is invoked twice:

  • from the root object and the segment "roald_dahl" I get an author object
  • from the author object and "the_chocolate_factory" I get a book object

Then the express.js function is called with the book object inside req.context.
To clarify the process, I have added an example here.
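Concretely, the root object and the traversal function for the books example could look something like this (a simplified sketch with hypothetical data; the linked example may differ):

var root_object = {
    authors: {
        roald_dahl: {
            name: 'Roald Dahl',
            books: {
                the_chocolate_factory: { title: 'The chocolate factory', year: 1964 }
            }
        }
    }
};

var traversal_middleware = traversal(function (obj, segment, cb){
    if (obj.authors && obj.authors[segment]) return cb(obj.authors[segment]);
    if (obj.books && obj.books[segment]) return cb(obj.books[segment]);
    return cb(); // no match: stop traversing here
}, root_object);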

An example with occamsrazor.js


Defining this function with such a complex behaviour can be difficult, and it is not very flexible.
For this reason you can use occamsrazor.js to dynamically add new behaviours to the function (see example 2).
So it becomes:

var getObject = occamsrazor();
var has_authors = occamsrazor.validator().has("authors");
var has_books = occamsrazor.validator().has("books");

getObject.add(null, function (obj, id, cb){
    return cb(); // this will match if no one else match
});

getObject.add(has_authors, function (obj, id, cb){
    return cb(obj.authors[id]);
});

getObject.add(has_books, function (obj, id, cb){
    return cb(obj.books[id]);
});

var traversal_middleware = traversal(getObject, data);
app.use(traversal_middleware);
app.get('index.json', function(req, res) {
  res.send(req.context);
});

At the beginning it might seem a bit cumbersome, until you realize how easily you can extend the behaviour:

var has_year = occamsrazor.validator().has("year");

getObject.add(has_year, function (obj, id, cb){
    return cb(obj.year[id]);
});

Plugin all the things

But why stop here? Why not get the view with a similar mechanism (example 3)? Let's replace the Express.js routing completely with this:

...
var view = require('urlwalkerjs').view;
var getView = occamsrazor();

var view_middleware = view(getView);

getView.add(null, function (url, method, context, req, res, next){
    next(); // this will match if no one else match
});

getView.add(["/index", "GET", has_books], function (url, method, context, req, res, next){
  res.send('this is the author name: ' + req.context.name);
});

getView.add(["/index", "GET", has_authors], function (url, method, context, req, res, next){
  res.send('these are the authors available: ' + Object.keys(req.context.authors));
});

getView.add(["/index", "GET", has_title], function (url, method, context, req, res, next){
  res.send('Book: ' + req.context.title + " - " + req.context.year);
});

app.use(view_middleware);

A plugin architecture is very helpful, even if you don't need external plugins at all: it allows you to apply the open/closed principle and extend your application safely.

Thursday, January 15, 2015

Why I have stopped using requirejs (and you should too)

I have used requirejs extensively and I have written many posts about it. I think it is very ingenious and well designed.
It tries to solve more than one problem at the same time (in a very elegant way) but nowadays these problems are not so important and they have better solutions.

Loading scripts asynchronously

This was one of the main selling points of requirejs in the past. Now it is not necessary anymore: it is much better to move the script tags back to the top and use the async attribute, as described by this great article. The async attribute is now very well supported!

Loading dependencies

Requirejs can dynamically load dependencies when they are required, but often you want to have control. Sometimes it is better to include a library when you load the page (bundling more than one library together so they are saved in the cache) and sometimes you want to load it on demand. In that case it is very easy to do something like:

var script = document.createElement('script');
script.src = "http://www.example.com/script.js";
document.head.appendChild(script);

Isolate dependencies

Requirejs is even able to run two different versions of the same library. But this is a feature rarely used and, to be honest, in 99.9% of cases using the module pattern is more than enough.
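As a reminder, a minimal sketch of the module pattern (the name "myLib" is just a placeholder):

var myLib = (function (){
    var counter = 0; // private, not visible outside the module

    return {
        increment: function (){ return ++counter; }
    };
}());

myLib.increment(); // 1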

The only issue

The only issue with having all these asynchronous bundles (using the async attribute) is managing the execution order. You can use a tiny library like this one:

(function (w){

    var go = {}, wait = {};

    // register a function to run once the bundle "dep" has been executed
    w.later = function (dep, func){
        if (go[dep]) func();
        else {
            wait[dep] = wait[dep] || [];
            wait[dep].push(func);
        }
    };

    // signal that the bundle "dep" has been executed, running the waiting functions
    w.later.go = function (dep){
        var funcs = wait[dep] || [], l = funcs.length;
        delete wait[dep];
        go[dep] = true;
        for (var i = 0; i < l; i++){
            try{
                funcs[i]();
            }
            catch (e){
                w.console && w.console.error(e);
            }
        }
    };
}(window));

This could be the only piece of JS loaded synchronously. For maximum performance you can also minify it and inline it in the HTML.
Then you can manage the dependencies at execution time:

later('foo', function (){
   // waiting for the bundle named foo (it is an arbitrary string)
});

You only need to put this instruction at the end of the bundle "foo":

later.go("foo");

There are still valid use cases for requirejs, but I suggest keeping your build process lean, tweaking performance by hand, and using the async attribute and the module pattern.
Simpler and more performant!

Edited: and what about "defer"?

This article suggests using async and defer together to improve performance on older browsers. I suggest not doing that, unless you know what your script is doing, because of this bug. The bug is even worse than it seems: if you inject a script tag inside a deferred script, the execution will stop, waiting for the injected script to be downloaded and executed. So be careful!


Tuesday, January 6, 2015

Less Sass and (mo)Rework

I apologize for the pun. In this blog post I'd like to give my opinion about css preprocessors.

Sass and Less

Sass and Less are the most popular CSS preprocessors. I think CSS preprocessors are powerful tools, but often they are not going to help you write better CSS. Most of their powerful features can be abused to make the wrong choices in terms of code reuse, because they don't promote reusability of the produced CSS rules.
There are still (a few) acceptable use cases: producing CSS demos, building very repetitive CSS rules such as responsive grids, and things like that.

But in the hands of inexperienced developers they produce a mess: in my experience bad Less/Sass is much worse than bad CSS.

Writing proper css

Writing CSS is not too bad; the real challenge is keeping it maintainable, and in this challenge Sass and Less are not helping. What is useful is designing the CSS for reuse, following simple principles like these:
  • Naming is particularly important. With a namespace you can avoid conflicts. You can also separate rules into "general rules", "module rules" and "exceptions". For this you can adhere to, or find inspiration in, SMACSS.
  • Avoid working with specificity at any cost. Use namespacing instead! So don't use selectors like ".my-module .my-special-class" but ".my-module-special-class".
  • One rule, one feature. Any rule should contain just a single piece of functionality. This functionality can be a single rule or a combination of rules. CSS frameworks (like Bootstrap) are full of examples.

Enter reworkcss (and Myth.io)

Actually, there are a couple of things that are really useful in CSS preprocessors: they can automatically take care of browser prefixes, and they let you use variables and calculations (server side).
My favourite tool for doing this is rework. It is not a CSS preprocessor but a CSS parser: it produces an AST (a JS object), you can change this object and write the CSS back.
It has a plugin system, so it is very easy to create custom extensions.
You always start and end with syntactically valid CSS (selector {property: value;}). You don't have to change syntax, syntax highlighters keep working fine, and the result can easily be used together with other static analysis tools.
You can roll out your own plugin for doing complex operations that are not possible at all with Sass/Less.
There are already a lot of plugins, like one for using variables, another one for addressing browser prefixes, etc.
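As an example of rolling your own, here is a hypothetical plugin that prefixes every class selector with a namespace (a minimal sketch: it assumes the plugin receives the stylesheet AST, an object with a "rules" array, and it ignores nested rules such as the ones inside @media):

var rework = require('rework');

function namespace(prefix){
    return function (style){
        style.rules.forEach(function (rule){
            if (!rule.selectors) return; // skip comments, @media, etc.
            rule.selectors = rule.selectors.map(function (selector){
                return selector.replace(/\.([\w-]+)/g, '.' + prefix + '-$1');
            });
        });
    };
}

var output = rework('.button { color: red; }')
    .use(namespace('my-module'))
    .toString();
// the selector in the output becomes .my-module-button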

grunt-css-annotator

This grunt plugin is an example of using rework. It scans some web pages (using PhantomJS) and adds an annotation in a comment when a CSS selector is used in those pages.
There is also an option to remove rules with specific annotations. It can be useful if you want to do some spring cleaning in your CSS, and it is only one example of the power of rework!

Myth.io

Myth.io is a CSS preprocessor built using a collection of useful rework plugins. It is designed to polyfill some of the CSS features of tomorrow, like variables and calculations. But you can also extend it with other custom plugins!

EDIT: I have used myth.io for a serious project. It was mostly a pleasant experience, but there is a severe limitation in the use of CSS variables! https://github.com/segmentio/myth/issues/10