
Wednesday, February 25, 2015

High performance animations (a few tricks)

The web browser is capable of very smooth animations, but there are a few gotchas. Here is what you should know.

jQuery

With jQuery, animations went mainstream, but jQuery.animate is possibly the least efficient way to animate elements nowadays.
It can still be useful on old browsers or in specific cases, but it is better not to use it on mobile browsers.
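
For reference, a typical jQuery animation looks like this (a minimal sketch, not from the original post; the .box selector is made up):

// Every frame is scheduled with a timer, and animating `left` forces layout,
// so it is easy to end up with a janky animation.
$('.box').animate({ left: '200px', opacity: 0.5 }, 400);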

CSS transitions and animations

Native animations usually run faster than JS ones, so it is quite obvious to use them when possible. There are plenty of tutorials on how to use them, so google for them!
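
As a minimal sketch (the class names are my own), a transition can be as simple as:

/* the browser runs the fade natively once the class is toggled */
.fade {
    opacity: 1;
    transition: opacity 0.3s ease-out;
}
.fade.is-hidden {
    opacity: 0;
}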

Composite layer css properties

When you change a CSS property you trigger some operations in the browser. These are:

  • recalculating sizes and positions (layout)
  • redrawing elements on the screen (paint)
  • compositing all elements together (composite)

The topmost operation triggers the ones below it. Furthermore, the more elements are involved, the worse the animation performs.
You can visualize and debug this process in the useful Timeline panel (inside the Chrome developer tools).
The trick here is to use CSS properties that trigger only the composite step. These are opacity and transform. This article contains what you need to know.
Sadly it is not enough to use these to get the performance jump: you should also trigger the creation of a new compositing layer using these CSS rules:

.will-change{
    transform: translate3d(0, 0, 0);
    perspective: 1000px;
    backface-visibility: hidden;
}

For better support you can add these browser prefixes:

.will-change{
    -webkit-transform: translate3d(0, 0, 0); /*old chrome and safari*/
    -o-transform: translate3d(0, 0, 0); /*old opera*/
    -moz-transform: translate3d(0, 0, 0); /*old FF*/
    -ms-transform: translate3d(0, 0, 0); /*IE9*/
    transform: translate3d(0, 0, 0);
    -webkit-perspective: 1000px;
    -o-perspective: 1000px; /*old opera*/
    perspective: 1000px;
    -webkit-backface-visibility: hidden;
    -o-backface-visibility: hidden; /*old opera*/
    backface-visibility: hidden;
}

Done this way, opacity and transform are managed by the GPU when possible.
This hack can be a bit awkward, and for this reason browser vendors have created a new CSS property: will-change.
It lets the browser know that an element is going to change, so it can be put inside its own compositing layer (it is not yet widely available so sadly, for now, it is better to stick with the hack).
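
For completeness, the new property looks like this (a sketch; the values depend on what you are going to animate):

/* hint that transform and opacity will change, so the element
   can be promoted to its own compositing layer */
.will-change {
    will-change: transform, opacity;
}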

Request Animation Frame


JS animations work by changing a numeric CSS property over time. For a long time the only way to schedule this in JS was setTimeout (and its sibling setInterval); as I mentioned, they are still used by jQuery.
A while ago browser vendors introduced "requestAnimationFrame". It is a much better way to do it, as it executes a piece of code right before the page refresh (approximately 60 times a second).
There is a polyfill for any browser (in the worst case it uses setTimeout).
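
I am not reproducing the linked polyfill here, but a simplified sketch of the idea looks like this (falling back to setTimeout at roughly 60 frames per second):

(function (){
    var lastTime = 0;
    // use the native (possibly prefixed) version when available
    window.requestAnimationFrame = window.requestAnimationFrame ||
        window.webkitRequestAnimationFrame ||
        window.mozRequestAnimationFrame ||
        function (callback){
            // approximate a 60 fps tick with setTimeout
            var now = new Date().getTime();
            var delay = Math.max(0, 16 - (now - lastTime));
            lastTime = now + delay;
            return setTimeout(function (){
                callback(lastTime);
            }, delay);
        };
}());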

If your animation depends on user interaction, it can be a good idea to throttle the changes to the CSS using requestAnimationFrame. In the example below I am using a queue with a simple policy that keeps only the last function, discarding the others.

// policy: keep only the last queued function, discard the others
function lastOne(q){
    return q.length ? [q.pop()] : [];
}

// returns a queue that flushes its functions on the next animation frame
function getRenderingQueue(policy){
    var queue = [];
    var isRunning = false;
    policy = policy || function (q){return q;};

    var render = function (){
        var f;
        queue = policy(queue);
        isRunning = false;
        // run (and empty) the queue
        while ((f = queue.shift())){
            f();
        }
    };
    return {
        empty: function (){
            queue = [];
        },
        push: function (func){
            queue.push(func);
            // schedule a single render per frame
            if (!isRunning){
                isRunning = true;
                window.requestAnimationFrame(render);
            }
        }
    };
}


var renderingQueue = getRenderingQueue(lastOne);

renderingQueue.push(function (){
    //changing a piece of CSS
});


Depending on your application you can decide to use a different policy.
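
For example, a policy that runs only the first function pushed during a frame (discarding later ones) would be:

// alternative policy: keep only the first queued function
function firstOne(q){
    return q.length ? [q.shift()] : [];
}

var firstOnlyQueue = getRenderingQueue(firstOne);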

This is it. I'll soon put this in context.

Tuesday, January 6, 2015

Less Sass and (mo)Rework

I apologize for the pun. In this blog post I'd like to give my opinion about CSS preprocessors.

Sass and Less

Sass and Less are the most popular CSS preprocessors. I think CSS preprocessors are powerful tools, but often they are not going to help you write better CSS. Most of their powerful features can be abused to make the wrong choices in terms of code reuse. This is because they don't promote reusability of the produced CSS rules.
There are still (a few) acceptable use cases: producing CSS demos, building very repetitive CSS rules such as responsive grids, and things like that.

But in the hands of inexperienced developers they produce a mess: in my experience, bad Less/Sass is much worse than bad CSS.

Writing proper CSS

Writing CSS is not too bad. The real challenge is to keep it maintainable. In this challenge Sass and Less are not helping. What helps is designing the CSS for reuse, for example by following these simple principles:
  • Naming is particularly important. With a namespace you can avoid conflicts. You can also separate rules into "general rules", "module rules" and "exceptions". For this you can adhere to, or find inspiration in, SMACSS.
  • Avoid at any cost relying on specificity. Use namespacing instead! So don't use selectors like ".my-module .my-special-class" but ".my-module-special-class" (see the sketch after this list).
  • One rule, one feature. Any rule should contain just a single piece of functionality. This functionality can be a single rule or a combination of rules. CSS frameworks (like Bootstrap) are full of examples.
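
A minimal sketch of the namespacing idea (the class names are made up):

/* avoid: relies on specificity and on the DOM structure */
.my-module .my-special-class {
    font-weight: bold;
}

/* better: a flat, namespaced class; one rule, one feature */
.my-module-special-class {
    font-weight: bold;
}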

Enter reworkcss (and Myth.io)

Actually there are a couple of things that are really useful in CSS preprocessors: they can automatically handle browser prefixes, and they let you use variables and calculations (server side).
My favourite tool for doing this is rework, although it is not a CSS preprocessor but a CSS parser. It produces an AST (that is, a JS object); you can change this object and write the CSS back.
It has a plugin system, so it is very easy to create custom extensions.
You always start and end with syntactically valid CSS (selector {property: value;}). You don't have to change syntax, syntax highlighters keep working fine, and the result can easily be used together with other static analysis tools.
You can roll out your own plugins for doing complex operations that are not possible at all with Sass/Less.
There are already a lot of plugins: one for using variables, another one for handling browser prefixes, etc.
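
As an illustration, here is my own sketch of a tiny plugin (not code from the post): it assumes rework's AST follows the css-parse shape, where each rule has selectors and declarations arrays, and it adds a -webkit- prefixed copy of every transform declaration.

var rework = require('rework');

// plugin: add a -webkit- prefixed copy of every `transform` declaration
function prefixTransform(style){
    style.rules.forEach(function (rule){
        if (!rule.declarations){ return; } // skip @media, comments, etc.
        var prefixed = [];
        rule.declarations.forEach(function (decl){
            if (decl.property === 'transform'){
                prefixed.push({
                    type: 'declaration',
                    property: '-webkit-transform',
                    value: decl.value
                });
            }
        });
        rule.declarations = prefixed.concat(rule.declarations);
    });
}

var output = rework('.box {transform: scale(2);}')
    .use(prefixTransform)
    .toString();

// output is roughly: .box { -webkit-transform: scale(2); transform: scale(2); }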

grunt-css-annotator

This Grunt plugin is an example of using rework. It scans some web pages (using PhantomJS) and adds an annotation in a comment if a CSS selector is used in those pages.
There is also an option to remove rules with specific annotations. It can be useful if you want to do some spring cleaning in your CSS, and it is only one example of the power of rework!

Myth.io

Myth.io is a CSS preprocessor built using a collection of useful rework plugins. It is designed for polyfilling some of the CSS features of tomorrow, like variables and calculations. But you can also extend it with other custom plugins!
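
A small sketch of the kind of input Myth is meant to compile (standard custom properties and calc(); the names are made up):

:root {
    --brand-color: #847ad1;
    --gutter: 10px;
}

.button {
    color: var(--brand-color);
    padding: calc(var(--gutter) * 2);
}

The idea is that Myth rewrites the variables and, where possible, resolves the calc() expression into plain values that today's browsers understand.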

EDIT: I have used myth.io for a serious project. It was mostly a pleasant experience, but there is a severe limitation in the use of CSS variables! https://github.com/segmentio/myth/issues/10 .

Monday, December 12, 2011

Adaptive web design: loading resources dynamically

Adaptive web design (also known as responsive web design) is a technique that adapts web pages to different devices (desktops, tablets and phones).

This technique uses many different technologies. In this blog post I don't want to explain every detail of it, but it could be useful to read an in-depth tutorial:
http://webdesignerwall.com/tutorials/responsive-design-with-css3-media-queries

The remaining issue
Using media queries to adapt a web page to mobile has a huge drawback: we send to a mobile phone every bit of the web page, and then we hide text and shrink full-size images and videos. The result is great, but what about the speed? Is it possible to avoid downloading all this unused stuff?

I think I found an elegant solution with these scripts: https://github.com/sithmel/jQuery-decomment .
They are very simple (only 31 LOC) and are designed to work together. They are doWhenVisible and decomment.

An example:
Our web page contains a very big and complex slideshow, but in the mobile version of our web site we'll hide it using CSS (it is very big and heavy to load).

HTML

<div class="bigslideshow">
<!--
    <img src="bigimage1.jpg" />
    <img src="bigimage2.jpg" />
    <img src="bigimage3.jpg" />
-->
</div>

CSS

@media screen and (max-width: 650px) {
    .bigslideshow{
       display: none;
    }
}

Javascript

$.doWhenVisible('.bigslideshow', function (){
    this.decomment(); // remove the comments
    this.jcarousel(); // initialize the slideshow
});

Very easy! doWhenVisible executes a callback (only once) when a DOM element becomes visible (it checks every time the page is resized).
decomment obviously removes the comments. Removing comments is just like dynamically adding DOM nodes: the browser reacts by downloading the resources and rendering the whole thing.

We can go even further, using a script loader to load our JS only if it is actually used (we use require.js here):

$.doWhenVisible('.bigslideshow', function (){
    var slideshow = this; // `this` is not available inside the require callback
    require(["jquery.jcarousel.js"], function (){
        // jcarousel has been loaded
        slideshow.decomment(); // remove the comments
        slideshow.jcarousel(); // initialize the slideshow
    });
});

Right now I'm very proud of these little scripts!

Tuesday, October 11, 2011

How to write a good Javascript widget


Writing a Javascript widget is almost easy, but I learned at my own expense that it may be tricky. So I wrote some simple rules that help to do the job well.

1 - keep in mind: progressive enhancement
I already wrote about it, so just a recap: write semantically correct HTML (your markup MUST make sense even WITHOUT Javascript).
2 - do not use inline styling (use CSS)
When your script is about to create the widget, find the outermost element and give it a specific class. Then do not attach styles directly to the DOM elements; use CSS instead.
This way you can customize the style of your widget without touching a single line of Javascript.
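
A minimal sketch of the idea (the .my-widget class and the styles are made up; assume $widget is the jQuery-wrapped outermost element of the widget):

Javascript

// good: tag the widget root with a class and leave the styling to CSS
$widget.addClass('my-widget');

// avoid: inline styles attached directly to the DOM element
// $widget.css({border: '1px solid #ccc', padding: '10px'});

CSS

.my-widget {
    border: 1px solid #ccc;
    padding: 10px;
}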
3 - avoid FOUC
Usually you modify the DOM after it is loaded, so the user may see the page while it changes. This is called FOUC (flash of unstyled content). You can avoid it by hiding the DOM nodes you are about to change (using CSS) and showing them afterwards (using Javascript).
Just a suggestion: you can use a noscript tag to show what you have hidden, in case Javascript is not enabled.
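
A minimal sketch of the pattern (the .my-widget class is made up; the noscript block belongs in the document head):

CSS

/* hide the widget until Javascript has enhanced it */
.my-widget {
    display: none;
}

HTML

<noscript>
    <style>.my-widget { display: block; }</style>
</noscript>

Javascript

var $widget = $('.my-widget');
// ... enhance the widget here ...
$widget.show(); // reveal it only when it is ready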
4 - use a library
A library like jQuery helps a lot with DOM manipulation; it lets you forget the differences between browsers. You can find a thorough guide on making a plugin here.
5 - do not calculate size and position during the widget creation
You can't assume what part of your page will be displayed, or when. So you can't calculate the position and size of elements at creation time. An example: I used this spinner widget in my application (http://btburnett.com/spinner/example/example.html). The widget calculates the input dimensions and adds a lot of inline styling for the little arrows. But my application displays the widget inside a hidden tab, and all the size calculations return wrong results.