Optimizing CSS Selectors

The bad news about CSS selector optimization (i.e., choosing simple selectors and combinators that are less expensive for the browser to match) is that for the most part it will have no noticeable effect on your site’s performance. This is also the good news. Selector performance is not something you have to worry about. Unless your page contains 20,000+ CSS rules, opting for a child combinator over a descendant is not going to make a difference.

In most cases, the best way to optimize your CSS for web performance is simple: keep the file size down by not writing more rules than are necessary and by not making redundant declarations.

That said, for the serious players, it’s worth taking a look at how selectors are matched.  It may make you rethink a few of your CSS practices.

Right to Left
Counter to what most would expect, browsers match selectors by starting with the rightmost simple selector (the "key" selector) and progressing leftward. This means a simple class selector like .list-link will match faster than #content div ul li a. With the former, the browser only has to look for elements with the specified class name. With the latter, the browser must first find every anchor element on the page and then, for each one, walk up the DOM checking for ancestor li, ul, and div elements, and finally an ancestor with the id content.
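The right-to-left walk can be sketched in a few lines of Python. This is a simplified model for illustration only (the El class and helper names are invented here; real engines are far more sophisticated):

```python
# Toy model of right-to-left matching for a descendant selector
# like "#content div ul li a". Each element records its tag, id,
# classes, and parent (all hypothetical structures for this sketch).

class El:
    def __init__(self, tag, parent=None, id=None, classes=()):
        self.tag, self.parent = tag, parent
        self.id, self.classes = id, set(classes)

def matches_simple(el, sel):
    # Match one simple selector: #id, .class, or a tag name.
    if sel.startswith('#'):
        return el.id == sel[1:]
    if sel.startswith('.'):
        return sel[1:] in el.classes
    return el.tag == sel

def matches_descendant(el, selectors):
    # The key (rightmost) selector must match the element itself...
    if not matches_simple(el, selectors[-1]):
        return False
    # ...then walk leftward through the selector list while walking
    # upward through the element's ancestors.
    remaining = list(selectors[:-1])
    node = el.parent
    while remaining and node:
        if matches_simple(node, remaining[-1]):
            remaining.pop()
        node = node.parent
    return not remaining

# Build the chain #content > div > ul > li > a.
content = El('div', id='content')
div = El('div', content)
ul = El('ul', div)
li = El('li', ul)
a = El('a', li)

print(matches_descendant(a, ['#content', 'div', 'ul', 'li', 'a']))  # True
print(matches_descendant(a, ['#sidebar', 'li', 'a']))               # False
```

Note that the cheap case in the sketch is the first line of matches_descendant: if the key selector fails, nothing else is ever examined. That is why the rightmost part of the selector matters most.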

Why do browsers do this and what does it mean for developers writing CSS?  To quote Mozilla’s David Baron from his 2008 Google Tech Talk:

“The way we avoid the problem of having to match every element against every selector is that we hash the selectors into a bunch of buckets in advance to filter out the ones that we know aren’t going to match.  And all of the filtering is done on the rightmost part of the selector, in other words the part to the right of the last combinator. So essentially…if a selector has an ID in the rightmost part of the selector, we’ll stick it into a hash table for selectors that have an ID…

So in Mozilla – and I think this is also reasonably true in other browser engines – your selectors are going to cause much less of a performance problem if the rightmost part of them is as specific as possible. Because then there won’t be any code at all to deal with testing them against all these other elements that probably are going to fail, but maybe not all that quickly in the algorithm.”
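Baron's bucketing scheme can be sketched like this. It's a toy model (real engines such as Gecko keep separate hash tables for ids, classes, and tag names, plus further optimizations), but it shows why only a handful of rules ever need full matching for a given element:

```python
# Toy model: hash each rule into a bucket keyed by its rightmost
# ("key") simple selector, so most rules are filtered out up front.
from collections import defaultdict

rules = ['#content div ul li a', '.list-link', 'p .note', 'ul li', '#nav']

buckets = {'id': defaultdict(list),
           'class': defaultdict(list),
           'tag': defaultdict(list)}

for rule in rules:
    key = rule.split()[-1]  # rightmost simple selector
    if key.startswith('#'):
        buckets['id'][key[1:]].append(rule)
    elif key.startswith('.'):
        buckets['class'][key[1:]].append(rule)
    else:
        buckets['tag'][key].append(rule)

def candidate_rules(tag, el_id=None, classes=()):
    # Only rules in matching buckets need the full right-to-left check.
    out = list(buckets['tag'][tag])
    if el_id:
        out += buckets['id'][el_id]
    for c in classes:
        out += buckets['class'][c]
    return out

# An <a class="list-link"> is tested against just two of the five rules:
print(candidate_rules('a', classes=['list-link']))
# ['#content div ul li a', '.list-link']
```

This is also the intuition behind Baron's advice: a specific key selector lands a rule in a small bucket, so it is never even considered for the vast majority of elements.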

As it turns out, browsers are so good at matching selectors to DOM elements that the time gap between cheap and more expensive matches is, in most real-world cases, negligible (assuming you’re not writing pathological selectors like body div div li:hover div li *).

There are, of course, exceptions.  They mostly pertain to reflow, something I’m planning to write about soon.

If your site is such that you have a ton of CSS rules (2K+) that can’t be reduced, or if you want to know how to write perfectly fast CSS in a perfect world, check out David Hyatt’s recommendations for writing efficient CSS.

As with all things web performance related, Steve Souders’ books are mandatory reading. In this case it’s the last chapter of his second book, Even Faster Web Sites. He’s also discussed the subject in a couple of posts on his blog.
