Update: The original post on Netmag has been updated since this was written.
I tweeted earlier that this should be retracted. Generally, these performance-related articles are essentially little more than linkbait -- there are perhaps an infinite number of things you should do to improve a page's performance before worrying about the purported perf hit of multiplication vs. division -- but this post went further than most in this genre: it offered patently inaccurate and misleading advice.
Here are a few examples, assembled by some people who actually know what they're talking about (largely Rick Waldron and Ben Alman, with some help from myself and several others from the place that shall be unnamed).
- Calling `array.push()` five times in a row will never be a "performance improvement." The author has clearly confused creating an array literal `["foo", "bar", "baz"]` and then using `.join("")` on it vs. creating an array, pushing individual items, and then joining. See here for the proof, and see here for a possible explanation.
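To spell out the two styles the author conflated, here's a quick sketch (variable names are mine): both produce the same string, so a benchmark of "push five times" vs. "array literal" is really comparing construction styles, not `.push()` itself.

```javascript
// Style 1: array literal, joined once.
var literal = ["foo", "bar", "baz"].join("");

// Style 2: repeated push() calls, then the same join.
var pushed = [];
pushed.push("foo");
pushed.push("bar");
pushed.push("baz");
var viaPush = pushed.join("");

// Both yield "foobarbaz"; the literal is typically cheaper to
// construct, but the results are identical.
```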
- The author sets up a `for` loop as follows: `for(var i = 0; length = 999; i <= length; i++){`. This results in a syntax error.
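For reference, here is a version of that loop that actually parses: the `for` header takes exactly three expressions, so the two declarations must share a single `var` separated by a comma, not a semi-colon.

```javascript
var count = 0;

// One var statement declaring both i and length; three clauses total.
for (var i = 0, length = 999; i <= length; i++) {
  count++;
}
// The body runs 1000 times (i = 0 through 999 inclusive).
```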
- The author suggests `for(var i = my_array.length; i--)` as a shorter version of a `for` loop. While you can get by with using the decrement as the conditional, omitting the semi-colon at the end causes a syntax error. Worse, if someone were to move the semi-colon to before the decrement, it would cause an infinite loop. And if you were ever to indulge in this style of cleverness, a `while` loop looks much more sane: `var i = my_array.length; while( i-- ) {}`.
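Written out with a body (example data is mine), the `while` form is both valid and readable -- keeping in mind that it iterates in reverse:

```javascript
var my_array = ["a", "b", "c"];
var visited = [];

// i-- evaluates to the current value, then decrements; the loop
// stops when i reaches 0 (falsy), so indices run 2, 1, 0.
var i = my_array.length;
while (i--) {
  visited.push(my_array[i]);
}
// visited is ["c", "b", "a"] -- note the reversed order.
```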
- Because JavaScript lacks block scope, variables declared inside blocks are not local to any block. The variable declaration is actually "hoisted" to the beginning of the nearest execution context (function body or global scope), such that `var foo; for(...) { foo = 1; }` behaves exactly the same as `for(...) { var foo = 1; }`. It is, however, considered bad practice to declare variables inside of blocks, because novice JavaScript developers infer from it that JavaScript has block scope.
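Hoisting is easy to demonstrate; in this sketch the variable is already visible (as `undefined`) before the block that appears to declare it:

```javascript
function demo() {
  // `foo` is already declared here -- hoisted to the top of the
  // function -- so this is "undefined", not a ReferenceError.
  var typeBefore = typeof foo;

  for (var j = 0; j < 1; j++) {
    var foo = 1; // declaration hoisted; only the assignment happens here
  }

  // `foo` is still visible after the block: no block scope.
  return [typeBefore, foo];
}

var result = demo();
// result is ["undefined", 1]
```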
- Creating a variable to hold the `length` property in a `for` loop is typically no faster (and sometimes slower) in Chrome. Making it faster in Firefox doesn't make it magically "faster" everywhere.
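For reference, the two loop styles in question (example data is mine) -- they behave identically, and any speed difference is engine-specific:

```javascript
var items = [10, 20, 30];
var sumUncached = 0;
var sumCached = 0;

// Reading .length on every iteration:
for (var i = 0; i < items.length; i++) {
  sumUncached += items[i];
}

// Caching .length in a variable:
for (var j = 0, len = items.length; j < len; j++) {
  sumCached += items[j];
}

// Same result either way; profile before assuming one is faster.
```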
- The article mentions DOM perf almost as an afterthought. When it comes to perf, rarely is JavaScript the problem -- DOM reflows are a much more likely culprit for performance issues, and even just modifying elements via the DOM API in memory is slow. This is one of many reasons people use client-side templates, which the author does not mention. All of that said, if you're looking for real performance gains, take a long, hard look at your HTTP overhead. Chances are that optimizing an image or two will make more difference than any improvements you can make to your JavaScript.
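The core idea behind client-side templates can be sketched without any template library: build the markup as a string in memory, then touch the live DOM once. The `innerHTML` line is browser-only and shown as a comment; the element id is hypothetical.

```javascript
var items = ["one", "two", "three"];

// Build all of the markup in memory first...
var html = "";
for (var i = 0; i < items.length; i++) {
  html += "<li>" + items[i] + "</li>";
}

// ...then, in a browser, insert it with a single DOM mutation
// (one reflow) instead of one mutation per item:
// document.getElementById("list").innerHTML = html;
```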
- The author talks about avoiding unnecessary method calls within conditionals, and then demonstrates by accessing a property rather than calling a method.
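The distinction matters because a property access and a method call are different operations; a minimal sketch of each:

```javascript
var arr = [1, 2, 3];

// Property access: reads a value, invokes nothing.
var size = arr.length;

// Method call: looks up a function property *and* invokes it.
var position = arr.indexOf(2);

// It's the repeated *call* inside a conditional that the author
// meant to warn about; `arr.length` alone is not an example of it.
```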
- The author talks about local vs. global variables, and then proceeds to demonstrate the concept with an instance property, not a local variable.
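The three kinds of variable the author runs together, sketched side by side (names are mine):

```javascript
var globalCount = 0; // global variable: resolved via the outer scope

function Counter() {
  this.count = 0; // instance *property*: lives on the object, not in any scope
}

Counter.prototype.tick = function () {
  var step = 1; // true local variable: the cheapest lookup of the three
  this.count += step;
  globalCount += step;
};

var c = new Counter();
c.tick();
c.tick();
// c.count and globalCount are both 2
```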
- The author uses the global `Array` constructor function to create a new array, rather than using the array literal `[]` syntax, which is itself known to be a performance improvement.
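The two forms, plus the constructor's well-known single-number quirk, which is a correctness hazard on top of any performance question:

```javascript
var viaLiteral = ["a", "b"];              // preferred
var viaConstructor = new Array("a", "b"); // same contents

// The quirk: one numeric argument sets the *length*, it is not an element.
var sized = new Array(3);
// sized.length is 3, but sized[0] is undefined -- no elements exist.
```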
- The section "Calling methods or properties on array elements" is true, but somewhat misses the point: lookups of any sort should be stored in a variable rather than being repeated. This guidance has nothing to do with array elements; it's just as true for the result of a function call.
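A sketch of the general rule (example data is mine) -- cache any repeated lookup, whether it's an array element, a deep property chain, or a function result:

```javascript
var data = { user: { name: { first: "Ada", last: "Lovelace" } } };

// Repeated lookups walk the whole chain every time:
var slowGreeting = "Hi " + data.user.name.first + " " + data.user.name.last;

// Caching the common prefix does the walk once:
var name = data.user.name;
var fastGreeting = "Hi " + name.first + " " + name.last;

// Identical output; the win has nothing to do with arrays specifically.
```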
- In the "encapsulating methods within class declarations" example, the author fails to point out that there are really two performance metrics to be tested: 1) the cost of creating a new instance object, and 2) the cost of dereferencing a method from an instance object. The provided example only discusses #1 without addressing #2. What's interesting is that #1 and #2 typically trade off against each other. When methods are defined directly on an instance object, they consume more memory and cycles (because a new function object must be created for each instance object). When methods are defined on the prototype, however, they need only be defined once, so the initial overhead is less; but each time the method is dereferenced from the instance object, the cost is slightly higher because the prototype chain must be traversed. A compromise can be had whereby a previously-defined function is assigned as a method on each individual instance, thus reducing memory usage, but the cost of initially assigning the method probably won't offset the cost of accessing the method via the prototype down the road.
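The trade-off described above is observable directly: per-instance methods are distinct function objects, while prototype methods are shared.

```javascript
// Cost #1 side: a new function object is created for every instance.
function PerInstance() {
  this.greet = function () { return "hi"; };
}

// Cost #2 side: one shared function, found via the prototype chain.
function OnPrototype() {}
OnPrototype.prototype.greet = function () { return "hi"; };

var a = new PerInstance();
var b = new PerInstance();
var c = new OnPrototype();
var d = new OnPrototype();

// a.greet !== b.greet (two copies); c.greet === d.greet (one shared)
```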
- The "encapsulating methods within class declarations" section also lacks an understanding of the semantic, language-level behaviour and use cases for instance properties and methods vs. prototype properties and methods. Comparing the two is like comparing a knife with a stealth fighter jet -- yes, they can both be lethal, but one is far more efficient and suited to larger-scale tasks, while the other is suited to individual, "case by case" (i.e., instance) tasks. Additionally, any time data properties are defined on a prototype, it's likely a mistake that will result in bugs.
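The data-properties-on-the-prototype bug mentioned above, in miniature (names are mine): a mutable value on the prototype is shared by every instance.

```javascript
function Widget() {}
// Looks like a per-instance default, but there is only ONE array,
// reached by every instance through the prototype chain.
Widget.prototype.tags = [];

var w1 = new Widget();
var w2 = new Widget();

w1.tags.push("oops");
// w2.tags now also contains "oops" -- the mutation leaked across instances.
```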
- The section under the heading "Unnecessary method calls within conditionals in For Loops" gives an egregiously absurd example that itself misses the point that should've been made: operations in loops have costs; be mindful of this. In addition, there is actually no such thing as a "conditional" in a `for` loop; the author is simply unaware that the actual name is "Expression", as in the IterationStatement's optional Expression.
- The author advises using Firebug for profiling code, but provides absolutely zero guidance on how to do this, or even a link to any guidance. This article offers step-by-step guidance on using the Chrome developer tools to identify performance bottlenecks -- which, as we mentioned, you should only do after you've done a whole lot of other things to improve your page's performance. The YSlow page offers a good list of things to do to improve page performance.
Smart people spent their time correcting this article, and this sucks. When misleading and inaccurate content gets published by a prominent site, it falls to a busy community to use its time -- a scarce resource -- to correct said content. Publications that distribute such content should be embarrassed, and while authors who create it should perhaps be commended for their efforts, they should strongly consider enlisting a knowledgeable technical reviewer in the future. Goodness knows the publication won't cover their ass for them.
Want to know more? See some debunktastic perf tests :)
[OK, well, if we're going here, here's my reply.]
I appreciate you getting back to us so quickly, I appreciate that the
article has been updated, and I also appreciate that it would have
been more comfortable for you if this entire exchange had occurred
privately. I'm going to speak very frankly here: I feel it was very
important for it to occur publicly, as the web developer community
needs to know that content in your publication must be read with a
shaker full of salt. Handling this privately would have addressed this
single article, but would not have addressed the more systemic problem
of content being published and promoted without regard for its
accuracy.
I am well aware that there could be chilling effects associated with
calling out bad content, and I gave much thought to that fact before
publishing this. The fact is, I want people to second-guess
themselves before publishing content to such a large audience; if they
are unsure, then they should gain experience in lower-stakes
environments, and solicit feedback from people who know the subject
well. Being inexperienced is not a license to make stuff up.
As I've said repeatedly at this point, had Joe published this on his
personal blog, then a private conversation would have been the most
likely -- and most appropriate -- response. However, when it was
published on a well known site with a large audience, and promoted to
30,000 followers, the need to address this publicly was much more
clear. As I said here
https://gist.github.com/3086328#gistcomment-368792, these sorts of
egregiously inaccurate articles have a real impact on folks like us
who already spend a significant amount of time -- without compensation
-- assisting newcomers to web development.
If you are having difficulty identifying qualified authors and
reviewers, I would suggest the answer is one of economics: pay
attractive rates for good content and good review, and you will get
good content and good review. A subject-matter expert could reasonably
expect $500 to $1000 in exchange for the time required to write an
article for a for-profit publication; a reviewer could reasonably
expect $100-$200 for the time required to review it. Since few
publications are willing to pay these rates, few subject matter
experts are willing to provide these services, opting instead to
publish in a place where they can derive other benefits from their
efforts (and where they have control over the quality of the
surrounding content).
I do not know what your compensation structure is, but publications
that wish to avoid incidents such as the one that unfolded yesterday
do well to appreciate these economic considerations. It may well be
that the only viable business model requires that content be from
sub-par sources, because quality content is simply too expensive
relative to the money that can be extracted from it, and few subject
matter experts are willing to provide their time at a discount to a
for-profit entity. If a publication continues to distribute content
even when they can't reasonably ensure that the content is good, then
that publication should expect that the community will continue to
point out that content's flaws, and I don't see any reason to expect
the community to do so privately when the content in question is
public.
[At this point Oliver wrote back, saying that they didn't have trouble identifying quality authors, but they did have issues with follow-through by a few prominent authors. This was my reply.]
You may not have difficulty identifying quality authors, but you
clearly have issues with them following through. As someone who worked
on the night copy desk at a newspaper for five years, I take deadlines
very seriously, and it doesn't sit well with me when people miss them.
I'm sorry that has happened to you. However, I can also understand how
it might happen. While I can't speak to any individual case,
considering the billable rate of these folks, spending hours writing
quality content for less than $500 -- or hours reviewing an article
for less than $100 -- may not be at the top of their to-do list. That
may be the going rate, but it's not a rate that's going to incentivize
people who can easily bill upwards of $2,000 a day.
What I'm struggling with here is that you seem to expect community
members to work at discounted rates in order to ensure you have
quality content, and that when those community members aren't willing
to do that, your answer is to publish anyway. "Optimise your
JavaScript" never should have seen the light of day in its original
state. Just like being inexperienced isn't an excuse for making things
up, having difficulty getting quality content isn't a justification
for publishing things that are just plain wrong.