Update: The original post on Netmag has been updated since this was written.
I tweeted earlier that this should be retracted. Generally, these performance-related articles are essentially little more than linkbait -- there are perhaps an infinite number of things you should do to improve a page's performance before worrying about the purported perf hit of multiplication vs. division -- but this post went further than most in this genre: it offered patently inaccurate and misleading advice.
Here are a few examples, assembled by some people who actually know what they're talking about (largely Rick Waldron and Ben Alman, with some help from myself and several others from the place that shall be unnamed).
- Calling `array.push()` five times in a row will never be a "performance improvement." The author has clearly confused creating an array literal `["foo", "bar", "baz"]` and then using `.join("")` on it vs. creating an array, pushing individual items, and then joining. See here for the proof, and see here for a possible explanation.
- The author sets up a `for` loop as follows: `for(var i = 0; length = 999; i <= length; i++){`. This results in a syntax error.
- The author suggests `for(var i = my_array.length; i--)` as a shorter version of a `for` loop. While you can get by with using the decrement as the conditional, omitting the semicolon at the end causes a syntax error. Also, if someone were to move the semicolon to before the decrement, it would cause an infinite loop. If you were ever to attempt this style of cleverness, a `while` loop looks much more sane: `var i = my_array.length; while( i-- ) {}`.
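For reference, the saner `while` form in action (variable names are illustrative) — note that it visits the elements in reverse:

```javascript
// The decrement doubles as the loop condition: i-- is truthy until i hits 0.
var my_array = ["a", "b", "c"];
var visited = [];
var i = my_array.length;
while (i--) {
  visited.push(my_array[i]);
}
console.log(visited); // ["c", "b", "a"]
```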
- Because JavaScript lacks block scope, variables declared inside blocks are not local to any block. The variable declaration is actually "hoisted" to the beginning of the nearest execution context (function body or global scope), such that `var foo; for(...) { foo = 1; }` behaves exactly the same as `for(...) { var foo = 1; }`. It is, however, considered bad practice to declare variables inside of blocks, because novice JavaScript developers infer from it that JavaScript has block scope.
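Hoisting is easy to demonstrate: the declaration inside the block is moved to the top of the enclosing function, so the variable already exists (as `undefined`) before the loop ever runs (the function name here is illustrative):

```javascript
function hoisted() {
  // `foo` is already declared at this point, thanks to hoisting:
  var before = typeof foo; // "undefined", not a ReferenceError
  for (var j = 0; j < 1; j++) {
    var foo = 1; // declaration hoisted; assignment happens here
  }
  return before + ":" + foo;
}
console.log(hoisted()); // "undefined:1"
```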
- Creating a variable to hold the `length` property in a `for` loop is typically no faster (and sometimes slower) in Chrome. Making it faster in Firefox doesn't make it magically "faster" everywhere.
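For the record, these are the two loop styles being benchmarked — they are functionally identical, and in modern engines their performance is typically indistinguishable:

```javascript
var items = [1, 2, 3, 4];

// Reading .length on every iteration:
var sumA = 0;
for (var i = 0; i < items.length; i++) { sumA += items[i]; }

// Caching .length in a variable up front:
var sumB = 0;
for (var j = 0, len = items.length; j < len; j++) { sumB += items[j]; }

console.log(sumA === sumB); // true
```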
- The article mentions DOM perf almost as an afterthought. When it comes to perf, rarely is JavaScript the problem -- DOM reflows are a much more likely culprit for performance issues, but even just modifying elements in memory via the DOM API is still slow. This is one of many reasons people use client-side templates, which the author does not mention. All of that said, if you're looking for real performance gains, take a long, hard look at your HTTP overhead. Chances are that optimizing an image or two will make more difference than any improvements you can make to your JavaScript.
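A minimal sketch of the client-side-template idea (the function name is illustrative): build the markup as a string in memory, then touch the DOM once, instead of appending nodes one at a time inside a loop, where each append can trigger extra DOM work:

```javascript
// Build the whole fragment as a string; the caller assigns it to
// element.innerHTML in a single write.
function renderList(names) {
  var html = "";
  for (var i = 0; i < names.length; i++) {
    html += "<li>" + names[i] + "</li>";
  }
  return html;
}

console.log(renderList(["a", "b"])); // "<li>a</li><li>b</li>"
```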
- The author talks about avoiding unnecessary method calls within conditionals, and then demonstrates by accessing a property rather than calling a method.
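The distinction the article blurred, in two lines (values here are illustrative):

```javascript
var arr = ["a", "b"];
var prop = arr.length;       // property access -- no function is invoked
var call = arr.indexOf("b"); // method call -- a function is invoked
console.log(prop, call); // 2 1
```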
- The author talks about local vs. global variables, and then proceeds to demonstrate the concept with an instance property, not a local variable.
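A sketch of the difference the article missed (names are illustrative): `this.total` is an instance property resolved by property lookup, while `total` below is a local variable — copying the property into a local, working on it, and writing it back once is the pattern that actually matches the "local vs. global" advice:

```javascript
function Counter() {
  this.total = 0; // instance property
}
Counter.prototype.addMany = function (n) {
  var total = this.total; // local variable: one property read
  for (var i = 0; i < n; i++) {
    total += 1;
  }
  this.total = total; // one property write
  return this.total;
};

var c = new Counter();
console.log(c.addMany(3)); // 3
```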
- The author uses the global `Array` constructor function to create a new array, rather than using the array literal `[]` syntax, which is itself known to be a performance improvement.
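Besides being shorter, the literal avoids the constructor's confusing single-number behaviour:

```javascript
var lit = [3];           // one element: the number 3
var ctor = new Array(3); // three empty slots; length is 3, no elements

console.log(lit.length);  // 1
console.log(ctor.length); // 3
```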
- The section "Calling methods or properties on array elements" is true, but somewhat misses the point: lookups of any sort should be stored in a variable rather than being repeated. This guidance has nothing to do with array elements; it's just as true for the result of a function call.
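Caching a repeated lookup applies to any expression, not just array elements (data and names here are illustrative):

```javascript
var data = { user: { name: "ada" } };

// Repeated lookup -- data.user.name resolved twice:
var greetingA = "Hi " + data.user.name + ", bye " + data.user.name;

// Cached once in a variable:
var name = data.user.name;
var greetingB = "Hi " + name + ", bye " + name;

console.log(greetingA === greetingB); // true
```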
- In the "encapsulating methods within class declarations" example, the author fails to point out that there are really two performance metrics to be tested: 1) the cost of creating a new instance object, and 2) the cost of dereferencing a method from an instance object. The provided example only discusses #1 without addressing #2. What's interesting is that, typically, #1 and #2 are in tension. When methods are defined directly on an instance object, they consume more memory and cycles, because a new function object must be created for each instance. When methods are defined on the prototype, however, they need only be defined once, so the initial overhead is lower, but each time the method is dereferenced from the instance object the cost is slightly higher, because the prototype chain must be traversed. A compromise can be had whereby a previously-defined function is assigned as a method on each individual instance, reducing memory usage, but the cost of initially assigning the method probably won't offset the cost of accessing the method via the prototype down the road.
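The trade-off can be seen directly (constructor names are illustrative): instance methods cost one new function object per instance; prototype methods are shared by all instances, at the price of a prototype-chain lookup on access:

```javascript
function InstanceStyle() {
  this.greet = function () { return "hi"; }; // new function per instance
}

function ProtoStyle() {}
ProtoStyle.prototype.greet = function () { return "hi"; }; // defined once, shared

var a = new InstanceStyle(), b = new InstanceStyle();
console.log(a.greet === b.greet); // false -- two distinct function objects

var c = new ProtoStyle(), d = new ProtoStyle();
console.log(c.greet === d.greet); // true -- one shared function object
```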
- The "encapsulating methods within class declarations" section also lacks an understanding of the semantic, language-level behaviour and use cases for instance properties and methods vs. prototype properties and methods. Comparing the two is like comparing a knife with a stealth fighter jet -- yes, both can be lethal, but one is far more efficient and suited to larger-scale tasks, while the other is suited to individual, "case by case" (i.e., instance) tasks. Additionally, any time data properties are defined on a prototype, it's likely a mistake that will result in bugs.
- The section under the heading "Unnecessary method calls within conditionals in For Loops" gives an egregiously absurd example that itself misses the point that should've been made: operations in loops have costs, so be mindful of them. Beyond that, there is actually no such thing as a "conditional in For Loops"; the author is simply unaware that the spec's name for it is the optional Expression of an IterationStatement.
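Per the grammar, all three expressions in a `for` statement head are optional, which is why the middle one is just an Expression rather than a dedicated "conditional" construct:

```javascript
// No initialiser, no condition Expression, no update -- still valid syntax:
var n = 0;
for (;;) {
  if (++n >= 3) break;
}
console.log(n); // 3
```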
- The author advises using Firebug for profiling code, but provides absolutely zero guidance on how to do this, or even a link to any guidance. This article offers step-by-step guidance on using the Chrome developer tools to identify performance bottlenecks -- which, as we mentioned, you should only do after you've done a whole lot of other things to improve your page's performance. The YSlow page offers a good list of things to do to improve page performance.
Smart people spent their time correcting this article, and that sucks. When misleading and inaccurate content gets published by a prominent site, it falls to a busy community to use its time -- a scarce resource -- to correct said content. Publications that distribute such content should be embarrassed, and while the authors who create it should perhaps be commended for their efforts, they should strongly consider enlisting a knowledgeable technical reviewer in the future. Goodness knows the publication won't cover their ass for them.
Want to know more? See some debunktastic perf tests :)
[Not a big fan of doing this publicly, but here we go. This is the email I sent @rmurphey last night:]
Thanks for your feedback on Joe Angus' JavaScript article we posted yesterday.
I'd like to take this opportunity to point out that, while the use of some techniques may be subjective, we of course always want to avoid factual errors and are very open to correcting things. As you may have noticed, Joe has now updated the article:
http://www.netmagazine.com/tutorials/optimise-your-javascript
I would have been delighted had you got in touch with me (or directly with Joe) last night before you posted a public response on GitHub. We do respond to criticism (and we try to move very fast to get an update live), but I fear that when it's handled so publicly, it puts younger developers off sharing their tips with the community. I've actually seen this happen quite a few times lately, and it's a real shame. By the looks of some comments on Twitter and your Gist post, I'm not alone in this opinion.
We're also currently looking for experts to help us peer-review articles before they're published (both in print and online). In fact, this was one of the first things I started work on when I took the helm at .net a couple of months ago. However, as you know, well-regarded experts who are very active in the community and often speak at conferences, like you, are unfortunately yet understandably often too busy to write tutorials themselves or to help out with peer reviews.
We always approach people of your knowledge and standing in the community to work with us. Sadly, our previous experience in working with such experts has not always been the best. It's okay to say 'no' due to work pressures, but we have also had a fair few occasions where somebody agreed to write an article for us and then did not deliver and went quiet on us (Rick, I think you know what I'm talking about :)). As you can imagine, that leaves us in the lurch, especially when it comes to print deadlines.
This means that we have to approach people lower down in the food chain who may not have the same knowledge yet (or they approach us, as Joe did). It also means that, while technical accuracy is very important to us, errors will creep in from time to time if we can't get people knowledgeable enough to review our articles. That .net has a 'reckless disregard for accuracy' is simply not true.
So, as a solution, I'd like to invite you to sit on our panel of peer reviewers. We've already signed up Stephanie Rieger, Christian Heilmann and Chris Mills. We're still working out the details but of course you'd at least get a credit on the article you have reviewed.
Anyway, let me know what you think. We'd love you to contribute to .net in whatever form. We're always learning and it's important to us to make your knowledge accessible to the wider community.
Cheers,
Oliver
Editor
.net