
Web performance

Created: 2017.04.15

  • focus on the critical 3%; make the distinction between critical and non-critical code
  • practise mature optimization, based on experience and benchmarking (as opposed to premature optimization)
  • use the middle-end as a buffer: resources, architecture, communication, and the data-exchange contract, with the back-end operating as a headless stateful API
  • be aware that you are developing on souped-up hardware
  • hands-on optimisation, rather than blindly using tools
  • avoid more than one copy of data-validation rules
  • optimise JSON
  • Ajax opens a new connection to the server for each request; Web Sockets have no per-message HTTP overhead: an extremely lightweight, persistent connection between client and server, lower latency (~10ms vs ~250ms for Ajax), and two-way push of data from the server
  • setTimeout and setInterval are just hints to the browser and are not accurate enough for animation; requestAnimationFrame is called at the browser's frame rate, is optimised by the browser, and gives better battery performance (see the sketch after this list)
  • 60Hz = 60 frames per second
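
A minimal sketch of the requestAnimationFrame point above (the `.box` selector, the 200px distance and the 1-second duration are arbitrary placeholders): the browser schedules each frame itself at its refresh rate and throttles background tabs, instead of the fixed guesses a setTimeout loop would make.

```js
function animate(element, durationMs) {
  var start = null;

  function step(timestamp) {
    if (start === null) {
      start = timestamp;
    }

    var progress = Math.min((timestamp - start) / durationMs, 1);

    // Move the element 200px to the right over the duration.
    element.style.transform = 'translateX(' + (progress * 200) + 'px)';

    if (progress < 1) {
      requestAnimationFrame(step); // let the browser pick the next frame time
    }
  }

  requestAnimationFrame(step);
}

animate(document.querySelector('.box'), 1000);
```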

Lazy loading

  • Preload something into the cache - e.g. `<link rel="prefetch" href="...">` - just a hint to the browser, which will decide when best to load it (or not at all) - or var image = new Image(); the old-school way. It is essential to use Cache-Control headers so that the image is not loaded again.
  • DNS prefetching is done automatically by some browsers, e.g. Chrome (in 2013), which look ahead in the markup for links to other domains and prepare for those to be requested
  • Lazy loading / on-demand loading / post-loading
  • onload means 'on done and execute': it fires once the script has both loaded and executed
  • loading a script by creating a script element and appending it to the DOM - you control when the request starts (see the sketch after this list)
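
A minimal sketch of on-demand script loading as mentioned above (the `/js/carousel.js` path and the trigger are placeholder assumptions): the request only starts when loadScript() is called, not when the page is parsed.

```js
function loadScript(src, callback) {
  var script = document.createElement('script');

  script.src = src;
  script.onload = callback; // fires once the script has loaded and executed

  document.head.appendChild(script); // the request starts here
}

// e.g. load the carousel code only when the user scrolls it into view
loadScript('/js/carousel.js', function () {
  console.log('carousel.js is ready');
});
```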

Parallel loading & dynamic loading

  • two files loaded at the same time, but either one could be executed first (ASAP), because browser behaviour is async; async=false (IE10+) on a dynamically created script element puts the script back into a queue for ordered execution, but means that if one script fails to load, the scripts after it won't execute (see the sketch after this list)
  • document.ready can't fire until scripts have loaded, in case they contain document.write and change the page markup; if document.write runs after the page loads, it will wipe the page content out, so browsers have to tread carefully with any scripts that are included (linked) in the markup
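
A minimal sketch of ordered dynamic loading with async = false (the file names are placeholders): both requests start in parallel, but execution is queued so plugin.js never runs before jquery.js.

```js
['/js/jquery.js', '/js/plugin.js'].forEach(function (src) {
  var script = document.createElement('script');

  script.src = src;
  script.async = false; // queue for ordered execution despite parallel download

  document.head.appendChild(script);
});
```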

UX / User Perception

  • <= 100ms response time is perceived as instantaneous
  • 80-90% of user response time is spent on the front end
  • important content first
  • performant by default - slower, more considered development = faster for the end user
  • user-perceived performance - visual completeness
  • document.ready is the point at which the web browser has finished parsing files and the user can interact with the page
  • defer requests until after document.ready
  • load on demand - some sliders preload all of their images, not just the adjacent ones
  • split pages up rather than jamming everything into one page
  • tile backgrounds rather than loading massive photographs
  • do the least amount of work necessary to get something visible on the screen, even if it means making a correction afterwards
  • also consider the user experience after the initial load time

Steve Souders - 14 Rules for Faster-Loading Web Sites

  1. Fewer HTTP requests: request less, and less during page-load time - less critical stuff can come later
  2. Use a CDN for static resources: more data centres on other continents; shared caching of popular files; serve jQuery from a CDN rather than as part of the concatenated scripts
  3. Expires/Cache-Control headers: conditional loading - different expiration headers for different content - stable, less volatile files that rarely change can have longer expirations (see the Node sketch after this list)
  4. Gzip (mod_deflate on Apache): 50-70% compression of text resources - less over the wire
  5. Stylesheets at the top: otherwise the browser will wait until they are loaded so that it doesn't have to repaint the layout
  6. Scripts at the bottom: the page is parsed top-down, so placement in the markup tells the browser how content is prioritised and where the emphasis is; the browser may not be done with the body when it encounters a script at the bottom of the page, so document.ready and script-at-bottom are not the same thing
  7. Avoid CSS expressions (MSIE): graceful degradation rather than something that works but slowly - expressions can run hundreds of calculations a second
  8. External JS/CSS: for caching. Most HTML documents are not cached because people don't add expiration headers to PHP-generated pages, so inline code is not cached.
  9. Fewer DNS lookups: more host names != parallel requests - there is still only one internet pipe, and it interferes with the browser's own optimisation; each DNS lookup adds latency, so pre-resolve where possible
  10. Minify JS/CSS: a further 10-20% saving on top of Gzip
  11. Avoid redirects: 100-200ms each; an http:// -> http://www redirect to the canonical source = 2 requests and 2 round trips to the server. Link to the correct URL directly.
  12. Remove duplicate scripts: e.g. plugins each using a different version of jQuery
  13. Configure ETags: conditional loading - hexadecimal codes which fingerprint resources ("only send me files that differ from my fingerprint") - but don't use the default Apache config
  14. Make Ajax cacheable: RESTful resources can have expirations too; don't use ETags as well as Cache-Control headers
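
A minimal Node sketch, not production code, combining rules 3, 4 and 10 (the `app.a1b2c3.min.js` file name and port 8080 are placeholder assumptions, and a real server would honour the Accept-Encoding header rather than always gzipping):

```js
var http = require('http');
var fs = require('fs');
var zlib = require('zlib');

http.createServer(function (req, res) {
  var oneYear = 365 * 24 * 60 * 60; // seconds

  res.writeHead(200, {
    'Content-Type': 'application/javascript',
    'Content-Encoding': 'gzip',                    // rule 4: less over the wire
    'Cache-Control': 'public, max-age=' + oneYear, // rule 3: stable files cache for a long time
    'Expires': new Date(Date.now() + oneYear * 1000).toUTCString()
  });

  // The file is already minified (rule 10) and fingerprinted in its name,
  // so the URL changes whenever the content does and can be cached for a year.
  fs.createReadStream('app.a1b2c3.min.js').pipe(zlib.createGzip()).pipe(res);
}).listen(8080);
```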

Other tips

  • Reduce cookie size: cookies are set per domain and subdomain, and every request for every single resource loaded off that domain sends the cookies with it (e.g. Google's tracking cookie)
  • Cookie-free domains: serve just CSS & JS from a domain through which no cookie-setting resources are loaded
  • Horizontal image sprites are more efficient for the browser, but a giant image held in memory is bad for mobile devices
  • Remove images with empty src attributes
  • Lossless image compression by using better algorithms
  • Inline images - base64 data URIs - image size and cacheability are issues (see the sketch after this list)
  • Watch memory usage with larger images, even if they have a smaller size due to the optimisation technique
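
A minimal Node sketch of the inline-image point above (the `icon.png` file name is a placeholder): the image becomes a base64 data URI that can be embedded in CSS or HTML, saving a request but making the payload roughly a third larger and impossible to cache separately from the document that embeds it.

```js
var fs = require('fs');

// Read the binary image and encode it as base64.
var base64 = fs.readFileSync('icon.png').toString('base64');
var dataUri = 'data:image/png;base64,' + base64;

// Emit a CSS rule that inlines the image.
console.log('.icon { background-image: url(' + dataUri + '); }');
```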

JavaScript is Async not Parallel

All subsystems in the browser share a single thread. The creator of JavaScript has stated that this won't ever change. Intel has been working on some black-box parallelism for JavaScript (e.g. River Trail), but for now the subsystems competing for that one thread include:

  • the CSS rendering engine, repaints etc
  • the DOM
  • the JavaScript engine
  • the garbage collector

Jerkiness is caused when these subsystems compete with each other for priority on the single thread.

Web Workers

A web worker runs a file in a separate thread, so a long-running operation can't affect the main thread and everything runs smoothly. The downside is that it can take a while to get an answer back: communications are string-based and copied from the worker to the main thread, so at times there can be two copies of the same data in memory.

The communication channel consists of sending messages and getting messages back; it can be a bottleneck (see the sketch below).
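
A minimal sketch of that message channel (the file names and the prime-counting task are placeholder assumptions): the main thread posts a request, the worker does the heavy loop in its own thread, and only the result is copied back.

```js
// main.js
var worker = new Worker('prime-worker.js');

worker.onmessage = function (event) {
  console.log('primes below 1,000,000:', event.data); // result copied back from the worker
};

worker.postMessage(1000000); // ask the worker to start counting

// prime-worker.js - runs in its own thread, so this loop can't cause jank
self.onmessage = function (event) {
  var limit = event.data;
  var count = 0;

  for (var n = 2; n < limit; n++) {
    var isPrime = true;

    for (var d = 2; d * d <= n; d++) {
      if (n % d === 0) {
        isPrime = false;
        break;
      }
    }

    if (isPrime) {
      count++;
    }
  }

  self.postMessage(count);
};
```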

An Ajax request is sent off, where it can be processed on a parallel thread, with networking etc, but when it comes back it has to compete with the other operations running in the single thread, so it can cause delays in CSS painting.

Garbage Collection

When objects are created in JavaScript, they are created in C++ and they consume memory.

The memory is dynamically allocated, because we didn't know before the programme ran that we would need that memory.

When objects aren't referenced any more, they are still using memory, and that memory needs to be reclaimed. Garbage collectors look at objects, decide which are no longer required, and delete them. The garbage collector runs on the main/UI thread, and the browser runs it as often as it thinks is necessary.

One way to circumvent this is to reuse object references, rather than creating new objects.
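
A minimal sketch of reusing a reference instead of allocating a new object every frame (the animation loop and the `pos` object are illustrative only):

```js
var pos = { x: 0, y: 0 }; // allocated once, reused on every frame

function update(timestamp) {
  // Mutate the existing object rather than returning a new { x, y } each frame,
  // so the garbage collector has nothing extra to reclaim mid-animation.
  pos.x = Math.sin(timestamp / 1000) * 100;
  pos.y = Math.cos(timestamp / 1000) * 100;

  requestAnimationFrame(update);
}

requestAnimationFrame(update);
```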

Slower / longer running tests provide more useful data.

Repeated tests can perform unexpectedly, as the browser optimises for operations that it thinks people need to do a lot.

Browser developers also follow the cow path, and optimise browsers to work with the code that people use, rather than the code that they should use because it would be more efficient.

Automated Performance Regression Testing

You could run something like Benchmark.js (the library behind jsPerf): run the code under test over and over, output a number, store it in logs, and compare it against the next run, as sketched below.
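
A minimal sketch using Benchmark.js (the two string-search test cases are placeholder examples; persisting the numbers to logs and diffing them between runs is left out):

```js
var Benchmark = require('benchmark'); // npm install benchmark

new Benchmark.Suite()
  .add('String#indexOf', function () {
    'Hello World!'.indexOf('o') > -1;
  })
  .add('RegExp#test', function () {
    /o/.test('Hello World!');
  })
  .on('cycle', function (event) {
    // event.target.hz is ops/sec - store it and compare against the previous run
    console.log(String(event.target));
  })
  .run();
```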

People

  • Kyle Simpson
  • Steve Souders

Sources

  • Website Performance by Kyle Simpson (2013, Safari Books Online)