What problem does file bundling solve?

How were dependencies loaded on pages?

Previously, clients made many requests over the network for static assets, and one would wait until all assets were loaded before the page was usable. With file bundling, the client doesn't make many requests to the server for separate static files; instead, one bundle is requested containing the CSS, JavaScript, and other assets. This optimizes load time, and we only send files over the network when the user needs them.

Before JavaScript modules (import and require statements), there was one main script for the entire site. This meant global variables and IIFEs, i.e. global scope pollution.

Modules were introduced in ES6. To bridge the gap between ES5 and ES6, we used transpilers (Babel). Transpilers transform your code and compile it into older equivalents that can be executed across different browsers (because they run older JS engines). Transpile: transform and compile. Along with webpack, this let us write modern, module-based code and still ship something older browsers could run.

Bundlers (such as Webpack) optimize code for the browser: they minify the bundle, removing whitespace, shortening long variable names, and so on, which can reduce file size significantly. A source map can decode the bundled modules, putting the formatting back so the original source code can be read.
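A minimal config sketch of those two ideas (the entry path and filenames here are placeholders):

```js
// webpack.config.js - minimal sketch: production mode enables minification,
// and devtool emits a source map so the minified bundle can be traced back
// to the original source.
const path = require('path');

module.exports = {
  mode: 'production',        // built-in optimizations, including minification
  entry: './src/index.js',   // placeholder entry point
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  devtool: 'source-map',     // emit a separate .map file for debugging
};
```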

Read more here, by Caitlin Munley

https://www.simplethread.com/javascript-modules-and-code-bundling-explained/

What's a loader? How is it different from a plugin?

  • A loader tells webpack how to interpret files; it matches and transforms files on a per-file basis before they are added to the dependency graph. As webpack builds the graph, we can create rules that apply to modules as they are added (for example: when we see a JavaScript file, use babel-loader). A loader is basically a function that takes a source and returns a new source.

Example:
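A minimal sketch of a loader rule, assuming babel-loader is installed; the file patterns are placeholders:

```js
// webpack.config.js (excerpt) - when a module's filename matches the test
// pattern, webpack runs it through the loader before adding it to the
// dependency graph.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,            // match JavaScript files
        exclude: /node_modules/,  // skip third-party code
        use: 'babel-loader',      // transform with Babel
      },
    ],
  },
};
```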


What is a plugin, why a plugin?

A plugin works at the end of the bundle-generation process; it modifies how the bundles themselves are created.

Minifying the output and writing it to the file system is an example of what a plugin does.

It would be inefficient to do this kind of work on an individual, per-file basis. Instead, we wait until the dependency graph is created and similar files are bundled together.

When we want to interact with the compiler at runtime or apply functionality at the bundle level, a plugin is the right tool.
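A minimal sketch of what that looks like: a hypothetical plugin (`LogAssetsPlugin` is a made-up name) taps a compiler hook and sees every asset in the bundle at once, which a per-file loader never could:

```js
// A plugin taps into compiler/compilation hooks (via tapable) and can see
// the whole set of output assets, not just one file at a time.
class LogAssetsPlugin {
  apply(compiler) {
    // 'emit' fires just before assets are written to the output directory
    compiler.hooks.emit.tap('LogAssetsPlugin', (compilation) => {
      Object.keys(compilation.assets).forEach((filename) => {
        console.log('about to emit:', filename);
      });
    });
  }
}

// webpack.config.js (excerpt)
module.exports = {
  plugins: [new LogAssetsPlugin()],
};
```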

Plugins: yes, order matters; see the tapable notes.

https://stackoverflow.com/questions/41470771/webpack-does-the-order-of-plugins-matter

Linting: at what step of the webpack build process is linting applied? When does the TypeScript compiler run? Can we receive build-time errors?

What's the use of fork-ts-checker-webpack-plugin?

Faster builds, compared to using ts-loader alone.

It's the fastest compilation setup available: use fork-ts-checker-webpack-plugin, which performs type checking in a separate process while ts-loader handles only transpilation.

Files grow at a linear rate, so compilation time grows at a linear rate too. One way to speed things up is to enable transpilation only, which removes type checking. Another is to keep type checking but run the type checker in a separate process using fork-ts-checker-webpack-plugin.
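A sketch of that setup, following the ts-loader README linked below (package names are real; paths are placeholders):

```js
// webpack.config.js (excerpt) - ts-loader only transpiles (transpileOnly),
// while fork-ts-checker-webpack-plugin runs the type checker in a separate
// process so the build doesn't wait on it.
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: { transpileOnly: true }, // skip type checking inside the loader
        exclude: /node_modules/,
      },
    ],
  },
  plugins: [new ForkTsCheckerWebpackPlugin()], // type checking runs here, off the loader path
};
```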

Sourced from the ts-loader README documentation: https://github.com/TypeStrong/ts-loader

https://github.com/TypeStrong/ts-loader/tree/master/examples/fork-ts-checker-webpack-plugin

TypeScript and ESLint

OK, what's the story with TypeScript and ESLint? Here's the lowdown: TSLint used to be the linter (the checker for compilation errors and lint issues) for TypeScript code, but it has since been deprecated in favor of ESLint with the @typescript-eslint parser and plugin.
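A sketch of the replacement setup, assuming the @typescript-eslint packages are installed:

```js
// .eslintrc.js - lint TypeScript with ESLint using the @typescript-eslint
// parser and plugin (the successor to TSLint).
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint'],
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
  ],
};
```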

TypeScript and webpack

Background: TypeScript is a superset of JavaScript. At build time, TypeScript compiles to JavaScript.

For later

https://developpaper.com/collection-of-webpack-interview-questions/

Webpack

Problem statement

The top three causes of slow page load times are:

  1. amount of JavaScript shipped on initial download
  2. amount of CSS shipped on initial download
  3. amount of network requests on initial download

File Bundlers: A background

What it is

At a high level, Webpack resolves files and ties them into a single dependency graph. Files are assets; Webpack bundles these assets together to form a DAG (directed acyclic graph). Even shorter: webpack is a file bundler, and a file bundler puts all of your code and its dependencies in one file.

Why it's valuable

Webpack can enable us to ship applications with better performance, pivot to new libraries quickly without breaking changes, and publish changes quickly. From a user's perspective, Webpack can make the user experience better, and from a developer's perspective, it can improve our development experience.

How it works

Webpack is executed at build time. Here is the sequence of operations in order:

  • Resolve
  • Compose
  • Transform

We define types of files for Webpack to match and resolve (.json, .js, .css, .ts, .wasm, etc.). Webpack can then apply custom loaders to transform the contents of these files into code that will work in a browser.
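For example, a sketch of the matching side of that (the extensions listed are just the ones mentioned above):

```js
// webpack.config.js (excerpt) - tell webpack which extensions to try when an
// import omits one; loader rules then decide how each matched file is
// transformed.
module.exports = {
  resolve: {
    extensions: ['.ts', '.js', '.json', '.wasm'],
  },
};
```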

Core concepts

  • Loaders
  • Code splitting
  • Plugins
  • Preset Environments

Code splitting

Evaluate code at build time and split code by mode/environment configuration

Loaders

Plugins

Questions

  • difference between lazy-loading and tree-shaking?

send only the necessary bundle: set size limits and use code splitting with asynchronous loading

resolve files and tie everything together in a single dependency graph

define mode configurations based on the environment

Webpack

Core concepts

entry / output

  • where and how to distribute files

loader


Suppose we have a .less file. The .less extension matches a loader pattern, so we apply the less loader to it; the output is CSS. The CSS then matches the css pattern, so we apply the css loader and turn the file into a stylesheet.

We can also include and exclude specific files, such as test files.
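A sketch of that chain (the loader package names are real; the test-file pattern is a placeholder):

```js
// webpack.config.js (excerpt) - loaders in `use` run right to left: .less
// goes through less-loader, the resulting CSS goes through css-loader, and
// style-loader injects it into the page. `exclude` keeps test files out.
module.exports = {
  module: {
    rules: [
      {
        test: /\.less$/,
        exclude: /\.test\.less$/,  // e.g. keep test-only styles out of the bundle
        use: ['style-loader', 'css-loader', 'less-loader'],
      },
    ],
  },
};
```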

Plugins add additional functionality to compilations; they can do everything you can't do with a loader. Loaders are applied on a per-file basis, while plugins can access the whole bundle of files. For instance, uglifying JS: we wouldn't want to do that in a loader, because we'd get a less-than-optimal minification; the minifier would only know about that one file and would have no view of the other files and what is actually used. The same applies to compression.

When we want to interact with the compiler at runtime or apply functionality at the bundle level, plugins are the way to go

webpack config file

  • we can conditionally build based on CLI parameters - such as environment variables
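A sketch of that, using webpack's support for exporting the config as a function of the CLI-provided env:

```js
// webpack.config.js - exporting a function lets the config branch on values
// passed from the CLI, e.g. `webpack --env production`.
module.exports = (env = {}) => ({
  mode: env.production ? 'production' : 'development',
  devtool: env.production ? 'source-map' : 'eval-source-map',
  // ...rest of the configuration
});
```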

injects whatever output assets are produced into the HTML file

Code coverage with webpack is very important; it gives us opportunities to identify what code is used and unused. We can set a standard in the CI pipeline: a threshold of kB shipped per PR, and if we stay under the threshold we're in the clear. That makes performance standards first class, something we do and something we're concerned about. Code coverage unveils how much code we actually need, which matters for shipping a super fast experience; it can really affect mobile users or people on slower networks.

Webpack treats loaders like JavaScript, so we can use them the way we use JavaScript modules: they let us make things reusable and composable.

We can also use loaders to help optimize file loading. For instance, we can specify a byte threshold for inlining images; anything past the threshold is emitted to the dist directory, and the loader returns the hashed dist URL of where the file will be. Leveraging loaders to abstract file URL handling, rather than manually specifying file URLs, can save us a ton of time.
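A sketch with url-loader (which falls back to file-loader above the limit); the threshold and name pattern are arbitrary choices:

```js
// webpack.config.js (excerpt) - images under the byte limit are inlined as
// base64 data URLs; larger ones are emitted to the output directory and the
// import resolves to their hashed URL.
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)$/,
        use: {
          loader: 'url-loader',
          options: {
            limit: 8192,                 // ~8 KB inlining threshold
            name: '[name].[hash].[ext]', // filename pattern for emitted files
          },
        },
      },
    ],
  },
};
```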

Suppose we have many images that we send to the browser. Service workers can help us cache those images ahead of time for performance.

Specifying options in loader rules can separate production and dev cases.

Take the image, put it in the dist directory, and return its URL: that's file-loader/url-loader.

All the assets are tied together in the same graph; we can transform content with loaders before it is compiled.

Code splitting: at build time, we create separate chunks of JavaScript that will be loaded asynchronously. The lazy bundles are created in our build step, and webpack loads and evaluates them asynchronously at runtime.

“Dynamic” and static code splitting: everything webpack does happens at build time, so we always code split statically, meaning the split assets already exist. We want to code split when we have a heavy library that we don't need upfront, i.e. not in the initial bundle. When to use it:

  • “heavy” JavaScript
  • anything temporal (tooltip, modal), i.e. anything that loads conditionally
  • anything “below the fold”
  • routes, especially with client-side routing: code split every single route so the only code delivered to the person's experience is the code for the page they are on

If a lazy-loaded module has been run, it is cached, so if it is invoked again we don't need to lazy-load it: we don't make another network fetch, we look in the module cache. The process of code splitting is removing the JavaScript that isn't needed for a particular moment in the user experience.
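A sketch of a dynamic import; `./HeavyChart` and `renderChart` are placeholder names:

```js
// webpack turns the dynamically imported module into its own chunk at build
// time; the chunk is fetched only when this function first runs, and repeat
// calls resolve from the module cache rather than the network.
async function showChart(container) {
  const { renderChart } = await import('./HeavyChart');
  renderChart(container);
}
```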

loading an async bundle based on runtime conditions

we introduce asynchronous behavior

More lazy chunks mean more build time, roughly linearly: webpack has to create more bundles and scan them to optimize where the modules should be placed.

lazy-once: is there a runtime cost for small bundles? Profile before using lazy-once (it can be used in dev mode, because lazy-loading takes time to optimize and can slow the build).

We can make code splitting first class. Suppose we have a component library: if we want to focus on performance in our library, having the ability to pass a function that returns a dynamic import into our API surface is powerful.
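A sketch of that kind of API surface; `createLazyWidget` and `./Tooltip` are hypothetical names:

```js
// The library accepts a () => import(...) function, so the consumer controls
// when (and whether) the heavy chunk is fetched.
function createLazyWidget(load) {
  let modulePromise;
  return function mount(el) {
    modulePromise = modulePromise || load(); // fetch the chunk on first use only
    return modulePromise.then((mod) => mod.default(el));
  };
}

// consumer: webpack code-splits './Tooltip' into its own chunk
const mountTooltip = createLazyWidget(() => import('./Tooltip'));
```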

Code splitting exists to solve performance, and the number one performance problem: the amount of JavaScript shipped in the initial experience.

The point of presets is that you can add isolated functionality to experiment and test with a simple flag or extra script. Keep the webpack config at the top level of your project. Webpack resolves the packages in node_modules.

Use a build tool to consume the modules in a way that can be tree-shaken, scope-hoisted, optimized, and code split.

We can't split a library that has already been bundled, so don't use webpack for libraries; don't transpile them either.

production

We want minification, code splitting, and lazy loading. What optimizations do we make?
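A sketch of the production side of that; these are standard webpack options, not a complete config:

```js
// webpack.config.js (excerpt) - production mode enables minification by
// default; splitChunks separates shared/vendor code so it can be cached and
// lazy-loaded chunks stay small.
module.exports = {
  mode: 'production',
  optimization: {
    minimize: true,                 // Terser is webpack's default minimizer
    splitChunks: { chunks: 'all' }, // split shared code into separate chunks
  },
};
```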
