  • Worker threads
    • “Workers (threads) are useful for performing CPU-intensive JavaScript operations. They do not help much with I/O-intensive work. The Node.js built-in asynchronous I/O operations are more efficient than Workers can be.”
    • “Unlike child_process or cluster, worker_threads can share memory.”
    • But they can’t share server ports, and they don’t come with load balancing; see the shared-memory sketch after this list.
  • https://www.npmjs.com/package/workerpool
    • Manages a pool of workers (I suppose to handle crashes and the like)
    • Makes it easier to pass values to and from workers
    • Makes it easier to define worker code inline, but the usefulness of that is limited because it isn’t a real closure (see the workerpool sketch after this list)
  • https://www.npmjs.com/package/piscina
    • Less magical than workerpool; it just manages the pool (see the piscina sketch after this list)
  • Cluster
    • “When process isolation is not needed, use the worker_threads module instead, which allows running multiple application threads within a single Node.js instance.”
    • Child processes with shared server ports.
    • Simple round-robin load balancing; none of the advanced features found in, for example, Caddy, such as sticky sessions.
    • Crashed workers are not automatically respawned by Node.js.
    • Probably uses more memory, since each worker is a full process.
  • Child processes
    • Maximum flexibility, not terribly complicated
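
A minimal sketch of the shared-memory point above, assuming a SharedArrayBuffer passed through workerData; the buffer is shared rather than copied, so both threads see the same bytes and Atomics coordinates access:

import workerThreads from "node:worker_threads";

if (workerThreads.isMainThread) {
  // The SharedArrayBuffer is not copied when passed through workerData:
  // the main thread and the worker operate on the same memory.
  const sharedBuffer = new SharedArrayBuffer(4);
  const counter = new Int32Array(sharedBuffer);
  const worker = new workerThreads.Worker(new URL(import.meta.url), {
    workerData: { sharedBuffer },
  });
  worker.on("exit", () => {
    console.log(`Main thread sees counter = ${Atomics.load(counter, 0)}`);
  });
} else {
  // The worker increments the counter in place; no message passing involved.
  const counter = new Int32Array(workerThreads.workerData.sharedBuffer);
  Atomics.add(counter, 0, 1);
}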
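
A minimal workerpool sketch, assuming its dynamic pool.exec(function, arguments) offloading; the inline function is serialized and shipped to a pooled worker, which is why it can’t be a real closure over this module’s variables:

import workerpool from "workerpool";

// A pool with no worker script: functions passed to exec() are serialized
// and run in one of the pooled workers.
const pool = workerpool.pool();

// Everything the offloaded function needs must come in through its arguments,
// because it cannot close over variables from this module.
const total = await pool.exec(
  (n) => {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += i;
    return sum;
  },
  [100_000_000]
);
console.log(total);

await pool.terminate();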
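
A minimal piscina sketch with hypothetical file names (worker.mjs and main.mjs), assuming piscina’s run() method; the pool just runs the worker file’s default export once per task, with no function serialization:

// worker.mjs (hypothetical file name): piscina runs the default export per task.
export default ({ a, b }) => a + b;

// main.mjs (hypothetical file name)
import Piscina from "piscina";

const piscina = new Piscina({
  filename: new URL("./worker.mjs", import.meta.url).href,
});

console.log(await piscina.run({ a: 4, b: 6 })); // 10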

  • Pros of child processes
    • In our case, we have two roles: web & workers (for background jobs); see the child_process sketch after this list.
    • Besides, we already monitor other child processes, for example, Caddy and saml-idp.
    • Things are better isolated in case of crashes
    • In theory we could replace the internal process manager with something else, by calling the application with the hidden command-line flags.
    • Much more traditional (available since early versions of Node.js)
  • Pros of worker threads
    • Consume fewer resources, for example, memory, but it isn’t magic
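
A minimal child_process sketch of the web & worker split above, with a hypothetical role command-line argument standing in for the real hidden flags; the parent respawns whichever role exits, so a crash in one role leaves the other running:

import childProcess from "node:child_process";
import { fileURLToPath } from "node:url";
import express from "express";

// Hypothetical role argument standing in for the hidden command-line flags.
const role = process.argv[2];

if (role === undefined) {
  // Process manager: start one child per role and respawn it whenever it exits.
  for (const childRole of ["web", "worker"]) {
    const start = () => {
      childProcess
        .fork(fileURLToPath(import.meta.url), [childRole])
        .once("exit", start);
    };
    start();
  }
} else if (role === "web") {
  const application = express();
  application.get("/", (request, response) => {
    response.send(`Hi from the web process, process.pid = ${process.pid}`);
  });
  application.listen(3000);
} else if (role === "worker") {
  // Placeholder for the background jobs loop.
  setInterval(() => {
    console.log(`Background jobs running in process.pid = ${process.pid}`);
  }, 1000);
}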

  • Spawn multiple worker threads with the same function?
    • That’s possible (the worker_threads example below does exactly that)
  • Start one server, accept connections, and forward them to a worker thread?
    • Doesn’t work, because the connection’s native handle can’t be cloned and sent to a worker thread.
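
Experiment with cluster: every forked process listens on the same port 3000, and the primary distributes incoming connections across them round-robin.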

import cluster from "node:cluster";
import express from "express";

console.log(
  JSON.stringify({
    isPrimary: cluster.isPrimary,
    processId: process.pid,
  })
);

const application = express();

application.get("/", (request, response) => {
  response.send(`Hi from process.pid = ${process.pid}`);
});

if (cluster.isPrimary) {
  // Fork 20 worker processes.
  for (let processIdentifier = 0; processIdentifier < 20; processIdentifier++) {
    cluster.fork();
  }
} else {
  // Every worker listens on the same port 3000; the cluster module shares the
  // server handle and the primary distributes connections round-robin.
  application.listen(3000);
}
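
Experiment with worker_threads: every worker runs this same module, but each one has to listen on its own port, because worker threads can’t share server ports.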

import workerThreads from "node:worker_threads";
import express from "express";

console.log(
  JSON.stringify({
    isMainThread: workerThreads.isMainThread,
    threadId: workerThreads.threadId,
    workerData: workerThreads.workerData,
  })
);

const application = express();

application.get("/", (request, response) => {
  response.send(
    `Hi from workerIdentifier = ${workerThreads.workerData.workerIdentifier}`
  );
});

if (workerThreads.isMainThread) {
  // Spawn 20 worker threads that all run this same module.
  for (let workerIdentifier = 0; workerIdentifier < 20; workerIdentifier++) {
    new workerThreads.Worker(new URL(import.meta.url), {
      workerData: { workerIdentifier },
    });
  }
} else {
  // Unlike cluster, worker threads cannot share a server port, so each worker
  // listens on its own port.
  application.listen(3000 + workerThreads.workerData.workerIdentifier);
}