
@bsitruk
Created June 29, 2025 09:24
[Guide] - Node - Library logging


When building a Node.js library, what's the recommended way to provide logs to users of the library?


Below is the approach that most mature Node.js packages follow today, together with links so you can dive deeper.


1. Default to silent logging that users can opt in to

  • Library authors shouldn’t spam the host application’s log stream.
  • The community convention is to gate logs behind an environment variable so they are off by default and can be enabled only when the consumer needs them.
  • Two common ways to do this are:
| Option | How it works | Pros | Cons |
| --- | --- | --- | --- |
| `debug` package | `const log = require('debug')('my-lib')` → run app with `DEBUG=my-lib* node index.js` | Zero-config for users, tiny (~2 kB), browser support, colorised output | Adds a runtime dep |
| `util.debuglog` (built-in) | `const {debuglog} = require('node:util'); const log = debuglog('my-lib');` → `NODE_DEBUG=my-lib node index.js` | No extra dependency, identical env-var UX | Node-only, minimal features |

Either option gives users a one-liner to switch the library’s logs on when they’re troubleshooting but keeps production output clean.


2. Expose a pluggable logger for serious apps

If your library may run inside services that already use Winston, Pino, or another structured logger, let the host supply its own logger:

// inside your library
let logger = require('debug')('my-lib');          // sensible default

function setLogger(custom) {
  // expect an object with debug/info/warn/error (Pino-style) or log methods
  logger = custom;
}

module.exports = { setLogger, /* …other API… */ };

// usage by the consumer
const pino = require('pino')();
const myLib = require('my-lib');
myLib.setLogger(pino);     // all my-lib messages now flow through Pino

This “logger injection” pattern is considered good practice because it avoids hard-wiring a dependency and lets the application choose its transports, log-levels and formats. (stackoverflow.com)
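One wrinkle in the snippet above is that the `debug` default is a bare function, while Pino-style loggers are objects with per-level methods. A small, hypothetical `createLogger` helper (illustrative, not part of any real library) can normalise both shapes so the rest of the codebase always calls `logger.info(...)` and friends:

```javascript
// Sketch of one way to harden the injection point: wrap whatever the host
// passes in so library code can always call logger.debug/info/warn/....
// The names createLogger and noop are illustrative.
const LEVELS = ['trace', 'debug', 'info', 'warn', 'error', 'fatal'];
const noop = () => {};

function createLogger(custom) {
  if (typeof custom === 'function') {
    // A bare function (e.g. one produced by the debug package) handles every level.
    return Object.fromEntries(LEVELS.map((lvl) => [lvl, custom]));
  }
  // Pino/Winston-style object: keep the methods it has, stub the rest.
  return Object.fromEntries(
    LEVELS.map((lvl) => [
      lvl,
      typeof custom?.[lvl] === 'function' ? custom[lvl].bind(custom) : noop,
    ])
  );
}

const logger = createLogger(console); // console has debug/info/warn/error
logger.info('library initialised');
logger.fatal('console has no fatal method, so this is a no-op');
```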


3. Use structured log entries and standard levels

Even if you keep an internal fallback logger:

  • Emit JSON (or at least key-value) so logs are machine-parsable. Both Pino (high-performance) and Winston (feature-rich, multiple transports) make this trivial. (betterstack.com)
  • Stick to the de-facto levels trace | debug | info | warn | error | fatal.
  • Include a short static msg plus relevant metadata (e.g. userId, durationMs) so upstream log aggregators can filter and query.
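A minimal sketch of such an entry, with plain `JSON.stringify` standing in for Pino/Winston serialisation (`formatEvent` and `logEvent` are illustrative names):

```javascript
// Illustrative structured-logging helper: one JSON line per event,
// combining a static msg with metadata fields.
function formatEvent(level, msg, meta = {}) {
  return JSON.stringify({ level, msg, time: Date.now(), ...meta });
}

function logEvent(level, msg, meta) {
  console.log(formatEvent(level, msg, meta));
}

logEvent('info', 'user fetched', { userId: 42, durationMs: 17 });
```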

4. Document the contract for your users

Add a small “Logging” section to your README explaining:

  1. How to enable debug output (DEBUG=my-lib* or NODE_DEBUG=my-lib).
  2. How to inject a custom logger (show a Pino/Winston example).
  3. The levels and fields you emit.

Clear docs save support tickets later.


5. Leave fatal situations to exceptions, not logs

If the library hits an unrecoverable state, throw an Error and let the application decide whether to log or crash-handle. Logs should describe what happened; control-flow remains with the caller.
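A short sketch of the idea (`ConfigError` and `loadConfig` are illustrative names):

```javascript
// Signal unrecoverable states with an Error instead of logging them;
// the caller decides whether to log, retry, or crash.
class ConfigError extends Error {}

function loadConfig(options) {
  if (!options || !options.apiKey) {
    throw new ConfigError('apiKey is required');
  }
  return { apiKey: options.apiKey };
}

try {
  loadConfig({});
} catch (err) {
  console.error('caller decides what to do:', err.message);
}
```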


TL;DR

  • Use debug or Node’s built-in util.debuglog so your logs are silent unless the consumer opts in.
  • Provide a setLogger() (or similar) hook so sophisticated apps can plug in Winston, Pino, etc.
  • Emit structured, levelled messages and document the feature.
  • Throw errors for truly exceptional cases; don’t just log them.

Following this pattern keeps your library unobtrusive for casual users while giving power users full control over where—and how—your log data goes.






What about using Node.js diagnostics_channel?


diagnostics_channel sits in a different spot in the observability stack than “regular” logging. Think of it as an in-process pub/sub bus for diagnostic events rather than a logger that prints strings.


What it is

  • A core module (node:diagnostics_channel, Stability 2 – Stable since Node 18 LTS) that lets you create named channels and publish() arbitrary JSON-ish messages on them. Nothing is emitted unless something has subscribed, so the default overhead is essentially zero. (nodejs.org)
  • Node itself uses built-in channels (http, net, process, etc.) so APM vendors can hook in without patching core.
  • You can expose your own channel (my-lib:something) for tooling to consume.

Where it shines

  • Instrumentation hooks – libraries that want to expose rich, structured telemetry (timings, payload sizes, SQL queries, etc.) without deciding how or where it should be logged.
  • High-performance paths – you can guard heavy message creation with if (channel.hasSubscribers) { … } so nothing is allocated when no one listens. (nodejs.org)
  • Trace correlation – the tracingChannel() API (experimental) and bindStore() helpers integrate with AsyncLocalStorage, letting you attach context (trace-ids, spans) as the call flows. (nodejs.org)

Where it doesn’t replace a logger

  • No levels, formatting or transports – it doesn’t print; consumers decide what to do (forward to Pino, send to OpenTelemetry, drop, …).
  • Requires a subscriber – if your typical user just wants to see readable log lines, they’d still need to wire one up.
  • Ecosystem adoption is smaller – many libraries expose debug logs or let you inject a Pino/Winston instance; far fewer expose diagnostics channels today.

Practical pattern for a library

import dc from 'node:diagnostics_channel';
const reqCh = dc.channel('my-lib:request');  // public channel

export async function fetchUser(id) {
  if (reqCh.hasSubscribers) reqCh.publish({ phase: 'start', id, ts: Date.now() });

  try {
    const res = await fetch(`https://api.service/users/${id}`);
    if (reqCh.hasSubscribers) reqCh.publish({ phase: 'finish', id, status: res.status });
    return res.json();
  } catch (err) {
    if (reqCh.hasSubscribers) reqCh.publish({ phase: 'error', id, err });
    throw err;
  }
}

A consumer that wants the info can subscribe:

import dc from 'node:diagnostics_channel';
dc.subscribe('my-lib:request', (msg, name) => {
  console.log(`[${name}]`, msg);        // or send to a tracer
});

Zero-config users will never see anything; advanced users and APMs get a structured event stream.


Recommendation

  • Keep your user-facing logs as described earlier – e.g. a debug default plus a setLogger() hook.
  • Optionally add diagnostics_channel for deep instrumentation if your library does something mission-critical (DB driver, HTTP client, queue).
  • Document the channel names and message shapes so external tools can rely on them. Channel names should include your package prefix (my-lib:*) to avoid collisions. (nodejs.org)
  • Treat tracingChannel() as opt-in only – it’s still experimental today. (nodejs.org)

In short, diagnostics_channel is a great augment to, not a substitute for, conventional logging: it offers a zero-overhead tap that observability tools can drink from while your everyday users keep the console clean.
