This utility library monitors the event loop and reports any lag above a configurable threshold to a consumer. It uses heavy under the hood to measure event-loop delay, and it also logs lags from time to time.
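For context, here is a minimal sketch of the general technique such monitors rely on; this is an illustration, not this library's actual code. You schedule a timer at a fixed interval and compare when it actually fires with when it should have fired: the difference is the event-loop lag. The names CHECK_INTERVAL and LAG_THRESHOLD are mine, chosen to mirror the INTERVAL_MEASURE and MAX_LAG values shown further down.

// Illustration only: measuring event-loop lag via timer drift.
const CHECK_INTERVAL = 500; // ms between checks
const LAG_THRESHOLD = 70;   // ms of tolerated lag

let last = Date.now();
setInterval(function () {
  const now = Date.now();
  const lag = now - last - CHECK_INTERVAL; // how late the timer actually fired
  last = now;
  if (lag > LAG_THRESHOLD) {
    console.log('Event loop lag: ' + lag + 'ms');
  }
}, CHECK_INTERVAL);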
In my case, I'm using this utility because I have a lot of jobs spread across consumers, and those consumers need to manage what they can (and can't) handle. When the event loop becomes laggy, it means the single thread (the event loop) is too heavily loaded and there is no room left for I/O. Hence I register a listener that breaks the parallelism of the process.
Why am I not using a fork? Because I want to avoid too much I/O: I store plenty of jobs in memory at once and then process them in parallel, since each of them involves I/O. Using IPC in my case might lead to jobs being lost, which I'm not ready to trade against Rabbit.
const monitor = require('event-loop-monitoring.js');
monitor.underPressure(function () {
  console.log('Event loop is under pressure.');
});
monitor.start();
const LOG_INTERVAL = 5000;     // Do not log more than once within 5 seconds
const INTERVAL_MEASURE = 500;  // Check event-loop delay every 500 ms
const MAX_LAG = 70;            // Tolerated event-loop lag is 70 ms
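To make the "break the parallelism" idea above more concrete, here is a hedged sketch of how the callback could be wired into job processing. Only underPressure() and start() come from the library; maxParallelJobs, drainQueue and processJob are hypothetical names used purely for illustration.

const monitor = require('event-loop-monitoring.js');

// Hypothetical consumer-side state: how many jobs we allow in flight at once.
let maxParallelJobs = 20;

monitor.underPressure(function () {
  // The event loop is lagging: shrink the parallelism so I/O gets room again.
  maxParallelJobs = Math.max(1, Math.floor(maxParallelJobs / 2));
  console.log('Under pressure, reducing parallelism to ' + maxParallelJobs);
});

monitor.start();

// Hypothetical job loop: never keep more than maxParallelJobs jobs in flight.
async function drainQueue(jobs, processJob) {
  while (jobs.length > 0) {
    const batch = jobs.splice(0, maxParallelJobs);
    await Promise.all(batch.map(processJob));
  }
}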
If you have any suggestions, give me a heads-up :)