You have a single Java codebase: one repo to put all your code in. The code ultimately runs about 30 services and batch jobs (call these endpoints, or EPs). Some EPs share literally no code with others. All in all, we have well over 250 kloc compiled in one go to produce a single artefact (a jar; let's call it LEVIATHAN).
Each time a change is committed, the code is built and a new LEVIATHAN artefact version is emitted - hundreds of them, thousands even. There is a database in which each EP is associated with the LEVIATHAN version that should run it.
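Concretely, the mapping amounts to something like this (a minimal sketch; the names and version numbers are made up, and the real thing lives in a database table):

```kotlin
// A sketch of the EP -> LEVIATHAN-version mapping described above.
// All names and versions here are hypothetical.
val deployedVersions: Map<String, String> = mapOf(
    "foobar" to "1.2.3",  // a stable EP, hundreds of builds behind
    "bazqux" to "8.9.10"  // an EP tracking the current build
)

fun main() {
    println("foobar runs LEVIATHAN ${deployedVersions["foobar"]}")
}
```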
Stable EPs can thus be hundreds of versions behind current. Let's say you need to update such a stable EP by adding some new feature. You are then in the position of having to bring EP "foobar" up from v1.2.3 of LEVIATHAN to v8.9.10, with absolutely no idea whether code that foobar depends on has changed problematically somewhere in that gap.
I'm not in any way saying that this is a sensible way of organising a monorepo, and I'm sure there are plenty of legitimate responses along the lines of "use your VCS, duh" or "you have some tests, right?". But this whole situation would have been impossible with enforced boundaries between what are supposed to be modules.
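To illustrate what I mean by an enforced boundary - this is only a sketch, assuming Gradle and entirely hypothetical subproject names - give each EP its own subproject, and any dependency it never declared simply fails to compile:

```kotlin
// foobar/build.gradle.kts -- the hypothetical EP "foobar" as its own subproject.
plugins {
    `java-library`
}

dependencies {
    // The only code foobar may compile against. Everything else in the
    // repo is invisible to it, so an undeclared dependency breaks the
    // build instead of silently coupling foobar to the whole codebase.
    implementation(project(":batbaz"))
}
```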
In our other suite of systems, where we have essentially the exact opposite of a monorepo, I would have known instantly just how tied in to the rest of the codebase foobar was. Instead of facing a jump from 1.2.3 to 8.9.10, I've probably only got that foobar depends on batbaz 2.3.4 while current batbaz is 3.4.5. I know exactly what has changed and just how little it has changed, and thus the panic rising in my chest is more manageable.
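In dependency-declaration terms (again a sketch - the coordinates are hypothetical and Gradle syntax is assumed), the coupling is written down explicitly, so the delta to review is exactly one version bump:

```kotlin
// foobar/build.gradle.kts in the non-monorepo world: batbaz arrives as an
// external, versioned artefact rather than as source in the same build.
dependencies {
    // The delta to review is precisely batbaz 2.3.4 -> 3.4.5,
    // not thousands of commits' worth of LEVIATHAN churn.
    implementation("com.example:batbaz:2.3.4")
}
```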
Obviously these scenarios are incredibly tooling-specific - it's entirely reasonable that you have tooling which enforces your boundaries and builds multiple artefacts from a single repo in such a fashion that their interdependencies are easily understood by anyone delving into the codebase.
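For instance, a layout along these lines (a sketch, assuming Gradle, with hypothetical project names) gives you one repo, many artefacts, and a dependency graph you can actually interrogate:

```kotlin
// settings.gradle.kts -- one repo, roughly 30 subprojects emitting one jar
// each, instead of a single 250 kloc LEVIATHAN.
rootProject.name = "leviathan"

include(
    ":common",  // genuinely shared code lives here, explicitly
    ":batbaz",
    ":foobar"
    // ...one entry per EP
)
// With this in place, `gradle :foobar:dependencies` prints exactly what
// foobar is tied to -- no archaeology required.
```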