In my previous Clojurists Together work on malli, I improved the performance of validating recursive refs, bounding the memory required for validation regardless of the depth of input values. This is implemented by eagerly expanding recursive schemas until recursion points are discovered, instead of lazily realizing and caching (an unbounded number of) new levels of recursion as inputs require.
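To ground this, here is a minimal sketch of the kind of recursive schema involved, written against malli's documented `:ref` and local-registry features: a cons-cell-like list of ints that refers back to itself. A lazy strategy would realize a new validator level for each nesting depth it encounters; the eager strategy expands the schema until it reaches the `[:ref ::cons]` recursion point and stops there.

```clojure
(ns example
  (:require [malli.core :as m]))

;; A recursive schema: either nil, or a tuple of an int and another cons.
;; The local registry lets ::cons refer to itself via [:ref ::cons].
(def Cons
  [:schema {:registry {::cons [:maybe [:tuple :int [:ref ::cons]]]}}
   ::cons])

(m/validate Cons [1 [2 nil]])     ;; => true
(m/validate Cons [1 [2 [3 nil]]]) ;; => true, regardless of nesting depth
```

With eager expansion, the compiled validator for `Cons` is a fixed-size structure reused at every depth, so validating a deeply nested value no longer grows the cached validator state.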
While this increases the reliability of long-running systems by preventing memory leaks when validating large inputs, it comes with the drawback that more memory is required upfront during validator compilation. This is an obstacle Metabase has been navigating while testing this optimization. While they are excited to see