Originally posted as a reply to: https://gist.github.com/reduz/c5769d0e705d8ab7ac187d63be0099b5
Turned into a gist due to the high likelihood of deletion. Also edited down to remove irrelevant trolling, so as to be useful to someone else considering Depth Reprojection.
Yes, I know SSR and Parallax-Corrected Shadowmaps work, but the consequences of errors in those depth tests aren't as severe.
You yourself state that this is a general-purpose engine, so how is a technique that will have trouble with:
- deformables (cloth, vegetation, skinned meshes, etc.)
- dynamic objects (how are you going to reproject the depth of a moving object?)
general purpose at all? For now, all I see is that the only occluders you'll support are static triangle meshes. And by static, I mean truly static: no movement from frame to frame, not even as a rigid body.
I thought general purpose could mean something like a first-person game with, you know, animated characters that can occlude vast portions of the screen?
You reiterate time and time again that this is not an AAA engine, so don't you think it would be nice NOT to require artists/users to make specialized, simplified "occluder geometries" and have to remember to assign them?
I mean, you expect ray-tracing to be fast enough to replace a z-prepass, and you hope this will be the case "because there's only a few pixels to trace for". Given that you want this to run on mobile and the web, your software raytracing fallback layer will have to be faster than a z-prepass (because otherwise you'd just do a z-prepass and skip occlusion culling entirely), which will probably necessitate a separate, simpler BLAS per occluder than the one you'll use for the shadow raytracing. A nice, fun way to increase your memory footprint for no reason.
This is before I even point out that your users will sure appreciate having to "bake" occluder BLASes for their occluder meshes, which they'll also appreciate having to make and maintain in the first place.
The final nail in the coffin comes from the fact that, unless you want to give up on streaming static chunks or feel like building the TLAS yourself (which you might for a fallback layer with Embree), Vulkan's Acceleration Structure is a black box. If you want to use as much as a single different BLAS in an otherwise identical TLAS, you'll need to build a new TLAS from scratch (you can't just copy the shadow raytracing TLAS and hotswap the pointers to make it point at different, simpler BLASes, even if the input BLAS count and AABBs match). This workload does not scale with resolution and needs to be done every frame, even if you make your culling depth buffer 1x1.
Again the AAA argument: you have neither the resources nor the expertise to maintain complex, duplicated codepaths.
Your design forces (I hope you're aware, but with every reply I lose faith) the renderer to partition the drawing into two distinct stages:
- static objects
- everything else
You then need to split your renderpass into two, so that you can "save a copy" of the depth buffer before you draw the other, non-static things into it. Them tiled mobile GPUs are sure gonna love that. The fun part (as I promised to expand upon) is that as soon as something starts moving (e.g. a door), you'll need to exclude it from the static set and not draw its occluder, because you cannot reproject its depth.
Basically, how you'd do the occlusion tests depends on whether you want rasterization or compute:
- rasterization => abuse depth-test-only drawing: draw simple conservative occludee bounding volumes (simplest is an AABB or OBB, can be a convex hull) with a fragment shader that writes out per-drawable visibility to an SSBO (z-prepass-like)
- compute => HiZ: mip-map the depth buffer and test just a 2x2 texel footprint of the occludee's projected screen-space AABB against the right mip, like vkguide does (see the sketch right after this list)
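
For reference, here's a minimal CPU-side model of that compute-style test, assuming standard (non-reversed) depth where bigger means farther and a HiZ pyramid whose texels each store the FARTHEST depth they cover (so the test stays conservative). All names are made up; this is a sketch of the idea, not anyone's actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One mip level of the hierarchical depth buffer. Each texel holds the
// FARTHEST depth of the footprint it was reduced from (standard depth,
// 1.0 = far plane), which is what keeps the culling conservative.
struct HiZLevel {
    int width = 0, height = 0;
    std::vector<float> depth; // row-major, size = width * height
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return depth[y * width + x];
    }
};

// Conservative visibility test of a screen-space rect (in mip-0 pixels)
// against the HiZ pyramid. Returns true if the occludee MIGHT be visible.
// 'nearestDepth' is the closest NDC depth of the occludee's bounding volume.
bool testRectVisible(const std::vector<HiZLevel>& hiz,
                     float minX, float minY, float maxX, float maxY,
                     float nearestDepth)
{
    // Pick the mip where the rect covers at most ~2x2 texels,
    // the same trick vkguide-style GPU-driven pipelines use.
    float w = maxX - minX, h = maxY - minY;
    int lod = (int)std::ceil(std::log2(std::max(std::max(w, h), 1.0f)));
    lod = std::clamp(lod, 0, (int)hiz.size() - 1);
    const HiZLevel& lvl = hiz[lod];

    float scale = 1.0f / float(1 << lod);
    int x0 = (int)std::floor(minX * scale), x1 = (int)std::floor(maxX * scale);
    int y0 = (int)std::floor(minY * scale), y1 = (int)std::floor(maxY * scale);

    // Farthest depth the occluders reach anywhere under the rect's footprint.
    float farthestOccluder = 0.0f;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            farthestOccluder = std::max(farthestOccluder, lvl.at(x, y));

    // Cull only if the occludee's nearest point is behind even the farthest
    // occluder under its footprint; equality is treated as visible to be safe.
    return nearestDepth <= farthestOccluder;
}
```

The mip selection is what keeps this at roughly four loads per occludee regardless of how much screen it covers.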
The HW occlusion pixel-counter queries are not an option, because only one can be active per drawcall and they are super slow even with conditional rendering (which was invented to save you from GPU->CPU readbacks). Their suckiness is the reason why that low-res software Depth Buffer + Occlusion Testing on the CPU approach was popular at DICE and Crytek.
So anyway, at some point after frustum culling but before you can even start testing objects for visibility, you'd need to reproject that previous-frame partial depth buffer and raytrace the holes, but you can't do that before polling for input. Then you need to do the occlusion tests, and you don't have a shadowpass or anything else to keep the GPU busy in the meantime.
The divergence in the Reprojection and Raytracing shader is gonna be some next-level stuff; I'd personally love to see the Nsight trace of how much time your SMs spend idling, if you ever get far enough to implement it.
You'll probably dig yourself into a hole so deep you'll consider doing "poor man's Shader Invocation Reordering" at that point and blog about it as some cool invention.
You're probably not the first person to come up with "last frame depth reprojection" as an idea, now think about why nobody went through with it.
Raytracing to "fill gaps" doesn't make the idea special.
There is simply nothing to reproject: depths are point-sampled and you cannot interpolate between them (even with a NEAREST filter). The depth values are defined and valid ONLY at the last frame's pixel centers.
A depth buffer used for culling needs to be conservative (or, as some people say, eager); therefore its depth values can only be FARTHER than "ground truth".
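
As an aside, this is also why culling pyramids are built with a max-reduction under the standard depth convention (min with reversed-Z): the low-res buffer may only over-estimate visibility, never cull something that was actually visible. A minimal sketch, assuming row-major float depth and the 1.0-is-far convention:

```cpp
#include <algorithm>
#include <vector>

// Reduce a depth buffer to half resolution for culling purposes.
// Standard depth convention assumed (1.0 = far plane): each output texel
// keeps the FARTHEST of the 2x2 input texels it covers, so occludees can
// only pass the test more easily, never get culled by accident.
std::vector<float> downsampleConservative(const std::vector<float>& src,
                                          int srcW, int srcH,
                                          int& dstW, int& dstH)
{
    dstW = std::max(1, srcW / 2);
    dstH = std::max(1, srcH / 2);
    std::vector<float> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float farthest = 0.0f;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    int sx = std::min(2 * x + dx, srcW - 1);
                    int sy = std::min(2 * y + dy, srcH - 1);
                    farthest = std::max(farthest, src[sy * srcW + sx]);
                }
            dst[y * dstW + x] = farthest;
        }
    }
    return dst;
}
```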
That requirement bites no matter whether you run a gather (SSR-like) or a scatter (imageAtomicMax/Min, at which point you've really lost your marbles).
Don't believe me? Try reprojecting the depth buffer produced by a static chain-link fence (alpha-tested or not, it doesn't matter) and call me back.
Essentially every pixel turns into a gap that needs to be raytraced.
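
If you want to see it without firing up a GPU, here's a 1D toy of the scatter variant. The camera motion is faked as a parallax shift proportional to baseline/depth and the numbers are deliberately picked to make the failure obvious, but the mechanism is exactly the point-sampling problem above:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 64;
    const float FAR = 1.0f;

    // Last frame's depth: a "chain-link fence", every other texel is a
    // near wire (depth 0.1), the rest see the far background (depth 0.9).
    std::vector<float> prevDepth(W);
    for (int x = 0; x < W; ++x)
        prevDepth[x] = (x % 2 == 0) ? 0.1f : 0.9f;

    // Scatter reprojection: forward-project each old texel center into the
    // new frame. The sideways camera move becomes a parallax shift that is
    // inversely proportional to depth (near texels move more than far ones).
    const float baseline = 0.1f;
    std::vector<float> newDepth(W, FAR);   // FAR means "nothing landed here"
    for (int x = 0; x < W; ++x) {
        float shift = baseline / prevDepth[x];
        int dst = (int)std::floor(x + shift + 0.5f);
        if (dst < 0 || dst >= W) continue;
        // imageAtomicMin-style resolve: keep the nearest sample per texel.
        newDepth[dst] = std::min(newDepth[dst], prevDepth[x]);
    }

    // Every texel that received no sample is a "gap" you'd have to raytrace.
    int gaps = 0;
    for (int x = 0; x < W; ++x)
        if (newDepth[x] == FAR) ++gaps;
    std::printf("%d of %d texels are gaps\n", gaps, W); // half of them
}
```

And the resolve direction is a lose-lose on top of that: keeping the nearest sample (Min) over-occludes and will cull things that were actually visible, while keeping the farthest (Max) throws the fence away as an occluder entirely.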
The only sane way to reproject is via a gather, which is basically the same process as Screen Space Reflections or Parallax Occlusion Mapping.
Let me remind you that a z-prepass usually takes <1 ms, and if it takes more than that, alternative culling methods get considered.
You've now taken one of the most insanely expensive post-processes (except maybe SSAO) and made it a prerequisite of your culling (slow clap).
To put the icing on the cake, a reprojected depth buffer (written programmatically rather than by the rasterizer) disables HiZ, so any per-pixel visibility tests (if you go that route) done by rasterizing the occludee's conservative bounding volume magically get many times slower.
Finally, there's that whole poll-input -> frustum-cull -> reproject-depth -> occlusion-cull dependency chain in front of the first renderpass, which increases your latency.
Now imagine if a solution existed that gave you 99% correct visibility, at full resolution, in far less time than a z-prepass or this weird SSR.
I gave you a solution that's "essentially free": it gives you all the visibility data in the course of doing work you'd already be doing anyway, which is the most robust thing that will ever exist for rasterization. It:
- has actually been implemented before and used in production
- requires no special HW to be efficient (unlike Ray-Tracing)
- gives a 100% pixel-perfect last-frame visible drawable set
- is doable in Forward+ as long as you have a z-prepass which you should have anyway
- knows 95% of its Potentially Visible Set before the next frame starts, so you can start drawing right away, without incurring extra latency
- has no issues with procedural or deformable geometry
- requires no prebaking
- requires no extra special geometries, metadata, settings or parameters/heuristics to tweak
- is completely transparent to the user (no popping, no intervention needed)
- is 100% accurate and artefact-free (the second depth-testing pass takes care of disocclusions)
- is scalable (you can interleave / subsample the visibility info, you'll just have more "disocclusions")
In case it wasn't clear, both the "last frame visible" and "disocclusion" sets are intersected with the new frame's "post-frustum-cull" set, not taken from the whole scene. A toy model of the scheme is sketched below.
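
In case the plumbing isn't obvious, here's a toy CPU model of how a scheme like this hangs together: screen-space rects stand in for meshes, a plain set stands in for the per-drawable visibility SSBO, and all the names are made up for illustration.

```cpp
#include <cstdio>
#include <unordered_set>
#include <vector>

// Toy model: drawables are screen-space rects at constant depth (in-bounds).
struct Drawable { int id; int x0, y0, x1, y1; float depth; };

// "Rasterize" a drawable into the depth buffer; any fragment that passes the
// depth test marks the drawable visible. This set insert is the SSBO write a
// fragment shader would do during the depth pass.
void drawAndRecordVisibility(const Drawable& d, std::vector<float>& depth,
                             int W, std::unordered_set<int>& visible)
{
    for (int y = d.y0; y <= d.y1; ++y)
        for (int x = d.x0; x <= d.x1; ++x) {
            float& z = depth[y * W + x];
            if (d.depth < z) { z = d.depth; visible.insert(d.id); }
        }
}

// One frame: pass 1 draws what was visible last frame (known before the frame
// starts, so no added latency); pass 2 depth-tests everything else that
// survived frustum culling and picks up the disocclusions.
std::unordered_set<int> cullFrame(const std::vector<Drawable>& postFrustumCull,
                                  const std::unordered_set<int>& lastVisible,
                                  int W, int H)
{
    std::vector<float> depth(W * H, 1.0f); // cleared to the far plane
    std::unordered_set<int> visible;
    for (const Drawable& d : postFrustumCull)          // pass 1
        if (lastVisible.count(d.id))
            drawAndRecordVisibility(d, depth, W, visible);
    for (const Drawable& d : postFrustumCull)          // pass 2: disocclusions
        if (!lastVisible.count(d.id))
            drawAndRecordVisibility(d, depth, W, visible);
    return visible; // feeds the next frame's pass 1
}

int main() {
    // A big near wall, a small object hidden behind it, one off to the side.
    std::vector<Drawable> scene = {
        {0, 0, 0, 63, 63, 0.2f},
        {1, 10, 10, 20, 20, 0.8f},
        {2, 70, 10, 90, 20, 0.5f},
    };
    std::unordered_set<int> visible; // empty on the very first frame
    for (int frame = 0; frame < 2; ++frame) {
        visible = cullFrame(scene, visible, 128, 64);
        std::printf("frame %d visible:", frame);
        for (int id : visible) std::printf(" %d", id);
        std::printf("\n");
    }
}
```

Pass 2 can of course be depth-test-only bounding volumes followed by actually drawing whatever passed; letting it write depth here just keeps the toy short and stays conservative.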
I wouldn't be surprised tbh: https://twitter.com/reduzio/status/1398467670056570880
Why do anything hard if Epic Rocks™️ and they're gonna release the source code anyway? Except you can't copy-paste that code into your engine; but hey, just use Unreal and it's all good.