The Sane Rendering Manifesto

@bazhenovc · Last active July 18, 2025

The goal of this manifesto is to provide a set of easy-to-follow, reasonable rules that realtime and video game renderers can follow.

These rules strongly prioritize image clarity/stability and a pleasant gameplay experience over photorealism and excess graphics fidelity.

Keep in mind that shipping a game takes priority over everything else; it is acceptable to break the rules of this manifesto when there is no other good way to ship the game.

Do not use dynamic resolution.

Fractional upscaling makes the game look bad on most monitors, especially if the scale factor changes over time.

What is allowed:

  1. Rendering to an internal buffer at an integer scale factor, followed by a blit to native resolution with point/nearest filtering.
  2. An integer scale factor that matches the monitor resolution exactly after upscaling.
  3. A fixed scale factor, determined by the quality preset in the settings.

What is not allowed:

  1. Adjusting the scale factor dynamically at runtime.
  2. Fractional scale factors.
  3. Any integer scale factor that doesn't exactly match the monitor/TV resolution after upscaling.
  4. Rendering opaque and translucent objects at different resolutions.

Implementation recommendations:

  1. Render at a lower resolution internally, but output at native resolution.
  2. Render to a lower-resolution render target, then do an integer upscale and run postprocessing at native resolution.
  3. Use letterboxing to work around unusual resolutions (see the sketch below).
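
A minimal sketch of the integer-scale and letterbox math in C++; the `Viewport` struct and function name here are illustrative, not from any particular engine:

```cpp
#include <algorithm>
#include <cstdint>

struct Viewport { int32_t x, y, width, height; };

// Pick the largest integer scale that fits the native resolution, then center
// the scaled image (letterbox/pillarbox). Assumes the internal resolution is
// not larger than the native one.
Viewport IntegerUpscaleViewport(int32_t nativeW, int32_t nativeH,
                                int32_t internalW, int32_t internalH) {
    int32_t scale = std::max<int32_t>(1, std::min(nativeW / internalW,
                                                  nativeH / internalH));
    int32_t outW = internalW * scale;
    int32_t outH = internalH * scale;
    return { (nativeW - outW) / 2, (nativeH - outH) / 2, outW, outH };
}
```

For example, a 640x360 internal buffer maps to 1920x1080 at scale 3 with no bars, while on a 1920x1200 display the same buffer gets scale 3 with 60-pixel bars at the top and bottom.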

Do not render at lower refresh rates.

Low refresh rates (under 60Hz) increase input latency and make the gameplay experience worse for the player.

What is allowed:

  1. On high refresh rate monitors (90Hz, 120Hz, 240Hz, etc.) it is allowed to render at 60Hz.
  2. It is always allowed to render at the highest refresh rate the hardware supports, even if it's lower than 60Hz (for example, an incorrect cable/HW configuration, or the user explicitly configured power/battery saving settings).
  3. Offering alternative graphics presets to reach the target refresh rate.

What is not allowed:

  1. Explicitly targeting 30Hz refresh rate during development.
  2. Using any kind of frame generation - it does not improve input latency, which is the whole point of having higher refresh rates.

Implementation recommendations:

  1. Decouple your game logic update from the rendering code (see the sketch after this list).
  2. Use GPU-driven rendering to avoid CPU bottlenecks.
  3. Try to target native monitor refresh rate and use the allowed integer scaling to match it.
  4. Use vendor-specific low-latency input libraries.
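
For recommendation 1, a minimal sketch of the classic fixed-timestep pattern, assuming placeholder `Running`/`Update`/`Render` hooks: game logic ticks at a fixed rate while rendering runs at whatever rate the display allows.

```cpp
#include <chrono>

bool Running();            // placeholder: is the game still running?
void Update(double dt);    // placeholder: one fixed-rate logic step
void Render(double alpha); // placeholder: draw, interpolating by alpha in [0, 1)

constexpr double kFixedDt = 1.0 / 120.0; // simulation tick, independent of refresh rate

void GameLoop() {
    using Clock = std::chrono::steady_clock;
    double accumulator = 0.0;
    auto previous = Clock::now();
    while (Running()) {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;
        while (accumulator >= kFixedDt) { // zero or more logic steps per frame
            Update(kFixedDt);
            accumulator -= kFixedDt;
        }
        Render(accumulator / kFixedDt); // blend between the last two sim states
    }
}
```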

Do not use temporal amortization.

If you cannot compute something in the duration of 1 frame then stop and rethink what you are doing.

You are making a game, make sure it looks great in motion first and foremost. Nobody cares how good your game looks on static screenshots.

In many cases, bad TAA or unstable temporally amortized effects are an accessibility issue that can cause health problems for your players.

What is allowed:

  1. Ray tracing is allowed as long as the work is not distributed across multiple frames.
  2. Any kind of lighting or volume integration is allowed as long as it can be computed or converged within 1 rendering frame.
  3. Variable rate shading is allowed as long as it does not change the shading rate based on the viewing angle and does not introduce aliasing.

What is not allowed:

  1. Reusing view-dependent computation results from previous frames.
  2. TAA, including AI-assisted TAA. It has never looked good in motion; even with AI it breaks on translucent surfaces and particles.
  3. Trying to interpolate or denoise missing data in cases of disocclusion or fast camera movement.

Implementation recommendations:

  1. Prefilter your roughness textures with vMF filtering (see the sketch after this list).
  2. Use AI-based tools to generate LOD and texture mipmaps.
  3. Use AI-based tools to assist with roughness texture prefiltering: take a supersampled image as input and train the AI to prefilter it so there is less shader aliasing.
  4. Enforce consistent texel density in the art production pipeline.
  5. Enforce triangle density constraints in the art production pipeline.
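
For recommendation 1, a rough sketch of vMF-based roughness prefiltering for a single mip texel. The concentration fit follows the common vMF approximation from the averaged normal length; exact constants and NDF conventions vary between implementations, so treat this as an illustration rather than a drop-in filter:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Fit a von Mises-Fisher lobe to the normals covered by one mip texel and
// fold its width into the authored roughness (treating both as variances).
float PrefilterRoughness(const std::vector<Vec3>& normals, float baseAlpha) {
    if (normals.empty()) return baseAlpha;

    Vec3 avg = {0.0f, 0.0f, 0.0f};
    for (const Vec3& n : normals) { avg.x += n.x; avg.y += n.y; avg.z += n.z; }
    const float inv = 1.0f / static_cast<float>(normals.size());
    avg.x *= inv; avg.y *= inv; avg.z *= inv;

    // r is the length of the *unnormalized* average; r -> 1 means all normals agree.
    const float r = std::sqrt(avg.x * avg.x + avg.y * avg.y + avg.z * avg.z);
    if (r >= 0.9999f) return baseAlpha; // no spread, keep the authored roughness

    // vMF concentration, and the extra roughness variance it implies.
    const float kappa = (3.0f * r - r * r * r) / (1.0f - r * r);
    const float alpha2 = baseAlpha * baseAlpha + 1.0f / kappa;
    return std::min(1.0f, std::sqrt(alpha2));
}
```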
@bazhenovc (Author):

@ThreatInteractive I took a look; while I admire your passion, this seems to be very early in the brainstorming phase. I'm all about practical application and testing, so I typically dive into in-depth discussions only when there is some real-world implementation or a prototype to back them up.

I'm going to have to pass on further review unless you have something concrete to show. I'm fairly busy with the projects that are already in motion.

I hope you understand my perspective and I wish you all the best with the implementation. Let me know if/when you have a working prototype.

@bazhenovc (Author):

@ThreatInteractive I don't like the video for the following reasons:

  • You don't present any new information or findings; everything you're talking about is relatively common knowledge, and there wasn't anything new for me there.
  • There are no actionable items, just 20 minutes of non-constructive flaming.
  • I think you need to calm down a bit; the world is not ending, and there isn't some kind of graphics mafia oppressing everyone - don't be so angry.

Good luck with your game and further research, I sincerely hope that you succeed.

@krupitskas:

Hi Kirill!
While I agree about TAA and temporal effects (I myself prefer MSAA/SMAA), I'm not sure I agree that a GI solution should converge over one frame. I've worked with various GI techniques: LPV propagates over frames, RTXGI accumulates irradiance per probe over frames.
I think we can still try to keep geometry as sharp as possible; however, light can be spatially upscaled / temporally accumulated, because we don't have a good solution yet, unfortunately.
Also a question: do you know if it's possible to make MSAA and a V-buffer work together? If we render geometry and triangle indices into an intermediate buffer, I'm not sure how we can utilize MSAA there. Feels like SMAA is the only option?

@bazhenovc (Author):

@krupitskas In the visibility buffer approach, shading is decoupled from geometry rasterization: you can render the VB triangle ID into an MSAA target, then during the shading pass fetch the individual subsamples, shade them as if they were regular pixels, and blend the result. This is basically supersampling, and I'd say it's not going to be practical. In theory it is slightly cheaper than actual supersampling because you render the triangle ID buffer only once, but the shading cost is going to be exorbitant.
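
A rough sketch of that shading loop in C++-style pseudocode; `LoadTriangleID` and `ShadeTriangleSample` are hypothetical stand-ins for the V-buffer subsample fetch and the material evaluation:

```cpp
#include <cstdint>

struct float4 { float r, g, b, a; };

constexpr int kSampleCount = 4; // 4x MSAA triangle-ID buffer

uint32_t LoadTriangleID(int x, int y, int sample);                    // placeholder: V-buffer fetch
float4 ShadeTriangleSample(uint32_t triID, int x, int y, int sample); // placeholder: material eval

float4 ShadePixel(int x, int y) {
    float4 sum = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int s = 0; s < kSampleCount; ++s) {
        // Each subsample is shaded as if it were a full pixel, then the
        // results are averaged - effectively supersampled shading.
        const uint32_t triID = LoadTriangleID(x, y, s);
        const float4 c = ShadeTriangleSample(triID, x, y, s);
        sum.r += c.r; sum.g += c.g; sum.b += c.b; sum.a += c.a;
    }
    const float inv = 1.0f / kSampleCount;
    return { sum.r * inv, sum.g * inv, sum.b * inv, sum.a * inv };
}
```

In practice you would at least deduplicate identical triangle IDs within a pixel so each unique triangle is shaded once, but the worst case is still one full material evaluation per subsample.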

LPV doesn't have to propagate over multiple frames; you can run more than one propagation step per frame if performance allows.
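
Sketched out, with `LPVGrid` and `PropagateStep` as placeholders for the SH flux volume and the usual 6-neighbour propagation pass:

```cpp
struct LPVGrid;                    // placeholder: SH flux volume
void PropagateStep(LPVGrid& grid); // placeholder: one propagation pass

// Converge within a single frame by running N propagation steps back to back,
// trading GPU time per frame for temporal stability.
void UpdateLPV(LPVGrid& grid, int stepsPerFrame) {
    for (int i = 0; i < stepsPerFrame; ++i)
        PropagateStep(grid);
}
```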

@Johan-Hammes:

Hi Kirill,

Love the general direction, although I have to agree with krupitskas that some GI effects likely have to run over multiple frames and accumulate. I still feel that you are mostly referring to effects that mix spatial and temporal data and leave ghosting artifacts as a result.

At the risk of self-promotion, I think that AA has a far deeper solution. Most of the aliasing we see today is the result of miscalculating the colors of pixels at the edge of a mesh, and most methods only try to filter and blur the error away. JHFAA, on the other hand, goes to the source of the problem and fixes up all the normals before shading; as a result, AA is almost not needed for most meshes.
https://www.johanhammes.com/earthworks/shaders

And specular aliasing is also a 'solved' problem. The main issue there is that nobody seems willing to fix their meshes. As long as you insist on passing broken meshes into your engine and hoping some programmer can magically fix the data at runtime, it becomes almost impossible to do, and definitely impossible to do well.
The key point is to realize that we tend to use roughness wrong. It is not intrinsic to the material applied, but rather the amount of spread of all the normals inside a single pixel on screen.
Once you recognize that and introduce the concept of geometric roughness (a roughness value based on mesh curvature), account for the way it changes with distance to the mesh, and likely add anisotropic data to that, it is solved.
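
One way to read the geometric roughness idea, sketched as a variance sum; where `normalVariance` comes from (mesh curvature scaled by view distance, or screen-space normal derivatives) is an assumption here, not something the comment specifies:

```cpp
#include <algorithm>
#include <cmath>

// Widen the authored roughness by the spread of normals inside the pixel
// footprint, treating both as variances of the NDF.
float CombineRoughness(float materialAlpha, float normalVariance) {
    const float alpha2 = materialAlpha * materialAlpha + normalVariance;
    return std::min(1.0f, std::sqrt(alpha2));
}
```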

@bazhenovc (Author):

Hi @Johan-Hammes, that's an interesting idea; I'd like to know more if you don't mind. Could you post a more technical overview of what your solution is doing?

From what I understood, you are blending the colors on the edges of the object with the reflection cubemap or SSR; is that correct?

How does it work if the background is not a cubemap but another set of complex geometry? The blended color is not going to match exactly in that case; even if you have SSR, you will not have the correct data if the object is "self-occluding" (for lack of a better term).

You can actually see this problem in your own screenshot:
[image]

That could potentially work if you use the previous frame's SSR, but then there's a big can of worms with disocclusion and reprojection, plus potential issues with animated geometry.

Based on my initial understanding, I can definitely see how this could work as a specialized antialiasing method: for cases where you can clearly separate foreground and background objects, you can use it on the foreground objects but not the background ones. I'm not quite getting yet how it's going to work as a general-purpose solution that fits everything.

Could you please also elaborate on what exactly you mean by "fixing the meshes"? I want to be sure we're on the same page there.

@Johan-Hammes:

It seems all my emails bounce, so let's try writing here directly.

It's not magic ;-) but I disagree with your no-AA conclusion. If you look carefully there is some degree of AA there; it's just that with such a high contrast between the two, and a very sharp edge (the section where Fresnel really matters is at most 2 pixels wide, if that), Fresnel alone is not quite enough.

Sometimes I wonder about calling it AA, since it really isn't - it just fixes errors in lighting that require a lot of AA to try and solve.

[image: DCS screengrab]
If you take a look at the DCS screengrab from a Pimax review video, you can see at the bottom of the grab handle that the reflection vector is calculated wrong: it points upwards towards the sky, picking up a ton of bright light. This then requires a lot of AA to smooth away. Frequently, users of high-end VR headsets turn AA off to squeeze out performance, and this becomes clear.

As for my Fresnel code: although my image was over a cube map, I am not using the cube map at all for reflection, but instead using screen-space reflections with the previous frame.

If you contact me directly, I can send you the document I wrote for Epic on the matter, highlighting all the errors in Unreal and ways to fix them.

@bazhenovc (Author):

Yeah, I'd love to chat; what is the best way to reach you?

My email is in my GitHub profile, so feel free to use that if you want (I checked my spam folder today and didn't find any emails from you there).

I'm fine chatting here as well if you prefer it.

> Sometimes I wonder about calling it AA, since it really isn't - it just fixes errors in lighting that require a lot of AA to try and solve.

I think this would be a better way to describe the idea, yeah. It's not physically correct though, right (not that it needs to be, as long as it looks good)?

@bazhenovc (Author):

> As for my Fresnel code: although my image was over a cube map, I am not using the cube map at all for reflection, but instead using screen-space reflections with the previous frame.

It makes sense, thanks for the explanation!

I've got a few follow-up questions:

  • Do you reproject the previous frame?
  • How are you handling disocclusion or missing data?
  • Any issues with animated characters or procedural animation?

@Johan-Hammes:

About physical accuracy: I would argue that my fix is far more physically accurate than almost all games out there. The bright pixels on that grab handle appear because the reflection vector points into the handle itself and back out the other side, which is impossible in real life. This is usually (but not always) a result of normal vectors pointing away from the camera (due to a flat triangle replacing curved geometry and interpolating normals). My shader code fixes all of those to be physically accurate before doing any light calculations.
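
One common way to implement that kind of fix, sketched below; this bends away-facing interpolated normals back toward the camera and is an illustration of the idea, not Johan's actual shader code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Normalize(Vec3 v) {
    const float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// V points from the surface toward the camera (normalized). If the
// interpolated normal faces away from the viewer, remove the away-facing
// component (plus a tiny bias) so reflection vectors can no longer point
// "through" the surface.
Vec3 FixNormal(Vec3 n, Vec3 v) {
    const float nv = Dot(n, v);
    if (nv >= 0.0f) return n; // already camera-facing
    const float eps = 1e-4f;
    const Vec3 bent = { n.x - (nv - eps) * v.x,
                        n.y - (nv - eps) * v.y,
                        n.z - (nv - eps) * v.z };
    return Normalize(bent);
}
```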

As for SSR, I am using it only on the strong Fresnel portions; I have other reflection solutions for the rest of my scene. Personally I still favor planar reflections for water over SSR, with its occlusion problems etc.

  • No, I do not reproject, and I have never seen that cause a visual error.
  • We are only talking about the last 2-3 pixels right at the edge. By the time Fresnel makes the surface shiny enough to reflect, the angle is so shallow that the SSR reflection is usually within 10-100 pixels of the pixel we are lighting, and when the reflection is really strong that distance shrinks. It also means that occlusion is almost never a problem and can be ignored.
  • I haven't seen many issues with animations. If you look at this video (select 4K so YouTube's compression doesn't destroy it), the issues with animation are minimal in my opinion: https://youtu.be/6T-2T_R8g0c

@bazhenovc (Author):

Thanks for the info!

I'll find time to implement it eventually; it's an interesting idea.

How exactly are you fixing normals after interpolation?
