@bazhenovc
Last active July 18, 2025 01:57
The Sane Rendering Manifesto

The goal of this manifesto is to provide easy-to-follow, reasonable rules that realtime and video game renderers can follow.

These rules highly prioritize image clarity/stability and a pleasant gameplay experience over photorealism and excessive graphics fidelity.

Keep in mind that shipping the game takes priority over everything else, and it is allowed to break the rules of this manifesto in order to ship when there are no other good options.

Do not use dynamic resolution.

Fractional upscaling makes the game look bad on most monitors, especially if the scale factor changes over time.

What is allowed:

  1. Rendering to an internal buffer at an integer scale factor, followed by a blit to native resolution with point/nearest filtering.
  2. Integer scale factor that matches the monitor resolution exactly after upscaling.
  3. The scale factor should be fixed and determined by the quality preset in the settings.

What is not allowed:

  1. Adjusting the scale factor dynamically at runtime.
  2. Fractional scale factors.
  3. Any integer scale factor that doesn't exactly match the monitor/TV resolution after upscale.
  4. Rendering opaque and translucent objects at different resolutions.

Implementation recommendations:

  1. Render at a lower resolution internally, but output at native resolution.
  2. Render to a lower-resolution render target, then do an integer upscale and run post-processing at native resolution.
  3. Use letterboxing to work around unusual resolutions (see the sketch below).
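
A minimal sketch of the integer scaling plus letterboxing math (the struct and function names here are illustrative, not from any particular engine):

```cpp
#include <algorithm>
#include <cstdio>

// Pick the largest integer scale factor that still fits the native output,
// then center the upscaled image with letterbox/pillarbox bars.
struct Letterbox {
    int scale;          // integer upscale factor (internal -> native)
    int x, y;           // top-left corner of the scaled image on screen
    int width, height;  // size of the scaled image in native pixels
};

Letterbox ComputeIntegerLetterbox(int internalW, int internalH, int nativeW, int nativeH)
{
    int scale = std::max(1, std::min(nativeW / internalW, nativeH / internalH));
    Letterbox box;
    box.scale  = scale;
    box.width  = internalW * scale;
    box.height = internalH * scale;
    box.x = (nativeW - box.width) / 2;   // pillarbox bars
    box.y = (nativeH - box.height) / 2;  // letterbox bars
    return box;
}

int main()
{
    // Example: a 640x360 internal buffer on a 1920x1080 display -> 3x scale, no bars.
    Letterbox box = ComputeIntegerLetterbox(640, 360, 1920, 1080);
    std::printf("scale=%d rect=(%d,%d %dx%d)\n", box.scale, box.x, box.y, box.width, box.height);
    // The final blit into this rectangle would use point/nearest filtering.
}
```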

Do not render at lower refresh rates.

Low refresh rates (under 60Hz) increase input latency and make the gameplay experience worse for the player.

What is allowed:

  1. On high refresh rate monitors (90Hz, 120Hz, 244Hz, etc.) it is allowed to render at 60Hz.
  2. It is always allowed to render at the highest refresh rate the hardware supports, even if it's lower than 60Hz (for example, an incorrect cable/HW configuration, or the user has explicitly configured power/battery saving settings).
  3. Offering alternative graphics presets to reach the target refresh rate.

What is not allowed:

  1. Explicitly targeting a 30Hz refresh rate during development.
  2. Using any kind of frame generation - it does not improve input latency, which is the whole point of having higher refresh rates.

Implementation recommendations:

  1. Decouple your game logic update from the rendering code (see the sketch after this list).
  2. Use GPU-driven rendering to avoid CPU bottlenecks.
  3. Try to target native monitor refresh rate and use the allowed integer scaling to match it.
  4. Use vendor-specific low-latency input libraries.
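
A minimal sketch of recommendation 1 - a fixed-timestep simulation decoupled from the render loop (the function names are placeholders):

```cpp
#include <chrono>

// Placeholder stubs standing in for the real game systems.
void UpdateGameLogic(double dtSeconds) { /* fixed-rate simulation step */ }
void RenderFrame(double interpolationAlpha) { /* runs at the monitor refresh rate */ }
bool IsRunning() { return true; }

int main()
{
    using Clock = std::chrono::steady_clock;
    const double fixedDt = 1.0 / 60.0;   // simulation rate, independent of display refresh
    double accumulator = 0.0;
    auto previous = Clock::now();

    while (IsRunning())
    {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Step the simulation at a fixed rate; rendering is free to run at
        // whatever refresh rate the display supports.
        while (accumulator >= fixedDt)
        {
            UpdateGameLogic(fixedDt);
            accumulator -= fixedDt;
        }

        RenderFrame(accumulator / fixedDt);  // interpolate between simulation states
    }
}
```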

Do not use temporal amortization.

If you cannot compute something in the duration of 1 frame then stop and rethink what you are doing.

You are making a game, make sure it looks great in motion first and foremost. Nobody cares how good your game looks on static screenshots.

In many cases, bad TAA or unstable temporally amortized effects are an accessibility issue that can cause health problems for your players.

What is allowed:

  1. Ray tracing is allowed as long as the work is not distributed across multiple frames.
  2. Any kind of lighting or volume integration is allowed as long as it can be computed or converged during 1 rendering frame.
  3. Variable rate shading is allowed as long as it does not change the shading rate based on the viewing angle and does not introduce aliasing.

What is not allowed:

  1. Reusing view-dependent computation results from previous frames.
  2. TAA, including AI-assisted TAA. It never looked good in motion; even with AI it breaks on translucent surfaces and particles.
  3. Trying to interpolate or denoise missing data in cases of disocclusion or fast camera movement.

Implementation recommendations:

  1. Prefilter your roughness textures with vMF filtering (see the sketch after this list).
  2. Use AI-based tools to generate LOD and texture mipmaps.
  3. Use AI-based tools to assist with roughness texture prefiltering: take a supersampled image as input and train the model to prefilter it so that there is less shader aliasing.
  4. Enforce consistent texel density in the art production pipeline.
  5. Enforce triangle density constraints in the art production pipeline.
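
To illustrate recommendation 1, here is a minimal sketch of one common vMF-style prefiltering step: derive extra roughness from how much the averaged normal in the matching normal-map mip has shrunk. The exact mapping to GGX roughness varies between implementations, so treat the formula as an assumption rather than the one true way:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Offline prefiltering step: widen the roughness lobe of a mip texel by the
// variance of the normals that were averaged into it. Uses the von Mises-Fisher
// sharpness estimate kappa = (3r - r^3) / (1 - r^2), where r is the length of
// the averaged (unnormalized) normal for that texel.
float PrefilterRoughness(float baseRoughness, float avgNormalLength)
{
    float r = std::clamp(avgNormalLength, 1e-4f, 1.0f - 1e-4f);
    float kappa = (3.0f * r - r * r * r) / (1.0f - r * r);
    float variance = 1.0f / kappa;                 // lobe variance implied by the shortened normal
    float alpha2 = baseRoughness * baseRoughness;  // GGX alpha^2
    return std::sqrt(std::min(1.0f, alpha2 + variance));
}

int main()
{
    // A mirror-like texel (roughness 0.05) whose averaged mip normal shrank to length 0.95
    // becomes noticeably rougher, which is what kills specular shimmer at a distance.
    std::printf("%f\n", PrefilterRoughness(0.05f, 0.95f));
}
```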
@bazhenovc (Author)

For MSAA to work you need low-poly geometry, otherwise the performance cost will bite you and it will be much harder to run at 60Hz. SMAA was, in my opinion, the best non-temporal AA solution. This is a very good related overview: https://alextardif.com/Antialiasing.html

@BattleAngelAlita

LPV -> RH
SMAA -> CMAA2
High poly -> Parallax. I really don't understand why everyone forgot about it. Yeah, 0.5-meter-deep displacement thrashes your cache, causes a lot of aliasing and doesn't fit any geometry more complex than a box. But 2-3cm doesn't have all those artifacts and still adds a huge amount of detail.

@bazhenovc (Author)

What is RH, radiance hints?

@BattleAngelAlita

Yeah. Looks better than LPV and makes slightly more physical sense.

@Ophiolith commented Mar 21, 2024

I am by no means a graphics programmer, just an occasional gamedev hobbyist, so take my thoughts for what they are, but thanks for putting your points across as you have. I've had some very similar feelings for a while now. I would also like to add:

Do not use unsharp masks or other sharpening postprocesses to disguise loss of detail

These are often added in with TAA or TAAU in order to compensate for perceptual loss of fine detail, but they often make the final resolve look even worse. At the very least, there should always be a player-facing option to disable this if it isn't desirable.

One other thing worth considering with regard to reducing dependence on TAA is to investigate modern implementations of something like alpha-testing or alpha-to-coverage for semi-transparent / masked objects like foliage or hair. Those are often the primary things that 'break' when TAA is disabled in many modern games (Starfield and Horizon: Zero Dawn are good examples). This would greatly reduce a lot of complaints about 'shimmer', at least those not connected to specular aliasing or undersampling.

@EpochWon

I'm going to just post some of my thoughts here, a bit disorganized.

The problem with any sort of post-processed AA is that as an object gets further away from the camera the space between each pixel grows larger and you get shimmer; SMAA can't resolve this as it only sees edges. TAA is able to sort of resolve it, with its frame jittering approximating what MSAA does, but the only proper way to get that small detail back is with supersampling or MSAA.

There has also been a string of papers from Square Enix about specular AA, with the latest being from 2021; it's implemented in Godot by default, but it doesn't fully clean everything up since it relies on geometric detail to work.
https://www.jp.square-enix.com/tech/library/pdf/ImprovedGeometricSpecularAA.pdf
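The core of that technique is only a few lines; roughly this sketch, assuming the commonly quoted default constants (dndx/dndy stand for the screen-space derivatives of the shading normal, i.e. ddx/ddy in a pixel shader):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Geometric specular AA in the Kaplanyan/Tokuyoshi style: widen the GGX
// roughness (alpha^2) by how fast the shading normal changes across the pixel.
// 0.25 (screen-space variance) and 0.18 (clamp) are the usual defaults.
float GeometricSpecularAA(float alpha2, Vec3 dndx, Vec3 dndy)
{
    float variance = 0.25f * (Dot(dndx, dndx) + Dot(dndy, dndy));
    float kernelRoughness = std::min(2.0f * variance, 0.18f);
    return std::clamp(alpha2 + kernelRoughness, 0.0f, 1.0f);
}
```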

For thin geometry falling apart at a distance, there is this technique from Humus https://www.humus.name/index.php?page=3D&ID=89 which never really got used in anything, as TAA started being used shortly after. It's currently implemented in Source 2 and applied to the wire entities; you can see it in heavy use in Half-Life Alyx and Counter-Strike 2. It basically just scales the radius of the wire mesh to stay within a pixel, and when the wire would be smaller than a pixel it compensates by fading it out proportionally using MSAA coverage samples.
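The gist of it is tiny; a rough sketch of the radius clamp and fade, with assumed parameter names rather than Humus's actual code:

```cpp
#include <algorithm>
#include <cmath>

// Never let the wire get thinner than roughly one pixel: clamp its radius and
// fade its coverage/alpha by the amount it was inflated, so distant wires
// dissolve smoothly instead of shimmering.
struct WireAA {
    float radius;  // radius to actually render with
    float fade;    // multiply into alpha / coverage
};

WireAA ClampWireRadius(float wireRadius, float viewDepth, float fovY, float viewportHeight)
{
    // World-space height of one pixel at this depth for vertical field of view fovY.
    float pixelSize = viewDepth * 2.0f * std::tan(fovY * 0.5f) / viewportHeight;
    float minRadius = 0.5f * pixelSize;        // keep the wire at least ~1 pixel wide
    float clamped = std::max(wireRadius, minRadius);
    return { clamped, wireRadius / clamped };  // fade instead of letting it shimmer
}
```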

I want to see work being done to try and resolve various forms of shader aliasing. You can get bad shimmer from normal-mapped surfaces when the light is at an extreme angle: you get harsh clipping between the lit and shadowed portions of the surface, made worse when the mip level shifts. I wonder if something could be done in the same vein as the specular antialiasing and offset the normal map based on the light angle.

I think Voxel Global Illumination should come back; it's only used by CryEngine now because Nvidia dropped all work on it when they released their first RTX cards. It updates extremely fast and the results look on par with raytraced GI; it even allows for reflections.

Screen Space Reflection abuse is also something that should be in your manifesto. SSR can be fine for detail reflections, but when it gets used across everything for whole-screen reflections it looks terrible and is more distracting than no reflections at all. Cyberpunk is an example of this: they use it as a pseudo GI on every surface and it shimmers and fades in and out randomly. There should always be cubemap fallbacks, and there should always be an option to disable them.

@Ophiolith

The problem with any sort of post processed AA is that as an object goes further away from the camera the space between each pixel grows larger and you get shimmer, SMAA can't resolve this as it only sees edges. TAA is able to sort of resolve this with its frame jittering approximating what MSAA does, but the only proper way to get that small detail back is with super sampling or MSAA.

Unless I'm misunderstanding what you're referring to here, isn't this exactly what pre-filtered roughness maps and authored LOD / texture mipmaps are designed to resolve? Ideally, almost no material in your scene should be undersampled with this setup, and specular is ramped down approximately in line with the likelihood of specular aliasing occurring. Apologies if I've misunderstood.

@EpochWon commented Mar 21, 2024

Unless I'm misunderstanding what you're referring to here, isn't this exactly what pre-filtered roughness maps and authored LOD / texture mipmaps are designed to resolve? Ideally, almost no material in your scene should be undersampled with this setup, and specular is ramped down approximately in line with the likelihood of specular aliasing occurring. Apologies if I've misunderstood.

No, that's just for in-object surface aliasing. What I mean is that the actual mesh itself slides into subpixels and you get edge shimmer. The best example of this is wires and cables (although I did link a solution specifically for these); the same thing applies to all geometry. It's not an issue if your render distance isn't high enough or if your meshes are large enough that they don't have edges that slide into subpixels.

Here is an example in Half-Life 2: look at the power line with MSAA off vs. 8x.
(screenshots: the power line with MSAA off and with 8x MSAA)

SMAA requires a complete edge to be able to resolve, which it doesn't have here as the geometry shifts in and out of the rendered pixel. With TAA, because of how it averages samples, if the pixel isn't covered in enough of the jittered frames then TAA erases the object entirely (which it also does with specular highlights, and it's why TAA is a bad solution for specular antialiasing, or at the very least needs something else on top of it before the TAA is applied, or else you lose surface detail).

@Ophiolith

No that's just for in object surface aliasing. What I mean is that the actual mesh itself slides into subpixels and you get edge shimmer. The best example of this is wires and cables (although I did link a solution specifically for these), the same thing applies to all geometry. It's not an issue if you render distance isn't high enough or if your meshes are large enough that they don't have edges that slide into subpixels.

Ahh right, totally understand now, thank you!

@bazhenovc (Author)

Do not use unsharp masks or other sharpening postprocesses to disguise loss of detail

These are often added in with TAA or TAAU in order to compensate for perceptual loss of fine detail, but they often make the final resolve look even worse. At the very least, there should always be a player-facing option to disable this if it isn't desirable.

I think that's already covered under the TAA rules; if you don't have TAA then the need for a sharpening post-process is reduced. I don't want to be overly specific, the rules are intentionally somewhat loose to allow for common sense interpretations. Also, sharpening (and other things like film grain, chromatic aberration etc.) can be part of the artistic vision of the game, and in that case it is specifically outside the scope of this manifesto.

The problem with any sort of post processed AA is that as an object goes further away from the camera the space between each pixel grows larger and you get shimmer, SMAA can't resolve this as it only sees edges.

Phone-wire AA is a good and extremely cheap option; I'd like to see more specialized AA tricks for different things. Generally, if you have good LODs you don't get subpixel shimmering on regular geometry.

@bazhenovc (Author)

Screen Space Reflection abuse is also something that should be in your manifesto, SSR can be fine for detail reflections, but when it gets used across everything for whole screen reflections it looks terrible and is more distracting than no reflections at all. Cyberpunk is an example of this, they use it as a pseudo GI on every surface and it shimmers and fades in and out randomly. There should always be cubemap fallbacks, and there should always be an option to disable them.

I'm a bit hesitant to just outright ban SSR and screen space stuff in general, but I agree that it's often abused too much and doesn't look good.

@bazhenovc (Author)

@ThreatInteractive

Listen. I really disagree.

That is great, challenge me and my views! Maybe after a good debate we could find better solutions.

If I didn't care about realism, I wouldn't care about TAA issues. I and others I know won't PLAY games that don't offer photorealism to large degree, and it becomes more harsh as time and hardware goes by.

When it comes to photorealism we are already very, VERY deep into diminishing returns; you can have roughly the same level of graphical fidelity and photorealism as you see in modern PS4 games (e.g. the GoW reboot, HZD etc.) without violating this manifesto (e.g. Destiny 2).

I and many THOUSANDS people do not pay for hardware at least 6x more powerful(not including the 8 years worth of architectural advancements) than the PS4 which provided pretty awesome photorealistic results.

Most players cannot afford enthusiast-level GPUs; when was the last time you checked the Steam HW survey? 15% of the PC market is still using GTX 10xx series or equivalent GPUs, with the GTX 1060 being the most popular at ~4% of the total market share.

(screenshot: Steam Hardware Survey GPU market share)

If you are making a PC game then this is roughly the HW range (it covers ~30% of the market share) you need to optimize for (don't forget to test and optimize for AMD and Intel too, not just NV), not to mention how vastly important the Steam Deck market is. If you spent "THOUSANDS" on hardware then you're part of approximately 0.8% of the market - good for you, you can run the game with 8x true supersampling or at a 240Hz refresh rate; the rest should still be able to enjoy their 1080p or 4K "at least 60Hz" experience.

Sorry, that's just how the basic economics of this type of business works. And you need to target PC and all "current-gen" consoles; they all have roughly equivalent market shares.

Also, my art direction derives from real and live cinematography. In order to get the look I want, I need a physically based world first, then art style can be expressed though lighting, color grading, UI, and cinematography.

I used the terms "excessive fidelity" and "photorealism" to criticize modern rendering; the manifesto itself explicitly doesn't touch anything related to artistic choices or art direction. Nothing from your list is banned or even mentioned in the manifesto; there's even a call for better non-temporal specular AA research to improve the quality of PBR rendering.

Perhaps I should have used different wording for that, I can see that it can be confusing.

MSAA is not worth the cost. It's a poor approach. High cost, poor value compared to alternatives.

I never mentioned MSAA in the video and I never suggested it in the first place. I even made a lengthy comment under the video explaining MSAA performance issues.

You said never use TAA, but I do believe I can change you mind on that with about a years' worth of investigation on it.
Let me explain my research on TAA.

No, show me the implementation. I don't want to discuss TAA theory without a working implementation behind it. Build a small demo that implements your ideas and compare it to the current industry standard - then we can discuss it.

I'm open to changing my opinion in response to hard facts and evidence.

@TheHans255

How would we apply this manifesto to underpowered/mobile hardware? When I think about the first two points, I think about the Nintendo Switch, which made heavy use of both dynamic resolution and 30 FPS targets (and even then isn't always successful at preventing stutter) in most of its first-party games.

The dynamic resolution thing feels like both a mix of trying to get detail where you can and abandoning it when you can't, and an effort to avoid exposing the user base to graphics settings. The latter feels like a social choice (for better or for worse), but I would hazard that the former should be solved with more careful attention to exactly which resources should be made cheaper.

As for 30 FPS, it seems like that could be a more fundamental choice depending on the expectations of the platform. 60 FPS has been possible and common since the beginning, of course, but many consoles have historically targeted 30, such as all of fifth-gen (N64, PSX, Saturn), and many mobile platforms have to render at 30 to approximate graphical parity with a console or PC port. Would we say that 30 FPS could be an acceptable choice for constrained systems like this, or is 60 FPS simply too important for user experience to accept anything less?

If neither of these things is acceptable, would we say that Switch game developers should have spent more time targeting 60 FPS without dynamic resolution for its titles, that some of the games that the Switch struggles to run simply should not have been made, or that the Switch should have been released with more powerful/expensive hardware that could more readily accommodate the games being written for it?

I think these things are definitely worth asking, because at least a few of the Switch's piracy losses were likely due to the fact that Yuzu could run Tears of the Kingdom at 4K/60 FPS, and that Yuzu was able to readily advertise that fact.

@bazhenovc (Author)

@TheHans255 apply it using common sense and interpret it based on your specific circumstances (I've added a paragraph about this at the start of the manifesto). It's not a set of laws, and at the end of the day the most important thing is shipping a game, not blindly following some arbitrary advice on the internet 😄

Would we say that 30 FPS could be an acceptable choice for constrained systems like this, or is 60 FPS simply too important for user experience to accept anything less?
would we say that Switch game developers should have spent more time targeting 60 FPS without dynamic resolution for its titles

Like I've said, the most important thing is to ship the game at all. If you're 1 month away from the release date and can't hit 60FPS, then realistically what other choices do you have? Provide an option for the player to choose 30FPS at full resolution or 60FPS with dynamic resolution and the associated loss of motion clarity, and call it a day; there will be time to do it better during your next project.

that some of the games that the Switch struggles to run simply should not have been made

Well, that's a big stretch; I'm not sure how you arrived at that conclusion.

or that the Switch should have been released with more powerful/expensive hardware that could more readily accommodate the games being written for it?

I think that developers need to have better awareness of the limits of their target hardware; you can do a lot on the Switch with good art direction. There are even many PS2 games that still look gorgeous today, especially in 4K resolution.

Switch's piracy losses were likely due to the fact that Yuzu could run Tears of the Kingdom at 4K/60 FPS, and that Yuzu was able to readily advertise that fact.

Piracy issues are out of scope for this manifesto, let's focus on the technical side of things.

@TheHans255

@TheHans255 apply it using common sense and interpret it based on your specific circumstances (I've added a paragraph about it at the start of the manifesto), it's not a set of laws and in the end of the day the most important thing is shipping a game, not blindly following some arbitrary advice on the internet 😄

That's totally fair, though IMO that undermines the weight that something bearing the title of "manifesto" should probably carry. Visual fidelity and performance usually involve tradeoffs with limited resources, after all, and it seems reasonable to say that you should at least initially plan to target 60 FPS, to use either native resolution or integer scaling, and to render all your stuff in one frame, only allowing one of those things to slip if you need to do it to hit a release date. Don't soften your thoughts where you don't have to.

Would we say that 30 FPS could be an acceptable choice for constrained systems like this, or is 60 FPS simply too important for user experience to accept anything less?
would we say that Switch game developers should have spent more time targeting 60 FPS without dynamic resolution for its titles

Like I've said, the most important thing is to ship the game at all. If you're 1 month away from the release date and can't hit 60FPS then realistically what other choices do you have? Provide an option for the player to choose 30FPS at full res or 60FPS with dynamic res and the associated loss of motion clarity and call it a day, there will be time to do it better during your next project.

I agree with that as well, though the question I was posing was more about targeting 30 FPS from the start because you "know" that the hardware can't keep up with 60, which does seem to be a bad thing to do according to this manifesto. (And also, I might add as an aside that really the best choice you have is to abstain from announcing a release date for your game at all until it has fully gone gold, though I recognize that's not always possible if the game is annualized, or attached to a franchise with other media that all has to release at the same time.)

that some of the games that the Switch struggles to run simply should not have been made

Well that's a big stretch, I'm not sure how did you get that conclusion.

I haven't - I was just posing a hypothetical (the other two being "the devs should have given their games more time in the oven" or "the Switch itself should have had more time in the oven to more readily compete with the PS4 and Xbox One"). Though that may well be true for some games - most of the Pokemon releases on Switch are poorly optimized, for instance, and fans have been calling for deannualization of the franchise for years now in order to give these titles the time they deserve (deannualization that would have led to some games not being released yet, or not being released at all, in this timeline).

or that the Switch should have been released with more powerful/expensive hardware that could more readily accommodate the games being written for it?

I think that developers need to have better awareness of the limits of their target hardware, you can do a lot on the Switch with good art direction. There even are many PS2 games that still look gorgeous today, especially in 4K resolution.

I agree, though there is the issue of competition - the Switch is able to easily compete with mobile phones and stuff like the PS Vita in terms of performance, but when connected to a TV or monitor, it has to compete with the Playstation, Xbox, or PC, where it finds itself at a disadvantage. This manifesto would suggest that devs should have focused less on what the Switch's competitors could do and focused more on making a fluid, well-rendered experience for the games they were releasing. (Either that, or that the Switch, being a mobile console that could take a permanent fixture by a TV, was a poorly conceived product and should have either stayed out of the TV space or should have made itself more powerful in order to compete properly and support 60 FPS and fixed resolution in more games.)

Switch's piracy losses were likely due to the fact that Yuzu could run Tears of the Kingdom at 4K/60 FPS, and that Yuzu was able to readily advertise that fact.

Piracy issues are out of scope for this manifesto, let's focus on the technical side of things.

Apologies. I mainly just brought that up to make a point that these sorts of decisions can lead to real, tangible financial losses that might not be worth the money being saved, but we don't have to dwell on that too long.

@bazhenovc (Author)

and it seems reasonable to say that you should at least initially plan to target 60 FPS, to use either native resolution or integer scaling, and to render all your stuff in one frame, only allowing one of those things to slip if you need to do it to hit a release date. Don't soften your thoughts where you don't have to.

Fair enough, I've updated that section again.

I agree with that as well, though the question I was posing was more about targeting 30 FPS from the start because you "know" that the hardware can't keep up with 60, which does seem to be a bad thing to do according to this manifesto.

There are Switch games that target 60 FPS from the start, and I don't think they look worse or are worse games because of that. When you're targeting 60 FPS you need a different type of art direction to make up for the lower fidelity, and I think this is largely OK and accepted by the players.

it has to compete with the Playstation, Xbox, or PC, where it finds itself at a disadvantage.

I don't think that the Switch is trying to compete with PC or other consoles; it has found its own niche and largely does not overlap with the rest of the industry. It does have to compete with the Steam Deck though, and I think it's going to be a pretty healthy competition in the future if Valve doesn't abandon it.

@TheHans255

There are Switch games that target 60 FPS from the start, I don't think they look worse or are worse games because of that. When you're targeting 60 FPS you need to do different type of art direction to make up for less fidelity and I think this is largely ok and accepted by the players.

Agreed - I was mostly referring to the games that do target 30 FPS for the sake of visuals, such as Zelda: Breath of the Wild, Zelda: Tears of the Kingdom, Mario Kart 8 Deluxe in 3- or 4-player splitscreen mode, Animal Crossing: New Horizons, and all of the mainline Pokemon games, as well as some third party ports like Apex Legends and Fortnite. Pretty much all of those have gameplay that markedly improves when accelerated to a steady 60 FPS (even Animal Crossing, which has timing elements for catching bugs/fish and harvesting minerals).

I don't think that the Switch is trying to compete with PC or other consoles, it has found its own niche and largely does not overlap with the rest of the industry.

Agreed there as well, though it ends up doing that a bit anyway by the fact that it connects to a TV at all, and thus competes for the TV user's attention with other consoles and entertainment devices that rely on that TV. Admittedly, the fact that the Switch can seamlessly transition from docked to handheld lessens that impact (since the TV can be freed up for another purpose without disturbing the game), though then again, that was hurt by the fact that the JoyCon thumbsticks wore down in a matter of months and made handheld play a lot more expensive, or more subpar, than it should have been.

@bazhenovc (Author)

@ThreatInteractive

I took a look; while I admire your passion, it seems to be very early in the brainstorming phase. I'm all about practical application and testing, so I typically dive into in-depth discussions that have some real-world implementation or prototypes to back them up.

I'm going to have to pass on further review unless you have something concrete to show. I'm fairly busy with the projects that are already in motion.

I hope you understand my perspective and I wish you all the best with the implementation. Let me know if/when you have a working prototype.

@bazhenovc (Author)

@ThreatInteractive I don't like the video for the following reasons:

  • You don't present any new information/findings; all the stuff you're talking about is relatively common knowledge and there wasn't anything new for me there.
  • There are no actionable items, just 20 minutes of non-constructive flaming.
  • I think you need to calm down a bit - the world is not ending, there isn't some kind of graphics mafia oppressing everyone, so don't be so angry.

Good luck with your game and further research, I sincerely hope that you succeed.

@krupitskas

Hi Kirill!
While I agree about TAA and temporal effects (I myself prefer MSAA / SMAA), I'm not sure I agree that a GI solution should converge over one frame. I've worked with various GI techniques - LPV propagates over frames, RTXGI accumulates irradiance per probe over frames.
I think we can still try to keep geometry as sharp as we can; however, lighting can be spatially upscaled / temporally accumulated, because we don't have a good solution for it yet, unfortunately.
Also a question: do you know if it is possible to make MSAA and the V-Buffer work together? If we render geometry and triangle indices into an intermediate buffer, I'm not sure how we can utilize MSAA there. It feels like SMAA is the only option?

@bazhenovc (Author)

@krupitskas In the visibility buffer, shading is decoupled from geometry rasterization: you can render the VB triangle ID into an MSAA target and then, during the shading pass, fetch individual subsamples, shade them as if they were regular pixels, and blend the result. This is basically supersampling and I'd say it's not going to be practical. In theory it is slightly cheaper than actual supersampling because you only rasterize the triangle ID buffer once, but the shading cost is going to be exorbitant.
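Roughly, the resolve would look something like this CPU-side sketch (hypothetical names, just to show why the shading cost scales with the sample count):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the renderer's data and shading entry point.
struct Subsample { uint32_t triangleId; };
struct Color { float r, g, b; };

Color ShadeTriangleSample(uint32_t triangleId, int x, int y) { return {0.0f, 0.0f, 0.0f}; }

// Resolve one pixel of an MSAA visibility buffer: shade every subsample as if
// it were a full pixel and average the results. The triangle-ID pass is only
// rasterized once, but shading still runs per subsample - hence the cost.
Color ResolveVisibilityPixel(const std::vector<Subsample>& samples, int x, int y)
{
    Color sum{0.0f, 0.0f, 0.0f};
    for (const Subsample& s : samples)
    {
        Color c = ShadeTriangleSample(s.triangleId, x, y);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float inv = samples.empty() ? 0.0f : 1.0f / static_cast<float>(samples.size());
    return { sum.r * inv, sum.g * inv, sum.b * inv };
}
```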

LPV doesn't have to propagate over multiple frames; you can run more than one propagation step per frame if performance allows.

@Johan-Hammes

Hi Kirill,

Love the general direction, although I have to agree with krupitskas that some GI effects likely have to run over multiple frames and accumulate. I still feel that you are mostly referring to effects that mix spatial and temporal data and leave ghosting artifacts as a result.

At the risk of self-promotion, I think that AA has a far deeper solution. Most aliasing we see today is the result of miscalculating the colors of pixels at the edge of a mesh, and all methods are only trying to solve that by filtering and blurring the error away. JHFAA, on the other hand, goes to the source of the problem and fixes up all the normals before shading, and as a result AA is almost not needed for most meshes.
https://www.johanhammes.com/earthworks/shaders

And specular aliasing is also a 'solved' problem. The main issue there is that nobody seems willing to fix their meshes. As long as you insist on passing broken meshes into your engine and hoping some programmer can magically fix the data at runtime, it becomes almost impossible to do, and definitely impossible to do well.
The key point is to realize that we tend to use roughness wrong. It is not intrinsic to the applied material, but rather the amount of spread of all the normals inside a single pixel on screen.
Once you recognize that and introduce the concept of geometric roughness (a roughness value based on mesh curvature), account for the way it changes with distance to the mesh, and likely add anisotropic data to that, it is solved.

@bazhenovc (Author)

Hi @Johan-Hammes, it's an interesting idea - I'd like to know more if you don't mind. Can you post a more technical overview of what your solution is doing?

From what I understood, you are blending the colors on the edges of the object with the reflection cubemap or SSR, is that correct?

How does it work if the background is not a cubemap, but another set of complex geometry? The blended color is not going to match exactly in that case; even if you have SSR you will not have the correct data if the object is "self-occluding" (for lack of a better term).

You actually can see this problem on your own screenshot:

That could potentially work if you use previous-frame SSR, but then there's a big can of worms with disocclusion and reprojection, plus potential issues with animated geometry.

Based on my initial understanding, I can definitely see how this can work as a specialized antialiasing method - for cases when you can clearly separate foreground and background objects, you can use it for foreground objects but not the background objects. I'm not quite getting yet how it's going to work as a general purpose solution that fits everything.

Could you please also elaborate on what exactly you mean by "fixing the meshes"? I want to be sure we're on the same page there.

@Johan-Hammes

It seems all my emails bounce, so let's try writing here directly.

It's not magic ;-) but I disagree with your no-AA conclusion. If you look carefully there is some degree of AA there; it's just that with such a high contrast between the two, and a very sharp edge (the section where Fresnel really matters is maybe 2 pixels wide, if that), Fresnel alone is not quite enough.

Sometimes I wonder about calling it AA, since it's really not - it just fixes errors in lighting that require a lot of AA to try and solve.

If you take a look at the DCS screengrab from a Pimax review video, you can see at the bottom of the grab handle that the reflection vector is calculated wrong: it points upwards towards the sky, picking up a ton of bright light. This then requires a lot of AA to try and smooth away. Frequently, with high-end VR headsets, users turn AA off to squeeze out performance and this becomes obvious.

As for my Fresnel code: although my image was over a cube map, I am not using the cube map at all for reflection, but instead using screen space reflections with the previous frame.

If you contact me directly, I can send you the document I wrote for Epic on the matter, highlighting all the errors in Unreal and ways to fix them.

@bazhenovc (Author)

Yeah, I'd love to chat - what is the best way of reaching you?

My email is in my github profile so feel free to use that if you want (I checked my spam folder today and didn't find any emails from you there).

I'm fine chatting here as well if you prefer it.

Sometimes I wonder about calling it AA since its really not, it just fixes errors in lighting that requires a lot of AA to try and solve,

I think this would be a better way to describe the idea, yeah. It is not physically correct, right (not that it needs to be, as long as it looks good)?

@bazhenovc (Author)

As for my Fresnel code. Although my image was over a cube map, I am not using the cube map at all for reflection, but instead using screen space reflections with the previous frame.

It makes sense, thanks for the explanation!

I've got a few follow-up questions:
Do you reproject the previous frame?
How are you handling disocclusion or missing data?
Any issues with animated characters or procedural animation?

@Johan-Hammes

About physical accuracy: I would argue that my fix is way more physically accurate than almost all games out there. The bright pixels on that grab handle appear because the reflection vector is pointing into the handle itself and back out the other side. This is impossible in real life. It is usually, but not always, a result of normal vectors pointing away from the camera (due to a flat triangle replacing curved geometry and interpolating normals). My shader code fixes all of those to be physically accurate before doing any light calculations.
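A simplified sketch of that kind of correction - one common way to express it, not my exact shader code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Normalize(Vec3 v) { float len = std::sqrt(Dot(v, v)); return v * (1.0f / len); }

// If reflecting the view direction around the interpolated shading normal would
// send the reflection ray back into the surface (below the geometric normal),
// push the reflection onto the surface plane and derive a corrected shading
// normal from it, before any lighting is evaluated.
Vec3 FixShadingNormal(Vec3 shadingN, Vec3 geometricN, Vec3 viewDir /* camera -> surface */)
{
    Vec3 r = viewDir - shadingN * (2.0f * Dot(viewDir, shadingN));  // reflect(viewDir, shadingN)
    float below = Dot(r, geometricN);
    if (below < 0.0f)
    {
        Vec3 fixedR = Normalize(r - geometricN * below);  // clamp reflection to the surface plane
        shadingN = Normalize(fixedR - viewDir);           // normal that produces the fixed reflection
    }
    return shadingN;
}
```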

As for SSR, I am using it only on the strong Fresnel portions. I have other reflection solutions for the rest of my scene. Personally, I still favor planar reflections for water over SSR, with its occlusion problems etc.

  • No, I do not reproject, and I have never seen it cause a visual error.
  • But we are only talking about the last 2-3 pixels right at the edge. By the time Fresnel makes it shiny enough to reflect, the angle is so tiny that the SSR reflection is usually within 10-100 pixels of the pixel we are lighting, and when the reflection is really strong that distance shrinks. It also means that occlusion is almost never a problem and can be ignored.
  • I haven't seen many issues with animation. If you look at this video (select 4K so YouTube's compression doesn't destroy it), the issues with animation are minimal in my opinion: https://youtu.be/6T-2T_R8g0c

@bazhenovc (Author)

Thanks for the info!

I'll find time to implement it eventually; it's an interesting idea.

How exactly are you fixing normals after interpolation?
