ffmpeg -i "HD Splice 1080p No Grain.mkv" -i "HD Splice 1080p No Grain.mkv" -filter_complex " | |
color=black:d=3006.57:s=3840x2160:r=24000/1001, | |
geq=lum_expr=random(1)*256:cb=128:cr=128, | |
deflate=threshold0=15, | |
dilation=threshold0=10, | |
eq=contrast=3, | |
scale=1920x1080 [n]; | |
[0] eq=saturation=0,geq=lum='0.15*(182-abs(75-lum(X,Y)))':cb=128:cr=128 [o]; | |
[n][o] blend=c0_mode=multiply,negate [a]; | |
color=c=black:d=3006.57:s=1920x1080:r=24000/1001 [b]; | |
[1][a] alphamerge [c]; | |
[b][c] overlay,ass=Subs.ass" | |
-c:a copy -c:v libx264 -tune grain -preset veryslow -crf 12 -y Output-1080p-Grain.mkv |
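For anyone dissecting the filter graph later, here is the same command restructured as a small shell script so each stage can carry a comment. The comments are a rough, best-effort reading of what each stage contributes, not the author's own notes:

#!/bin/sh
# Same command as above, split into named pieces purely so each stage can be
# annotated; the pieces are joined back into one filter graph at the end.

IN="HD Splice 1080p No Grain.mkv"

# [n] grain plate: per-pixel random luma on a 4K black canvas at 24000/1001 fps,
# shaped into coarser clumps by deflate/dilation, contrast-boosted, then scaled
# down to 1080p so the grain ends up larger and softer than single pixels.
GRAIN="color=black:d=3006.57:s=3840x2160:r=24000/1001,
  geq=lum_expr=random(1)*256:cb=128:cr=128,
  deflate=threshold0=15,
  dilation=threshold0=10,
  eq=contrast=3,
  scale=1920x1080 [n]"

# [o] luminance mask from input 0: desaturate, then remap luma so the mask
# peaks around mid-dark tones (luma near 75) and falls off toward shadows
# and highlights.
MASK="[0] eq=saturation=0,geq=lum='0.15*(182-abs(75-lum(X,Y)))':cb=128:cr=128 [o]"

# [a]: multiply the grain plate by the mask and invert, giving a mostly opaque
# alpha plane that is slightly transparent where a grain speck hits midtones.
ALPHA="[n][o] blend=c0_mode=multiply,negate [a]"

# [b] black 1080p canvas; [c] input 1 with [a] merged in as its alpha plane.
# Overlaying [c] on [b] lets the grain show as subtle darkening, and ass=
# burns in the subtitles.
COMPOSITE="color=c=black:d=3006.57:s=1920x1080:r=24000/1001 [b];
  [1][a] alphamerge [c];
  [b][c] overlay,ass=Subs.ass"

ffmpeg -i "$IN" -i "$IN" \
  -filter_complex "$GRAIN; $MASK; $ALPHA; $COMPOSITE" \
  -c:a copy -c:v libx264 -tune grain -preset veryslow -crf 12 -y Output-1080p-Grain.mkv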
I figured it out; for future reference: it was the frame rate. You used 24000/1001 for the grain, while my source was 25 fps.
Ah, right, that makes sense! Thanks for writing that out for future readers. Always make sure the framerate is the same for all inputs :-)
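If it helps anyone reading this later: one way to avoid the mismatch is to read the source's frame rate first and plug that exact value into the r= of both color sources in the filter graph. A minimal sketch (the file name is just a placeholder):

# Prints the video stream's frame rate, e.g. 24000/1001 or 25/1.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 \
  input.mkv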
Hello, I want to explore this art, but does adding grain with this method increase the video size much? I'm looking for a way to add synthetic noise without much (or any) increase in file size.
Noise is extremely difficult to encode well. As a rough approximation, video compression works by separating the signal out into different "frequencies" -- a gradual gradient in a background is low-frequency, while sharp edges and noise are high-frequency. Each band of frequencies is then encoded separately; low-frequency data requires very little bandwidth to encode reasonably well. High-frequency data, though, has a great deal of information for the same number of pixels. When you constrain a video encoder's bitrate, it is the high-frequency data that is most heavily affected. If you legitimately want noise at the pixel level throughout every frame, then you need to give the encoder lots of bits to work with, otherwise the noise will get filtered out, and will probably serve only to decrease the quality of the end result, because it may cause the boundaries between macroblocks to be less likely to match up.
I haven't experimented heavily with this. In my application, having a 17 GB file for 45 minutes of video is no big deal at all. I encourage you to try different quality levels and see what happens to the noise. My settings are probably way overkill, I just set them high enough to be absolutely sure I wouldn't run into issues with the available bits constraining the noise in any visible way.
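For example, a quick way to see the effect is to re-encode the same grainy output at two CRF values and compare how much of the grain survives (file names are placeholders, and the exact CRF values are just for illustration):

# Generous bitrate: the per-pixel grain largely survives.
ffmpeg -i Grainy.mkv -an -c:v libx264 -tune grain -preset slow -crf 14 crf14.mkv
# Starved bitrate: the encoder smooths the grain away or turns it blocky.
ffmpeg -i Grainy.mkv -an -c:v libx264 -preset slow -crf 30 crf30.mkv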
I've made a (contrived) example to demonstrate what I'm talking about. This animation switches between an original image that has noise at various levels, including per-pixel, and that same image saved and reloaded using frequency-domain compression:
You can see that the compressed version has lost the finest detail of the noise. It's still "noisy", but that noise has a resolution much larger than a pixel, and in fact ends up being distractingly blocky because the compressor is being pushed past its limits with regard to the edges of blocks of compressed pixels matching up.
This is exactly what will happen in a video file if you add a lot of high-frequency noise to it but try to keep the filesize small.
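If you want to reproduce a comparison like this yourself, one rough way is to generate a frame of per-pixel noise and push it through a frequency-domain still-image codec such as JPEG at a low quality setting (the resolution and quality values here are arbitrary):

# One frame of pure per-pixel luma noise.
ffmpeg -f lavfi -i "color=gray:s=640x360,geq=lum_expr=random(1)*256:cb=128:cr=128" \
  -frames:v 1 noise.png
# Re-save it with aggressive JPEG (DCT) compression; the fine noise collapses
# into coarser, blockier structure, much like a bitrate-starved video encode.
ffmpeg -i noise.png -q:v 20 noise.jpg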
Thanks for the detailed answer, really appreciate that!
I found an alternative art: add the noise at run time.
For example:
Using the VLC player, we could use Tools->Effects and Filters->Video Effects->Film Grain.
For browsers: programmatically add noise via the canvas API.
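Another option in the same spirit is to let ffplay add the noise at playback time with its built-in noise filter, so the file itself stays untouched (the strength and flags below are just example values):

# Temporal, uniform per-pixel noise added only on the player side.
ffplay -vf "noise=alls=12:allf=t+u" input.mkv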
(I work at a streaming firm; increasing the file size a lot is completely out of the question 😃).
One drawback of this approach is the banding effect when the video is encoded in 8-bit at a low bitrate:
The combination of the banding effect (low quality) and film noise (high quality) feels really weird (like someone who hasn't showered in a week putting on perfume). It could be mitigated by using a 10-bit video encode, but currently no browser supports that; I really hope they do in the future.
(Sharing this in case anyone is looking for the same thing as me.)
I suspect the only way to really achieve what you're looking for will be to add film grain only if the bitrate is high enough to eliminate macroblocking. But, perhaps judiciously applying a denoise filter before encoding could allow a lower bitrate to do a good job conveying smooth frames, and then that would be a suitable thing to add fake film grain to at playback time.
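As a sketch of that idea, something like ffmpeg's hqdn3d denoiser could run ahead of the encode (the strengths and CRF below are starting points to tune, not recommendations):

# Denoise spatially and temporally before encoding, so a modest bitrate can
# carry clean, smooth frames; fake grain would then be added back at playback.
ffmpeg -i input.mkv -vf "hqdn3d=4:3:6:4.5" -c:a copy -c:v libx264 -preset slow -crf 22 denoised.mkv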
I am trying to beautify a TV series with this, but in every episode, after 50 minutes and 6 seconds (of the movie's length), ffmpeg throws "EOF timestamp not reliable" and from then on the grain no longer changes between frames (i.e. it is no longer temporal). Any idea what could cause this?
OP, what is your goal?
kocoten1992, this is similar to what AV1 is trying to do. I am unsure how well it works in real-world tests.
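For anyone who wants to experiment with that, AV1's film grain synthesis can be tried through ffmpeg's SVT-AV1 wrapper; the parameter names below are from memory, so double-check them against your build:

# film-grain sets the synthesis strength (0 disables it); film-grain-denoise
# controls whether the source is denoised before the grain is modelled.
ffmpeg -i input.mkv -c:a copy -c:v libsvtav1 -preset 6 -crf 30 \
  -svtav1-params "film-grain=8:film-grain-denoise=1" output-av1.mkv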
My goal is simulated film grain that looks more like the real thing than just per-pixel noise.
Hmm, is it possible that you have a resize or crop filter that's not being applied to every input the video is sourced from?