ffmpeg -i "HD Splice 1080p No Grain.mkv" -i "HD Splice 1080p No Grain.mkv" -filter_complex " | |
color=black:d=3006.57:s=3840x2160:r=24000/1001, | |
geq=lum_expr=random(1)*256:cb=128:cr=128, | |
deflate=threshold0=15, | |
dilation=threshold0=10, | |
eq=contrast=3, | |
scale=1920x1080 [n]; | |
[0] eq=saturation=0,geq=lum='0.15*(182-abs(75-lum(X,Y)))':cb=128:cr=128 [o]; | |
[n][o] blend=c0_mode=multiply,negate [a]; | |
color=c=black:d=3006.57:s=1920x1080:r=24000/1001 [b]; | |
[1][a] alphamerge [c]; | |
[b][c] overlay,ass=Subs.ass" | |
-c:a copy -c:v libx264 -tune grain -preset veryslow -crf 12 -y Output-1080p-Grain.mkv |
If you want to play with this, you can tune the amount of grain that is applied by altering the `0.15*` in the `geq` filter near the middle. This implementation always pulls down the brightness with the grain, so the grainier you make it, the darker you make it -- you may want to add another filter to push the brightness back up a bit in that case.
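One way to do that compensation (a sketch, untested -- the `0.02` here is a placeholder to tune by eye) is to append ffmpeg's `eq` filter after the final overlay:

```
[b][c] overlay,eq=brightness=0.02,ass=Subs.ass"
```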
Here's a walk-through of the computations:
- It starts with white noise:
- Then it uses the "deflate" and "dilation" filters to cause certain features to expand out to multiple pixels:
The effect is pretty subtle, but you can see that there are a few larger "blobs" of white and black in amongst the noise. This means that the features of the noise aren't just straight-up single pixels any more.
- Then, that image gets halved in resolution, because it was being rendered at twice the resolution of the target video.
The highest-resolution detail is now softened, and the clumps of pixels are reduced to 1-2 pixels in size. So, this is the noise plane.
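If you want to look at the noise plane by itself, something along these lines (untested; the output filename is a placeholder) should dump a single frame of it:

```
ffmpeg -filter_complex "
  color=black:d=1:s=3840x2160:r=24000/1001,
  geq=lum_expr=random(1)*256:cb=128:cr=128,
  deflate=threshold0=15,
  dilation=threshold0=10,
  eq=contrast=3,
  scale=1920x1080" -frames:v 1 -y noise-plane.png
```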
Then, I take the source video and do some processing on it.
- Desaturate:
- Filter luminance so that the closer an input pixel is to luminance level 75 (arrived at experimentally), the brighter the output pixel is; the further the input pixel is from 75 in either direction, the darker the output pixel. This creates "bands" of brightness where the luminance level is close to 75.
- This is then scaled down, and this is where the level of noise is "tuned" (a command to preview this mask on its own follows this walkthrough). This band selection means that we will be adding noise specifically in the areas of the frame where it will be most noticed. Not adding noise in other areas leaves more bits to encode the noise.
- This scaled mask is then applied to the previously-computed noise. In this screenshot, I've removed the tuning so that the noise is easily visible:
The areas not selected by the band filter are greatly scaled down and are essentially black; the noise variation fades to nothing.
Here's what it looks like with a scaling factor of 0.32 -- pretty subtle:
- I then invert this image, so that the parts with no noise are solid white, and then areas with noise pull down slightly from the white:
- Finally, I pull another copy of the same source video, apply this computed image to it as an alpha channel and overlay it on black, so that the film grain dots, which are slightly less white, become slightly darker pixels.
The effect is pretty subtle, hard to see in a still like that when it's not moving, but if you tune the noise way up, you can get frames like this:
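To eyeball the band mask from the tuning step by itself, a similar sketch (untested; the output filename is a placeholder) pulls just that part of the graph out:

```
ffmpeg -i "HD Splice 1080p No Grain.mkv" -vf "
  eq=saturation=0,
  geq=lum='0.15*(182-abs(75-lum(X,Y)))':cb=128:cr=128" -frames:v 1 -y band-mask.png
```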
Hmm, is it possible that you have a resize or crop filter that's not being applied to every input the video is sourced from?
I figured it out -- for future reference: it was the framerate. You used 24000/1001 for the grain, while my source was 25 fps.
Ah, right, that makes sense! Thanks for writing that out for future readers. Always make sure the framerate is the same for all inputs :-)
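For a 25 fps source like that, both synthetic color inputs need their `r=` changed to match the source. The two color lines from the command above would become:

```
color=black:d=3006.57:s=3840x2160:r=25,
color=c=black:d=3006.57:s=1920x1080:r=25 [b];
```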
Hello, I want to explore this art, but does adding grain with this method increase the video size much? I'm looking for a way to add synthetic noise without much (or any at all) increase in file size.
Noise is extremely difficult to encode well. As a rough approximation, video compression works by separating the signal out into different "frequencies" -- a gradual gradient in a background is low-frequency, while sharp edges and noise are high-frequency. Each band of frequencies is then encoded separately; low-frequency data requires very little bandwidth to encode reasonably well. High-frequency data, though, has a great deal of information for the same number of pixels. When you constrain a video encoder's bitrate, it is the high-frequency data that is most heavily affected. If you legitimately want noise at the pixel level throughout every frame, then you need to give the encoder lots of bits to work with, otherwise the noise will get filtered out, and will probably serve only to decrease the quality of the end result, because it may cause the boundaries between macroblocks to be less likely to match up.
I haven't experimented heavily with this. In my application, having a 17 GB file for 45 minutes of video is no big deal at all. I encourage you to try different quality levels and see what happens to the noise. My settings are probably way overkill; I just set it high enough to be absolutely sure I wouldn't run into issues with the available bits constraining the noise in any visible way.
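For example (untested; filenames are placeholders), re-encoding the same grained output at two quality levels makes the trade-off easy to see side by side:

```
# generous bits: the per-pixel grain should survive largely intact
ffmpeg -i Output-1080p-Grain.mkv -c:v libx264 -tune grain -preset slow -crf 14 -c:a copy -y crf14.mkv
# starved bits: expect the grain to smear into larger, blockier clumps
ffmpeg -i Output-1080p-Grain.mkv -c:v libx264 -preset slow -crf 28 -c:a copy -y crf28.mkv
```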
I've made a (contrived) example to demonstrate what I'm talking about. This animation switches between an original image that has noise at various levels, including per-pixel, and that same image saved and reloaded using frequency-domain compression:
You can see that the compressed version has lost the finest detail of the noise. It's still "noisy", but that noise has a resolution much larger than a pixel, and in fact ends up being distractingly blocky because the compressor is being pushed past its limits with regard to the edges of blocks of compressed pixels matching up.
This is exactly what will happen in a video file if you add a lot of high-frequency noise to it but try to keep the filesize small.
Thanks for the detailed answer, really appreciate that!
I found an alternative approach: add the noise at run time.
For example:
- using VLC: Tools -> Effects and Filters -> Video Effects -> Film Grain
- in browsers: programmatically add noise via the Canvas API

(I'm working at a streaming firm -- increasing the file size a lot is very much out of the question 😃)
One drawback of this approach is the banding effect when the video is encoded in 8-bit at a low bitrate:
The combination of banding (low quality) and film noise (high quality) feels really weird (like someone who hasn't showered in a week putting on fragrance). It could be mitigated by encoding in 10-bit, but currently no browser supports that; I really hope they do in the future.
(Sharing this in case anyone is looking for the same thing as me.)
I suspect the only way to really achieve what you're looking for will be to add film grain only if the bitrate is high enough to eliminate macroblocking. But, perhaps judiciously applying a denoise filter before encoding could allow a lower bitrate to do a good job conveying smooth frames, and then that would be a suitable thing to add fake film grain to at playback time.
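A sketch of that denoise-then-encode idea, assuming ffmpeg's hqdn3d denoiser (the strengths and filenames here are placeholders to tune):

```
# denoise first so a modest bitrate can represent the smooth frames cleanly;
# fake grain would then be layered back on at playback time
ffmpeg -i source.mkv -vf hqdn3d=4:3:6:4.5 -c:v libx264 -preset slow -crf 23 -c:a copy -y smooth.mkv
```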
I am trying to beautify a TV series with this. But in every episode, after 50 mins and 6 secs (movie length), ffmpeg throws "EOF timestamp not reliable" and from then on the grain no longer changes between frames (it's no longer temporal). Any idea what could cause this?
OP, what is your goal?
kocoten1992, this is similar to what AV1 is trying to do. I am unsure how well it works in real-world tests.
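(For reference: AV1's film grain synthesis stores a grain model in the bitstream and re-synthesizes the grain at decode time, so it costs very few bits. If your ffmpeg is built with libsvtav1 and supports passing encoder parameters through, something like this untested sketch should exercise it -- the grain level `8` is an arbitrary placeholder:)

```
ffmpeg -i source.mkv -c:v libsvtav1 -svtav1-params film-grain=8 -c:a copy -y av1-grain.mkv
```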
My goal is simulated film grain that looks more like the real thing than just per-pixel noise.
Update: took out `-aq-strength 1.9`, because I did a comparison and couldn't tell the difference at CRF 12 between `-tune grain`'s `-aq-strength 0.5` and `-aq-strength 1.9`. I've just decided to go with what `-tune grain` suggests.