shizukachan/6374ea615d0925ccd59c3e5cc8595016
Last active August 6, 2024 02:14
C027 1080p-as-1080i capture "fix" script
#CaptureFix v3
LoadPlugin("R:\ffms2-libav.dll")
LoadPlugin("R:\interp\svpflow1.dll")
LoadPlugin("R:\interp\svpflow2.dll")
LoadPlugin("R:\interp\decomb.dll")
LoadPlugin("R:\interp\TDeint.dll")
Import("R:\interp\InterFrame2.avsi")
#If you're capturing YV12, you're doing it wrong: use 4:2:2 capture. The C027 captures TFF.
mew = ffvideosource("Nights of Azure 2 G.mp4")
meow = mew.AssumeFPS(30000,1001).ConvertToYV16().AssumeTFF().SeparateFields()
#SHIT FRAME HANDLING BEGIN
#The C027 has a horrible habit of failing to capture some frames in 1080i (and even 720p) mode: it drops a field
#and writes zeroes to the 0th line, which shows up as dark green in YUV. Everything else in the field is left over
#from a previous buffer, so it causes visible judder. Eeeew. I choose to motion interpolate the missing field.
#Usually this green line is between 320 and 448 pixels long; I've never seen one shorter than 256, so we test the
#last 256 pixels of the field's top line for YUV(0,0,0). If you're using a lossy capture, set the lossiness threshold below.
Even = meow.SelectEvery(2,0)
Odd = meow.SelectEvery(2,1)
clipped = Even.Crop(1664,0,256,1).ConvertToYV16()
Reference = BlankClip(Even,pixel_type="YV16",color_yuv=0,width=256,height=1)
Bad = BlankClip(Even,pixel_type="YV16",color=$ff0000) #red clip, handy for visually flagging detected frames when debugging
EvenEven = Even.SelectEvery(2,0)
EvenOdd = Even.SelectEvery(2,1)
ClipEE = clipped.SelectEvery(2,0)
ClipEO = clipped.SelectEvery(2,1)
#Hack to obtain slightly better chroma resolution for InterFrame (which only supports YV12), so we don't completely lose
#our chroma resolution: point-upscale the 540-line field to frame height before the YV12 round trip, interpolate, then point-downscale back.
#InterpEE = EvenEven.ConvertToYV12(interlaced=False).Interframe(GPU=false,Cores=1,FrameDouble=true).ConvertToYV16(interlaced=False).SelectEvery(2,1)
#InterpEO = BlankClip(EvenOdd,length=1)+EvenOdd.ConvertToYV12(interlaced=False).Interframe(GPU=false,Cores=1,FrameDouble=true).ConvertToYV16(interlaced=False).SelectEvery(2,1)
InterpEE = EvenEven.PointResize(1920,1080).ConvertToYV12(interlaced=False).Interframe(GPU=false,Cores=1,FrameDouble=true).ConvertToYV16(interlaced=False).PointResize(1920,540).SelectEvery(2,1)
InterpEO = BlankClip(EvenOdd,length=1)+EvenOdd.PointResize(1920,1080).ConvertToYV12(interlaced=False).Interframe(GPU=false,Cores=1,FrameDouble=true).ConvertToYV16(interlaced=False).PointResize(1920,540).SelectEvery(2,1)
#Lossiness threshold for lossy captures: a sane value is 12 at x264 4:2:2 crf=17.
#For lossless captures, the operator "equals" with threshold "0" works.
EvenEvenFixed = ConditionalFilter(ClipEE,InterpEO,EvenEven, "LumaDifference(ClipEE,Reference)+ChromaUDifference(ClipEE,Reference)+ChromaVDifference(ClipEE,Reference)","lessthan","12")
EvenOddFixed = ConditionalFilter(ClipEO,InterpEE,EvenOdd, "LumaDifference(ClipEO,Reference)+ChromaUDifference(ClipEO,Reference)+ChromaVDifference(ClipEO,Reference)","lessthan","12")
EvenFixed = Interleave(EvenEvenFixed,EvenOddFixed)
T=Interleave(EvenFixed,Odd).Weave().ConvertToYUY2()
#SHIT FRAME HANDLING END
#This source should now be (nearly) equal to a proper interlaced capture; so much work to fix the C027's bugs.
#If you're actually working with a 60i source, this is where you'd split the script off and process the video your way.
#Everything after this point converts 30p-as-60i to progressive 30p.
#DEINTERLACE INTO PROGRESSIVE (for actual interlaced frames!)
#We can't use FPSDivisor=2 because it would just return frames interpolated from even fields, i.e. the very
#SHIT FRAMES we fixed via motion interpolation above. Why bother, when we can take the odd frames (fields)
#from QTGMC and interpolate from those instead?
Q=T.QTGMC(preset="Medium", ediThreads=8).SelectEvery(2,1)
#'DEINTERLACE' INTO PROGRESSIVE (for actual progressive frames!)
#Sometimes Telecide gets field matching wrong, so we want a deinterlacer after it to catch its mistakes.
#U=T.Telecide(show=True, post=1, vthresh=64) #for debugging
U=T.Telecide(show=False, post=0, vthresh=64)
#CATCH COMBED FRAMES
#Telecide's field matching is great, but it sometimes miscategorizes combed frames. We'll ignore Telecide's hints and use
#TDeint to detect and deinterlace combed frames instead.
#For quality, we can use QTGMC to deinterlace; for speed, remove edeint=Q to use TDeint's internal deinterlacer instead.
V=U.TDeint(mode=0, clip2=T, hints=False, full=false, map=0,
\ field=0, edeint=Q, mthreshL=3,
\ cthresh=7, blockx=32, blocky=32, MI=224)
#Line 0: TDeint mode of operation
# Not much to say here; set map=1 or map=2 for debugging. clip2=T is required because of the field matching.
#Line 1: Deinterlacing configuration
# Ensure the bottom field is kept and the top field is interpolated. Set mthreshL low because QTGMC is quite good,
# and combing artifacts are worse than over-deinterlacing. This is particularly true for the few-interlaced-frames case.
#Line 2: Combed frame detection
# Residual combing is usually encode lossiness, SHIT FRAMES, or Telecide failures. Raise cthresh to 7 so encode
# lossiness doesn't trigger the combed-frame detector; deinterlacing only improves results in the last two cases.
return V.ConvertToYV12()
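The whole SHIT FRAME detection hinges on one arithmetic test: the summed luma and chroma differences between the cropped 256-pixel strip of the top line and an all-black YUV(0,0,0) reference, compared against the lossiness threshold. A rough Python model of that decision follows; the helper names and pure-Python pixel lists are illustrative, and AviSynth's LumaDifference/ChromaUDifference/ChromaVDifference are modeled here as mean absolute per-pixel plane differences.

```python
def mean_abs_diff(plane, ref=0):
    # Stand-in for AviSynth's plane-difference functions: mean absolute
    # per-pixel difference against the (all-zero) reference plane.
    return sum(abs(p - ref) for p in plane) / len(plane)

def is_dropped_field(y, u, v, threshold=12):
    # The script crops the last 256 pixels of the field's top line and
    # compares them to a black YUV(0,0,0) BlankClip; a summed difference
    # below the threshold marks the frame for motion-interpolated repair.
    total = mean_abs_diff(y) + mean_abs_diff(u) + mean_abs_diff(v)
    return total < threshold

# A dropped field: the C027 writes zeroes, which reads as YUV(0,0,0).
print(is_dropped_field([0] * 256, [0] * 128, [0] * 128))       # True

# A healthy field: real picture data sits far above the threshold.
print(is_dropped_field([120] * 256, [128] * 128, [128] * 128)) # False
```

Note how the threshold of 12 leaves headroom for lossy captures: mild encode noise on a genuinely zeroed line still sums well under 12, while real picture content lands far above it.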
Updated to use QTGMC instead. You can no longer use this script for real-time capture fixing :(
Ironically, using QTGMC output means we no longer need to worry about SHIT FRAMES at all, assuming they only affect top fields: instead of temporally interpolating with motion interpolation, we spatially interpolate from the presumably good bottom fields of the SHIT FRAMES, using QTGMC. This usually produces a sufficiently good result, much better than the previous method.
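The temporal-vs-spatial distinction above can be sketched in a few lines of Python. Naive line doubling stands in for QTGMC's far smarter motion-adaptive, edge-directed interpolation, and the nested lists are hypothetical stand-ins for field data; the point is only that the zeroed top field never contributes to the output frame.

```python
def spatial_fix(bottom_field):
    # Rebuild a full TFF frame from the good bottom field alone: each
    # bottom-field line supplies both itself and an interpolated
    # stand-in for the lost top-field line above it. QTGMC does this
    # much more cleverly, but the principle is the same.
    frame = []
    for line in bottom_field:
        frame.append(line[:])  # interpolated replacement for the lost top line
        frame.append(line[:])  # the real bottom-field line
    return frame

bad_top = None                # dropped by the C027; ignored entirely
bottom = [[10, 20], [30, 40]] # presumably good bottom field
print(spatial_fix(bottom))    # [[10, 20], [10, 20], [30, 40], [30, 40]]
```

The earlier method instead interpolated the missing frame temporally from its neighbors, which fails when motion estimation does; the spatial route only needs the one good field that is already there.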