@rossy
Created December 21, 2014 00:53
mpv ANGLE/GLES3 output
Playing: [FFF] Chuunibyou demo Koi ga Shitai! - 07 [BD][720p-AAC][7F5F5B13] (1).mkv
[stream] Video (+) --vid=1 '10bit H.264 - 720p' (h264)
[stream] Audio (+) --aid=1 --alang=jpn (*) '2.0 AAC' (aac)
[stream] Subs (+) --sid=1 --slang=eng (*) 'URW' (ass)
[stream] Subs --sid=2 --slang=eng 'gg' (ass)
[vo/opengl-hq] Setting option 'fbo-format' = 'rgba' (flags = 0)
[vo/opengl-hq] Setting option 'backend' = 'angle' (flags = 0)
[vo/opengl-hq] Setting option 'swapinterval' = '1' (flags = 0)
[vo/opengl-hq] Detected GLES 3.0.
[vo/opengl-hq] GL_VENDOR='Google Inc.'
[vo/opengl-hq] GL_RENDERER='ANGLE (NVIDIA GeForce GTX 670 Direct3D11 vs_5_0 ps_5_0)'
[vo/opengl-hq] GL_VERSION='OpenGL ES 3.0 (ANGLE 2.1.2609bf4ca449)'
[vo/opengl-hq] GL_SHADING_LANGUAGE_VERSION='OpenGL ES GLSL ES 3.00 (ANGLE 2.1.2609bf4ca449)'
[vo/opengl-hq] Combined OpenGL extensions string:
[vo/opengl-hq] GL_OES_element_index_uint GL_OES_packed_depth_stencil GL_OES_get_program_binary GL_OES_rgb8_rgba8 GL_EXT_texture_format_BGRA8888 GL_EXT_read_format_bgra GL_NV_pixel_buffer_object GL_OES_mapbuffer GL_EXT_map_buffer_range GL_OES_texture_half_float GL_OES_texture_half_float_linear GL_OES_texture_float GL_OES_texture_float_linear GL_EXT_texture_rg GL_EXT_texture_compression_dxt1 GL_ANGLE_texture_compression_dxt3 GL_ANGLE_texture_compression_dxt5 GL_EXT_sRGB GL_ANGLE_depth_texture GL_EXT_texture_storage GL_OES_texture_npot GL_EXT_draw_buffers GL_EXT_texture_filter_anisotropic GL_EXT_occlusion_query_boolean GL_NV_fence GL_EXT_robustness GL_EXT_blend_minmax GL_ANGLE_framebuffer_blit GL_ANGLE_framebuffer_multisample GL_ANGLE_instanced_arrays GL_ANGLE_pack_reverse_row_order GL_OES_standard_derivatives GL_EXT_shader_texture_lod GL_EXT_frag_depth GL_ANGLE_texture_usage GL_ANGLE_translated_shader_source GL_EXT_color_buffer_float
[vo/opengl-hq] Detected OpenGL features:
[vo/opengl-hq] - OpenGL 2.1+ (or subset)
[vo/opengl-hq] - Framebuffers
[vo/opengl-hq] - VAOs
[vo/opengl-hq] - sRGB textures
[vo/opengl-hq] - sRGB framebuffers
[vo/opengl-hq] - Float textures
[vo/opengl-hq] - RG textures
[vo/opengl-hq] Testing user-set FBO format (0x1908)
[vo/opengl-hq] Create FBO: 16x16
[vo/opengl-hq] Display depth: R=0, G=0, B=0
[vo/opengl-hq] Testing user-set FBO format (0x1908)
[vo/opengl-hq] Create FBO: 16x16
[vo/opengl-hq] Reinit rendering.
[vo/opengl-hq] Assuming 1000.000000 FPS for framedrop.
Trying to use hardware decoding.
AO: [wasapi] 96000Hz stereo 2ch float
[ffmpeg/video] h264: decode_slice_header error
[ffmpeg/video] h264: no frame!
Error while decoding frame!
Error using hardware decoding, falling back to software decoding.
Using conversion filter.
VO: [opengl-hq] 1280x720 yuv420p
[vo/opengl-hq] screen size: 2560x1440
[vo/opengl-hq/win32] reset window bounds: 632:329:1296:759
[vo/opengl-hq/win32] move window: 640:360
[vo/opengl-hq/win32] resize window: 1280:720
[vo/opengl-hq] Resize: 1280x720
[vo/opengl-hq] aspect(0) fitin: 1280x720 monitor_par: 1.00
[vo/opengl-hq] aspect(1) wh: 1280x720 (org: 1280x720)
[vo/opengl-hq] aspect(2) wh: 1280x720 (org: 1280x720)
[vo/opengl-hq] Window size: 1280x720
[vo/opengl-hq] Video source: 1280x720 (1280x720)
[vo/opengl-hq] Video display: (0, 0) 1280x720 -> (0, 0) 1280x720
[vo/opengl-hq] Video scale: 1.000000/1.000000
[vo/opengl-hq] OSD borders: l=0 t=0 r=0 b=0
[vo/opengl-hq] Video borders: l=0 t=0 r=0 b=0
[vo/opengl-hq] Reinit rendering.
[vo/opengl-hq] Testing user-set FBO format (0x1908)
[vo/opengl-hq] Create FBO: 16x16
[vo/opengl-hq] Texture for plane 0: 1280x720
[vo/opengl-hq] Texture for plane 1: 640x360
[vo/opengl-hq] Texture for plane 2: 640x360
[vo/opengl-hq] Reinit rendering.
[vo/opengl-hq] Dither to 8.
[vo/opengl-hq] 5: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] compiling shader program 'frag_osd_libass', header:
[vo/opengl-hq] [ 1] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] vertex shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63]
[vo/opengl-hq] [ 64] #if __VERSION__ < 130
[vo/opengl-hq] [ 65] # undef in
[vo/opengl-hq] [ 66] # define in attribute
[vo/opengl-hq] [ 67] # define out varying
[vo/opengl-hq] [ 68] #endif
[vo/opengl-hq] [ 69]
[vo/opengl-hq] [ 70] uniform mat3 transform;
[vo/opengl-hq] [ 71] uniform vec3 translation;
[vo/opengl-hq] [ 72] #if HAVE_3DTEX
[vo/opengl-hq] [ 73] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 74] #endif
[vo/opengl-hq] [ 75] uniform mat3 cms_matrix; // transformation from file's gamut to bt.2020
[vo/opengl-hq] [ 76]
[vo/opengl-hq] [ 77] in vec2 vertex_position;
[vo/opengl-hq] [ 78] in vec4 vertex_color;
[vo/opengl-hq] [ 79] out vec4 color;
[vo/opengl-hq] [ 80] in vec2 vertex_texcoord;
[vo/opengl-hq] [ 81] out vec2 texcoord;
[vo/opengl-hq] [ 82]
[vo/opengl-hq] [ 83] void main() {
[vo/opengl-hq] [ 84] vec3 position = vec3(vertex_position, 1) + translation;
[vo/opengl-hq] [ 85] #ifndef FIXED_SCALE
[vo/opengl-hq] [ 86] position = transform * position;
[vo/opengl-hq] [ 87] #endif
[vo/opengl-hq] [ 88] gl_Position = vec4(position, 1);
[vo/opengl-hq] [ 89] color = vertex_color;
[vo/opengl-hq] [ 90]
[vo/opengl-hq] [ 91] // Although we are not scaling in linear light, both 3DLUT and SRGB still
[vo/opengl-hq] [ 92] // operate on linear light inputs so we have to convert to it before
[vo/opengl-hq] [ 93] // either step can be applied.
[vo/opengl-hq] [ 94] #ifdef USE_OSD_LINEAR_CONV_APPROX
[vo/opengl-hq] [ 95] color.rgb = pow(color.rgb, vec3(1.95));
[vo/opengl-hq] [ 96] #endif
[vo/opengl-hq] [ 97] #ifdef USE_OSD_LINEAR_CONV_BT2020
[vo/opengl-hq] [ 98] color.rgb = bt2020_expand(color.rgb);
[vo/opengl-hq] [ 99] #endif
[vo/opengl-hq] [100] #ifdef USE_OSD_LINEAR_CONV_SRGB
[vo/opengl-hq] [101] color.rgb = srgb_expand(color.rgb);
[vo/opengl-hq] [102] #endif
[vo/opengl-hq] [103] #ifdef USE_OSD_CMS_MATRIX
[vo/opengl-hq] [104] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [105] // and to BT.2020 for 3DLUT). Normal clamping here as perceptually
[vo/opengl-hq] [106] // accurate colorimetry is probably not worth the performance trade-off
[vo/opengl-hq] [107] // here.
[vo/opengl-hq] [108] color.rgb = clamp(cms_matrix * color.rgb, 0.0, 1.0);
[vo/opengl-hq] [109] #endif
[vo/opengl-hq] [110] #ifdef USE_OSD_3DLUT
[vo/opengl-hq] [111] color.rgb = pow(color.rgb, vec3(1.0/2.4)); // linear -> 2.4 3DLUT space
[vo/opengl-hq] [112] color = vec4(texture3D(lut_3d, color.rgb).rgb, color.a);
[vo/opengl-hq] [113] #endif
[vo/opengl-hq] [114] #ifdef USE_OSD_SRGB
[vo/opengl-hq] [115] color.rgb = srgb_compand(color.rgb);
[vo/opengl-hq] [116] #endif
[vo/opengl-hq] [117]
[vo/opengl-hq] [118] texcoord = vertex_texcoord;
[vo/opengl-hq] [119] }
[vo/opengl-hq] [120]
[vo/opengl-hq] vertex shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:77: 'attribute' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] fragment shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] uniform sampler2D texture0;
[vo/opengl-hq] [ 64]
[vo/opengl-hq] [ 65] in vec2 texcoord;
[vo/opengl-hq] [ 66] in vec4 color;
[vo/opengl-hq] [ 67] DECLARE_FRAGPARMS
[vo/opengl-hq] [ 68]
[vo/opengl-hq] [ 69] void main() {
[vo/opengl-hq] [ 70] out_color = vec4(color.rgb, color.a * texture(texture0, texcoord).r);
[vo/opengl-hq] [ 71] }
[vo/opengl-hq] [ 72]
[vo/opengl-hq] fragment shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:65: 'varying' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] shader link log (status=0):
[vo/opengl-hq] compiling shader program 'frag_osd_rgba', header:
[vo/opengl-hq] [ 1] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] vertex shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63]
[vo/opengl-hq] [ 64] #if __VERSION__ < 130
[vo/opengl-hq] [ 65] # undef in
[vo/opengl-hq] [ 66] # define in attribute
[vo/opengl-hq] [ 67] # define out varying
[vo/opengl-hq] [ 68] #endif
[vo/opengl-hq] [ 69]
[vo/opengl-hq] [ 70] uniform mat3 transform;
[vo/opengl-hq] [ 71] uniform vec3 translation;
[vo/opengl-hq] [ 72] #if HAVE_3DTEX
[vo/opengl-hq] [ 73] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 74] #endif
[vo/opengl-hq] [ 75] uniform mat3 cms_matrix; // transformation from file's gamut to bt.2020
[vo/opengl-hq] [ 76]
[vo/opengl-hq] [ 77] in vec2 vertex_position;
[vo/opengl-hq] [ 78] in vec4 vertex_color;
[vo/opengl-hq] [ 79] out vec4 color;
[vo/opengl-hq] [ 80] in vec2 vertex_texcoord;
[vo/opengl-hq] [ 81] out vec2 texcoord;
[vo/opengl-hq] [ 82]
[vo/opengl-hq] [ 83] void main() {
[vo/opengl-hq] [ 84] vec3 position = vec3(vertex_position, 1) + translation;
[vo/opengl-hq] [ 85] #ifndef FIXED_SCALE
[vo/opengl-hq] [ 86] position = transform * position;
[vo/opengl-hq] [ 87] #endif
[vo/opengl-hq] [ 88] gl_Position = vec4(position, 1);
[vo/opengl-hq] [ 89] color = vertex_color;
[vo/opengl-hq] [ 90]
[vo/opengl-hq] [ 91] // Although we are not scaling in linear light, both 3DLUT and SRGB still
[vo/opengl-hq] [ 92] // operate on linear light inputs so we have to convert to it before
[vo/opengl-hq] [ 93] // either step can be applied.
[vo/opengl-hq] [ 94] #ifdef USE_OSD_LINEAR_CONV_APPROX
[vo/opengl-hq] [ 95] color.rgb = pow(color.rgb, vec3(1.95));
[vo/opengl-hq] [ 96] #endif
[vo/opengl-hq] [ 97] #ifdef USE_OSD_LINEAR_CONV_BT2020
[vo/opengl-hq] [ 98] color.rgb = bt2020_expand(color.rgb);
[vo/opengl-hq] [ 99] #endif
[vo/opengl-hq] [100] #ifdef USE_OSD_LINEAR_CONV_SRGB
[vo/opengl-hq] [101] color.rgb = srgb_expand(color.rgb);
[vo/opengl-hq] [102] #endif
[vo/opengl-hq] [103] #ifdef USE_OSD_CMS_MATRIX
[vo/opengl-hq] [104] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [105] // and to BT.2020 for 3DLUT). Normal clamping here as perceptually
[vo/opengl-hq] [106] // accurate colorimetry is probably not worth the performance trade-off
[vo/opengl-hq] [107] // here.
[vo/opengl-hq] [108] color.rgb = clamp(cms_matrix * color.rgb, 0.0, 1.0);
[vo/opengl-hq] [109] #endif
[vo/opengl-hq] [110] #ifdef USE_OSD_3DLUT
[vo/opengl-hq] [111] color.rgb = pow(color.rgb, vec3(1.0/2.4)); // linear -> 2.4 3DLUT space
[vo/opengl-hq] [112] color = vec4(texture3D(lut_3d, color.rgb).rgb, color.a);
[vo/opengl-hq] [113] #endif
[vo/opengl-hq] [114] #ifdef USE_OSD_SRGB
[vo/opengl-hq] [115] color.rgb = srgb_compand(color.rgb);
[vo/opengl-hq] [116] #endif
[vo/opengl-hq] [117]
[vo/opengl-hq] [118] texcoord = vertex_texcoord;
[vo/opengl-hq] [119] }
[vo/opengl-hq] [120]
[vo/opengl-hq] vertex shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:77: 'attribute' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] fragment shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] uniform sampler2D texture0;
[vo/opengl-hq] [ 64]
[vo/opengl-hq] [ 65] in vec2 texcoord;
[vo/opengl-hq] [ 66] DECLARE_FRAGPARMS
[vo/opengl-hq] [ 67]
[vo/opengl-hq] [ 68] void main() {
[vo/opengl-hq] [ 69] out_color = texture(texture0, texcoord);
[vo/opengl-hq] [ 70] }
[vo/opengl-hq] [ 71]
[vo/opengl-hq] fragment shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:65: 'varying' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] shader link log (status=0):
[vo/opengl-hq] compiling shader program 'indirect', header:
[vo/opengl-hq] [ 1] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 2] #define USE_CONV CONV_PLANAR
[vo/opengl-hq] [ 3] #define USE_COLORMATRIX 1
[vo/opengl-hq] [ 4] #define SAMPLE_C(p0, p1, p2) sample_bilinear(p0, p1, p2, filter_param1_c)
[vo/opengl-hq] [ 5] #define SAMPLE_L SAMPLE_BILINEAR
[vo/opengl-hq] [ 6] #define FIXED_SCALE 1
[vo/opengl-hq] vertex shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define USE_CONV CONV_PLANAR
[vo/opengl-hq] [ 64] #define USE_COLORMATRIX 1
[vo/opengl-hq] [ 65] #define SAMPLE_C(p0, p1, p2) sample_bilinear(p0, p1, p2, filter_param1_c)
[vo/opengl-hq] [ 66] #define SAMPLE_L SAMPLE_BILINEAR
[vo/opengl-hq] [ 67] #define FIXED_SCALE 1
[vo/opengl-hq] [ 68]
[vo/opengl-hq] [ 69] #if __VERSION__ < 130
[vo/opengl-hq] [ 70] # undef in
[vo/opengl-hq] [ 71] # define in attribute
[vo/opengl-hq] [ 72] # define out varying
[vo/opengl-hq] [ 73] #endif
[vo/opengl-hq] [ 74]
[vo/opengl-hq] [ 75] uniform mat3 transform;
[vo/opengl-hq] [ 76] uniform vec3 translation;
[vo/opengl-hq] [ 77] #if HAVE_3DTEX
[vo/opengl-hq] [ 78] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 79] #endif
[vo/opengl-hq] [ 80] uniform mat3 cms_matrix; // transformation from file's gamut to bt.2020
[vo/opengl-hq] [ 81]
[vo/opengl-hq] [ 82] in vec2 vertex_position;
[vo/opengl-hq] [ 83] in vec4 vertex_color;
[vo/opengl-hq] [ 84] out vec4 color;
[vo/opengl-hq] [ 85] in vec2 vertex_texcoord;
[vo/opengl-hq] [ 86] out vec2 texcoord;
[vo/opengl-hq] [ 87]
[vo/opengl-hq] [ 88] void main() {
[vo/opengl-hq] [ 89] vec3 position = vec3(vertex_position, 1) + translation;
[vo/opengl-hq] [ 90] #ifndef FIXED_SCALE
[vo/opengl-hq] [ 91] position = transform * position;
[vo/opengl-hq] [ 92] #endif
[vo/opengl-hq] [ 93] gl_Position = vec4(position, 1);
[vo/opengl-hq] [ 94] color = vertex_color;
[vo/opengl-hq] [ 95]
[vo/opengl-hq] [ 96] // Although we are not scaling in linear light, both 3DLUT and SRGB still
[vo/opengl-hq] [ 97] // operate on linear light inputs so we have to convert to it before
[vo/opengl-hq] [ 98] // either step can be applied.
[vo/opengl-hq] [ 99] #ifdef USE_OSD_LINEAR_CONV_APPROX
[vo/opengl-hq] [100] color.rgb = pow(color.rgb, vec3(1.95));
[vo/opengl-hq] [101] #endif
[vo/opengl-hq] [102] #ifdef USE_OSD_LINEAR_CONV_BT2020
[vo/opengl-hq] [103] color.rgb = bt2020_expand(color.rgb);
[vo/opengl-hq] [104] #endif
[vo/opengl-hq] [105] #ifdef USE_OSD_LINEAR_CONV_SRGB
[vo/opengl-hq] [106] color.rgb = srgb_expand(color.rgb);
[vo/opengl-hq] [107] #endif
[vo/opengl-hq] [108] #ifdef USE_OSD_CMS_MATRIX
[vo/opengl-hq] [109] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [110] // and to BT.2020 for 3DLUT). Normal clamping here as perceptually
[vo/opengl-hq] [111] // accurate colorimetry is probably not worth the performance trade-off
[vo/opengl-hq] [112] // here.
[vo/opengl-hq] [113] color.rgb = clamp(cms_matrix * color.rgb, 0.0, 1.0);
[vo/opengl-hq] [114] #endif
[vo/opengl-hq] [115] #ifdef USE_OSD_3DLUT
[vo/opengl-hq] [116] color.rgb = pow(color.rgb, vec3(1.0/2.4)); // linear -> 2.4 3DLUT space
[vo/opengl-hq] [117] color = vec4(texture3D(lut_3d, color.rgb).rgb, color.a);
[vo/opengl-hq] [118] #endif
[vo/opengl-hq] [119] #ifdef USE_OSD_SRGB
[vo/opengl-hq] [120] color.rgb = srgb_compand(color.rgb);
[vo/opengl-hq] [121] #endif
[vo/opengl-hq] [122]
[vo/opengl-hq] [123] texcoord = vertex_texcoord;
[vo/opengl-hq] [124] }
[vo/opengl-hq] [125]
[vo/opengl-hq] vertex shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:82: 'attribute' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] fragment shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define USE_CONV CONV_PLANAR
[vo/opengl-hq] [ 64] #define USE_COLORMATRIX 1
[vo/opengl-hq] [ 65] #define SAMPLE_C(p0, p1, p2) sample_bilinear(p0, p1, p2, filter_param1_c)
[vo/opengl-hq] [ 66] #define SAMPLE_L SAMPLE_BILINEAR
[vo/opengl-hq] [ 67] #define FIXED_SCALE 1
[vo/opengl-hq] [ 68] uniform VIDEO_SAMPLER texture0;
[vo/opengl-hq] [ 69] uniform VIDEO_SAMPLER texture1;
[vo/opengl-hq] [ 70] uniform VIDEO_SAMPLER texture2;
[vo/opengl-hq] [ 71] uniform VIDEO_SAMPLER texture3;
[vo/opengl-hq] [ 72] uniform vec2 textures_size[4];
[vo/opengl-hq] [ 73] uniform vec2 chroma_center_offset;
[vo/opengl-hq] [ 74] uniform vec2 chroma_div;
[vo/opengl-hq] [ 75] uniform sampler2D lut_c;
[vo/opengl-hq] [ 76] uniform sampler2D lut_l;
[vo/opengl-hq] [ 77] #if HAVE_3DTEX
[vo/opengl-hq] [ 78] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 79] #endif
[vo/opengl-hq] [ 80] uniform sampler2D dither;
[vo/opengl-hq] [ 81] uniform mat3 colormatrix;
[vo/opengl-hq] [ 82] uniform vec3 colormatrix_c;
[vo/opengl-hq] [ 83] uniform mat3 cms_matrix;
[vo/opengl-hq] [ 84] uniform mat2 dither_trafo;
[vo/opengl-hq] [ 85] uniform vec3 inv_gamma;
[vo/opengl-hq] [ 86] uniform float input_gamma;
[vo/opengl-hq] [ 87] uniform float conv_gamma;
[vo/opengl-hq] [ 88] uniform float dither_quantization;
[vo/opengl-hq] [ 89] uniform float dither_center;
[vo/opengl-hq] [ 90] uniform float filter_param1_l;
[vo/opengl-hq] [ 91] uniform float filter_param1_c;
[vo/opengl-hq] [ 92] uniform vec2 dither_size;
[vo/opengl-hq] [ 93]
[vo/opengl-hq] [ 94] in vec2 texcoord;
[vo/opengl-hq] [ 95] DECLARE_FRAGPARMS
[vo/opengl-hq] [ 96]
[vo/opengl-hq] [ 97] #define CONV_NV12 1
[vo/opengl-hq] [ 98] #define CONV_PLANAR 2
[vo/opengl-hq] [ 99]
[vo/opengl-hq] [100] vec4 sample_bilinear(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [101] return texture(tex, texcoord);
[vo/opengl-hq] [102] }
[vo/opengl-hq] [103]
[vo/opengl-hq] [104] #define SAMPLE_BILINEAR(p0, p1, p2) sample_bilinear(p0, p1, p2, 0.0)
[vo/opengl-hq] [105]
[vo/opengl-hq] [106] // Explanation how bicubic scaling with only 4 texel fetches is done:
[vo/opengl-hq] [107] // http://www.mate.tue.nl/mate/pdfs/10318.pdf
[vo/opengl-hq] [108] // 'Efficient GPU-Based Texture Interpolation using Uniform B-Splines'
[vo/opengl-hq] [109] // Explanation why this algorithm normally always blurs, even with unit scaling:
[vo/opengl-hq] [110] // http://bigwww.epfl.ch/preprints/ruijters1001p.pdf
[vo/opengl-hq] [111] // 'GPU Prefilter for Accurate Cubic B-spline Interpolation'
[vo/opengl-hq] [112] vec4 calcweights(float s) {
[vo/opengl-hq] [113] vec4 t = vec4(-0.5, 0.1666, 0.3333, -0.3333) * s + vec4(1, 0, -0.5, 0.5);
[vo/opengl-hq] [114] t = t * s + vec4(0, 0, -0.5, 0.5);
[vo/opengl-hq] [115] t = t * s + vec4(-0.6666, 0, 0.8333, 0.1666);
[vo/opengl-hq] [116] vec2 a = vec2(1, 1) / vec2(t.z, t.w);
[vo/opengl-hq] [117] t.xy = t.xy * a + vec2(1, 1);
[vo/opengl-hq] [118] t.x = t.x + s;
[vo/opengl-hq] [119] t.y = t.y - s;
[vo/opengl-hq] [120] return t;
[vo/opengl-hq] [121] }
[vo/opengl-hq] [122]
[vo/opengl-hq] [123] vec4 sample_bicubic_fast(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [124] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [125] vec2 fcoord = fract(texcoord * texsize + vec2(0.5, 0.5));
[vo/opengl-hq] [126] vec4 parmx = calcweights(fcoord.x);
[vo/opengl-hq] [127] vec4 parmy = calcweights(fcoord.y);
[vo/opengl-hq] [128] vec4 cdelta;
[vo/opengl-hq] [129] cdelta.xz = parmx.RG * vec2(-pt.x, pt.x);
[vo/opengl-hq] [130] cdelta.yw = parmy.RG * vec2(-pt.y, pt.y);
[vo/opengl-hq] [131] // first y-interpolation
[vo/opengl-hq] [132] vec4 ar = texture(tex, texcoord + cdelta.xy);
[vo/opengl-hq] [133] vec4 ag = texture(tex, texcoord + cdelta.xw);
[vo/opengl-hq] [134] vec4 ab = mix(ag, ar, parmy.b);
[vo/opengl-hq] [135] // second y-interpolation
[vo/opengl-hq] [136] vec4 br = texture(tex, texcoord + cdelta.zy);
[vo/opengl-hq] [137] vec4 bg = texture(tex, texcoord + cdelta.zw);
[vo/opengl-hq] [138] vec4 aa = mix(bg, br, parmy.b);
[vo/opengl-hq] [139] // x-interpolation
[vo/opengl-hq] [140] return mix(aa, ab, parmx.b);
[vo/opengl-hq] [141] }
[vo/opengl-hq] [142]
[vo/opengl-hq] [143] #if HAVE_ARRAYS
[vo/opengl-hq] [144] float[2] weights2(sampler2D lookup, float f) {
[vo/opengl-hq] [145] vec2 c = texture(lookup, vec2(0.5, f)).RG;
[vo/opengl-hq] [146] return float[2](c.r, c.g);
[vo/opengl-hq] [147] }
[vo/opengl-hq] [148] float[6] weights6(sampler2D lookup, float f) {
[vo/opengl-hq] [149] vec4 c1 = texture(lookup, vec2(0.25, f));
[vo/opengl-hq] [150] vec4 c2 = texture(lookup, vec2(0.75, f));
[vo/opengl-hq] [151] return float[6](c1.r, c1.g, c1.b, c2.r, c2.g, c2.b);
[vo/opengl-hq] [152] }
[vo/opengl-hq] [153] #endif
[vo/opengl-hq] [154]
[vo/opengl-hq] [155] // For N=n*4 with n>1 (N==4 is covered by weights4()).
[vo/opengl-hq] [156] #define WEIGHTS_N(NAME, N) \
[vo/opengl-hq] [157] float[N] NAME(sampler2D lookup, float f) { \
[vo/opengl-hq] [158] float r[N]; \
[vo/opengl-hq] [159] for (int n = 0; n < N / 4; n++) { \
[vo/opengl-hq] [160] vec4 c = texture(lookup, \
[vo/opengl-hq] [161] vec2(1.0 / (N / 2) + n / float(N / 4), f)); \
[vo/opengl-hq] [162] r[n * 4 + 0] = c.r; \
[vo/opengl-hq] [163] r[n * 4 + 1] = c.g; \
[vo/opengl-hq] [164] r[n * 4 + 2] = c.b; \
[vo/opengl-hq] [165] r[n * 4 + 3] = c.a; \
[vo/opengl-hq] [166] } \
[vo/opengl-hq] [167] return r; \
[vo/opengl-hq] [168] }
[vo/opengl-hq] [169]
[vo/opengl-hq] [170] // The DIR parameter is (0, 1) or (1, 0), and we expect the shader compiler to
[vo/opengl-hq] [171] // remove all the redundant multiplications and additions.
[vo/opengl-hq] [172] #define SAMPLE_CONVOLUTION_SEP_N(NAME, DIR, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [173] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [174] vec2 pt = (vec2(1.0) / texsize) * DIR; \
[vo/opengl-hq] [175] float fcoord = dot(fract(texcoord * texsize - vec2(0.5)), DIR); \
[vo/opengl-hq] [176] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [177] float weights[N] = WEIGHTS_FUNC(LUT, fcoord); \
[vo/opengl-hq] [178] vec4 res = vec4(0); \
[vo/opengl-hq] [179] for (int n = 0; n < N; n++) { \
[vo/opengl-hq] [180] res += vec4(weights[n]) * texture(tex, base + pt * vec2(n)); \
[vo/opengl-hq] [181] } \
[vo/opengl-hq] [182] return res; \
[vo/opengl-hq] [183] }
[vo/opengl-hq] [184]
[vo/opengl-hq] [185] #define SAMPLE_CONVOLUTION_N(NAME, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [186] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [187] vec2 pt = vec2(1.0) / texsize; \
[vo/opengl-hq] [188] vec2 fcoord = fract(texcoord * texsize - vec2(0.5)); \
[vo/opengl-hq] [189] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [190] vec4 res = vec4(0); \
[vo/opengl-hq] [191] float w_x[N] = WEIGHTS_FUNC(LUT, fcoord.x); \
[vo/opengl-hq] [192] float w_y[N] = WEIGHTS_FUNC(LUT, fcoord.y); \
[vo/opengl-hq] [193] for (int y = 0; y < N; y++) { \
[vo/opengl-hq] [194] vec4 line = vec4(0); \
[vo/opengl-hq] [195] for (int x = 0; x < N; x++) \
[vo/opengl-hq] [196] line += vec4(w_x[x]) * texture(tex, base + pt * vec2(x, y));\
[vo/opengl-hq] [197] res += vec4(w_y[y]) * line; \
[vo/opengl-hq] [198] } \
[vo/opengl-hq] [199] return res; \
[vo/opengl-hq] [200] }
[vo/opengl-hq] [201]
[vo/opengl-hq] [202] #ifdef DEF_SCALER0
[vo/opengl-hq] [203] DEF_SCALER0
[vo/opengl-hq] [204] #endif
[vo/opengl-hq] [205] #ifdef DEF_SCALER1
[vo/opengl-hq] [206] DEF_SCALER1
[vo/opengl-hq] [207] #endif
[vo/opengl-hq] [208]
[vo/opengl-hq] [209] // Unsharp masking
[vo/opengl-hq] [210] vec4 sample_sharpen3(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [211] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [212] vec2 st = pt * 0.5;
[vo/opengl-hq] [213] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [214] vec4 sum = texture(tex, texcoord + st * vec2(+1, +1))
[vo/opengl-hq] [215] + texture(tex, texcoord + st * vec2(+1, -1))
[vo/opengl-hq] [216] + texture(tex, texcoord + st * vec2(-1, +1))
[vo/opengl-hq] [217] + texture(tex, texcoord + st * vec2(-1, -1));
[vo/opengl-hq] [218] return p + (p - 0.25 * sum) * param1;
[vo/opengl-hq] [219] }
[vo/opengl-hq] [220]
[vo/opengl-hq] [221] vec4 sample_sharpen5(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [222] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [223] vec2 st1 = pt * 1.2;
[vo/opengl-hq] [224] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [225] vec4 sum1 = texture(tex, texcoord + st1 * vec2(+1, +1))
[vo/opengl-hq] [226] + texture(tex, texcoord + st1 * vec2(+1, -1))
[vo/opengl-hq] [227] + texture(tex, texcoord + st1 * vec2(-1, +1))
[vo/opengl-hq] [228] + texture(tex, texcoord + st1 * vec2(-1, -1));
[vo/opengl-hq] [229] vec2 st2 = pt * 1.5;
[vo/opengl-hq] [230] vec4 sum2 = texture(tex, texcoord + st2 * vec2(+1, 0))
[vo/opengl-hq] [231] + texture(tex, texcoord + st2 * vec2( 0, +1))
[vo/opengl-hq] [232] + texture(tex, texcoord + st2 * vec2(-1, 0))
[vo/opengl-hq] [233] + texture(tex, texcoord + st2 * vec2( 0, -1));
[vo/opengl-hq] [234] vec4 t = p * 0.859375 + sum2 * -0.1171875 + sum1 * -0.09765625;
[vo/opengl-hq] [235] return p + t * param1;
[vo/opengl-hq] [236] }
[vo/opengl-hq] [237]
[vo/opengl-hq] [238] void main() {
[vo/opengl-hq] [239] vec2 chr_texcoord = texcoord;
[vo/opengl-hq] [240] #ifdef USE_RECTANGLE
[vo/opengl-hq] [241] chr_texcoord = chr_texcoord * chroma_div;
[vo/opengl-hq] [242] #else
[vo/opengl-hq] [243] // Texture coordinates are [0,1], and chroma plane coordinates are
[vo/opengl-hq] [244] // magically rescaled.
[vo/opengl-hq] [245] #endif
[vo/opengl-hq] [246] chr_texcoord = chr_texcoord + chroma_center_offset;
[vo/opengl-hq] [247] #ifndef USE_CONV
[vo/opengl-hq] [248] #define USE_CONV 0
[vo/opengl-hq] [249] #endif
[vo/opengl-hq] [250] #if USE_CONV == CONV_PLANAR
[vo/opengl-hq] [251] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [252] SAMPLE_C(texture1, textures_size[1], chr_texcoord).r,
[vo/opengl-hq] [253] SAMPLE_C(texture2, textures_size[2], chr_texcoord).r,
[vo/opengl-hq] [254] 1.0);
[vo/opengl-hq] [255] #elif USE_CONV == CONV_NV12
[vo/opengl-hq] [256] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [257] SAMPLE_C(texture1, textures_size[1], chr_texcoord).RG,
[vo/opengl-hq] [258] 1.0);
[vo/opengl-hq] [259] #else
[vo/opengl-hq] [260] vec4 acolor = SAMPLE_L(texture0, textures_size[0], texcoord);
[vo/opengl-hq] [261] #endif
[vo/opengl-hq] [262] #ifdef USE_COLOR_SWIZZLE
[vo/opengl-hq] [263] acolor = acolor. USE_COLOR_SWIZZLE ;
[vo/opengl-hq] [264] #endif
[vo/opengl-hq] [265] #ifdef USE_ALPHA_PLANE
[vo/opengl-hq] [266] acolor.a = SAMPLE_L(texture3, textures_size[3], texcoord).r;
[vo/opengl-hq] [267] #endif
[vo/opengl-hq] [268] vec3 color = acolor.rgb;
[vo/opengl-hq] [269] float alpha = acolor.a;
[vo/opengl-hq] [270] #ifdef USE_YGRAY
[vo/opengl-hq] [271] // NOTE: actually slightly wrong for 16 bit input video, and completely
[vo/opengl-hq] [272] // wrong for 9/10 bit input
[vo/opengl-hq] [273] color.gb = vec2(128.0/255.0);
[vo/opengl-hq] [274] #endif
[vo/opengl-hq] [275] #ifdef USE_INPUT_GAMMA
[vo/opengl-hq] [276] // Pre-colormatrix input gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [277] color = pow(color, vec3(input_gamma));
[vo/opengl-hq] [278] #endif
[vo/opengl-hq] [279] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [280] // Conversion from Y'CbCr or other spaces to RGB
[vo/opengl-hq] [281] color = mat3(colormatrix) * color + colormatrix_c;
[vo/opengl-hq] [282] #endif
[vo/opengl-hq] [283] #ifdef USE_CONV_GAMMA
[vo/opengl-hq] [284] // Post-colormatrix converted gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [285] color = pow(color, vec3(conv_gamma));
[vo/opengl-hq] [286] #endif
[vo/opengl-hq] [287] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [288] // Conversion from C'rcY'cC'bc to R'Y'cB' via the BT.2020 CL system:
[vo/opengl-hq] [289] // C'bc = (B'-Y'c) / 1.9404 | C'bc <= 0
[vo/opengl-hq] [290] // = (B'-Y'c) / 1.5816 | C'bc > 0
[vo/opengl-hq] [291] //
[vo/opengl-hq] [292] // C'rc = (R'-Y'c) / 1.7184 | C'rc <= 0
[vo/opengl-hq] [293] // = (R'-Y'c) / 0.9936 | C'rc > 0
[vo/opengl-hq] [294] //
[vo/opengl-hq] [295] // as per the BT.2020 specification, table 4. This is a non-linear
[vo/opengl-hq] [296] // transformation because (constant) luminance receives non-equal
[vo/opengl-hq] [297] // contributions from the three different channels.
[vo/opengl-hq] [298] color.br = color.br * mix(vec2(1.5816, 0.9936), vec2(1.9404, 1.7184),
[vo/opengl-hq] [299] lessThanEqual(color.br, vec2(0))) + color.gg;
[vo/opengl-hq] [300] #endif
[vo/opengl-hq] [301] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [302] // CONST_LUMA involves numbers outside the [0,1] range so we make sure
[vo/opengl-hq] [303] // to clip here, after the (possible) USE_CONST_LUMA calculations are done,
[vo/opengl-hq] [304] // instead of immediately after the colormatrix conversion.
[vo/opengl-hq] [305] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [306] #endif
[vo/opengl-hq] [307] // If we are scaling in linear light (SRGB or 3DLUT option enabled), we
[vo/opengl-hq] [308] // expand our source colors before scaling. This shader currently just
[vo/opengl-hq] [309] // assumes everything uses the BT.2020 12-bit gamma function, since the
[vo/opengl-hq] [310] // difference between this and BT.601, BT.709 and BT.2020 10-bit is well
[vo/opengl-hq] [311] // below the rounding error threshold for both 8-bit and even 10-bit
[vo/opengl-hq] [312] // content. It only makes a difference for 12-bit sources, so it should be
[vo/opengl-hq] [313] // fine to use here.
[vo/opengl-hq] [314] #ifdef USE_LINEAR_LIGHT_APPROX
[vo/opengl-hq] [315] // We differentiate between approximate BT.2020 (gamma 1.95) ...
[vo/opengl-hq] [316] color = pow(color, vec3(1.95));
[vo/opengl-hq] [317] #endif
[vo/opengl-hq] [318] #ifdef USE_LINEAR_LIGHT_BT2020
[vo/opengl-hq] [319] // ... and actual BT.2020 (two-part function)
[vo/opengl-hq] [320] color = bt2020_expand(color);
[vo/opengl-hq] [321] #endif
[vo/opengl-hq] [322] #ifdef USE_LINEAR_LIGHT_SRGB
[vo/opengl-hq] [323] // This is not needed for most sRGB content since we can use GL_SRGB to
[vo/opengl-hq] [324] // directly sample RGB texture in linear light, but for things which are
[vo/opengl-hq] [325] // also sRGB but in a different format (such as JPEG's YUV), we need
[vo/opengl-hq] [326] // to convert to linear light manually.
[vo/opengl-hq] [327] color = srgb_expand(color);
[vo/opengl-hq] [328] #endif
[vo/opengl-hq] [329] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [330] // Calculate the green channel from the expanded RYcB
[vo/opengl-hq] [331] // The BT.2020 specification says Yc = 0.2627*R + 0.6780*G + 0.0593*B
[vo/opengl-hq] [332] color.g = (color.g - 0.2627*color.r - 0.0593*color.b)/0.6780;
[vo/opengl-hq] [333] #endif
[vo/opengl-hq] [334] // Image upscaling happens roughly here
[vo/opengl-hq] [335] #ifdef USE_GAMMA_POW
[vo/opengl-hq] [336] // User-defined gamma correction factor (via the gamma sub-option)
[vo/opengl-hq] [337] color = pow(color, inv_gamma);
[vo/opengl-hq] [338] #endif
[vo/opengl-hq] [339] #ifdef USE_CMS_MATRIX
[vo/opengl-hq] [340] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [341] // and to BT.2020 for 3DLUT).
[vo/opengl-hq] [342] color = cms_matrix * color;
[vo/opengl-hq] [343]
[vo/opengl-hq] [344] // Clamp to the target gamut. This clamp is needed because the gamma
[vo/opengl-hq] [345] // functions are not well-defined outside this range, which is related to
[vo/opengl-hq] [346] // the fact that they're not representable on the target device.
[vo/opengl-hq] [347] // TODO: Desaturate colorimetrically; this happens automatically for
[vo/opengl-hq] [348] // 3dlut targets but not for sRGB mode. Not sure if this is a requirement.
[vo/opengl-hq] [349] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [350] #endif
[vo/opengl-hq] [351] #ifdef USE_3DLUT
[vo/opengl-hq] [352] // For the 3DLUT we are arbitrarily using 2.4 as input gamma to reduce
[vo/opengl-hq] [353] // the amount of rounding errors, so we pull up to that space first and
[vo/opengl-hq] [354] // then pass it through the 3D texture.
[vo/opengl-hq] [355] color = pow(color, vec3(1.0/2.4));
[vo/opengl-hq] [356] color = texture3D(lut_3d, color).rgb;
[vo/opengl-hq] [357] #endif
[vo/opengl-hq] [358] #ifdef USE_SRGB
[vo/opengl-hq] [359] // Adapt and compand from the linear BT2020 source to the sRGB output
[vo/opengl-hq] [360] color = srgb_compand(color);
[vo/opengl-hq] [361] #endif
[vo/opengl-hq] [362] // If none of these options took care of companding again, we have to do
[vo/opengl-hq] [363] // it manually here for the previously-expanded channels. This again
[vo/opengl-hq] [364] // comes in two flavours, one for the approximate gamma system and one
[vo/opengl-hq] [365] // for the actual gamma system.
[vo/opengl-hq] [366] #ifdef USE_CONST_LUMA_INV_APPROX
[vo/opengl-hq] [367] color = pow(color, vec3(1.0/1.95));
[vo/opengl-hq] [368] #endif
[vo/opengl-hq] [369] #ifdef USE_CONST_LUMA_INV_BT2020
[vo/opengl-hq] [370] color = bt2020_compand(color);
[vo/opengl-hq] [371] #endif
[vo/opengl-hq] [372] #ifdef USE_DITHER
[vo/opengl-hq] [373] vec2 dither_pos = gl_FragCoord.xy / dither_size;
[vo/opengl-hq] [374] #ifdef USE_TEMPORAL_DITHER
[vo/opengl-hq] [375] dither_pos = dither_trafo * dither_pos;
[vo/opengl-hq] [376] #endif
[vo/opengl-hq] [377] float dither_value = texture(dither, dither_pos).r;
[vo/opengl-hq] [378] color = floor(color * dither_quantization + dither_value + dither_center) /
[vo/opengl-hq] [379] dither_quantization;
[vo/opengl-hq] [380] #endif
[vo/opengl-hq] [381] #ifdef USE_ALPHA_BLEND
[vo/opengl-hq] [382] color = color * alpha;
[vo/opengl-hq] [383] #endif
[vo/opengl-hq] [384] #ifdef USE_ALPHA
[vo/opengl-hq] [385] out_color = vec4(color, alpha);
[vo/opengl-hq] [386] #else
[vo/opengl-hq] [387] out_color = vec4(color, 1.0);
[vo/opengl-hq] [388] #endif
[vo/opengl-hq] [389] }
[vo/opengl-hq] fragment shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:94: 'varying' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] shader link log (status=0):
[vo/opengl-hq] compiling shader program 'scale_sep', header:
[vo/opengl-hq] [ 1] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 2] #define FIXED_SCALE 1
[vo/opengl-hq] [ 3] #define DEF_SCALER0 \
[vo/opengl-hq] [ 4] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(0, 1), 6, lut_l, weights6)
[vo/opengl-hq] [ 5] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] vertex shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define FIXED_SCALE 1
[vo/opengl-hq] [ 64] #define DEF_SCALER0 \
[vo/opengl-hq] [ 65] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(0, 1), 6, lut_l, weights6)
[vo/opengl-hq] [ 66] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] [ 67]
[vo/opengl-hq] [ 68] #if __VERSION__ < 130
[vo/opengl-hq] [ 69] # undef in
[vo/opengl-hq] [ 70] # define in attribute
[vo/opengl-hq] [ 71] # define out varying
[vo/opengl-hq] [ 72] #endif
[vo/opengl-hq] [ 73]
[vo/opengl-hq] [ 74] uniform mat3 transform;
[vo/opengl-hq] [ 75] uniform vec3 translation;
[vo/opengl-hq] [ 76] #if HAVE_3DTEX
[vo/opengl-hq] [ 77] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 78] #endif
[vo/opengl-hq] [ 79] uniform mat3 cms_matrix; // transformation from file's gamut to bt.2020
[vo/opengl-hq] [ 80]
[vo/opengl-hq] [ 81] in vec2 vertex_position;
[vo/opengl-hq] [ 82] in vec4 vertex_color;
[vo/opengl-hq] [ 83] out vec4 color;
[vo/opengl-hq] [ 84] in vec2 vertex_texcoord;
[vo/opengl-hq] [ 85] out vec2 texcoord;
[vo/opengl-hq] [ 86]
[vo/opengl-hq] [ 87] void main() {
[vo/opengl-hq] [ 88] vec3 position = vec3(vertex_position, 1) + translation;
[vo/opengl-hq] [ 89] #ifndef FIXED_SCALE
[vo/opengl-hq] [ 90] position = transform * position;
[vo/opengl-hq] [ 91] #endif
[vo/opengl-hq] [ 92] gl_Position = vec4(position, 1);
[vo/opengl-hq] [ 93] color = vertex_color;
[vo/opengl-hq] [ 94]
[vo/opengl-hq] [ 95] // Although we are not scaling in linear light, both 3DLUT and SRGB still
[vo/opengl-hq] [ 96] // operate on linear light inputs so we have to convert to it before
[vo/opengl-hq] [ 97] // either step can be applied.
[vo/opengl-hq] [ 98] #ifdef USE_OSD_LINEAR_CONV_APPROX
[vo/opengl-hq] [ 99] color.rgb = pow(color.rgb, vec3(1.95));
[vo/opengl-hq] [100] #endif
[vo/opengl-hq] [101] #ifdef USE_OSD_LINEAR_CONV_BT2020
[vo/opengl-hq] [102] color.rgb = bt2020_expand(color.rgb);
[vo/opengl-hq] [103] #endif
[vo/opengl-hq] [104] #ifdef USE_OSD_LINEAR_CONV_SRGB
[vo/opengl-hq] [105] color.rgb = srgb_expand(color.rgb);
[vo/opengl-hq] [106] #endif
[vo/opengl-hq] [107] #ifdef USE_OSD_CMS_MATRIX
[vo/opengl-hq] [108] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [109] // and to BT.2020 for 3DLUT). Normal clamping here as perceptually
[vo/opengl-hq] [110] // accurate colorimetry is probably not worth the performance trade-off
[vo/opengl-hq] [111] // here.
[vo/opengl-hq] [112] color.rgb = clamp(cms_matrix * color.rgb, 0.0, 1.0);
[vo/opengl-hq] [113] #endif
[vo/opengl-hq] [114] #ifdef USE_OSD_3DLUT
[vo/opengl-hq] [115] color.rgb = pow(color.rgb, vec3(1.0/2.4)); // linear -> 2.4 3DLUT space
[vo/opengl-hq] [116] color = vec4(texture3D(lut_3d, color.rgb).rgb, color.a);
[vo/opengl-hq] [117] #endif
[vo/opengl-hq] [118] #ifdef USE_OSD_SRGB
[vo/opengl-hq] [119] color.rgb = srgb_compand(color.rgb);
[vo/opengl-hq] [120] #endif
[vo/opengl-hq] [121]
[vo/opengl-hq] [122] texcoord = vertex_texcoord;
[vo/opengl-hq] [123] }
[vo/opengl-hq] [124]
[vo/opengl-hq] vertex shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:81: 'attribute' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] fragment shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define FIXED_SCALE 1
[vo/opengl-hq] [ 64] #define DEF_SCALER0 \
[vo/opengl-hq] [ 65] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(0, 1), 6, lut_l, weights6)
[vo/opengl-hq] [ 66] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] [ 67] uniform VIDEO_SAMPLER texture0;
[vo/opengl-hq] [ 68] uniform VIDEO_SAMPLER texture1;
[vo/opengl-hq] [ 69] uniform VIDEO_SAMPLER texture2;
[vo/opengl-hq] [ 70] uniform VIDEO_SAMPLER texture3;
[vo/opengl-hq] [ 71] uniform vec2 textures_size[4];
[vo/opengl-hq] [ 72] uniform vec2 chroma_center_offset;
[vo/opengl-hq] [ 73] uniform vec2 chroma_div;
[vo/opengl-hq] [ 74] uniform sampler2D lut_c;
[vo/opengl-hq] [ 75] uniform sampler2D lut_l;
[vo/opengl-hq] [ 76] #if HAVE_3DTEX
[vo/opengl-hq] [ 77] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 78] #endif
[vo/opengl-hq] [ 79] uniform sampler2D dither;
[vo/opengl-hq] [ 80] uniform mat3 colormatrix;
[vo/opengl-hq] [ 81] uniform vec3 colormatrix_c;
[vo/opengl-hq] [ 82] uniform mat3 cms_matrix;
[vo/opengl-hq] [ 83] uniform mat2 dither_trafo;
[vo/opengl-hq] [ 84] uniform vec3 inv_gamma;
[vo/opengl-hq] [ 85] uniform float input_gamma;
[vo/opengl-hq] [ 86] uniform float conv_gamma;
[vo/opengl-hq] [ 87] uniform float dither_quantization;
[vo/opengl-hq] [ 88] uniform float dither_center;
[vo/opengl-hq] [ 89] uniform float filter_param1_l;
[vo/opengl-hq] [ 90] uniform float filter_param1_c;
[vo/opengl-hq] [ 91] uniform vec2 dither_size;
[vo/opengl-hq] [ 92]
[vo/opengl-hq] [ 93] in vec2 texcoord;
[vo/opengl-hq] [ 94] DECLARE_FRAGPARMS
[vo/opengl-hq] [ 95]
[vo/opengl-hq] [ 96] #define CONV_NV12 1
[vo/opengl-hq] [ 97] #define CONV_PLANAR 2
[vo/opengl-hq] [ 98]
[vo/opengl-hq] [ 99] vec4 sample_bilinear(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [100] return texture(tex, texcoord);
[vo/opengl-hq] [101] }
[vo/opengl-hq] [102]
[vo/opengl-hq] [103] #define SAMPLE_BILINEAR(p0, p1, p2) sample_bilinear(p0, p1, p2, 0.0)
[vo/opengl-hq] [104]
[vo/opengl-hq] [105] // Explanation how bicubic scaling with only 4 texel fetches is done:
[vo/opengl-hq] [106] // http://www.mate.tue.nl/mate/pdfs/10318.pdf
[vo/opengl-hq] [107] // 'Efficient GPU-Based Texture Interpolation using Uniform B-Splines'
[vo/opengl-hq] [108] // Explanation why this algorithm normally always blurs, even with unit scaling:
[vo/opengl-hq] [109] // http://bigwww.epfl.ch/preprints/ruijters1001p.pdf
[vo/opengl-hq] [110] // 'GPU Prefilter for Accurate Cubic B-spline Interpolation'
[vo/opengl-hq] [111] vec4 calcweights(float s) {
[vo/opengl-hq] [112] vec4 t = vec4(-0.5, 0.1666, 0.3333, -0.3333) * s + vec4(1, 0, -0.5, 0.5);
[vo/opengl-hq] [113] t = t * s + vec4(0, 0, -0.5, 0.5);
[vo/opengl-hq] [114] t = t * s + vec4(-0.6666, 0, 0.8333, 0.1666);
[vo/opengl-hq] [115] vec2 a = vec2(1, 1) / vec2(t.z, t.w);
[vo/opengl-hq] [116] t.xy = t.xy * a + vec2(1, 1);
[vo/opengl-hq] [117] t.x = t.x + s;
[vo/opengl-hq] [118] t.y = t.y - s;
[vo/opengl-hq] [119] return t;
[vo/opengl-hq] [120] }
[vo/opengl-hq] [121]
[vo/opengl-hq] [122] vec4 sample_bicubic_fast(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [123] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [124] vec2 fcoord = fract(texcoord * texsize + vec2(0.5, 0.5));
[vo/opengl-hq] [125] vec4 parmx = calcweights(fcoord.x);
[vo/opengl-hq] [126] vec4 parmy = calcweights(fcoord.y);
[vo/opengl-hq] [127] vec4 cdelta;
[vo/opengl-hq] [128] cdelta.xz = parmx.RG * vec2(-pt.x, pt.x);
[vo/opengl-hq] [129] cdelta.yw = parmy.RG * vec2(-pt.y, pt.y);
[vo/opengl-hq] [130] // first y-interpolation
[vo/opengl-hq] [131] vec4 ar = texture(tex, texcoord + cdelta.xy);
[vo/opengl-hq] [132] vec4 ag = texture(tex, texcoord + cdelta.xw);
[vo/opengl-hq] [133] vec4 ab = mix(ag, ar, parmy.b);
[vo/opengl-hq] [134] // second y-interpolation
[vo/opengl-hq] [135] vec4 br = texture(tex, texcoord + cdelta.zy);
[vo/opengl-hq] [136] vec4 bg = texture(tex, texcoord + cdelta.zw);
[vo/opengl-hq] [137] vec4 aa = mix(bg, br, parmy.b);
[vo/opengl-hq] [138] // x-interpolation
[vo/opengl-hq] [139] return mix(aa, ab, parmx.b);
[vo/opengl-hq] [140] }
[vo/opengl-hq] [141]
[vo/opengl-hq] [142] #if HAVE_ARRAYS
[vo/opengl-hq] [143] float[2] weights2(sampler2D lookup, float f) {
[vo/opengl-hq] [144] vec2 c = texture(lookup, vec2(0.5, f)).RG;
[vo/opengl-hq] [145] return float[2](c.r, c.g);
[vo/opengl-hq] [146] }
[vo/opengl-hq] [147] float[6] weights6(sampler2D lookup, float f) {
[vo/opengl-hq] [148] vec4 c1 = texture(lookup, vec2(0.25, f));
[vo/opengl-hq] [149] vec4 c2 = texture(lookup, vec2(0.75, f));
[vo/opengl-hq] [150] return float[6](c1.r, c1.g, c1.b, c2.r, c2.g, c2.b);
[vo/opengl-hq] [151] }
[vo/opengl-hq] [152] #endif
[vo/opengl-hq] [153]
[vo/opengl-hq] [154] // For N=n*4 with n>1 (N==4 is covered by weights4()).
[vo/opengl-hq] [155] #define WEIGHTS_N(NAME, N) \
[vo/opengl-hq] [156] float[N] NAME(sampler2D lookup, float f) { \
[vo/opengl-hq] [157] float r[N]; \
[vo/opengl-hq] [158] for (int n = 0; n < N / 4; n++) { \
[vo/opengl-hq] [159] vec4 c = texture(lookup, \
[vo/opengl-hq] [160] vec2(1.0 / (N / 2) + n / float(N / 4), f)); \
[vo/opengl-hq] [161] r[n * 4 + 0] = c.r; \
[vo/opengl-hq] [162] r[n * 4 + 1] = c.g; \
[vo/opengl-hq] [163] r[n * 4 + 2] = c.b; \
[vo/opengl-hq] [164] r[n * 4 + 3] = c.a; \
[vo/opengl-hq] [165] } \
[vo/opengl-hq] [166] return r; \
[vo/opengl-hq] [167] }
[vo/opengl-hq] [168]
[vo/opengl-hq] [169] // The DIR parameter is (0, 1) or (1, 0), and we expect the shader compiler to
[vo/opengl-hq] [170] // remove all the redundant multiplications and additions.
[vo/opengl-hq] [171] #define SAMPLE_CONVOLUTION_SEP_N(NAME, DIR, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [172] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [173] vec2 pt = (vec2(1.0) / texsize) * DIR; \
[vo/opengl-hq] [174] float fcoord = dot(fract(texcoord * texsize - vec2(0.5)), DIR); \
[vo/opengl-hq] [175] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [176] float weights[N] = WEIGHTS_FUNC(LUT, fcoord); \
[vo/opengl-hq] [177] vec4 res = vec4(0); \
[vo/opengl-hq] [178] for (int n = 0; n < N; n++) { \
[vo/opengl-hq] [179] res += vec4(weights[n]) * texture(tex, base + pt * vec2(n)); \
[vo/opengl-hq] [180] } \
[vo/opengl-hq] [181] return res; \
[vo/opengl-hq] [182] }
[vo/opengl-hq] [183]
[vo/opengl-hq] [184] #define SAMPLE_CONVOLUTION_N(NAME, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [185] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [186] vec2 pt = vec2(1.0) / texsize; \
[vo/opengl-hq] [187] vec2 fcoord = fract(texcoord * texsize - vec2(0.5)); \
[vo/opengl-hq] [188] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [189] vec4 res = vec4(0); \
[vo/opengl-hq] [190] float w_x[N] = WEIGHTS_FUNC(LUT, fcoord.x); \
[vo/opengl-hq] [191] float w_y[N] = WEIGHTS_FUNC(LUT, fcoord.y); \
[vo/opengl-hq] [192] for (int y = 0; y < N; y++) { \
[vo/opengl-hq] [193] vec4 line = vec4(0); \
[vo/opengl-hq] [194] for (int x = 0; x < N; x++) \
[vo/opengl-hq] [195] line += vec4(w_x[x]) * texture(tex, base + pt * vec2(x, y));\
[vo/opengl-hq] [196] res += vec4(w_y[y]) * line; \
[vo/opengl-hq] [197] } \
[vo/opengl-hq] [198] return res; \
[vo/opengl-hq] [199] }
[vo/opengl-hq] [200]
[vo/opengl-hq] [201] #ifdef DEF_SCALER0
[vo/opengl-hq] [202] DEF_SCALER0
[vo/opengl-hq] [203] #endif
[vo/opengl-hq] [204] #ifdef DEF_SCALER1
[vo/opengl-hq] [205] DEF_SCALER1
[vo/opengl-hq] [206] #endif
[vo/opengl-hq] [207]
[vo/opengl-hq] [208] // Unsharp masking
[vo/opengl-hq] [209] vec4 sample_sharpen3(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [210] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [211] vec2 st = pt * 0.5;
[vo/opengl-hq] [212] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [213] vec4 sum = texture(tex, texcoord + st * vec2(+1, +1))
[vo/opengl-hq] [214] + texture(tex, texcoord + st * vec2(+1, -1))
[vo/opengl-hq] [215] + texture(tex, texcoord + st * vec2(-1, +1))
[vo/opengl-hq] [216] + texture(tex, texcoord + st * vec2(-1, -1));
[vo/opengl-hq] [217] return p + (p - 0.25 * sum) * param1;
[vo/opengl-hq] [218] }
[vo/opengl-hq] [219]
[vo/opengl-hq] [220] vec4 sample_sharpen5(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [221] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [222] vec2 st1 = pt * 1.2;
[vo/opengl-hq] [223] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [224] vec4 sum1 = texture(tex, texcoord + st1 * vec2(+1, +1))
[vo/opengl-hq] [225] + texture(tex, texcoord + st1 * vec2(+1, -1))
[vo/opengl-hq] [226] + texture(tex, texcoord + st1 * vec2(-1, +1))
[vo/opengl-hq] [227] + texture(tex, texcoord + st1 * vec2(-1, -1));
[vo/opengl-hq] [228] vec2 st2 = pt * 1.5;
[vo/opengl-hq] [229] vec4 sum2 = texture(tex, texcoord + st2 * vec2(+1, 0))
[vo/opengl-hq] [230] + texture(tex, texcoord + st2 * vec2( 0, +1))
[vo/opengl-hq] [231] + texture(tex, texcoord + st2 * vec2(-1, 0))
[vo/opengl-hq] [232] + texture(tex, texcoord + st2 * vec2( 0, -1));
[vo/opengl-hq] [233] vec4 t = p * 0.859375 + sum2 * -0.1171875 + sum1 * -0.09765625;
[vo/opengl-hq] [234] return p + t * param1;
[vo/opengl-hq] [235] }
[vo/opengl-hq] [236]
[vo/opengl-hq] [237] void main() {
[vo/opengl-hq] [238] vec2 chr_texcoord = texcoord;
[vo/opengl-hq] [239] #ifdef USE_RECTANGLE
[vo/opengl-hq] [240] chr_texcoord = chr_texcoord * chroma_div;
[vo/opengl-hq] [241] #else
[vo/opengl-hq] [242] // Texture coordinates are [0,1], and chroma plane coordinates are
[vo/opengl-hq] [243] // magically rescaled.
[vo/opengl-hq] [244] #endif
[vo/opengl-hq] [245] chr_texcoord = chr_texcoord + chroma_center_offset;
[vo/opengl-hq] [246] #ifndef USE_CONV
[vo/opengl-hq] [247] #define USE_CONV 0
[vo/opengl-hq] [248] #endif
[vo/opengl-hq] [249] #if USE_CONV == CONV_PLANAR
[vo/opengl-hq] [250] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [251] SAMPLE_C(texture1, textures_size[1], chr_texcoord).r,
[vo/opengl-hq] [252] SAMPLE_C(texture2, textures_size[2], chr_texcoord).r,
[vo/opengl-hq] [253] 1.0);
[vo/opengl-hq] [254] #elif USE_CONV == CONV_NV12
[vo/opengl-hq] [255] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [256] SAMPLE_C(texture1, textures_size[1], chr_texcoord).RG,
[vo/opengl-hq] [257] 1.0);
[vo/opengl-hq] [258] #else
[vo/opengl-hq] [259] vec4 acolor = SAMPLE_L(texture0, textures_size[0], texcoord);
[vo/opengl-hq] [260] #endif
[vo/opengl-hq] [261] #ifdef USE_COLOR_SWIZZLE
[vo/opengl-hq] [262] acolor = acolor. USE_COLOR_SWIZZLE ;
[vo/opengl-hq] [263] #endif
[vo/opengl-hq] [264] #ifdef USE_ALPHA_PLANE
[vo/opengl-hq] [265] acolor.a = SAMPLE_L(texture3, textures_size[3], texcoord).r;
[vo/opengl-hq] [266] #endif
[vo/opengl-hq] [267] vec3 color = acolor.rgb;
[vo/opengl-hq] [268] float alpha = acolor.a;
[vo/opengl-hq] [269] #ifdef USE_YGRAY
[vo/opengl-hq] [270] // NOTE: actually slightly wrong for 16 bit input video, and completely
[vo/opengl-hq] [271] // wrong for 9/10 bit input
[vo/opengl-hq] [272] color.gb = vec2(128.0/255.0);
[vo/opengl-hq] [273] #endif
[vo/opengl-hq] [274] #ifdef USE_INPUT_GAMMA
[vo/opengl-hq] [275] // Pre-colormatrix input gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [276] color = pow(color, vec3(input_gamma));
[vo/opengl-hq] [277] #endif
[vo/opengl-hq] [278] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [279] // Conversion from Y'CbCr or other spaces to RGB
[vo/opengl-hq] [280] color = mat3(colormatrix) * color + colormatrix_c;
[vo/opengl-hq] [281] #endif
[vo/opengl-hq] [282] #ifdef USE_CONV_GAMMA
[vo/opengl-hq] [283] // Post-colormatrix converted gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [284] color = pow(color, vec3(conv_gamma));
[vo/opengl-hq] [285] #endif
[vo/opengl-hq] [286] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [287] // Conversion from C'rcY'cC'bc to R'Y'cB' via the BT.2020 CL system:
[vo/opengl-hq] [288] // C'bc = (B'-Y'c) / 1.9404 | C'bc <= 0
[vo/opengl-hq] [289] // = (B'-Y'c) / 1.5816 | C'bc > 0
[vo/opengl-hq] [290] //
[vo/opengl-hq] [291] // C'rc = (R'-Y'c) / 1.7184 | C'rc <= 0
[vo/opengl-hq] [292] // = (R'-Y'c) / 0.9936 | C'rc > 0
[vo/opengl-hq] [293] //
[vo/opengl-hq] [294] // as per the BT.2020 specification, table 4. This is a non-linear
[vo/opengl-hq] [295] // transformation because (constant) luminance receives non-equal
[vo/opengl-hq] [296] // contributions from the three different channels.
[vo/opengl-hq] [297] color.br = color.br * mix(vec2(1.5816, 0.9936), vec2(1.9404, 1.7184),
[vo/opengl-hq] [298] lessThanEqual(color.br, vec2(0))) + color.gg;
[vo/opengl-hq] [299] #endif
[vo/opengl-hq] [300] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [301] // CONST_LUMA involves numbers outside the [0,1] range so we make sure
[vo/opengl-hq] [302] // to clip here, after the (possible) USE_CONST_LUMA calculations are done,
[vo/opengl-hq] [303] // instead of immediately after the colormatrix conversion.
[vo/opengl-hq] [304] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [305] #endif
[vo/opengl-hq] [306] // If we are scaling in linear light (SRGB or 3DLUT option enabled), we
[vo/opengl-hq] [307] // expand our source colors before scaling. This shader currently just
[vo/opengl-hq] [308] // assumes everything uses the BT.2020 12-bit gamma function, since the
[vo/opengl-hq] [309] // difference between this and BT.601, BT.709 and BT.2020 10-bit is well
[vo/opengl-hq] [310] // below the rounding error threshold for both 8-bit and even 10-bit
[vo/opengl-hq] [311] // content. It only makes a difference for 12-bit sources, so it should be
[vo/opengl-hq] [312] // fine to use here.
[vo/opengl-hq] [313] #ifdef USE_LINEAR_LIGHT_APPROX
[vo/opengl-hq] [314] // We differentiate between approximate BT.2020 (gamma 1.95) ...
[vo/opengl-hq] [315] color = pow(color, vec3(1.95));
[vo/opengl-hq] [316] #endif
[vo/opengl-hq] [317] #ifdef USE_LINEAR_LIGHT_BT2020
[vo/opengl-hq] [318] // ... and actual BT.2020 (two-part function)
[vo/opengl-hq] [319] color = bt2020_expand(color);
[vo/opengl-hq] [320] #endif
[vo/opengl-hq] [321] #ifdef USE_LINEAR_LIGHT_SRGB
[vo/opengl-hq] [322] // This is not needed for most sRGB content since we can use GL_SRGB to
[vo/opengl-hq] [323] // directly sample RGB texture in linear light, but for things which are
[vo/opengl-hq] [324] // also sRGB but in a different format (such as JPEG's YUV), we need
[vo/opengl-hq] [325] // to convert to linear light manually.
[vo/opengl-hq] [326] color = srgb_expand(color);
[vo/opengl-hq] [327] #endif
[vo/opengl-hq] [328] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [329] // Calculate the green channel from the expanded RYcB
[vo/opengl-hq] [330] // The BT.2020 specification says Yc = 0.2627*R + 0.6780*G + 0.0593*B
[vo/opengl-hq] [331] color.g = (color.g - 0.2627*color.r - 0.0593*color.b)/0.6780;
[vo/opengl-hq] [332] #endif
[vo/opengl-hq] [333] // Image upscaling happens roughly here
[vo/opengl-hq] [334] #ifdef USE_GAMMA_POW
[vo/opengl-hq] [335] // User-defined gamma correction factor (via the gamma sub-option)
[vo/opengl-hq] [336] color = pow(color, inv_gamma);
[vo/opengl-hq] [337] #endif
[vo/opengl-hq] [338] #ifdef USE_CMS_MATRIX
[vo/opengl-hq] [339] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [340] // and to BT.2020 for 3DLUT).
[vo/opengl-hq] [341] color = cms_matrix * color;
[vo/opengl-hq] [342]
[vo/opengl-hq] [343] // Clamp to the target gamut. This clamp is needed because the gamma
[vo/opengl-hq] [344] // functions are not well-defined outside this range, which is related to
[vo/opengl-hq] [345] // the fact that they're not representable on the target device.
[vo/opengl-hq] [346] // TODO: Desaturate colorimetrically; this happens automatically for
[vo/opengl-hq] [347] // 3dlut targets but not for sRGB mode. Not sure if this is a requirement.
[vo/opengl-hq] [348] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [349] #endif
[vo/opengl-hq] [350] #ifdef USE_3DLUT
[vo/opengl-hq] [351] // For the 3DLUT we are arbitrarily using 2.4 as input gamma to reduce
[vo/opengl-hq] [352] // the amount of rounding errors, so we pull up to that space first and
[vo/opengl-hq] [353] // then pass it through the 3D texture.
[vo/opengl-hq] [354] color = pow(color, vec3(1.0/2.4));
[vo/opengl-hq] [355] color = texture3D(lut_3d, color).rgb;
[vo/opengl-hq] [356] #endif
[vo/opengl-hq] [357] #ifdef USE_SRGB
[vo/opengl-hq] [358] // Adapt and compand from the linear BT2020 source to the sRGB output
[vo/opengl-hq] [359] color = srgb_compand(color);
[vo/opengl-hq] [360] #endif
[vo/opengl-hq] [361] // If none of these options took care of companding again, we have to do
[vo/opengl-hq] [362] // it manually here for the previously-expanded channels. This again
[vo/opengl-hq] [363] // comes in two flavours, one for the approximate gamma system and one
[vo/opengl-hq] [364] // for the actual gamma system.
[vo/opengl-hq] [365] #ifdef USE_CONST_LUMA_INV_APPROX
[vo/opengl-hq] [366] color = pow(color, vec3(1.0/1.95));
[vo/opengl-hq] [367] #endif
[vo/opengl-hq] [368] #ifdef USE_CONST_LUMA_INV_BT2020
[vo/opengl-hq] [369] color = bt2020_compand(color);
[vo/opengl-hq] [370] #endif
[vo/opengl-hq] [371] #ifdef USE_DITHER
[vo/opengl-hq] [372] vec2 dither_pos = gl_FragCoord.xy / dither_size;
[vo/opengl-hq] [373] #ifdef USE_TEMPORAL_DITHER
[vo/opengl-hq] [374] dither_pos = dither_trafo * dither_pos;
[vo/opengl-hq] [375] #endif
[vo/opengl-hq] [376] float dither_value = texture(dither, dither_pos).r;
[vo/opengl-hq] [377] color = floor(color * dither_quantization + dither_value + dither_center) /
[vo/opengl-hq] [378] dither_quantization;
[vo/opengl-hq] [379] #endif
[vo/opengl-hq] [380] #ifdef USE_ALPHA_BLEND
[vo/opengl-hq] [381] color = color * alpha;
[vo/opengl-hq] [382] #endif
[vo/opengl-hq] [383] #ifdef USE_ALPHA
[vo/opengl-hq] [384] out_color = vec4(color, alpha);
[vo/opengl-hq] [385] #else
[vo/opengl-hq] [386] out_color = vec4(color, 1.0);
[vo/opengl-hq] [387] #endif
[vo/opengl-hq] [388] }
[vo/opengl-hq] fragment shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:93: 'varying' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] shader link log (status=0):
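The matching fragment shader fails the same way: its line 93 is "in vec2 texcoord;", which the GLSL 1.20 compatibility layer's #else branch rewrites to "varying vec2 texcoord;", another reserved word under GLSL ES 3.00. With both stages refusing to compile, the link step (status=0 above) cannot produce a usable program. For reference — again only a minimal hand-written sketch under the same assumptions, not mpv's generated shader — an ES 3.00 fragment shader keeps "in" for its inputs and declares its own output instead of writing gl_FragColor:

    #version 300 es
    precision mediump float;
    // Sketch: ES 3.00 fragment inputs stay 'in' (never 'varying'), and the
    // result goes to a user-declared 'out' variable rather than gl_FragColor.
    in vec2 texcoord;
    out vec4 out_color;
    uniform sampler2D texture0;
    void main() {
        out_color = texture(texture0, texcoord);
    }
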
[vo/opengl-hq] compiling shader program 'final', header:
[vo/opengl-hq] [ 1] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 2] #define USE_DITHER 1
[vo/opengl-hq] [ 3] #define DEF_SCALER0 \
[vo/opengl-hq] [ 4] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(1, 0), 6, lut_l, weights6)
[vo/opengl-hq] [ 5] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] vertex shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define USE_DITHER 1
[vo/opengl-hq] [ 64] #define DEF_SCALER0 \
[vo/opengl-hq] [ 65] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(1, 0), 6, lut_l, weights6)
[vo/opengl-hq] [ 66] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] [ 67]
[vo/opengl-hq] [ 68] #if __VERSION__ < 130
[vo/opengl-hq] [ 69] # undef in
[vo/opengl-hq] [ 70] # define in attribute
[vo/opengl-hq] [ 71] # define out varying
[vo/opengl-hq] [ 72] #endif
[vo/opengl-hq] [ 73]
[vo/opengl-hq] [ 74] uniform mat3 transform;
[vo/opengl-hq] [ 75] uniform vec3 translation;
[vo/opengl-hq] [ 76] #if HAVE_3DTEX
[vo/opengl-hq] [ 77] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 78] #endif
[vo/opengl-hq] [ 79] uniform mat3 cms_matrix; // transformation from file's gamut to bt.2020
[vo/opengl-hq] [ 80]
[vo/opengl-hq] [ 81] in vec2 vertex_position;
[vo/opengl-hq] [ 82] in vec4 vertex_color;
[vo/opengl-hq] [ 83] out vec4 color;
[vo/opengl-hq] [ 84] in vec2 vertex_texcoord;
[vo/opengl-hq] [ 85] out vec2 texcoord;
[vo/opengl-hq] [ 86]
[vo/opengl-hq] [ 87] void main() {
[vo/opengl-hq] [ 88] vec3 position = vec3(vertex_position, 1) + translation;
[vo/opengl-hq] [ 89] #ifndef FIXED_SCALE
[vo/opengl-hq] [ 90] position = transform * position;
[vo/opengl-hq] [ 91] #endif
[vo/opengl-hq] [ 92] gl_Position = vec4(position, 1);
[vo/opengl-hq] [ 93] color = vertex_color;
[vo/opengl-hq] [ 94]
[vo/opengl-hq] [ 95] // Although we are not scaling in linear light, both 3DLUT and SRGB still
[vo/opengl-hq] [ 96] // operate on linear light inputs so we have to convert to it before
[vo/opengl-hq] [ 97] // either step can be applied.
[vo/opengl-hq] [ 98] #ifdef USE_OSD_LINEAR_CONV_APPROX
[vo/opengl-hq] [ 99] color.rgb = pow(color.rgb, vec3(1.95));
[vo/opengl-hq] [100] #endif
[vo/opengl-hq] [101] #ifdef USE_OSD_LINEAR_CONV_BT2020
[vo/opengl-hq] [102] color.rgb = bt2020_expand(color.rgb);
[vo/opengl-hq] [103] #endif
[vo/opengl-hq] [104] #ifdef USE_OSD_LINEAR_CONV_SRGB
[vo/opengl-hq] [105] color.rgb = srgb_expand(color.rgb);
[vo/opengl-hq] [106] #endif
[vo/opengl-hq] [107] #ifdef USE_OSD_CMS_MATRIX
[vo/opengl-hq] [108] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [109] // and to BT.2020 for 3DLUT). Normal clamping here as perceptually
[vo/opengl-hq] [110] // accurate colorimetry is probably not worth the performance trade-off
[vo/opengl-hq] [111] // here.
[vo/opengl-hq] [112] color.rgb = clamp(cms_matrix * color.rgb, 0.0, 1.0);
[vo/opengl-hq] [113] #endif
[vo/opengl-hq] [114] #ifdef USE_OSD_3DLUT
[vo/opengl-hq] [115] color.rgb = pow(color.rgb, vec3(1.0/2.4)); // linear -> 2.4 3DLUT space
[vo/opengl-hq] [116] color = vec4(texture3D(lut_3d, color.rgb).rgb, color.a);
[vo/opengl-hq] [117] #endif
[vo/opengl-hq] [118] #ifdef USE_OSD_SRGB
[vo/opengl-hq] [119] color.rgb = srgb_compand(color.rgb);
[vo/opengl-hq] [120] #endif
[vo/opengl-hq] [121]
[vo/opengl-hq] [122] texcoord = vertex_texcoord;
[vo/opengl-hq] [123] }
[vo/opengl-hq] [124]
[vo/opengl-hq] vertex shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:81: 'attribute' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] fragment shader source:
[vo/opengl-hq] [ 1] #version 300 es
[vo/opengl-hq] [ 2] #define HAVE_RG 1
[vo/opengl-hq] [ 3]
[vo/opengl-hq] [ 4] #ifdef GL_ES
[vo/opengl-hq] [ 5] precision mediump float;
[vo/opengl-hq] [ 6] #define HAVE_3DTEX (__VERSION__ >= 300)
[vo/opengl-hq] [ 7] #define HAVE_ARRAYS (__VERSION__ >= 300)
[vo/opengl-hq] [ 8] #else
[vo/opengl-hq] [ 9] // Desktop GL
[vo/opengl-hq] [ 10] #define HAVE_3DTEX 1
[vo/opengl-hq] [ 11] #define HAVE_ARRAYS 1
[vo/opengl-hq] [ 12] #endif
[vo/opengl-hq] [ 13]
[vo/opengl-hq] [ 14] // GLSL 1.20 compatibility layer
[vo/opengl-hq] [ 15] // texture() should be assumed to always map to texture2D()
[vo/opengl-hq] [ 16] #if __VERSION__ >= 130
[vo/opengl-hq] [ 17] # define texture1D texture
[vo/opengl-hq] [ 18] # define texture3D texture
[vo/opengl-hq] [ 19] # define DECLARE_FRAGPARMS \
[vo/opengl-hq] [ 20] out vec4 out_color;
[vo/opengl-hq] [ 21] #else
[vo/opengl-hq] [ 22] # define texture texture2D
[vo/opengl-hq] [ 23] # define DECLARE_FRAGPARMS
[vo/opengl-hq] [ 24] # define out_color gl_FragColor
[vo/opengl-hq] [ 25] # define in varying
[vo/opengl-hq] [ 26] #endif
[vo/opengl-hq] [ 27]
[vo/opengl-hq] [ 28] #if HAVE_RG
[vo/opengl-hq] [ 29] #define RG rg
[vo/opengl-hq] [ 30] #else
[vo/opengl-hq] [ 31] #define RG ra
[vo/opengl-hq] [ 32] #endif
[vo/opengl-hq] [ 33]
[vo/opengl-hq] [ 34] // Earlier GLSL doesn't support mix() with bvec
[vo/opengl-hq] [ 35] #if __VERSION__ >= 130
[vo/opengl-hq] [ 36] vec3 srgb_expand(vec3 v)
[vo/opengl-hq] [ 37] {
[vo/opengl-hq] [ 38] return mix(v / vec3(12.92), pow((v + vec3(0.055))/vec3(1.055), vec3(2.4)),
[vo/opengl-hq] [ 39] lessThanEqual(vec3(0.04045), v));
[vo/opengl-hq] [ 40] }
[vo/opengl-hq] [ 41]
[vo/opengl-hq] [ 42] vec3 srgb_compand(vec3 v)
[vo/opengl-hq] [ 43] {
[vo/opengl-hq] [ 44] return mix(v * vec3(12.92), vec3(1.055) * pow(v, vec3(1.0/2.4)) - vec3(0.055),
[vo/opengl-hq] [ 45] lessThanEqual(vec3(0.0031308), v));
[vo/opengl-hq] [ 46] }
[vo/opengl-hq] [ 47]
[vo/opengl-hq] [ 48] vec3 bt2020_expand(vec3 v)
[vo/opengl-hq] [ 49] {
[vo/opengl-hq] [ 50] return mix(v / vec3(4.5), pow((v + vec3(0.0993))/vec3(1.0993), vec3(1.0/0.45)),
[vo/opengl-hq] [ 51] lessThanEqual(vec3(0.08145), v));
[vo/opengl-hq] [ 52] }
[vo/opengl-hq] [ 53]
[vo/opengl-hq] [ 54] vec3 bt2020_compand(vec3 v)
[vo/opengl-hq] [ 55] {
[vo/opengl-hq] [ 56] return mix(v * vec3(4.5), vec3(1.0993) * pow(v, vec3(0.45)) - vec3(0.0993),
[vo/opengl-hq] [ 57] lessThanEqual(vec3(0.0181), v));
[vo/opengl-hq] [ 58] }
[vo/opengl-hq] [ 59] #endif
[vo/opengl-hq] [ 60]
[vo/opengl-hq] [ 61] // -- prelude end
[vo/opengl-hq] [ 62] #define VIDEO_SAMPLER sampler2D
[vo/opengl-hq] [ 63] #define USE_DITHER 1
[vo/opengl-hq] [ 64] #define DEF_SCALER0 \
[vo/opengl-hq] [ 65] SAMPLE_CONVOLUTION_SEP_N(sample_scaler0, vec2(1, 0), 6, lut_l, weights6)
[vo/opengl-hq] [ 66] #define SAMPLE_L sample_scaler0
[vo/opengl-hq] [ 67] uniform VIDEO_SAMPLER texture0;
[vo/opengl-hq] [ 68] uniform VIDEO_SAMPLER texture1;
[vo/opengl-hq] [ 69] uniform VIDEO_SAMPLER texture2;
[vo/opengl-hq] [ 70] uniform VIDEO_SAMPLER texture3;
[vo/opengl-hq] [ 71] uniform vec2 textures_size[4];
[vo/opengl-hq] [ 72] uniform vec2 chroma_center_offset;
[vo/opengl-hq] [ 73] uniform vec2 chroma_div;
[vo/opengl-hq] [ 74] uniform sampler2D lut_c;
[vo/opengl-hq] [ 75] uniform sampler2D lut_l;
[vo/opengl-hq] [ 76] #if HAVE_3DTEX
[vo/opengl-hq] [ 77] uniform sampler3D lut_3d;
[vo/opengl-hq] [ 78] #endif
[vo/opengl-hq] [ 79] uniform sampler2D dither;
[vo/opengl-hq] [ 80] uniform mat3 colormatrix;
[vo/opengl-hq] [ 81] uniform vec3 colormatrix_c;
[vo/opengl-hq] [ 82] uniform mat3 cms_matrix;
[vo/opengl-hq] [ 83] uniform mat2 dither_trafo;
[vo/opengl-hq] [ 84] uniform vec3 inv_gamma;
[vo/opengl-hq] [ 85] uniform float input_gamma;
[vo/opengl-hq] [ 86] uniform float conv_gamma;
[vo/opengl-hq] [ 87] uniform float dither_quantization;
[vo/opengl-hq] [ 88] uniform float dither_center;
[vo/opengl-hq] [ 89] uniform float filter_param1_l;
[vo/opengl-hq] [ 90] uniform float filter_param1_c;
[vo/opengl-hq] [ 91] uniform vec2 dither_size;
[vo/opengl-hq] [ 92]
[vo/opengl-hq] [ 93] in vec2 texcoord;
[vo/opengl-hq] [ 94] DECLARE_FRAGPARMS
[vo/opengl-hq] [ 95]
[vo/opengl-hq] [ 96] #define CONV_NV12 1
[vo/opengl-hq] [ 97] #define CONV_PLANAR 2
[vo/opengl-hq] [ 98]
[vo/opengl-hq] [ 99] vec4 sample_bilinear(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [100] return texture(tex, texcoord);
[vo/opengl-hq] [101] }
[vo/opengl-hq] [102]
[vo/opengl-hq] [103] #define SAMPLE_BILINEAR(p0, p1, p2) sample_bilinear(p0, p1, p2, 0.0)
[vo/opengl-hq] [104]
[vo/opengl-hq] [105] // Explanation how bicubic scaling with only 4 texel fetches is done:
[vo/opengl-hq] [106] // http://www.mate.tue.nl/mate/pdfs/10318.pdf
[vo/opengl-hq] [107] // 'Efficient GPU-Based Texture Interpolation using Uniform B-Splines'
[vo/opengl-hq] [108] // Explanation why this algorithm normally always blurs, even with unit scaling:
[vo/opengl-hq] [109] // http://bigwww.epfl.ch/preprints/ruijters1001p.pdf
[vo/opengl-hq] [110] // 'GPU Prefilter for Accurate Cubic B-spline Interpolation'
[vo/opengl-hq] [111] vec4 calcweights(float s) {
[vo/opengl-hq] [112] vec4 t = vec4(-0.5, 0.1666, 0.3333, -0.3333) * s + vec4(1, 0, -0.5, 0.5);
[vo/opengl-hq] [113] t = t * s + vec4(0, 0, -0.5, 0.5);
[vo/opengl-hq] [114] t = t * s + vec4(-0.6666, 0, 0.8333, 0.1666);
[vo/opengl-hq] [115] vec2 a = vec2(1, 1) / vec2(t.z, t.w);
[vo/opengl-hq] [116] t.xy = t.xy * a + vec2(1, 1);
[vo/opengl-hq] [117] t.x = t.x + s;
[vo/opengl-hq] [118] t.y = t.y - s;
[vo/opengl-hq] [119] return t;
[vo/opengl-hq] [120] }
[vo/opengl-hq] [121]
[vo/opengl-hq] [122] vec4 sample_bicubic_fast(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [123] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [124] vec2 fcoord = fract(texcoord * texsize + vec2(0.5, 0.5));
[vo/opengl-hq] [125] vec4 parmx = calcweights(fcoord.x);
[vo/opengl-hq] [126] vec4 parmy = calcweights(fcoord.y);
[vo/opengl-hq] [127] vec4 cdelta;
[vo/opengl-hq] [128] cdelta.xz = parmx.RG * vec2(-pt.x, pt.x);
[vo/opengl-hq] [129] cdelta.yw = parmy.RG * vec2(-pt.y, pt.y);
[vo/opengl-hq] [130] // first y-interpolation
[vo/opengl-hq] [131] vec4 ar = texture(tex, texcoord + cdelta.xy);
[vo/opengl-hq] [132] vec4 ag = texture(tex, texcoord + cdelta.xw);
[vo/opengl-hq] [133] vec4 ab = mix(ag, ar, parmy.b);
[vo/opengl-hq] [134] // second y-interpolation
[vo/opengl-hq] [135] vec4 br = texture(tex, texcoord + cdelta.zy);
[vo/opengl-hq] [136] vec4 bg = texture(tex, texcoord + cdelta.zw);
[vo/opengl-hq] [137] vec4 aa = mix(bg, br, parmy.b);
[vo/opengl-hq] [138] // x-interpolation
[vo/opengl-hq] [139] return mix(aa, ab, parmx.b);
[vo/opengl-hq] [140] }
[vo/opengl-hq] [141]
[vo/opengl-hq] [142] #if HAVE_ARRAYS
[vo/opengl-hq] [143] float[2] weights2(sampler2D lookup, float f) {
[vo/opengl-hq] [144] vec2 c = texture(lookup, vec2(0.5, f)).RG;
[vo/opengl-hq] [145] return float[2](c.r, c.g);
[vo/opengl-hq] [146] }
[vo/opengl-hq] [147] float[6] weights6(sampler2D lookup, float f) {
[vo/opengl-hq] [148] vec4 c1 = texture(lookup, vec2(0.25, f));
[vo/opengl-hq] [149] vec4 c2 = texture(lookup, vec2(0.75, f));
[vo/opengl-hq] [150] return float[6](c1.r, c1.g, c1.b, c2.r, c2.g, c2.b);
[vo/opengl-hq] [151] }
[vo/opengl-hq] [152] #endif
[vo/opengl-hq] [153]
[vo/opengl-hq] [154] // For N=n*4 with n>1 (N==4 is covered by weights4()).
[vo/opengl-hq] [155] #define WEIGHTS_N(NAME, N) \
[vo/opengl-hq] [156] float[N] NAME(sampler2D lookup, float f) { \
[vo/opengl-hq] [157] float r[N]; \
[vo/opengl-hq] [158] for (int n = 0; n < N / 4; n++) { \
[vo/opengl-hq] [159] vec4 c = texture(lookup, \
[vo/opengl-hq] [160] vec2(1.0 / (N / 2) + n / float(N / 4), f)); \
[vo/opengl-hq] [161] r[n * 4 + 0] = c.r; \
[vo/opengl-hq] [162] r[n * 4 + 1] = c.g; \
[vo/opengl-hq] [163] r[n * 4 + 2] = c.b; \
[vo/opengl-hq] [164] r[n * 4 + 3] = c.a; \
[vo/opengl-hq] [165] } \
[vo/opengl-hq] [166] return r; \
[vo/opengl-hq] [167] }
[vo/opengl-hq] [168]
[vo/opengl-hq] [169] // The DIR parameter is (0, 1) or (1, 0), and we expect the shader compiler to
[vo/opengl-hq] [170] // remove all the redundant multiplications and additions.
[vo/opengl-hq] [171] #define SAMPLE_CONVOLUTION_SEP_N(NAME, DIR, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [172] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [173] vec2 pt = (vec2(1.0) / texsize) * DIR; \
[vo/opengl-hq] [174] float fcoord = dot(fract(texcoord * texsize - vec2(0.5)), DIR); \
[vo/opengl-hq] [175] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [176] float weights[N] = WEIGHTS_FUNC(LUT, fcoord); \
[vo/opengl-hq] [177] vec4 res = vec4(0); \
[vo/opengl-hq] [178] for (int n = 0; n < N; n++) { \
[vo/opengl-hq] [179] res += vec4(weights[n]) * texture(tex, base + pt * vec2(n)); \
[vo/opengl-hq] [180] } \
[vo/opengl-hq] [181] return res; \
[vo/opengl-hq] [182] }
[vo/opengl-hq] [183]
[vo/opengl-hq] [184] #define SAMPLE_CONVOLUTION_N(NAME, N, LUT, WEIGHTS_FUNC) \
[vo/opengl-hq] [185] vec4 NAME(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord) { \
[vo/opengl-hq] [186] vec2 pt = vec2(1.0) / texsize; \
[vo/opengl-hq] [187] vec2 fcoord = fract(texcoord * texsize - vec2(0.5)); \
[vo/opengl-hq] [188] vec2 base = texcoord - fcoord * pt - pt * vec2(N / 2 - 1); \
[vo/opengl-hq] [189] vec4 res = vec4(0); \
[vo/opengl-hq] [190] float w_x[N] = WEIGHTS_FUNC(LUT, fcoord.x); \
[vo/opengl-hq] [191] float w_y[N] = WEIGHTS_FUNC(LUT, fcoord.y); \
[vo/opengl-hq] [192] for (int y = 0; y < N; y++) { \
[vo/opengl-hq] [193] vec4 line = vec4(0); \
[vo/opengl-hq] [194] for (int x = 0; x < N; x++) \
[vo/opengl-hq] [195] line += vec4(w_x[x]) * texture(tex, base + pt * vec2(x, y));\
[vo/opengl-hq] [196] res += vec4(w_y[y]) * line; \
[vo/opengl-hq] [197] } \
[vo/opengl-hq] [198] return res; \
[vo/opengl-hq] [199] }
[vo/opengl-hq] [200]
[vo/opengl-hq] [201] #ifdef DEF_SCALER0
[vo/opengl-hq] [202] DEF_SCALER0
[vo/opengl-hq] [203] #endif
[vo/opengl-hq] [204] #ifdef DEF_SCALER1
[vo/opengl-hq] [205] DEF_SCALER1
[vo/opengl-hq] [206] #endif
[vo/opengl-hq] [207]
[vo/opengl-hq] [208] // Unsharp masking
[vo/opengl-hq] [209] vec4 sample_sharpen3(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [210] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [211] vec2 st = pt * 0.5;
[vo/opengl-hq] [212] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [213] vec4 sum = texture(tex, texcoord + st * vec2(+1, +1))
[vo/opengl-hq] [214] + texture(tex, texcoord + st * vec2(+1, -1))
[vo/opengl-hq] [215] + texture(tex, texcoord + st * vec2(-1, +1))
[vo/opengl-hq] [216] + texture(tex, texcoord + st * vec2(-1, -1));
[vo/opengl-hq] [217] return p + (p - 0.25 * sum) * param1;
[vo/opengl-hq] [218] }
[vo/opengl-hq] [219]
[vo/opengl-hq] [220] vec4 sample_sharpen5(VIDEO_SAMPLER tex, vec2 texsize, vec2 texcoord, float param1) {
[vo/opengl-hq] [221] vec2 pt = 1.0 / texsize;
[vo/opengl-hq] [222] vec2 st1 = pt * 1.2;
[vo/opengl-hq] [223] vec4 p = texture(tex, texcoord);
[vo/opengl-hq] [224] vec4 sum1 = texture(tex, texcoord + st1 * vec2(+1, +1))
[vo/opengl-hq] [225] + texture(tex, texcoord + st1 * vec2(+1, -1))
[vo/opengl-hq] [226] + texture(tex, texcoord + st1 * vec2(-1, +1))
[vo/opengl-hq] [227] + texture(tex, texcoord + st1 * vec2(-1, -1));
[vo/opengl-hq] [228] vec2 st2 = pt * 1.5;
[vo/opengl-hq] [229] vec4 sum2 = texture(tex, texcoord + st2 * vec2(+1, 0))
[vo/opengl-hq] [230] + texture(tex, texcoord + st2 * vec2( 0, +1))
[vo/opengl-hq] [231] + texture(tex, texcoord + st2 * vec2(-1, 0))
[vo/opengl-hq] [232] + texture(tex, texcoord + st2 * vec2( 0, -1));
[vo/opengl-hq] [233] vec4 t = p * 0.859375 + sum2 * -0.1171875 + sum1 * -0.09765625;
[vo/opengl-hq] [234] return p + t * param1;
[vo/opengl-hq] [235] }
[vo/opengl-hq] [236]
[vo/opengl-hq] [237] void main() {
[vo/opengl-hq] [238] vec2 chr_texcoord = texcoord;
[vo/opengl-hq] [239] #ifdef USE_RECTANGLE
[vo/opengl-hq] [240] chr_texcoord = chr_texcoord * chroma_div;
[vo/opengl-hq] [241] #else
[vo/opengl-hq] [242] // Texture coordinates are [0,1], and chroma plane coordinates are
[vo/opengl-hq] [243] // magically rescaled.
[vo/opengl-hq] [244] #endif
[vo/opengl-hq] [245] chr_texcoord = chr_texcoord + chroma_center_offset;
[vo/opengl-hq] [246] #ifndef USE_CONV
[vo/opengl-hq] [247] #define USE_CONV 0
[vo/opengl-hq] [248] #endif
[vo/opengl-hq] [249] #if USE_CONV == CONV_PLANAR
[vo/opengl-hq] [250] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [251] SAMPLE_C(texture1, textures_size[1], chr_texcoord).r,
[vo/opengl-hq] [252] SAMPLE_C(texture2, textures_size[2], chr_texcoord).r,
[vo/opengl-hq] [253] 1.0);
[vo/opengl-hq] [254] #elif USE_CONV == CONV_NV12
[vo/opengl-hq] [255] vec4 acolor = vec4(SAMPLE_L(texture0, textures_size[0], texcoord).r,
[vo/opengl-hq] [256] SAMPLE_C(texture1, textures_size[1], chr_texcoord).RG,
[vo/opengl-hq] [257] 1.0);
[vo/opengl-hq] [258] #else
[vo/opengl-hq] [259] vec4 acolor = SAMPLE_L(texture0, textures_size[0], texcoord);
[vo/opengl-hq] [260] #endif
[vo/opengl-hq] [261] #ifdef USE_COLOR_SWIZZLE
[vo/opengl-hq] [262] acolor = acolor. USE_COLOR_SWIZZLE ;
[vo/opengl-hq] [263] #endif
[vo/opengl-hq] [264] #ifdef USE_ALPHA_PLANE
[vo/opengl-hq] [265] acolor.a = SAMPLE_L(texture3, textures_size[3], texcoord).r;
[vo/opengl-hq] [266] #endif
[vo/opengl-hq] [267] vec3 color = acolor.rgb;
[vo/opengl-hq] [268] float alpha = acolor.a;
[vo/opengl-hq] [269] #ifdef USE_YGRAY
[vo/opengl-hq] [270] // NOTE: actually slightly wrong for 16 bit input video, and completely
[vo/opengl-hq] [271] // wrong for 9/10 bit input
[vo/opengl-hq] [272] color.gb = vec2(128.0/255.0);
[vo/opengl-hq] [273] #endif
[vo/opengl-hq] [274] #ifdef USE_INPUT_GAMMA
[vo/opengl-hq] [275] // Pre-colormatrix input gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [276] color = pow(color, vec3(input_gamma));
[vo/opengl-hq] [277] #endif
[vo/opengl-hq] [278] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [279] // Conversion from Y'CbCr or other spaces to RGB
[vo/opengl-hq] [280] color = mat3(colormatrix) * color + colormatrix_c;
[vo/opengl-hq] [281] #endif
[vo/opengl-hq] [282] #ifdef USE_CONV_GAMMA
[vo/opengl-hq] [283] // Post-colormatrix converted gamma correction (eg. for MP_IMGFLAG_XYZ)
[vo/opengl-hq] [284] color = pow(color, vec3(conv_gamma));
[vo/opengl-hq] [285] #endif
[vo/opengl-hq] [286] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [287] // Conversion from C'rcY'cC'bc to R'Y'cB' via the BT.2020 CL system:
[vo/opengl-hq] [288] // C'bc = (B'-Y'c) / 1.9404 | C'bc <= 0
[vo/opengl-hq] [289] // = (B'-Y'c) / 1.5816 | C'bc > 0
[vo/opengl-hq] [290] //
[vo/opengl-hq] [291] // C'rc = (R'-Y'c) / 1.7184 | C'rc <= 0
[vo/opengl-hq] [292] // = (R'-Y'c) / 0.9936 | C'rc > 0
[vo/opengl-hq] [293] //
[vo/opengl-hq] [294] // as per the BT.2020 specification, table 4. This is a non-linear
[vo/opengl-hq] [295] // transformation because (constant) luminance receives non-equal
[vo/opengl-hq] [296] // contributions from the three different channels.
[vo/opengl-hq] [297] color.br = color.br * mix(vec2(1.5816, 0.9936), vec2(1.9404, 1.7184),
[vo/opengl-hq] [298] lessThanEqual(color.br, vec2(0))) + color.gg;
[vo/opengl-hq] [299] #endif
[vo/opengl-hq] [300] #ifdef USE_COLORMATRIX
[vo/opengl-hq] [301] // CONST_LUMA involves numbers outside the [0,1] range so we make sure
[vo/opengl-hq] [302] // to clip here, after the (possible) USE_CONST_LUMA calculations are done,
[vo/opengl-hq] [303] // instead of immediately after the colormatrix conversion.
[vo/opengl-hq] [304] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [305] #endif
[vo/opengl-hq] [306] // If we are scaling in linear light (SRGB or 3DLUT option enabled), we
[vo/opengl-hq] [307] // expand our source colors before scaling. This shader currently just
[vo/opengl-hq] [308] // assumes everything uses the BT.2020 12-bit gamma function, since the
[vo/opengl-hq] [309] // difference between this and BT.601, BT.709 and BT.2020 10-bit is well
[vo/opengl-hq] [310] // below the rounding error threshold for both 8-bit and even 10-bit
[vo/opengl-hq] [311] // content. It only makes a difference for 12-bit sources, so it should be
[vo/opengl-hq] [312] // fine to use here.
[vo/opengl-hq] [313] #ifdef USE_LINEAR_LIGHT_APPROX
[vo/opengl-hq] [314] // We differentiate between approximate BT.2020 (gamma 1.95) ...
[vo/opengl-hq] [315] color = pow(color, vec3(1.95));
[vo/opengl-hq] [316] #endif
[vo/opengl-hq] [317] #ifdef USE_LINEAR_LIGHT_BT2020
[vo/opengl-hq] [318] // ... and actual BT.2020 (two-part function)
[vo/opengl-hq] [319] color = bt2020_expand(color);
[vo/opengl-hq] [320] #endif
[vo/opengl-hq] [321] #ifdef USE_LINEAR_LIGHT_SRGB
[vo/opengl-hq] [322] // This is not needed for most sRGB content since we can use GL_SRGB to
[vo/opengl-hq] [323] // directly sample RGB texture in linear light, but for things which are
[vo/opengl-hq] [324] // also sRGB but in a different format (such as JPEG's YUV), we need
[vo/opengl-hq] [325] // to convert to linear light manually.
[vo/opengl-hq] [326] color = srgb_expand(color);
[vo/opengl-hq] [327] #endif
[vo/opengl-hq] [328] #ifdef USE_CONST_LUMA
[vo/opengl-hq] [329] // Calculate the green channel from the expanded RYcB
[vo/opengl-hq] [330] // The BT.2020 specification says Yc = 0.2627*R + 0.6780*G + 0.0593*B
[vo/opengl-hq] [331] color.g = (color.g - 0.2627*color.r - 0.0593*color.b)/0.6780;
[vo/opengl-hq] [332] #endif
[vo/opengl-hq] [333] // Image upscaling happens roughly here
[vo/opengl-hq] [334] #ifdef USE_GAMMA_POW
[vo/opengl-hq] [335] // User-defined gamma correction factor (via the gamma sub-option)
[vo/opengl-hq] [336] color = pow(color, inv_gamma);
[vo/opengl-hq] [337] #endif
[vo/opengl-hq] [338] #ifdef USE_CMS_MATRIX
[vo/opengl-hq] [339] // Convert to the right target gamut first (to BT.709 for sRGB,
[vo/opengl-hq] [340] // and to BT.2020 for 3DLUT).
[vo/opengl-hq] [341] color = cms_matrix * color;
[vo/opengl-hq] [342]
[vo/opengl-hq] [343] // Clamp to the target gamut. This clamp is needed because the gamma
[vo/opengl-hq] [344] // functions are not well-defined outside this range, which is related to
[vo/opengl-hq] [345] // the fact that they're not representable on the target device.
[vo/opengl-hq] [346] // TODO: Desaturate colorimetrically; this happens automatically for
[vo/opengl-hq] [347] // 3dlut targets but not for sRGB mode. Not sure if this is a requirement.
[vo/opengl-hq] [348] color = clamp(color, 0.0, 1.0);
[vo/opengl-hq] [349] #endif
[vo/opengl-hq] [350] #ifdef USE_3DLUT
[vo/opengl-hq] [351] // For the 3DLUT we are arbitrarily using 2.4 as input gamma to reduce
[vo/opengl-hq] [352] // the amount of rounding errors, so we pull up to that space first and
[vo/opengl-hq] [353] // then pass it through the 3D texture.
[vo/opengl-hq] [354] color = pow(color, vec3(1.0/2.4));
[vo/opengl-hq] [355] color = texture3D(lut_3d, color).rgb;
[vo/opengl-hq] [356] #endif
[vo/opengl-hq] [357] #ifdef USE_SRGB
[vo/opengl-hq] [358] // Adapt and compand from the linear BT2020 source to the sRGB output
[vo/opengl-hq] [359] color = srgb_compand(color);
[vo/opengl-hq] [360] #endif
[vo/opengl-hq] [361] // If none of these options took care of companding again, we have to do
[vo/opengl-hq] [362] // it manually here for the previously-expanded channels. This again
[vo/opengl-hq] [363] // comes in two flavours, one for the approximate gamma system and one
[vo/opengl-hq] [364] // for the actual gamma system.
[vo/opengl-hq] [365] #ifdef USE_CONST_LUMA_INV_APPROX
[vo/opengl-hq] [366] color = pow(color, vec3(1.0/1.95));
[vo/opengl-hq] [367] #endif
[vo/opengl-hq] [368] #ifdef USE_CONST_LUMA_INV_BT2020
[vo/opengl-hq] [369] color = bt2020_compand(color);
[vo/opengl-hq] [370] #endif
[vo/opengl-hq] [371] #ifdef USE_DITHER
[vo/opengl-hq] [372] vec2 dither_pos = gl_FragCoord.xy / dither_size;
[vo/opengl-hq] [373] #ifdef USE_TEMPORAL_DITHER
[vo/opengl-hq] [374] dither_pos = dither_trafo * dither_pos;
[vo/opengl-hq] [375] #endif
[vo/opengl-hq] [376] float dither_value = texture(dither, dither_pos).r;
[vo/opengl-hq] [377] color = floor(color * dither_quantization + dither_value + dither_center) /
[vo/opengl-hq] [378] dither_quantization;
[vo/opengl-hq] [379] #endif
[vo/opengl-hq] [380] #ifdef USE_ALPHA_BLEND
[vo/opengl-hq] [381] color = color * alpha;
[vo/opengl-hq] [382] #endif
[vo/opengl-hq] [383] #ifdef USE_ALPHA
[vo/opengl-hq] [384] out_color = vec4(color, alpha);
[vo/opengl-hq] [385] #else
[vo/opengl-hq] [386] out_color = vec4(color, 1.0);
[vo/opengl-hq] [387] #endif
[vo/opengl-hq] [388] }
[vo/opengl-hq] fragment shader compile log (status=0):
[vo/opengl-hq] ERROR: 0:93: 'varying' : Illegal use of reserved word
[vo/opengl-hq]
[vo/opengl-hq] shader link log (status=0):
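Because neither shader program linked (both compile logs report status=0), every attempt to set uniforms on them or draw with them is invalid; the runs of "update_uniforms(): OpenGL error INVALID_OPERATION." and "after rendering: OpenGL error INVALID_OPERATION." below are presumably the same failure being hit on each frame rather than new, independent problems.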
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] Create FBO: 1280x720
[vo/opengl-hq] Create FBO: 1280x768
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] Resize: 1280x720
[vo/opengl-hq] aspect(0) fitin: 1280x720 monitor_par: 1.00
[vo/opengl-hq] aspect(1) wh: 1280x720 (org: 1280x720)
[vo/opengl-hq] aspect(2) wh: 1280x720 (org: 1280x720)
[vo/opengl-hq] Window size: 1280x720
[vo/opengl-hq] Video source: 1280x720 (1280x720)
[vo/opengl-hq] Video display: (0, 0) 1280x720 -> (0, 0) 1280x720
[vo/opengl-hq] Video scale: 1.000000/1.000000
[vo/opengl-hq] OSD borders: l=0 t=0 r=0 b=0
[vo/opengl-hq] Video borders: l=0 t=0 r=0 b=0
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] update_uniforms(): OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] phase: 369
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] phase: 530
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] phase: 761
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] phase: 957
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] after rendering: OpenGL error INVALID_OPERATION.
[vo/opengl-hq] phase: 543
[vo/opengl-hq] phase: 771
[vo/opengl-hq] phase: 446
[vo/opengl-hq] phase: 823
[vo/opengl-hq] phase: 125
[vo/opengl-hq] phase: 836
[vo/opengl-hq] phase: 400
[vo/opengl-hq] phase: 749
[vo/opengl-hq] phase: 473
[vo/opengl-hq] phase: 547
[vo/opengl-hq] phase: 28
[vo/opengl-hq] phase: 865
[vo/opengl-hq] phase: 166
[vo/opengl-hq] phase: 578
[vo/opengl-hq] phase: 459
[vo/opengl-hq] phase: 756
[vo/opengl-hq] phase: 572
[vo/opengl-hq] phase: 622
[vo/opengl-hq] phase: 375
Exiting... (Quit)
[vo/opengl-hq/win32] uninit