Real depth in OpenGL / GLSL

http://olivers.posterous.com/linear-depth-in-glsl-for-real

So, many places will give you clues about how to get linear depth from the OpenGL depth buffer, how to visualise it, and so on. This, however, is what I believe to be the definitive answer:

This link http://www.songho.ca/opengl/gl_projectionmatrix.html gives a good run-down of the projection matrix, and the link between eye-space Z (z_e below) and normalised device coordinates (NDC) Z (z_n below). From there, we have

A   = -(zFar + zNear) / (zFar - zNear);
B   = -2*zFar*zNear / (zFar - zNear);
z_n = -(A*z_e + B) / z_e; // z_n in [-1, 1]

Note that the value stored in the depth buffer is actually in the range [0, 1], so the depth buffer value z_b is:

z_b = 0.5*z_n + 0.5; // z_b in [0, 1]
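
As a quick sanity check: a point on the near plane (z_e = -zNear) gives z_n = -1 and z_b = 0, while a point on the far plane (z_e = -zFar) gives z_n = 1 and z_b = 1.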

If we have rendered this depth buffer to a texture and wish to access the real depth in a later shader, we must undo the non-linear mapping above:

z_e = 2*zFar*zNear / (zFar + zNear - (zFar - zNear)*(2*z_b -1));
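
To see where this comes from, solve the z_n equation above for z_e and substitute z_n = 2*z_b - 1:

z_n*z_e = -(A*z_e + B)
z_e     = -B / (z_n + A)
        = 2*zFar*zNear / (z_n*(zFar - zNear) - (zFar + zNear))

Eye-space z is negative in front of the camera, so flipping the sign of the denominator gives the positive distance along the view direction, which is the formula above; it is the same quantity as the depth = -(gl_ModelViewMatrix * gl_Vertex).z value used in the shaders below.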

This is similar to the example given here: http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/, except that their result is divided through by zFar, which obviously gives a better value for visualisation. Their example also uses the depth buffer value z_b directly as z_n, without shifting it back to [-1, 1], which, as you can see above, is wrong.

If you want to verify the equations for z_n and z_b, you can try this in your shader on the render-to-texture (RTT) pass for your scene:

// == RTT vert shader ====================================================
varying float depth;
void main(void)
{
    gl_Position = ftransform();
    depth = -(gl_ModelViewMatrix * gl_Vertex).z; // positive eye-space depth
}

// == RTT frag shader ====================================================
varying float depth;
void main(void)
{
    // A and B sit in the third row of the projection matrix;
    // gl_ProjectionMatrix[c].z reads row 2 of column c (GLSL matrices are indexed by column)
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    float zNear = - B / (1.0 - A);
    float zFar  =   B / (1.0 + A);
//    float depthFF = 0.5*(-A*depth + B) / depth + 0.5;
//    float depthFF = gl_FragCoord.z;
//    gl_FragDepth  = depthFF;
}
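
The zNear and zFar lines work because the definitions of A and B can be inverted directly:

1 - A = 2*zFar / (zFar - zNear)
1 + A = -2*zNear / (zFar - zNear)

so -B / (1 - A) = zNear and B / (1 + A) = zFar.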

If you play with uncommenting the lines in the frag shader, you can try writing to gl_FragDepth manually, using either gl_FragCoord.z, which is supposed to be identical to the fixed-functionality depth, or the value calculated from depth, A, and B. In all cases the results should be identical.

If you want to access the depth buffer in a later shader and recover the true depths, you then need to pass the zNear and zFar values from the first camera’s projection matrix as uniforms into the shader:

// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    float z_n = 2.0 * z_b - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    gl_FragColor = vec4(vec3(z_e / zFar), 1.0); // use z_e as needed; here, visualise it
}
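
For completeness, here is a minimal sketch of the matching CPU side in C, recovering zNear and zFar from the current projection matrix via the same A/B relations and uploading them as uniforms. It assumes an active GL 2.x context; postProg stands in for whatever your linked post-process program handle is, and the depth texture is assumed to be bound to texture unit 0:

// == Host side (C sketch) ===============================================
GLfloat P[16];                            // column-major projection matrix
glGetFloatv(GL_PROJECTION_MATRIX, P);
GLfloat A = P[10];                        // row 2, column 2
GLfloat B = P[14];                        // row 2, column 3
GLfloat zNear = -B / (1.0f - A);
GLfloat zFar  =  B / (1.0f + A);

glUseProgram(postProg);                   // postProg: your post-process program
glUniform1f(glGetUniformLocation(postProg, "zNear"), zNear);
glUniform1f(glGetUniformLocation(postProg, "zFar"),  zFar);
glUniform1i(glGetUniformLocation(postProg, "depthBuffTex"), 0); // texture unit 0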

Alternatively, you can write your own depth value to a colour texture. If, like me, you don’t have floating-point textures at your disposal, you can pack the depth value into 24 bits of a regular RGBA / UNSIGNED_BYTE colour texture as follows:

// == RTT vert shader ====================================================
varying float depth;
void main(void)
{
    gl_Position = ftransform();
    depth = -(gl_ModelViewMatrix * gl_Vertex).z; // positive eye-space depth
}

// == RTT frag shader ====================================================
varying float depth;
void main(void)
{
    // Pack depthN into three 8-bit channels: z (B) holds the most significant
    // byte, y (G) the middle byte, x (R) the least significant byte
    const vec3 bitShift3 = vec3(65536.0, 256.0, 1.0);
    const vec3 bitMask3  = vec3(0.0, 1.0/256.0, 1.0/256.0);
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    float zNear = - B / (1.0 - A);
    float zFar  =   B / (1.0 + A);
    float depthN = (depth - zNear)/(zFar - zNear);  // scale to a value in [0, 1]
    vec3 depthNPack3 = fract(depthN*bitShift3);
    depthNPack3 -= depthNPack3.xxy*bitMask3; // remove bits already stored in the finer channels
    // gl_FragData[0] = colour rendering of your scene
    gl_FragData[1] = vec4(depthNPack3, 1.0); // alpha should equal 1.0 if GL_BLEND is enabled
}

// == Post-process frag shader ===========================================
uniform sampler2D myDepthTex;   // My packed depth texture
uniform sampler2D depthBuffTex; // Texture storing OpenGL depth buffer
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    const vec3 bitUnshift3 = vec3(1.0/65536.0, 1.0/256.0, 1.0);
    // Unpack the 24-bit RGB value back into a single float in [0, 1]
    float z_e_mine = dot(texture2D(myDepthTex, vTexCoord).xyz, bitUnshift3);
    z_e_mine = mix(zNear, zFar, z_e_mine); // scale from [0, 1] to [zNear, zFar]
    float z_e_ffun = texture2D(depthBuffTex, vTexCoord).x;
    z_e_ffun = 2.0 * z_e_ffun - 1.0;
    z_e_ffun = 2.0 * zNear * zFar / (zFar + zNear - z_e_ffun * (zFar - zNear));
    gl_FragColor = vec4(vec3(z_e_mine/zFar), 1.0); // divide by zFar to visualise
    // gl_FragColor = vec4(vec3(z_e_ffun/zFar), 1.0); // divide by zFar to visualise
}