Wednesday, January 12, 2011

Deferred Rendering: Reconstructing Position from Depth

Here is a short snippet showing how to reconstruct view- and world-space positions from depth in a deferred renderer.

The technique was originally described by fpuig; this is a cleaned-up and bug-fixed (hopefully..) version.

What's nice about this code is that it works both for light volume geometry (e.g. spheres for omni lights) and fullscreen quads:

1) In your GBuffer pass store positionInViewSpace.z in the depth rendertarget.
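For concreteness, here is a minimal sketch of such a GBuffer pixel-shader (the struct and names are illustrative, not from the original code; the vertex shader is assumed to pass the view-space position through a TEXCOORD):
    struct GBufferPSIn
    {
        float3 positionVS : TEXCOORD0; // view-space position from the vertex shader
    };

    float4 GBufferDepthPS(GBufferPSIn IN) : COLOR0
    {
        // Store linear view-space Z, e.g. into an R32F rendertarget.
        return float4(IN.positionVS.z, 0, 0, 0);
    }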

2) The lighting vertex-shader calculates the eye-to-pixel rays (in view- and world-space):
    OUT.position = mul(matrixWVP, float4(IN.position, 1));
    OUT.vPos = ConvertToVPos(OUT.position); // sm2 has no VPOS..
    // TanHalfFOV = tan(0.5 * verticalFov), ViewAspect = width / height.
    // The ray's z component carries clip-space w for the per-pixel divide.
    OUT.vEyeRayVS = float3(OUT.position.x*TanHalfFOV*ViewAspect,
                           OUT.position.y*TanHalfFOV, OUT.position.w);
    // Rotate the ray into world-space; directions use only the 3x3 part.
    OUT.vEyeRay = mul((float3x3)matrixViewInv, OUT.vEyeRayVS);
In shader model 2 we don't have the VPOS interpolator, so we can use the following helper instead (RTWidth and RTHeight are the dimensions of your screen/rendertarget):
    float4 ConvertToVPos(float4 p)
    {
        // Map clip-space xy into texture space (still scaled by w; tex2Dproj
        // performs the divide later) and add the D3D9 half-texel offset.
        return float4(0.5*(float2(p.x+p.w, p.w-p.y) +
                      p.w*float2(1.0f/RTWidth, 1.0f/RTHeight)), p.zw);
    }
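On shader model 3 you can skip this helper and take the pixel position from the VPOS input semantic instead, then sample with plain tex2D rather than tex2Dproj. A sketch, assuming the usual D3D9 convention that VPOS delivers integer pixel coordinates:
    // ps_3_0: declare "float2 vPos : VPOS" as a pixel-shader input, then:
    float2 TexCoordFromVPos(float2 vPos)
    {
        // Offset to the texel center and normalize to [0,1].
        return (vPos + 0.5) * float2(1.0f/RTWidth, 1.0f/RTHeight);
    }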
3) Then, in the pixel-shader, one can compute the view-space and world-space position per pixel:
    // tex2Dproj performs the divide by IN.vPos.w.
    float depth = tex2Dproj(GBufferDepthSampler, IN.vPos).r; // linear view-space Z
    float w = IN.vEyeRayVS.z;      // clip-space w stored in the ray's z
    IN.vEyeRayVS.xyz /= w;         // perspective divide; ray now has z == 1
    IN.vEyeRay.xyz /= w;           // the world-space ray needs the same divide
    float3 pixelPosVS = IN.vEyeRayVS.xyz * depth;
    float3 pixelPosWS = IN.vEyeRay.xyz * depth + cameraPosition.xyz;
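From here the lighting proceeds as usual; for example, a simple point-light falloff from the reconstructed world-space position (lightPositionWS and lightRadius are assumed uniforms, not part of the original snippet):
    // Linear point-light attenuation (illustrative only).
    float3 toLight = lightPositionWS - pixelPosWS;
    float atten = saturate(1.0 - length(toLight) / lightRadius);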

This can be optimized further, of course. For a fullscreen quad drawn directly in clip space, for example, w is constant (typically 1), so the per-pixel divide can be dropped or moved into the vertex shader.