Screen-Space Reflections Explained

Some of the most difficult algorithms for graphics developers to implement correctly are the family of techniques used for screen-space reflections (SSR) in the pixel shader of a rasterized renderer. They are difficult because they require programmers to navigate the peculiarities of the transformations and coordinate systems inherent to their rendering API, as well as to have a solid foundation in linear algebra.

The code I'll be describing was written specifically in OpenGL, so let me clarify some of the terms I'll be using. "Screen space" refers to a coordinate set with x/y values in the range [0,1] and a z value taken from the depth buffer, which in OpenGL's default depth range is also [0,1] (NDC z in [-1,1] remapped by glDepthRange). "View space" refers to coordinates transformed so that the camera sits at the origin (the 0 vector). To convert between the two, a standard perspective projection matrix is used, formatted in the style that OpenGL uses.
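As a concrete example, a view space to screen space conversion under these conventions might look like the sketch below. The helper name matches the ViewToScreen() call in the listing further down; the projectionMatrix uniform name is an assumption, not taken from my actual code.

    uniform mat4 projectionMatrix; // standard OpenGL perspective projection (assumed name)

    // Project a view space position and remap it to screen space:
    // x/y in [0,1] and z matching the default [0,1] depth range.
    vec3 ViewToScreen(vec3 viewPos)
    {
        vec4 clip = projectionMatrix * vec4(viewPos, 1.0);
        vec3 ndc = clip.xyz / clip.w; // perspective divide: NDC in [-1,1]
        return ndc * 0.5 + 0.5;       // remap all components to [0,1]
    }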

The simplest SSR algorithm starts with the view space calculation of the reflection vector (R) from the view (V) and normal (N) vectors using the formula R = V - 2.0 * dot(N, V) * N, where V = fragPosition - cameraPosition (note that the camera position in view space is the origin, so this simplifies to V = fragPosition). Recall that the semantics of creating a vector from point A to point B are vec3(B - A). The math for this reflection calculation is handled by the built-in GLSL function reflect(). The reflection vector is then used to march a ray from each pixel's origin until the ray's depth value matches the one found at its position in the depth buffer. It's easier to trace reflection rays in view space rather than world space because, in view space, the camera is at the origin and all world positions are transformed to positions relative to it. If you were to calculate reflections in world space instead, ray origins far from the world space origin could cause serious problems due to floating-point imprecision.
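In GLSL this setup takes only a few lines. The sketch below assumes ViewPositionFromDepth() and getViewNormal() are helpers that fetch the fragment's view space position and normal (the same helpers appear in the full listing further down):

    vec3 fragPos = ViewPositionFromDepth(uv).xyz; // view space position of this pixel
    vec3 V = normalize(fragPos);                  // camera is at the origin, so V = fragPos - vec3(0.0)
    vec3 N = getViewNormal(uv).xyz;               // view space normal from the g-buffer
    vec3 R = normalize(reflect(V, N));            // equivalent to V - 2.0 * dot(N, V) * N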

To implement SSR, it is necessary to render data buffers for depth, view space normals, and the previous frame's color. In my case, I packed the x and y values of the normalized view space normals into another data buffer and reconstructed the z value in situ by solving for z in the unit-length constraint x^2 + y^2 + z^2 = 1. It is essential to pre-calculate the view space normals in the geometry buffer pass, because normals cannot easily be transformed to view space after the fact. This is due to how normals are transformed between spaces: going from object space, they must be multiplied by the inverse transpose of the model (or model-view) matrix, and it is typically not possible to access the model matrix for every pixel of the normal buffer after the g-buffer pass.
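A sketch of that z reconstruction is below, assuming only the x and y components of a unit-length view space normal were stored. Since the camera looks down -z in view space, surfaces visible in the g-buffer have normals with a non-negative z component, so the positive root can be taken (the function name is illustrative):

    // Recover z from x^2 + y^2 + z^2 = 1. The max() guards against small
    // negative values introduced by interpolation or packing error.
    vec3 unpackViewNormal(vec2 nxy)
    {
        float z = sqrt(max(0.0, 1.0 - dot(nxy, nxy)));
        return vec3(nxy, z);
    }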

The transformation for a given normal N from object space to view space in OpenGL is mat3(transpose(inverse(viewMatrix * modelMatrix))) * N. For those uncertain of the reasoning behind using the inverse transpose of the matrix, it is worth looking at the proof. Essentially, using the inverse transpose of the transform preserves the property of the normal-tangent relationship where dot(N, T) = 0. Normals cannot be converted to view space simply by multiplying them by the model-view matrix: under non-uniform scaling, the normal-tangent relationship will not be preserved and the basis will no longer be orthonormal.
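For reference, a g-buffer vertex shader that writes view space normals this way might look like the following sketch (the uniform and attribute names are illustrative, not taken from my actual code):

    #version 330 core

    uniform mat4 modelMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 projectionMatrix;

    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;

    out vec3 vViewNormal;

    void main()
    {
        mat4 modelView = viewMatrix * modelMatrix;
        // The inverse transpose keeps dot(N, T) = 0 even under non-uniform scale.
        vViewNormal = normalize(mat3(transpose(inverse(modelView))) * aNormal);
        gl_Position = projectionMatrix * modelView * vec4(aPosition, 1.0);
    }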

For my SSR implementation I chose to use historical data from the previous frame as the reflections' base color. Although this inherently produces inaccurate reflections, since objects may have moved between frames, in practice it is unlikely to be an issue for most renderers. It also means the lighting in the reflections stays roughly consistent with the current frame at no extra computational cost. Ideally, however, the base color of reflections would be composed of the current lit frame's data.

Vanilla ray marching code for SSR can be relatively computationally expensive compared to the quality of the output. However, there are a few essential optimizations that can improve results in terms of quality or performance. One is to perform a search between the previous and current ray steps whenever the current ray depth exceeds the depth buffer value; use your favorite search algorithm (a short binary search works well, as sketched below). Calculation time can also be reduced by bailing out early whenever the normal at a given hit point along the ray is oriented in the same direction as the camera-to-pixel direction V, i.e. where dot(N, V) > 0.
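The refinement might look like the sketch below: a binary search between the last position in front of the surface and the first position behind it, run once the coarse march detects an intersection. The field and helper names follow the full listing below; refinementSteps is an assumed constant.

    vec3 lo = ray.prevPos; // last position in front of the surface
    vec3 hi = ray.currPos; // first position behind the surface
    for (int i = 0; i < refinementSteps; ++i)
    {
        vec3 mid = 0.5 * (lo + hi);
        if (mid.z - getDepth(gDepth, mid.xy) >= 0.0)
            hi = mid; // still behind the surface: pull the hit point back
        else
            lo = mid; // still in front of the surface: push it forward
    }
    out_col = texture(sourceTex, hi.xy);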

Compositing the results of other reflection techniques with the results from SSR can be a somewhat arduous process. Mismatches in the look and accuracy of the various methods may require hand-tweaking values to reconcile, and not a lot can be done a priori to avoid this.

Unblurred SSR output

Example GLSL SSR code:

    // 'Ray' is a user-defined struct; ViewPositionFromDepth(), getViewNormal(),
    // ViewToScreen(), getDepth(), minVec3(), and maxVec3() are helpers defined
    // elsewhere in the shader.
    Ray ray;
    ray.steps = 0;

    // Get view space params.
    ray.o = ViewPositionFromDepth(uv).xyz;
    vec3 V = normalize(ray.o);
    ray.viewNormal = getViewNormal(uv).xyz;
    ray.d = normalize(reflect(V, ray.viewNormal));

    // Screen space params. The screen direction vector is calculated from projected view space values.
    vec3 view_first_step = ray.o + ray.d;
    ray.o_screen = vec3(uv, getDepth(gDepth, uv));
    vec3 screen_first_step = ViewToScreen(view_first_step);
    vec3 stepDist = screen_first_step - ray.o_screen;
    ray.d_screen = SSRinitialStepAmount * normalize(stepDist);

    ray.prevPos = ray.o_screen;
    ray.currPos = ray.prevPos + ray.d_screen;

    vec4 out_col = vec4(0.0);

    // Ray march in screen space.
    while(ray.steps < maxSteps)
    {
        // End early if offscreen.
        if(minVec3(ray.currPos) < 0.0 || maxVec3(ray.currPos) > 1.0)
            break;

        // Check for a ray hit: the ray depth has passed the depth buffer value,
        // but by less than one step length.
        float diff = ray.currPos.z - getDepth(gDepth, ray.currPos.xy);
        if(diff >= 0.0 && diff < length(ray.d_screen))
        {
            // Do refinement here as necessary.
            out_col = texture(sourceTex, ray.currPos.xy);
            break;
        }

        // Iterate ray forward.
        ray.prevPos = ray.currPos;
        ray.currPos = ray.prevPos + ray.d_screen;
        ray.steps++;
    }

    gl_FragColor = out_col;


Issues to solve with SSR:

- Using the previous frame as the source image for reflections creates a jitter in the SSR as the camera moves. The issue is most visible when the framerate is low. To fix it, reflections and opaque pixels would each need to be lit separately and then composited afterwards.

- Results are noisy since miss rays are interspersed among ray hits. SSR is only an approximation of reflection. Acceptable, not accurate, results are the goal here.

- In my implementation, depending upon the pitch angle between the camera and hit pixel, reflections had unusual warping or other inaccuracies. This is likely due to a lack of precision in my projection matrix, and is an issue I'm still working on resolving.

Steep view angle reflection warping

Improving results:
- In order to composite the SSR results with other reflection methods, I only applied SSR to pixels which register ray hits. Since the alpha channel of the SSR result is a binary hit/miss value, it is difficult to blur this texture and have the results look good: the alpha channel cannot be blended or it will no longer function properly as a boolean. Instead, I generated a second version of the initial output blurred with a large-kernel Gaussian filter, and interpolated between the two linearly with the GLSL built-in mix() function, using glossiness as the blend factor (see the sketch after this list).

- I also downsampled the SSR buffer by 2x to cut the number of pixels to be traced in half.
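A sketch of that composite step is below, assuming illustrative names for the sharp and blurred SSR textures, the per-pixel glossiness, and the pre-reflection scene color (ssrTex, ssrBlurredTex, glossiness, and baseColor are not from my actual code):

    vec4 sharpSSR   = texture(ssrTex, uv);
    vec4 blurredSSR = texture(ssrBlurredTex, uv);

    // Glossier surfaces keep the sharp result; rougher ones use the blurred copy.
    vec3 reflection = mix(blurredSSR.rgb, sharpSSR.rgb, glossiness);

    // The alpha channel is the binary hit/miss mask and is never blurred or
    // blended, so it still gates where SSR is applied at all.
    vec3 finalColor = mix(baseColor, reflection, sharpSSR.a);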

Things I Tried that Didn't Help:

- Multi-sampling rays per-pixel and averaging results. The computational cost isn't worth it for secondary rays.

- Applying a multipass bilateral filter weighted according to depth buffer value. This tended to make the results look too sharp at edges in a surreal way, as if the reflections were outlined.

- Applying morphological dilation to the results to fill missed rays.


References:

McGuire, Morgan, and Michael Mara. "Efficient GPU Screen-Space Ray Tracing". Journal of Computer Graphics Techniques, Vol. 3, No. 4, 2014. http://jcgt.org/published/0003/04/04/paper.pdf

Wronski, Bart. "GDC follow-up: Screenspace reflections filtering and upsampling". March 23, 2014. https://bartwronski.com/2014/03/23/gdc-follow-up-screenspace-reflections-filtering-and-up-sampling/
