
The Sim Side: NVIDIA SPS VR Performance in iRacing – Part II

The conclusion of our investigation into NVIDIA's Single Pass Stereo technology

Better living through speed.

Intro


Our first deep dive into simulation virtual reality (VR) performance attempted to answer the question: “What is Single Pass Stereo (SPS), and does it improve performance in iRacing?” Under the original test conditions, we concluded that it did not, at least not in any meaningful way. A short time later, additional research and reader feedback left us feeling that SPS had not been sufficiently explored.

So we’re back, this time with more cars, more track, and with RTX! Buckle up as we saturate our GPUs to the breaking point in Part II of our search for SPS VR performance in iRacing.

Porsche 911 GT3 Cup Car: iRacing

Understanding NVIDIA VRWorks Single Pass Stereo and Multi-View Rendering

To understand Single Pass Stereo (SPS), one must first understand a basic tenet of VR: everything is drawn twice. Unlike gaming on a monitor, where the scene is rendered from only a single point of view, VR demands both a left- and a right-eye image, each slightly different so as to present properly in spatial 3D. SPS (Pascal architecture) and Multi-View Rendering (MVR, Turing/Ampere architectures) are NVIDIA VRWorks features that reduce the geometric calculations per frame in VR from two to one. For additional information on NVIDIA VRWorks and how SPS functions, please see our original review.
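To make the “everything is drawn twice” point concrete, here is a minimal sketch (not iRacing or VRWorks code; the IPD value is an assumed typical figure) showing why two viewpoints exist per frame: each eye’s camera is the head position shifted by half the interpupillary distance.

```python
# Hypothetical illustration: in VR, each frame is rendered from two
# viewpoints. Each eye's camera sits at the head position offset by
# half the interpupillary distance (IPD) along the head's x axis.

IPD = 0.064  # metres; an assumed typical adult IPD, for illustration only

def eye_offset(head_x, eye):
    """Return the camera x position for the 'left' or 'right' eye."""
    half = IPD / 2.0
    return head_x - half if eye == "left" else head_x + half

# Without SPS, the scene geometry is transformed once per viewpoint:
head_x = 0.0
views = [eye_offset(head_x, eye) for eye in ("left", "right")]
print(views)  # prints: [-0.032, 0.032] -> two distinct vertex passes per frame
```

Two distinct viewpoints means the vertex (geometry) stage runs twice per frame; SPS collapses that to one pass, as described below.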


Source: NVIDIA VRWorks Single Pass Stereo
Source: NVIDIA VRWorks Features

The SPS Process

While conducting research for these articles, I had the pleasure of interviewing Ilias Kapouranis, a VR Software Engineer and specialist on VRWorks technical details. Since SPS/MVR only aids in accelerating GPU vertex (geometry) calculations, I asked him how or even if the CPU significantly factors into the SPS performance equation. He replied:

“… Shaders change the way of producing vertices from two separate computations per eye to one computation for both eyes.  From a very high level view, yes the geometry computation is halved but the developers have to greatly optimize their vertex shaders in order to actually achieve this theoretical peak.  Since SPS is not available in the fragment (or pixel) shaders, the work that is being done there is not affected OR halved.

Without SPS:

    1. Send the world data
    2. Send the left eye data
    3. Send how to process the left vertex data
    4. Send how to process the left fragment(pixel) data – Then submit the left eye to the HMD
    5. Send the right eye data
    6. Send how to process the right vertex data
    7. Send how to process the right fragment(pixel) data – Then submit the right eye to the HMD

With SPS:

    1. Send the world data
    2. Send the left eye data
    3. Send the right eye data
    4. Send how to process the vertex data
    5. Send how to process the left fragment(pixel) data – Then submit the left eye to the HMD
    6. Send how to process the right fragment(pixel) data – Then submit the right eye to the HMD

When SPS / MVR is enabled we don’t send “how to process the vertex data” twice.  This is translated into API calls. These calls have to leave the program, communicate with the graphics driver, then the graphics driver has to validate the API calls before forwarding them to the low level hardware driver.  This is quite a long chain of communication but it doesn’t have any effect in PCVR because modern desktop CPUs are exceedingly fast when compared to mobile chipsets…”

-Ilias Kapouranis 10/24/2020
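The saving Ilias describes can be sketched as a toy model. These are not real graphics API calls, just labels standing in for the submission sequences above, to show exactly which step SPS removes: the “how to process the vertex data” submission goes from two per frame to one.

```python
# Toy model of the two submission sequences (labels only, not real API
# calls). SPS removes the duplicate "vertex program" submission.

WITHOUT_SPS = [
    "world data",
    "left eye data", "left vertex program", "left fragment program",
    "right eye data", "right vertex program", "right fragment program",
]

WITH_SPS = [
    "world data",
    "left eye data", "right eye data",
    "vertex program",  # sent once, shared by both eyes
    "left fragment program", "right fragment program",
]

def vertex_submissions(calls):
    """Count how many vertex-program submissions a sequence contains."""
    return sum("vertex" in call for call in calls)

print(vertex_submissions(WITHOUT_SPS), vertex_submissions(WITH_SPS))  # prints: 2 1
```

Note that the fragment (pixel) submissions are unchanged in both sequences, which matches the point that SPS only accelerates the geometry stage, not pixel shading.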
