Tuesday's Keynote and the VIVE Pro

Tuesday, March 27 

Opening Keynote

Jen-Hsun Huang, CEO & Co-Founder, NVIDIA

The Keynote hall was beyond packed and the press had to line up at 8 AM to be assured of a seat by the 9 AM start time.  This time, there were no tables for the press, and our notebooks became laptops.  I found a seat in the front row and to the left of the stage.   Here are some of the important highlights from Jensen’s 2-1/2 hour Keynote which set the stage for the rest of the conference.

Jensen began by pointing out that photorealistic graphics use ray tracing (RT), which requires the kind of extreme computation employed by the film industry.  RT in real time has been the holy grail for GPU scientists for the past 40 years.  An impressive Unreal Engine 4 RTX demo was shown running in real time on a single $68,000 DGX using 4 Volta GPUs.  It was noted that without deep learning algorithms, RTX in real time would be impossible.

Media and entertainment professionals can now see and interact with their creations with accurate lighting and shadows, and do complex renders up to 10 times faster than with a CPU alone, by using NVIDIA RTX combined with the new Quadro GV100 GPU.  NVIDIA RTX technology was introduced last week at the annual Game Developers Conference, and it is supported by 25 of the world's leading professional design and creative applications with a combined user base of more than 25 million customers.

The Quadro GV100 GPU, with 32 GB of memory (scalable to 64 GB with multiple Quadro GPUs using NVIDIA NVLink interconnect technology), is the highest-performance platform available for RTX applications. Based on NVIDIA's Volta GPU architecture, the GV100 provides 7.4 teraflops of double-precision, 14.8 teraflops of single-precision, and 118.5 teraflops of deep learning performance. And the NVIDIA OptiX AI denoiser built into NVIDIA RTX delivers almost 100x the performance of CPUs for real-time, noise-free rendering.  Check out the RTX video under 'Wednesday' which shows this very quick real-time denoising.

NVIDIA unveiled a series of advances that boost performance on deep learning workloads by a factor of ten compared with the generation introduced just six months ago.  These include a doubling of the Tesla V100's memory and a new GPU interconnect fabric called NVSwitch, which enables up to 16 Tesla V100 GPUs to communicate simultaneously at a record speed of 2.4 terabytes per second using 512 GB of interconnected memory.

Every GPU can communicate with every other GPU at 20x the bandwidth of PCIe 3.0 using a non-blocking switch – not a network – with low latency.  Excellent thermal management keeps the 10 kW of GPUs and electronics cool in a chassis weighing 350 pounds – all for $399,000 – and multiple systems can be connected.

Jensen pointed out that a comparable CPU render farm costs 5 times more and uses 7 times the power while taking 7 times the space.

Jensen then moved on to the medical field, noting that there are currently 3 million medical instruments in use, with only 100,000 new ones added each year.  Cloud data centers using Project CLARA can take advantage of a medical imaging supercomputer in a remote data center, so any hospital can upgrade its instruments virtually.

For example, a medical team can stream its black-and-white 2D ultrasound image into the CLARA supercomputer to make it 3D, and then improve it further using deep learning to accurately extrapolate a color, 3D segmented image that provides far more information than the original.

Jensen then moved on to automotive to show that deep learning is essential to that trillion-dollar-a-year mega industry.  NVIDIA believes that self-driving will ultimately benefit humankind and save lives.  But creating a completely autonomous vehicle is one of the most difficult challenges ever, and NVIDIA believes it can solve it using the next generation of its Drive platform, called Orin, which combines 2 Drive Pegasus in 1 SoC.

Humans drive 10 trillion miles a year.  But a fleet of 20 autonomous test cars drives only 1,000,000 miles in a year, which is not enough to train these cars.  So enter Project Holodeck, which allows autonomous cars' deep learning to be trained in a simulator.

The Holodeck allows for three layers of inception: a driver in VR can pilot a real car in the real world from the Holodeck using Remote Drive and Remote View.  The same technology can even reach dangerous places using a robot, with the operator teleporting into the robot via VR to control it as easily as driving a car.

Jensen's keynote speech was inspiring, although it ran about a half hour over schedule.

Press Q&A

After Jensen's keynote ended, the press moved to the Q&A where lunch was ready.  Jensen was there for Q&A at a GTC for the first time since Fermi, and he fielded press questions about Uber's fatal self-driving accident the week before. Jensen revealed that NVIDIA was going to be cautious and suspend its real-world autonomous testing until the results of the investigation are revealed, even though Uber wasn't using NVIDIA's platform.

The rest of the Q&A was unfortunately limited to one or two questions each, and the press were directed to ask their NVIDIA contacts.  Of course, we did not expect any answers about upcoming GeForce products, and we did not ask.

Next we headed off to a private meeting with VIVE.


VIVE and the VIVE Pro

The blue front part of the HMD slides forward to accommodate eyeglass wearers

We got to spend more time in VR with the VIVE Pro than we had at CES at NVIDIA's private exhibit.  We noted that the new HMD is significantly lighter and more comfortable than the current HMD, and best of all for us, it is adjustable to easily fit over our regular eyeglasses.  We are uncomfortable wearing contact lenses just for VR, and we had to buy a pediatric pair of glasses just to wear our current HMD.

We think that the best thing about the VIVE Pro is the increased resolution, which makes reading text no longer a difficult task.  This is great for RPG gamers, but especially useful for developers and those who share text in virtual reality for their work.  That is why the VIVE Pro is a prosumer device.  It will naturally be adopted by gamers at the cutting edge, but it is arguably more important and useful for professionals, with applications for developers as well as for the medical field, government, and aviation industries that use VR for training and simulations.

We got to check out VIVE’s version of a holodeck using a supercar that could be viewed from all angles including from inside the engine.  It was very impressive and it leveraged the higher resolution of the Pro to demonstrate what could be accomplished today in VR using consumer hardware.

We look forward to reviewing the VIVE Pro and we will benchmark VR games using its more demanding resolution.  We just purchased and downloaded Skyrim VR, and we look forward to playing it and benchmarking it today for an upcoming 20-game VR showdown between AMD’s and NVIDIA’s top GPUs.

Other Exhibits

NVIDIA had several large sections in the main exhibit hall, including one for the BFGD 4K HDR 65″ gaming displays, but instead we concentrated on VR and some other unusual cutting-edge exhibits.  Below we see a VR HMD with 5K resolution and a 170-degree field of view – however, unlike the VIVE Pro, HMDs like these generally require workstations to power them, and far more graphics power than consumer or prosumer graphics card setups can currently provide.

Wireless is absolutely crucial for mass VR adoption, and we already see several solutions for the VIVE consumer and Pro VR HMDs.

Many of NVIDIA's major partners in professional GPU computing were represented with their servers, including Dell, HP, IBM, and Super Micro, major sponsors of the GTC.  And this year, there was a jam-packed VR Village where you had to make an appointment or miss out!  We made an appointment for the next day for the Ready Player One Escape Room demo, and we had no idea what to expect.

Our hotel room had a rare private balcony and the weather cooperated as friends we hadn’t seen in a long time stopped by to chat and catch up on news and rumor.