Nvidia’s GTC 2016 Wrap-Up
This is the sixth time that this editor has been privileged to attend Nvidia’s GTC (GPU Technology Conference), held last week, April 4-7. It did not disappoint, although GeForce gaming was not the focus of this event. This year saw about 5,500 attendees, more than double the attendance of the 2012 event. The San Jose Convention Center seemed almost too small a venue compared with even last year because of the large number of attendees and the jam-packed schedule.
Some big announcements were made this year, including the launch of the Pascal P100 on 16nm, detailed in Jensen’s keynote, along with updates on the progress of self-driving cars, an overall emphasis on Deep Learning, and a focus on Virtual Reality (VR). The usual progress reports on the rapid acceptance and adoption of CUDA and of Nvidia’s GRID were also featured during this conference.
The very first tech article that this editor wrote covered Nvision 08 – a combination LAN party and GPU computing technology conference that was open to the public. The following year, in 2009, Nvidia held the first GTC, a much smaller industry event held at the Fairmont Hotel, across the street from the San Jose Convention Center, and it introduced the Fermi architecture. This editor also attended GTC 2012, which introduced the Kepler architecture.
Two years ago we were in attendance at GTC 2014, where the big breakthrough in Deep Learning image recognition from raw pixels was “a cat”. Last year, at GTC 2015, we saw that computers can easily recognize “a bird on a branch” faster than a well-trained human, which demonstrated incredible progress in using the GPU for image detection. And this year, we saw computers creating works of art, and we saw a deep learning computer beat the world champion Go player, something that researchers had claimed would take decades more to accomplish.
Bearing in mind past GTCs, we cannot help but compare them to each other. We have had just over a week to think about GTC 2016, and this is our summary of it. This recap will be briefer than usual, as we were unable to attend as many sessions as at previous GTCs due to health reasons, and we left shortly after the Thursday keynote session. However, we were able to attend all of the keynote sessions and some of the sessions on Tuesday, Wednesday, and Thursday. We visited the exhibit hall on all three days, played the Eve: Valkyrie Oculus Rift VR demo, and caught up with some of our editor friends from years past. Every GTC has been about sharing, networking, and learning – everything related to GPU technology. And this year we were able to network with some programmers to possibly bring a new feature to BTR’s community.
As an invaluable resource, please check out Nvidia’s library of over 500 sessions, which were recorded and will be available for viewing in their entirety. We are only going to give our readers a small slice of the GTC and our own short, unique experience as a member of the press.
The GTC 2016 highlights for this editor included ongoing attempts to learn more about Nvidia’s future roadmap, as well as noting Nvidia’s progress in GPU computing over the past 8 or 9 years. The GTC is a combination trade show/networking/educational event attended by more than five thousand people, each of whom will have their own unique account of their time there, their own schedule, and their own reasons for attending. But all of them share a passion for GPU computing. This editor’s reasons for attending this year were the same as in previous years – an interest in GeForce and Tegra GPU technology, primarily for PC and now for Android gaming, and of course in deep learning.
As is customary, Nvidia used the GTC to showcase their new and developing technology even as they transition from the current Maxwell architecture on 28nm to the Pascal architecture on 16nm this year. We saw Nvidia transition two years ago from the Kepler generation of GPU processing power to Maxwell’s energy-saving yet performance-increasing architecture on the same 28nm node. Just as the Maxwell architecture is much more powerful as well as significantly more energy-efficient than Kepler, we will see a similar but even larger improvement in the move to Pascal. Nvidia intends to use Pascal to continue to revolutionize GPU and cloud computing, including for gaming and for virtual reality (VR).
Everything has certainly grown since the first GTC in 2009. Nvidia is again using the San Jose Convention Center for their show. Each year the GTC schedule becomes far more packed than the last, and this editor was forced to make several hard choices among sessions held at the same time.
First of all, it is impossible for anyone to attend more than a small portion of the 500-plus talks and sessions, which are devoted to showcasing how GPU technology is being applied to some of today’s most important computing challenges. As reflected in the keynote by Nvidia CEO and co-founder Jen-Hsun Huang (AKA Jensen), this is the year of deep learning – for cars, for voice and image recognition, and for a myriad of other important applications.
To facilitate ongoing research and practical applications, Nvidia has released a new SDK toolset, a new Pascal GPU architecture optimized for deep learning, and a turnkey supercomputer solution in a single box. VR is also highlighted, although there is less emphasis placed on gaming at this event.
The GTC at a Glance
There has been some real progress with signage at the upgraded San Jose Convention Center compared with years past. No longer does Nvidia have to make do with an entire wall covered with schedule posters; the electronic signs are now updated hourly, and there is far less clutter, making it much easier for attendees. There is also a GTC mobile app, which this editor downloaded to his SHIELD to help keep track of his schedule.
After years of experience running the GTC, Nvidia has the logistics down completely. The event runs very smoothly considering that lunch is provided daily for each of the full-pass attendees, with custom dietary choices, including vegan and gluten-free, for which this editor is personally grateful. Nvidia’s employees who staff the GTC are extraordinarily friendly and helpful.
Now we will look at each day that we spent at the GTC and will briefly focus on the sessions that we attended.
Monday, April 4
We left the high desert above Palm Springs on Monday morning and arrived in San Jose late Monday afternoon, completing our journey in just over seven hours in overall light traffic. After parking our car at the convention center’s indoor parking lot for $20 per day and checking in at the Marriott, we picked up our press pass.
The Convention Center’s Wi-Fi was better than tolerable, and the Press Wi-Fi was also OK considering the many hundreds of users on it simultaneously. The wired connection and the Wi-Fi inside the Marriott rooms were excellent, and peak downloads of 6 or 7 MB/s were not unusual until the hotel and conference got packed. It took a quick call to Marriott tech support to add the SHIELD as an additional device, and we were good to go.
Nvidia treats their attendees and press well, and this year the attendees received a very nice backpack and a commemorative GPU Technology T-shirt included with the $1,500 all-event pass to the GTC. The press gets in free. Small hardware review sites like BTR are rarely invited to attend the GTC, and we again thank Nvidia for the invitation.
Mondays are always reserved at all GTCs for the hardcore programmers and for the developer-focused sessions that are mostly advanced. A poster reception was held between 4-6 PM, where anyone could talk to or interview the exhibitors – mostly researchers from leading universities and organizations presenting their GPU-enabled research. The press had a 7-9 PM evening reception across the street from the convention center, and although we got the invitation too late, we still got to meet with a few of our friends from past events.
There were dinners scheduled and tables reserved at some of San Jose’s finest restaurants for the purpose of getting like-minded individuals together. Some dinners featured scheduled discussions, other venues were devoted to programming, and still others talked business – or just enjoyed the food. Instead, we got a good night’s sleep, showered, and headed to Jensen’s Keynote at 9 AM Tuesday morning.
The BTR Community and its readers are particularly interested in the Pascal architecture as it relates to gaming, yet we were not disappointed with the keynote. Nvidia is definitely oriented toward gaming, graphics, and computing, and we eagerly listened to Nvidia’s CEO Jen-Hsun Huang (Jensen) launch their brand new mega chip, the Pascal P100.
Jensen’s Keynote on Tuesday morning reinforced Nvidia’s commitment to gaming, although the company has branched out in many directions, including automotive. There were no deep dives into the Maxwell architecture as there had been for Fermi and Kepler at previous GTCs, so we were pleased to see a deep dive scheduled for Pascal’s P100. Held after Tuesday’s keynote, it covered some of the important differences between the Pascal, Maxwell, and Kepler architectures, and we did not miss it.