Performance Summary Charts

Main Performance Summary Charts

Here are the summary charts of 33 games and 3 synthetic tests. The highest settings are always chosen, and the settings are listed on each chart. The benchmarks were run at 1920×1080, 2560×1440, and 3840×2160. Four cards were benchmarked, listed in order from the most powerful on the left to the least powerful on the right: the RTX 3080, the RTX 2080 Ti, the RTX 2080 SUPER, and the GTX 1080 Ti.

Most results, except for the synthetic scores, show average framerates, and higher is better. Minimum framerates appear next to the averages in italics and in a slightly smaller font. Games benched with OCAT show average framerates, but the minimums are expressed as 99th-percentile frametimes in ms, where lower is better.
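For readers unfamiliar with the two metrics, here is a minimal sketch of how an average framerate and a 99th-percentile frametime are derived from the same capture; the frametime values below are hypothetical placeholders, not taken from our OCAT logs.

```python
# Minimal sketch: deriving average FPS and a 99th-percentile frametime
# from a capture of per-frame render times in ms (hypothetical values).

def summarize(frametimes_ms):
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    # 99th-percentile frametime: 99% of frames finished in this time or
    # less; lower is better, unlike FPS where higher is better.
    ordered = sorted(frametimes_ms)
    idx = min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))
    return avg_fps, ordered[idx]

frametimes = [16.7, 15.9, 17.2, 16.4, 33.1, 16.8, 16.1]  # placeholder capture
fps, p99 = summarize(frametimes)
print(f"average: {fps:.1f} FPS, 99th-percentile frametime: {p99:.1f} ms")
```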

All of the games that we tested ran well on the RTX 3080 except for A Total War Saga: Troy. However, that benchmark also had issues with the other three cards, so we suspect a game or driver issue. The Shadow of the Tomb Raider benchmark refused to run on the GTX 1080 Ti and crashed to the desktop whenever we attempted to launch it.

The only benchmark that did not run well on the RTX 3080 is Ghost Recon: Breakpoint – but only on the Ultimate preset at 3840×2160. Ultimate/4K requires 11GB of VRAM, and the RTX 3080 is equipped with 10GB. This is the only benchmark where the RTX 2080 Ti beats the RTX 3080, but dropping to the Ultra preset returns the RTX 3080 Founders Edition to complete dominance over the $1,199 Turing FE flagship.
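As an illustration of the limitation, here is a hedged sketch of how installed VRAM could be checked programmatically against a preset's requirement, using the pynvml bindings to NVIDIA's NVML (pip install pynvml); the check itself is our illustration, not part of the game or our benchmark suite.

```python
# Sketch: checking whether a preset's VRAM requirement fits on the
# installed GPU, via the pynvml bindings to NVIDIA's NVML library.
import pynvml

REQUIRED_GB = 11  # Ghost Recon: Breakpoint, Ultimate preset at 3840x2160

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
print(f"GPU VRAM: {total_gb:.1f} GB")
if total_gb < REQUIRED_GB:
    print("Preset exceeds VRAM; expect spill-over and a large performance hit.")
pynvml.nvmlShutdown()
```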

It is a blowout: the RTX 3080 FE wins every game benchmark over the RTX 2080 Ti FE. The RTX 3080 is the first single-GPU card that is truly suitable for 4K/60 FPS at ultra settings in most modern games. In many cases, the $699 RTX 3080 FE doubles the performance of the GTX 1080 Ti, which launched at the same price! The RTX 3080 also provides a significant performance upgrade over this generation's RTX 2080 SUPER, which also launched at $699 (although the RTX 2080 originally launched at $799).

However, the RTX 3080 is overkill for 1920×1080 – suitable perhaps only for competitive gamers who require a 240Hz to 360Hz (or higher) display – and it exposes many engine framerate caps that were formerly hidden. We will drop 1080P testing from future RTX 3080 reviews.

Now we look specifically at ten RTX/DLSS-enabled games, each using maximum ray traced settings and the highest quality DLSS where available.

RTX/DLSS Benchmarks

The RTX 3080 maintains its performance dominance over the other cards and pulls further away when RTX/DLSS are enabled.  The GTX 1080 Ti is unable to run RTX features efficiently and DLSS is unavailable to it.

Next, we look at overclocked performance.

Overclocked benchmarks

These 15 benchmarks were run with the RTX 3080 overclocked +35MHz on the core and +700MHz on the memory, compared against its stock clocks.

There is a small performance increase, but less than five percent for most games. We used the Precision X1 preview build to increase the voltage to its maximum offset, but we could not improve performance further. We won't bother overclocking the RTX 3080 in future reviews, as NVIDIA has locked it down in an attempt to maximize out-of-the-box performance for all Founders Edition gamers.
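For reference, the uplift figure is simply the relative gain in average framerate between the two runs; a one-line check with hypothetical stock and overclocked framerates rather than values from our charts:

```python
# Sketch: relative gain from the +35 MHz core / +700 MHz memory overclock.
# The framerates below are hypothetical placeholders, not chart values.
stock_fps, oc_fps = 97.0, 101.0
gain_pct = (oc_fps / stock_fps - 1) * 100
print(f"overclock uplift: {gain_pct:.1f}%")  # ~4.1% here, under five percent
```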

Let's look at creative applications next to see if the RTX 3080 is a good upgrade over the other video cards, starting with Blender.

Blender 2.90

Blender is a very popular open source 3D content creation suite. It supports every aspect of 3D development with a complete range of tools for professional 3D creation.

We have seen Blender performance increase with faster CPU speeds, so we decided to try several Blender 2.90 benchmarks, which can also measure GPU performance by timing how long it takes to render production files. We tested our four comparison cards with both CUDA and OptiX running on the GPU instead of using the CPU.
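As an illustration of how such a timed GPU render can be scripted, here is a hedged sketch; "scene.blend" is a placeholder path, and the --cycles-device argument after the "--" separator is how Blender's command line selects between CUDA and OptiX.

```python
# Sketch: timing a Blender 2.90 command-line render on the GPU for both
# Cycles backends. "scene.blend" is a placeholder production file.
import subprocess
import time

for device in ("CUDA", "OPTIX"):
    start = time.perf_counter()
    subprocess.run(
        ["blender", "--background", "scene.blend", "--engine", "CYCLES",
         "--render-frame", "1", "--", "--cycles-device", device],
        check=True,  # raise if the render fails
    )
    minutes, seconds = divmod(time.perf_counter() - start, 60)
    print(f"{device}: {int(minutes)}m {seconds:.1f}s")
```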

For the following chart, lower is better as the benchmark renders a scene multiple times and gives the results in minutes and seconds.

Blender's benchmark performance is highest using the RTX 3080, and the amount of time saved over the next fastest card, the RTX 2080 Ti, is often substantial. We did not test motion blur performance, which NVIDIA claims is five times faster on the RTX 3080 than on the RTX 2080 SUPER.

Next, we move on to AIDA64 GPGPU benchmarks.

AIDA64 v6.25

AIDA64 is an important industry tool for benchmarkers. Its GPGPU benchmarks measure performance and produce scores that can be compared against other popular video cards.

AIDA64's benchmark code methods are written in Assembly language, and they are well-optimized for every popular AMD, Intel, NVIDIA, and VIA processor by utilizing the appropriate instruction set extensions. We use the Engineer's full version of AIDA64 courtesy of FinalWire; AIDA64 is free to try for 30 days. This time, we compare the flagship Turing RTX 2080 Ti against the Ampere RTX 3080. CPU results are also shown for comparison.

Here is the chart summary of the AIDA64 GPGPU benchmarks with the RTX 3080 and the RTX 2080 Ti side-by-side.

Generally, the RTX 3080 is faster than the RTX 2080 Ti in almost all of the GPGPU benchmarks, sometimes overwhelmingly so, as with AES-256 and Single-Precision FLOPS. It is only slightly slower in the SHA-1 Hash test.
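The one-sided single-precision result tracks theoretical throughput: each CUDA core executes one fused multiply-add (two FLOPs) per clock, so peak FP32 FLOPS is roughly 2 × core count × boost clock. A quick sketch using NVIDIA's published reference specifications:

```python
# Sketch: theoretical peak FP32 throughput, 2 FLOPs (one FMA) per CUDA
# core per clock. Core counts and boost clocks are NVIDIA reference specs.
cards = {
    "RTX 3080":    (8704, 1.710e9),  # (CUDA cores, boost clock in Hz)
    "RTX 2080 Ti": (4352, 1.545e9),
}
for name, (cores, boost_hz) in cards.items():
    tflops = 2 * cores * boost_hz / 1e12
    print(f"{name}: {tflops:.1f} TFLOPS FP32")
# ~29.8 vs ~13.4 TFLOPS, in line with the lopsided AIDA64 FLOPS result.
```

Let's look at Sandra 2020 next.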

SiSoft Sandra 2020

To see where CPU, GPU, and motherboard performance results differ, there is no better tool than SiSoft's Sandra 2020. SiSoftware SANDRA (the System ANalyser, Diagnostic and Reporting Assistant) is an excellent information and diagnostic utility in a complete package. It can provide all the information about your hardware, software, and other devices for diagnosis and for benchmarking. Sandra is derived from a Greek name that implies "defender" or "helper".

There are several versions of Sandra, including the free Sandra Lite that anyone can download and use. Sandra 2020 R10 is the latest version, and we are using the full engineer suite courtesy of SiSoft. Sandra 2020 features continuous monthly incremental improvements over earlier versions. It will benchmark and analyze all of the important PC subsystems and even rank your PC while giving recommendations for improvement.

We ran Sandra's intensive GPGPU benchmarks and charted the summarized results, comparing the performance of the RTX 3080 with that of the RTX 2080 Ti.

In Sandra GPGPU benchmarks, the RTX 3080 distinguishes itself from the RTX 2080 Ti in almost every area – Processing, Cryptography, Financial and Scientific Analysis, Image Processing, and Bandwidth – although interestingly, the Ti appears to be faster at hashing.

We have completed the synthetic benching, so let's take a look at end-to-end latency, especially in RTX Fortnite.

End-to-end Latency, LDAT, and RTX Fortnite

eSports is fast becoming the most popular competitive sport in the world, and latency is a big problem that many online gamers simply accept. Gamers often attribute latency to 'ping' without considering that their own PC's end-to-end latency is critical to their aiming accuracy.

BTR recently purchased a Samsung Odyssey G7 (LC27G75TQSNXZA), a 27″ 2560 x 1440 240Hz (1ms GTG) G-SYNC HDR600 Monitor, and we can easily tell the difference between a 240Hz refresh rate and a 120Hz refresh rate, and between a quick response and a slower one.   We picked this G7, our fastest display, to use with LDAT which simplifies latency and display analysis – without having to spend many thousands of dollars on complex equipment.

Measuring end-to-end system latency traditionally requires recording the input and display using a high-speed camera and then manually counting the individual frames.  This is both very expensive and tedious.  To simplify the process of measuring system latency, NVIDIA has created a hardware latency tool called LDAT (Latency Display Analysis Tool). LDAT is a discrete hardware analyzer that uses a luminance sensor to quickly and accurately measure the motion-to-photon latency in a game or application. LDAT works with all GPUs including Intel’s and AMD’s.

The LDAT sensor sits directly against the Samsung LCD and responds to changes in luminance. Unfortunately, the thick bezel of the display coupled with its 1000R curve left a gap between the screen and the sensor, so we made a simple modification using a plastic twist tie to hold the bottom of the LDAT sensor snugly against the screen. Our end-to-end latency benchmarks were performed on the Samsung Odyssey G7 at its native resolution, 2560×1440, and at 1920×1080 for testing Fortnite RTX latency.

Using LDAT is easy. All we had to do was slide the LDAT sensor onto the Samsung screen and position it over an area that changes luminance (like a weapon muzzle flash) when the mouse button is pressed. The LDAT kit comes with a modified Logitech G203 Prodigy gaming mouse that plugs into the sensor, which allows it to measure the entire end-to-end PC latency. Just open the LDAT software, click the mouse button, and LDAT measures the PC system latency in real time. It can also be set to flash automatically up to 100 times, so we ran it twice (x100) for each of our two test cards.
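Reducing a batch of LDAT samples to summary numbers is straightforward; the sketch below uses a handful of hypothetical values in place of the 100-sample runs LDAT actually logs.

```python
# Sketch: summarizing LDAT click-to-photon latency samples (ms). The
# values are hypothetical placeholders; real runs log 100 per pass.
from statistics import mean, stdev

samples_ms = [24.1, 26.3, 23.8, 25.5, 24.9, 27.0, 23.4, 25.2]
print(f"mean: {mean(samples_ms):.1f} ms, std dev: {stdev(samples_ms):.1f} ms, "
      f"min/max: {min(samples_ms):.1f}/{max(samples_ms):.1f} ms")
```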

After 200 automatic flashes – which would have taken literally days to accomplish with a high-speed camera and manual frame counting – we got the above results in a few minutes. The RTX 3080 is about 1ms faster in our PC than the RTX 2080 Ti, a very small difference. But what does this mean for games? We used the upcoming NVIDIA Creative RTX map in Fortnite, and we also tested Reflex.

Fortnite Latency Measured

Fortnite has received ray traced visual effects, DLSS, and Reflex in an all new custom RTX map for reviewers that will be available to all Fortnite gamers in Chapter 2, Season 4. These new ray tracing effects in Fortnite will be implemented in the ‘Battle Royale’, ‘Save the World’, and ‘Creative’ worlds.  Ray-traced shadows and reflections provide a large visual contrast over using traditional rasterization methods, while ray-traced ambient occlusion and global illumination add quite a bit to the overall ambiance.

NVIDIA has also worked with Fortnite content creators to develop a new Creative Mode map known as the 'RTX Treasure Run' that has been specifically designed to highlight ray tracing. Players arrive at the entrance to a museum where they are challenged to a scavenger hunt that highlights multiple ray traced effects. Along the way, players may explore a hall of mirrors, a medieval castle, and a jungle, climb a giant statue, and search a science lab to uncover as many treasures as they can in twenty minutes. The RTX Treasure Run will be available with the launch of Fortnite RTX.

Let’s look at BTR’s Fortnite benchmark at 1920×1080 – first with RTX Off.

It’s pretty much vanilla Fortnite.  But now check out the same benchmark run with RTX On.

Fortnite RTX has surprised us, and it looks impressive. But what about RTX 3080 performance with RTX On versus RTX Off? Fortnite features three DLSS options: Quality, Balanced, and Performance. These options control the DLSS rendering resolution, allowing a gamer to choose a balance between image quality and FPS. Using Quality DLSS can improve upon the native image while also delivering a DLSS performance boost. However, we decided to benchmark with all settings at Epic/maximum, including RTX ray tracing and DLSS Quality, to really put a strain on our two top cards.
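To show what those modes mean in practice, the sketch below maps each DLSS mode to its internal render resolution; the per-axis scale factors are the commonly cited DLSS 2.x values, which we have not confirmed against Fortnite itself.

```python
# Sketch: internal render resolution behind each DLSS mode at 1920x1080.
# Scale factors are commonly cited DLSS 2.x per-axis values (assumed).
modes = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50}
output_w, output_h = 1920, 1080
for mode, scale in modes.items():
    w, h = round(output_w * scale), round(output_h * scale)
    print(f"{mode}: renders {w}x{h}, upscaled to {output_w}x{output_h}")
```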

In all of the Fortnite benchmarks, the RTX 3080 is solidly faster than the RTX 2080 Ti.   A competitive gamer will definitely not use RTX On at 4K due to the performance hit, and probably not at 1440P; but if you are playing Fortnite non-competitively, it’s a superb option to bring amazing visuals and eye candy that are not in the vanilla game.

Fortnite is a cultural phenomenon with more than 350 million players. It is also a platform for gamers and creators to make and play unlimited games/experiences, which is part of the reason it's the most popular game in the world. It looks like ray tracing/DLSS and the system latency improvements from Reflex will be particularly popular with Fortnite gamers wishing to experience its creative side. So let's look at Reflex and measure end-to-end latency in Fortnite.

NVIDIA Reflex: Low Latency Technology in Fortnite

NVIDIA Reflex is a new low latency mode integrated into Fortnite that allows gamers to find targets faster, react quicker, and potentially increase their aiming precision. With Reflex, gamers can increase settings and resolution, and turn on RTX, while still maintaining the responsiveness needed to play competitively. According to NVIDIA, aim accuracy improves significantly when average system latency drops from 55ms to 31ms.

Source: NVIDIA

Reflex technology reduces back pressure on the CPU, reduces the render queue to zero, and boosts GPU clocks, all of which combine to give some pretty impressive results. We set up LDAT and measured the end-to-end latency, comparing the RTX 3080 with the Turing flagship, the RTX 2080 Ti. Fortnite RTX has a built-in tool that flashes a small white square (against a contrasting background) that LDAT is centered over when a mouse press occurs.

The white flash on the left corresponds to a mouse press.
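To make the mechanism concrete, here is a toy model of end-to-end latency as a sum of pipeline stages; the stage values are purely illustrative, but it shows why draining the render queue cuts total latency without changing the framerate.

```python
# Toy model: end-to-end latency as a sum of pipeline stages (all ms).
# Stage values are illustrative only. The effect modeled for Reflex is
# draining the render queue so frames never wait behind a busy GPU.
pipeline = {
    "peripheral": 1.0,
    "CPU (game + render submit)": 8.0,
    "render queue": 10.0,  # frames queued behind a back-pressured GPU
    "GPU render": 12.0,
    "display scanout": 4.0,
}
print(f"without Reflex: {sum(pipeline.values()):.0f} ms")
pipeline["render queue"] = 0.0  # Reflex: just-in-time CPU submission
print(f"with Reflex:    {sum(pipeline.values()):.0f} ms")
```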

First, here is Fortnite's end-to-end latency measured without Reflex – this time at 1920×1080, the resolution a competitive gamer might choose – even with the maxed-out Epic/RTX On/DLSS Quality settings we selected.

The RTX 3080 has lower latency in the same PC than the RTX 2080 Ti, but latency is still a bit high.  So we kept everything at maxed-out Epic/RTX On settings, but simply turned Reflex+Boost on in the Fortnite settings.

RTX 3080 on Left – RTX 2080 Ti on Right (the chart's '3080 Ti' label is a typo)

The visuals still look the same, the framerate has not changed, but the latency has dropped significantly for both video cards.  Reflex may give a gamer a real competitive advantage over a gamer not using it.

Let’s head to our conclusion.

10 Comments

  1. Slight grammatical error near the beginning of the article:

    “We have also overclocked the RTX 3080 and will compare it’s overclocked performance versus stock.”

    This should be “and will compare its overclocked performance”, without an apostrophe.

  2. Page 4, last picture:
    3080 Ti should be 2080 Ti.

    The gap between the 3080 and the 2080 Ti is actually much smaller than expected. In fact, even my 2080 Ti FE with a BIOS flash can be on par with it. Also, it feels like OC on the 3080 is limited on purpose to distance it from the 3090.

    • I captioned the picture earlier to reflect the typo.

      I don't think the gap is smaller than expected unless expectations were too high before the review. This launch review summary by 3DCenter, which includes BTR's review, shows our results are very much in line with the other reviewers'. It isn't a huge upgrade – which is why I suggested that 2080 Ti owners may want to wait for the RTX 3090 reviews before upgrading. The 3090 will be the Ampere flagship card that replaces the Turing flagship.

      https://www.3dcenter.org/news/geforce-rtx-3080-launchreviews-die-testresultate-zur-ultrahd4k-performance-im-ueberblick

      I don't think the OC is limited on purpose to distance the 3080 from the 3090. NVIDIA did what AMD did – they mostly eliminated the performance headroom to give all 3080 gamers a similar experience at the highest possible overall core clocks. It's near the edge, which is not a bad thing, but it is disappointing for enthusiasts who are used to substantial gains from overclocking. I am guessing, without knowing, that the 3090 will also not have much OC headroom.

      It’s up to the AIBs to deliver cards that can handle higher voltage and overclocks using 3×8-pin PCIe cables. And of course, they will come at a premium price.