
Nvidia Geforce GTX 1070 Founders Edition review

03 October 2016


Performs just above GTX 980 Ti, now with enhanced HDR support


A little over a decade ago, Nvidia brought features such as HDR-based tone mapping and hardware-accelerated H.264 video decoding into the desktop graphics rendering pipeline, and those features have since been adopted by devices of all shapes and sizes across enterprise, desktop and mobile markets.

Today, we are talking about enabling playback of HDR-capable H.265 (HEVC) video files and cinematic movies in the 4K Ultra HD display format, not to mention playing video games at rich, detailed and expansive 4K (3840x2160) and 8K (7680x4320) resolutions. In the years between Nvidia’s launch of its first CUDA-capable programmable GPUs with the Geforce 8 series nearly ten years ago and today, the film and entertainment industries have grasped the extraordinary potential of programmable shaders for offloading computationally heavy creative tasks directly onto the GPU, allowing professionals to finish complex, sophisticated art projects in far less time.

Indeed, the steady progression of innovation and architecture in parallel computing has driven the video game industry several quantum leaps forward over the past decade. Consumers now experience performance and horsepower once reserved for high-performance computing (HPC) environments on tiny mobile devices powered by ARM and x86 designs, along with notebook PCs powered by the latest Nvidia Geforce 10-series GPUs and AMD’s upcoming Polaris GPUs.

With the 1080p revolution complete, the road to 4K adoption began in 2012

The graphics industry was driven by 28nm designs from the time the first Geforce 600 series Kepler cards and their refreshes arrived in early 2012. Those cards enabled native 4K output for the very first time, yet came nowhere near the performance required to actually render games at that resolution.

Fast forward two and a half years, and the company introduced second-generation Maxwell with the Geforce 900 series, tackling the 4K scaling problem head-on for the first time and managing some reasonable framerates, albeit with SLI required for most cards in the lineup. This time around with Pascal, Nvidia hopes not only to improve 4K framerates at lower power, but also to mount an ambitious push into HDR gaming and content production, one it hopes will excite consumers, attract developers and sell full-featured UHD-ready graphics products that television panel makers like Samsung, LG and Sony will not be able to replicate so easily.


Hardware – Nvidia Founders Edition

geforce gtx 1070 specs

For this GPU product cycle, Nvidia has decided to get back to selling its own graphics cards alongside those of its OEM partners, under the new “Founders Edition” branding. This is essentially Nvidia’s new name for what used to be called reference designs. These cards run at the same clock frequencies as the plastic-shrouded units shipping to its add-in board partners, but they receive a heatsink upgrade featuring a metal shroud, vapor chamber cooling on the inside, a radial blower fan over the VRM, and a large alloy heatsink underneath the shroud. Nvidia’s Founders Edition cards also ship with backplates by default for additional cooling improvement.

gtx 1070 founders edition inside heatsink

Taking apart the heatsink is a two-step process that first requires unscrewing the polygon-shaped “GTX 1070” aluminum housing and carefully lifting a transparent plastic cover plate from it to avoid scratches. Underneath these two pieces is the actual heatpipe unit, complete with pre-applied thermal paste that sits on top of the GP104-200 GPU. Nvidia’s new heatsink design is very reminiscent of taking apart an Apple product, as there is now an additional level of internal organization that gives the impression of more assembly-level “neatness.” Then again, this is the company’s self-branded product and might serve as a way to differentiate its own heatsink design from those of its add-in board partners.

gtx 1070 founders edition side

The Geforce GTX 1070 Founders Edition features a polygon-shaped aluminum shroud with the words “Geforce GTX” spelled out in green LEDs on the side, similar to the GTX 690 and higher flagship cards introduced over the past few years.

gtx 1070 founders edition inside heatsink 2

On the inside of the heatsink, there is a removable blower fan that exhausts hot air through the rear I/O plate, and on the front plate is a dual-link DVI port followed by three DisplayPort 1.2 ports (1.3/1.4 "Ready") and an HDMI 2.0b port.

gtx 1070 founders edition ports


Geforce GTX 1070 Specifications

gtx 1070 highlighted geforce 10 series specs

Nvidia Geforce 10 series specifications

The Geforce GTX 1070 Founders Edition uses a GP104-200 Pascal GPU and the desktop version features 1920 cores, while the notebook version features 2048 cores. The card also uses 8GB of standard GDDR5 memory and features 6.5 teraflops of single-precision floating point performance. This number places the card right in between a Geforce GTX Titan X (7 teraflops) and the Geforce GTX 980 Ti (5.63 teraflops). The card features five display outputs, including three DisplayPort 1.2 ports (1.4 “ready” supporting 4K at 60Hz), one HDMI 2.0b port (4K at 60Hz), and one DL-DVI port.

For reference against recent Nvidia cards, the Geforce GTX 980 Ti (June 2015) delivers 5.63 teraflops of single precision, the Kepler-based Geforce GTX Titan Black (February 2014) 5.1 teraflops, the Geforce GTX 980 4.61 teraflops and the Geforce GTX 970 3.49 teraflops.
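
As a quick back-of-the-envelope sketch (in Python), these single-precision figures fall out of a simple cores-times-clock calculation, where each CUDA core retires one fused multiply-add (two FLOPs) per cycle. The clocks below are the published base or boost clocks that the quoted numbers appear to use:

```python
def fp32_tflops(cuda_cores, clock_mhz):
    # Peak FP32 throughput: each core retires one fused multiply-add (2 FLOPs) per clock.
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(1920, 1683), 2))  # GTX 1070 at its 1683MHz boost clock  -> ~6.46
print(round(fp32_tflops(2816, 1000), 2))  # GTX 980 Ti at its 1000MHz base clock -> ~5.63
print(round(fp32_tflops(1664, 1050), 2))  # GTX 970 at its 1050MHz base clock    -> ~3.49
```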

nvidia hb sli bridge sizes

Nvidia’s Geforce Pascal series also supports the company’s new High Bandwidth SLI bridge, which is claimed to double the available transfer bandwidth over Maxwell. Based on initial reports, however, that claim does not hold for every game, and performance at 4K and higher resolutions varies by title. Even in games where there is no framerate advantage over the older single or dual SLI bridges, the HB SLI bridge can still provide a benefit in frame quality and screen tear reduction.

Physical dimensions

As for physical dimensions, this is a dual-slot card measuring 4.375 inches tall and 10.5 inches long. It is a tight fit in some ATX cases or E-ATX cases with radiators and water cooling configurations. In fact, we had a difficult time getting the card to fit inside a SilverStone Raven RV02 with a 280-millimeter Corsair H110 radiator and dual 140mm fans seated at the bottom of the case. With this type of cooling setup the card would only fit in PCI-Express Slot 4.

Of course, if you are running an Intel Core i7 CPU with only 28 PCI-Express lanes, such as the Core i7 5820K, you will soon realize that Slots 2 and 4 only run at PCI-E 3.0 x8 bandwidth (7.88GB/s) on most X99 motherboards, rather than full x16 bandwidth (15.76GB/s). Later in this review, however, we explain why PCI-E 3.0 x16 is not actually needed for the majority of recent titles unless they place heavier post-processing workloads on the CPU.
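
For reference, the x8 and x16 figures above come straight from the PCI-E 3.0 signaling rate of 8 GT/s per lane with 128b/130b encoding; a quick sketch:

```python
def pcie3_bandwidth_gbs(lanes):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, per direction.
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

print(round(pcie3_bandwidth_gbs(8), 2))   # x8  -> ~7.88 GB/s
print(round(pcie3_bandwidth_gbs(16), 2))  # x16 -> ~15.75 GB/s
```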

Pricing

The price of Nvidia’s midrange performance card has gone up this generation: the previous GTX 970 launched at $329 in September 2014, while the Geforce GTX 1070 is now available at $379 for standard models and the Geforce GTX 1070 Founders Edition sells for $449. As of October, the cards are still selling at around those retail prices as AMD prepares more Polaris products over the next few quarters.


Lower voltage, less transistor leakage, and a higher transistor density

During the official Geforce GTX 1080 and 1070 announcement back in May, CEO Jen-Hsun Huang noted that the company’s engineers have made extensive improvements to the power delivery circuitry of the 10-series lineup. Where the Geforce GTX 980 exhibited roughly 209 millivolts of peak-to-peak voltage variance, the GTX 1080 brings that figure down to around 120 millivolts.

Additionally, GTX 970 idle voltage is 0.856v and Boost voltage is 1.218v, while GTX 1070 idle voltage is 0.625v and Boost voltage is 1.062v. The transition from 28nm planar transistors to 16nm FinFET is showing noticeable gains in voltage efficiency, though Nvidia has previously noted that the FinFET process comes with a steeper frequency and voltage curve.

In terms of the power variance, Jen-Hsun says “with a 1V input, 100mV is all we see. We want to deliver that level of power and current across our entire operating range of GPUs.”

The Geforce GTX 1070 is designed with a 150W TDP, up five watts from the 145W-rated GTX 970. However, the Pascal-based performance card only requires a single 8-pin PCI-E power connector, while the GTX 970 requires dual 6-pin connectors. Maximum GPU temperature has also dropped from a rated 98C with the GTX 970 to 94C with the GTX 1070.
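
As a rough sketch of the power budgets involved (the 75W slot, 75W 6-pin and 150W 8-pin allowances come from the PCI-E specification), both connector layouts leave similar headroom above their rated TDPs:

```python
def board_power_budget_w(six_pin=0, eight_pin=0):
    # PCIe slot supplies up to 75 W; each 6-pin adds 75 W, each 8-pin adds 150 W.
    return 75 + six_pin * 75 + eight_pin * 150

print(board_power_budget_w(eight_pin=1))  # GTX 1070: 225 W available for a 150 W TDP
print(board_power_budget_w(six_pin=2))    # GTX 970:  225 W available for a 145 W TDP
```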


Nvidia's Pascal GP104

As we mentioned during the GTC conference earlier this spring, Nvidia’s GP100 chip features 15.3 billion transistors in a 610mm^2 die, with 3,584 cores spread across 56 streaming multiprocessor (SM) units containing 64 cores each.

For the Geforce GTX 1070, the company is using a much smaller chip, the GP104-200, featuring 7.2 billion transistors in a die of roughly 314mm^2. The chip has 1920 cores across 15 SM clusters, 64 ROPs and 120 texture units. Meanwhile, the GP104-400 in the Geforce GTX 1080 has 2560 cores across 20 SM clusters, 64 ROPs and 128 texture units.

It is important to note that Nvidia disabled five SM clusters in the desktop Geforce GTX 1070, and then released the mobile Geforce GTX 1070 variant for notebooks a few weeks ago with 16 SM clusters enabled. The mobile version brings the core count up to 2048 cores with 128 texture units. As some sites point out, however, the benefit of the additional SM cluster is countered by core clock frequencies roughly 4 percent lower than the desktop version. Even so, in terms of peak throughput the notebook Geforce GTX 1070 still delivers 6.74 teraflops at those lower frequencies, versus 6.46 teraflops on the desktop.
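
A quick sketch shows how the two configurations trade SM count against clock speed; the ~1645MHz notebook boost clock below is an assumption on our part that lines up with the quoted 6.74 teraflop figure:

```python
def fp32_tflops(cores, clock_mhz):
    # Peak FP32 = cores * 2 FLOPs per clock (fused multiply-add) * clock rate.
    return cores * 2 * clock_mhz * 1e6 / 1e12

desktop  = fp32_tflops(15 * 128, 1683)  # 15 SMs x 128 cores at the 1683 MHz desktop boost clock
notebook = fp32_tflops(16 * 128, 1645)  # 16 SMs x 128 cores at an assumed ~1645 MHz boost clock
print(round(desktop, 2), round(notebook, 2))  # ~6.46 vs ~6.74
```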

nvidia gp104 block diagram

Nvidia GP104-200 block diagram

In terms of architecture, Pascal shares many similarities with Maxwell in that both chips contain four graphics processing clusters (GPCs). The GP104-200, however, has one of its four GPCs disabled. When Fermi arrived several years ago, the company introduced a pipeline unit called the PolyMorph Engine, each instance of which (16 at the time) contained a vertex fetcher and tessellator, among other key units. Up through Maxwell this unit was integrated into the SMs, but with Pascal it has been moved into a separate Texture Processing Cluster (TPC) that sits between the Raster Engine and each SM block.

nvidia maxwell vs pascal sm block diagram

According to some reports, the number of units, engines and ROPs in Maxwell GM204 and Pascal GP104, along with their per-clock throughput, make the two architectures nearly identical in performance on paper. The real-world differences can likely be attributed to SM block layout changes, pipeline scheduling patterns, and new load balancing techniques.

Nvidia is shipping the GTX 1070 with GDDR5 memory while the GTX 1080 receives GDDR5X, and GP104 features a 256-bit memory controller designed to support the tighter signaling demands of GDDR5X while remaining backward compatible with GDDR5. The GTX 1070’s memory runs at an effective 8Gbps and delivers 256GB/s of bandwidth, while the GTX 1080’s runs at 10Gbps and delivers 320GB/s.
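
Those bandwidth numbers follow directly from bus width and data rate; a quick sketch:

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Bandwidth = bus width (bits) x effective data rate, divided by 8 bits per byte.
    return bus_width_bits * data_rate_gbps / 8

print(mem_bandwidth_gbs(256, 8))   # GTX 1070, 8 Gbps GDDR5   -> 256.0 GB/s
print(mem_bandwidth_gbs(256, 10))  # GTX 1080, 10 Gbps GDDR5X -> 320.0 GB/s
```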


High Dynamic Range support for games

Back when the Geforce 6 series launched in 2004, HDR in-game rendering was made possible with Shader Model 3.0, which used lighting calculations to expand contrast and create more realistic scenes. Half-Life 2: Lost Coast was the gold standard for the technology at the time, and less dynamic alternatives such as bloom lighting also showed up along the way. The catch with in-game HDR rendering, however, has been that those dynamic details must be tone-mapped down to standard dynamic range (SDR) in order to display properly on SDR monitors.
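
For illustration, here is a minimal sketch of the kind of tone mapping described above, using the classic Reinhard operator; real game engines use far more elaborate curves, so treat this as a toy example rather than any particular engine’s method:

```python
def reinhard_tonemap(luminance):
    # Compress an unbounded HDR luminance value into the [0, 1) SDR range.
    return luminance / (1.0 + luminance)

for hdr in (0.5, 1.0, 4.0, 16.0):
    print(f"HDR {hdr:5.1f} -> SDR {reinhard_tonemap(hdr):.3f}")
```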

nvidia pascal hdr coming to games soon

Beginning with Pascal, Nvidia wants to take HDR all the way out to the display and its connectivity standards, outputting 10- and 12-bit Deep Color to a user’s HDR-capable panel and feeding the panel’s HDR capabilities back to the game through Windows and related display APIs. Of course, this will require extensive driver work on Nvidia’s part, but if it can be managed, perhaps AMD will follow suit and the two companies can convince the panel market to begin preparing HDR-ready desktop and notebook displays once the required API plumbing is in place.

Display controller now supports HDR signaling bandwidth

Nvidia’s Geforce 10 series brings a new display controller to the table, this time featuring support for the latest DisplayPort, HDMI and HDR metadata standards. While the GTX 970 and 980 featured three DisplayPort 1.2 ports and one HDMI 2.0a port, the GTX 1070 and 1080 feature three DisplayPort 1.2 ports listed as 1.3 and 1.4 “Ready” and a single HDMI 2.0b port with HDR support. The new DisplayPort 1.3 standard adds the High Bit Rate 3 (HBR3) signaling mode, raising total link bandwidth from 21.6Gbps to 32.4Gbps, while DisplayPort 1.4 adds support for HDR metadata using the same standard available in HDMI 2.0a and 2.0b.
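
As a rough sanity check of what those link rates allow (ignoring blanking overhead, and assuming the standard 8b/10b encoding used by HBR2 and HBR3):

```python
def link_payload_gbps(lane_rate_gbps, lanes=4):
    # DisplayPort main link: 4 lanes with 8b/10b encoding (80% efficiency).
    return lane_rate_gbps * lanes * 0.8

def mode_gbps(width, height, refresh_hz, bits_per_pixel):
    # Raw pixel data rate; blanking overhead is ignored for simplicity.
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(round(link_payload_gbps(5.4), 1))          # HBR2 (DP 1.2): ~17.3 Gb/s of payload
print(round(link_payload_gbps(8.1), 1))          # HBR3 (DP 1.3): ~25.9 Gb/s of payload
print(round(mode_gbps(3840, 2160, 60, 30), 1))   # 4K60, 10-bit RGB: ~14.9 Gb/s, fits either link
print(round(mode_gbps(3840, 2160, 120, 24), 1))  # 4K120, 8-bit RGB: ~23.9 Gb/s, needs HBR3
```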


H.264 and HEVC playback on Geforce Pascal

The first card to support hardware acceleration of H.264 and VC-1 video playback was the Geforce 6600 with the introduction of Nvidia’s PureVideo technology. This was further improved in the Geforce 8400, 8500 and 8600 series with a dedicated H.264 pipeline. The Geforce 8400GS then followed as the first card to natively decode Blu-ray disc formats, while Kepler-based Geforce 600 cards were the first to natively support 4K decoding.

The company then introduced dedicated H.265 HEVC decoding blocks with the Geforce GTX 950, GTX 960 and GTX 750 SE. Meanwhile, other Geforce 900 series cards offer only partial HEVC playback through a hybrid solution involving both the CPU and the GPU’s shader arrays.

nvidia pascal hdr slide

With Geforce 10, however, HEVC playback is natively supported across the entire card lineup, including 10-bit and 12-bit formats. The cards can decode H.264 and HEVC at 4K and 120Hz or 8K and 30Hz, or encode two streams of H.264 or HEVC at 4K and 60Hz each.
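
Those two decode limits describe roughly the same pixel throughput, as a quick calculation shows:

```python
def megapixels_per_second(width, height, fps):
    # Raw pixel throughput for a given resolution and refresh rate.
    return width * height * fps / 1e6

print(megapixels_per_second(3840, 2160, 120))  # 4K at 120 Hz -> ~995 Mpixels/s
print(megapixels_per_second(7680, 4320, 30))   # 8K at 30 Hz  -> ~995 Mpixels/s
```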


Single, double and half-precision performance

The Geforce GTX 1070 delivers 5.78 teraflops in single precision and 0.18 teraflops in double precision, with FP64 calculations performed at 1/32 the FP32 rate. In single precision this puts the card just above the Geforce GTX 980 Ti (5.63 / 0.18) and just below the Radeon R9 390X (5.91 / 0.74), while also sitting above the Geforce GTX Titan Black (5.12 / 1.71) in single precision and close to the Maxwell-based Titan X (6.14 / 0.19) in double precision.

In terms of FP16 performance, Nvidia has only included a single FP16x2 unit for every 128 FP32 cores. Since each FP16x2 unit handles two half-precision operations per clock, FP16 throughput works out to 1/64 of the FP32 rate, or roughly 0.09 teraflops.
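
A quick sketch of how those ratios translate into throughput, taking the 5.78 teraflop base-clock figure above as the starting point:

```python
fp32 = 5.78  # GTX 1070 single precision at base clock, in TFLOPS

print(round(fp32 / 32, 2))  # FP64 at 1/32 of the FP32 rate -> ~0.18 TFLOPS
print(round(fp32 / 64, 2))  # FP16 at 1/64 of the FP32 rate -> ~0.09 TFLOPS
```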

Below is a list of over a half-dozen consumer GPUs based on their rankings in teraflop performance:

amd nvidia single and double precision table


PCI-E 3.0 x8 and PCI-E 3.0 x16 bandwidth

One recent topic of investigation surrounding PCI-Express 3.0 graphics cards is the possibility of framerate performance variations from game to game depending on the engine’s CPU-to-GPU processing requirements.

We did some investigating on whether or not there is actually any perceptible difference between PCI-E 3.0 x8 (PCI-E 2.0 x16 equivalent) and PCI-E 3.0 x16 for the latest generation of cards, and discovered a pretty comprehensive review from October 2014 by W1zzard, the author of GPU-Z. Two years ago, he compared Geforce GTX 980 (Maxwell) framerate performance across different PCI-E bandwidth configurations (x4/x8/x16 1.1, x4/x8/x16 2.0, x4/x8/x16 3.0) and discovered less than a 1 percent performance difference between PCI-E 3.0 x8 and PCI-E 3.0 x16 for one of the previous generation’s top cards.

He explained that the driving factor for PCI-Express bandwidth limitations is high framerates, rather than higher resolutions. For instance, a game running at 100fps at 1600x900 resolution is going to require more bandwidth than the same game running at 30fps at 3840x2160p resolution.
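
To make the point concrete, here is a toy calculation with a purely hypothetical per-frame upload figure; the exact number is invented, and only the scaling with framerate matters:

```python
per_frame_mb = 60  # hypothetical CPU-to-GPU upload per frame, not a measured value

print(per_frame_mb * 100 / 1000)  # 100 fps at 1600x900  -> 6.0 GB/s over the bus
print(per_frame_mb * 30 / 1000)   # 30 fps at 3840x2160  -> 1.8 GB/s over the bus
```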

In fact, the bandwidth constraint only becomes noticeable with games that constantly stream large amounts of data between the CPU and GPU for post-processing effects (Ryse: Son of Rome), games that use virtual textures (Wolfenstein: The New Order), and games that use a deferred rendering engine (WoW: Warlords of Draenor). We did not test any of these three games, but for those who do play them, PCI-E 3.0 x16 bandwidth should prove useful at lower resolutions.

For our test configuration, we used an Intel Core i7 5820K processor featuring 28 PCI-Express lanes along with an EVGA X99 Classified motherboard. Due to the bandwidth limitations of a 28-lane CPU, however, the graphics card can only operate at full PCI-E 3.0 x16 bandwidth when installed in PCI-E Slot 1. Between that and heatsink size restrictions in our setup, we were only able to place the GTX 1070 in Slots 2, 3 and 4, all of which operate at PCI-E 3.0 x8 bandwidth.


Test setup

Our full test configuration featured an Intel Core i7 5820K CPU at 4.33GHz, an EVGA X99 Classified motherboard, 16GB (4 x 4GB) of Kingston HyperX DDR4 2800MHz CL14, a Samsung 840 Pro SSD as the primary drive, an EVGA SuperNOVA 750 G2 PSU and a Corsair Carbide Air 540 gaming case. The system was operating in a room temperature environment with side panels off, using an air-cooled Thermalright TRUE copper CPU heatsink, two Noctua heatsink fans and five Corsair case fans.

All benchmarks were completed using Geforce 372.70 drivers on Windows 10 Pro x64 version 1607 with an LG 27UD68 IPS 3840x2160p monitor. The EVGA Geforce GTX 970 SC ran with a factory-overclocked 1140MHz base clock / 1279MHz Boost clock, while the Nvidia Geforce GTX 1070 Founders Edition ran with a default 1506MHz base clock / 1683MHz Boost clock.

gtx 1070 founders edition gpu z


Fallout 4

gtx 1070 founders edition fallout 4 3840x2160p benchmark

gtx 1070 founders edition fallout 4 2560x1440p benchmark

gtx 1070 founders edition fallout 4 1920x1080p benchmark

In the 3840x2160p test, the GTX 1070 Founders Edition results show a 49.8 percent improvement on average over the GTX 970 SC, while maximum framerate improves by 14 frames, or 46.7 percent.

Meanwhile, the 1440p test shows the GTX 1070 with a 46.8 percent improvement, while maximum framerate improves by 28 frames, or 49.9 percent.

In the 1080p test, the GTX 1070 improves over the GTX 970 SC by 40.9 percent, while maximum framerate improves by 46 frames, or 54.8 percent.
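
For clarity, the improvement percentages quoted throughout these benchmark sections are simple relative gains over the GTX 970 SC; a minimal sketch, using the Fallout 4 4K averages cited later in the conclusion:

```python
def percent_improvement(new_fps, old_fps):
    # Relative gain of the GTX 1070 result over the GTX 970 SC result.
    return (new_fps - old_fps) / old_fps * 100

# Fallout 4 4K averages: 19 fps (GTX 970 SC) vs 28.5 fps (GTX 1070).
print(round(percent_improvement(28.5, 19.0), 1))  # ~50.0; the 49.8 quoted above uses unrounded averages
```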


Far Cry 4

gtx 1070 founders edition far cry 4 3840x2160p benchmark

gtx 1070 founders edition far cry 4 2560x1440p benchmark

gtx 1070 founders edition far cry 4 1920x1080p benchmark

In the 3840x2160p test, the GTX 1070 Founders Edition shows a 47.3 percent improvement over the GTX 970 SC, while maximum framerate improves by 15 frames, or 48.4 percent.

At 1440p resolution, the GTX 1070 improves over the GTX 970 SC by 37.1 percent, while maximum framerate improves by 30 frames, or 52.6 percent.

In the 1080p test, the GTX 1070 manages a 79.2 percent improvement on average, while the maximum framerate improves by 49 frames, or 58.3 percent.


No Man's Sky

gtx 1070 founders edition no mans sky 3840x2160p benchmark

gtx 1070 founders edition no mans sky 2560x1440p benchmark

gtx 1070 founders edition no mans sky 1920x1080p benchmark

In the 3840x2160p test, the GTX 1070 Founders Edition shows a 15.8 percent improvement over the GTX 970 SC, while maximum framerate improves by 19 frames, or 48.7 percent.

In 1440p resolution, the GTX 1070 increases performance on average by 48.8 percent, while maximum framerate improves by 33 frames, or 52 percent.

As for the 1080p test, the GTX 1070 increases performance by 91.4 percent, while maximum framerate improves by 45 frames, or 49.5 percent.


The Elder Scrolls V: Skyrim

gtx 1070 founders edition skyrim 3840x2160p benchmark

gtx 1070 founders edition skyrim 2560x1440p benchmark

In the 3840x2160p test, the GTX 1070 Founders Edition brings the game into above-60fps territory at 77.53fps, a 33.1 percent improvement over the GTX 970 SC at 42.75fps. Meanwhile, maximum framerate improves by 44 frames, or 78.6 percent.

At 1440p resolution, the GTX 1070 brings the game noticeably above 144fps levels with a 58.3 percent improvement over the GTX 970 SC, while maximum framerates improve by 64 frames, or 57.1 percent.


Just Cause 3

gtx 1070 just cause 3 3840x2160p benchmark

gtx 1070 just cause 3 2560x1440p benchmark

gtx 1070 just cause 3 1920x1080p benchmark

In the 3840x2160p test, the GTX 1070 Founders Edition shows a 43.8 percent improvement over the GTX 970 SC, while maximum framerates increase by 14 frames, or 43.8 percent.

At 1440p, the GTX 1070 increases performance by 41.1 percent over the GTX 970 SC, while maximum framerates increase by 28 frames, or 45.9 percent.

In the 1080p test, the GTX 1070 shows an 83.1 percent performance improvement over the GTX 970 SC, while maximum framerates increase by 25 frames, or 37.3 percent.


GPU Boost 3.0

With Pascal, Nvidia is releasing the third iteration of its GPU Boost overclocking feature, which originally debuted with the launch of the Geforce GTX 680. The company says that after gathering data from hundreds of thousands of users following the GTX 680 launch, its engineers determined that GPU temperature, rather than power consumption, was more commonly the factor holding back performance, leaving spare headroom on the table.

One solution was to develop a new software feature to fill in gaps of unused performance by dynamically overclocking the GPU to use all the available TDP headroom below a GPU’s target power limit. GPU Boost works by continuously monitoring power usage and temperatures, making real-time adjustments to clock speeds and voltages several times per second based on an application’s demands until the GPU reaches its TDP limit.

In GPU Boost 1.0 and 2.0, algorithms collect power consumption, temperature, and GPU and memory usage data, and use it to adjust the Boost clock, memory frequency and voltages. More specifically, with GPU Boost 2.0 the highest stable overclock was limited by the weakest point on the voltage/frequency curve: the single offset applied to every voltage point could only be as high as the lowest stable offset found anywhere on the curve. For example, if one point somewhere along the curve could only tolerate a 35MHz increase, then 35MHz became the maximum offset applied to every other point as well.

gpu boost 2.0 vs 3.0

Now with GPU Boost 3.0, the company is allowing users to adjust the clockspeed offset at every individual voltage point. This allows much more granular fine-tuning of the overclocking process and should effectively recover what the company refers to as “lost opportunity.”

Thankfully, EVGA’s Precision XOC utility can handle most of the guesswork by automatically scanning each voltage point for stability and then creating a custom frequency curve with little user intervention. Mileage will vary, of course, and results depend on whether the system freezes or locks up before the scan completes. There is also an option to manually enter a slope value for a custom frequency curve rather than setting individual voltage points.
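
To illustrate the difference in how the two schemes apply offsets, here is a small sketch with entirely hypothetical voltage points and per-point headroom values:

```python
# Hypothetical voltage/frequency points (volts -> MHz) and the stable headroom found at each.
vf_curve = {0.80: 1500, 0.90: 1620, 1.00: 1750, 1.05: 1830}
headroom = {0.80: 80,   0.90: 65,   1.00: 50,   1.05: 35}

# GPU Boost 2.0 style: one global offset, capped by the weakest point (35 MHz here).
boost2 = {v: f + min(headroom.values()) for v, f in vf_curve.items()}

# GPU Boost 3.0 style: each voltage point keeps its own maximum stable offset.
boost3 = {v: f + headroom[v] for v, f in vf_curve.items()}

print(boost2)
print(boost3)
```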


Conclusion – 4K is playable with a single GTX 1070, but still depends on the game

The advent of 3840x2160 Ultra HD resolution for PCs and televisions over the past four years has pushed companies along the display supply chain to deliver products as polished and well-functioning as consumers came to expect from the Full HD 1080p transition about a decade ago. In the PC market, Nvidia is in a tough spot, just like AMD, because both companies must meet the requirements of the home theater market (HDR standards, Rec.2020, adequate transfer bandwidth) while also aiming for an eventual minimum 60fps target for PC gamers.

With the GTX 1070, Nvidia is in a much better position to meet this performance target than it was two years ago with the Maxwell-based GTX 980 and 970, although some games will still require at least two GTX 1070s to reach 60fps in 4K. Based on our tests, most of which were run at maximum settings with at least 4X anti-aliasing enabled, we saw 3840x2160 results anywhere between 28fps and 110fps. At every resolution tested, every game benefitted from the GTX 1070 over the GTX 970 SC, some more noticeably than others.

The most demanding title we tested was Fallout 4, which averaged just 19fps at 3840x2160 on Ultra settings with Nvidia’s Vault 1080 mod on the GTX 970 SC and improved to 28.5fps with the GTX 1070. Just Cause 3 was more reasonable, running at 28.2fps on the GTX 970 SC, while the GTX 1070 bumped this up to a fairly playable 40.6fps, though that still falls short of matching our 4K display’s 60Hz refresh rate.

No Man’s Sky also showed a decent improvement in 4K, increasing from 32.2fps on the GTX 970 SC to 49.2fps on the GTX 1070. Far Cry 4, meanwhile, went from a barely playable 26.7fps in 4K on the GTX 970 SC to a modest 39.3fps on the GTX 1070.

Gaming in 1440p should not be overlooked between Maxwell and Pascal either. In Far Cry 4, the game averaged 49.5fps on the GTX 970 SC and improved to 75fps on the GTX 1070, making the card a great choice on desktops, notebooks and external GPU enclosures at this resolution as well.

All in all, however, the GTX 1070 still sits near the middle of a larger spectrum of 4K-capable graphics processors across the Maxwell and Pascal lineups. For those who are serious about 1440p gaming, this card can be taken on its own merits without any need for SLI. Yet at resolutions such as 3440x1440 and 3840x2160 with maximum in-game settings, neither a single GTX 1070 nor a single GTX 1080 can quite pull it off, and this is where SLI or a single Nvidia Titan X comes into consideration.

Great choice in preparing for 4K HDR gaming and media standards

In terms of price-to-performance against the GTX 970 of September 2014, the GTX 1070 Founders Edition’s $449 retail price is still a high sticker price compared to the previous card’s $329 launch price, which dropped further to around $239 earlier this summer shortly before the Geforce 10 series launched. Realistically, it is probably more sensible at this point to weigh the GTX 1070 Founders Edition against the GTX 980 Ti, which can now be found refurbished for $419 to $429, often with warranties included. Based on other reports comparing recent GPUs, the GTX 1070 pulls ahead of the GTX 980 Ti by just a few frames at 4K, 1440p and 1080p. Even so, framerate alone is by no means a definitive reason to pass over a $20 difference between the two cards (update: EVGA now sells the GTX 980 Ti and GTX 980 Ti Classified for $339 and $439, respectively).

The power efficiency improvements offered by Pascal, not to mention the HDR-capable DisplayPort standards, Ultra HD Blu-ray playback support, and full upcoming support for HDR gaming, make the GTX 1070 Founders Edition a top choice for any gamer, display quality aficionado or film enthusiast preparing for 4K content at every level of the PC, Web and entertainment industries.

Fudzilla Recommended award

Last modified on 05 October 2016