You have all seen and heard the rumors for months now about Nvidia’s upcoming GPU, codenamed Kepler. When we got the call that Nvidia was inviting editors out to San Francisco to show us what they had been working on, we jumped at the chance to finally put the rumors to rest and see what they really had up their sleeve. Today we can finally tell you all the gory details and dig into the performance of Nvidia’s latest flagship video card. Will it be the fastest single-GPU card in the world? We finally find out!

Product Name: Nvidia GTX 680 Kepler

Review Sample Provided by: Nvidia

Review by: Wes

Pictures by: Wes


Kepler: What’s It All About?

[Image: GeForce logo]

Nvidia opened editors’ day by boldly stating that the new GTX 680 will be the most powerful and most efficient card on the market. Soon after, they brought out Mark Rein, Vice President of Epic Games, to show off the Unreal Engine Samaritan demo that Epic made last year. When the demo was originally shown, it required a rig powered by three GTX 580s. While playing the demo, he pointed out that the same demo now runs great on just one card, the Kepler-based GTX 680. Obviously the GTX 680 isn’t three times as powerful as a single GTX 580, but throughout the day they went over all of the pieces that fit together to realize that performance increase.

[Image: Kepler die shot]

First let’s jump into the hardware itself, because that is obviously a big part of the performance increase. At the core of the GTX 580 there were 16 Streaming Multiprocessors (SMs), each with 32 cores, for 512 cores in total. With the GTX 680, Nvidia has moved to what they are calling the SMX. Each SMX has twice the performance per watt of a GTX 580 SM and packs 192 cores. The GTX 680 has a total of eight SMX units, making for a total of 1536 cores!

[Image: GeForce GTX 680 block diagram]

[Image: GeForce GTX 680 SMX diagram]

Like I mentioned before, they said the GTX 680 would be a very efficient card considering its performance. That was obvious as soon as they posted its new TDP of 195 watts, with the card sporting just two six-pin power connections. Two six-pin plugs plus the PCIe slot can deliver up to 225 watts, so there is headroom to spare. For those who are counting, that’s about 50 watts less than the GTX 580 and around 20 less than the HD 7970.

[Image: GeForce GTX 680, three-quarter view with cooler removed]

[Image: GeForce GTX 680]

It’s clear that Nvidia took a look at what Intel has been doing on the CPU side of things when creating the GTX 680. Kepler introduces a new Nvidia technology called GPU Boost. It is similar in concept to Intel’s Turbo Boost, but takes a different approach. The idea is that some games require less actual wattage to compute than others. In the past, Nvidia set clock speeds based on the maximum power usage of the worst-case application. With GPU Boost, the GPU notices when it is being underutilized and boosts the core clock speed and, to a lesser degree, the memory clock speed. You shouldn’t look at this as overclocking, because you can still overclock the GTX 680 on your own. GPU Boost runs all of the time, no matter what, meaning you can’t turn it off even if you would like to. When you overclock, you will actually be changing the base clock speed; GPU Boost will still apply on top of that. For those who are wondering, GPU Boost polls every 100ms to readjust the boost. This means that if a game reaches a demanding section, the card backs its clocks down rather than crashing from pulling too much power.

[Image: GPU Boost diagram]
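To make that polling loop concrete, here is a minimal sketch in Python of how a poll-and-adjust boost scheme like this behaves. The clock numbers and the fake power sensor are illustrative assumptions, not Nvidia’s actual firmware logic:

```python
import random
import time

# Illustrative values only; the real clock and power tables live in the card's firmware.
POWER_TARGET_W = 195     # board power target (the card's TDP)
BASE_CLOCK_MHZ = 1006    # guaranteed base clock
MAX_BOOST_MHZ = 1110     # top of the boost range
STEP_MHZ = 13            # size of one boost step
POLL_INTERVAL_S = 0.1    # GPU Boost polls roughly every 100 ms

def read_board_power(clock_mhz):
    """Fake power sensor: draw scales with clock speed and with how
    demanding the current scene happens to be."""
    scene_load = random.uniform(0.70, 1.05)
    return POWER_TARGET_W * scene_load * (clock_mhz / MAX_BOOST_MHZ)

clock = BASE_CLOCK_MHZ
for _ in range(50):  # simulate a few seconds of "gameplay"
    power = read_board_power(clock)
    if power < POWER_TARGET_W and clock < MAX_BOOST_MHZ:
        clock += STEP_MHZ    # headroom available: boost one step up
    elif power > POWER_TARGET_W and clock > BASE_CLOCK_MHZ:
        clock -= STEP_MHZ    # over the target: back off, but never below base
    print(f"{power:6.1f} W -> {clock} MHz")
    time.sleep(POLL_INTERVAL_S)
```

The important branch is the last one: under a demanding scene the clock steps back toward the base clock instead of letting the card blow past its power target.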

Like I said before, more powerful hardware is only part of the picture when it comes to Unreal’s demo system going from three GTX 580s to one GTX 680. Nvidia spent a lot of time working on FXAA and their new TXAA anti-aliasing techniques. In the demo mentioned, they went from using 4x MSAA to FXAA 3. Not only did they see better performance, it actually did a better job smoothing out the image. I’ve included the images below for you guys to see, along with a quick sketch of the general idea behind FXAA; I think you will agree.

[Image: Samaritan demo, no AA]

[Image: Samaritan demo, 4x MSAA]

[Image: TXAA demo, TXAA]
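FXAA itself is a post-process shader rather than traditional multisampling: it looks for high-contrast luminance edges in the finished frame and blends along them, which is why it costs so little. Here is a grossly simplified sketch of that idea in Python, purely illustrative and not Nvidia’s actual shader:

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # perceptual luminance weights

def fxaa_like(row, threshold=0.1):
    """Grossly simplified FXAA-style pass over one row of RGB pixels:
    where local luminance contrast is high (an edge), blend the pixel
    with its neighbors to soften the stair-step."""
    luma = row @ LUMA
    out = row.copy()
    for x in range(1, len(row) - 1):
        neighborhood = luma[x - 1:x + 2]
        if neighborhood.max() - neighborhood.min() > threshold:  # edge found
            out[x] = (row[x - 1] + 2 * row[x] + row[x + 1]) / 4  # soften it
    return out

# A hard black-to-white edge; after the pass, the boundary pixels are blended.
row = np.array([[0.0, 0.0, 0.0]] * 4 + [[1.0, 1.0, 1.0]] * 4)
print(fxaa_like(row))
```

Because it only reads the final frame, a pass like this costs a fraction of what MSAA does, which is exactly the trade the Samaritan demo made.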

As I mentioned before, on top of talking about FXAA, Nvidia also launched a new anti-aliasing technique called TXAA. TXAA is a mixture of hardware anti-aliasing, a custom CG-film-style AA resolve, and, with TXAA 2, an optional temporal component for better image quality. By taking away some of the traditional hardware anti-aliasing work, TXAA is able to look better than MSAA while performing better as well. For example, TXAA 1 performs like 2x MSAA but looks better than 8x MSAA.

[Image: TXAA chart]

[Image: TXAA demo, no AA]

[Image: TXAA demo, 8x MSAA]

[Image: TXAA demo, TXAA]
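Nvidia didn’t detail TXAA’s exact resolve, but the temporal piece works along the same general lines as any temporal AA: the scene is rendered with a tiny sub-pixel jitter each frame and blended with the accumulated previous frames, so edge pixels settle at their true coverage. A minimal, generic sketch of that technique in Python (an assumption about the general approach, not Nvidia’s actual algorithm):

```python
import numpy as np

def temporal_resolve(current, history, blend=0.1):
    """Generic temporal resolve: keep most of the accumulated history and
    mix in a little of the current frame, so jittered edge samples
    average out over time."""
    return blend * current + (1.0 - blend) * history

# Toy example: a hard edge at x = 4.25, rendered with an alternating
# half-pixel jitter. The pixel straddling the edge flickers between 0 and
# 1 each frame; the resolved history settles near its true coverage (~0.5).
history = np.zeros(8)
for frame in range(64):
    jitter = 0.5 if frame % 2 else 0.0
    current = (np.arange(8) + jitter >= 4.25).astype(float)
    history = temporal_resolve(current, history)
print(np.round(history, 2))
```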

Another feature being introduced with Kepler is Adaptive VSync, which is designed to improve your gaming experience. One complaint people have with high-end GPUs is screen tearing, and one way to prevent it is to turn on VSync. The downside is that when you drop below 60 FPS, VSync forces you down to 30 FPS; this causes a major hiccup and slowdown at a moment when the GPU may actually be able to give you 59 FPS. Adaptive VSync watches for this and turns VSync off whenever you drop below 60 FPS, making the transition much smoother.

[Image: VSync comparison chart 1]

[Image: VSync comparison chart 2]
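The decision itself is simple enough to show in a few lines. Here is a minimal sketch in Python, with a made-up swap_buffers stub standing in for whatever the driver actually calls:

```python
REFRESH_HZ = 60  # the monitor's refresh rate

def swap_buffers(vsync):
    """Stub standing in for the driver's real present call."""
    print("frame presented, vsync", "on" if vsync else "off")

def present_adaptive(frame_time_s):
    """Adaptive VSync in a nutshell: sync while the GPU can hold the
    refresh rate (no tearing), skip the sync when it can't, so a 59 FPS
    scene stays at 59 FPS instead of snapping down to 30."""
    fps = 1.0 / frame_time_s
    swap_buffers(vsync=fps >= REFRESH_HZ)

present_adaptive(1 / 75)  # fast frame: synced, tear-free
present_adaptive(1 / 59)  # slow frame: unsynced, avoids the 30 FPS cliff
```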

On top of all of those important features that improve your gaming experience, there are a few other smaller additions I would like to quickly go over. First, 2D Surround now only requires one GPU to run three monitors. This means you don’t need to run out and pick up a second card just to game with three monitors; as far as performance goes, that will still depend on what you’re playing. To go along with this, they are also introducing 3+1, a setup that allows you to run a fourth monitor above your triple monitors. You can use it while still gaming on the other three, which is perfect for watching notifications or TV shows during slow periods of your gaming.

[Image: Nvidia 3D Vision Surround running Skyrim across a 3+1 monitor setup]

Other Surround improvements include finally moving the taskbar to the middle monitor, something that should have been done from the start, and you can now maximize windows to a single monitor. A small tweak also lets you turn off the bezel adjustment on the fly when running multiple monitors; this is great if you are running your macros in WoW across two monitors, for example, since you won’t miss out on anything hidden behind the bezel correction.

Lastly, they have also tweaked the GPU’s performance to be a little better optimized when you are running a game on just one of your triple monitors.

 


Dreyvas replied the topic: #24106 22 Mar 2012 17:41
Awesome. EVGA step-up process initiated. :)
L0rdG1gabyt3 replied the topic: #24107 22 Mar 2012 17:44
When I saw that this was up, a tear formed in my eye! When I saw that the leaked specs and performance numbers were just about spot on, I laughed triumphantly! When I saw the MSRP, I just about messed my pants!

This WILL be my next video card. (And it makes it easier for me to pitch it to my wife with such a low price!)
Wingless92 replied the topic: #24112 22 Mar 2012 20:04
I would wait for the 4gb cards. Might as well, double the frame buffer is always a good idea.
Myndmelt replied the topic: #24113 22 Mar 2012 20:16
They are doing a Live review over at pcper.com/live right now.
Reaper replied the topic: #24115 22 Mar 2012 20:26
I think I'll be picking one of these cards up. Time to let my esteemed 5870 rotate to my next lowest system. Thanks for posting your review.

Wingless92 wrote: I would wait for the 4gb cards. Might as well, double the frame buffer is always a good idea.


I won't argue that doubling anything is bad, but the frame buffer argument is a bit misleading... Let's say you have a full 1080p Eyefinity setup going (obviously you won't have Eyefinity with nVidia... but for discussion's sake)... the resolution is 5760 x 1080 x 3 bytes (24-bit color depth, you'll never have an alpha channel streamed) = ~17.8 MB. Most games at best support triple buffering, so ~54 MB. Your vertex shader inputs are going to cumulatively be more than that.
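In Python, for anyone who wants to sanity-check the arithmetic:

```python
# Frame buffer for a 5760 x 1080 surround desktop at 24-bit color.
width, height, bytes_per_pixel = 5760, 1080, 3
single = width * height * bytes_per_pixel
print(round(single / 2**20, 1))      # ~17.8 MB for one buffer
print(round(3 * single / 2**20, 1))  # ~53.4 MB triple-buffered
```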

In general the only real thing you're affecting by increasing VRAM these days is how many texture maps you can fit in RAM. A ridiculous maximum texture size (8192 x 8192 x 4) still only amounts to ~333 MB of VRAM, even when mipmapped.
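Same deal for the texture math (the exact total depends on how you count the mip chain, but a full chain adds about a third over the base texture):

```python
# VRAM for an 8192 x 8192 RGBA8 texture with a full mip chain.
size, total = 8192, 0
while size >= 1:
    total += size * size * 4  # 4 bytes per pixel at this mip level
    size //= 2
print(round(total / 2**20))  # ~341 MB: mipmaps add about a third over the 256 MB base
```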

Just saying there's no practical reason (that I can determine) not to go ahead with one of these cards in case anyone else is on the fence.
Wingless92 replied the topic: #24116 22 Mar 2012 20:54
When I had my Surround setup, I know if I had had another 3gb EVGA Classified card I would have seen better FPS with the bigger frame buffer. The game runs better the more it can load into the buffer. Also, now that I'm on a 30in 2560x1600 panel, it's still pushing the GPUs really hard. Once the 580s come down I'm going to do a third.

With everything cranked to the max in Alan Wake I am seeing right around 60FPS.

Sure, if you are on a 1080p monitor, then this is overkill. Save the cash and buy something less powerful. These cards are meant for multiple displays or 30in panels. Other than Battlefield 3, no one is playing intensive games. Most people play LoL, TF2 and DOTA2, and you don't need a 680 to run those games; a 560 Ti would be fine. If I had a lower-end card I would be looking for the new 660 Ti whenever that comes out.
garfi3ld replied the topic: #24117 22 Mar 2012 21:04
Wait, you're looking for more than 60 FPS? What's the point in that :-P
Wingless92 replied the topic: #24118 22 Mar 2012 21:08
I was happy with that. Before the 1.04 patch it would only do 30FPS.
jj_Sky5000 replied the topic: #24119 22 Mar 2012 22:20
The price is the most shocking & they have left the door open as well, with the lower power consumption!!!! I think I will be adding Keplers to my collection!!!
Reaper replied the topic: #24120 22 Mar 2012 22:41

Wingless92 wrote: When I had my Surround setup, I know if I had had another 3gb EVGA Classified card I would have seen better FPS with the bigger frame buffer. The game runs better the more it can load into the buffer. Also, now that I'm on a 30in 2560x1600 panel, it's still pushing the GPUs really hard. Once the 580s come down I'm going to do a third.

Hrm... you'd need to clarify that all up for me before I'd give a real serious reply, but just quickly, and to keep it all in context of what I was sayin' earlier: a frame buffer has zero to do with frame rate on today's graphics cards due to its negligible size - it's a small drop in a huge bucket. The (primary) frame buffer is a static reserved area in VRAM based upon the currently selected monitor resolution and color depth. The GPU streams the frame buffer to the monitor (or monitors) once per VSYNC strobe, regardless of whether you have VSYNC delay turned on in your game. So the frame buffer really doesn't affect frame rate.

You mentioned "had another 3gb ... card"; what I'm not sure of here is whether you mean two SLI'd cards. Of course that's going to process faster, but everyone knows that, so I'm not sure what you were getting at there :-D.

jj_Sky5000 wrote: The price is the most shocking & they have left the door open as well, with the lower power consumption!!!! I think I will be adding Keplers to my collection!!!


Amen brotha... Now just time to pick a manufacturer!
Dreyvas replied the topic: #24121 22 Mar 2012 23:00
EVGA's website seems to be down right now, heh.
Myndmelt replied the topic: #24122 22 Mar 2012 23:07
Check out this video about it.
Wingless92 replied the topic: #24127 23 Mar 2012 00:09
The new Precision and OC Scanner apps from EVGA are looking pretty good. I use OC Scanner all the time to test my cards out.

Anyone else look over on Newegg and see that PNY was trolling and asking $20 more for their cards? I'll take EVGA until they go downhill which I hope will be never.
Reaper replied the topic: #24130 23 Mar 2012 00:26

Wingless92 wrote: Anyone else look over on Newegg and see that PNY was trolling and asking $20 more for their cards? I'll take EVGA until they go downhill which I hope will be never.


It's actually $30 (right now) ... but yeah I am angling towards picking up an EVGA. Several NewEgg gift cards ready to be used up :-)
Arxon replied the topic: #24158 23 Mar 2012 09:53
Sadly this card isn't much better than a 7970. Wait till ATi/AMD come out with the 7990.

"AMD is expected to release its flagship dual-GPU graphics card based on the "Tahiti" platform in 2012. We are learning under the codename "New Zealand", AMD is working on the monster HD 7990, which will utilize two HD 7970's and 6GB of total graphics memory (3 GB per GPU system). If AMD will indeed go for the HD 7970, this could mean that the card will include 62 compute units for a total of 4096 stream processors, 256 texture units and 64 full color ROPs."
garfi3ld replied the topic: #24159 23 Mar 2012 10:32
Arxon, you have to remember two things. A two-GPU card isn't "topping" Nvidia; they can do the same thing. Second, not being "much" better still amounts to a large gap when you consider the price difference right at launch.

This isn't coming from someone with any bias; I even gave two awards to AMD cards this week.
garfi3ld replied the topic: #24160 23 Mar 2012 10:37
Also, I think the Nvidia drivers, although much more mature and stable than what AMD launched with, still have a lot of room for improvement with in-game performance. 3DMark shows what the card is capable of. I wouldn't be shocked if we saw major improvements in some games by the time we get around to cards like the GTX 670 or 660.
Arxon replied the topic: #24161 23 Mar 2012 10:56
I didn't realize the 7990 had two GPUs when I made the first comment; then I read about it. As for price, from what I see it's $50 cheaper, but the ATI has more stream processors and faster RAM with a slower core clock on stock models. Not much of a gain for $50, but I wouldn't have to buy a second one for triple monitors. Unless Nvidia changed that. (Just watched that video, and at the end they did change it to where you only need one card.)
garfi3ld replied the topic: #24162 23 Mar 2012 11:02
Nvidia changed that. And I wouldn't pay the same price, let alone more, for a GPU that is slower, no matter how many stream processors it has. :-P

We will have to revisit everything once the dust settles.
Arxon replied the topic: #24163 23 Mar 2012 11:05
Yeah, I wanna see the 690 now to see if it is gonna be like the 7990 with dual 680s
THUMPer replied the topic: #24173 23 Mar 2012 18:14
Where's the GPU compute performance? After all, Nvidia created the GPGPU market and the CUDA programming environment. NV can tweak their drivers for games and benchmarks, that's obvious, but they shat where they sleep.
AMD needs to bring their prices down or they won't sell anything. I would like to see more info on GPU Boost.
Wingless92's Avatar
Wingless92 replied the topic: #24175 23 Mar 2012 18:18
I'm liking the new card. Is it going to make me run out and buy 3 new cards? Nah, I'll just buy another 580, run tri-SLI, and be set for a long time.
L0rdG1gabyt3 replied the topic: #24179 23 Mar 2012 18:48

THUMPer wrote: Where's the GPU compute performance? After all, Nvidia created the GPGPU market and the CUDA programming environment. NV can tweak their drivers for games and benchmarks, that's obvious, but they shat where they sleep.
AMD needs to bring their prices down or they won't sell anything. I would like to see more info on GPU Boost.

According to the live stream over at pcper yesterday... the Nvidia rep said that compute performance is going to be more focused on the Quadro line as a way to differentiate the two lines of cards. It's sad.

On another note.... the 680s seem to scale pretty well!

