You have all seen and heard the rumors for months now about Nvidia’s upcoming GPU, code named Kepler. When we got the call that Nvidia was inviting editors out to San Francisco to show us what they have been working on, we jumped at the chance to finally put the rumors to rest and see what they really had up their sleeve. Today we can tell you all the gory details and dig into the performance of Nvidia’s latest flagship video card. Will it be the fastest single-GPU card in the world? We finally find out!

Product Name: Nvidia GTX 680 Kepler

Review Sample Provided by: Nvidia

Review by: Wes

Pictures by: Wes


Kepler: what’s it all about?

[Image: GeForce logo]

During the editors’ day Nvidia opened things up by boldly stating that the new GTX 680 will be the most powerful and most efficient card on the market. Soon after, they brought out Mark Rein, Vice President of Epic Games, to show off the Unreal Samaritan demo. When the demo was originally shown last year, it required a rig powered by three GTX 580s. While playing the demo he pointed out that the same demo is now running great on just one card, the Kepler-based GTX 680. Obviously the GTX 680 isn’t three times as powerful as a single GTX 580, but throughout the day they went over all of the pieces that fit together to realize that performance increase.

[Image: Kepler die shot]

First let’s jump into the hardware itself, because that is obviously a big part of the performance increase. At the core of the GTX 580 there were 16 Streaming Multiprocessors (SMs), each with 32 cores. With the GTX 680, Nvidia has moved to what they are calling the SMX. Each SMX has twice the performance per watt of a GTX 580 SM, and each has 192 cores. The GTX 680 has a total of eight SMX multiprocessors, making for a total of 1536 cores!
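
For those keeping score at home, the core-count arithmetic works out like this (a quick sketch in Python, using only the figures above):

```python
# Core counts implied by the multiprocessor layouts described above
gtx580_cores = 16 * 32   # 16 SM units  x  32 cores each
gtx680_cores = 8 * 192   # 8 SMX units  x 192 cores each

print(gtx580_cores, gtx680_cores)    # 512 1536
print(gtx680_cores / gtx580_cores)   # 3.0 -> three times the cores
```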

[Image: GeForce GTX 680 block diagram]

[Image: GeForce GTX 680 SM diagram]

Like I mentioned before, they said the GTX 680 would be a very efficient card considering its performance. That was obvious as soon as they posted its new TDP of 195 watts, sporting just two six-pin power connections. For those who are counting, that’s about 50 watts less than the GTX 580’s 244 watt TDP and below the HD 7970’s 250 watt rating as well.

[Image: GeForce GTX 680, three-quarter view without the thermal solution]

[Image: GeForce GTX 680]

It’s clear that Nvidia took a look at what Intel has been doing on the CPU side of things when creating the GTX 680. Kepler introduces a new Nvidia technology called GPU Boost. This is similar to Intel’s Turbo Boost, but with a different approach. The idea is that some games require less actual wattage to run than others. In the past, Nvidia set the clock speed based on the maximum power usage in the worst-case application. With GPU Boost, the GPU is able to notice that it is being underutilized, and it will boost the core clock speed and, to a lesser degree, the memory clock speed. You shouldn’t look at this as overclocking, because you can still overclock the GTX 680 on your own. GPU Boost runs all of the time, no matter what, meaning you can’t turn it off even if you would like to. When you overclock in the future you will actually be changing the base clock speed; GPU Boost will still run on top of that. For those who are wondering, GPU Boost polls every 100ms to readjust the boost, so if a game reaches a demanding section the card backs the boost off rather than crashing from pulling too much power.
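
To make that behavior a little more concrete, here is a rough Python sketch of the kind of control loop Nvidia described. Only the 100ms polling interval, the 195 watt power target, and the base-plus-boost relationship come from their presentation; the function names, base clock value, and step size are illustrative assumptions.

```python
import time

BASE_CLOCK_MHZ = 1006   # assumed base clock; overclocking would raise this value
POWER_TARGET_W = 195    # the card's power target, per the TDP above
POLL_INTERVAL_S = 0.1   # GPU Boost re-evaluates every 100 ms
STEP_MHZ = 13           # illustrative boost step size, not an Nvidia figure

def gpu_boost_loop(read_board_power_w, set_core_clock_mhz):
    """Conceptual sketch: raise the clock while there is power headroom,
    and back off toward the base clock when a demanding scene closes it."""
    boost = 0
    while True:
        if read_board_power_w() < POWER_TARGET_W:
            boost += STEP_MHZ        # headroom available -> boost up
        elif boost > 0:
            boost -= STEP_MHZ        # scene got demanding -> back off
        set_core_clock_mhz(BASE_CLOCK_MHZ + boost)
        time.sleep(POLL_INTERVAL_S)
```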

[Image: GPU Boost]

Like I said before, more powerful hardware is only part of the picture when it comes to Epic’s Samaritan demo going from three GTX 580s to one GTX 680. Nvidia spent a lot of time working on FXAA and their new TXAA anti-aliasing techniques. In the Samaritan demo they went from 4x MSAA to FXAA 3. Not only did they see better performance, it actually did a better job smoothing out the image. I’ve included the images below for you to see; I think you will agree.

[Image: Samaritan demo, no AA]

[Image: Samaritan demo, 4x MSAA]

[Image: TXAA demo, TXAA]

As I mentioned, on top of talking about FXAA, Nvidia launched a new anti-aliasing technique called TXAA. TXAA is a mixture of hardware anti-aliasing, a custom CG film-style AA resolve, and, when using TXAA 2, an optional temporal component for better image quality. By taking away some of the traditional hardware anti-aliasing, TXAA is able to look better than MSAA while performing better as well. For example, TXAA 1 performs like 2x MSAA but looks better than 8x MSAA.
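
Nvidia didn’t detail how the temporal component is implemented, but temporal AA in general works by blending each new frame into a running history of previous frames. A minimal numpy sketch of that general idea, not of TXAA itself; the blend weight is arbitrary, and a real renderer would first reproject the history with per-pixel motion vectors:

```python
import numpy as np

def temporal_resolve(current_frame: np.ndarray, history: np.ndarray,
                     blend: float = 0.1) -> np.ndarray:
    """Fold the newest frame into a running history buffer; edges get
    smoothed over time because each frame samples them slightly differently.
    The 0.1 weight is an arbitrary illustrative choice."""
    return blend * current_frame + (1.0 - blend) * history
```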

[Image: TXAA]

[Image: TXAA demo, no AA]

[Image: TXAA demo, 8x MSAA]

[Image: TXAA demo, TXAA]

Another feature being introduced with Kepler is Adaptive VSync, designed to improve your gaming experience. One complaint people have with high-end GPUs is screen tearing, and one way to prevent it is to turn on VSync. The downside is that when you drop below 60 FPS, VSync forces you down to 30 FPS; this causes a major hiccup and slowdown when the GPU may actually be able to give you 59 FPS. Adaptive VSync watches for this and turns VSync off whenever you drop below 60 FPS, making the transition smoother.
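
The rule itself is simple enough to write out. Here is a small sketch of the behavior as described above, assuming a 60 Hz display; the names are mine, not Nvidia’s:

```python
REFRESH_RATE_FPS = 60

def vsync_enabled(current_fps: float) -> bool:
    """Adaptive VSync as described above: sync while the GPU can keep up
    with the display, and drop the sync below 60 FPS so the frame rate
    falls to 59, 58, ... instead of snapping straight down to 30."""
    return current_fps >= REFRESH_RATE_FPS
```

The trade-off is the possibility of a little tearing during dips, in exchange for avoiding that hard 60-to-30 FPS step.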

[Image: Adaptive VSync 1]

[Image: Adaptive VSync 2]

On top of all of those important features that improve your gaming experience, there are a few smaller additions I would like to quickly go over. First, 2D Surround now only requires one GPU to run three monitors. This means you don’t need to pick up a second card just to game across three monitors; as far as performance goes, it will still depend on what you’re playing. To go along with this, they are also introducing 3+1, a setup that lets you run a fourth monitor above your triple monitors. You can use it while still gaming on the three, which is perfect for watching notifications or TV shows during slow periods of your gaming.

[Image: Nvidia 3D Vision Surround running Skyrim]

Other Surround improvements include finally moving the taskbar to the middle monitor, something that should have been done from the start. You can now also maximize windows to a single monitor. Another small tweak lets you turn off the bezel adjustment on the fly when running multiple monitors; this is great if, for example, you run your macros in WoW across two monitors, since you won’t miss anything hidden behind the virtual bezel.

Lastly, they have also tweaked the GPU’s performance to be a little better optimized when you are running a game on just one monitor of your triple-monitor setup.

 

Posted: 2 years 8 months ago by Wingless92 #24473
I like the Bitcoin test. Also, folding has a huge following that people use some crazy setups for.
Posted: 2 years 8 months ago by garfi3ld #24472
yeah the scoring doesn't do a good job comparing cards. It gives as much favor to temperature performance as it does FPS. The problem is they can't be compared on the same scale. By doing it the way they do it, You end up with higher scores for lower performing cards because they put out more heat. Their score does show what card is the most efficient heat/performance, but thats not the card that most people are going to be looking for.
Posted: 2 years 8 months ago by Shin0bi272 #24471
I agree. Synthetic benchmarks designed to try to kill your video card arent a good benchmark for anything other than the cooler on the card and the power supply in your pc. What irks me though is their own website says the object of their program is to get the highest fps with the lowest temp then their burn-in score is lower for a card that got higher fps and lower temp than another card. Its like saying the purpose of your bake sale is to send the senior class on a trip and then taking the money and blowing it all on shoes.

[edit] ok the reply system didnt act like I was thinking it would... sorry. This reply was to Nacelle[/edit]
Posted: 2 years 8 months ago by Shin0bi272 #24470
the fps and temps wouldnt be a bad way to compare generally but their scoring system seems out of whack. Tomshardware and anandtech are moving to other programs (like bitcoin) to stress the hardware with a program that doesnt kick the safeguards in.
Posted: 2 years 8 months ago by garfi3ld #24455
Wingcmdr77 wrote:
So am I to assume like most series of cards this is going to start dropping the 500 series
a bit more that it has already? Might need to get another 570. Seems that's how its typically been..

it will drop with/ahead of the launch of comparable cards. But you also have to watch it because as they start to run out of stock the prices will actually go up a lot too
Posted: 2 years 8 months ago by Wingcmdr77 #24454
So am I to assume like most series of cards this is going to start dropping the 500 series
a bit more that it has already? Might need to get another 570. Seems that's how its typically been..
Posted: 2 years 8 months ago by Nacelle #24449
Furmark is one of those benchmarks that I ignore, along with 3dmark. They're kind of meaningless. I go by the benchmarks from games that I actually play.
Posted: 2 years 8 months ago by garfi3ld #24440
Yeah, we only use Furmark for the temperatures, the FPS and scores are not a good way to compare cards, but we do keep them in the graph for anyone to see them if they would like.

Both AMD and Nvidia have safeguards to actually pull performance to protect the card from being damaged when using Furmark.
Posted: 2 years 8 months ago by Shin0bi272 #24439
But it should still score higher if the fps are higher and the temps are lower... but it doesnt. Its a bad benchmark that claims a card that performs worse is a better card.
Posted: 2 years 8 months ago by Nacelle #24438
It's because of the auto-overclock in the 680. If thermals are low, it will move the clock speed up. Since Furmark is intended to peg a GPU and make it hot, the overclock doesn't happen much, if at all.
Posted: 2 years 8 months ago by Shin0bi272 #24437
Did anyone else notice that the furmark burn-in scores are basically whacked out? 7970 black OC edition scores higher than the 680 even though it got lower fps and higher temps. Maybe furmark is just biased towards amd
Posted: 2 years 8 months ago by L0rdG1gabyt3 #24184
From what I hear, it does have enough compute power to still help when using Adobe CS5+ or other video editing software like Sony Vegas.
Posted: 2 years 8 months ago by THUMPer #24183
L0rdG1gabyt3 wrote:
According to the live stream over at pcper yesterday.... the nvidia rep said that the compute performance is going to be more focused on the Quadro line as a way to differentiate the two lines of cards. Its sad.
Yeah, they really crippled the 680 to force people to buy there $1200 card. AMD looks good for compute if that's what you want to do.
Posted: 2 years 8 months ago by L0rdG1gabyt3 #24179
THUMPer wrote:
Wheres the GPU Compute performance? After all, Nvidia created the GPGPU market and CUDA programming environment. NV can tweak their drivers for games and benchmarks, that's obvious, but they shat where they sleep.
AMD needs to bring their prices down or they won't sell anything. I would like to see more info on the gpu boost.
According to the live stream over at pcper yesterday.... the nvidia rep said that the compute performance is going to be more focused on the Quadro line as a way to differentiate the two lines of cards. Its sad.

On another note.... the 680s seem to scale pretty well!

Posted: 2 years 8 months ago by Wingless92 #24175
I'm liking the new card. Is it going to make me run out and buy 3 new cards? Na, i'll just buy another 580, run tri-SLi and be set for a long time.
Posted: 2 years 8 months ago by THUMPer #24173
Wheres the GPU Compute performance? After all, Nvidia created the GPGPU market and CUDA programming environment. NV can tweak their drivers for games and benchmarks, that's obvious, but they shat where they sleep.
AMD needs to bring their prices down or they won't sell anything. I would like to see more info on the gpu boost.
Posted: 2 years 8 months ago by Arxon #24163
Yeah, I wanna see the 690 now to see if it is gonna be like the 7990 with dual 680s
Posted: 2 years 8 months ago by garfi3ld #24162
Nvidia changed that.. And I Wouldn't pay the same price let alone more for a GPU that is slower. No matter how many stream processors it has. :-P

We will have to revisit everyone once the dust settles
Posted: 2 years 8 months ago by Arxon #24161
I didn't realize the 7990 had two gpus when i made the first comment, then i read about it. As for price from what I see $50 cheaper but the ATI has more stream processors faster ram but a 8ghz slower core on stock models. Not much of a gain for 50 but i wouldn't have to buy a second one for triple monitors. Unless Nvidia changed that.(just watched that video and at the end they did change it to where you only need one card.)
Posted: 2 years 8 months ago by garfi3ld #24160
Also I think the Nvidia drivers, although much more mature and stable than what AMD launched with, still have a lot of room for improvement with ingame performance. 3DMark shows what the card is capable of. I wouldn't be shocked if we saw major improvements in some games by the time we get around to cards like the GTX 670 or 660.
Posted: 2 years 8 months ago by garfi3ld #24159
Arxon you have to remember two things. A two GPU card isn't "topping" Nvidia, they can do the same thing. Second, not being "much" better is still a large gap considering the price difference right at launch.

This isn't coming from someone with any bias, I gave two awards to AMD cards this week even.
Posted: 2 years 8 months ago by Arxon #24158
Sadly this card isn't much better than a 7970. Wait till ATi/AMD come out with the 7990.

"AMD is expected to release its flagship dual-GPU graphics card based on the "Tahiti" platform in 2012. We are learning under the codename "New Zealand", AMD is working on the monster HD 7990, which will utilize two HD 7970's and 6GB of total graphics memory (3 GB per GPU system). If AMD will indeed go for the HD 7970, this could mean that the card will include 62 compute units for a total of 4096 stream processors, 256 texture units and 64 full color ROPs."
Posted: 2 years 8 months ago by Reaper #24130
Wingless92 wrote:
Anyone else look over on Newegg and see that PNY was trolling and asking $20 more for their cards? I'll take EVGA until they go downhill which I hope will be never.

It's actually $30 (right now) ... but yeah I am angling towards picking up an EVGA. Several NewEgg gift cards ready to be used up :-)
Posted: 2 years 8 months ago by Wingless92 #24127
The new Precision and OC Scanner apps from EVGA are looking pretty good. I use OC Scanner all the time to test my cards out.

Anyone else look over on Newegg and see that PNY was trolling and asking $20 more for their cards? I'll take EVGA until they go downhill which I hope will be never.

 
