
Message boards : Graphics cards (GPUs) : Fermi released

Author Message
showa
Joined: 2 Mar 09
Posts: 28
Credit: 4,975,808
RAC: 0
Level
Ala
Message 15992 - Posted: 27 Mar 2010 | 0:09:44 UTC

Now that benchmarks are flourishing all over the net, I hope someone will very soon get their hands on a 470 (or even a 480) and crunch GPUGRID WUs. GPU computing certainly isn't part of a "standard" bench...
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 15993 - Posted: 27 Mar 2010 | 0:14:32 UTC - in response to Message 15992.

Anand says that at folding, a GTX480 is 3.5 times faster than a GTX285. Impressive!

MrS
____________
Scanning for our furry friends since Jan 2002

Zydor
Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Level
Ala
Message 15996 - Posted: 27 Mar 2010 | 2:57:28 UTC
Last modified: 27 Mar 2010 | 3:49:31 UTC

Excellent review (as always) from Guru3D at: GTX 480 Review Guru3d

The hardware-based tessellation is superb, and the new 32x AA mode promises much better graphics quality.

Power, as widely flagged pre-release, is horrendous: adding 250 W on top of base system + single card for each card in SLI is going to eat up PSUs, and will mean stepping up a PSU class in many cases. That will give many pause for thought. GTX 480 Power needs

The clear performance improvement of 10% or so over a 5870 is not reflected in 3DMark. Usually with Guru3D reviews that benchmark is not far off their real-world game tests; not so this time. As it's a benchmark it's academic - real-world use obviously has more credence - but interesting nonetheless. 3DMark Vantage (DirectX 10) Performance

The GTX295 appears to be EOL in practice, if not formally, as they don't make them any more. So in the world of NVidia and CUDA, Fermi is it. Let's hope there are others lower down the totem pole (420/430/440, whatever) - I can't believe there won't be - because without them NVidia will have a hard time against the "average" PC user and ATI, due to cost and power.

I get the impression it's "unfinished business" and there is much more to come if they can only sort out the power needs. At 250 W TDP it's going to be one hell of a hot mother... and will need a PSU one step above the "norm", plus a review of case cooling for high-end users. As for 2/3/4 of them in SLI...

Personally, given the funds and availability, I'll probably end up getting one, as my second box has a 600 W PSU anyway - but only because of the impending new GPUGRID project. Without any CUDA needs I would not buy one: the power, heat, and cost are just too much in comparison to a 5970/5870, and the performance increases don't offset the pain in those areas in my personal situation. Everyone has their own needs and drivers, of course.

Overall: nice card, performs well, shame about the heat and power, and I wish they had come out with it last year. It will keep them in touch with ATI, but will not set the graphics race alight until a mid-term refresh or Fermi2.

Regards
Zy

Jack
Joined: 18 May 09
Posts: 10
Credit: 200,701,509
RAC: 0
Level
Leu
Message 15997 - Posted: 27 Mar 2010 | 3:28:11 UTC

I'll be sticking with my 216-core GTX260 for now because I can't afford an upgrade, but the specs are incredible. It looks like the card is made on a 40nm manufacturing process; maybe they'll be able to shrink the die to 32nm by the time I have enough $ to get myself one.

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 15999 - Posted: 27 Mar 2010 | 5:25:38 UTC

Not really excited. Performance-wise against my OC'ed GTX275 it's anywhere from 0% to 50% faster, mostly 25-35%. Really not that much. I'm not sure what Anand is talking about - just tens of percent.

Power consumption: I've got a good 720 W PSU, so it's not an issue for me. But: moving to the 40nm process half a year ago, ATI managed to get below 200 W even for the 5870. Nvidia's lower-end GTX470 consumes more...

Heat: this is a problem. Running a GPU that hot 24/7 IMO requires certain measures for better cooling. I've got a CM HAF932, so it should not be that bad for me, but still...

Voltage tweaking: another problem. It's a real pity that even if I fix the heat problem somehow, I cannot OC the card beyond the granted 100 MHz limit.

Price: well overpriced against ATI.

In general, I'm not disappointed, because I was prepared :-) See the next-door thread; that information was right, even about Vantage :-) Will I consider buying one? Not the A3 revision, at least. And again, it makes sense to get a dual-GPU card for GPUGRID, which will be available in Q3 this year. BTW, in early Q3 ATI is releasing 28nm cards (5890 & 5990?), and nvidia will release 28nm cards only next year (Fermi2, maybe).

Zydor
thx for the link :-)
____________

showa
Joined: 2 Mar 09
Posts: 28
Credit: 4,975,808
RAC: 0
Level
Ala
Message 16001 - Posted: 27 Mar 2010 | 7:29:41 UTC - in response to Message 15993.

Anand says that at folding, a GTX480 is 3.5 times faster than a GTX285. Impressive!

MrS

It truly is! But I'll wait for a revision: power consumption and heat are too big issues to buy a Fermi now.
____________

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 16003 - Posted: 27 Mar 2010 | 9:41:41 UTC - in response to Message 16001.

The heat block on the GTX480 just begs to have a system fan blowing directly onto it.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16006 - Posted: 27 Mar 2010 | 10:36:56 UTC - in response to Message 15999.

I'm not sure what Anand is talking about - just tens of percent.


When I said "folding" I meant Folding@home. In games it's of course as fast as you said.

Regarding power consumption: nVidia is using many more transistors than ATI, that's why their power consumption is naturally higher.

Voltage tweaking: with a card like the GTX480 I'd probably consider voltage tweaking... lowering it to reduce heat, noise and power consumption! There's probably not much room left, though. And I wouldn't increase the voltage with anything less than water cooling. An Accelero Xtreme could manage the heat, though, if the software doesn't load the card 100% (e.g. SETI) and there's very good case ventilation.

BTW, in early Q3 ATI is releasing 28nm cards (5890 & 5990?)


Let's wait and see ;)
Sure, it's on some leaked roadmap, but 28 nm is not exactly easier than 40 nm, is it? I'll remain sceptical until I see more solid evidence. Besides, I read that ATI plans a refresh around summer. This would still be made at TSMC, as there hasn't been enough time yet to work with GF and to develop their new general-purpose processes. And 32 nm has disappeared from TSMC's roadmaps, so I suppose the refresh will be based on TSMC's 40 nm. The next release (28 nm, probably a 6000 series, since they'd be running out of numbers in the 5000 series) would then not come until at least half a year after that summer refresh. Whatever they end up doing, it's going to be interesting :)

MrS
____________
Scanning for our furry friends since Jan 2002

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16013 - Posted: 27 Mar 2010 | 13:57:54 UTC - in response to Message 16006.

ExtraTerrestrial Apes,

I know that you're talking about F@H :-) I just can't understand where the 3.6 times comes from. I see no reason for it... But let's wait and see the real RAC a 480 can provide.

nVidia is using many more transistors than ATI - so what? They are only 5-10% faster than the 5870...

About voltage tweaking: I absolutely agree that it's necessary to do something about the heat, but I got water cooling or an Accelero Xtreme - why can I do nothing?

About ATI: Global Foundries (Fab 1 in Dresden) is almost done with the 28nm process and will be ready in Q3 to produce the 5890 & 5990. Furthermore, ATI is not happy with TSMC (the famous story with the 4770 last summer), but at that time Fab 1 was not ready for the 5xxx cards and ATI could not wait to release them, so they went with TSMC.
____________

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16015 - Posted: 27 Mar 2010 | 18:56:17 UTC
Last modified: 27 Mar 2010 | 19:02:46 UTC

http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1626

NVIDIA is limiting the double-precision speed of the desktop GF100 part to one-eighth of single-precision throughput, rather than keeping it at half-speed, as per the Radeon HD 5000 series


no comments...
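As a back-of-the-envelope illustration of what that 1/8 cap means: the sketch below uses approximate launch-era shader counts and clocks (an assumption, not official specs), and leaves the SP:DP ratio as a parameter, since even this thread disagrees on ATI's figure.

```python
# Rough sketch of theoretical DP throughput under different SP:DP ratios.
# Shader count and clock are approximate launch figures, used purely for
# illustration; treat them as assumptions, not measured data.

def gflops_sp(shaders, clock_mhz, flops_per_clock=2):
    """Theoretical single-precision GFLOPS, counting FMA as 2 FLOPs/clock."""
    return shaders * clock_mhz * flops_per_clock / 1000.0

def gflops_dp(sp, ratio):
    """Double-precision throughput as a fraction of single precision."""
    return sp * ratio

gtx480_sp = gflops_sp(480, 1401)            # ~1345 GFLOPS SP
print(f"GTX480 DP at 1/2 SP: {gflops_dp(gtx480_sp, 1/2):.0f} GFLOPS")
print(f"GTX480 DP at 1/8 SP: {gflops_dp(gtx480_sp, 1/8):.0f} GFLOPS")
```

Whatever the exact clocks, the capped card loses a factor of four of DP throughput relative to what the silicon could do at half-rate.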


and a bit more "sugar in the beer"
http://www.youtube.com/watch?v=WOVjZqC1AE4
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16016 - Posted: 27 Mar 2010 | 21:19:04 UTC - in response to Message 16013.

I just can't understand where the 3.6 times comes from. I see no reason for it... But let's wait and see the real RAC a 480 can provide.

It's right there ;)
The point is that a GTX480 has "only" about twice as much raw single-precision shading power as a GTX285, but that power can be used more efficiently: the larger caches should help hide memory latency, and the parallel execution of different kernels could speed things up tremendously. It all depends on the application, though. And games are not where these features give the most benefit.
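For what it's worth, the "about twice" figure can be reproduced with simple FMA arithmetic; the clock speeds below are approximate retail values and purely an assumption of this sketch.

```python
# Raw single-precision FMA throughput: shaders * clock * 2 FLOPs/clock.
# Clocks are approximate retail figures (an assumption, not a spec sheet).
def fma_gflops(shaders, shader_clock_mhz):
    return shaders * shader_clock_mhz * 2 / 1000.0

gtx285 = fma_gflops(240, 1476)   # ~708 GFLOPS
gtx480 = fma_gflops(480, 1401)   # ~1345 GFLOPS
print(f"GTX480 / GTX285 raw SP ratio: {gtx480 / gtx285:.2f}x")  # ~1.90x
```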

nVidia is using many more transistors than ATI - so what? They are only 5-10% faster than the 5870...

The transistor count directly results in the much higher power consumption (for otherwise rather similar chips); that's the only reason I mentioned it here.

but I got water cooling or an Accelero Xtreme - why can I do nothing?

What do you mean?

Regarding your link to the MW forums: that would be a catastrophe for us, but it would go a long way towards pushing the scientific community towards Teslas. At the same time they'd devalue CUDA, as end users also benefit from double precision. And anyone who runs mission-critical apps is probably already running Teslas anyway. They could achieve a similar market segmentation by disabling ECC on the GeForce cards. And to quote myself:

That report looks dodgy. The author seems to be confused about the benefits of IEEE compliance and quotes ATI's cards as having 1/2 the SP speed in DP. It should actually be 2/5, which is 40% instead of 50%.

Just because I think it would be a stupid move, and the messenger is imprecise in other points, does not necessarily make it untrue. It's certainly a point to keep an eye on - just don't take it for granted yet.

MrS
____________
Scanning for our furry friends since Jan 2002

Zydor
Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Level
Ala
Message 16018 - Posted: 27 Mar 2010 | 23:07:15 UTC
Last modified: 27 Mar 2010 | 23:09:17 UTC

The discussion has split into two threads - and I have a feeling neither side has realised the different mindset behind the figures and logic in each response.

The figures and discussion re the MW thread, the Anand review et al turn on the DirectCompute/CUDA aspects of the card, and the crippling or otherwise of the various elements to do with the compute functions (DirectCompute/CUDA software et al). Hence the figures of 6 or 3.5 or 40% et al.

That has nothing to do with the other side of the thread, where individuals are talking about the real-world make-up of the card: shader performance, memory speed, power usage, and end-user delivered performance figures.

Take one set of figures from the compute side and refer to them against the user-end side of the card (or vice versa) and confusion reigns supreme :) It's chalk and cheese...

The scientific/developer focus is on the compute capabilities; by and large, us mere crunching mortals focus on the real-world end performance of the beast. Two different mindsets, two unrelated sets of figures, and two conversations going on in parallel where neither side has twigged the basis of the figures from the other side :)

Regards
Zy

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16019 - Posted: 27 Mar 2010 | 23:15:47 UTC

Guys, I do understand what you're talking about. I'm really curious to see the real RAC of a GTX400 in GPUGRID or F@H or whatsoever, because these reviews are all about fps in games, and that's not what we are looking for, right? :-)

Pls post here about the real productivity of the GTX400.
____________

liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Level
Val
Message 16020 - Posted: 28 Mar 2010 | 0:06:45 UTC - in response to Message 16018.
Last modified: 28 Mar 2010 | 0:18:52 UTC

So there are two different arguments. But like it or not, they do correlate. How? Many who crunch on GPUGRID.net are said to have middle- to high-end GPUs. Not everybody has the funds, or the interest, to buy a high-end GPU simply to crunch 24/7, which brings to mind the motivation for buying a Fermi.

GPUGRID.net uses these Nvidia GPUs with superior compute capabilities, and that's great for GPUGRID.net and great for Nvidia. But if it's only good for GPUGRID.net, then unless I'm a super fan I'm probably not going to buy a Fermi, if the only thing good about it is a compute capability I would only use for GPUGRID.net.

If the selling argument is CUDA, then more software needs to support and use CUDA for a niche to be the reason and motivation behind spending $350-$500 on a new GPU that eats lots of power and doesn't offer something other GPUs can't. That's just my opinion of things.
____________

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16021 - Posted: 28 Mar 2010 | 5:20:14 UTC

Me personally - I wouldn't mind spending $200 on a GTX470 (get it for $350 and sell my GTX275 for $150), but only if the GTX470 is worth those $200, i.e. at least twice as fast as the GTX275. Otherwise it's just a waste of money and makes no sense at all.

Games - I'm not that great a gamer, just 2-3 hours on Friday and Saturday nights to shoot bastards, and I'm done :-)

But again, heat is a real issue for us. The GTX470's cooling system is much weaker than the GTX480's, so I doubt the GTX470 can handle that much. And buying a GTX480 just for its cooling system is ridiculous :-)

Voltage tweaking - I agree: if the GPU is that hot at stock voltage, I'm pretty scared to go beyond it.

So, to make a long story short, I'm curious to see real results of the GTX470 in computation.
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16022 - Posted: 28 Mar 2010 | 10:05:43 UTC

I don't see much of a controversy: for games GF100 is clearly not the best solution, period. For compute there's potential, but we don't know enough yet.

MrS
____________
Scanning for our furry friends since Jan 2002

GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Message 16023 - Posted: 28 Mar 2010 | 10:17:23 UTC - in response to Message 16022.

We have not had any benchmarks yet. We are already able to compile for CUDA 3.0, though.

The ~3x performance for F@H seems reasonable and is in line with what we said in the past. For compute, Fermi will be amazing, and in particular for GPUGRID.

We had heard that double precision and ECC were disabled (or something similar) on GeForce. Not a real issue for us anyway.

The new ACEMD application (100% faster once we release the new beta) on Fermi should be 6 times faster than what we had just a few months ago!
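The 6x figure simply composes the two separate factors quoted above - the app-side doubling and the roughly 3x from Fermi itself (an illustration of the stated numbers, not a measurement):

```python
# Multiplying the two independent speedups quoted above.
app_speedup = 2.0   # new ACEMD application: "100% faster"
gpu_speedup = 3.0   # Fermi vs the previous generation: ~3x
print(app_speedup * gpu_speedup)  # 6.0
```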

GDF

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16026 - Posted: 28 Mar 2010 | 12:07:49 UTC - in response to Message 16022.

I don't see much of a controversy: for games GF100 is clearly not the best solution, period. For compute there's potential, but we don't know enough yet.

MrS

101% agree :-) Let's be patient and wait for April 12
____________

Sandro
Joined: 19 Aug 08
Posts: 22
Credit: 3,660,304
RAC: 0
Level
Ala
Message 16027 - Posted: 28 Mar 2010 | 13:09:01 UTC - in response to Message 16026.

If you test a GTX480 or GTX470, be very careful - it may kill your Fermi. A German hardware site tested a GTX470 with Folding@home:

http://www.hartware.de/review_1079_16.html


In short:
During the tests the fan went to max (3000 rpm) because of the great heat; then the fan control failed, the rpm dropped, and the chip went up to 112°C, followed by a black screen and freeze. Fortunately, the chip was not damaged.
So crunching on a Fermi seems to hit the chip very hard; if you don't have good cooling in your case, you will be in trouble.

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16028 - Posted: 28 Mar 2010 | 13:33:17 UTC

Sandro

Thx a lot for the post :-) On top of what you just said, at the bottom of the page there's a nice table comparing the GTX285 and GTX470. I'm not that good at German, but as far as I understood, the GTX470 is ONLY 17-22% faster than the GTX285. That's it - not much at all. And I'm still not sure what Anand was talking about with "3.6 times faster"...
____________

Sandro
Joined: 19 Aug 08
Posts: 22
Credit: 3,660,304
RAC: 0
Level
Ala
Message 16029 - Posted: 28 Mar 2010 | 13:43:02 UTC - in response to Message 16028.

Yes, that was strange; the reason may be that they tested with a beta app of F@H that they got from NV directly.


Shortly before the end of the test period, Nvidia provided a beta version of the Folding@Home program that supports the new GeForce graphics cards. The calculations take place exclusively on the graphics chip; the processor is not loaded and remains available for other tasks.


In short: they got a beta version of F@H from NV to test the new GeForces.

So it is NOT the normal GPU2 client distributed by F@H itself. I don't know why NV did this, because the results are not as good as one expected - maybe the standard F@H client doesn't recognise Fermi properly.
There should be a lot of optimisation before we see real-world folding performance.

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16031 - Posted: 28 Mar 2010 | 14:51:17 UTC - in response to Message 16029.

Yes, that was strange; the reason may be that they tested with a beta app of F@H that they got from NV directly.


The story is even worse than I suspected. A special app from NV... Hm... Okay, let's try to find more reviews.
____________

[boinc.at] Nowi
Joined: 4 Sep 08
Posts: 44
Credit: 3,685,033
RAC: 0
Level
Ala
Message 16032 - Posted: 28 Mar 2010 | 15:29:52 UTC
Last modified: 28 Mar 2010 | 15:32:11 UTC

One of the best German computer websites has tested the CUDA performance of GF100 too, but they couldn't reproduce Anandtech's numbers: only a 30% speedup in F@H and Badaboom compared to a GTX285. You can read the results (in German) here.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16037 - Posted: 28 Mar 2010 | 20:07:09 UTC
Last modified: 28 Mar 2010 | 20:39:48 UTC

Folding@Home always needs to update their client to support new chips, so a new client is nothing fishy in itself. I don't know where the discrepancy between 30% faster and 3 times faster comes from, though. It could be that Anand had a newer beta client, or that they ran very different WUs. At F@H the different WU types correspond to different algorithms, so relative performance between cards can differ.

Edit: funny side note... until a couple of months after the release of the HD4800, these cards were still just as fast as the 3800 series, despite going from 64 to 160 5D shaders - because F@H wasn't quick enough to update their client to make use of the additional units.

Edit2: I took a look at the F@H benchmarks from Hartware.net. They actually tested 3 different WU types / projects, so that is not the reason for the discrepancy. Judging by the GTX470's performance, it could be that the client only uses 240 of its shaders (not yet fully updated), but then it shouldn't get that hot or consume as much power.
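The 240-shader hypothesis can be sanity-checked with simple FMA arithmetic (clocks below are approximate retail values, an assumption of this sketch): the measured 17-22% falls between the two extremes, consistent with partial utilisation but not proof of it.

```python
# Theoretical SP FMA throughput: shaders * clock * 2 FLOPs/clock.
# Clock values are approximate retail figures, used only for illustration.
def fma_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000.0

gtx285 = fma_gflops(240, 1476)        # ~708 GFLOPS
gtx470_full = fma_gflops(448, 1215)   # ~1089 GFLOPS, ~54% ahead on paper
gtx470_part = fma_gflops(240, 1215)   # ~583 GFLOPS if only 240 shaders run

print(f"full GTX470 vs GTX285: {gtx470_full / gtx285 - 1:+.0%}")
print(f"240-shader GTX470 vs GTX285: {gtx470_part / gtx285 - 1:+.0%}")
```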

MrS
____________
Scanning for our furry friends since Jan 2002

CTAPbIi
Joined: 29 Aug 09
Posts: 175
Credit: 259,509,919
RAC: 0
Level
Asn
Message 16059 - Posted: 29 Mar 2010 | 16:44:03 UTC

I read a couple of other reviews, and all of them talk about a "tens of percent" advantage over the GTX285 in F@H, not "times".
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16072 - Posted: 29 Mar 2010 | 22:28:09 UTC

Due to the sheer number of shaders it should be at least a factor of 2. I don't doubt that these guys are measuring such low numbers, but something is wrong here. Maybe we should ask Anand, or read over at the F@H forums - they should be running up and down the walls because of this issue right now :p

MrS
____________
Scanning for our furry friends since Jan 2002

chumbucket843
Joined: 22 Jul 09
Posts: 21
Credit: 195
RAC: 0
Message 16228 - Posted: 9 Apr 2010 | 21:32:47 UTC

I'm no expert on molecular dynamics, but it runs well on stream processors. The GTX480 should be excellent because it has a cache that can act as a bandwidth cushion when you have overlapping gathers. Fermi also has the ability to put register spills into cache, which can help performance A LOT; if you are spilling registers, your performance is terrible (i.e. a 5870 performing only as fast as an 8800GT). The new arch also makes compilers easier to write, due to the unified address space. This all translates to great performance.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 16229 - Posted: 9 Apr 2010 | 21:52:43 UTC - in response to Message 16228.

You mean this should all translate into great performance ;)

MrS
____________
Scanning for our furry friends since Jan 2002
