
Message boards : Graphics cards (GPUs) : GF106 and GF108 Fermi cards

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18214 - Posted: 31 Jul 2010 | 20:23:50 UTC

GF106 and GF108 Fermi cards are due to turn up from September 13th.
When they do arrive they may work at GPUGrid straight out of the box, but there is no guarantee. They may be closer to GF100 than to GF104 designs, but we won’t know all the details until very close to the release date. The scientists might take a while to get working code, and possibly longer to get optimised code, but these cards will support CUDA, so most should eventually work well here. Just don’t rush out to buy one expecting excellent performance before they are confirmed as working and optimised.

The top GF106 card, the GTS450, is expected to sport 1GB of GDDR5 running at 3,760MHz on a 128-bit bus, supported by a quietly confident 789MHz core. Should this top GF106 card have the speculated full complement of 256 CUDA cores, it could initially be faster than an un-optimised GTX460, depending on shader design. In theory, though, it would fall short of the GTX460 by about 12% if both cards were optimised (going by CUDA core count and reference speeds), not that reference clocks will mean much for GF106 and GF108. As for Compute Capability, and its effect on crunching, who knows, but that’s the point – wait and see.
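
(For anyone who wants to check that figure, here is a rough back-of-envelope sketch. It assumes raw throughput scales with CUDA core count times shader clock, and that the Fermi shader domain runs at twice the rumoured 789MHz core clock; memory bandwidth and optimisation are ignored, so treat it as an approximation only.)

```python
# Back-of-envelope shader throughput comparison, using the rumoured
# GTS450 figures from above. Assumes performance scales with
# CUDA cores x shader clock - a first approximation only.

def throughput(cores, shader_clock_mhz):
    """Relative raw shader throughput (cores * MHz)."""
    return cores * shader_clock_mhz

gtx460 = throughput(336, 1350)      # GTX 460 reference: 336 cores, 1350 MHz shaders
gts450 = throughput(256, 2 * 789)   # speculated GTS 450: 256 cores, shaders at 2x core clock

print(f"GTS 450 shortfall vs GTX 460: {1 - gts450 / gtx460:.0%}")  # roughly 11%
```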

It is quite possible some of these cards will offer better gaming for older games and cost a bit less than the GTX460, so they might prove to be good value for money, or indeed might not.
There will be lesser versions of these GF106 and GF108 cards, no doubt with fewer shaders: 192, 216, and 240 are the obvious candidates for the GF106 range, with 128 being more likely for the GF108 cards.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18230 - Posted: 2 Aug 2010 | 21:51:51 UTC

GF106 and GF108 will be mainstream chips. That means nVidia will have to make them as small and cheap as possible. The basis will obviously be the Fermi architecture, as any further changes would cost them money (development time etc.).

So what do you get when you take a GF100, remove stuff which is not really necessary for mainstream, and try to slim it down a little? First you'd take away 64-bit capability. Second you'd group the shaders into larger clusters to reduce the amount of control logic needed to feed them, i.e. you increase the packing density of your raw power. At the same time you might be able to increase average hardware utilization if you do it right. And third you might want to increase the number of TMUs, as these chips will actually have to run games and will still be judged by their game performance.

The careful reader will probably have noticed by now that what I just described are the changes nVidia made going from GF100 to GF104. These were surprisingly large, but play out very well in the end - the chip is much more efficient at providing game performance and new features (except 64 bit). Of course nVidia could go for further architecture changes for the smaller chips. However, I think they already went quite far enough with the redesign leading to GF104 and its new 8 "fat" multiprocessors. I wouldn't be surprised though if they removed 64-bit capability completely for the mainstream chips. Give GF106 6 of these bad boys (288 shaders max, 240 in the next smaller version) and maybe 4 for GF108. The CUDA hardware capability level would get another upgrade due to the change in feature set.
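
(The shader counts above fall straight out of 48-shader, GF104-style multiprocessors; a quick sketch of the speculation, with the multiprocessor counts being MrS's guesses rather than confirmed specs:)

```python
# Speculative GF106/GF108 configurations, assuming nVidia reuses
# GF104's 48-shader "fat" multiprocessors (SM counts are guesses).
SHADERS_PER_SM = 48

for chip, sms in [("GF106 full", 6), ("GF106 cut-down", 5), ("GF108", 4)]:
    print(f"{chip}: {sms} SMs -> {sms * SHADERS_PER_SM} shaders")
# GF106 full: 288, GF106 cut-down: 240, GF108: 192
```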

They might prefer a smaller shader granularity for these mid-range chips... but then they've just worked it out for GF104 (these are certainly more efficient than GF100s) and software is being updated for this new design now. They shouldn't give developers too much of a headache by switching again...

MrS
____________
Scanning for our furry friends since Jan 2002

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 18232 - Posted: 2 Aug 2010 | 22:35:21 UTC - in response to Message 18230.
Last modified: 2 Aug 2010 | 22:59:33 UTC

I don't understand why they release their high end first, then work downwards. If software maturity is what they want, couldn't they hold the high end back instead, since hardware maturity is what most people would prefer to spend their money on? Or is this exactly what they want to avoid, when they're dealing with those who can best afford to buy new stuff?

It's funny, because usually wealthier people can afford to buy healthier food products & poorer people wind up eating all the junk. But those too poor to smoke cigarettes & drink alcohol, unless they start stealing, usually wind up living healthier lives.

BTW, I just LOVED what Steve Jobs did with the iPhone 4 & look forward to seeing him make crazy money AGAIN when he sells an iPhone 4S with a fix for the antenna problem. I hope he does the SAME thing when he releases an iPad 4. It amuses me to see so many people waste cash because they can!

Who ever said that God wasn't fair & that The Devil was!?
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18233 - Posted: 2 Aug 2010 | 23:17:22 UTC - in response to Message 18232.

MrS, what you say is certainly more attractive; a 288-shader GTS450 at the reported frequencies would be very interesting; it could be within 5% of the GTX460's performance.

Well off topic, but liveonc’s post reminded me of something that has got to be told:
I know of several manufacturers of common household appliances that deliberately build in system failures, but with built-in reset switches! So when the appliance fails (overheats, or has been used X number of times) the engineer just turns up and presses a reset button on your appliance, and of course charges you £70 (€100) – yes, for a Muppet to flick a switch.

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 18234 - Posted: 2 Aug 2010 | 23:24:16 UTC - in response to Message 18233.

I know of several manufacturers of common household appliances that deliberately build in system failures, but with built-in reset switches! So when the appliance fails (overheats, or has been used X number of times) the engineer just turns up and presses a reset button on your appliance, and of course charges you £70 (€100) – yes, for a Muppet to flick a switch.

What brands?

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18235 - Posted: 2 Aug 2010 | 23:37:17 UTC - in response to Message 18234.

Italian

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 18236 - Posted: 2 Aug 2010 | 23:53:08 UTC - in response to Message 18234.

I've had experiences too with consumer products (whose brands I also can't remember), but they were generic no-name stuff, like a voltage regulator that promised more consistent, clean & efficient power but actually wasted much more electricity, didn't work, & was a total scam.

It's OK to be stupid, but to continue just because I could, didn't make sense to me.

I don't feel like I'm pointing my finger, it feels to me more like I'm waving it. If I do start to point it, you can expect that I won't buy it.
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18237 - Posted: 3 Aug 2010 | 8:09:48 UTC - in response to Message 18232.

I don't understand why they release their high end first, then work downwards. If software maturity is what they want, couldn't they hold the high end back instead, since hardware maturity is what most people would prefer to spend their money on? Or is this exactly what they want to avoid, when they're dealing with those who can best afford to buy new stuff?


There's a simple why: it's the traditional approach. That doesn't necessarily make it a good one, though. A better argument would be this: if you're introducing a new architecture you want people to know that it's good. You want to do it with a bang that's heard around the world. Or at least the gamer world. And that's easiest when you're setting benchmark records. That's probably why they stick to "high end first".

ATI also introduced RV870 first and subsequently worked their way down. There was comparatively little delay between launches, so one could say they probably got as close to a full-scale launch of the entire product line as it gets (without stockpiling new chips for months).

nVidia may have wanted to do the same, but couldn't as GF100 was late and required a redesign to become more economical. Now that they've got the redesign they'll probably take this architecture and use it for the smaller chips as soon as possible.

Is there anything here which you'd want them to do in a better / different way? Well, they could have designed GF100 to be competitive in the beginning.. but I suspect they tried just that ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 18239 - Posted: 4 Aug 2010 | 0:05:52 UTC - in response to Message 18237.
Last modified: 4 Aug 2010 | 0:29:13 UTC

There's a simple why: it's the traditional approach. That doesn't necessarily make it a good one, though. A better argument would be this: if you're introducing a new architecture you want people to know that it's good. You want to do it with a bang that's heard around the world. Or at least the gamer world. And that's easiest when you're setting benchmark records. That's probably why they stick to "high end first".

ATI also introduced RV870 first and subsequently worked their way down. There was comparatively little delay between launches, so one could say they probably got as close to a full-scale launch of the entire product line as it gets (without stockpiling new chips for months).

nVidia may have wanted to do the same, but couldn't as GF100 was late and required a redesign to become more economical. Now that they've got the redesign they'll probably take this architecture and use it for the smaller chips as soon as possible.


I do understand the shock effect that's desirable when launching the fastest first. I'm just saying that there are women who look F.I.N.E. FINE, but turn out to be a great disappointment & a total waste of time; there was nothing at all that wasn't skin deep. You saw everything there was to see & there were no surprises at all.

But there's also the type that's slightly boring, slightly not. The type that doesn't turn heads, but makes you look twice. There's more than meets the eye, & the more you look, the more there is to look at.

Mid-range is what I'd hoped would be the start, as you can build up & you can build downwards too. It won't shock you, but it won't be something to ignore either, & if she has sisters or friends, you wouldn't likely meet them if you weren't introduced.

The Internet has so many sites dedicated to reviewing products. There are benchmarks, OCing, pros & cons, comments, etc, etc. If ATI or Nvidia had to do all that themselves, they'd have to spend much more money on marketing than they do already. Bad reviews also have an impact on sales, & the constant "they're working on the problem, tweaking & fixing" isn't a great way to start.

If everybody knows there will be hiccups, why not take that into consideration instead of using that knowledge as an excuse to have a bad start? Everybody knows not to expect too much from the Special Olympics, but if you're watching the Olympics, you know that the slap in the face will be with the gloves off. It's not cute when the biggest & baddest disappoint much more than everything else that's crippled from the start. They ALSO know that every time they try something new, they'll have issues with yields, so do they want to start with the best of the best, when they have such a hard time getting it?
____________

michaelius
Joined: 13 Apr 10
Posts: 5
Credit: 2,204,945
RAC: 0
Message 18241 - Posted: 4 Aug 2010 | 14:08:12 UTC - in response to Message 18239.

There's a very simple reason: sales.

Let's say ATI had started the 5000 series with the Radeon 5770: it would have competed directly with the previous generation of products while still offering nothing faster than the 4890.

But they started with the 5870, gained the performance crown, and those cards sold like hotcakes without affecting the 48x0 series much, because of the higher price points.

That's why GPU manufacturers start with the top-end solution: so they can keep selling stocks of older cards without needing a price drop.

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 18242 - Posted: 4 Aug 2010 | 14:46:18 UTC - in response to Message 18241.
Last modified: 4 Aug 2010 | 15:09:33 UTC

So it's the "empty the fridge before stocking up again" argument?

Releasing the high end first would stop the sales of high-end previous-generation cards to the high-end market, wouldn't it? But wouldn't releasing the new mid-range first, if it was slightly faster or at least cheaper, also increase initial sales of the new generation to the high-end market, & get the high-end market to buy AGAIN when the new high-end card hit the shelves? That's twice you get to milk the crowd most keen & capable of cashing up.

I only let the most boring things stay in my fridge for long periods of time. They usually go bad by the time I notice, & I throw them out.

If software maturity is what's wanted, but many are ready to get the newest thing out there just because it's new: milk the cow, work on the software, improve the card, release it with better software support & hardware architecture, & milk the cow again.

Prices drop as time goes by, either to remain competitive or just to sell at all. Most sales are in the mid to low end range of cards, even though the highest profit is in the high end; but how high is that profit by the time a card reaches EOL?
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18243 - Posted: 4 Aug 2010 | 21:05:45 UTC

So it's the "empty the fridge before stocking up again" argument?


I don't think so. The HD57x0 pretty much obsoleted HD4870 - 4830 except for double precision. Yet ATI tried really hard to launch them as soon as possible after the first Cypress cards. One could argue that it was the launch of Win 7 and DX11 which made them hurry - but then something is always around the corner, be it "back to school", Christmas or whatever.

I'd rather argue that time to market is really critical in the GPU business. Besides solid execution you'll want to be first with new features (checkbox or useful - doesn't matter) and to be first to exploit manufacturing advances. That's what made ATI and nVidia survive the 3D accelerator pioneer days, whereas many others failed. One can argue that progress has slowed down in recent years (probably for good) and that the rules are being rewritten: the balance shifts from "delivering products at breathtaking schedules" to "delivering really good products". But we're not completely there yet. Speed still matters and the old spirit is still alive.

There's also the chicken-and-egg problem: you can't optimize software without the hardware. And if you've got the chip design set in stone (software development won't start prior to this), you can just as well make many chips and start to sell them.

Honestly I don't see much of an advantage in rolling the chips out in a different way. Sure, they have to be good and the software has to be good enough to avoid bad press. But apart from that I can't see any tangible benefits from changing the current way (high end first, then quickly work down the lineup). Sure, testing a new process with an insanely large chip is a bad idea - but this is a separate topic. ATI's HD4770 demonstrated how to do it.

If software maturity is what's wanted, but many are ready to get the newest thing out there just because it's new: milk the cow, work on the software, improve the card, release it with better software support & hardware architecture, & milk the cow again.


Who said they'd want software maturity? Sure, they need it at some point... but not at the price of shipping later. And I doubt offering mid-range first and high end later would yield much more profit. They'd get bad press for the "disappointing new chips" despite their advantages, and people looking for high end would hesitate to buy. Especially since the bigger chips would probably already be in the rumor mill. And people who want to buy mid-range will buy anyway.
And if you've got a "buy everything because it's new" enthusiast you'd better give him something for his money. New features, better power consumption and lower noise don't cut it - his games will still look and feel the same. He'll lose his enthusiasm if he doesn't get some perceived benefit from the purchase. And benchmark records are probably the easiest benefit to assure him "man, this was really worth it!"

MrS
____________
Scanning for our furry friends since Jan 2002

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 18244 - Posted: 4 Aug 2010 | 21:56:34 UTC - in response to Message 18243.

Thanks MrS, I really needed someone to explain that question to me so I didn't keep asking myself, "WHY?". It's just that Nvidia have been trying so many new tricks & pulling so very many fast ones lately, so I just wondered if they were open to new, untried ideas. I don't recall that they were so keen on relabeling in the past, or on making something that made almost everybody say, "oh, this will never work", & they got away with both things quite well, or much better than everybody else thought they would. So I was beginning to think they were either the gambling types, or all too happy to play Russian Roulette because they're adrenalin junkies.
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18245 - Posted: 5 Aug 2010 | 7:58:23 UTC - in response to Message 18244.

Glad I could be of some assistance :)
One could probably talk about this a lot more, but I'll just leave it at that for now.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18247 - Posted: 5 Aug 2010 | 15:05:05 UTC - in response to Message 18245.

The GF106 based cards will be called GTS 450, GTS 445 and GTS 440.

http://www.techreport.com/discussions.x/19394

GF106 die shot here,
http://en.expreview.com/2010/08/04/geforce-gts-450-die-shot-tips-up/8903.html

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18254 - Posted: 5 Aug 2010 | 22:46:30 UTC

Interesting! The rumored GF106 PCB features 6 memory chips - that's a 192-bit bus nowadays. That would probably suit a 250+ shader GF106 better than 128 bit. 128 bit would work, but would be borderline, close to really limiting performance. Especially since nVidia may not yet have fixed their GDDR5 controller, so they can't clock their memory as high as ATI can.

The die shot reveals an area of 239 mm², whereas GF104 is said to be 366 mm². Assuming linear scaling from GF104's 384 shaders (which does not entirely apply here) we'd get 251 shaders in GF106. Make each shader cluster smaller by removing 64-bit capability and we may indeed be looking at 288 shaders.
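
(The scaling estimate itself, as a quick sketch; linear area scaling is only a rough approximation since memory controllers, I/O and other fixed blocks don't shrink with the shader count:)

```python
# Linear die-area scaling estimate - a rough approximation only.
GF104_SHADERS = 384    # full GF104: 8 multiprocessors x 48 shaders
GF104_AREA = 366       # reported GF104 die area, mm^2
GF106_AREA = 239       # measured from the die shot, mm^2

print(f"GF106 shader estimate: {GF104_SHADERS * GF106_AREA / GF104_AREA:.0f}")  # ~251
```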

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18255 - Posted: 5 Aug 2010 | 23:38:55 UTC - in response to Message 18254.

251 is also very close to 256, just in case.
As you say, the purported 789MHz clock is not very high, not for a card with such a small profile, so no magic wand was waved there.
Although I would err on the side of the 128-bit speculation, I'd only put about 2 pence on it, in the hope that I'm wrong.
Perhaps the 6 memory chips were for the GTS440?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18260 - Posted: 6 Aug 2010 | 8:06:22 UTC - in response to Message 18255.

Or the PCB was meant for GF104 with 192 bit and they mistakenly attributed it to GF106. Could have been some experimental design, which didn't make it out of the door.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18652 - Posted: 13 Sep 2010 | 11:13:14 UTC - in response to Message 18260.
Last modified: 13 Sep 2010 | 13:51:24 UTC

New cards have been released by NVidia today.

The main card is the GeForce GTS 450

Ryan Smith's review

192 Shaders (Stream Processors), 16 ROPs, 783MHz core with linked shaders at 1566MHz, and 1GB RAM.
As with the other Fermis, this card is based on 40nm technology.
It is expected to retail from about $130 / €130 / £95.
There are of course cards that sport higher clocks (925MHz).

The ratio of shaders to ROPs is 12 to 1 with this GF106 card, but they are apparently grouped into blocks of 48 shaders akin to the GF104 cards.
The GF100 cards are 10/1 and the GF104 cards are 14/1.
GPU-Z image of the GTS450
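
(Those ratios check out against the shipping configurations; a quick sanity check, where the GF100 figure presumably comes from the GTX 480's 480 enabled shaders and 48 ROPs, and the GF104 figure from the 768MB GTX 460's 336 shaders and 24 ROPs:)

```python
# Shader-to-ROP ratios for the shipping Fermi configurations (sanity check).
configs = {
    "GTS 450 (GF106)":       (192, 16),
    "GTX 480 (GF100)":       (480, 48),
    "GTX 460 768MB (GF104)": (336, 24),
}
for name, (shaders, rops) in configs.items():
    print(f"{name}: {shaders // rops}:1")   # 12:1, 10:1, 14:1
```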


A range of Fermi OEM/laptop/desktop cards have also been released:

GeForce GTX 470M 288 Shaders @1100MHz, GDDR5
GeForce GTX 460M 192 Shaders @1350MHz, GDDR5

GeForce GT 445M 144 Shaders @1180MHz, DDR3/GDDR5
GeForce GT 435M 96 Shaders @1300MHz, DDR3
GeForce GT 425M 96 Shaders @1120MHz, DDR3
GeForce GT 420M 96 Shaders @1000MHz, DDR3
GeForce GT 415M 48 Shaders @1000MHz, DDR3

The GeForce GTX 480M with 352 Shaders @850MHz was released a while back.
With an excellent heatsink, good airflow, and the right configuration, crunching on a notebook may be possible but it’s generally not recommended.
These cards will probably turn up in Small Form Factor OEM desktop systems too. Note that these systems can also be very compact, so check the temps and air flow if you do try to use them.

If none of these cards are for you there is still good news: the entire range of existing Fermi cards is dropping in price.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18661 - Posted: 13 Sep 2010 | 23:20:21 UTC - in response to Message 18652.

So it's 192 shaders, or 4 blocks of 48, for the GF106. Too bad the 200+ rumors didn't turn out to be true - if they had, I'd have bet on 6 shader blocks rather than 5 ;)

Also interesting is the choice of 3 x 64-bit memory controllers. The card doesn't seem to be bandwidth starved (neither is GF104 at 192 bits), so I get the feeling the additional memory controller is more for yield than for performance. But then it's strange to see the PCB as large as the one for the GTX460, and with mounting places for a full GF106. You'd think they'd only do this if they intend to launch a part which actually makes use of it. Otherwise the mainstream market should be too price sensitive to waste so much PCB space, shouldn't it?
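
(For a feel for the numbers, a rough bandwidth comparison between the shipping 128-bit configuration and a hypothetical part with the third controller enabled. It uses the 3,760MHz effective data rate rumoured earlier in the thread; actual clocks may differ, so the figures are approximate:)

```python
# Rough peak GDDR5 bandwidth: 128-bit (one controller disabled) vs a
# hypothetical full 192-bit GF106, at the rumoured 3,760 MHz data rate.

def bandwidth_gb_s(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(f"128-bit: {bandwidth_gb_s(128, 3760):.1f} GB/s")  # ~60 GB/s
print(f"192-bit: {bandwidth_gb_s(192, 3760):.1f} GB/s")  # ~90 GB/s
```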

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18662 - Posted: 13 Sep 2010 | 23:34:10 UTC - in response to Message 18661.

One of the three sets of memory controller/ROP pairs is disabled.
I expect a GTS 455 to turn up, possibly following a competitor entrance, and perhaps along with a GTX475.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 18672 - Posted: 14 Sep 2010 | 8:08:16 UTC - in response to Message 18662.

One of the three sets of memory controller/ROP pairs is disabled.


That's why I find it strange that the PCB is ready for the 3rd controller. The only way it makes sense is that a GTS455 is coming soon. Which is almost necessary due to the competition from the HD5770 anyway :p

Enabling the 3rd controller will probably not work wonders, but it will help a bit here and there, especially with FSAA. The added 128kB of L2 shouldn't hurt either. Furthermore nVidia could easily up the clock speed ~10% for such a card. But if the past is any indication they'll probably call it "GTS460" to make it sell better on the good reputation of the GTX460 :D

BTW: where's an HD5840, right in between the 5830 and 5850? That would at least be something to place against the GTX460.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18702 - Posted: 15 Sep 2010 | 23:49:12 UTC - in response to Message 18672.

Some nice pics here,
http://xtreview.com/addcomment-id-13723-view-Alternative-version-GeForce-GTS-450-list.html
