Message boards : Graphics cards (GPUs) : Compute capability 2.1

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
Message 18449 - Posted: 29 Aug 2010 | 3:13:58 UTC

David asked the following question on the BOINC_alpha mailing list.

Date: Fri, 27 Aug 2010 17:44:15 -0700
From: David Anderson
Subject: Re: [boinc_alpha] GTX460 not correctly reported
To: boinc_alpha@ssl.berkeley.edu

Is it the case that all compute capability 2.1 chips
have 48 cores per processor?
I can't get a clear answer from nvidia on this.
-- David


As far as I know the GF104 is the only chip reporting compute capability 2.1, and it has 336 shaders (or "CUDA cores", as nVidia now likes to call them).

Does anyone know about the GF106 and GF108 chips? What do they report for compute capability and how many shaders (or cores per processor)?
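For reference, the CUDA SDK samples answer exactly this with a lookup table (the ConvertSMVer2Cores helper). A minimal sketch of that mapping in Python, assuming the published figures of 8 cores/SM for CC 1.x, 32 for CC 2.0 and 48 for CC 2.1:

```python
def cores_per_sm(major, minor):
    """CUDA cores per streaming multiprocessor for a given compute capability."""
    table = {
        (1, 0): 8, (1, 1): 8, (1, 2): 8, (1, 3): 8,  # G80 / G9x / GT200
        (2, 0): 32,  # GF100 (Fermi)
        (2, 1): 48,  # GF104 (and, presumably, GF106/GF108)
    }
    return table[(major, minor)]

# GTX460 (GF104): the driver reports 7 multiprocessors
print(7 * cores_per_sm(2, 1))  # 336 shaders, matching the spec sheet
```

That 7 x 48 = 336 figure is the GF104 number above; whether the smaller chips keep 48 cores per SM is the open question.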
____________
BOINC blog

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
Message 18464 - Posted: 29 Aug 2010 | 16:37:42 UTC - in response to Message 18449.

It will be two weeks until we know for sure; the release date for the GF106 cards is 13th Sept.
The GTS450 is expected to have 4 streaming multiprocessors (each supporting 48 shaders) for a total of 192 shaders. The core is expected to contain 32 texture units. It supposedly has a 128-bit memory interface and 1GB RAM, but going by this page there might be two versions of the GTS450, so one might have more RAM, similar to the GTX460 cards.
As you can see, it also suggests a GTS455, GTS440, GTS430 and GTS420, as well as 9 mobile cards.
Anticipate a GTX475 soon, probably just based on the GTX460 but with a full shader complement (384).
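The shader totals above are just SM count x 48 for CC 2.1 parts; a quick sanity check (SM counts assumed from the rumoured specs):

```python
# Shaders = SMs * 48 for compute capability 2.1 chips (SM counts are rumoured).
CORES_PER_SM_CC21 = 48
cards = {
    "GTS450 (GF106, 4 SMs)": 4,
    "GTX460 (GF104, 7 SMs)": 7,
    "full GF104 (8 SMs)":    8,
}
for name, sms in cards.items():
    print(name, "->", sms * CORES_PER_SM_CC21, "shaders")
```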

In my eyes that's 14 new reasons to better separate GPUs and CPUs in BOINC, starting with a Tab for each.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
Message 18472 - Posted: 30 Aug 2010 | 10:33:49 UTC - in response to Message 18449.

As SK said: so far the only CC 2.1 chip is the GF104. The others haven't been officially announced yet, which is probably why nVidia didn't give a clear answer.

However, I would expect the new smaller chips to adopt the same structure as the GF104. But then, nVidia hasn't always listened to me in the past ;)

In my eyes that's 14 new reasons to better separate GPUs and CPUs in BOINC, starting with a Tab for each.


I'm not sure that's necessary. However, what I would like is the ability to assign projects to individual resources (CPUs, GPUs with different CC, ATI w/o dp etc.) first, including the definition of backup projects. Only after that should resource share determine the scheduling. IMO that would be enough of a separation.

MrS
____________
Scanning for our furry friends since Jan 2002

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
Message 18473 - Posted: 30 Aug 2010 | 12:20:11 UTC - in response to Message 18472.

I'm not sure that's necessary. However, what I would like is the ability to assign projects to individual resources (CPUs, GPUs with different CC, ATI w/o dp etc.) first, including the definition of backup projects. Only after that should resource share determine the scheduling. IMO that would be enough of a separation.

There are many reasons for a CPU Tab and a separate GPU Tab. For example, it could allow people to keep a 2 or 3 day cache of CPU tasks but only a 0.05 day cache of GPU tasks. However, the most important thing would be to simplify configuration for the user while providing more control.

I like your idea of defining a backup project, it could be very useful.
I'm not sure if a BOINC configuration would let me use a GT240 in the same system as a Fermi and assign individual cards to crunch individual tasks; that might be something GPUGrid could think about, or it could require changes both in BOINC and at GPUGrid.
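A sketch of what such a per-card assignment could look like as a client config option, say a per-project device exclusion in cc_config.xml (purely hypothetical here; the element names are assumptions, not something the current client is known to support):

```xml
<cc_config>
  <options>
    <!-- Hypothetical: keep device 1 (e.g. the GT240) away from GPUGrid,
         leaving it free for another project, while device 0 (the Fermi)
         crunches GPUGrid tasks. -->
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```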
It would also be nice to be able to click on a task and just tell it to Run: it would immediately get top priority, and whichever running task had the lowest priority would stop. That would help when a task sitting at 99% complete stops running and sits in RAM to allow another task to run.

Snow Crash
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
Message 18474 - Posted: 30 Aug 2010 | 12:26:29 UTC - in response to Message 18473.

The switching of WUs just before one finishes has always bugged me. It would be nice if BOINC let tasks that have been started finish before switching. I understand there are a few projects with really long WUs (only one comes to mind ... climate prediction), but if you selected that project, why wouldn't you want to finish the task? On the one hand BOINC espouses "set it and forget it", but then they build rules for task switching that realistically amount to micromanaging.
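For what it's worth, the switch interval itself is the "Switch between applications every N minutes" computing preference, stored as cpu_scheduling_period_minutes in global_prefs.xml. Raising it reduces how often BOINC preempts, though it doesn't guarantee a task at 99% gets to finish. A fragment, with 60 minutes as an illustrative value:

```xml
<global_preferences>
  <!-- Switch between applications every 60 minutes instead of the default -->
  <cpu_scheduling_period_minutes>60</cpu_scheduling_period_minutes>
</global_preferences>
```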
____________
Thanks - Steve
