Message boards : Number crunching : What a silly idea

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 11188 - Posted: 19 Jul 2009 | 11:32:48 UTC
Last modified: 19 Jul 2009 | 11:37:22 UTC

I attached my 8600GT 512MB to see how long it would take to complete a unit. I already knew it would finish within the 5-day deadline, so there was no danger of losing the unit or wasting time.

However, it's running on a machine with a Q9300, so 4 CPU cores. What's silly is that instead of giving me one WU for my one GPU, the project gave me one, then another, then another. It would have given me a fourth, but realising it was going to send a WU for every CPU core, I stopped it getting new work.

So, having blocked the fourth, I aborted the third, because there was no way my card could finish 3 WUs in five days. That WU would have sat on my HDD for 5 days and then had to be resent anyway, so by aborting I saved project time, and it has since been resent.

I may still have to abort the second unit: if the first takes more than 2.5 days, the second will not finish in time.

That's what makes it silly: the project is delaying its own results by sending a WU for every CPU core when it's a GPU project.

My conscience is clear that my card can easily complete one WU within 5 days, probably in 3. So why can't the administrators of this project change the default preference to one unit per GPU at a time, and give users with fast cards the option to increase it?

You know it makes sense! Saves project and user time.
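The deadline arithmetic in the post can be sketched in a few lines. This is a minimal illustration (the function name and 5-day constant are taken from the post; nothing here is BOINC code): a queue of n work units, each taking t days on one GPU, only beats the deadline if n * t stays within it.

```python
# Minimal sketch of the deadline arithmetic above: with a 5-day deadline,
# a serial queue of n WUs at t days each finishes on time only if n * t <= 5.
DEADLINE_DAYS = 5

def queue_finishes_on_time(n_wus: int, days_per_wu: float) -> bool:
    """True if every WU in a serially processed queue meets the deadline."""
    return n_wus * days_per_wu <= DEADLINE_DAYS

print(queue_finishes_on_time(3, 2.5))  # False: three queued WUs at 2.5 days each miss it
print(queue_finishes_on_time(2, 2.5))  # True: two just fit
print(queue_finishes_on_time(1, 3.0))  # True: one WU in ~3 days is comfortable
```

This is why aborting the third WU was the rational move: at roughly 2.5 days per unit, only two could ever complete in time.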

EDIT to add;

May I also have a response to this post, please? I need to hear the words.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1620
Credit: 8,838,784,177
RAC: 19,742,431
Message 11189 - Posted: 19 Jul 2009 | 11:58:22 UTC

I support Betting Slip wholeheartedly. This is long-standing, idiotic behaviour, and he has accurately attributed it to 'one work request per CPU core'.

But I don't think it's something that GPUGrid can cure by reconfiguration. GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.

ignasi
Joined: 10 Apr 08
Posts: 254
Credit: 16,836,000
RAC: 0
Message 11199 - Posted: 20 Jul 2009 | 9:58:04 UTC - in response to Message 11189.

GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.


It is a well-known issue.
They are already working on it.

i

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1620
Credit: 8,838,784,177
RAC: 19,742,431
Message 11201 - Posted: 20 Jul 2009 | 10:45:28 UTC - in response to Message 11199.

GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects.

It is a well-known issue.
They are already working on it.

i

Sorry, that was a bit blunt for a first post on a new project!

Yes, I'm aware that CUDA support in BOINC is very much a 'work in progress' - I'm active on both the alpha and dev mailing lists, and putting in bug reports and suggestions for improvement where I can.

But you're probably aware that the BOINC developers took a bit of a time-out to concentrate on Facebook/GridRepublic development. During that period, I think it would be fairer to say that this issue was on the "to do" list, rather than in active development. Since the over-fetching that Betting Slip reports will hurt this project (by delaying the return of the supernumerary results), I thought you might be in a good position to give them a bit of a nudge.

While we're here, may I point out a related issue? My first task on my Q9300/9800GTX+ combo, p1030000-IBUCH_3_pYEEI_com_0907-1-3-RND5364, reported:

<rsc_fpops_est>250000000000000.000000</rsc_fpops_est>
<fpops_cumulative>3436310000000000.000000</fpops_cumulative>

Those figures are out of kilter by a factor of nearly 14 to 1 (3,436,310 / 250,000 ≈ 13.7). The 71-GIANNI_BINDX I reported yesterday is even worse.

Since BOINC uses <rsc_fpops_est> plus DCF to estimate runtimes, new users (DCF=1) are in danger of severe over-allocation until the completion of their first task - at which point their DCF will jump somewhere near my (current) 16.7332, and the estimates will be corrected accordingly. You may be finding a large number of aborts/late reports from newly attached hosts with the current settings.
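The mechanism Richard describes can be sketched numerically. A minimal illustration using the two figures quoted from the task (the `host_flops` value is purely hypothetical, chosen only to show the scale; BOINC's real scheduler logic is more involved than this one-line formula):

```python
# The two figures reported for p1030000-IBUCH_3_pYEEI_com_0907-1-3-RND5364:
rsc_fpops_est = 250_000_000_000_000.0       # project's advance estimate of work
fpops_cumulative = 3_436_310_000_000_000.0  # actual work reported on completion

ratio = fpops_cumulative / rsc_fpops_est
print(f"under-estimate factor: {ratio:.1f}")  # ~13.7

# BOINC scales its runtime estimate by the host's Duration Correction Factor:
#     estimated_runtime ~ rsc_fpops_est / host_flops * DCF
# A newly attached host starts with DCF = 1, so its first estimates are ~13x
# too low, inviting the over-allocation described above. After the first task
# completes, DCF jumps toward the true ratio (16.7332 in Richard's case) and
# later estimates are corrected.
host_flops = 50e9  # hypothetical 50 GFLOPS effective throughput, illustrative only
for dcf in (1.0, 16.7332):
    est_hours = rsc_fpops_est / host_flops * dcf / 3600
    print(f"DCF = {dcf}: estimated runtime ~ {est_hours:.1f} h")
```

The same arithmetic explains the risk to new hosts: until that first completion, the client believes each task is an order of magnitude shorter than it really is.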

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 11220 - Posted: 21 Jul 2009 | 9:10:41 UTC - in response to Message 11201.

We tried and tried to tie the number of WUs to the number of GPUs only, but the current BOINC code is still quite buggy in this respect. Until BOINC handles this properly, there is nothing more we can do than give feedback to the developers.

As they will be coming to Barcelona in September, I think that just a little more patience will be required.

gdf

Profile Edboard
Joined: 24 Sep 08
Posts: 72
Credit: 12,410,275
RAC: 0
Message 11480 - Posted: 29 Jul 2009 | 18:55:45 UTC - in response to Message 11220.

Since yesterday I have noticed that there is a new limit on WUs:

Something like: number_cores + 2 * number_GPUs

In a PC with a quad core and 2 GPUs I got 8 WUs yesterday (it now has fewer because I stopped downloading).

In a PC with a Core 2 Duo (2 cores) and one GPU I'm getting 4 WUs
In a PC with a Core 2 Duo (2 cores) and two GPUs I'm getting 6 WUs

All these can be seen in my account.
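The inferred formula can be checked against all three observations in the post. A quick sketch (the formula is Edboard's inference from observed behaviour, not a documented project setting):

```python
# Apparent new per-host WU limit, as inferred above: cores + 2 * GPUs.
def wu_limit(n_cores: int, n_gpus: int) -> int:
    """Inferred per-host work-unit limit; an observation, not a documented rule."""
    return n_cores + 2 * n_gpus

# The three machines described in the post:
print(wu_limit(4, 2))  # quad core, 2 GPUs  -> 8
print(wu_limit(2, 1))  # Core 2 Duo, 1 GPU  -> 4
print(wu_limit(2, 2))  # Core 2 Duo, 2 GPUs -> 6
```

All three observed download counts match the formula, which supports the inference.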

Profile Edboard
Joined: 24 Sep 08
Posts: 72
Credit: 12,410,275
RAC: 0
Message 11486 - Posted: 29 Jul 2009 | 21:29:52 UTC - in response to Message 11480.
Last modified: 29 Jul 2009 | 21:30:31 UTC

I don't know whether there is any relation, but it happened after I tried the "Collatz Conjecture" CUDA project on all three PCs.

dyeman
Joined: 21 Mar 09
Posts: 35
Credit: 591,434,551
RAC: 0
Message 11496 - Posted: 30 Jul 2009 | 8:49:23 UTC

I am seeing the same thing - a C2D that used to download 2 WUs (1 running, 1 waiting) now downloads 4 (1 running, 3 waiting - I assume 2 WUs per CPU core). This will result in all WUs now missing the 2-day 'bonus' window. Nothing has changed on this machine. It is running BOINC 6.6.28 and NVIDIA 185.85. This is the computer: 29936

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 11517 - Posted: 30 Jul 2009 | 19:51:07 UTC - in response to Message 11496.

This will result in all WUs now missing the 2-day 'bonus' window.


Lower your cache and/or GPU-Grid resource share. It's somewhat of a pain, as these settings are not really separated for CPUs and co-processors yet, but it can be done.
With 6.5.0 and a quad core I have GPU-Grid at ~25% resource share and a cache of 0.2 days. That's enough that BOINC only fetches about 1 day of GPU-Grid tasks, and I can easily get the bonus.

MrS
____________
Scanning for our furry friends since Jan 2002
