Message boards : Number crunching : What a silly idea
I attached my 8600GT 512MB to see how long it would take to complete a unit. I already knew it would complete a unit within the 5-day deadline, so there was no danger of losing the unit or wasting time.
ID: 11188
I support Betting Slip wholeheartedly. This is long-standing, idiotic behaviour, and he has accurately attributed it to 'one work request per CPU core'.
ID: 11189
"GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects."
It is a well-known issue. They are already working on it.
ID: 11199
"GPUGrid need to report it to BOINC as a bug in the infrastructure, and get it fixed that way for the benefit of all projects."

Sorry, that was a bit blunt for a first post on a new project! Yes, I'm aware that CUDA support in BOINC is very much a 'work in progress' - I'm active on both the alpha and dev mailing lists, and I put in bug reports and suggestions for improvement where I can. But you're probably aware that the BOINC developers took a bit of a time-out to concentrate on Facebook/GridRepublic development. During that period, I think it would be fairer to say this issue was on the "to do" list rather than in active development. Since the over-fetching that Betting Slip reports will hurt this project (by delaying the return of the supernumerary results), I thought you might be in a good position to give the developers a nudge.

While we're here, may I point out a related issue? My first task on my Q9300/9800GTX+ combo, p1030000-IBUCH_3_pYEEI_com_0907-1-3-RND5364, reported:

<rsc_fpops_est>250000000000000.000000</rsc_fpops_est>
<fpops_cumulative>3436310000000000.000000</fpops_cumulative>

Those figures are out of kilter by a factor of nearly 14 to 1. The 71-GIANNI_BINDX task I reported yesterday is even worse. Since BOINC multiplies <rsc_fpops_est> by the DCF to estimate runtimes, new users (DCF = 1) are in danger of severe over-allocation until their first task completes - at which point their DCF will jump to somewhere near my current 16.7332, and the estimates will be corrected accordingly. You may be seeing a large number of aborts/late reports from newly attached hosts with the current settings.
ID: 11201
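A worked sketch of the estimation problem described above, in Python. It assumes BOINC's classic runtime estimate of rsc_fpops_est x DCF / host speed; the 50 GFLOPS host speed is an assumed figure for illustration, not taken from the post.

# BOINC's classic runtime estimate (a sketch, not client source):
#   estimated_runtime = rsc_fpops_est * DCF / host_flops

RSC_FPOPS_EST = 250_000_000_000_000.0       # claimed work, from the task above
FPOPS_CUMULATIVE = 3_436_310_000_000_000.0  # actual work done, from the task above
HOST_FLOPS = 50e9                           # assumed effective host speed (FLOPS)

def estimated_runtime(dcf):
    """Runtime estimate in seconds for a given duration correction factor."""
    return RSC_FPOPS_EST * dcf / HOST_FLOPS

print(FPOPS_CUMULATIVE / RSC_FPOPS_EST)   # ~13.7: the mismatch reported above
print(estimated_runtime(1.0) / 3600)      # ~1.4 h: what a freshly attached host expects
print(estimated_runtime(16.7332) / 3600)  # ~23 h: after the DCF corrects itself
# A new host under-estimates each task ~14x, so it fetches ~14x too much work.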
We tried and tried to fix the number of WUs to the number of GPUs only, but the current BOINC code is still quite buggy in this respect. Until BOINC works properly for this issue, there is nothing more we can do than give feedback to the developers.
ID: 11220
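A simplified Python model of the 'one work request per CPU core' behaviour complained about in this thread - not actual BOINC scheduler code, just the arithmetic; the limits of 2 per core and 2 per GPU are assumptions matching the reports here.

# Simplified model of the over-fetch: the client queues tasks per CPU core
# even though they run on the GPU. Illustrative arithmetic only.

def tasks_fetched(cpu_cores, per_core_limit=2):
    """What the buggy client queues: a per-core limit times the core count."""
    return per_core_limit * cpu_cores

def tasks_wanted(gpus, per_gpu_limit=2):
    """What a GPU project actually needs: a per-GPU limit times the GPU count."""
    return per_gpu_limit * gpus

# The C2D + single-GPU case reported later in this thread:
print(tasks_fetched(cpu_cores=2))  # 4 (1 running, 3 waiting)
print(tasks_wanted(gpus=1))        # 2 (1 running, 1 waiting)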
Since yesterday I have noticed that there is a new limit on WUs:
ID: 11480
I don't know whether there is any relation, but it happened after I tried the "Collatz Conjecture" CUDA project on all three PCs.
ID: 11486
I am seeing the same thing - a C2D that used to download 2 WUs (1 running, 1 waiting) now downloads 4 (1 running, 3 waiting - I assume 2 WUs per CPU core). This will result in all WUs now missing the 2-day 'bonus' window. Nothing has changed on this machine; it is running BOINC 6.6.28 and NVIDIA 185.85. This is the computer: 29936
ID: 11496
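A quick Python check of why the deeper queue breaks the 2-day bonus when a single GPU drains its queue serially; the 14-hour per-task runtime is an assumed placeholder, not a measured GPUGrid figure.

# Can the last queued task still make the 2-day bonus window when one GPU
# drains the queue serially? The 14 h per-task runtime is an assumption.

BONUS_WINDOW_H = 48.0  # the 2-day 'bonus' window discussed in this thread

def last_task_finish_h(queued_tasks, task_runtime_h=14.0):
    """Hours until the last of the queued tasks finishes on one GPU."""
    return queued_tasks * task_runtime_h

for queue in (2, 4):  # old behaviour vs. new behaviour on the C2D
    finish = last_task_finish_h(queue)
    verdict = "makes" if finish <= BONUS_WINDOW_H else "misses"
    print(f"{queue} queued: last task done after {finish:.0f} h -> {verdict} the bonus")
# 2 tasks: 28 h, bonus OK. 4 tasks: 56 h, the later tasks miss the window.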
"This will result in all WUs now missing the 2-day 'bonus' window."
Lower your cache and/or GPU-Grid resource share. It's somewhat of a pain, as these settings are not yet really separated for CPUs and co-processors, but it can be done. With 6.5.0 and a quad core I have GPU-Grid at ~25% resource share and a cache of 0.2 days. That's enough that BOINC only fetches about 1 day's worth of GPU-Grid tasks, and I can easily get the bonus.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 11517
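A rough first-order model of MrS's advice, in Python. Real BOINC work fetch varies between client versions, so the share-weighted-cache formula here is an assumption meant to show the direction of the effect, not the client's actual algorithm.

# First-order model of the cache x resource-share advice above. Real BOINC
# work fetch is version-dependent; this only shows why shrinking either
# knob shrinks the GPU-Grid queue.

def gpugrid_cache_days(cache_days, gpugrid_share, total_share):
    """Assumed model: the project's slice of the cache is share-weighted."""
    return cache_days * (gpugrid_share / total_share)

# MrS's settings: 0.2-day cache, GPU-Grid at ~25% of total resource share.
print(gpugrid_cache_days(0.2, 25, 100))  # 0.05 days of cache for GPU-Grid
# The per-core over-fetch multiplies whatever this yields, which may be why
# the observed queue is still around a day - comfortably inside the bonus.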