Message boards : Number crunching : Credits calculations
Author | Message |
---|---|
For transparency towards other projects, we have published in more detail how credits are computed. | |
ID: 6977 | Rating: 0 | rate: / Reply Quote | |
Of course we are interested. I need to take some time to read these papers ... but thank you for them ... | |
ID: 6987 | Rating: 0 | rate: / Reply Quote | |
Thanks I was looking for an explanation of the calculations. | |
ID: 7439 | Rating: 0 | rate: / Reply Quote | |
For transparency towards other projects, we have published in more detail how credits are computed.

Nice. This enables an easy comparison to the other GPU-enabled projects.

As described in your link, you use the result of the flops counting to calculate the credits for a WU according to the old formula for benchmark-based claimed credit. This of course leads to a severe "underpaying" of the GPU crunchers here when comparing to other projects. Just take SETI@home as an example: when they changed their credit system to a similar flops-counting scheme, they introduced an additional "efficiency factor" to get a continuous transition from the old benchmark-based scale to the new one. Brought down to the simplest possible representation, the SETI credit calculation looks like this:

SETI: credit = 2.72 * TFlop/WU (single precision)

If one brings your calculation to the same form, one comes up with:

GPUGrid: credit = 1.736 * TFlop/WU (single precision)

Considering that GPUGrid claims to additionally use a great deal of integer instructions (SETI does not, afaik), that only increases the difference between the credit calculations. I suggest you think about simply adopting the SETI scheme.

Generally there should be some discussion between the BOINC projects, and also with David Anderson, about this credit issue. You can see what happens when there is no consensus if you look at Milkyway@home. To a lot of people it looks like they are granting really excessive credits, and it would be nice if there were a common position among the projects. MW also uses flops counting, but for double precision calculations. I would think some kind of premium for this is justified, especially as GPUs have between 5x and 12x the single precision performance compared to double precision. But looking to the future, I would think a weight factor of two or so is okay for double precision operations (which would also be right for CPUs). Using SETI as the base again, that would mean MW should grant credit = 5.44 * TFlop/WU (double precision). But at the moment they are granting ~37% more (7.5 * TFlop).

If you calculate the ratios of the equivalent credit multipliers between GPUGrid : SETI : MW (SETI as the base), you get 0.64 : 1 : 1.37. So GPUGrid is about as far below SETI as MW is above it (and GPUGrid is awarding less than half as much as MW). I've already suggested the same to the MW administrator (who appears to be sick and unavailable for the last days). MW reducing their credits and GPUGrid raising theirs so that both match the SETI credit multiplier would establish some balance between the three GPU projects as a first measure. It would also reduce the quite extreme difference between GPUGrid and MW that one sees at the moment. But as I said already, as more and more projects develop GPU applications, a more fundamental solution would be desirable. This should only be the first step towards reaching a consensus between the projects, or maybe towards developing more sophisticated criteria for the granted credit.

What do you think about it? | |
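A rough sketch of that comparison, just to make the arithmetic explicit (the multipliers are the approximate values quoted above, and the WU size is a made-up example, not an official project number):

    # Approximate credits per TFlop of counted work, as estimated in this post.
    # Illustrative only - not official project numbers.
    seti_per_tflop    = 2.72    # single precision
    gpugrid_per_tflop = 1.736   # single precision
    mw_per_tflop      = 7.5     # double precision, as currently granted

    dp_weight = 2.0             # assumed "fair" premium for double precision

    tflop_per_wu = 1000.0       # hypothetical WU size

    print("SETI   :", seti_per_tflop * tflop_per_wu, "credits")
    print("GPUGrid:", gpugrid_per_tflop * tflop_per_wu, "credits")
    print("MW     :", mw_per_tflop * tflop_per_wu, "credits")

    # Ratios relative to SETI (the post quotes these as roughly 0.64 : 1 : 1.37)
    print(round(gpugrid_per_tflop / seti_per_tflop, 2))           # ~0.64
    print(round(mw_per_tflop / (dp_weight * seti_per_tflop), 2))  # ~1.38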
ID: 7556 | Rating: 0 | rate: / Reply Quote | |
Sorry, I missed your answer. | |
ID: 7559 | Rating: 0 | rate: / Reply Quote | |
> What do you think about it? | |
ID: 7561 | Rating: 0 | rate: / Reply Quote | |
Several users, like http://www.gpugrid.net/results.php?hostid=22576, have seen an increase in the points attributed to their WUs... Is this normal? Is it to compensate for a possible decrease in users crunching GPUGrid in favour of Milkyway? | |
ID: 7573 | Rating: 0 | rate: / Reply Quote | |
No, | |
ID: 7580 | Rating: 0 | rate: / Reply Quote | |
We have been discussing the multiplier with people at Seti and with David Anderson. It is likely that we will adopt the Seti multiplier by the next application update. We were too conservative, it seems. | |
ID: 7622 | Rating: 0 | rate: / Reply Quote | |
That sounds good.......It would be nice to finally see a standard used in all three projects.....And this will also be a big help in the future when more GPU projects come online..... | |
ID: 7729 | Rating: 0 | rate: / Reply Quote | |
So, | |
ID: 7832 | Rating: 0 | rate: / Reply Quote | |
Sounds great! :D | |
ID: 7834 | Rating: 0 | rate: / Reply Quote | |
Yes, I don't fully understand what the new credit scheme is going to be or what the 24 hour deal is about either ... | |
ID: 7839 | Rating: 0 | rate: / Reply Quote | |
This is the first time in BOINC history that I have to buy a new, slower CPU to match my ATI 260 GPU (replacing my current AMD 8650 Triple-Core with probably an AMD 5600), very nice anyway! Long life to GPUGRID! | |
ID: 7840 | Rating: 0 | rate: / Reply Quote | |
The best would be to limit the WUs per host, not per core. | |
ID: 7841 | Rating: 0 | rate: / Reply Quote | |
One of the problems that this project has, much like Milky Way, is that task x+1 relies on task x being returned. What this means is that the tasks are more of a task *STREAM* ... | |
ID: 7842 | Rating: 0 | rate: / Reply Quote | |
I suppose that I may be a little on the project's side here, because I will certainly be earning the higher rate on much of the work I do: the i7 has two GTX 295 cards, and the speed at which they do the work means that I will likely see a lot of the higher pay ... the GTX 280 and the 9800, on the other hand, will be catch-me-if-you-can ...

Hmm - only a matter of local cache size. Even my old 9600 GT runs them under 24 hours. So if I run @ 0.01 cache ... ... and donate a candle to Santo Improvisario each and every day ... | |
ID: 7851 | Rating: 0 | rate: / Reply Quote | |
One of the problems that this project has, much like Milky Way, is that task x+1 relies on task x being returned. What this means is that the tasks are more of a task *STREAM* ...

Thus my question about simply shortening the deadline...

A 9800GT would return all work within 24 hours on a single core machine... instead of buying a new $500 (US) GPU, why not spend much less than that on an older single core machine with PCIe slots? Or, to be more direct, one could get more "bang for the buck" with a mid-range card on a single core than with a relatively fast card (say a GTX 260) in an HT i7: there is no way that the 260 could return all eight downloaded workunits under 24 hours (i.e., such a credit bonus is not so straightforward in motivating one to purchase the latest and fastest equipment). | |
ID: 7855 | Rating: 0 | rate: / Reply Quote | |
Many workunits can run on a 9600GT under 24 hours, but the latest larger units (the 42xx credit ones) really push the limit. I have seen one OC 9600GT (1800 shader) which took about 25 hours to finish one of these. Your OC of 1850 might get you under 24 hours, but it will be very close. So, for most who are not willing to push their 9600 to the edge in OC, the larger units will edge just over the 24 hour mark. Indeed, even a 96 shader card needs to be pushed up to the 1650-1700 shader clock range to break the 24 hour deadline. | |
ID: 7856 | Rating: 0 | rate: / Reply Quote | |
Many workunits can run on a 9600GT under 24 hours, but the latest larger units (the 42xx credit ones) really push the limit. I have seen one OC 9600GT (1800 shader) which took about 25 hours to finish one of these. Your OC of 1850 might get you under 24 hours, but it will be very close. So, for most who are not willing to push their 9600 to the edge in OC, the larger units will edge just over the 24 hour mark. Indeed, even a 96 shader card needs to be pushed up to the 1650-1700 shader clock range to break the 24 hour deadline.

Of course - then it will be up to the project to keep them in the running and distribute the shorter WUs to the slower hosts. The information is available, and the scheduler can make use of it. | |
ID: 7858 | Rating: 0 | rate: / Reply Quote | |
Thus my question about simply shortening the deadline...

The screams of anguish if the total deadline was shortened ... :) Even with the 4 day deadline there are folks complaining that they would like it extended ... to my mind this is better in that it is gentle suasion to have people move to faster GPUs ...

The other good news on the horizon is that we are seeing more activity on the GPU front, with The Lattice Project trying their application on an unsuspecting world, and Milky Way looks to be making additional moves too ... heck, MW may be the first project that has a GPU application for the Mac ... of course, with my luck I won't be able to use it ...

{edit} fixed the quote to the correct one ... {/edit} | |
ID: 7860 | Rating: 0 | rate: / Reply Quote | |
...instead of buying a new $500 (US) GPU, why not spend much less than that on an older single core machine with PCIe slots?

If all you want to do is get GPUGRID to not download so large a WU queue, so that it can return WUs in <24 hours, there are several zero-cost options available that don't require you to swap out hardware. Especially if you use the computer for 'normal' purposes, you certainly don't want to swap out your quad-core for a single core CPU. Note that I've never tried any of these options, so it's possible they might not work as expected. Some options are:

1) Change your BOINC configuration to only use 1 CPU core. There are two ways of doing this:
1a) Use the BOINC Manager option to limit the # of CPU cores to 25% (quad-core), 33% (triple core), or 50% (dual core).
1b) Use the config file option to instruct BOINC to pretend it's a single core system. (I think this is the NCPUS flag, but I could be mistaken; see the sketch below.)
2) Lower the work queue size to 0.1 days or similar so that BOINC never requests more than one WU.
3) Wait until a release of BOINC and/or Lattice comes out that assigns WUs based on the number of GPUs instead of CPUs.

Note that option 1 (a or b) will reduce (or possibly eliminate) any other CPU BOINC work being done on the computer. This also applies to actually reducing the number of CPU cores with new hardware.

Mike | |
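If option 1b works the way I think it does, the cc_config.xml would contain something like this (untested on my side, so treat it as a sketch; the 1 means "pretend there is one CPU core"):

    <cc_config>
      <options>
        <ncpus>1</ncpus>
      </options>
    </cc_config>

After saving the file in the BOINC data directory you would need to restart BOINC, or tell the Manager to re-read the config file, for it to take effect.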
ID: 7862 | Rating: 0 | rate: / Reply Quote | |
So, Unless I misunderstand what you're saying, this seems to me like an ill-advised policy. It penalizes people using multi-core CPUs and encourages users to abort WUs. Mike | |
ID: 7864 | Rating: 0 | rate: / Reply Quote | |
Unless I misunderstand what you're saying, this seems to me like an ill-advised policy. It penalizes people using multi-core CPUs and encourages users to abort WUs.

Depends on how many GPU cores you have to match ... :) And aborting tasks may not be all that bad of a deal if they can get issued and returned faster that way. We just THINK it is a waste to do so ... The only other thing I would say is that I have not noticed a change in the awards yet ... unless I am missing something ... | |
ID: 7873 | Rating: 0 | rate: / Reply Quote | |
Remember that reliable hosts are the ones which return results within two days and have a 95% success rate. These get priority. | |
ID: 7875 | Rating: 0 | rate: / Reply Quote | |
Remember that reliable hosts are the ones which return results within two days and have a 95% success rate. These get priority.

Have you considered exposing that indicator on the computer information page? It would be nice to know for sure whether a system qualifies or not ... | |
ID: 7895 | Rating: 0 | rate: / Reply Quote | |
Remember that reliable hosts are the ones which return results within two days and have a 95% success rate. These get priority.

I've already lost the extra 50% credit on a couple of WUs: the WU was done in time (the 24 hr period) but didn't report back in time because the Manager didn't send it back promptly after it was finished. Could we please have the deadline extended to 36 hours at least? All my cards are capable of running & returning the WUs within the 24 hr period, but if the Manager won't send them back in time it doesn't do any good ... | |
ID: 7905 | Rating: 0 | rate: / Reply Quote | |
LOL ... Just got this WU in by 3 minutes or the 50% extra credit would have been lost. It had been finished 2 hours earlier but sat there until I manually sent it in. | |
ID: 7907 | Rating: 0 | rate: / Reply Quote | |
···All my Cards are Capable of running & returning the WU's within the 24 hr period but if the Manager won't send them back in time it doesn't do any good then ... You can activate the <report_results_immediately> option in your cc_config.xml file. If you do so, then the WUs are sent as soon as they are done. | |
ID: 7908 | Rating: 0 | rate: / Reply Quote | |
To ease the effort, we are now giving: | |
ID: 7910 | Rating: 0 | rate: / Reply Quote | |
To ease the effort, we are now giving:

Thank you ... Can you give us a chart of what we should expect? Looking at my daily totals it seems that I am getting higher returns, but when I look at the individual tasks they seem to have the same numbers as in the past. Thanks ...

And for those asking for longer deadlines ... see ... sometimes you get your wish ... :) | |
ID: 7912 | Rating: 0 | rate: / Reply Quote | |
To ease the effort, we are now giving:

Thanks a bunch GDF. My cards only take 5-6 hours to run the WUs, but with a cache of 4 WUs I'm always on the edge, because 4 take 20-23 hrs to do, so if they don't report right away they go over a 24 hour deadline. But with a 1.5 day deadline I'll have no problems now getting them reported ... :) | |
ID: 7913 | Rating: 0 | rate: / Reply Quote | |
···All my Cards are Capable of running & returning the WU's within the 24 hr period but if the Manager won't send them back in time it doesn't do any good then ...

That works for some of the clients and for some it doesn't. I had thought of that but hadn't got around to it yet with so many other things going on. | |
ID: 7914 | Rating: 0 | rate: / Reply Quote | |
···All my Cards are Capable of running & returning the WU's within the 24 hr period but if the Manager won't send them back in time it doesn't do any good then ...

If for some reason there is a back-off from contacting the server, the report_results_immediately option does not override this. However, with a manual update, or when the countdown to contact reaches 0, it will report immediately. I've had this set for a while because I have 3 GPUs and *only* a quad, often leaving one not running (if 2 WUs are finished and pending sending to the server, 1 of the 3 GPUs will not be running until another WU is downloaded). | |
ID: 7917 | Rating: 0 | rate: / Reply Quote | |
An interesting change. It's actually in the Folding@home spirit, where their points don't necessarily represent FLOPS, but rather the value of the calculations. E.g. a CPU-FLOP is worth more than a GPU-FLOP, as it's more flexible. And in the case of GPUGRID a "fast FLOP" is worth more, as it allows the project to progress faster. | |
ID: 7942 | Rating: 0 | rate: / Reply Quote | |
I don't seem to be able to locate the cc_config.xml file in my BOINC folder. Is it someplace else, or can one be created and put in the folder? If so, how do I do it? | |
ID: 7943 | Rating: 0 | rate: / Reply Quote | |
In your "BOINC/user data" folder create a text file and name it cc_config.xml. The contents can be:
MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 7944 | Rating: 0 | rate: / Reply Quote | |
Neither of these helped. I have a Q6600 and a single 9600GSO. I have it set to use at most 75% of the CPU, and the message window says "4/2/2009 10:59:40 AM||Preferences limit # CPUs to 3" when it comes up. This lets BOINC use 3 cores to run CPU applications and leaves one to feed the GPU and for whatever I am doing on the box. My max work queue size is 0.1 days.

I aborted all the work in the "ready to run" state and it downloaded three more units, leaving me with one running and three "ready to run". I am finishing work in "Approximate elapsed time for entire WU: 79329.953 s", which is about 22 hrs. I have set BOINC to return results immediately. But none of this will help if BOINC is going to keep three days' worth of work on my machine.

I suggest you reconsider your decision to penalize those of us who have more CPU cores than GPUs. | |
ID: 8075 | Rating: 0 | rate: / Reply Quote | |
You should really consider an OC of the shader and core clocks. The 96 shader 9600GSO (and 8800GS, which is the same card rebranded) is very tolerant of OC if you have good heat management in your system. I have a factory OC 9600GSO and have OC'ed my 8800GS as well to around a 1700 shader clock. Modest heat increase, no errors, and increased speed in crunching. This should get you close to having better luck in the quad.

As for fixes to this overall issue, a very easy solution would be to define workunit types by their preset credit totals (which are based on flops counting and equate quite nicely to run times) so that users with slower cards could opt out of the longer work. In other words, create generalized classes of work so that there might be four types: 1) less than 2,400 base credit, 2) 2,400 - 3,000 base credit, 3) 3,000 - 3,600 base credit, and 4) more than 3,600 base credit (the number and values of these thresholds are of course just examples). Add the workunit type checkboxes to the user account GPUGRID preferences section (similar to what is done at PrimeGrid) and let users select as needed. | |
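Just to illustrate the idea, a rough sketch of how such a classification could look (the class boundaries are only the example thresholds from above, and the preference set is made up):

    # Hypothetical size classes based on a WU's preset base credit.
    def wu_class(base_credit):
        if base_credit < 2400:
            return 1
        elif base_credit < 3000:
            return 2
        elif base_credit < 3600:
            return 3
        else:
            return 4

    # A user's preference checkboxes would then just be a set of allowed classes.
    allowed_classes = {1, 2, 3}   # e.g. this host opts out of the longest work

    def host_accepts(base_credit):
        return wu_class(base_credit) in allowed_classes

    print(host_accepts(2883))   # True  (class 2)
    print(host_accepts(4254))   # False (class 4)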
ID: 8093 | Rating: 0 | rate: / Reply Quote | |
IMO this is a terrible idea. The only way a lot of us can meet this standard is to keep aborting queued WUs constantly, since you insist on sending us 1 WU for every CPU instead of GPU. You're giving the fastest users a big bonus and penalizing the rest of us by burying us even further down the stats. It's already starting to cause hard feelings, and negative PR is hard to overcome. You've created a very nice project, but this is a bad move (at least until you can actually provide us with a way to make it work).

The problem is not really the project's fault. Until we have proper GPU support in BOINC these issues are going to be there. 6.6.20, or whatever number they give to the release version, will be the first that will start to address these kinds of problems. Note I say START to address these problems. There is a lot more difficulty in Disneyland than this ... Still, when you think of it, CUDA in BOINC is less than a year old and we have come a long way ... but there is still a long way to go ... | |
ID: 8104 | Rating: 0 | rate: / Reply Quote | |
The only way a lot of us can meet this standard is to keep aborting queued WUs constantly since you insist on sending us 1 WU for every CPU instead of GPU. I agree. When I wrote the above message I was pretty irritated after spending quite a while trying to figure out why some were getting those 4900 point WUs and some weren't. Looked in the wrong forum. If I could delete the above, I would. There are ways to avoid the problem with a bit of effort. | |
ID: 8128 | Rating: 0 | rate: / Reply Quote | |
I'll try the method you posted on the ars forum once I clear the backlog on this machine. Until then, I just abort any "Ready to Start" GPUGRID wu's I see on the machine. It's a pain, but it works. | |
ID: 8130 | Rating: 0 | rate: / Reply Quote | |
I agree. When I wrote the above message I was pretty irritated after spending quite a while trying to figure out why some were getting those 4900 point WUs and some weren't. Looked in the wrong forum. If I could delete the above, I would. There are ways to avoid the problem with a bit of effort.

Well, the good news is that I did not notice that you were irritated ... :) Most of us that tender advice here are pretty laid back and tend not to get excited easily, so it kinda rolls off and there's no need to sweat it ... heck, all of us at one time or another have said (typed?) things that we wish we could unsay ...

As to the "sizing" issue, well, we have not seen the end of the problems there yet. And sadly the developers, seemingly by design, are ignoring the issues that GPUs raise rather than starting to be proactive about them. I mean, they are not thinking about how to solve the fact that the pool of GPU resources is not guaranteed to be symmetric in capabilities (orthogonal is another way to look at it). And sadly, model numbers may not be the best way to detect this either ... At one point I had a GTX 295, GTX 280 and 9800GT in one system ... which should the long running tasks be scheduled on?

Anyway, I am going to try to address this subject again, because it is not going to go away if ignored... | |
ID: 8133 | Rating: 0 | rate: / Reply Quote | |
Next week, | |
ID: 8140 | Rating: 0 | rate: / Reply Quote | |
That sounds nice. | |
ID: 8149 | Rating: 0 | rate: / Reply Quote | |
That sounds nice.

My quad core (GTX 280) and 9800 system only seem to stock 1 or 2 spares with a 0.5 cache. Both are running 6.5.0 at the moment. | |
ID: 8157 | Rating: 0 | rate: / Reply Quote | |
I'm not really into this "let's award more to those that compute sooner"... | |
ID: 8171 | Rating: 0 | rate: / Reply Quote | |
The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it. | |
ID: 8184 | Rating: 0 | rate: / Reply Quote | |
The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it. That is the way I have seen it from my start of boinc participation. First time that I have seen it voiced, though. ____________ mike | |
ID: 8187 | Rating: 0 | rate: / Reply Quote | |
The idea if BOINC is to help groups who can't afford a supercomputer. So saying "go get a supercomputer" just doesn't cut it.

*IF* BOINC? :) I know, "of BOINC" ... :)

Actually, the point of BOINC is to allow groups to use their funds more effectively than spending them on supercomputer time or on the purchase of a supercomputer ... This allows leveraging of the science funding, which is always parsimoniously granted by governments and corporations to look into the most important questions of our time. In effect, for the cost of a couple of low-end to mid-range servers, a group can investigate questions which, in the past, could not be looked at because there simply was not the funding to do the research, or could not be looked at for long enough to establish a solid answer. | |
ID: 8198 | Rating: 0 | rate: / Reply Quote | |
This is exactly the point. We are the first project which uses BOINC as if it were a supercomputer in a low-latency mode, rather than just high-throughput. | |
ID: 8203 | Rating: 0 | rate: / Reply Quote | |
This is exactly the point. We are the first project which uses BOINC as if it were a supercomputer in a low-latency mode, rather than just high-throughput.

Now this explanation makes sense of it all. If getting the results back quickly substantially helps the science, go for it. I think there'd be a lot less uproar if these kinds of changes were posted along with the rationale. While we like stats, we're really here to help advance human knowledge. Thanks for the info. | |
ID: 8236 | Rating: 0 | rate: / Reply Quote | |
I totally agree. | |
ID: 8278 | Rating: 0 | rate: / Reply Quote | |
Would reducing my resource share for GPU grid help with keeping its queue short while maintaining a longer one for my CPU, or would I just end up with an idle GPU at times because I only had CPU WUs? | |
ID: 8292 | Rating: 0 | rate: / Reply Quote | |
Credits back to earned = granted? | |
ID: 8428 | Rating: 0 | rate: / Reply Quote | |
No, that was just a side effect of the server being down ... no WUs were returned within the 24 hr. bonus period. I have completed and returned 4 new WUs today and have received the bonus :-) | |
ID: 8440 | Rating: 0 | rate: / Reply Quote | |
The first task 522250 I received after the crash was returned well within 24 hours but received no bonus. Maybe the time is calculated from the time the task was created rather than the time it was sent?? | |
ID: 8448 | Rating: 0 | rate: / Reply Quote | |
Sorry I posted so quick based on an *assumption* that I understood the credits calculation :-( | |
ID: 8450 | Rating: 0 | rate: / Reply Quote | |
Yup, here's another example -- the original recipient's WU spanned the outage, so he didn't get bonus credit. It was subsequently sent to me -- after the first guy returned the WU -- and I got the same (non-bonus) credit as the first guy, despite my returning it in 22 hours. | |
ID: 8455 | Rating: 0 | rate: / Reply Quote | |
We are looking into it. Thanks for reporting. | |
ID: 8466 | Rating: 0 | rate: / Reply Quote | |
For this WU http://www.gpugrid.net/result.php?resultid=539356 I got less granted credit (3,460.12) than claimed credit (3,844.58). This WU was reported within half a day and I got no bonus. Why??? Perhaps another machine (http://www.gpugrid.net/result.php?resultid=540426) reported it quicker??? | |
ID: 8542 | Rating: 0 | rate: / Reply Quote | |
So, we have updated to the new application, 6.63. Now, if two users return the same WU, the credit with bonus is awarded to both, in case one of the two has crunched it within two days. | |
ID: 8547 | Rating: 0 | rate: / Reply Quote | |
Now, if two users return the same wu, the credit with bonus is awarded to both, in case one of the two has crunched it within two days. Shouldn't the bonus be independent for both users? If both return within 2 days both get the bonus, if only one succeeds then only this user should get the bonus. But I can imagine that BOINC is not (yet) made to award different credits for different users who crunched the same WU. So better to be safe than sorry (i.e. both get the bonus instead of none). MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 8557 | Rating: 0 | rate: / Reply Quote | |
So, we have updated to the new application, 6.63.

Maybe I misunderstand what you mean... But this WU does not seem to follow your logic... my GPU is the "5914" http://www.gpugrid.net/workunit.php?wuid=382081 I finished in 1 day, the other box finished in 4 days (before me). And I seem to be going backwards in the score department.

____________ Consciousness: That annoying time between naps...... Experience is a wonderful thing: it enables you to recognize a mistake every time you repeat it. | |
ID: 8578 | Rating: 0 | rate: / Reply Quote | |
This one spans the changes. The first result was returned with the old app. | |
ID: 8582 | Rating: 0 | rate: / Reply Quote | |
Speculation: I think it is a combination of many things. First, your claimed credits are higher because the changeover to the base credit calculation (flops x2) happened after the first person was sent their WU. Another difference is that the app version they got was 6.62 and you got 6.63, which, besides a science update, I think also changed the bonus credit calculation from 1.5x if < 24 hours to 1.2x if < 48 hrs. Put this together with the *assumption* made by ETA above (BOINC can only handle 1 credit amount per WU) and the fact that the project needs to stream results as quickly as possible, and it makes for a tricky situation. | |
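If that speculation is right, the granted credit would work out roughly like this (the multipliers and cut-offs are only the guesses from this post, not numbers confirmed by the project):

    # Purely illustrative - guessed bonus scheme, not confirmed by GPUGRID.
    def granted_credit(base_credit, hours_to_return, new_scheme=True):
        if new_scheme:                                     # speculated 6.63 behaviour
            bonus = 1.2 if hours_to_return <= 48 else 1.0
        else:                                              # speculated 6.62 behaviour
            bonus = 1.5 if hours_to_return <= 24 else 1.0
        return base_credit * bonus

    print(granted_credit(3200, 20, new_scheme=False))   # 4800.0
    print(granted_credit(3200, 20, new_scheme=True))    # 3840.0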
ID: 8584 | Rating: 0 | rate: / Reply Quote | |
I don't understand why I only got 2883 credits for a unit which ran for 1,579.13, while another big unit gave 4804 credits for 1,497.56. | |
ID: 8595 | Rating: 0 | rate: / Reply Quote | |
You're probably looking at the CPU time and not the GPU time. | |
ID: 8599 | Rating: 0 | rate: / Reply Quote | |
The credit calculations changed in between the two tasks. Please read above in this thread. | |
ID: 8600 | Rating: 0 | rate: / Reply Quote | |
These are two 6.63 tasks that have been given two different bonuses despite both being returned within 24 hours. The main difference is that one was completed on the same UTC day and the other wasn't. 537864 got a 1.6x bonus and 544287 got 1.25x. | |
ID: 8614 | Rating: 0 | rate: / Reply Quote | |
I have a question! | |
ID: 10485 | Rating: 0 | rate: / Reply Quote | |
Are you planning to increase the credit granted per WU?

I'm not an admin, but I'm sure the answer is: no. The credits have recently been increased to bring them in line with the standard set by Seti. If Aqua grants many more credits per unit of time, then they either:
- don't stick to the standard set by Seti / UCB
- overestimate their flops
- extract more flops from the hardware than other projects

MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 10498 | Rating: 0 | rate: / Reply Quote | |
AQUA@Home gives much more credit per hour than GPUGRID, sometimes even twice as much as GPUGRID.... AQUA has only just started its CUDA app and is still in the process of making the granted credit appropriate. You can read here how this was handled some days ago. The admin has now released a new version, 3.26, with which the credit calculation should change. ____________ Member of BOINC@Heidelberg and ATA! | |
ID: 10533 | Rating: 0 | rate: / Reply Quote | |
Hi! | |
ID: 10563 | Rating: 0 | rate: / Reply Quote | |
Your GPU worked on it for 6.5h, which looks pretty normal to me. | |
ID: 10565 | Rating: 0 | rate: / Reply Quote | |
I mean, shouldn't it have been 5600 credits granted? | |
ID: 10568 | Rating: 0 | rate: / Reply Quote | |
Claimed 3681, granted 4600 - so you got the quick-return bonus correctly. Apart from that they're a little below your normal credit/time, but 5600 would be excessive.. such WUs usually take you ~27k seconds. | |
ID: 10572 | Rating: 0 | rate: / Reply Quote | |
What does the amount of claimed credit depend on? | |
ID: 10576 | Rating: 0 | rate: / Reply Quote | |
Take a look at the 1st post in this thread. | |
ID: 10592 | Rating: 0 | rate: / Reply Quote | |
I have got a very long task http://www.gpugrid.net/workunit.php?wuid=547633 as the second cruncher (GTX 260 Top, host ID: 31329). The estimated time, calculated after 4h15' (6.722%), is ~63 hours. The first cruncher's claimed credit is 45,669. I have no chance of finishing it within 2 days to get the bonus. On the other hand, the claimed credit looks like it already includes the bonus. The task is now suspended and I'm crunching the next standard 8h task to get the bonus. | |
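For reference, the ~63 hour figure is just the straight extrapolation from the fraction done (a sketch; the real estimate will drift as the task progresses):

    elapsed_hours = 4.25        # 4h15'
    fraction_done = 0.06722     # 6.722%

    print(round(elapsed_hours / fraction_done, 1), "hours")   # ~63.2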
ID: 10644 | Rating: 0 | rate: / Reply Quote | |
Whatever happened to that task is probably not intended. | |
ID: 10647 | Rating: 0 | rate: / Reply Quote | |
So what to do with it, abort or try to run and finish it? I like long tasks, but if it crashes after 60h, I would not be happy. | |
ID: 10650 | Rating: 0 | rate: / Reply Quote | |
I've got one of these monsters running too: | |
ID: 10651 | Rating: 0 | rate: / Reply Quote | |
Whatever happened to that task is probably not intended.

Maybe it is intended: the names of those strange tasks include "twomons". We have received not only monsters, but even double monsters :-). | |
ID: 10654 | Rating: 0 | rate: / Reply Quote | |
Now I've got one of those monster WUs, too. | |
ID: 10658 | Rating: 0 | rate: / Reply Quote | |
I've got a pair of 'monsters' running on my 295. 24 hours elapsed and only showing 25-30% complete. | |
ID: 10659 | Rating: 0 | rate: / Reply Quote | |
I've got one of these monsters running too:

Hmm, aborted by project, redundant result, after 18:40 of run time on a GTX 260, and no credit. That's 3 normal WUs' worth of time. Doesn't seem too fair, does it? | |
ID: 10678 | Rating: 0 | rate: / Reply Quote | |
Monster aborted by server, no credit granted. Moving to AQUA, it is not fair. | |
ID: 10682 | Rating: 0 | rate: / Reply Quote | |
You guys might like this: | |
ID: 10687 | Rating: 0 | rate: / Reply Quote | |
OK, nice | |
ID: 10694 | Rating: 0 | rate: / Reply Quote | |
Thanks! | |
ID: 10702 | Rating: 0 | rate: / Reply Quote | |
I agree. The whole idea of this setup is to use spare capacity, NOT to force people to go out and buy faster equipment for the sole use of these projects. My computer is set up to suit my needs, not to have my needs adapted to suit others. | |
ID: 13083 | Rating: 0 | rate: / Reply Quote | |
Wiyosaya wrote: I agree that the credit scheme should be revamped to be more fair to the volunteers.

What do you mean by that?

1. Increasing the basic credit for the short queue. That means entering the credit war, as that credit is the same as the credit Seti grants for CC 1.0/1.3 GPUs.
2. Removing the time bonus. The developers need results ASAP, so that would not be in accordance with their requirements.
3. Decreasing the basic credit of the long queue to the basic credit of the short queue. As I remember, the credit bonus for the long queue is there for two reasons: more errors and more important results. There would be no reason to crunch the long queue if the credit were the same.

I see the credit scheme the other way around: the granted credit including the 24 hour bonus is the basic one, and lower granted credit is a penalty for crunching more slowly than the project needs. Any other ideas? | |
ID: 25494 | Rating: 0 | rate: / Reply Quote | |
So you call your response fair? People have no idea what I posted nor why I advocate changing the scheme. In fact, you've simply posted a reply to what I said in a completely different thread, when site moderators were participating in that thread AND also suggesting that perhaps the credit scheme should be revamped. I'm not repeating what I said there. For those truly interested in discourse - see the original post. Seems like you want to perpetuate the "credit war"; I want to end it. ____________ | |
ID: 25498 | Rating: 0 | rate: / Reply Quote | |
I have read not only your post - I read the whole thread. As you can see, I have put NO response; I am asking you for a response, not only for a comment. I have commented on the possibilities that came to my mind and I am waiting for other possibilities. | |
ID: 25499 | Rating: 0 | rate: / Reply Quote | |