
Message boards : Number crunching : Crazy credits like Bitcoin Utopia

chr80
Joined: 19 Sep 20
Posts: 1
Credit: 6,842,742
RAC: 11
Message 61804 - Posted: 15 Sep 2024 | 12:25:31 UTC

I have noticed that recently some tasks in the project award an unusually large amount of credits compared to other similar tasks, both in this project and in other BOINC projects. I am concerned that such inequalities may discourage some users from continuing to participate in the project and negatively affect fair competition. This may be the result of some bug in the credit awarding system or an unintended configuration. Is there a possibility that the project team could look into this issue and consider adjusting the credit awarding algorithm? I think that fair credit awarding is crucial to engaging the community and encouraging new participants to join. Thank you for your attention and I am open to further discussion on this matter.

This approach shows concern for the project and the community, while avoiding confrontation and indicating a desire to solve the problem.

Freewill
Joined: 18 Mar 10
Posts: 20
Credit: 32,349,932,894
RAC: 149,304,364
Message 61806 - Posted: 16 Sep 2024 | 11:12:37 UTC

Please provide specific examples of the tasks by putting links in this thread. Otherwise, it is hard to know how to respond. Are you referring to some buggy tasks that incorrectly ran very fast and still awarded full credit? That was discussed in other threads.

Certain subprojects here require long run times (12-18 hrs, such as ATM and ATMML) or large amounts of GPU memory (12-16 GB) and high FP64 performance, such as QChem. Awarded points may also include a bonus for faster completion. I think those tasks are awarded fair credit for the hardware and volunteer attention required.

As for Bitcoin Utopia, I was a top producer on that project and nothing here comes within an order of magnitude of the rate I earned on that project. BU required specialized hardware to get the highest production and could not serve any other projects.

tito
Joined: 21 May 09
Posts: 22
Credit: 1,816,478,612
RAC: 6,675,810
Message 61807 - Posted: 16 Sep 2024 | 11:38:12 UTC - in response to Message 61806.

I think he's comparing it to other projects, like Einstein@Home, where a similar GPU earns roughly 6-7 times fewer credits per day.
I personally started a similar topic on Minecraft, where one thread of a 4600H CPU can produce ~800k credits daily - far from fair to other projects.
We all know that different projects take different approaches to credits, but still - there should be limits.
BU was OK in that GPUs produced a fair amount of credit - only the ASICs were many times faster. It just exploded the statistics :)

Freewill
Joined: 18 Mar 10
Posts: 20
Credit: 32,349,932,894
RAC: 149,304,364
Message 61810 - Posted: 16 Sep 2024 | 21:39:47 UTC - in response to Message 61807.

Each project and subproject is so different in terms of computing needs that I think there is no good way to compare credits. Projects can set higher credits to draw in the volunteer resources they need. Anyway, it's all just for fun where credits are concerned, as they have no cash value.

WPrion
Joined: 30 Apr 13
Posts: 96
Credit: 2,894,534,111
RAC: 19,944,238
Message 61840 - Posted: 27 Sep 2024 | 13:32:59 UTC - in response to Message 61810.
Last modified: 27 Sep 2024 | 13:33:42 UTC

Each project and subproject is so different in terms of computing needs that I think there is no good way to compare credits. Projects can set higher credits to draw in the volunteer resources they need. Anyway, it's all just for fun where credits are concerned, as they have no cash value.


Of course there are differences between projects and between tasks within projects. The points, however, should be (and I think originally were designed to be) awarded based on the computation completed. Points were originally based on Cobblestones: https://en.wikipedia.org/wiki/BOINC_Credit_System

A computer should earn points proportional to the computation it accomplishes. More powerful hardware earns points at a higher rate. There shouldn't be another layer of points awarded because the tasks are difficult or take a long time - that's already accounted for.

Keith Myers
Joined: 13 Dec 17
Posts: 1352
Credit: 7,771,336,114
RAC: 10,412,060
Message 61847 - Posted: 28 Sep 2024 | 2:43:31 UTC - in response to Message 61840.

That might have been true in the beginning of BOINC, when only CPU applications and work were available.

But as soon as GPU apps made the scene, the old Cobblestone credit-awarding mechanism became terminally broken.

And without some BOINC overseer with absolute control to force project scientists and administrators to stick to the Cobblestone credit algorithm, on penalty of death or whatever, the BOINC infrastructure became as lawless as the Wild West.

So administrators now arbitrarily decide the work unit credit value per app of each project.

Some try, more or less, to reward credit based on how hard the host had to work to complete the calculation, but others do not, or award pathetically low credit compared to work units of similar GFLOPS requirements on other projects.

So we old-timers just accept the current system as it is. No point in grumbling about it. As volunteers, we are not going to force any changes.

tito
Joined: 21 May 09
Posts: 22
Credit: 1,816,478,612
RAC: 6,675,810
Message 61848 - Posted: 28 Sep 2024 | 5:45:30 UTC

I'm an old timer and I don't accept millions of credits for nothing.
Long run bonus - ok;
Special hardware bonus - ok;
High error rate bonus - ok;
Linux vs Windows bonus - ok;
Short term bonus - ok;
"We made a mistake and credits are high this batch" - ok;

But hundreds or thousands of times more credit for nothing - not ok.

mikey
Joined: 2 Jan 09
Posts: 297
Credit: 6,395,470,770
RAC: 22,941,621
Message 61902 - Posted: 22 Oct 2024 | 0:30:13 UTC - in response to Message 61848.

I'm an old timer and I don't accept millions of credits for nothing.
Long run bonus - ok;
Special hardware bonus - ok;
High error rate bonus - ok;
Linux vs Windows bonus - ok;
Short term bonus - ok;
"We made a mistake and credits are high this batch" - ok;

But hundreds or thousands of times more credit for nothing - not ok.


You forget the option that maybe the key is to get the tasks finished as quickly as possible because of what the project itself is trying to do, i.e. "we need to finish this batch ASAP because the people we are doing it for needed the data yesterday!" We as volunteers have no way of knowing the reasons for releasing this batch of tasks or that one, so saying "these tasks pay too much credit" is just guesswork at this stage.

Erich56
Joined: 1 Jan 15
Posts: 1132
Credit: 10,511,497,676
RAC: 26,098,279
Message 61903 - Posted: 22 Oct 2024 | 5:20:24 UTC - in response to Message 61810.

Each project and subproject is so different in terms of computing needs that I think there is no good way to compare credits. Projects can set higher credits to draw in the volunteer resources they need. Anyway, it's all just for fun where credits are concerned, as they have no cash value.

+ 1

Keith Myers
Joined: 13 Dec 17
Posts: 1352
Credit: 7,771,336,114
RAC: 10,412,060
Message 61904 - Posted: 22 Oct 2024 | 20:14:35 UTC
Last modified: 22 Oct 2024 | 20:15:07 UTC

As @Mikey mentioned, this project requires fast turnaround of tasks and sets a short 5-day deadline. If the admins/developers/scientists had their druthers, and be damned any complaints from the volunteers, they would set a one-day deadline.

That is one of the reasons they award a 50% credit bonus for returning work within 24 hours and a 25% bonus for returning it in less than 2 days.

You might not realize that the result you return is actually sent right back out immediately after validation, as the science input for the next task sent to someone else. Each new task is an iteration of the tasks that came before.

This is how they produce their science results.

jon b.
Joined: 8 Jul 12
Posts: 5
Credit: 162,022,974
RAC: 3,309,436
Message 61965 - Posted: 27 Nov 2024 | 23:15:53 UTC - in response to Message 61810.

While there are differences between projects that may result in an actual difference in the number of operations performed for a given amount of computation time, it is possible to establish an upper bound for the 'fair' amount of credits based on a particular benchmark processor.

For example, an RTX 3070 GPU achieving an FP64 benchmark of 317.4 GFLOPS should receive at most 317.4 * 200 = 63,480 credits per day of computation (200 credits per GFLOPS-day being the Cobblestone rate). This bound is obviously violated by most projects that use GPU acceleration.

For ATMML running on my RTX 3070, I am earning about 9M credits/day, which is 142 times higher than the upper bound. Compare this to BRP7 on Einstein, which yields about 550,000 credits/day, or 8.7x the 'fair' amount based on the FP64 benchmark.
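For what it's worth, the arithmetic above can be reproduced in a few lines of Python (the GFLOPS figure and the daily credit rates are the ones quoted in this post; the 200 credits per GFLOPS-day factor is the Cobblestone definition, not a measured value):

```python
# Cobblestone definition: 200 credits per day per GFLOPS of sustained throughput.
COBBLESTONES_PER_GFLOPS_DAY = 200

def fair_credit_per_day(gflops):
    """Upper bound on 'fair' daily credit for a device benchmarked at `gflops`."""
    return gflops * COBBLESTONES_PER_GFLOPS_DAY

rtx_3070_fp64_gflops = 317.4              # FP64 benchmark quoted above
bound = fair_credit_per_day(rtx_3070_fp64_gflops)
print(bound)                              # ~63480 credits/day

print(round(9_000_000 / bound))           # ATMML: ~142x the bound
print(round(550_000 / bound, 1))          # Einstein BRP7: ~8.7x the bound
```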

This is still a difficult problem to solve, since different applications use different operations that may benefit differently from the GPU. For instance, an app using only FP32 could be 64 times faster than an app performing the same number of FP64 ops. The rationale behind the original Whetstone benchmark credit system was to control for these differences by relating credits to the time that the device was utilized rather than the number of operations performed.

I personally avoided contributing to GPUGRID for many years due to the credit issue, and currently have my resource share set lower than other projects. While the credits are somewhat arbitrary, it may be an issue for people who value contributing to a wide range of projects.

Perhaps a better way to evaluate your contribution these days would be to join WUProp@Home and look at the computation time that you have contributed to each project. GPUGRID has awarded these inflated credit rewards for many years, and I would not expect them to change it any time soon (especially since many large contributors have a vested interest in keeping the credits high).

Keith Myers
Joined: 13 Dec 17
Posts: 1352
Credit: 7,771,336,114
RAC: 10,412,060
Message 61966 - Posted: 28 Nov 2024 | 1:36:01 UTC - in response to Message 61965.

I hope you realize your computation of 'fair' credits is based on the flawed original "Cobblestone" credit algorithm, and the main BOINC developers stated way back in the beginning that the algorithm was completely flawed once BOINC started using GPU applications.

You can't compute GPU credits the same way. Plus, whatever is stated for any GPU's FLOPS capability is nothing more than a theoretical manufacturer value derived from database record lookups, for which there is no standard method, and none of the manufacturers use the same methods.

jon b.
Joined: 8 Jul 12
Posts: 5
Credit: 162,022,974
RAC: 3,309,436
Message 61967 - Posted: 28 Nov 2024 | 3:41:11 UTC - in response to Message 61966.

Yes, of course I understand that the original Cobblestone credit algorithm does not work with GPUs, because it is based only on the CPU benchmark and thus would not account for work done by the GPU. The point is simply that you can compute the theoretical maximum number of operations that a GPU could possibly perform in a certain amount of time, and use this to see that the credit provided by a project is not related to the actual computational difficulty of the tasks. There is only one manufacturer of Nvidia GPU chips (TSMC), and there are standardized benchmarks used in industry (notably LINPACK, which is more representative than the Whetstone benchmark used by BOINC and has been implemented with CUDA acceleration).

The reason that I put 'fair' in single quotes is that a single synthetic benchmark cannot account for differences in processor architecture or in software instruction optimizations (e.g. whether a CPU has AVX-512 and whether an application utilizes it). This limitation suggests that a fixed amount of credit per task, adjusted for variable task difficulty, would be more consistent across heterogeneous hosts. It appears that this kind of system is used here for ACEMD 3, while ATMML assigns a fixed amount of credit without adjusting for task difficulty. Maybe for some tasks it isn't possible to know the difficulty up front, but it is likely possible to count the number of iterations/timesteps/generations during execution.

To calibrate credit across projects, there would have to be some kind of consensus-based system utilizing a wide range of reference hardware to make cross-comparisons between projects. Such a system does not exist (although the data is available from WUProp), and likely never will.

Further aside: if you look at the bottom of "An Incentive System for Volunteer Computing" (https://boinc.berkeley.edu/boinc_papers/credit/text.php), you will see that they did consider adding credit for additional resources ("GPU, DSP, disk/network etc."), but this was never implemented.

Keith Myers
Joined: 13 Dec 17
Posts: 1352
Credit: 7,771,336,114
RAC: 10,412,060
Message 61968 - Posted: 28 Nov 2024 | 8:36:58 UTC

What you posted is all well and good and a nice reference to one of the original credit algorithm discussions.

But at this point, 20-odd years later, the cat is out of the bag, as they say.

Projects use their own versions and configuration of the server software that matches their design goals and needs.

It is impossible now to corral all of the existing projects into using one standardized version of the server software, since there is no mechanism to police such action.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2350
Credit: 16,296,203,290
RAC: 4,019,210
Message 61985 - Posted: 2 Dec 2024 | 11:13:02 UTC - in response to Message 61965.

...
For ATMML running on my RTX 3070, I am earning about 9M credits/day, which is 142 times higher than the upper bound.
...
For instance, an app using only FP32 could be 64 times faster than an app performing the same number of FP64 ops.
As far as I know, the GPUGrid app uses FP32 on the GPU (it's not clear that the same is true for the ATMML app, but let's suppose it is). Considering the fast-return bonus of x2, 64 x 2 = 128 is not that far off from 142.
This is pure speculation though.
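A trivial sketch of that back-of-the-envelope arithmetic (both factors are the assumptions stated above, not measured values):

```python
# Speculative check of the figure above: consumer NVIDIA GPUs run FP64 at
# roughly 1/64 the FP32 rate, and the fast-return bonus doubles the credit.
fp32_to_fp64_ratio = 64   # assumed FP32:FP64 throughput ratio
fast_return_bonus = 2     # assumed fast-return credit multiplier

expected = fp32_to_fp64_ratio * fast_return_bonus
print(expected)           # 128, in the same ballpark as the observed ~142x
```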

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1074
Credit: 40,231,533,983
RAC: 161
Message 61986 - Posted: 2 Dec 2024 | 14:29:08 UTC - in response to Message 61985.

...
For ATMML running on my RTX 3070, I am earning about 9M credits/day, which is 142 times higher than the upper bound.
...
For instance, an app using only FP32 could be 64 times faster than an app performing the same number of FP64 ops.
As far as I know, the GPUGrid app uses FP32 on the GPU (it's not clear that the same is true for the ATMML app, but let's suppose it is). Considering the fast-return bonus of x2, 64 x 2 = 128 is not that far off from 142.
This is pure speculation though.


Which app are you referring to when you say "the GPUGrid app"? There are three different apps here that routinely have work these days.

The Quantum Chemistry app relies heavily on FP64, which is why cards like the Titan V or V100 are so much faster than even a 4090 on that app.

pututu
Joined: 8 Oct 16
Posts: 26
Credit: 4,153,801,869
RAC: 3,613,611
Message 61987 - Posted: 2 Dec 2024 | 18:04:12 UTC
Last modified: 2 Dec 2024 | 18:05:53 UTC

There is probably no ideal solution for awarding credits for each subproject, particularly given the variation in GPUGRID WUs. It would be great to at least describe how the credits are awarded for each subproject. As an example, the Folding@home website https://foldingathome.org/faqs/statistics-teams-usernames/how-do-you-decide-how-much-credit-a-work-unit-is-worth-how-do-you-determine-how-many-points-a-work-unit-is-worth/ describes this well and lays out the limitations of the method they have chosen while attempting to achieve "equal pay for equal work". FAH can process WUs with either CPU or GPU, and they use an Intel i5-750 CPU (Linux) as the reference. That's a very ancient CPU. Right now a 4090 will do about 31M-32M average PPD in FAH (Linux). Yeah, the credit is outrageous for a 4090, but at least FAH attempts to use a single reference point, i.e. the i5-750: https://folding.lar.systems/cpu_ppd/brands/intel/folding_profile/intelr_coretm_i5_cpu___750__267ghz. One can do quick math to see how much faster (~4000x) a 4090 is than an i5-750 just based on average PPD, assuming uninterrupted folding (there is a QRB - quick return bonus - which affects PPD if you don't run continuously).
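Working backwards from the figures in this post (these are the post's own numbers, not official FAH benchmarks), the implied reference-CPU baseline is roughly:

```python
# Rough check of the ratio quoted above; both inputs come from this post.
rtx_4090_ppd = 31_500_000    # midpoint of the 31M-32M PPD range quoted
speedup = 4000               # "~4000x faster than the i5-750" per the post

implied_i5_750_ppd = rtx_4090_ppd / speedup
print(round(implied_i5_750_ppd))   # ~7875 PPD for the reference CPU
```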

From what I know, the project admins here have a GTX 1080 that they use for testing and benchmarking. I'm not sure of the details, but at the end of the day they have fixed credit for ATMML and QChem (both have gone through credit revisions since introduction), while ACEMD3 has variable credit.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2350
Credit: 16,296,203,290
RAC: 4,019,210
Message 61988 - Posted: 2 Dec 2024 | 22:48:00 UTC - in response to Message 61986.

which app are you referring to when you say "the GPUGrid app"?
I was referring to the ACEMD 3 app. Sorry for omitting that.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2350
Credit: 16,296,203,290
RAC: 4,019,210
Message 61989 - Posted: 2 Dec 2024 | 23:16:48 UTC - in response to Message 61987.
Last modified: 2 Dec 2024 | 23:20:05 UTC

I like the idea of a quick return bonus (especially a very well-defined one like FAH's). In my dreams, however, there would be an annual interest rate applied to points earned earlier, depending on how much faster CPUs/GPUs got in a given year (I'm aware there's a whole can of worms in this idea). It would make it possible (or easier) to compare present-day contributions to older ones. It feels unfair to "ancient" crunchers that anyone using a present-day GPU can easily get in front of them, because anyone can now produce 4,000-400,000 times more credit per day than in "ancient" times. The importance of previous contributions is not honored at all by the present credit system (neither in BOINC nor in FAH).
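Purely as a hypothetical sketch of this "interest" idea (the 40%/year speed-up rate is an invented, illustrative number, not anything BOINC or FAH actually uses):

```python
# Hypothetical indexing of old credit to the present: compound an assumed
# annual hardware-speedup rate so old contributions stay comparable.
def present_value(credit, years_ago, annual_speedup=0.40):
    """Scale credit earned `years_ago` years back into today's terms."""
    return credit * (1 + annual_speedup) ** years_ago

# 1M credits earned 10 years ago would count as roughly 29M today at 40%/yr.
print(round(present_value(1_000_000, 10)))
```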


