Message boards : Number crunching : Pausing 1 gpu when you have more than 1 in the system.
Hi all. Been away for a while, but now I'm back. Quick question: is there a way to pause just one GPU when you have more than one in the system, without stopping work on the other?
ID: 38909
Well now I just discovered something interesting.
ID: 38911
Well now I just discovered something interesting.
It is available in the Advanced view: you can suspend just the task that is running on the GPU you want to free. This method has a drawback: if you forget to resume the task, it stays stuck, and the BOINC manager won't ask for new GPUGrid tasks. To avoid this, you can make two cc_config.xml files, one for using both GPUs and one for using only one, and put two shortcuts on your desktop for copying the desired cc_config.xml into the BOINC manager's folder.
ID: 38913
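For anyone wanting to try the two-file approach, here is a minimal sketch of what the "one GPU only" cc_config.xml could look like; the project URL and device number are assumptions for illustration, so adjust them to your own setup. The "both GPUs" file is simply the same thing without the exclude_gpu block.

    <cc_config>
      <options>
        <!-- Keep GPUGrid work off GPU 1 so it stays free; device numbering follows the BOINC event log -->
        <exclude_gpu>
          <url>http://www.gpugrid.net/</url>
          <device_num>1</device_num>
          <type>NVIDIA</type>
        </exclude_gpu>
      </options>
    </cc_config>

After copying the desired file into the BOINC data folder, re-read the config files from the manager (or restart the client); GPU exclusions may not take effect until the client is restarted, depending on the BOINC version.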
Seems I need to update my BOINC client. I'm still running an older version - 7.2.39
ID: 38914
Seems I need to update my BOINC client. I'm still running an older version - 7.2.39
7.4.27 is the latest 'release' version for Windows, but if you are using a resource share of zero at any project do NOT upgrade to it, as it has problems with that. Other than that I have not found a problem with it, and it DOES show Retvari's feature.
ID: 38925
7.4.27 is the latest 'release' version for Windows, but if you are using a resource share of zero at any project do NOT upgrade to it, as it has problems with that. Other than that I have not found a problem with it, and it DOES show Retvari's feature.
What's the problem with resource share 0?
ID: 38936
Seems I need to update my BOINC client. I'm still running an older version - 7.2.39
All good. I just wanted to be able to pause 1 of the GPUs for an hour or two to have a game, without having to pause both of them. Working well for me!
ID: 38939
7.4.27 is the latest 'release' version for Windows, but if you are using a resource share of zero at any project do NOT upgrade to it, as it has problems with that. Other than that I have not found a problem with it, and it DOES show Retvari's feature.
It does NOT get any work from that project, even if all other projects don't have any units to send you. The developers are aware of the problem and are working on a fix. An example of what people are seeing is:
15-Nov-2014 00:17:11 [Asteroids@home] [work_fetch] share 0.000 zero resource share
15-Nov-2014 00:17:11 [Milkyway@Home] [work_fetch] share 0.000 zero resource share
15-Nov-2014 00:17:11 [Einstein@Home] [work_fetch] share 0.000 blocked by project preferences
Apparently it treats a zero as "never get any work from that place", as opposed to the old behaviour of just keeping a zero cache level and only getting enough units to crunch right now.
ID: 38940
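For reference, those [work_fetch] lines come from the client's work-fetch debug logging. If you want to see the same detail on your own machine, a minimal log_flags sketch like this in cc_config.xml turns it on (it is verbose, so switch it off again afterwards):

    <cc_config>
      <log_flags>
        <!-- Log each project's work-fetch decision, including the "zero resource share" skips shown above -->
        <work_fetch_debug>1</work_fetch_debug>
      </log_flags>
    </cc_config>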
What's the problem with resource share 0?
Thanks for the info. Just tried it on an ATI project. Sure enough, it didn't work. When I manually polled the project set at zero, I got:
Einstein@Home 11-17-14 08:53 update requested by user
Einstein@Home 11-17-14 08:53 Sending scheduler request: Requested by user.
Einstein@Home 11-17-14 08:53 Not requesting tasks: don't need (CPU: job cache full; NVIDIA GPU: job cache full; AMD/ATI GPU: job cache full)
Einstein@Home 11-17-14 08:53 Scheduler request completed
As long as they're working on this, maybe I'll ask for a further addition. It would be useful to have another setting (for instance 1) that would keep a small amount of work in the queue, perhaps one hour's worth. This new setting, for instance, would be useful here for cards that are bumping up against the 24-hour limit.
ID: 38943
What's the problem with resource share 0?
Wouldn't that be more of a 'work buffer' setting in the BOINC Manager than an actual resource share setting? But yes, I too dislike the percentage settings there; they are meaningless when running multiple projects and don't apply well when running both CPU and GPU projects, especially when they are not the same one. I think I would prefer a separate setting for each, and even multiple ones if someone has multiple GPUs in the machine. Through the exclude line one can put each GPU on a separate project, making a single setting a joke. Of course, when they finally give us fine-grained control over each CPU core, they will need a setting for each of those too.
ID: 38953
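As an illustration of that exclude line, pinning each GPU to a different project looks roughly like this in cc_config.xml; the project URLs and device numbers here are only example assumptions taken from projects mentioned in this thread:

    <cc_config>
      <options>
        <!-- Device 0 crunches GPUGrid only: keep Einstein@Home off it -->
        <exclude_gpu>
          <url>http://einstein.phys.uwm.edu/</url>
          <device_num>0</device_num>
        </exclude_gpu>
        <!-- Device 1 crunches Einstein@Home only: keep GPUGrid off it -->
        <exclude_gpu>
          <url>http://www.gpugrid.net/</url>
          <device_num>1</device_num>
        </exclude_gpu>
      </options>
    </cc_config>

With a layout like this, a single resource-share or buffer setting really does apply poorly, since each GPU is effectively its own machine as far as work fetch is concerned.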
As long as they're working on this, maybe I'll ask for a further addition. It would be useful to have another setting (for instance 1) that would keep a small amount of work in the queue, perhaps one hour's worth. This new setting, for instance, would be useful here for cards that are bumping up against the 24-hour limit.
It would allow running a normal work buffer while dealing with the needs of projects like this that need a fast turnaround time. It would also be helpful for projects with very small WUs, projects with small WUs combined with large backoff times, and projects that have large WU UL/DL sizes. I've run into all these scenarios, and it would be beneficial to have such an option to address the issue in addition to the zero-share setting. I sent the above to what I hope was an appropriate thread on the alpha list. Not sure if it's something that would be easily implemented though, and of course not sure if they'll want to do it anyway.
ID: 38954
I have discovered something mildly interesting with this.
ID: 39047
This will happen in every BOINC project as the primary GPU always resumes first.
ID: 39050
This will happen in every BOINC project as the primary GPU always resumes first.
Oh, OK. GPUGrid is the only BOINC GPU project I run. I never knew that it paused/resumed like that. Good to know.
ID: 39065