Message boards : Number crunching : PAOLA_3EKO_8LIGANDS very low GPU load
PAOLA_3EKO_8LIGANDS
ID: 26648 | Rating: 0 | rate: / Reply Quote | |
I have 3 of these units running now and yes, they are very, very slow: around 30% GPU utilization on Windows 7, and 37% on Windows XP.
ID: 26649 | Rating: 0 | rate: / Reply Quote | |
Same behavior with a GTX 570: 38% done after 5 hours, another 6 estimated, GPU usage at ~48%.
ID: 26650 | Rating: 0 | rate: / Reply Quote | |
Same problem. CUDA 3.1 app on a GTX 285: 26 hours to complete, at 46% GPU load and 33% CPU load.
ID: 26651 | Rating: 0 | rate: / Reply Quote | |
For this group, here is a list of the GPU and runtime for the 22 most recently returned WUs: | |
ID: 26657 | Rating: 0 | rate: / Reply Quote | |
GTX460 768MB, 62% GPU usage, completed in 19.14 hours.
ID: 26658 | Rating: 0 | rate: / Reply Quote | |
GTX670: 95.460% complete, 16:32:28 hours runtime.
ID: 26660 | Rating: 0 | rate: / Reply Quote | |
From observation, on a GTX670 (Win7 x64) the CPU usage is at a lower ratio on these PAOLA_8LIGANDS (1 GPU sec to 0.7 CPU sec) than on the NATEs (1:1).
ID: 26661 | Rating: 0 | rate: / Reply Quote | |
Nate got me thinking: it's taking the same amount of time on my GTX 560s and GTX 550s as it does on my GTX 670s. 1344 CUDA cores vs. 192, and it takes an almost identical amount of time; something's wrong.
ID: 26663 | Rating: 0 | rate: / Reply Quote | |
I run Docking@home in addition to GPUGRID on my Phenom II X6. I suspended Docking to observe how much CPU was being consumed by my GTX460/GTX550Ti whilst these units are running, and it turned out to be 33%.
ID: 26664 | Rating: 0 | rate: / Reply Quote | |
"Reduced Docking down to 4 cores to see what effect it has."

Wow, you just now figured that out? Maybe I'm not understanding what you wrote; I thought everybody knew that you need to leave 1 CPU core free for every GPU you're running. The CPU core feeds the data to the GPU. It's going to be like Christmas for you now; your crunching times should drop sharply.
ID: 26665 | Rating: 0 | rate: / Reply Quote | |
"Reduced Docking down to 4 cores to see what effect it has."

You don't need to leave 1 CPU core free per GPU, but it helps. I have chosen to run 1 free core for 2 GPUs; I find it is the most efficient way to use my resources. Docking is my main project... 20th overall. GPUGRID is only a side project for me that complements D@H very well; they are both in the same area of research. I used to run 6 cores of Docking at the same time as GPUGRID and wasn't bothered about how much it affected GPUGRID crunch times, but when I started running 2 graphics cards on GPUGRID it started hitting D@H crunch times, so I reduced to 5 cores, which didn't really affect my D@H RAC. Whilst these CPU-intensive PAOLA units are around I'll run D@H on 4 cores, then I'll probably return to 5. Before these PAOLA units I was getting ~96% CPU usage overall with 5 D@H units and 2 GPUGRID WUs running. Currently seeing 97% with 4 D@H + 2 PAOLA WUs.
ID: 26666 | Rating: 0 | rate: / Reply Quote | |
I figured you must have known. I get 0.645 CPUs used per task on my GTX 670s; using 2 cards in the same machine (do the math, that's more than one CPU core). I'm leaving 2 of my 8 cores free and the GPU usage shot up to 98% on both cards.
ID: 26667 | Rating: 0 | rate: / Reply Quote | |
"I figured you must have known. I get 0.645 CPUs used per task on my GTX 670s; using 2 cards in the same machine (do the math, that's more than one CPU core). I'm leaving 2 of my 8 cores free and the GPU usage shot up to 98% on both cards."

I see you run Bulldozers. On Docking the 6-core Phenom IIs outperform the 8-core Bulldozers; one of the reasons I have stuck with my old skool Phenoms. My main cruncher, a Phenom II X6 1055T, I run overclocked at 3.5GHz; at that speed, running 5 cores on Docking is just as effective as running 6 cores at 2.8GHz.
ID: 26668 | Rating: 0 | rate: / Reply Quote | |
These workunits do not use a full CPU core with Kepler GPUs, unlike any previous workunits. It's as if the late SWAN_SYNC parameter wasn't set to 0. These workunits run twice as fast on my GTX 480s as on my GTX 680s.
ID: 26670 | Rating: 0 | rate: / Reply Quote | |
I'm running Windows 7x64 with a single GTX 670 and an i7-3770K overclocked to 4.2GHz with hyperthreading enabled (i.e. 8 logical cores). My card is the factory-overclocked triple-fan Gigabyte 670. All factory default settings. I'm running BOINC 7.0.28. No cc_config, no swan_sync. | |
ID: 26671 | Rating: 0 | rate: / Reply Quote | |
Does anyone else have the default max 20% CPU time for GPU work set in their website profile too, like I did until today? Perhaps this value is too low for these new units? I've set it to 100% and am now waiting until I get a new one of these WUs and finish it.
ID: 26672 | Rating: 0 | rate: / Reply Quote | |
"Does anyone else have the default max 20% CPU time for GPU work set in their website profile too, like I did until today? Perhaps this value is too low for these new units?"

I feel it must be something like this, because there are some users who can compute much faster than the rest (and at speeds we were expecting). Keep us updated, dskagcommunity. If anyone else wants to play with the setting, click on your username up above, then "GPUGRID preferences", "Edit Preferences", and change "Maximum CPU % for graphics..." to 100% (or whatever you prefer). Still, this might not be it. It wouldn't explain this, though, unless the cards are on different machines with different settings:

"These workunits do not use a full CPU core with Kepler GPUs, unlike any previous workunits. It's as if the late SWAN_SYNC parameter wasn't set to 0. These workunits run twice as fast on my GTX 480s as on my GTX 680s."

Let's see...
ID: 26673 | Rating: 0 | rate: / Reply Quote | |
"If I set BOINC to use 87.5% of processors (that's equal to 7 out of 8 cores), it shuts down 1 WCG task (so only 7 remain running), but GPU load remains 45%."

I've seen people advise this action here and at other BOINC forums, and it seems to me that this would never work, because telling BOINC to use 6 of 8 cores or 7 of 8 cores takes them away from all projects. I would think you would want to set CPU usage at your GPUGRID account; by taking away cores in BOINC, only the operating system or programs not connected to BOINC can utilize those cores. I don't think the preferences in our GPUGRID account allow for enough manipulation of the CPU to make it do what you want.
ID: 26674 | Rating: 0 | rate: / Reply Quote | |
Here's the one thing I've been able to find in common with all my video cards: the GPU memory controller stays right around 10%; it will drop to 9% and go up to 11%, but never higher or lower, on the 3EKO WUs. Also, they are all using the same amount of memory (±1%), which is 628MB. These are my cards:
ID: 26675 | Rating: 0 | rate: / Reply Quote | |
"I've seen people advise this action here and at other BOINC forums, and it seems to me that this would never work, because telling BOINC to use 6 of 8 cores or 7 of 8 cores takes them away from all projects. I would think you would want to set CPU usage at your GPUGRID account; by taking away cores in BOINC, only the operating system or programs not connected to BOINC can utilize those cores."

It does work for me! I set max CPU utilization to 99% on my AMD FX8150; 7 of the 8 cores crunch climateprediction.net WUs and one core makes my GTX670 happy. A nice side effect: my system is more stable and doesn't hamper my workflow that much. In the Windows Task Manager's Processes tab (Ctrl+Alt+Del) I can see that all the cores are used to their maximum by BOINC: 13% with about 70,000 to 130,000 KB memory utilization for CP, and 9% and 192,000 KB for these EKO-PAOLA WUs we are talking about (for NATHAN WUs this is normally 13% as well, and about 230,000 KB).
ID: 26676 | Rating: 0 | rate: / Reply Quote | |
That just doesn't make sense at all. There is no way in BOINC to allocate cores to particular work units; that setting is for freeing up CPU power for the OS. If your WUs use less than 0.5 CPUs you won't see issues; anything over that and you have to suspend WUs. I don't know why I'm responding to this; I feel like I'm walking into another one.
ID: 26677 | Rating: 0 | rate: / Reply Quote | |
I've set my GPUGrid preferences to use 100% CPU for graphics, but I *think* this refers to how much CPU to use for displaying a project's screensaver... I'm going to do a quick task switch to see...
ID: 26678 | Rating: 0 | rate: / Reply Quote | |
You are correct, these CPU preferences are just for the screen saver and nothing to do with how much CPU is used to support a GPU. | |
ID: 26679 | Rating: 0 | rate: / Reply Quote | |
"I've set my GPUGrid preferences to use 100% CPU for graphics, but I *think* this refers to how much CPU to use for displaying a project's screensaver... I'm going to do a quick task switch to see..."

After 1 hour of processing it is 4.2% complete. GPU utilization hanging around 70%, GPU memory still at 6%. CUDA = 4.2 app. Driver = 301.42. Mobo PCIe2 @ 8x. All memory stats show only nominal differences to NATE (usage, private, pool, paged, non-paged). Page faults are high (compared to NATE) @ 173k after 1 hour. No hard faults. Let me know if there are any other details I can help with :-)
____________
Thanks - Steve
ID: 26680 | Rating: 0 | rate: / Reply Quote | |
I have 3 machines that are identical, | |
ID: 26681 | Rating: 0 | rate: / Reply Quote | |
From some quick testing on my part. | |
ID: 26682 | Rating: 0 | rate: / Reply Quote | |
Regarding the setting I mentioned ("on multiprocessors use at most xxx% of processors" - which I set to 87.5 to use 7 out of 8 cores), that only applies to tasks that use the CPU exclusively (like WCG). Tasks that use the GPU ignore that setting - they simply use as much GPU as possible, and the associated amount of CPU needed. For GPUGrid, that's 1 task per GPU (since each task uses 1 NVIDIA GPU), and hence 0.585 CPU cores per GPU task. For something like Einstein@Home, which uses 0.5GPUs per task, it runs 2 GPU tasks simultaneously, and consumes however much total CPU two Einstein GPU tasks need. | |
ID: 26683 | Rating: 0 | rate: / Reply Quote | |
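A side note on that 0.585 figure: newer BOINC clients (from around 7.0.40 onward) also accept an app_config.xml in the project folder, which lets you tell the scheduler to budget a full core per GPU task instead of the 0.585 the project declares. A minimal sketch, assuming the long-runs app is named acemdlong (a placeholder; copy the real name from the <name> element in client_state.xml):

```
<!-- projects/www.gpugrid.net/app_config.xml
     "acemdlong" is a placeholder; use the app name from client_state.xml -->
<app_config>
   <app>
      <name>acemdlong</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>   <!-- one task per GPU -->
         <cpu_usage>1.0</cpu_usage>   <!-- reserve a full core per GPU task -->
      </gpu_versions>
   </app>
</app_config>
```

Note this only changes how many cores BOINC sets aside when scheduling its CPU tasks; it doesn't make ACEMD itself poll the GPU any harder.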
"Page faults are high (compared to NATE) @ 173k after 1 hour"

That suggests something is not being kept in memory that should be, and is repeatedly being read from disk (which would obviously be a lot slower). Maybe this is a CPU process for the GPU? Having an SSD would mask this to some extent: you would experience the same issue, but not as severely. Having more RAM available, or faster RAM, might also reduce this somewhat, but it sounds like a systemic issue. It sort of explains why Luke only noticed an increase from 45% to 52% when he stopped running all CPU tasks.

The more GPUs a system has, the more this is a problem, and running more CPU projects (generally) worsens it. On a 12-thread system I wouldn't use more than 10 threads if supporting 2 GPUs. It very much depends on the CPU project too; some CPU projects eat memory (and 6GB isn't enough) while others use 10 to 100MB. Some also have extremely high I/O requirements. Both RAM shortages and high I/O are known to negatively impact GPU projects. If I had 4 GPUs in a system, I probably wouldn't run any CPU projects.
____________
FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
ID: 26684 | Rating: 0 | rate: / Reply Quote | |
"Does anyone else have the default max 20% CPU time for GPU work set in their website profile too, like I did until today? Perhaps this value is too low for these new units? I've set it to 100% and am now waiting until I get a new one of these WUs and finish it."

So it didn't have any effect. :/
____________
DSKAG Austria Research Team: http://www.research.dskag.at
ID: 26685 | Rating: 0 | rate: / Reply Quote | |
"Having more RAM available, or faster RAM, might also reduce this somewhat, but it sounds like a systemic issue. It sort of explains why Luke only noticed an increase from 45% to 52% when he stopped running all CPU tasks."

If it helps, I have 8GB RAM (of which BOINC is allowed to use 90%, or 7GB, when the computer is in use) running at 1600 MHz. I have all of Windows, and the BOINC executable, on a 128GB SSD, but my BOINC data folder is on a 2TB hard drive (both drives use the motherboard's two SATA 6Gb/s ports that come from the Z77 chipset). The HDD does 150 MB/s in HDTune and the SSD does 400 MB/s. I'd tell you my page and hard faults, but I'm not being given Paola tasks; they seem to have run out and we're back to Nathan tasks. Right now, with 8 WCG tasks and one GPUGrid Nathan running, I'm seeing 0 hard faults per second and 56% physical memory utilisation. GPUGrid made 156,000 page faults in 35 minutes. Just to put that in context, Flash Player made 25 million page faults in 7 minutes of CPU time (about 3 hours of YouTube videos...).
ID: 26686 | Rating: 0 | rate: / Reply Quote | |
I don't have time to do all the stats, but on Win7 x64 (Core i7-920, HT on, 6 GB RAM, BOINC 7.25), with ALL CPU tasks suspended, my GTX 670 runs at about 45%.
ID: 26688 | Rating: 0 | rate: / Reply Quote | |
i5-2500K / 8GB 1333MHz / Asus P67 / GTX 680, Win7 Ultimate, driver 304.48
ID: 26694 | Rating: 0 | rate: / Reply Quote | |
Possible solution - see post #2 in this thread: | |
ID: 26706 | Rating: 0 | rate: / Reply Quote | |
Well, I think I've found the general cause, although I can't say I have a solution yet. When I run the workunits on our machines, NOT via BOINC, the simulations use 100% of the CPU. When I run the PAOLA WUs via BOINC, the max I get is ~50% CPU usage. No doubt that's where the 2x slowdown comes from. Now, why that happens, I have to find out. I'll ask the more technical people here and hopefully have answers soon, but if anyone knows why CPU usage is limited to 50% via BOINC, feel free to explain. Is this something common for other GPU based tasks, or specific to us?... | |
ID: 26716 | Rating: 0 | rate: / Reply Quote | |
Perhaps it would be enough if the project used 1 CPU instead of 0.65?
ID: 26717 | Rating: 0 | rate: / Reply Quote | |
The CPU utilisation is about 7-10% on my 8-core CPU (so that's 50-75% of one core), but I think you guys made it that way by design, because it uses 0.585 or 0.65 CPUs (can't remember which, because I'm not running any right now).
ID: 26718 | Rating: 0 | rate: / Reply Quote | |
"Well, I think I've found the general cause, although I can't say I have a solution yet. When I run the workunits on our machines, NOT via BOINC, the simulations use 100% of the CPU. When I run the PAOLA WUs via BOINC, the max I get is ~50% CPU usage. No doubt that's where the 2x slowdown comes from."

Not sure how relevant this is, but back quite a while ago we were using the SWAN_SYNC environment variable to tell ACEMD to fire off a process that used a full CPU to poll the GPU, rather than waiting for the GPU to make a call and the inherent latencies that involved... then we were told that we no longer needed SWAN_SYNC, as that was now baked directly into ACEMD. Perhaps that was done through a configuration mechanism in the WU generation process that got missed this time around?
____________
Thanks - Steve
ID: 26750 | Rating: 0 | rate: / Reply Quote | |
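For reference, SWAN_SYNC was never a BOINC preference; it was an ordinary environment variable that ACEMD read at startup. A sketch of how it used to be applied, assuming current builds still look for it at all (this thread suggests they may not):

```
# Linux: export it in the environment BOINC starts from
# (e.g. /etc/environment or the BOINC init script), then restart the client
export SWAN_SYNC=0

# Windows: set it machine-wide from a command prompt,
# then restart BOINC so acemd inherits it:
#   setx SWAN_SYNC 0
```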
Am I the only one who aborts every "PAOLA_3EKO" task I get?
ID: 26756 | Rating: 0 | rate: / Reply Quote | |
"Am I the only one who aborts every "PAOLA_3EKO" task I get?"

You really shouldn't do that. Remember that they rely on you and other volunteers to do the crunching for their research. If everyone aborted certain kinds of tasks, they'd never get any research done. If you're concerned about low utilisation, I suggest using a custom app_info.xml; I posted about it a few posts ago in this thread.
ID: 26760 | Rating: 0 | rate: / Reply Quote | |
Clearly there is not an easy fix, or it would have been done by now. Apparently even changing the points award is not something to be taken lightly once a batch is released, so let's see what we can do to get the complete stream finished. Who knows, maybe this is some truly awesome data that will help Paola advance her research by leaps and bounds.
ID: 26761 | Rating: 0 | rate: / Reply Quote | |
"Can anyone who is running these and getting good utilization please post up the rig specs so we can see what DOES work well?"

I'm curious about it as well. However, I don't expect any answer to this question, because these tasks' GPU utilization is low even on PCIe 3.0 systems.
ID: 26764 | Rating: 0 | rate: / Reply Quote | |
"Can anyone who is running these and getting good utilization please post up the rig specs so we can see what DOES work well?"

http://www.gpugrid.net/forum_thread.php?id=3116&nowrap=true#26657

Nate's earlier post shows a couple of good runtimes; maybe he can dig out the rig specs for us... Nate?
____________
Thanks - Steve
ID: 26765 | Rating: 0 | rate: / Reply Quote | |
"Nate's earlier post shows a couple of good runtimes; maybe he can dig out the rig specs for us... Nate?"

As Nate said, these workunits have a very high variation in their runtimes. My first one completed in 22h 19m. 2nd: 22h 2m. 3rd: 18h 31m. 4th: 14h 14m. 5th: 11h 39m. 6th: 12h 2m. 7th: 12h 25m. 8th: 12h 4m. I've found a very good rig in the toplist: the shortest runtime for a PAOLA_3EKO_8LIGANDS on that host is 7h 34m. It's a Linux system with a Core i7-2600K overclocked to 4.6GHz (according to my estimation) with two GTX 680s (I bet these are overclocked too). But even on this system the PAOLA_3EKO_8LIGANDS use less CPU time than GPU time (unlike all other workunits), so I guess even this system could have shorter runtimes if the "SWAN_SYNC=0" setting had been applied to these workunits.
ID: 26768 | Rating: 0 | rate: / Reply Quote | |
Runtime on new 660Ti, power target set to 105% (One sample): | |
ID: 26769 | Rating: 0 | rate: / Reply Quote | |
The problem with looking solely at runtimes is that the number you're seeing is only the time taken for that task to complete - it doesn't say how many tasks were running simultaneously. So anyone using a custom app_info.xml and running 2 or more tasks at once might be doubling his points per second and the runtimes would look the same. | |
ID: 26770 | Rating: 0 | rate: / Reply Quote | |
Oh, and my runtime for Paola tasks is on average 13.33 hours on a stock Gigabyte GTX 670. The variation isn't that great in mine - about half an hour either way. But I haven't run GPUgrid for a few days so I wouldn't know if the newer tasks have different runtimes. | |
ID: 26771 | Rating: 0 | rate: / Reply Quote | |
I noticed almost 2 weeks ago, when this all started, that others were aborting these tasks or getting a lot of computation errors; that means the rest of us have to pick up the slack for those who refuse to do the work.
ID: 26773 | Rating: 0 | rate: / Reply Quote | |
Do you get 9x% GPU load alongside that memory controller load? If not, it's normal for the memory controller to have less to do when the GPU load is lower too. Just a suggestion ^^
ID: 26775 | Rating: 0 | rate: / Reply Quote | |
"I noticed almost 2 weeks ago, when this all started, that others were aborting these tasks or getting a lot of computation errors; that means the rest of us have to pick up the slack for those who refuse to do the work."

GPU-Z is better in my opinion (it's also free), because it displays those readings and plots graphs of them in real time as well. And you can save the data to a log file.
ID: 26776 | Rating: 0 | rate: / Reply Quote | |
"GPU-Z is better in my opinion (it's also free), because it displays those readings and plots graphs of them in real time as well. And you can save the data to a log file."

It is a very good program and I've been familiar with it for years, but GPU Shark has a much smaller footprint and uses fewer resources; I leave it running 24/7 on all 4 of my computers, and it gives real-time info on up to 4 video cards at the same time when in advanced mode. I wasn't implying that everyone should use it (sorry for the misunderstanding), I was just letting folks know how I was monitoring my video cards.
ID: 26777 | Rating: 0 | rate: / Reply Quote | |
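An aside for Linux hosts, where neither GPU-Z nor GPU Shark is available: roughly the same load and memory-controller readings can be pulled straight from the driver with nvidia-smi (the CSV query flags only exist in newer drivers):

```
# print GPU and memory-controller utilization every 5 seconds
nvidia-smi -q -d UTILIZATION -l 5

# newer drivers can also log selected fields as CSV:
nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory,memory.used \
           --format=csv -l 5 >> gpu_load.csv
```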
3EKO_19_4-PAOLA_3EKO_8LIGANDS-3-100-RND3778 | |
ID: 26783 | Rating: 0 | rate: / Reply Quote | |
"3EKO_19_4-PAOLA_3EKO_8LIGANDS-3-100-RND3778"

You should abort it immediately. It has been resent to another host already, which has a GTX 680 and will probably return the result much sooner than 57 hours. This is not a reasonable run time at all.
ID: 26784 | Rating: 0 | rate: / Reply Quote | |
I just got a new task I've never seen before; it's PAOLA_2UY5 and it's doing the exact same thing as the other PAOLA tasks: 30% to 50% GPU usage. Is this the way of all new WUs to come? There are going to be lots of grumpy folks with older cards. It's looking like close to 30 hours on my GTX560Ti; who the heck is writing these things?
ID: 26788 | Rating: 0 | rate: / Reply Quote | |
There are more ligands in the queue, it seems; I'm getting almost nothing but these WUs :/ I don't want 36h of computing on one WU... Thanks to CUDA 3.1 I never got an error on these *crosses fingers*, and they still only pay 50% more credit than the short queue!!! WTF.
ID: 26794 | Rating: 0 | rate: / Reply Quote | |
Hello everyone, sorry I haven't posted in a while.
ID: 26797 | Rating: 0 | rate: / Reply Quote | |
For the last couple of posters - have you tried using my modified app_info.xml ? It won't make the tasks any faster (in fact it might slow things down slightly), but at least you'll be doing two at once so you'll be getting almost twice the points per unit time and so the performance hit isn't as bad. | |
ID: 26799 | Rating: 0 | rate: / Reply Quote | |
"Have you tried using my modified app_info.xml?"

That's a tough prospect, as we can't count on getting only PAOLA tasks, and I believe it will be counterproductive if a NATE WU gets doubled up. That being said, it looks like I may have an opportunity when I get home today, but it depends on how ambitious I am, because the 2 PAOLAs I have are on a 2-card rig, so I'm going to pull a card to make this work as cleanly as possible. If I'm going that far, I am also going to swap which slot the remaining card is in. If I can get this done and working correctly I will think about aborting the NATEs and running PAOLA exclusively.

Side note: the card I'm pulling is a GTX480 and I'm thinking about decommissioning it; anyone interested can send me a PM.
____________
Thanks - Steve
ID: 26801 | Rating: 0 | rate: / Reply Quote | |
"That's a tough prospect, as we can't count on getting only PAOLA tasks, and I believe it will be counterproductive if a NATE WU gets doubled up."

It would be counterproductive to run two Nathans, but you can work around that if you're willing to babysit your computer a bit (I know some people aren't). If you have two Paola tasks, leave the coproc count at 0.5. If you get a Nathan task, exit BOINC, modify the coproc count to 1, then start up BOINC again. If you have a Paola and a Nathan... er, you're out of luck on one GPU. But as you have two GPUs, and hence a four-task limit on that host, you might be able to work something out.
ID: 26806 | Rating: 0 | rate: / Reply Quote | |
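For anyone wanting to try the two-at-once trick being discussed: it works through BOINC's anonymous-platform mechanism, i.e. an app_info.xml in the project folder with the GPU count set to 0.5. A stripped-down sketch only; the app name, executable name and version number below are placeholders that must match the files already in projects/www.gpugrid.net, and a malformed app_info.xml can wipe queued tasks, so back everything up first:

```
<app_info>
  <app>
    <name>acemdlong</name>            <!-- placeholder; match client_state.xml -->
  </app>
  <file_info>
    <name>acemd_long.exe</name>       <!-- placeholder executable name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>acemdlong</app_name>
    <version_num>616</version_num>    <!-- placeholder version -->
    <avg_ncpus>0.585</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>CUDA</type>
      <count>0.5</count>              <!-- 0.5 GPUs per task = 2 tasks per GPU -->
    </coproc>
    <file_ref>
      <file_name>acemd_long.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```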
That's the issue, though: I tend to get more Nathans than Paolas. Further complicating things, with 3 cards in one rig I have 6 tasks in total to watch.
ID: 26808 | Rating: 0 | rate: / Reply Quote | |
"... it looks like I may have an opportunity when I get home today, but it depends on how ambitious I am"

The deed is done... 2 at a time is taking 30 hours on a 660Ti. Currently my 670 is going to take 16 hours to do 1. Overall this is going to kill my RAC, but I'm going to try to stick with it for a while; I may even do it on my 670 just to clear the queue. Anyone from the project have an estimate on how many more we will need to finish out this run?
____________
Thanks - Steve
ID: 26819 | Rating: 0 | rate: / Reply Quote | |
"Am I the only one who aborts every "PAOLA_3EKO" task I get?"

Well then, maybe they shouldn't send out these workunits. Perhaps if everyone aborted these tasks they would get the message and fix the problems. We shouldn't have to hack our way around badly behaving workunits. We are donating resources to their project; we have a right to expect our donated resources to be used as efficiently and effectively as possible.
ID: 26829 | Rating: 0 | rate: / Reply Quote | |
voss has a good point there, though I don't advocate open rebellion. Why hasn't the project scientist responded to any of these threads? Ya know, something like "We're working on rectifying the situation", or letting us know why they haven't pulled them from the hopper? I'm starting to wonder if this might not be deliberate, because they're getting overwhelmed by the new video cards.
ID: 26830 | Rating: 0 | rate: / Reply Quote | |
ATTENTION GPUGRID STAFF: | |
ID: 26831 | Rating: 0 | rate: / Reply Quote | |
I'm running a long PAOLA_3EKO_8LIGANDS task on a now fairly old GTX470 | |
ID: 26832 | Rating: 0 | rate: / Reply Quote | |
I feel guilty when I abort these work units; however, I can run 2 of these per day, or 6 to 8 others. If I do the latter, my RAC doesn't plummet and my goodwill is not lessened.
ID: 26867 | Rating: 0 | rate: / Reply Quote | |
"I feel guilty when I abort these work units; however, I can run 2 of these per day, or 6 to 8 others. If I do the latter, my RAC doesn't plummet and my goodwill is not lessened."

Something's not right here. You shouldn't be spending $100 a month for an RAC of just 300,000. I live in Malta, where the electricity is at least twice as expensive, and I still manage a global RAC of 800,000 on €30 a month (running cost of the computer alone) with a single GTX 670 and an i7-3770K (Ivy Bridge) if I leave it on 24/7. I think the problem is that you're using older-generation cards. The Nvidia 6-series cards give about twice the performance per watt compared to the 5- or 4-series ones, so you should switch over completely to 6-series cards. Consider it an investment: within 6 months they'd have paid for themselves in electricity costs. Same argument used for switching incandescent light bulbs for energy-saving ones :)
ID: 26871 | Rating: 0 | rate: / Reply Quote | |
This IS painful. | |
ID: 26873 | Rating: 0 | rate: / Reply Quote | |
I think you should not feel guilty. It is YOUR hardware and you should choose how it is used. Aborting some work units is really no worse than if somebody decided to crunch on a different project for a while, or there was a power outage, or the server ran out of disk space, or the DNS servers got hacked and the internet didn't work, or.... (hopefully you see my point) | |
ID: 26874 | Rating: 0 | rate: / Reply Quote | |
I felt I should chime in, since I am seeing nowhere near the times for this project that others are seeing. Granted, the WUs are all jacked up. I'm seeing a GPU load at 55%, memory load 15%, at 45% power consumption.
ID: 26914 | Rating: 0 | rate: / Reply Quote | |
Still getting these 3-PAOLA_3EKO_8LIGANDS loooooong runs... :( | |
ID: 26956 | Rating: 0 | rate: / Reply Quote | |
All you can do is increase GPU utilization by about 10% by running fewer CPU tasks (say 4 from 8 threads - see below). That would improve your task performance by around 28%. In terms of overall Boinc Credits it's worth it, but it depends on where your priorities are. Other GPU tasks don't require this, so I would only do it if I was getting lots of these tasks, or if I spotted one, and can change settings back later. | |
ID: 26958 | Rating: 0 | rate: / Reply Quote | |
"All you can do is increase GPU utilization by about 10% by running fewer CPU tasks (say 4 from 8 threads - see below). That would improve your task performance by around 28%."

Seems to be right. By running fewer other CPU tasks, GPU utilization for these long-run WUs increases to around 48-50%. Does this mean that there is a CPU bottleneck in supplying actual work to the GPU for these particular WUs? I see those CPU tasks are single-threaded. Just wondering: if my CPU were, for example, twice as fast in single-threaded work, would GPU utilization improve?
ID: 26961 | Rating: 0 | rate: / Reply Quote | |
Yes, that seems to be true. I tried underclocking one of my rigs from 2.67GHz to 1.6GHz and the GPU usage dropped from ~38% to ~25%, IIRC.
ID: 26965 | Rating: 0 | rate: / Reply Quote | |
Clearly it's partially CPU dependent, but another bottleneck factor is at play too; otherwise, if we stopped running CPU tasks altogether, the GPU utilization would rise to 99% on XP systems.
ID: 26973 | Rating: 0 | rate: / Reply Quote | |
"Anyway, it's down to the researchers to improve, if they think it's worthwhile for the project. All we can do is optimize our systems for the app/tasks that are there to run, if we want to."

Agree to disagree. I think it is up to the researchers to optimize the tasks for the hardware that is available to them (the volunteers' systems), while there are, and should be, small things we can do to squeeze out that last 5-10%. Since it seems that the majority of current users are having the same issue with only these tasks, there must be some major difference either in the actual work being done (which could explain why it is much more CPU dependent) or in the coding, something that was either overlooked (accidental) or not able to be worked around (if the work being done does not benefit from parallelization, for instance).
____________
XtremeSystems.org - #1 Team in GPUGrid
ID: 26979 | Rating: 0 | rate: / Reply Quote | |
I'm not sure we even disagree! | |
ID: 27032 | Rating: 0 | rate: / Reply Quote | |
"Clearly it's partially CPU dependent, but another bottleneck factor is at play too; otherwise, if we stopped running CPU tasks altogether, the GPU utilization would rise to 99% on XP systems."

Can't add much, but when I put my GPU into a system with a lesser CPU (a Core 2 Duo at 2.13GHz rather than an i7-2600), the GPU utilization dropped to 37% (when not crunching with the CPU). Both systems were DDR3 dual channel, and I used an SSD with the Core 2 Duo to eliminate any possible I/O bottlenecks. I noted that the task was >600MB in size. The task returned in 22h for full bonus credit, but took twice as long as some tasks for the same credit.
____________
FAQ's HOW TO: - Opt out of Beta Tests - Ask for Help
ID: 27040 | Rating: 0 | rate: / Reply Quote | |
Name 3EKO_15_2-PAOLA_3EKO_8LIGANDS-23-100-RND9894_2 | |
ID: 27095 | Rating: 0 | rate: / Reply Quote | |
These are now stopped. | |
ID: 27100 | Rating: 0 | rate: / Reply Quote | |
"These are now stopped."

Ya, woohoooo, way to go, whooot whooot, hallelujah and praise be. Oh, sorry, got a little carried away. Seriously though, that's good news.
ID: 27106 | Rating: 0 | rate: / Reply Quote | |
Message boards : Number crunching : PAOLA_3EKO_8LIGANDS very low GPU load