Message boards : Number crunching : Elapsed Time vs CPU time
Why is there such a big difference between Elapsed Time and CPU time?

Elapsed Time: 27,489.40 seconds
ID: 19639
> Elapsed Time: 27,489.40 seconds

Yes. The app mainly runs on the GPU; it only needs a few cycles on the CPU, and only those are counted in the CPU time figure.
____________
Greetings from Saenger
For questions about BOINC look in the BOINC-Wiki
ID: 19642
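To make the distinction above concrete, here is a minimal Python sketch (not GPUGRID code) contrasting the two clocks: elapsed (wall-clock) time keeps running while the process waits on an external device such as the GPU, whereas process CPU time only advances while the CPU is actually executing the program.

```python
import time

wall_start = time.perf_counter()   # wall clock -> "Elapsed Time"
cpu_start = time.process_time()    # CPU cycles of this process -> "CPU time"

time.sleep(5)       # stands in for waiting on the GPU: the wall clock runs, the CPU is idle
sum(range(10**7))   # stands in for the small CPU-side share of the work

print(f"Elapsed: {time.perf_counter() - wall_start:.2f} s")
print(f"CPU:     {time.process_time() - cpu_start:.2f} s")
# Typical output: Elapsed ~5.2 s, CPU ~0.2 s -- a large gap, just like the task above.
```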
> Elapsed Time: 27,489.40 seconds

So this measure is effectively the ratio of CPU time to GPU time. In that case I would say the CPU usage is still very high. In a perfect world the whole WU would be downloaded to the GPU (if there is enough local memory), run on the board until completion with no CPU interference, and when finished be uploaded back to the CPU so a new WU can be downloaded. That should require a few seconds of CPU time, certainly not more than one hour (3,600 seconds). These values of CPU usage mean that the CPU is doing real work while the GPU crunches. Maybe data is exchanged and written back to the HDD as the crunching goes on, or something else.
ID: 19658
> So this measure is effectively the ratio of CPU time to GPU time. In that case I would say the CPU usage is still very high.

The GF110 is practically a 'primitive CPU' containing 16 cores with 32 threads in each core (16*32 = 512 CUDA cores in nVidia terminology). I think supporting 512 GPU cores with even a whole CPU core to achieve maximum performance is a rewarding sacrifice.

> In a perfect world the whole WU would be downloaded to the GPU (if there is enough local memory), run on the board until completion with no CPU interference, and when finished be uploaded back to the CPU so a new WU can be downloaded. That should require a few seconds of CPU time, certainly not more than one hour (3,600 seconds).

In the real world the GPU is still a coprocessor (and quite a big one, as I mentioned above), which is why it cannot do everything on its own, whether it is calculating a 2D projection of a 3D (game) scene or doing some 3D scientific calculation for GPUGRID. Loading the data and the code onto the GPU takes only a few seconds, just like unloading it, so that part wouldn't take an hour.

> These values of CPU usage mean that the CPU is doing real work while the GPU crunches. Maybe data is exchanged and written back to the HDD as the crunching goes on, or something else.

That's correct. As far as I know, some double-precision calculation is needed for crunching these WUs, and it is done by the CPU (because double precision on the GeForce GTX cards is artificially slowed down by nVidia).
ID: 19659
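As a rough illustration of that division of labour (a sketch only; the function names below are made up and this is not the actual GPUGRID application), the host process loops over simulation steps, launching GPU work for the bulk of the computation while keeping a small per-step share, such as the double-precision part, on the CPU. That small share is what accumulates into the reported CPU time.

```python
import time

def launch_gpu_kernel():
    # Stand-in for a GPU kernel launch: the host mostly waits, so elapsed time
    # passes but almost no CPU time is charged to the process.
    time.sleep(0.01)

def cpu_double_precision_part():
    # Stand-in for the small share of work kept on the CPU each step;
    # this is what shows up as CPU time.
    return sum(1.0 / i for i in range(1, 5_000))

def crunch(steps=500):
    wall0, cpu0 = time.perf_counter(), time.process_time()
    for _ in range(steps):
        launch_gpu_kernel()
        cpu_double_precision_part()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    print(f"Elapsed {wall:.1f} s, CPU {cpu:.1f} s (~{100 * cpu / wall:.0f}% of one core)")

crunch()
```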
CPU time is the time used on one CPU core/thread, so if you have an 8-thread CPU, the time spent using the entire CPU would be that figure divided by 8. If that GPU is in an 8-core system, it works out to about 9 minutes of whole-CPU time per 7.6 hours of GPU time. There is no point in looking at the GPU as a unit and the CPU as separate cores; it would be no better than saying that each of the GPU's 240 CUDA cores uses 2.25 seconds of CPU time.
ID: 19661
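Put as a quick calculation (the elapsed time is the figure quoted earlier in the thread; the CPU-time value below is only a placeholder, since the original post's CPU-time figure is not shown above):

```python
elapsed_s = 27_489.40   # wall-clock time of the task, from this thread (~7.6 h)
cpu_time_s = 4_500.0    # hypothetical single-thread CPU time, for illustration only
threads = 8             # threads in the host CPU, as in the post above

share_of_one_thread = cpu_time_s / elapsed_s             # load placed on one core/thread
share_of_whole_cpu = cpu_time_s / (elapsed_s * threads)  # load placed on the whole CPU

print(f"{share_of_one_thread:.1%} of one thread")   # ~16.4%
print(f"{share_of_whole_cpu:.1%} of the whole CPU") # ~2.0%
```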
Thanks for your replies. All clear now. I hope the GTX 580, which has double precision, will do better, but I agree that the CPU contribution on a 12-thread CPU remains minimal.
ID: 19663