
Message boards : Number crunching : I thought CUDA was more efficient

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 10914 - Posted: 29 Jun 2009 | 12:54:02 UTC

Hi guys, it's me again with more questions.
I have set my i7 920 to use only 6 cores, and when running Einstein@Home, 6 WUs are processed at the same time with a total CPU usage of 75%. It takes around 6 hours and then 6 jobs are done.

Now with GPUGRID and the use of CUDA, only one WU can run at a time on my GTX 285 graphics card. CPU load is 5%, so 95% of the CPU has nothing to do while the PC is otherwise idle (when I sleep or am at work). A WU takes around 2 hours of CPU time (as reported by BOINC Manager) but also around 6 hours in real time (between start and finish). So in my opinion this is not efficient.

Another thing I noticed is that the graphics card becomes very warm. A car thermometer hanging in front of the outflow of the card's fan shows 40-43 degrees Celsius. When touching the card (at the back of the PC, beneath the DVI connectors) it feels so hot that I cannot touch it for very long. This concerns me, as I fear for the lifetime of the card. With the current nVidia software I cannot find its core temperature.

So, all in all, can someone explain why crunching WUs with CUDA is better than crunching with CPU cores?
The only thing I can think of is that it keeps the CPU free for other tasks. But I do not like the heat of the card at all; moreover, the room becomes very warm during the summer.

As always, thanks for the input.

____________
Greetings from TJ

Hydropower
Joined: 3 Apr 09
Posts: 70
Credit: 6,003,024
RAC: 0
Message 10915 - Posted: 29 Jun 2009 | 16:16:43 UTC - in response to Message 10914.

> Now with GPUGRID and the use of CUDA, only one WU can run at a time on my GTX 285 graphics card.

Correct. CUDA runs on your GPU; you have one GPU, so only one task runs at a time.

> CPU load is 5%, so 95% of the CPU has nothing to do while the PC is otherwise idle (when I sleep or am at work).

You are idle :). Your CPU feeds your GPU and has plenty of power left over for other things as well; you are just not using it for those other things.

> A WU takes around 2 hours of CPU time (as reported by BOINC Manager)

Forget the CPU time figure; just watch the wall clock.

> but also around 6 hours in real time (between start and finish). So in my opinion this is not efficient.

Your GPU takes around 6 hours, which is an excellent time.

> Another thing I noticed is that the graphics card becomes very warm. A car thermometer hanging in front of the outflow of the card's fan shows 40-43 degrees Celsius.

That is not warm. My cards run at 70 degrees Celsius with high fan RPM. They are designed to survive up to 100 degrees or so, and 80 is considered a reasonable limit. The temperature of your card is very good.

> it feels so hot that I cannot touch it for very long. This concerns me, as I fear for the lifetime of the card. With the current nVidia software I cannot find its core temperature.

Try downloading GPU-Z, or EVGA Precision if you have an EVGA card. Google for sites that host them.

> So, all in all, can someone explain why crunching WUs with CUDA is better than crunching with CPU cores?

If you ran the same calculation your graphics card does in 6 hours on a CPU, it would probably take 120 hours or longer, and that is a conservative estimate.

> moreover, the room becomes very warm during the summer.

Open a window and run a fan; it does help. (Not being sarcastic.)


> As always, thanks for the input.

You're welcome.
____________
Join team Bletchley Park, the innovators.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10922 - Posted: 29 Jun 2009 | 21:15:34 UTC

Just set your i7 to 8 tasks, should be fine ;)

And regarding your thoughts on efficiency: you're really comparing strawberries and oranges here. If you can eat a strawberry in 2 minutes and the other guy can eat an orange in 6 minutes, who's more efficient?

Regarding the temperature: take a look here.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 10932 - Posted: 30 Jun 2009 | 13:40:55 UTC

@Hydropower: Thanks for your answers; this helps me a lot. I have a window open and a fan running during the daytime. My computers are in the attic, which has only one window that can open and is in full sunlight almost all day. And as you know it is quite warm in the Netherlands at the moment, and it becomes even warmer inland as the week goes by.

The machine with the GTX 285 card is new and not my own, so I must be a bit careful with it; I really don't want to overheat it.

@ETA: Thanks for your input as well. I'll leave 6 cores on; they are running Einstein, and I need the rest for other programs.

> And regarding your thoughts on efficiency: you're really comparing strawberries and oranges here. If you can eat a strawberry in 2 minutes and the other guy can eat an orange in 6 minutes, who's more efficient?

If you put it like that, then I am indeed comparing different things. But I was thinking of only one WU in 6 hours with CUDA versus 6 E@H WUs in approximately 6 hours (it could be 8). And the GPU becomes hotter than the CPU, while in my case the CPU has a better cooling fan than the GPU. (To me an orange in 6 minutes is more efficient than one strawberry in 2 minutes, unless it is a giant strawberry grown with the help of genetic modification.)

To compare properly we would need an application X that crunches the same work once under "normal" BOINC on the CPU and once with CUDA. Right?

____________
Greetings from TJ

popandbob
Joined: 18 Jul 07
Posts: 67
Credit: 40,277,822
RAC: 0
Message 10934 - Posted: 30 Jun 2009 | 20:02:20 UTC

For that application X, SETI@home has both a CUDA and a CPU version of the Multibeam app. Running the CUDA app is about 12x faster than the stock CPU app. However, it is a poor comparison, as the CUDA app is poorly optimised.

GPUs are theoretically 50-100x faster than a CPU. In practice it is more like 30-50x, which is still a huge speed increase (1 GPU = 30-50 CPU cores; see the sketch below).
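
As a rough illustration (plain arithmetic on the figures above, nothing more), here is what that range means for a single 6-hour GPUGRID task:

    # Express one 6-hour GPU task in CPU-core-hours, using the rough
    # 30-50x speed-up range quoted above (an estimate, not a measurement).
    gpu_hours = 6
    for speedup in (30, 50):
        print(f"{speedup}x speed-up -> ~{gpu_hours * speedup} CPU-core-hours")
    # 30x speed-up -> ~180 CPU-core-hours
    # 50x speed-up -> ~300 CPU-core-hours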
Bob

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10938 - Posted: 30 Jun 2009 | 21:32:42 UTC - in response to Message 10932.

I'm just curious: did you try 8 threads? Usually I can't feel a difference between running BOINC full blast and leaving some resources free (as long as the amount of RAM is not overloaded)... that's the joy of task priorities :)

A funny thought: if you replace the 6 Einstein WUs in your example with CPDN WUs, they'll each take weeks or months. That would be some massive strawberries :D

At Milkyway@home we can run the same WUs on CPUs and GPUs. Back in January or so my Phenom 9850 crunched 4 of them in ~20 minutes, whereas my 4870 needed ~30 s for one. That's quite impressive: a speed-up by a factor of 10 at comparable power draw, manufacturing process, die size and purchase cost.

This code is almost ideally suited for GPUs (and well optimized for both CPU and GPU), which is not the case for the majority of our algorithms. My card reaches about 150 GFlops DP there, about half of its theoretical performance. A 3.2 GHz i7 with maximum resource utilization can reach 50 GFlops DP, so at MW it may reach 25 - still an advantage of a factor of 6 compared to the best the CPU world has to offer, even though the CPU enjoys a superior manufacturing process.
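
To make the factors of 10 and 6 explicit, here is a small sketch of the arithmetic (only the rough figures quoted above, not exact measurements):

    # Throughput comparison from the rough figures above.
    # Phenom 9850: 4 WUs finished in ~20 minutes (4 cores in parallel).
    cpu_wus_per_hour = 4 / (20 / 60)            # ~12 WUs per hour
    # Radeon HD 4870: ~30 seconds per WU.
    gpu_wus_per_hour = 3600 / 30                # ~120 WUs per hour
    print(gpu_wus_per_hour / cpu_wus_per_hour)  # ~10x throughput advantage

    # Double precision: ~150 GFlops measured on the 4870 at Milkyway,
    # versus an estimated ~25 GFlops for a 3.2 GHz i7 on the same code.
    print(150 / 25)                             # ~6x advantage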

Regarding the operating temperature: it really depends on the heatsink / fan (HSF) and doesn't tell you much. You can design an HSF so poor that even a Pentium 1 runs at 80 - 100°C. What you can compare is power draw, or better still the energy consumed for a given task. However, today both GPUs and high-end CPUs are essentially power-limited: their maximum speed is dictated by the beefiest cooling system users are still willing to buy (and the noise they are willing to stand). So comparing power draw does not necessarily tell you much about the efficiency of the actual chip design.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 10952 - Posted: 2 Jul 2009 | 17:43:35 UTC - in response to Message 10938.

Hi ETA and others, thanks for all the answers and input. It is now CUDA for me.

Yes I did; I sometimes still do it at night while I am sleeping. Running Einstein on all 8 cores does not bother the RAM; it uses approx. 27% of it. But using 8 cores to crunch while rendering something with Mathcad at the same time is slow, as is letting it render in the background while using Matlab for something else simultaneously. Therefore I use a maximum of 6 cores.

Now you gave a good example of the use of the GPU, just as Bob did in Message 10934. I will try some Milkyway after the weekend, when it becomes cooler, to see for myself. I don't want to nag about the heat again, but the PC is a month old and was a bit expensive, so I am a bit careful with the graphics card. I have read somewhere that the core temperature of the GPU of a GTX 285 may go up to 190 degrees Celsius! If that were true, the 40-45 degrees on the outside of the case should be no problem at all, but yes, it's me: I am a bit scared of heat in electrical devices.

____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10954 - Posted: 2 Jul 2009 | 19:42:40 UTC - in response to Message 10952.

Currently Milkyway only runs on CPUs and, unofficially, on ATI cards. The project has been trying to make it work on NVIDIA since the beginning of the year and has been "almost there" ever since.

Regarding heat: it is reasonable to keep an eye on it, no question about that. 190°C is excessive, though; I suspect whoever wrote that meant Fahrenheit. At 190°C the chip would die almost instantly if powered up. The driver-based emergency shutdown for NVIDIA GPUs kicks in at about 125°C. For 24/7 crunching I wouldn't want to run the chip (check with e.g. GPU-Z) higher than 70°C (see here).

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 10964 - Posted: 4 Jul 2009 | 15:03:01 UTC - in response to Message 10954.

Then I think I have a problem, ETA. I have installed TThrottle from the BOINC add-on page, and it shows a GPU core temperature of 64°C while the card is doing nothing more than displaying a background on two screens and these Internet pages.

I will run GPUGRID again after the weekend, when it becomes cooler, and see what happens.

____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10991 - Posted: 6 Jul 2009 | 21:17:54 UTC - in response to Message 10964.

You can easily use GPU-Z to monitor the temperature. Current GPUs have temperature-controlled fans (otherwise no one could stand the noise). Your card will likely heat up to 80 - 90°C under load (it doesn't matter much how hot it is outside or what kind of load it is), depending on the fan profile the manufacturer set. Generally this temperature is OK for the GPUs - in occasional games. However, the manufacturer didn't expect us to run 24/7. So many people use RivaTuner to speed the fan up (to the level where the noise is still tolerable, then back off a bit). This way temperatures can mostly be kept in check (60 - 70°C).
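
For those who prefer a log over a GUI, here is a minimal sketch of a temperature watchdog. It assumes a system where NVIDIA's nvidia-smi tool is installed and supports --query-gpu (much newer drivers than those discussed in this thread); GPU-Z and RivaTuner remain the point-and-click options mentioned above:

    # Minimal GPU temperature watchdog (a sketch, not a polished tool).
    # Assumes nvidia-smi with --query-gpu support is available on the PATH.
    import subprocess
    import time

    def gpu_temperature_c() -> int:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        return int(out.strip().splitlines()[0])  # first GPU only

    while True:
        temp = gpu_temperature_c()
        if temp > 70:  # the 24/7 comfort limit suggested in this thread
            print(f"Warning: GPU core at {temp} C")
        time.sleep(60)  # poll once a minute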

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 11009 - Posted: 7 Jul 2009 | 22:54:17 UTC - in response to Message 10991.

I looked for GPU-Z and found it on several sites, but all of them triggered a warning from McAfee. Although they all stated that there is no virus or anything like that, I will not download it, as my i7 is my work PC and I absolutely cannot afford any problems or trouble with it.

I have installed TThrottle from efmer.eu (it is listed among the BOINC add-ons), and it shows all the temperatures and offers options to set limits for them. This works fine for me so far (only a few days; I am still testing it).
____________
Greetings from TJ
