
Message boards : Graphics cards (GPUs) : Lowering VDRAM frequency saves energy

Author Message
Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26012 - Posted: 29 Jun 2012 | 17:13:58 UTC

Hello, I've tried this and noticed that when I lower the DRAM frequency on my two GTX 460s, I save energy.

But how low should I go?

Shaders are at 1500 MHz and DRAM at 1400 MHz.

Can I keep lowering this frequency, or will I get errors?

Will this affect the completion time of the WUs, or is that

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26014 - Posted: 29 Jun 2012 | 17:31:25 UTC - in response to Message 26012.

You will have to work that out for yourself. Different cards will be affected in different ways (in terms of performance); low- to mid-range cards may be less affected. Lowering the GPU memory clock will slow the task down a bit, and the more you lower it the more it will slow, but how much will vary by task type and app (3.1/4.2). You may also reach a point where the GPU becomes unstable, or you may find that performance suddenly drops off (bad balance and/or recoverable errors). Lowering by up to ~20% doesn't usually cause a big performance drop, but beyond that any further power savings would probably become insignificant.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26015 - Posted: 29 Jun 2012 | 17:39:23 UTC

OK, I understand there may be some surprises I won't expect. OK, I'll try it.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26036 - Posted: 30 Jun 2012 | 11:39:23 UTC - in response to Message 26015.

OK, I understand there may be some surprises I won't expect. OK, I'll try it.

Well.. you should be able to go quite low before funny things start to happen. However, GPU-Grid performance will suffer from reduced memory speed. So if you go too low you actually lower your overall power efficiency (the GPU gets starved for work but can't go to sleep either).
Back when I was running GPU-Grid I had the memory slightly overclocked.

MrS
____________
Scanning for our furry friends since Jan 2002

Paul Raney
Send message
Joined: 26 Dec 10
Posts: 115
Credit: 416,576,946
RAC: 0
Level
Gln
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 26040 - Posted: 30 Jun 2012 | 12:13:57 UTC - in response to Message 26015.

Please post your results as you test. We could all learn something here. When overclocking, the conventional wisdom is that higher memory clocks don't help GPUGrid much. The converse may also be true.

____________
Thx - Paul

Note: Please don't use driver version 295 or 296! Recommended versions are 266 - 285.

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26046 - Posted: 30 Jun 2012 | 17:59:14 UTC - in response to Message 26040.
Last modified: 30 Jun 2012 | 18:33:50 UTC

Here is what I have tried so far:

First of all, it works: there is a small power saving, but it's quite low per GPU, about 25 watts for each card I'm using.

Settings for one GTX 460:

Voltage at 937 mV
Core frequency at 750 MHz
VDRAM at 1254 MHz (can't go lower because of a hardware restriction)
Shaders, of course, at 2 × 750 MHz = 1500 MHz
As I lowered the VDRAM frequency I had to raise the voltage by about 12 mV.
Using the "Nvidia Inspector" software.

That's all I can tell at this time.

This was on a Gigabyte GTX 460 OC1G-I 1024MB.

Will that help? :)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26051 - Posted: 30 Jun 2012 | 21:05:05 UTC - in response to Message 26046.

This will help, but you've got to measure performance as well. Average over at least 10 WUs (calculate something like credits/day). Then return to stock, measure performance again, and note the absolute power draw. For example, if you save 5% on electricity but crunch 10% slower, it's not worth it.

BTW: increasing GPU voltage increases the power consumption of the biggest consumer, the GPU core itself. I wouldn't do this just to clock the memory lower :p
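The comparison described above can be sketched in a few lines. This is just an illustration of the method, not something anyone in the thread actually ran; the credit and runtime figures below are placeholders, and the 5%-power/10%-slower trade-off is the hypothetical from the post.

```python
# Sketch of MrS's suggested comparison. All numbers are placeholders,
# not real measurements; average at least ~10 WUs per clock setting.

def credits_per_day(credits, runtimes_s):
    """Average credit rate: total credits over total runtime, scaled to a day."""
    return sum(credits) / sum(runtimes_s) * 86400

stock = credits_per_day([6000] * 10, [22000] * 10)   # stock memory clock
down  = credits_per_day([6000] * 10, [23300] * 10)   # underclocked memory

perf_ratio  = down / stock   # how much work you still get done
power_ratio = 0.95           # e.g. 5% less power, measured at the wall

# Work per watt relative to stock; below 1.0 the underclock is a net loss.
print(f"relative efficiency: {perf_ratio / power_ratio:.3f}")
```

If the printed ratio is below 1.0, you are crunching proportionally slower than you are saving, which is exactly the "5% saved but 10% slower" failure case.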

MrS
____________
Scanning for our furry friends since Jan 2002

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26052 - Posted: 30 Jun 2012 | 21:22:05 UTC - in response to Message 26051.
Last modified: 30 Jun 2012 | 21:25:57 UTC

You must be right. Overall, the 25 watts means a saving of 15.625 percent, calculated against the official power consumption figure of 160 watts.

But I need to crunch more WUs to make an average performance comparison.

That will take some time. Thanks for the advice. Keep crunching...

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26054 - Posted: 30 Jun 2012 | 23:54:44 UTC - in response to Message 26052.
Last modified: 30 Jun 2012 | 23:55:03 UTC

calculated against the official power consumption figure of 160 watts.

That's the TDP, not what the card is actually using. It's probably using more like 120W.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26063 - Posted: 1 Jul 2012 | 4:36:46 UTC - in response to Message 26054.

I'll probably never check this out; even making a comparison table in Excel is not my strength.

Here in Germany we say: never trust a statistic you haven't faked yourself. ;)

So what's the point of doing it? Anyone can look at the results. I'll never get 100% authentic data about it.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26066 - Posted: 1 Jul 2012 | 9:01:41 UTC - in response to Message 26063.

Taking a look at your PCs: you might want to merge similar ones that appear multiple times. It makes it easier to look at the right tasks.

And then all we really need is the following data: WU run time plus credits granted. We can work out the relevant numbers for you, no problem. However, if you can't measure the real power consumption, it will be difficult to judge whether downclocking the VRAM was worth it or not. If you could borrow a power meter (~15€; they've become quite popular in recent years) we could get hard numbers.

MrS
____________
Scanning for our furry friends since Jan 2002

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26068 - Posted: 1 Jul 2012 | 9:11:35 UTC - in response to Message 26066.
Last modified: 1 Jul 2012 | 9:13:09 UTC

I have two of them. I also have a second PC, but it can only take one or two PCIe graphics cards.

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26073 - Posted: 1 Jul 2012 | 12:45:23 UTC
Last modified: 1 Jul 2012 | 12:53:10 UTC

I'm stopping the experiment; on my second PC there was only a power difference of 11 watts from lowering the VDRAM frequency. Pretty bad, I think.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26075 - Posted: 1 Jul 2012 | 13:01:07 UTC - in response to Message 26073.

It's normal that the DRAM frequency doesn't impact overall power consumption much. Otherwise we couldn't get away without actively cooling them :)

At Milkyway one can go to really low frequencies, about 1/3 the stock frequency. There it really starts to matter, saving you about 15 - 30 W per card.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26093 - Posted: 2 Jul 2012 | 11:54:16 UTC - in response to Message 26075.
Last modified: 2 Jul 2012 | 20:55:42 UTC

On a GTX470 I can save 20W by reducing the GDDR from 1700 to 1300MHz.
I don't know what performance difference it will make, but I'm 25% into a Nathan task so I will work it out later.

- It turns out that it's 6% slower.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26096 - Posted: 2 Jul 2012 | 12:10:51 UTC - in response to Message 26093.

Curious: back on my system with three GPUs, I'm saving more power than ever. It's again 25 watts, or slightly more, that I save per GTX 460 GPU!?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26102 - Posted: 2 Jul 2012 | 19:27:38 UTC - in response to Message 26096.
Last modified: 2 Jul 2012 | 19:28:18 UTC

How are you measuring these 25W/GPU, Rantanplan?

BTW: if GPU performance suffers from the reduced memory clock, the GPU will also consume less power ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26103 - Posted: 2 Jul 2012 | 19:30:30 UTC - in response to Message 26102.
Last modified: 2 Jul 2012 | 19:57:59 UTC

By measuring before and after downclocking the two GTX 460s at full load.

Before: ~500 watts. Now: ~450 watts at "full" load on all GPUs.

I could stop crunching, set the clocks back to the stock VDRAM frequency, and try it. Back soon.

Edit:

Can't tell right now; two ACEMD 2 tasks are running, so I must wait for Long runs.

In three hours I'll have three Long Runs going, and then I can tell whether it's my mistake or real.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26109 - Posted: 2 Jul 2012 | 21:16:30 UTC - in response to Message 26103.
Last modified: 2 Jul 2012 | 21:23:24 UTC

Just for reference:
My 20W power saving (from underclocking my GTX470's memory) came at the cost of a 6.05% task performance loss. Power usage was measured at the wall socket for the Main System Unit (MSU), running the same tasks (CPU and GPU) under the same settings, only a few seconds after reading it at the normal frequency.

Without underclocking the GDDR:
3 NATHAN_RPS tasks at 22,046.59, 22,020.22 and 22,031.58 seconds.

After underclocking the GDDR:
1 NATHAN_RPS task took 23,066 seconds (but it had already completed 25% of the task at stock clocks). So basically the remaining 75% took an extra ~1,000 seconds, which would mean a full task takes about 1,333 seconds longer: 23,333/22,000 = 1.06, or 6% longer.

My MSU used to draw 330W and now draws 310W, so it runs on 310/330 of the power. It just so happens that 330/310 = 1.0645, i.e. 6.45%, which basically means the system isn't any more efficient in terms of tasks completed per watt; it's just about the same.

If you measured this really accurately and tuned the GPU, you might become about 1% more efficient per watt, but that's about it.

So, generally speaking, don't underclock here expecting to be more efficient; you're just doing less for less electricity. If you have multiple GPUs then that 0.4% improvement would increase slightly, but even then you're talking about 2 or 3% more efficiency, and all of this ignores the GPU purchase cost!

I completely expected these results, and I know that overclocking gives the same task performance per watt: you use more energy but crunch faster (when overclocking without increasing the voltage, up to ~15% OC). Even with a slight voltage increase and going up to ~20%, the performance per watt is fairly similar, so long as you measure at the wall and use one GPU.

BTW, my MSU uses only ~78W at idle.
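For anyone wanting to reproduce the arithmetic in the post above, here is a small sketch using the runtimes and wall-socket readings quoted there. Only the input numbers come from the post; the extrapolation logic is a straightforward restatement of it.

```python
# Re-deriving skgiven's efficiency numbers from the figures in the post.
stock_runtimes = [22046.59, 22020.22, 22031.58]      # seconds, stock GDDR
stock_avg = sum(stock_runtimes) / len(stock_runtimes)  # ~22,033 s

# The underclocked task took 23,066 s total, but its first 25% ran at
# stock clocks, so only the remaining 75% reflects the lower memory speed.
mixed_total = 23066.0
extra_75  = mixed_total - stock_avg          # extra time over the last 75%
full_task = stock_avg + extra_75 / 0.75      # projected fully-underclocked run

perf_loss   = full_task / stock_avg - 1      # ~6% slower, as reported
power_ratio = 310 / 330                      # ~6% less power at the wall

# Tasks completed per watt, relative to stock: essentially unchanged.
efficiency = (stock_avg / full_task) / power_ratio
```

The small rounding differences (6.25% here versus the post's 6%) come from extrapolating the 75% segment exactly rather than rounding the extra time to 1,000 seconds; the conclusion is the same.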
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rantanplan
Send message
Joined: 22 Jul 11
Posts: 166
Credit: 138,629,987
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26114 - Posted: 2 Jul 2012 | 22:39:54 UTC
Last modified: 2 Jul 2012 | 22:44:52 UTC

It must be the computer's fault :o/

Unbelievable: now the power consumption only went down from 475 watts to 465 watts, which is to say it saves practically nothing. I don't know what happened. :S

Before, it had a total power consumption of 500 watts. :S

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26143 - Posted: 3 Jul 2012 | 18:21:10 UTC

Thanks SK, I think that's as much analysis as we need :)

@Rantanplan: power consumption obviously depends on load. Some variation in background tasks might cause noticeable differences if the average GPU utilization changed. Different GPUGrid WUs achieve different GPU utilization levels, so this will affect power draw as well. And there's always temperature: if the ambient temperature varied by 10°C between measurements, leakage currents will change measurably.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 98
Credit: 385,652,461
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27391 - Posted: 23 Nov 2012 | 19:28:40 UTC
Last modified: 23 Nov 2012 | 19:29:46 UTC

I OC'ed my card from 850 to 1256 MHz (core) and 2500 to 2902 MHz (memory) and shaved about an hour off the normal runs (from 18K seconds to no more than 14K seconds). On the long runs the gain is even bigger (probably the same percentage improvement, though). I doubt the whole performance gain came just from OC'ing the core. Then again, computers work in mysterious ways.

97-98% Utilization, fed by a single HT thread (3610QM i7 CPU).

GPU: nVidia 660M.

I think I hit a wall at 1260 MHz on the core (which is why I left it at 1256), so now I'm going to bump the memory by a few MHz every day and see if I get anything out of it.

BTW, does the Memory Control Unit (MCU) have anything to do with this?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27415 - Posted: 25 Nov 2012 | 21:31:00 UTC - in response to Message 27391.

With linear scaling of GPU-Grid performance against core clock, one would expect a runtime reduction from 18k to about 12.2k seconds when increasing the core clock from 850 to 1256 MHz. Your observed ~14k seconds is a reasonable approximation of that, although slightly worse than the linear prediction. I'd postulate you'd see the same reduction to 14k seconds at 2500 MHz memory just as well as at 2900 MHz.
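The linear-scaling estimate above is simple enough to sketch; this is just the inverse-proportionality assumption from the post applied to the numbers Chilean reported, not a model anyone validated.

```python
# Runtime estimate under the assumption that runtime scales inversely
# with core clock (the linear-scaling approximation from the post).

def scaled_runtime(runtime_s, old_clock_mhz, new_clock_mhz):
    """Project a task's runtime at a new core clock, assuming inverse scaling."""
    return runtime_s * old_clock_mhz / new_clock_mhz

# Chilean's card: 18,000 s at 850 MHz, overclocked to 1256 MHz.
est = scaled_runtime(18_000, 850, 1256)   # ~12.2k s predicted
```

The observed ~14k seconds falling short of the ~12.2k prediction is consistent with the scaling being slightly sub-linear, e.g. if memory bandwidth or the feeding CPU thread becomes a partial bottleneck at the higher core clock.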

MrS
____________
Scanning for our furry friends since Jan 2002
