Message boards : Graphics cards (GPUs) : power optimization?

ExtraTerrestrial Apes (Volunteer moderator)
Message 1812 - Posted: 27 Aug 2008 | 21:54:47 UTC

Hi guys,

Is there anyone out there with a watt-meter, or who has already tried this? Here's what I have in mind:

- calculations run on the shaders, so there shouldn't be much for the "normal" GPU core to do
- the idle power draw of NV cards is quite high, so it seems they can't switch off the parts that aren't needed at the moment

- could someone reduce their GPU core clock, while keeping shader and memory clocks constant, and measure whether system power draw drops?
- and observe whether it affects WU runtimes
- I'd suggest lowering the GPU core clock by 100 to 200 MHz to get a measurable difference in power draw
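
The expected saving can be sketched with the standard CMOS dynamic-power relation, P_dyn = C · V² · f. A minimal sketch in Python, where the voltage and capacitance values are purely illustrative assumptions, not measurements from any card:

```python
# Back-of-the-envelope CMOS dynamic power: P_dyn = C * V^2 * f.
# C and V below are illustrative placeholders, not measured values.

def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Switching power of a CMOS block, in watts."""
    return capacitance_f * voltage_v ** 2 * freq_hz

V = 1.15      # assumed core voltage (volts)
C = 1.0e-9    # assumed effective switched capacitance (farads)

p_stock = dynamic_power(C, V, 750e6)  # stock core clock
p_under = dynamic_power(C, V, 600e6)  # core clock lowered by 150 MHz

# At constant voltage, power falls linearly with frequency: 600/750 = 0.80.
print(f"relative dynamic power: {p_under / p_stock:.2f}")
```

So a 150 MHz drop from 750 MHz could at best shave about 20% off the core's dynamic power, and nothing off its static (leakage) power.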

Thanks in advance,
MrS
____________
Scanning for our furry friends since Jan 2002

Krunchin-Keith [USA]
Message 1817 - Posted: 28 Aug 2008 | 1:19:43 UTC - in response to Message 1812.

I have BBS's with software that includes a watt meter. I have not tried what you suggest; I was waiting until the project stabilizes first. However, I did measure the increase from running the computer with BOINC running 1 CPU task (LCDs warmed up and active) and then switching to running BOINC with 1 GPU task: the extra draw was only 26 W. I will do some additional tests, but first we need a stable app here.

Additionally, the GPUs I use are all the same brand and basic model, with the same number of stream processors and the same memory. The two at work are clocked at 601 MHz and the one here at 632 MHz, per the nvmonitor program; those are factory settings. The faster one took around 59,000 s and the slower ones around 61,000 s. This is hard to pin down right now with all the app and version changes, which is why I say we need a stable app before testing what you suggest. But basically, just that small difference in clock frequency makes the slower ones take half an hour longer.
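
The figures above can be sanity-checked quickly: if runtime scaled exactly inversely with core clock, the two ratios below would match. A small sketch using only the numbers quoted in this post:

```python
# Clocks (MHz) and runtimes (s) as quoted above, per nvmonitor.
fast_clock, slow_clock = 632.0, 601.0
fast_time, slow_time = 59000.0, 61000.0

clock_ratio = fast_clock / slow_clock    # ~1.052
runtime_ratio = slow_time / fast_time    # ~1.034

print(f"clock ratio:   {clock_ratio:.3f}")
print(f"runtime ratio: {runtime_ratio:.3f}")
# The ratios are in the same ballpark, so runtime tracks the core clock
# roughly, though not perfectly (app and version changes add noise).
```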

ExtraTerrestrial Apes
Message 1821 - Posted: 28 Aug 2008 | 7:59:07 UTC

Hi Keith, thanks for the reply. I figured 4.61 was stable enough; that's why I posted this suggestion now :) (On my machine I see runtime differences of about 15 min between different WUs.)

I suspect your card with the higher core clock also has its shaders clocked higher, which could explain the difference in speed. And what's a BBS?

MrS
____________
Scanning for our furry friends since Jan 2002

[XTBA>XTC] ZeuZ
Message 1823 - Posted: 28 Aug 2008 | 8:11:28 UTC

I have a watt-meter; I will do some tests with my 9600GT.

Krunchin-Keith [USA]
Message 1849 - Posted: 28 Aug 2008 | 12:07:04 UTC - in response to Message 1821.

And what's a BBS?

Battery Backup Supply - a.k.a. UPS

or

If you're from before the internet generation, a BBS used to be a Bulletin Board System: you dialed each computer system's telephone number directly.

ExtraTerrestrial Apes
Message 1916 - Posted: 29 Aug 2008 | 18:35:13 UTC

By now I have finished six 4.61 WUs. The average time is 44569 s, the maximum 44887 s and the minimum 44131 s. That's 1% downwards and 0.7% upwards; seems stable enough for me :)
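
The percentages come straight from the three times given. As a quick check:

```python
# Runtimes (s) for the six 4.61 WUs, as stated above.
avg, hi, lo = 44569, 44887, 44131

up = (hi - avg) / avg * 100    # spread above the average
down = (avg - lo) / avg * 100  # spread below the average

print(f"spread: -{down:.1f}% / +{up:.1f}%")  # -> spread: -1.0% / +0.7%
```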

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Message 1950 - Posted: 30 Aug 2008 | 13:35:29 UTC

OK guys, forget the idea! I lowered the core clock (750 -> 400 MHz), and while the power supply fan slowed by 50 rpm and the card ran 3°C cooler, performance suffered immediately.

MrS
____________
Scanning for our furry friends since Jan 2002

pit
Message 2601 - Posted: 25 Sep 2008 | 12:05:00 UTC - in response to Message 1950.

Lowering the clock won't succeed. You should try to lower the voltage instead; that has an immediate impact on your energy bill.
With a power meter you should be able to see every step of the voltage regulator. I expect that savings of 20 to 40 W are possible (NV 260 / 280).

I'll give it a try soon when I can sit next to my workstation. There might be some reboots necessary ;-)

Good Luck

Jabba

ExtraTerrestrial Apes
Message 2620 - Posted: 25 Sep 2008 | 22:45:34 UTC - in response to Message 2601.

Lowering the clock won't succeed.


That's what I said.. kind of ;)
Sure, lowering the clock only reduces dynamic power consumption, and only linearly, but it does reduce it.
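
Why the savings were so small can be sketched by splitting total power into a static and a dynamic share, where only the dynamic share scales with clock. The 40 W / 60 W split here is an assumed example, not a measurement of any real card:

```python
# Total power = static (leakage) + dynamic; only dynamic scales with clock.
# The 40 W static / 60 W dynamic split is an illustrative assumption.

def total_power(static_w, dynamic_w_stock, freq_hz, stock_freq_hz):
    return static_w + dynamic_w_stock * (freq_hz / stock_freq_hz)

stock = total_power(40.0, 60.0, 750e6, 750e6)  # assumed 100 W total at stock
under = total_power(40.0, 60.0, 400e6, 750e6)  # core clock 750 -> 400 MHz

print(f"{stock:.0f} W -> {under:.0f} W")
```

Even nearly halving the core clock leaves the assumed static share untouched, and in practice the saving was smaller still, because the shaders doing the actual work kept their clock.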

You should try to lower the voltage.


I'd be glad to do so if nVidia would let me. From your post it sounds like on GT200-based cards you can control the voltage yourself? On my 9800GTX+ this doesn't seem to be the case, which is why I went for clock speed instead.

MrS
____________
Scanning for our furry friends since Jan 2002

pit
Message 2632 - Posted: 26 Sep 2008 | 7:04:50 UTC - in response to Message 2620.

In general this should work for all nVidia GPUs. You need the tool 'nvflash', which runs in DOS mode.
!!! Make sure that your graphics adapter is not overclocked !!!
For further details on how to handle the tool, please check for some websites in your preferred language; there are a couple of how-tos available.

Greetz

Jabba

ExtraTerrestrial Apes
Message 2639 - Posted: 26 Sep 2008 | 19:41:33 UTC

That's interesting! I could lower my GPU voltage from 1.15 V to 1.10, 1.05 or 1.00 V. I guess 1.05 V may still be stable at stock clocks, but could get borderline, and it would reduce dynamic power consumption to 83%.
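
The 83% figure follows from dynamic power scaling with the square of the voltage at a fixed clock:

```python
# P_dyn ~ V^2 at fixed frequency, so dropping 1.15 V to 1.05 V gives:
v_stock, v_under = 1.15, 1.05

scale = (v_under / v_stock) ** 2
print(f"dynamic power scales to {scale:.0%}")  # -> 83%
```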

Maybe I'll try the bootable-USB-stick thing tomorrow, but I don't have a PCI card at hand...

MrS
____________
Scanning for our furry friends since Jan 2002
