Message boards : Number crunching : Specs of the GPUGRID 4x GPU lab machine

Author Message
ignasi
Joined: 10 Apr 08
Posts: 254
Credit: 16,836,000
RAC: 0
Level
Pro
Message 4110 - Posted: 1 Dec 2008 | 15:08:23 UTC
Last modified: 1 Dec 2008 | 17:44:14 UTC

The technical specifications for this machine are the following:

Motherboard:
MSI K9A2 Platinum / AMD 790 FX 4xPCI-E 16x
CPU:
9950 AMD Phenom X4 2.60Ghz, RAM 4Gb.
GPUs:
4x NVIDIA GTX 280
Power supply:
Thermaltake Toughpower 1500W, with 4 PCI-E 8-pin and 4 PCI-E 6 pin power cables.
Box:
Antec Gamer Twelve Hundred Ultimate Gamer Case. (Ours is a Silverstone, but it is more expensive.)
(The computer case is quite important. Others are available, but not all fit the purpose, as you need enough cooling and free space after the 7th PCI slot to fit a dual-slot graphics card such as the GTX 280.)

In the future (during 2009) we may be able to use multiple GPUs for one WU, which will be rewarded with higher credit.

New GTX290 cards should be coming out soon.

Profile Stefan Ledwina
Joined: 16 Jul 07
Posts: 464
Credit: 51,279,371
RAC: 0
Level
Thr
Message 4111 - Posted: 1 Dec 2008 | 16:55:14 UTC - in response to Message 4110.

Wanna have it!
Even though it has an AMD CPU it looks really nice! ;)
____________

pixelicious.at - my little photoblog

STE\/E
Joined: 18 Sep 08
Posts: 366
Credit: 268,368,907
RAC: 0
Level
Asn
Message 4112 - Posted: 1 Dec 2008 | 17:23:23 UTC

You sure that's an Antec Gamer Twelve Hundred Ultimate Gamer Case??? I've been looking at them, and in fact bought one this morning for $119 shipped, and it sure doesn't look like what's in the picture ... ???

That thing's gotta run hot with four 280's packed together like that ... 0_o

ignasi
Joined: 10 Apr 08
Posts: 254
Credit: 16,836,000
RAC: 0
Level
Pro
Message 4113 - Posted: 1 Dec 2008 | 17:40:47 UTC - in response to Message 4112.

It is not, you are right. Ours is more expensive and fits two 750W power supplies. One burnt out, so we got the 1500W.

STE\/E
Joined: 18 Sep 08
Posts: 366
Credit: 268,368,907
RAC: 0
Level
Asn
Message 4114 - Posted: 1 Dec 2008 | 20:30:39 UTC

:)

Temujin
Joined: 12 Jul 07
Posts: 100
Credit: 21,848,502
RAC: 0
Level
Pro
Message 4116 - Posted: 1 Dec 2008 | 20:55:28 UTC - in response to Message 4110.

4x NVIDIA GTX 280


Not having any machines with multiple cards, I wonder, does a 4 card machine produce 4x the credit of single card machine or is there some trade off involved?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Message 4117 - Posted: 1 Dec 2008 | 21:46:16 UTC - in response to Message 4116.
Last modified: 1 Dec 2008 | 21:46:51 UTC

4 times, maybe more from next year if we manage to use multiple cards at once.

gdf

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 4118 - Posted: 1 Dec 2008 | 21:53:31 UTC - in response to Message 4112.

I've been looking at them and in fact bought 1 this morning for $119 Shipped and it sure doesn't look like whats in the picture ... ???


Doh.. the 4 GPUs are missing in your case! ;)

MrS
____________
Scanning for our furry friends since Jan 2002

STE\/E
Joined: 18 Sep 08
Posts: 366
Credit: 268,368,907
RAC: 0
Level
Asn
Message 4123 - Posted: 2 Dec 2008 | 0:32:30 UTC
Last modified: 2 Dec 2008 | 0:36:34 UTC

hahaha ... not for long, I ordered 4 280's as accessories ... ;)

peeticek_LubosKrutek
Joined: 30 Nov 08
Posts: 7
Credit: 62,377,145
RAC: 0
Level
Thr
Message 4219 - Posted: 8 Dec 2008 | 6:59:45 UTC - in response to Message 4116.

4x NVIDIA GTX 280


Not having any machines with multiple cards, I wonder, does a 4 card machine produce 4x the credit of single card machine or is there some trade off involved?



Hi,
how long does one GTX 280 card need to finish one task?
How many points per day can this card produce?

I'm asking because I think something is wrong with my 9800GT.
Every task finishes in about 12 hours.
That's too much, no?
Thanks a lot for the feedback - sorry for my English :D
P.

Temujin
Joined: 12 Jul 07
Posts: 100
Credit: 21,848,502
RAC: 0
Level
Pro
Message 4221 - Posted: 8 Dec 2008 | 9:19:21 UTC - in response to Message 4219.

how long does one GTX 280 card need to finish one task?
How many points per day can this card produce?

Depending on the specific card, a GTX 280 will produce something in the order of 13,000 credits per day, or 4 WUs at about 6 hours each.

I'm asking because I think something is wrong with my 9800GT.
Every task finishes in about 12 hours.
That's too much, no?

No, that looks about right for a 9800GT.
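Those figures check out with simple arithmetic; a quick sketch (the ~6 h per WU and ~13,000 credits/day are from the reply above; the per-WU credit is just derived from them):

```python
# Rough daily-output arithmetic for a single GTX 280 (numbers from the post above).
HOURS_PER_WU = 6          # ~6 hours per work unit on a GTX 280
CREDITS_PER_DAY = 13_000  # quoted figure for one card

wus_per_day = 24 / HOURS_PER_WU           # 4 WUs per day
credits_per_wu = CREDITS_PER_DAY / wus_per_day

print(wus_per_day)        # 4.0
print(credits_per_wu)     # 3250.0

# A 9800GT taking ~12 h per WU completes ~2 WUs/day, roughly half the
# throughput - consistent with "that looks about right".
print(24 / 12)            # 2.0
```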

peeticek_LubosKrutek
Joined: 30 Nov 08
Posts: 7
Credit: 62,377,145
RAC: 0
Level
Thr
Message 4222 - Posted: 8 Dec 2008 | 11:09:29 UTC - in response to Message 4221.

That's nice performance..
So if somebody has 4x GTX 280, he gets circa 52,000 PPD?
OMG - wooow.. I want it!!
But the electricity bill must be very interesting afterwards :-/
Last question: looking at that nice machine, I don't understand how these cards could be cooled. There is very little space between the cards, so how does the cooling of each card work in this configuration?
What do the temperatures look like under load?
OK, back to work, thanks for the info.

ByE
PeeT.

MIP Andrew
Joined: 23 Nov 08
Posts: 2
Credit: 156,456
RAC: 0
Level
Message 4315 - Posted: 14 Dec 2008 | 6:46:03 UTC - in response to Message 4222.

With a 1500W PSU, are you consistently pulling 1.5 kW?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Message 4318 - Posted: 14 Dec 2008 | 10:25:01 UTC - in response to Message 4315.
Last modified: 15 Dec 2008 | 16:05:45 UTC

Most likely not, but we have not measured it. On Linux, you would be consuming just the amount of the 4 GPUs, as the processor does almost nothing. I would say that each 280 should consume around 150W, so the total should be around 800-900W. Still quite a lot, even if GPUs are 10 times more power-efficient than CPUs for the same flops.

gdf

We suggest a 1500W PSU.
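GDF's estimate can be written out as simple arithmetic; the non-GPU overhead figure below is an assumption added to make the numbers close, not something measured in the post:

```python
# Hypothetical breakdown of GDF's 800-900 W estimate for the 4-GPU box.
GPU_W = 150        # per-GTX-280 draw under GPUGRID, per the post above
N_GPUS = 4
REST_W = 250       # assumed CPU + board + drives + fans overhead (not measured)

total = GPU_W * N_GPUS + REST_W
print(total)       # 850 -> inside the quoted 800-900 W band
assert 800 <= total <= 900
```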

Thamir Ghaslan
Joined: 26 Aug 08
Posts: 55
Credit: 1,475,857
RAC: 0
Level
Ala
Message 4353 - Posted: 15 Dec 2008 | 13:14:39 UTC - in response to Message 4318.

I'm not an electrical engineer, but I've seen my share of posts all over the web claiming that the higher the wattage of the PSU powering your case, the better the efficiency you'll get.

The same is said if you are plugged into 220V instead of 110V.

Internet myth? I'll only believe it if someone shows me the numbers and the tests can be scientifically duplicated elsewhere!

Profile Krunchin-Keith [USA]
Joined: 17 May 07
Posts: 512
Credit: 111,288,061
RAC: 0
Level
Cys
Message 4360 - Posted: 15 Dec 2008 | 16:40:05 UTC - in response to Message 4353.

I'm not an electrical engineer, but I've seen my share of posts all over the web claiming that the higher the wattage of the PSU powering your case, the better the efficiency you'll get.

The same is said if you are plugged into 220V instead of 110V.

Internet myth? I'll only believe it if someone shows me the numbers and the tests can be scientifically duplicated elsewhere!

That would depend on how the PSU is built. A cheap one is usually built cheaply and is not as efficient. Some of the more expensive ones are more efficient, but more cost does not necessarily mean more efficiency. Another factor is heat: just as with CPUs, if the PSU runs cooler it may be more efficient. A lot of what you hear may be true, but on a case-by-case basis. Older PSUs are rated at about 65% efficient. Some newer PSUs have an '80 Plus' rating, but to get that they need to hit 80% efficiency at 20%, 50% and 100% load. Size in watts does not matter: a 1500W unit can have the same efficiency as a 250W one if it is built that way.
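Efficiency determines how much the wall outlet actually supplies for a given DC load; a small sketch of the arithmetic (the 65% and 80% figures are the rough ones from this discussion, not a measured curve, and the 850 W load is assumed):

```python
# Wall draw = DC load / efficiency; the difference is waste heat in the PSU.
def wall_draw(dc_load_w, efficiency):
    return dc_load_w / efficiency

# Rough figures from the discussion: ~65% for an old PSU,
# >=80% at 20/50/100% load for an "80 Plus" unit. Load is assumed.
old = wall_draw(850, 0.65)
new = wall_draw(850, 0.80)

print(round(old))        # 1308
print(round(new))        # 1062
print(round(old - new))  # 245 -> ~245 W less waste heat with the 80 Plus unit
```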

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 4440 - Posted: 17 Dec 2008 | 20:31:49 UTC

Take a look here for some good information about power supplies. The bottom line is that PSUs usually have their best efficiency between 50 and 80% load, whereas below 20% things get ugly.

MrS
____________
Scanning for our furry friends since Jan 2002

Rowpie of The Scottish Bo...
Joined: 20 Dec 08
Posts: 4
Credit: 3,155,051
RAC: 0
Level
Ala
Message 5386 - Posted: 8 Jan 2009 | 16:49:35 UTC

So now that the GTX 295 is out, when are you upgrading this baby to 4 of them?

8x GPU crunching must be tempting, although I think you'd need your own power station.

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Level
Leu
Message 5412 - Posted: 9 Jan 2009 | 8:05:38 UTC - in response to Message 5386.
Last modified: 9 Jan 2009 | 8:06:53 UTC

So now that the GTX 295 is out, when are you upgrading this baby to 4 of them?

8x GPU crunching must be tempting, although I think you'd need your own power station.


I think they would wait for the GTX212. See this message thread.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5445 - Posted: 10 Jan 2009 | 13:28:54 UTC

4 GTX 295 would be awesome.. if only for the sake of it :D

Might want an i7 to keep them fed, though. And likely 2 separate power supplies, which would make the machine look less elegant. Or maybe a quality 1.2 kW unit could handle it..

MrS
____________
Scanning for our furry friends since Jan 2002

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Message 5458 - Posted: 10 Jan 2009 | 18:02:51 UTC - in response to Message 5445.

We are trying to build one. We would like to know what is the real power consumption to see if we can cope with a 1500W PSU.

gdf

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5462 - Posted: 10 Jan 2009 | 18:36:05 UTC

Even if we assume a maximum TDP power draw of 300 W for each card, the system would stay below 1.5 kW. I'd estimate power draw under GPU-Grid to fall between 200 and 250 W per card, so sustained draw should be fine. Problems may arise during peak draw and due to load distribution within the PSU. I'm not sure anyone could tell you reliably whether it will work.. without testing.

You could switch the GTX 280s to 295s one by one and check stability, maybe monitoring draw with a Kill-A-Watt meter. If it doesn't work you could stay with a 1+3 or 2+2 config. Not very convenient, but it may be the only way to find out.. assuming no one else is going to try something as crazy as this ;)
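MrS's headroom estimate, written out (the 289 W per-card TDP is cited later in the thread; the non-GPU overhead is an assumption, not from the posts):

```python
# Worst-case check: 4 cards at full TDP plus assumed system overhead vs. a 1500 W PSU.
TDP_GTX295 = 289   # W per card (figure cited later in the thread)
N_CARDS = 4
REST_W = 250       # assumed CPU/board/drives overhead

worst_case = TDP_GTX295 * N_CARDS + REST_W
print(worst_case)              # 1406
assert worst_case < 1500       # under the PSU rating, but with little margin

# Expected sustained draw under GPU-Grid (200-250 W per card, per the post):
low, high = 200 * N_CARDS + REST_W, 250 * N_CARDS + REST_W
print(low, high)               # 1050 1250
```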

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Jack Shaftoe
Joined: 26 Nov 08
Posts: 27
Credit: 1,813,606
RAC: 0
Level
Ala
Message 5468 - Posted: 10 Jan 2009 | 19:39:53 UTC - in response to Message 5445.
Last modified: 10 Jan 2009 | 19:46:55 UTC

4 GTX 295 would be awesome.. if only for the sake of it :D

Might want an i7 to keep them fed, though.


Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not?

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5469 - Posted: 10 Jan 2009 | 19:43:20 UTC - in response to Message 5468.

4 GTX 295 would be awesome.. if only for the sake of it :D

Might want an i7 to keep them fed, though.


Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not?


Who knows, maybe they want to run some real projects too ...

In that case they want some real CPU power while they be playing with the GPU Grid thingie toys ... :)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5470 - Posted: 10 Jan 2009 | 19:46:02 UTC - in response to Message 5468.

Is this correct? Does it really take a top of the line quad to keep them fed, whereas a Q6600 could not?


It's not because it's a top-of-the-line CPU; it's because 4 GTX 295s have a total of 8 GPUs, so ideally you'd want 8 CPU cores to keep them busy. A system with the smallest i7 should still be cheaper and more power-efficient than a dual quad-core.

Ahh, my bad. I was assuming the system would run Windows, which currently needs about 80% of one core for each GPU. The devs probably prefer Linux, where the CPU utilization is not a problem anyway. So forget about the i7 comment!

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Jack Shaftoe
Joined: 26 Nov 08
Posts: 27
Credit: 1,813,606
RAC: 0
Level
Ala
Message 5471 - Posted: 10 Jan 2009 | 19:50:42 UTC - in response to Message 5470.

Ahh, my bad. I was assuming the system would run windows, which currently needs about 80% of one core for each GPU.


I run Windows Vista x64 right now, and the 6.55 app on BOINC 6.5.0 uses about 6-7% of my available CPU - or about 28% of one core. I bet you could run 4 of these with a Q6600 (with no other projects) and not have the CPU be the bottleneck.


ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5473 - Posted: 10 Jan 2009 | 19:54:48 UTC - in response to Message 5471.

I just checked; currently I'm at 11-13% of my quad, whereas it used to be 15-20%. Anyway, if performance on the GPUs suffers even a little bit you're going to lose thousands of credits a day.. and bite yourself in the a** for not having more cores, or Linux, or a workaround for Windows ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Jack Shaftoe
Joined: 26 Nov 08
Posts: 27
Credit: 1,813,606
RAC: 0
Level
Ala
Message 5475 - Posted: 10 Jan 2009 | 20:04:24 UTC - in response to Message 5473.
Last modified: 10 Jan 2009 | 20:08:14 UTC

If you build a 4x GPU system, the chances of using your CPU for anything else are slim, and the i7 uses significantly more power than a C2Q (and DDR3 costs a lot more too). I just think it would be wise to save a couple hundred bucks and go C2Q - maybe a Q9450 or Q9550 instead of a Q6600.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5477 - Posted: 10 Jan 2009 | 20:14:03 UTC - in response to Message 5475.
Last modified: 10 Jan 2009 | 20:20:36 UTC

I just don't like the looks of the i7's - they eat tons of power, and I've read lots of reports that they run really hot.


I don't like the ridiculous price of the X58 boards, with tons of stuff I neither need nor want to pay for. But the processors themselves are great. Under load they don't use more power than an equally clocked Penryn quad but provide about 30% more raw number-crunching performance. And under medium load they use considerably less power than a Penryn, because they can switch individual cores off.
About running hot: I can imagine that the single die leads to higher temperatures at the same power consumption compared to a Penryn, where the heat is spread over 2 spatially separated dies. And I'd use proper cooling for any of my 24/7 CPUs anyway.

Edit: after reading your edit, maybe I should make my point more clear. The GPU crunches along until a step is finished. Once it's finished, the CPU needs to do *something* ASAP, otherwise the GPU cannot continue. So if one CPU core feeds 2 GPUs, there will be cases when both GPUs finish but the CPU is still busy dealing with GPU 1 and cannot yet attend to GPU 2. The total load of that core may be less than 100%, yet on average you'd lose performance on the GPUs. That's why I started my thinking with "1 core per GPU". Later I remembered that under Linux the situation is much less critical. If each GPU only needs about 1% of a core, I can imagine that a quad is good enough for 8 GPUs.
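The "one core per GPU" reasoning above can be put into a crude worst-case formula; the timings `g` and `s` below are illustrative values I've chosen, not measurements from the thread:

```python
# Worst-case GPU utilisation when several GPUs share one CPU core.
# Each GPU computes a step for g seconds, then needs s seconds of CPU
# service before it can continue. In the worst case all GPUs on the
# core finish together, so the last one waits for every service in turn.
def worst_case_utilisation(gpus_per_core, g, s):
    return g / (g + gpus_per_core * s)

# Illustrative numbers: 1 s GPU steps, 0.1 s of CPU work per step (Windows-like).
print(worst_case_utilisation(1, 1.0, 0.1))   # ~0.909 (dedicated core)
print(worst_case_utilisation(2, 1.0, 0.1))   # ~0.833 (two GPUs per core)

# Under Linux s is tiny (~1% of a core), so sharing costs almost nothing.
print(worst_case_utilisation(2, 1.0, 0.01))  # ~0.980
```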

MrS
____________
Scanning for our furry friends since Jan 2002

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Level
Ala
Message 5479 - Posted: 10 Jan 2009 | 20:36:26 UTC

TDP for the GTX295 is listed as 289 watts here:

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units



ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5481 - Posted: 10 Jan 2009 | 22:16:04 UTC - in response to Message 5479.

Oh, yeah.. that's what I meant by "300 W" :D

MrS
____________
Scanning for our furry friends since Jan 2002

Profile [BOINC@Poland]AiDec
Joined: 2 Sep 08
Posts: 53
Credit: 9,213,937
RAC: 0
Level
Ser
Message 5728 - Posted: 17 Jan 2009 | 15:26:46 UTC
Last modified: 17 Jan 2009 | 15:33:08 UTC

I'm just curious how you can manage work for 4 GPUs. With a quad core you can have just 4 WUs, and all of them will be crunching (nothing in stock). When one WU is finished, one GPU will be idle. Even with <report_results_immediately>1, each of your GPUs will be idle for some time once every 6 hours - four times per 6 hours in total. It seems you are wasting at least a few hours of your GPUs' time every day.

Or maybe you have some secret method to manage work for this machine? I'm using a method (which I don't like) of crunching GPUGRID at 100% and SETI CUDA at 10% (that way I always have something in stock). What's your method? Or maybe it doesn't matter to you that the GPUs are idle for so long every day? And if you can achieve 100% working time for these GPUs, how do you do that with the daily quota of 15 WUs? Because even stock graphics cards can crunch 16 WUs...

I had a machine with 3x 280s, but it wasn't possible to keep them working 100% of the time without babysitting - without checking the computer every few hours. I can't do that; because of my work I'm sometimes 18 hours away from home. So I split up the GPUs, and now this machine has just 2x 280s, which is approximately OK. But soon, very soon, I'd like to buy 2x 295s, and I don't know how to keep those cards working 100% of the time without constant babysitting.

So my question is: how are YOU doing that?
____________

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5734 - Posted: 17 Jan 2009 | 18:45:16 UTC

@AiDec

Well, I am using BOINC Manager version 6.5.0 ...

It has 0.4 days extra work, connect 0.1, and I have one task in work; on that system I have 3 "spare" tasks.

On the other system, also running BOINC Manager 6.5.0, I have a 0.1-day queue and no extra, and I have one task in work and one or two pending ... I check every few hours and post updates as tasks are completed, but other than that, I just let the two systems run ...

The first one has a GTX 280, which does tasks in about 4.5 hours, and the slower system at the moment has a 9800GT ...

To me the keys are the version of the BOINC Manager and the queue size ...

Hope this helps ...
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5736 - Posted: 17 Jan 2009 | 21:15:42 UTC - in response to Message 5734.

Hope this helps ...


Sorry, but I don't think so. He's asking how people are keeping 4 GPUs on a quad core fed, where you're limited to 4 WUs at one time. Buying an i7 could help, but that ruins the "good crunching power per investment" relation which makes us use the GPUs in the first place.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5738 - Posted: 17 Jan 2009 | 22:44:36 UTC - in response to Message 5736.

Hope this helps ...


Sorry, but I don't think so. He's asking how people are keeping 4 GPUs on a quad core fed, where you're limited to 4 WUs at one time. Buying an i7 could help, but that ruins the "good crunching power per investment" relation which makes us use the GPUs in the first place.

MrS



Hmm, I could have sworn that I had as many as 6 tasks locally ... my bad I guess ...

I suppose if the project is limiting the downloads to a max of 4 total then I should wait some time before I run out and get a pair of 295s ...

I suppose the problem will arise if you run more than GPU Grid on the machine. If you only run GPU Grid, then when a task is done, a 0.1 queue or less should contact the scheduler and get more work ...

I guess I am still missing something ...
____________

Profile [BOINC@Poland]AiDec
Joined: 2 Sep 08
Posts: 53
Credit: 9,213,937
RAC: 0
Level
Ser
Message 5739 - Posted: 17 Jan 2009 | 22:48:40 UTC - in response to Message 5736.
Last modified: 17 Jan 2009 | 22:56:51 UTC

@Paul D. Buck

My post was about the main case of this thread - ` Specs of the GPUGRID 4x GPU lab machine`, not about your comp :). But anyway thx for answer ;).


Buying an i7 could help(...)


The question is not what could be; the question is what happens now. Is the owner of this machine wasting a lot of his cards' time, or does he know some tricks? The question is whether there is any sense in having 4 GPUs (because I don't see any sense in having more than 2x 280, since it's impossible to keep more than 2x 280 fully fed). There are just a few questions which could tell us a lot about GPUGRID... I have been thinking about multiple GPUs in one machine for 6 months. I had a machine with 3x 280 and couldn't get 100% work for all the GPUs. I asked for 3x CPU tasks per computer, and I asked for 2x CPU tasks per computer... and nothing happened. So I'm asking: what's the way to fill up 4 GPUs?
____________

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5740 - Posted: 18 Jan 2009 | 1:19:05 UTC - in response to Message 5739.

@Paul D. Buck

My post was about the main case of this thread - ` Specs of the GPUGRID 4x GPU lab machine`, not about your comp :). But anyway thx for answer ;).


Buying an i7 could help(...)


The question is not what could be; the question is what happens now. Is the owner of this machine wasting a lot of his cards' time, or does he know some tricks? The question is whether there is any sense in having 4 GPUs (because I don't see any sense in having more than 2x 280, since it's impossible to keep more than 2x 280 fully fed). There are just a few questions which could tell us a lot about GPUGRID... I have been thinking about multiple GPUs in one machine for 6 months. I had a machine with 3x 280 and couldn't get 100% work for all the GPUs. I asked for 3x CPU tasks per computer, and I asked for 2x CPU tasks per computer... and nothing happened. So I'm asking: what's the way to fill up 4 GPUs?


My bad ...

I guess the last thing left to me is to suggest water ... probably about 4 gallons' worth ... :)

I guess I am fortunate in that I only have small GPUs, with only one core per card, so I don't hit the walls ... I would guess the guy we have been working with on getting his 3 GTX 295s running is in for a disappointment too ...

Sadly, there are only two projects that use the GPU on BOINC at the moment ... with GPU Grid being the best run to this point, with the stablest application. I probably won't go more nuts until Einstein@Home or some other project comes out with a GPU version of their application.

If tax season does not hit me too hard I would like to build a machine again in April/May and by then may be these issues will be ironed out ...


anyhow, sorry about the confusion ...
____________

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Level
Ala
Message 5741 - Posted: 18 Jan 2009 | 1:28:38 UTC - in response to Message 5739.

I believe the secret is that they don't run SETI (or any other project besides this one). With no CPU-based projects, can't one effectively set up BOINC to use a '0+4' alignment by adjusting the CPU use percentages, the same way others use a '3+1' in place of the default '4+1'?


Profile Stefan Ledwina
Joined: 16 Jul 07
Posts: 464
Credit: 51,279,371
RAC: 0
Level
Thr
Message 5742 - Posted: 18 Jan 2009 | 7:56:25 UTC

I don't think they run BOINC on their lab machine... My guess is that they manually feed it with jobs... ;)
____________

pixelicious.at - my little photoblog

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5751 - Posted: 18 Jan 2009 | 15:38:12 UTC

Perhaps we can get ETA to prevail on the people who make the decisions. We have at least one potential system out there that is going to have 6 GPU cores available for work.

If the scheduler cannot be made smarter, the simplest solution is to raise the total queue to 7 ... but I would suggest an investment in making the scheduler smarter (if possible), so that the queue size is related to the productivity and speed of the system.

I mean, heck, I am almost tempted to buy a pair of 295s with a better PSU to put into my top system, which would give me 6 GPU cores in the one system here too ... but there is no point in that if I cannot get enough work to keep them busy ...

Of course, as ETA has been happy to point out so often ... I have been missing the point and saying the wrong stuff ... oh well ...
____________

Profile [BOINC@Poland]AiDec
Joined: 2 Sep 08
Posts: 53
Credit: 9,213,937
RAC: 0
Level
Ser
Message 5754 - Posted: 18 Jan 2009 | 16:18:32 UTC
Last modified: 18 Jan 2009 | 16:19:16 UTC

In my opinion there are two possibilities:

- the owner of this machine does not care about keeping the GPUs filled 100% of the time,
- or he has some special rules on the server side (e.g. this user/computer ID can download and keep 'in stock' 2-3x CPU WUs, which is btw my dream...).
____________

Profile UL1
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Level
Val
Message 5756 - Posted: 18 Jan 2009 | 16:55:20 UTC - in response to Message 5751.

We have at least one potential system out there that is going to have 6 GPU cores available for work.

Take a look at this one: Triple GTX295
I do hope I can keep this configuration...and am not forced to split it up again because I can't feed it 24/7...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 5791 - Posted: 19 Jan 2009 | 21:55:37 UTC

Just to point it out clearly: currently GPU-Grid limits the number of concurrent WUs a PC can have to its number of CPU cores, i.e. 4 on a "normal" quad and 8 on an i7. That's why you can have 6 WUs overall, Paul :)

As AiDec pointed out you'll run into trouble to keep 4 GPUs fed with a normal quad core, as there are always delays in up/download and scheduler contacts. Not to mention internet connection or server failures..

And the point of the current limitation to 1 WU per CPU core is to keep slow GPUs from being routinely overloaded with many WUs which they have to abort after 3 days. I suspect the plan was to eventually make BOINC smart enough to request proper amounts of GPU work. But these efforts have been.. not very successful, so far ;) (to use kind words..)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Message 5793 - Posted: 19 Jan 2009 | 22:08:30 UTC - in response to Message 5791.

Just to point it out clearly: currently GPU-Grid limits the number of concurrent WUs a PC can have to its number of CPU cores, i.e. 4 on a "normal" quad and 8 on an i7. That's why you can have 6 WUs overall, Paul :)

As AiDec pointed out you'll run into trouble to keep 4 GPUs fed with a normal quad core, as there are always delays in up/download and scheduler contacts. Not to mention internet connection or server failures..

And the point of the current limitation to 1 WU per CPU core is to keep slow GPUs from being routinely overloaded with many WUs which they have to abort after 3 days. I suspect the plan was to eventually make BOINC smart enough to request proper amounts of GPU work. But these efforts have been.. not very successful, so far ;) (to use kind words..)

MrS


Ok, I misunderstood ... or was not fully up to snuff on the details ... heck, I have only been here a couple of weeks ... :)

So, I can consider getting a triplex of 295's for the i7 machine ... cool ...

The real count should be the number of GPU cores plus one, I would think, rather than the number of CPU cores. But that is just me ...

As to the other comment, part of the problem is that the BOINC developers, like Dr. Anderson, don't seem inclined to listen to users that much. This has been a continual problem with the BOINC system in that the three groups don't really interact that well SYSTEM-WIDE ... this is not a slam against GPU Grid or any other project specifically ... but, in general, the communication between BOINC developers, users (participants) and project staff is, ahem, poor at best ...

THAT said, GPU Grid at the moment is one of the more responsive AT THIS TIME ...

At one point in historical time Rosetta@Home was excellent ... six months later ... well ... it has never been the same ...

Anyway, with the three groups isolated from each other and no real good structures to facilitate communication ... well ... real issues never get addressed properly ...
____________

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 5796 - Posted: 19 Jan 2009 | 22:14:59 UTC - in response to Message 5793.

Hi,
when we have finished with the new applications, we will be testing a new BOINC feature which allows sending WUs per GPU instead of per CPU. This way we will fix the limit to two per GPU.
So this should not be a problem anymore in the near future. The feature is already in BOINC; it just needs to be tested.

gdf

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 5797 - Posted: 19 Jan 2009 | 22:16:24 UTC - in response to Message 5793.
Last modified: 19 Jan 2009 | 22:17:48 UTC

The real count should be number of GPU cores plus one I would think rather than the number of CPU cores. But that is just me ...


Edit: never mind this post, GDFs answer above says enough.

Definitely, or even more if the server sees that results are returned very quickly. But BOINC has to know and report the number of GPUs reliably, which doesn't sound too hard but may not be the item of top priority.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 5802 - Posted: 19 Jan 2009 | 23:16:33 UTC - in response to Message 5796.

Hi,
when we have finished with the new applications, we will be testing a new BOINC feature which allows sending WUs per GPU instead of per CPU. This way we will fix the limit to two per GPU.
So this should not be a problem anymore in the near future. The feature is already in BOINC; it just needs to be tested.

gdf


Thanks for the answer ... almost as if we knew what we were doing ... :)



____________

Audionut
Send message
Joined: 27 Jan 09
Posts: 1
Credit: 82,040
RAC: 0
Level

Scientific publications
watwat
Message 6099 - Posted: 28 Jan 2009 | 11:24:24 UTC - in response to Message 5458.

We are trying to build one. We would like to know what is the real power consumption to see if we can cope with a 1500W PSU.

gdf



http://www.extreme.outervision.com/psucalculator.jsp

9950 AMD Phenom X4 2.60Ghz,
2x DDR2 ram
4x NVIDIA GTX 295
2x SATA HDD
1x CD/DVD
2x 92mm Fans
4x 120mm Fans

At 100% load and adding 20% Capacitor Aging = 1429 Watts

110.9 Amps on the 12 volt rail though.

AFAIK there isn't a PSU readily available that can produce that much current on the 12 volt rail.
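For reference, the quoted 12 V figure follows from simple arithmetic. A rough sketch (the 1429 W total comes from the calculator above; the ~93% share on the 12 V rail is my own assumption, reverse-engineered to match the quoted 110.9 A):

```python
# Back-of-the-envelope check of the calculator's 12 V rail figure.
total_watts = 1429        # calculator's estimate at 100% load + 20% capacitor aging
fraction_on_12v = 0.93    # assumption: roughly 93% of the load sits on the 12 V rail
amps_12v = total_watts * fraction_on_12v / 12.0
print(f"~{amps_12v:.1f} A on the 12 V rail")  # ≈ 110.7 A
```

The exact split across rails depends on the PSU design, but on modern builds nearly everything big (GPUs, CPU VRMs, drive motors) is fed from 12 V, which is why the 12 V amperage, not the total wattage, is the binding constraint.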

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 6138 - Posted: 28 Jan 2009 | 21:31:33 UTC - in response to Message 6099.

Some nice numbers, but sadly not useful. The GPUs won't run anywhere near their specified maximum power draw of ~300 W. This calculator has no idea how much power the cards will draw under 100% GPU-Grid load. And generally I've found most calculators give vastly exaggerated numbers.. but I'd have to give this one the benefit of the doubt.

MrS
____________
Scanning for our furry friends since Jan 2002

DJStarfox
Send message
Joined: 14 Aug 08
Posts: 18
Credit: 16,944
RAC: 0
Level

Scientific publications
wat
Message 6800 - Posted: 20 Feb 2009 | 2:33:41 UTC

Seems very tempting to go with dual 1500 W power supplies.... Also, upgrading to a Phenom II X4 CPU would actually reduce power consumption a little. Or forget the whole thing and build a new Core i7 system with a dual power supply server chassis....

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 6817 - Posted: 20 Feb 2009 | 18:52:34 UTC - in response to Message 6800.
Last modified: 20 Feb 2009 | 18:58:03 UTC

Dual 1.5 kW? If I look at the prices of these units I'm not tempted one little bit ;) And I'm sure you wouldn't need that much power, even for 4 GTX 295s.
EDIT: just noted this post: 3 GTX 295 are fine with an Antec 850W.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Michael Goetz
Avatar
Send message
Joined: 2 Mar 09
Posts: 124
Credit: 7,573,744
RAC: 0
Level
Ser
Scientific publications
watwatwatwatwatwatwatwat
Message 7521 - Posted: 16 Mar 2009 | 17:12:19 UTC - in response to Message 5445.
Last modified: 16 Mar 2009 | 17:13:48 UTC

4 GTX 295 would be awesome.. if only for the sake of it :D

Might want an i7 to keep them fed, though.


Interesting discussion about needing 8 CPU cores to feed 8 GPUs.

Leaving aside for a second the fact that *currently* GPUGRID won't download 8 WUs unless you have 8 cores, the question is whether, or more accurately by how much, having fewer than 8 CPU cores would slow down the GPUs.

After thinking about it for a while, I don't think 8 CPU cores are required. Here's why.

The argument was made that if one core is feeding two GPUs, and both GPUs need new work at the same time, one will have to wait for the other to be serviced by the CPU. That is true. Let's call such an event a 'collision'. When a collision occurs, a GPU sits idle. That's bad. But it's not an accurate description of what's actually happening inside the computer. Let me explain.

In a computer with, say, a Q6600 and 4x GTX295 (8 GPUs), the above example is simplifying the system from 4+8 to 1+2. While mathematically that arithmetic might be correct, it distorts (significantly) the metrics of how the system is going to perform.

Assume that it takes 1/8 of a CPU core to service the GPU (which is about right on my Q6600+GTX280 running Vista). In a 1 CPU + 2 GPU system, with a purely random distribution of when the GPUs need new work, you would expect a collision to occur approximately 1/8 of the time. That's a significant performance hit.

But let's look at what's happening on the real computer, which is 4+8, not 1+2. Each of the 8 GPUGRID tasks is NOT assigned to a specific CPU core. There are lots (probably a hundred or so) of tasks running on the computer, and all of them get swapped into the register set of an individual CPU core when needed. When a task is pre-empted by another task, its register set is saved somewhere, and the other task takes over that core. Since BOINC tasks run at lower priority than anything else, they get pre-empted almost continuously, whenever the computer needs to do anything else, such as servicing interrupts.

As a result, the BOINC tasks should be hopping around between the four cores quite a lot. The important thing is that each GPU task is not running permanently on one of the four cores in the CPU, it's running on whichever core happens to be free at that instant.

For a 1 core + 2 GPU system to have a collision, you merely need to have the second GPU need new work while the other GPU is in the process of receiving new work. There's a 1/8 chance of this.

But in the real computer, with 4 cores, in order for a collision to occur, a GPU has to need new work while *four* of the other 7 GPUs are also requesting new work. What are the odds of that? (Someone correct me if my math is wrong, it's been decades since I studied probability.) With 4 cores, up to 4 GPUs can request work at the same time with 0% probability of collision because all 4 can be serviced at once.

(note that I'm simplifying this somewhat...)

With the 5th GPU, what's the probability of a collision? In order for a collision to occur, all four of the other GPUs would need to request new work at the same time. The odds of that happening are 1/8^4, or approximately 0.025%. That's higher than the 0.00% rate with 4 GPUs, but is certainly still an acceptable rate.

With the 6th GPU, the probability will rise. The chance of 4 of the other 5 GPUs needing servicing at the same time as the 6th GPU is (1+35)/8^5, which works out to 36/32768 or about 0.11%. Still pretty reasonable.

With the 7th GPU, the chance of 4 of the other 6 GPUs needing servicing at the same time is (1+42+5!*7^2)/8^6. This evaluates to (1+42+120*49)/262144, or 5932/262144, or 2.26%.

With the 8th GPU, the chance of 4 of the other 7 GPUs being busy at the same time is (1+49+6!*7^2+(6!/3!)*7^3)/8^7, or (1+49+5040*49+120*343)/2097152, or (1+49+246960+41160)/2097152, or 251170/2097152, or 11.98%.

So, if you add up all the collision rates and average them out over all 8 GPUs, you end up with a grand total of 1.79%. Granted, there's a LOT of uncertainty and extrapolation in those calculations, but if correct, you would see less than a 2% degradation in performance by running on 4 cores instead of 8.

FYI, the 1-in-8 CPU utilization factor is based on my experience with a 2.4 GHz Q6600 running Windows Vista. I understand that under Ubuntu the CPU utilization is much lower. In that case, the collision rate would drop exponentially, and 4 cores would be MORE than enough. Two (or even one) would probably suffice.
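The collision estimates above can be sanity-checked with a small Monte Carlo sketch of my own (hypothetical model: in each time slice every GPU independently needs CPU servicing with probability 1/8, matching the 1-in-8 utilization figure, and a request waits only when all cores are already taken):

```python
import random

def collision_rate(cores, gpus, p_service=1/8, trials=200_000):
    """Estimate the fraction of GPU service requests that must wait
    because every CPU core is already busy servicing another GPU."""
    collisions = requests = 0
    for _ in range(trials):
        # How many GPUs happen to need CPU servicing in this time slice?
        needy = sum(1 for _ in range(gpus) if random.random() < p_service)
        requests += needy
        # The first `cores` requests each get a core; the rest wait.
        collisions += max(0, needy - cores)
    return collisions / requests if requests else 0.0

if __name__ == "__main__":
    for cores, gpus in [(1, 2), (4, 8), (8, 8)]:
        rate = collision_rate(cores, gpus)
        print(f"{cores} CPU cores, {gpus} GPUs: ~{rate:.2%} of requests wait")
```

This is a cruder model than the combinatorics above (it ignores service duration, for one), but it agrees on the headline: with 4 cores and 8 GPUs the wait fraction is well under 1%, while 1 core feeding 2 GPUs collides a few percent of the time.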

I think I read somewhere about an upcoming change to GPUGRID to change the "1 WU per CPU core" rule to "1 WU per GPU". Assuming my calculations are valid, once that change is made there's really no reason to need 8 CPU cores to run 8 GPUs.

Regards,
Mike

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 7524 - Posted: 16 Mar 2009 | 18:01:58 UTC - in response to Message 7521.

> "1 WU per CPU core"

We have tried to experiment with 1 WU per GPU, but there seems to be a bug in BOINC.

We will keep working on it when ready.

GDF

PS: yes 4 cores are more than enough for 8 GPUs.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 7535 - Posted: 16 Mar 2009 | 22:44:37 UTC - in response to Message 7521.

Hi Mike,

good analysis! I won't bother thinking through the maths, I just agree with you that in case of 1/8th core/GPU the performance hit in a "4 cores 8 GPUs" config is going to be very small.

When I wrote my posts about this back then I didn't take thread load balancing into account. However, around the 10th of January the Windows client 6.62 was not yet released (if I remember correctly) and the CPU utilization was more like 80% of one core per GPU. The new client was on the horizon, but I was not sure yet how it would work out. Luckily I also wrote "If each GPU only needs about 1% of a core [as the Linux client used to do at that time] I can imagine that a quad is good enough for 8 GPUs." :)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Michael Goetz
Avatar
Send message
Joined: 2 Mar 09
Posts: 124
Credit: 7,573,744
RAC: 0
Level
Ser
Scientific publications
watwatwatwatwatwatwatwat
Message 7538 - Posted: 17 Mar 2009 | 6:50:51 UTC - in response to Message 7535.

After writing that long post, I was thinking about it some more, and realized that the impact would be even less than my calculations showed.

My analysis was based on the concept of the GPU being one big monolithic processor, and the whole GPU having to stop and wait for more data.

Perhaps the way the GPUGRID application is written, it works just like that. But that's not the way I would write it, and I suspect you didn't write it that way either.

A GTX 280 (or 1/2 of a GTX 295) has 30 multiprocessors. In effect, it's a 30-core GPU. If your application loads data into each of the 30 cores independently, then when a collision occurs, only 1/30th of the GPU is actually blocked -- the other 29 cores keep on processing while that one core waits for more work.

If that's the case, then the 2% number my analysis showed is incorrect -- the actual number is 1/30th of that.
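Taken at face value, the scaling is simple arithmetic (a sketch only; whether the application really feeds the 30 multiprocessors independently is exactly the open question here):

```python
# If only the stalled multiprocessor waits (not the whole GPU), the
# earlier whole-GPU penalty scales down by the number of independent units.
whole_gpu_penalty = 0.0179   # ~1.79% average collision rate from the earlier post
multiprocessors = 30         # GTX 280, or one half of a GTX 295
per_mp_penalty = whole_gpu_penalty / multiprocessors
print(f"~{per_mp_penalty:.3%} effective slowdown")  # ≈ 0.060%
```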

Mike

uBronan
Avatar
Send message
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 7546 - Posted: 17 Mar 2009 | 12:32:52 UTC

Yes, I guess the application is set up for multiprocessing; that's the power of these cards, doing a lot at once. The speed comes from all these little cores calculating different parts.

So I think the impact also depends on if and when the CPU gets told that some part is done and a new part can be calculated, since the CPU does the controlling.
However, these instructions are really little effort for the CPU, since it goes on doing other work until it gets another "ready with this and that" message.

schizo1988
Send message
Joined: 16 Dec 08
Posts: 16
Credit: 10,644,256
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwat
Message 7835 - Posted: 25 Mar 2009 | 15:10:34 UTC

I don't think that the power supply is going to be the only problem when trying to run 4 295s; space is going to be a problem too, I would imagine, as these things are big. What type of motherboard are you planning on using, and what type of case? These things generate lots of heat, so cooling is definitely going to be a concern as well, particularly since rather than venting the heat out the back of the card and outside the case, these cards vent out the side directly into your case. While the cost in the summer will be high, you will be able to save money in the winter, as you could heat a small home with 4 295s running 100% 24/7.

Clownius
Send message
Joined: 19 Feb 09
Posts: 37
Credit: 30,657,566
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwat
Message 7994 - Posted: 30 Mar 2009 | 14:16:06 UTC

My GTX 295s push hot air outside the case and suck it from inside. Four I would have to see. I think my sound cables are starting to melt being directly above the 295. The heat is extreme.
Also, the 295 is exactly the same size as my 280. I think all the 200-series cards are majorly massive. I'm putting everything into an E-ATX case (CM HAF 932) tomorrow, as my old Antec 900 can't fit both a 200-series graphics card and a HDD at the same height.

Liuqyn
Send message
Joined: 16 May 08
Posts: 5
Credit: 68,721,860
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 8287 - Posted: 7 Apr 2009 | 21:26:21 UTC

I just had a thought on the WU cache issue: why not make it a user-selectable value in preferences, say a default of 2 per host at a time, up to whatever the user believes his/her box can handle?
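For what it's worth, later BOINC clients did grow a per-host knob along these lines: an `app_config.xml` file in the project's directory lets a user choose how many tasks share each GPU and how much CPU each task reserves. A sketch only (the application name below is a placeholder; it must match the project's actual app name):

```xml
<app_config>
  <app>
    <!-- placeholder name; substitute the project's real application name -->
    <name>acemd</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task = two tasks share each GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- roughly 1/8 of a CPU core per task, as discussed above -->
      <cpu_usage>0.125</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```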

Fean
Send message
Joined: 8 Apr 09
Posts: 1
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 8306 - Posted: 8 Apr 2009 | 16:47:54 UTC
Last modified: 8 Apr 2009 | 16:56:21 UTC

1. Your motherboard has an AMD chipset (https://www.megamobile.be/productinfo/103151/Moederborden/MSI_K9A2_Platinum_-4x_PCI_Express_x16,_AMD%C2%AE_CrossF/), which means it only supports ATI's CrossFire X, so you use only the first graphics card, lolzor.

2. NVIDIA has only triple-way SLI, not quad-way SLI.

Conclusion: I think you're kind of a show-off that just wasted some money ;)

Profile Stefan Ledwina
Avatar
Send message
Joined: 16 Jul 07
Posts: 464
Credit: 51,279,371
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwat
Message 8309 - Posted: 8 Apr 2009 | 17:27:29 UTC - in response to Message 8306.

1. Your motherboard has an AMD chipset (https://www.megamobile.be/productinfo/103151/Moederborden/MSI_K9A2_Platinum_-4x_PCI_Express_x16,_AMD%C2%AE_CrossF/), which means it only supports ATI's CrossFire X, so you use only the first graphics card, lolzor.

2. NVIDIA has only triple-way SLI, not quad-way SLI.

Conclusion: I think you're kind of a show-off that just wasted some money ;)


As for 1: for GPUGRID, SLI has to be disabled, so it also works on CrossFire boards. 4 PCI-E slots = 4 cards that will be used, not only the first one. Do you really think the GPUGRID team built a machine with 4 graphics cards and wouldn't know if only one card were being used?

As for 2: see point 1. SLI has to be disabled, so it doesn't matter whether it is an SLI or CrossFire board, or whether it is triple-, quad- or octa-way. ;-)

Conclusion: Sorry, but you obviously don't know what you're talking about... Next time do a little bit more research before posting things like this. ;-)
____________

pixelicious.at - my little photoblog

jboese
Send message
Joined: 30 Jul 08
Posts: 21
Credit: 31,229
RAC: 0
Level

Scientific publications
wat
Message 8381 - Posted: 14 Apr 2009 | 10:18:43 UTC - in response to Message 8309.

Totally off topic, but instead of pimping its hardware, maybe GPUGRID could get itself some reliable hardware that is actually up five 9s and has work units available occasionally. As it is, I am about to go back to Folding@home with my PS3 and say heck with the BOINC credits. I would rather the machine contribute to science instead of sitting there as a paperweight.

Profile Michael Goetz
Avatar
Send message
Joined: 2 Mar 09
Posts: 124
Credit: 7,573,744
RAC: 0
Level
Ser
Scientific publications
watwatwatwatwatwatwatwat
Message 8388 - Posted: 14 Apr 2009 | 11:06:51 UTC - in response to Message 8381.
Last modified: 14 Apr 2009 | 11:10:08 UTC

...instead of pimping its hardware maybe gpugrid could get itself some reliable hardware that is actually up five 9s and has workunits available occasionally.


Five 9's?

Their hardware wasn't at fault; they lost power over a holiday weekend when nobody was around.

Have you ever built a datacenter with that kind of reliability? Or rented space in a hosting facility that provides that kind of reliability?

I have.

It's horrendously expensive, but it's necessary for some applications such as financial services (can't have the stock markets going down every time there's a glitch, can we?).

It's not just having reliable hardware. No hardware is perfect. So your software and network need to be designed to continue operating regardless of any single failure, which means having redundant everything -- multiple phone companies, UPS and generator power, geographically diverse locations for your redundant datacenters, etc.

As annoying as it is, outages such as this are merely an inconvenience, and building up a high-availability system to run a BOINC project would be an incredible *RECURRING* waste of money that could be better spent elsewhere. There's no reason why a system like this needs that kind of resiliency.

That being said, I don't mean to imply that their system couldn't be more resilient than it is today. But we don't have a lot of facts about what happened over the weekend, other than it being due to a power outage. If it was a short power outage and they don't have UPS, then, yes, we can fault them for not having UPS. But if it was an extended power outage, even a large and obscenely expensive UPS wouldn't be sufficient, and backup generator power would have been required to keep the system running. Cost aside, in some locations a generator isn't even an option.

Mike
____________
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 8414 - Posted: 14 Apr 2009 | 16:07:54 UTC - in response to Message 8381.

Totally off topic but instead of pimping its hardware maybe gpugrid could get itself some reliable hardware that is actually up five 9s and has workunits available occasionally. As it is I am about to go back to Folding@Home with my ps3 and say heck with the boinc credits. I would rather the machine contributes to science instead of sitting there as a paper weight.

To add to what Michael G. said ...

In many cases large organizations schedule power outages over holiday weekends because that impacts the fewest people and ongoing projects in the affected buildings. We saw this outage because power went out on the server systems we depend upon.

That said, even Einstein, which has something along the lines you suggest, has been experiencing outages too. They have systems distributed across multiple locations and still could not stay on the air ... and I am not sure they are out of the woods yet.

But that is the point of BOINC, at least in theory, while GPU Grid was out we would have just worked for other projects. The problem is that GPU computing is new to the BOINC world so there were not many places to go to get work. I went onto SaH Beta and did about 50-70 K worth of work for them ... and on one system until I can saturate it back with GPU Grid work I will feed it the last of those tasks ...

jboese
Send message
Joined: 30 Jul 08
Posts: 21
Credit: 31,229
RAC: 0
Level

Scientific publications
wat
Message 8433 - Posted: 14 Apr 2009 | 20:04:21 UTC - in response to Message 8414.

I agree about costs and was being melodramatic. The reputation of this project on the internet, though, is that it does not reliably feed work units to specialty computing devices such as the PS3, as compared to Folding@home. Some of that, I am sure, is due to a large difference in funding and priorities. I am hoping this will change, as to be honest not only do I prefer BOINC credits, but I have the machine at a remote location I only visit several times a year and couldn't change it even if I wanted to for some time. I was smart enough to do backup work on yoyo, but again, let's be honest and say neither yoyo nor SETI is doing the kind of work that is as critical as GPUGRID and Folding@home. Another worthwhile BOINC project like Rosetta or Einstein needs to harness the power of the PS3s, and then this becomes a non-issue.

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 8436 - Posted: 14 Apr 2009 | 20:19:27 UTC - in response to Message 8433.

I agree about costs and was being melodramatic. The reputation of this project on the internet, though, is that it does not reliably feed work units to specialty computing devices such as the PS3, as compared to Folding@home. Some of that, I am sure, is due to a large difference in funding and priorities. I am hoping this will change, as to be honest not only do I prefer BOINC credits, but I have the machine at a remote location I only visit several times a year and couldn't change it even if I wanted to for some time. I was smart enough to do backup work on yoyo, but again, let's be honest and say neither yoyo nor SETI is doing the kind of work that is as critical as GPUGRID and Folding@home. Another worthwhile BOINC project like Rosetta or Einstein needs to harness the power of the PS3s, and then this becomes a non-issue.

Einstein is working on a new application for CUDA and possibly OpenCL, as is MW ... The Lattice Project just ran a short test of the Garli application on CUDA (with about 5-10% of the tasks hanging, as does the CPU application, which, as I read the project's posts, they find "acceptable" and I find abhorrent; I guess it depends on how much you dislike waste).

There are a couple other projects in the wings with rumors ...

But this is still early days and to be honest *I* think that progress is about as fast as one could expect. Sadly, this still leads us to be in the position where we are lacking in choice for the moment. Of course, if you have an ATI card you have even less choice ... but ... soon enough we should see another project on-line ...

Of course, if it HAD been Einstein, we still would have been SOL too because they had an outage this weekend too ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 8464 - Posted: 15 Apr 2009 | 19:01:23 UTC

Nice discussion, but please do not totally forget the thread topic.. this is a sticky, after all.

MrS
____________
Scanning for our furry friends since Jan 2002

Alphadog0309
Send message
Joined: 23 Apr 09
Posts: 1
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 8806 - Posted: 23 Apr 2009 | 21:46:11 UTC

Awesome system, can just imagine how much work that thing is doing, whew...

It must be really hot in whatever room that system is in if it's on 24/7 :)

That's a Silverstone TJ07, BTW... http://silverstonetek.com/products/p_contents.php?pno=tj07&area=usa

I'm getting that case for a summer watercooling project.... You should consider watercooling that system and overclocking... it's a really nice case to watercool with.

t t.
Send message
Joined: 14 Jun 09
Posts: 2
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 10556 - Posted: 14 Jun 2009 | 11:27:38 UTC - in response to Message 4110.

att:cencirly Do You Mean That....
Wauiie',What a machine Igot A' ASUS,H78D-HDMI-790 Board W/the DVI Plug,and ,a
AMD'9950'4Core'.I Put The,Thing in,My Machine And The Stuff first dident,
Work,well i,found out that it,needed to have the 8pin,plug'on it again,
It's,in an,900case.and the one w/we are sorry that and blablablabla,but a blower is but you find it,we are sure,on it,cincirly ants,,on the run......
well the 8pin on,and were ave a lift off'.
only one time8800-Glacier BetaVersion,W/512mb-768Mb.and 6pin on.no DVI But
A,A-to-B Plug,then i started it again,now it got weired,it now first at this point after 3-5 runs' presented me w/a Screen W/the HDMI-Wifii-and so on
Then it,blinkd'blue,and a,grey screen ran down ,then it dident started any
Aain,What is the matter w/that machine,A,HKM 630Watt 2x160b,Sata.2xDVD-Cd
From,sata to. samsung,504-and the 2012.W/the softwarer from,Nero Softwarer.inc.
it now dont play,DvD's with my old costelation,on, an,ASUS-M3H,Sli-Hdmi deluxe.its quite an horribel,state im in,i got seconds then it'gotta go on,the techbed'to be upgraded once and for all,ps,this lap top is my only net chance i have when i wisiting one i,girl i know,i hope one with such a,monster knows any thing about my troubels,
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 10562 - Posted: 14 Jun 2009 | 13:10:38 UTC - in response to Message 10556.

Would you mind using English sentences? That would help a lot in trying to understand what you want to say.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 16970 - Posted: 11 May 2010 | 21:13:18 UTC - in response to Message 4110.

The technical specifications for this machine are the following:

Motherboard:
MSI K9A2 Platinum / AMD 790 FX 4xPCI-E 16x
CPU:
9950 AMD Phenom X4 2.60Ghz, RAM 4Gb.
GPUs:
4x NVIDIA GTX 280
Power supply:
Thermaltake Toughpower 1500W, with 4 PCI-E 8-pin and 4 PCI-E 6 pin power cables.
Box:
Antec Gamer Twelve Hundred Ultimate Gamer Case. (Ours is a Silverstone but it is more expensive).
(The computer case is quite important. There are others available, but not all fit the purpose, as you need enough cooling and free space after the 7th PCI slot to fit a dual-slot graphics card such as the GTX 280.)

In the future (during 2009) we may be able to use multiple GPUs for one WU which will be rewarded higher.

New GTX290 cards should be coming out soon.


Old bones I know, but did you ever make any progress on running one task on 4 cards?

Thanks,

PS. I'm still using that same K9A2 Platinum Motherboard.

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1947
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 16991 - Posted: 13 May 2010 | 10:13:51 UTC - in response to Message 16970.

It is working locally, but not for GPUGRID yet.

gdf

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 16994 - Posted: 13 May 2010 | 13:10:59 UTC - in response to Message 16991.

It might allow for some faster task turnaround.
