
Message boards : Number crunching : Milkyway@home on ATI cards

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6361 - Posted: 3 Feb 2009 | 20:48:38 UTC

DoctorNow wrote:
It's also possible now to run a new custom-made MilkyWay@Home-app on your GPU, but currently ONLY possible with an ATI-card and a 64-Bit Windows-system.
More details you can read in this thread.


Thought I'd just inform you, as it surely gets overlooked in the other thread. But be warned, currently it's really in pre-alpha stage. Buying a card for that wouldn't be fun. But if you've already got a HD38x0 (64 shader units) or HD48x0 (160 units) you might want to check it out. The speed is ridiculous :)

Paul D. Buck wrote:
If they get it out the door soon I might just get a couple of the lower end ATI cards that can handle it just for the mean time till they get the Nvidia version done


NV version is not going to happen anytime soon as they use double precision exclusively. You may remember that NV included 30 double units in GT200 along with the 240 single precision shaders. Well, ATI's RV770 has 160 5-way VLIW units and all of them can run 1 or 2 doubles each clock. That's such a massive advantage, it just plain wouldn't make sense to use NV cards here.
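As a rough back-of-envelope sketch of what that advantage means (my own illustration, assuming one double precision fused multiply-add, i.e. 2 flops, per unit per clock and a 750 MHz HD 4870 core clock; the GT200 figure is just the ratio of unit counts):

    // Rough sketch only: peak DP throughput implied by the unit counts above,
    // assuming 1 DP fused multiply-add (2 flops) per unit per clock.
    #include <cstdio>

    int main()
    {
        double hd4870_dp_gflops = 160 * 2 * 0.750;   // 160 units at 750 MHz -> 240 GFLOPS DP
        double unit_ratio       = 160.0 / 30.0;      // ~5.3x more DP-capable units than GT200
        printf("HD 4870 peak DP: ~%.0f GFLOPS, DP unit ratio vs GT200: ~%.1fx\n",
               hd4870_dp_gflops, unit_ratio);
        return 0;
    }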

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6368 - Posted: 3 Feb 2009 | 21:46:30 UTC

Which may mean that I will have a mix of GPUs in my future ...

I don't mind, I have had good luck with ATI cards (and with Nvidia ... 6 of one ... two dozen of the other .... or something)

Oh, I got the GTX 295 card today, a day early ... so the cards have been shuffled about ... and I have 4 in flight on the i7, which is nice ...

The wall draw of the pair of 295s is the same as the draw of the 295 and 280 card ... just for your information ...

Now I only have the disk problem on the mac pro that is raining on my life ... sigh ... less than 10% space and the disk utilities are crashing and there is an error on the disk ...

So, I got 6 new 1.5 TB drives on the way ... so how long will it take me to fill up a 3.something TB RAID 5 array ... ah well ... several days of file moving and installing and configuring the os ... sigh ...

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 6388 - Posted: 4 Feb 2009 | 11:11:30 UTC - in response to Message 6361.

DoctorNow wrote:
It's also possible now to run a new custom-made MilkyWay@Home-app on your GPU, but currently ONLY possible with an ATI-card and a 64-Bit Windows-system.
More details you can read in this thread.


Thought I'd just inform you, as it surely gets overlooked in the other thread. But be warned, currently it's really in pre-alpha stage. Buying a card for that wouldn't be fun. But if you've already got a HD38x0 (64 shader units) or HD48x0 (160 units) you might want to check it out. The speed is ridiculous :)

Paul D. Buck wrote:
If they get it out the door soon I might just get a couple of the lower end ATI cards that can handle it just for the mean time till they get the Nvidia version done


NV version is not going to happen anytime soon as they use double precision exclusively. You may remember that NV included 30 double units in GT200 along with the 240 single precision shaders. Well, ATI's RV770 has 160 5-way VLIW units and all of them can run 1 or 2 doubles each clock. That's such a massive advantage, it just plain wouldn't make sense to use NV cards here.

MrS


One of the guys mentioned this in BOINC_dev a while back. I recall Dr A asking for details so he could add ATI support into BOINC. Whether he got the details, I don't know.
____________
BOINC blog

localizer
Joined: 17 Apr 08
Posts: 113
Credit: 1,656,514,857
RAC: 0
Message 6389 - Posted: 4 Feb 2009 | 11:40:59 UTC - in response to Message 6388.

.... before we get too excited, doesn't the GPU work hit the credit/hour limit? Surely as you are using a GPU to crunch a CPU WU, as opposed to what happens here, the credits generated will be subject to the same limits.

Nice of Dr A to show an interest in ATI - I hope he has the time to fit that in with all the other items he is currently juggling/fumbling.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6404 - Posted: 4 Feb 2009 | 20:06:35 UTC - in response to Message 6389.

One of the guys mentioned this in BOINC_dev a while back. I recall Dr A asking for details so he could add ATI support into BOINC. Whether he got the details, I don't know.


Read about that a few days ago as well. They found something which would help him, but I don't know about any further progress either.

.... before we get too excited, doesn't the GPU work hit the credit/hour limit? Surely as you are using a GPU to crunch a CPU WU, as opposed to what happens here, the credits generated will be subject to the same limits.


Exactly. That's what I had in mind when I said "Buying a card for that wouldn't be fun." The credit rules could change any day, though.

MrS
____________
Scanning for our furry friends since Jan 2002

pharrg
Joined: 12 Jan 09
Posts: 36
Credit: 1,075,543
RAC: 0
Message 6637 - Posted: 14 Feb 2009 | 17:56:24 UTC

Something to keep in mind when building a new system, there are several motherboards out now that would support BOTH nVidia SLI and ATI Crossfire in a single system. You could conceivably have multiple cards of each running simultaneously... if you have the money to buy all those cards...

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6638 - Posted: 14 Feb 2009 | 18:43:18 UTC - in response to Message 6637.

Something to keep in mind when building a new system, there are several motherboards out now that would support BOTH nVidia SLI and ATI Crossfire in a single system. You could conceivably have multiple cards of each running simultaneously... if you have the money to buy all those cards...


In a system with two PCI-e, one of each ...
Three PCI-e, two of one, one of the other ...
Four PCI-e, two and two ...
Five PCI-e, major electrical fire!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6645 - Posted: 14 Feb 2009 | 19:26:56 UTC
Last modified: 14 Feb 2009 | 19:27:50 UTC

But you wouldn't need SLI or crossfire for crunching, in fact you have to disable SLI anyway (don't know about crossfire). So the actual limit is rather the amount of power, cooling and space that you can provide.. ;)
Edit: oh, and I don't know what windows does if you mix multiple cards which require different drivers.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6648 - Posted: 14 Feb 2009 | 21:34:51 UTC - in response to Message 6645.

Edit: oh, and I don't know what windows does if you mix multiple cards which require different drivers.


In theory, as I have not tried it yet, you just install the card and the appropriate drivers and Windows should be happy. Hard to say if there is a hardware or driver incompatibility that would cause clashes though.

In my case, I would likely take the conservative position and allocate cards to machines keeping all ATI in one and Nvidia in others...

Though this is all in the future, in that the first ATI application is not really ready for prime time and wide distribution on a variety of systems. As best I can tell I have neither the right card nor the right OS in my collective.

Unlike some though I can wait ... heck I still have to digest the 4 new GPUs that I have installed in the last two months ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6656 - Posted: 15 Feb 2009 | 11:58:43 UTC

Apparently with some fiddling around you can run a game on an ATI card and use an nVidia card for PhysX. So.. there's hope for some heterogeneous GPU landscape ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Jeremy
Joined: 15 Feb 09
Posts: 55
Credit: 3,542,733
RAC: 0
Message 6676 - Posted: 16 Feb 2009 | 16:18:24 UTC - in response to Message 6656.

Mixing video card brands in the same box only really works in Vista and Windows 7 atm. Don't even think of trying it in XP of any flavor. It won't be happy with two different display drivers fighting each other behind the scenes from what I've read.

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6680 - Posted: 16 Feb 2009 | 18:12:41 UTC - in response to Message 6676.

Mixing video card brands in the same box only really works in Vista and Windows 7 atm. Don't even think of trying it in XP of any flavor. It won't be happy with two different display drivers fighting each other behind the scenes from what I've read.


Well, that answers that ...

I did not have an ATI card of note that would have allowed me to test this, and now I don't have to ...

Though I am tempted to get one for my sole 64 bit machine so that I can take part in the GPU revolution happening at MW ...

nico342
Joined: 20 Oct 08
Posts: 11
Credit: 2,647,627
RAC: 0
Message 6813 - Posted: 20 Feb 2009 | 13:49:42 UTC - in response to Message 6680.
Last modified: 20 Feb 2009 | 13:50:40 UTC

Does someone know what is required to run the Milkyway@home project on an ATI GPU? I'm asking because I own an ATI HD 2400 Pro.

Thanks
____________

Profile Edboard
Joined: 24 Sep 08
Posts: 72
Credit: 12,410,275
RAC: 0
Message 6814 - Posted: 20 Feb 2009 | 14:07:07 UTC - in response to Message 6813.

You need an ATI card with an RV670 chip or newer: HD38x0, HD4670(??) and HD48x0.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6818 - Posted: 20 Feb 2009 | 19:03:31 UTC - in response to Message 6814.

Last time I checked 46xx series didn't work, only 38xx and 48xx.

MrS
____________
Scanning for our furry friends since Jan 2002

Temujin
Joined: 12 Jul 07
Posts: 100
Credit: 21,848,502
RAC: 0
Message 6863 - Posted: 22 Feb 2009 | 10:54:04 UTC - in response to Message 6818.
Last modified: 22 Feb 2009 | 10:55:36 UTC

Would anyone like to comment on the huge difference in credit awarded by the MilkyWay ATI GPU app and the GPUGrid GPU app?

For example, my GTX260-216 returns about 13,000 credits a day at GPUGrid while my ATI HD4870 returns 77,000 credits a day at MilkyWay.
I don't know how the 2 cards compare but don't imagine they are miles apart in performance.

Possible reasons
1, some exceptionally efficient coding of the ATI GPU app by Gipsel
2, Milkyway awarding a higher than average credit return (despite recent adjustments)
3, inefficient coding of the GPUGrid GPU app
4, GPUGrid having a lower than average credit award
5, ATI cards are just better at Milkyway WUs than NVidia cards are at GPUGrid WUs

I'm not suggesting a change in credit awards, I'm just puzzled at what appears to be a huge difference from what I would think are similar cards

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6865 - Posted: 22 Feb 2009 | 11:39:55 UTC - in response to Message 6863.

1, some exceptionally efficient coding of the ATI GPU app by Gipsel

yes
2, Milkyway awarding a higher than average credit return (despite recent adjustments)

yes
3, inefficient coding of the GPUGrid GPU app

no
4, GPUGrid having a lower than average credit award

Can't say for sure.
5, ATI cards are just better at Milkyway WUs than NVidia cards are at GPUGrid WUs

In some sense.. yes.

OK, that was the short version. Let me elaborate a bit:

Milkyway is an exceptional app in the way that the algorithm is perfectly suited to GPUs. The ATIs almost reach their peak FLOPS, a very rare case [if you intend to do anything useful in your code ;) ]. I think MW is still giving out a bit too much credit for CPUs.. now throw in the high-end GPUs, which are at least one order of magnitude faster, and you get a complicated situation.

The main problem is: awarding credits according to the benchmark was an idea which was bound to fail in practice. Now we have FLOP counting.. which leads to another problem: if you have a small and very well tuned app like MW you will automatically extract higher FLOPS from your hardware than with more complex code. You could say that the hardware is running this code more efficiently. So.. should we all only run apps like MW and neglect the rest, because they give many more credits per time? I don't think this is what BOINC is made for and I don't see a solution yet.

And a side note: running SETI on an NV card the current draw and temperatures are lower than under GPU-Grid, so you can be sure that GPU-Grid is not coded inefficiently ;)
And we won't get an apples-to-apples comparison between the ATI and NV cards here, because MW runs double precision, where NV cards are really weak. Their crunching power is roughly comparable at singles, though.

MrS
____________
Scanning for our furry friends since Jan 2002

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 315,034,798
RAC: 555,655
Message 6870 - Posted: 22 Feb 2009 | 14:42:04 UTC
Last modified: 22 Feb 2009 | 15:20:34 UTC

It's definitely going to draw some people away from the GPUGrid Project no matter what. If you can get 60,000 to 70,000 Per Day versus 10,000 to 13,000 Per Day here, not counting the GTX 295's, what you gonna do?

Even the GTX 295's are only capable of about 25,000 but cost 2 1/2 to 3 times the amount an ATI 4870 does, so it stands to reason to go with the ATI's & the Project that can use them. I won't lessen my Participation here @ the moment but I have ordered 2 ATI 4870's already & will order more as needed & if need be shut the NVidia's down over Time to save Electrical Cost.

The word is an Nvidia Application will be out @ the MWay Project but it hasn't shown up yet; the word is also that the NVidia Applications will be 3-4 Times slower, so a single ATI 4870 will be able to produce as much as 2 GTX 295's @ a Quarter or less of the Cost ...

Profile UBT - Ben
Joined: 12 Aug 08
Posts: 8
Credit: 137,219
RAC: 0
Message 6874 - Posted: 22 Feb 2009 | 15:18:45 UTC - in response to Message 6870.

The only Nvidia cards which will be able to take part, if MW does build a CUDA app, are the GTX200 series, as they can support double precision data, i.e. 12.3984958

However, even then, like poorboy has said, the GTX's won't be able to get anywhere near the top ATI cards' performance.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6888 - Posted: 22 Feb 2009 | 17:49:41 UTC - in response to Message 6874.

The only Nvidia cards which will be able to take part, if MW does build a CUDA app, are the GTX200 series, as they can support double precision data, i.e. 12.3984958


You're right, apart from the fact that your example is floating point, not double precision ;)

G80, G92 etc. are fine with single precision floating point, that is, numbers are represented by 32 bits. Double precision requires 64 bits, which these chips can't do. If I remember correctly a GTX 280 can do about 30 GFlops in double precision, whereas an RV770 can do 200+. If MW goes CUDA, the credits will reflect this ratio.
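As a small illustration (my own sketch, not from any project's code) of what the 32 vs. 64 bit difference does to the number quoted above:

    // Minimal sketch: the same literal stored as a 32-bit float and a 64-bit double.
    #include <cstdio>

    int main()
    {
        float  f = 12.3984958f;  // single precision: ~7 significant decimal digits
        double d = 12.3984958;   // double precision: ~15-16 significant decimal digits
        printf("float : %.10f\n", f);  // differs from the literal around the 7th significant digit
        printf("double: %.10f\n", d);  // matches the literal to far more digits
        return 0;
    }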

The real problem with FLOP counting is: if you create an app which just runs a stupid loop in the CPU caches and maximises resource utilization, you should get the most credits per time for just running this application. MW is such an application, except for the fact that they actually do science in their highly optimized / efficient hot loop. So how are you going to award credits here? Based on FLOPS you have to give out many more..

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6897 - Posted: 22 Feb 2009 | 20:23:55 UTC - in response to Message 6870.

It's definitely going to draw some people away from the GPUGrid Project no matter what. If you can get 60,000 to 70,000 Per Day versus 10,000 to 13,000 Per Day here, not counting the GTX 295's, what you gonna do?


Maybe, maybe not ...

In my case it is far more likely that I will be careful with my positioning of GPU cards. And assigning projects according to where I can get the most for my investment and the most science ...

For those of us that have tried to get some common sense applied to the credit system, we will still have to wait until the projects side with us and some sort of comprehensive review of the rules and mechanisms happens.

In my case, yes, one ATI card running full time on MW looks like it is able to earn me as much as my suite of Nvidia cards applied to GPU Grid. What I suspect will happen is that MW will be pressured to lower the award again even though they are right now awarding according to Dr. Anderson's formula...

I mean, are you here ONLY because of the awards? Or because you also want to contribute to the project?

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 315,034,798
RAC: 555,655
Message 6900 - Posted: 22 Feb 2009 | 20:54:04 UTC
Last modified: 22 Feb 2009 | 21:00:48 UTC

Maybe, maybe not ...


Come on Paul, you know darn well People are going to be flocking even more to the Milkyway Project now that the stakes have been raised even more ...

In my case it is far more likely that I will be careful with my positioning of GPU cards. And assigning projects according to where I can get the most for my investment and the most science ...


Exactly, you hit the nail on the head Paul & right now it looks like I, or anybody else, can get the most for their investment @ the Milkyway Project & really, from what it looks like, do the most Science too.


I mean, are you here ONLY because of the awards? Or because you also want to contribute to the project?


Both really, I've always run any Project for the Credits & with the hope I'll maybe help out the Science of the Project too. But at most Projects, including this one, I don't have the slightest clue what the Project is up to. All I know is some Projects are into Astronomy, some Mathematics, some do Medical Research and others are into whatever they're into.

I Attach to every Project that comes out & if I like the Credits, the Attitude of the Project's Devs/Moderators, and the Participants in general, then I'll stay with a Project longer than others. If I don't, then I run up to a set Number of Credits & get outta Dodge as fast as I can, so to say, never to run the Project again in most cases.

That said, why I run any Project is being weighted more & more heavily to the Credits side (remember I said People called me a Credit Whore awhile back) as the expense of running them grows by leaps & bounds.

So in the end for me, as I already stated, when I can get 60,000 to 70,000 Credits @ 1 Project for less than a $200 outlay versus only 13,000 to 14,000 Credits for the same amount of outlay, it's a no brainer for me which Project to lean towards & lay more resources at their Doorstep ... :)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6902 - Posted: 22 Feb 2009 | 22:35:02 UTC - in response to Message 6897.
Last modified: 22 Feb 2009 | 22:36:27 UTC

I mean, are you here ONLY because of the awards? Or because you also want to contribute to the project?


Saying that works for some, but by far not for everyone.. and most importantly, doesn't solve the problem. Actually, when the first F@H clients for the ATIs appeared I thought "How the hell would you do this in BOINC? It must screw up the credits."

With GPU-Grid we're just *lucky* that the result of the flop counting is not an order of magnitude higher than what CPUs achieve. That's why this fundamental problem has been lurking in the darkness a little longer.

Is a GPU-Flop worth less than a CPU-Flop, because it's less universal? That's what F@H decided, in order to keep things in perspective. With MW we're facing the problem that the GPU-Flops are obviously as good as the CPU-Flops, because both are running the same WUs.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6912 - Posted: 23 Feb 2009 | 5:00:20 UTC - in response to Message 6902.

Is a GPU-Flop worth less than a CPU-Flop, because it's less universal? That's what F@H decided, in order to keep things in perspective. With MW we're facing the problem that the GPU-Flops are obviously as good as the CPU-Flops, because both are running the same WUs.


This is supposed to be true at SaH too ... at least that is what they claim ... but, my recollection when I ran some tasks from there on my GPU is that the awards are not the same ...

@PoorBoy,

Well, you see, I don't think that what you said will be true for me ... it is hard to predict the future, but, at least near term, I doubt that I will change much of anything at all even when Milky Way comes out with an Nvidia version. On the 9800GT I am not sure that I can even run the tasks at all ... and the only reason I might add them to the other systems is as a safety measure in case GPU Grid goes down ...

Again, the lack of interest by the developers in the upcoming events in the GPU world means that we are in for a long season... Projects with OpenCL applications will mean that the difficulty of assigning projects to GPUs will be that much more complex. A point I was trying to get across during the discussion of the work fetch policy changes. Sadly, we will have to wait until that problem bursts on the scene before they will start to think about that ... sigh ...

I agree with you about most of what else you said ... though, for me, it is more about the project's attitude, science, etc. than the credits ... but, they are important ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6926 - Posted: 23 Feb 2009 | 12:51:01 UTC - in response to Message 6912.

This is supposed to be true at SaH too ... at least that is what they claim ... but, my recollection when I ran some tasks from there on my GPU is that the awards are not the same ...


How is that supposed to work? CPU and GPU are also running the same WUs over there, so naturally the credit reward has to be the same? And we're only not seeing super high credits per time for GPUs at seti because their algorithm doesn't use the full GPU potential (yet).

MrS
____________
Scanning for our furry friends since Jan 2002

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 315,034,798
RAC: 555,655
Message 6932 - Posted: 23 Feb 2009 | 13:19:43 UTC - in response to Message 6926.

This is supposed to be true at SaH too ... at least that is what they claim ... but, my recollection when I ran some tasks from there on my GPU is that the awards are not the same ...


How is that supposed to work? CPU and GPU are also running the same WUs over there, so naturally the credit reward has to be the same? And we're only not seeing super high credits per time for GPUs at seti because their algorithm doesn't use the full GPU potential (yet).

MrS


I think the Credits are the same but with the GPU's you Supposedly (Not really convinced about that) run the WU's faster so you get more for your Buck. But the Credits are so bad even using the GPU's it doesn't really matter.

My 8800GT OC was only getting about 20 Credits Per Hour @ the Seti Project running the GPU WU's where it would get about 200 Per Hour running the GPUGrid Project's WU's ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6936 - Posted: 23 Feb 2009 | 15:47:56 UTC - in response to Message 6932.

I don't think the ratio is that bad, currently. I have a 8600GT running over there and it's got up to almost 800 RAC and looking at the daily update numbers of the last 7 days it makes 1080 credits / day. Compare that to 6000 for a 9800GTX+ at GPU-Grid with 4 times the shaders and ~60% higher shader clock.

MrS
____________
Scanning for our furry friends since Jan 2002

slozomby
Joined: 29 Jan 09
Posts: 17
Credit: 7,767,932
RAC: 0
Message 7095 - Posted: 2 Mar 2009 | 3:05:58 UTC - in response to Message 6936.

I don't think the ratio is that bad, currently. I have a 8600GT running over there and it's got up to almost 800 RAC and looking at the daily update numbers of the last 7 days it makes 1080 credits / day. Compare that to 6000 for a 9800GTX+ at GPU-Grid with 4 times the shaders and ~60% higher shader clock.

MrS


8800 gt(512) nets around 1k w/ seti
same system gets 5-7k with gpugrid



Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 7102 - Posted: 2 Mar 2009 | 12:23:29 UTC - in response to Message 7095.
Last modified: 2 Mar 2009 | 12:27:50 UTC

Hi,
The way we compute credits is public and transparent, so other projects can check it for correctness. As we have always repeated, it is based on flops and on the way BOINC assigns credits.
See
http://www.gpugrid.net/forum_thread.php?id=219
for more information.

gdf

PS: 5 times less for seti seems impossible, as they would credit similar to CPUs...
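For those wondering what flop-based credit means in numbers, here is a minimal sketch using the usual BOINC cobblestone convention (200 credits for one full day of work on a 1 GFLOPS reference host); it is only an illustration, not GPUGRID's actual accounting code, and the flop count in the example is made up:

    // Minimal sketch of flop-counted credit under the cobblestone convention.
    // Not GPUGRID's actual code; the 2.5e13-flop task below is a made-up example.
    #include <cstdio>

    double credit_from_flops(double total_flops)
    {
        const double ref_flops_per_sec = 1e9;     // 1 GFLOPS reference host
        const double seconds_per_day   = 86400.0;
        const double credits_per_day   = 200.0;   // cobblestone definition
        return total_flops / (ref_flops_per_sec * seconds_per_day) * credits_per_day;
    }

    int main()
    {
        printf("claimed credit: %.1f\n", credit_from_flops(2.5e13));  // ~57.9 credits
        return 0;
    }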

slozomby
Joined: 29 Jan 09
Posts: 17
Credit: 7,767,932
RAC: 0
Message 7108 - Posted: 2 Mar 2009 | 18:31:27 UTC - in response to Message 7102.

PS: 5 times less for seti seems impossible, as they would credit similar to CPUs...


And yet that is what I am seeing.
It could be they are undervaluing GPU time, or not using all the shaders, or their code still needs a lot of tweaking, or a variety of other reasons. I'm not faulting anyone's point distribution methods, just pointing out what I see.

I just set seti as equal priority to gpugrid; I'll put up exact numbers as they come in.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 7118 - Posted: 2 Mar 2009 | 22:34:56 UTC - in response to Message 7108.

My 8600GT is up to 880 RAC and even had a one day break in between. So either seti has trouble utilizing more than 32 shaders.. or I'd tend to blame your observation.
At seti it may take a long time to see how many credits you really earn, due to the long deadline and many slow CPUs.

MrS
____________
Scanning for our furry friends since Jan 2002

Jeremy
Joined: 15 Feb 09
Posts: 55
Credit: 3,542,733
RAC: 0
Message 7120 - Posted: 3 Mar 2009 | 3:00:14 UTC - in response to Message 7118.

I've noticed that SETI has a tendency not to give me any CUDA tasks if I'm running a long Astropulse calculation. Probably a scheduling issue. I'll be glad when this whole CUDA/OpenCL/DX11/whatever thing is finally mainstream and BOINC has figured out how to use it all without major headaches.

slozomby
Joined: 29 Jan 09
Posts: 17
Credit: 7,767,932
RAC: 0
Message 7121 - Posted: 3 Mar 2009 | 3:12:57 UTC - in response to Message 7118.

My 8600GT is up to 880 RAC and even had a one day break in between. So either seti has trouble utilizing more than 32 shaders.. or I'd tend to blame your observation.
At seti it may take a long time to see how many credits you really earn, due to the long deadline and many slow CPUs.

MrS


One thing to note: almost every single one of the seti WU's you completed gave you less granted credit than claimed (with the exception of some very small WUs), by approximately 20%. GPUGRID has granted = claimed for every WU I've completed. Seti grants credit on the lowest claimed credit which, looking over your WU's, is always the CPU-based member of the quorum.

Just some quick math:
The last 2 WUs running seti took 20 min each at 57.8 claimed credit per WU = 173.4 per hour = 4161.6 per day. Now remove the 20% claimed vs. granted and we're at about 3329, or roughly half of what gpugrid was giving out. A far stretch from the 5:1 ratio but still pretty significant.

A pretty good overview is this:

http://setiathome.berkeley.edu/top_hosts.php
http://www.gpugrid.net/top_hosts.php

Look at the difference between the top hosts.

Again, I'm not saying either point distribution method is wrong, or that the seti app couldn't stand some tweaking. What I am saying is there's a large discrepancy between how various projects grant credit for the same gpu/cpu time.



Thamir Ghaslan
Joined: 26 Aug 08
Posts: 55
Credit: 1,475,857
RAC: 0
Message 7123 - Posted: 3 Mar 2009 | 5:36:16 UTC - in response to Message 7108.

PS: 5 times less for seti seems impossible, as they would credit similar to CPUs...


And yet that is what I am seeing.
It could be they are undervaluing GPU time, or not using all the shaders, or their code still needs a lot of tweaking, or a variety of other reasons. I'm not faulting anyone's point distribution methods, just pointing out what I see.

I just set seti as equal priority to gpugrid; I'll put up exact numbers as they come in.


I'm leaning towards their code not being efficient and not all shaders being utilized, based on amp readings I've run with GPU-Z. It draws fewer amps than GPU-Grid!

The highest amp readings I've seen were with distributed.net CUDA clients, due to that project utilizing integers more than floating point, and Nvidia is known to be stronger at integers than floats!

localizer
Joined: 17 Apr 08
Posts: 113
Credit: 1,656,514,857
RAC: 0
Message 7417 - Posted: 13 Mar 2009 | 12:27:20 UTC - in response to Message 7123.

Wow. Just stopped by here after a few days - lots of people (me included) seem to be spending some resources over at MW judging by the way the top ten lists have changed.....

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7424 - Posted: 13 Mar 2009 | 18:32:10 UTC - in response to Message 7417.

Wow. Just stopped by here after a few days - lots of people (me included) seem to be spending some resources over at MW judging by the way the top ten lists have changed.....

Don't think I am anywhere near the top ten list. But I did add one ATI card to my systems so I am contributing that. Also the new OS-X optimized application is shaving some time off my workload so that helps a little bit too ...

Were it so that I had 10 computers like my past .., sigh ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 7430 - Posted: 13 Mar 2009 | 21:13:33 UTC - in response to Message 7424.

Were it so that I had 10 computers like my past .., sigh ...


Don't worry, your current computers are more computer than your old machines will ever be.. ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7434 - Posted: 14 Mar 2009 | 7:11:47 UTC - in response to Message 7430.

Were it so that I had 10 computers like my past .., sigh ...


Don't worry, your current computers are more computer than your old machines will ever be.. ;)


Yes, they are ...

But, with 10 boxes I could have 10 GPUs ... Even a slow box with the capabilities to run a GPU will now be able to return significant work. Sadly, I only have one old box that has PCI-e in it, the other two I have are AGP ...

On that note, now The Lattice Project is saying that they will be coming out with GPU tasks "soon" ... they plan to have applications for ATI, CUDA, and OpenCL ... or at least that is the way I read the note. Most interesting is that the implementations will not be universal. By that they indicate that they may come out with an Nvidia CUDA application for one task type and an ATI application for another. The idea, according to them, is to limit the number of versions.

Though as I understand OpenCL, if they use that as their target it will be a "simple" re-compile ...

Though I suspect that targeting the native / proprietary API, at least for the near term is going to produce faster applications ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 7435 - Posted: 14 Mar 2009 | 11:23:10 UTC - in response to Message 7434.

Though as I understand OpenCL, if they use that as their target it will be a "simple" re-compile ...

Though I suspect that targeting the native / proprietary API, at least for the near term is going to produce faster applications ...


Agreed. The current implementations are probably better optimized. And... seems like my next board should have 2 PCIe slots, although that will make silent air cooling a real challenge. And I'd need a new PSU.. not because the current one couldn't handle the load, but because its fan would get too loud!

MrS
____________
Scanning for our furry friends since Jan 2002

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 7438 - Posted: 14 Mar 2009 | 14:36:06 UTC - in response to Message 7434.

Sadly, I only have one old box that has PCI-e in it, the other two I have are AGP ...



Paul,

As I recall, some of the compute capable cards from ATI were produced in AGP (e.g., Radeon HD 2600 XT, Radeon HD 3650, Radeon HD 3850, Radeon HD 3870), though I don't know if these will work with Milkyway@home...but for other future projects???


Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7440 - Posted: 14 Mar 2009 | 16:06:13 UTC - in response to Message 7438.

As I recall, some of the compute capable cards from ATI were produced in AGP (e.g., Radeon HD 2600 XT, Radeon HD 3650, Radeon HD 3850, Radeon HD 3870), though I don't know if these will work with Milkyway@home...but for other future projects???

Well, the cases are Antec type so they are pretty flexible and I have used them for several generations of computers. Though I have to admit I kinda like one of the new cases I got as it puts the PSU at the bottom so that it can have a 6" fan at the top of the case. Warm air rises so ...

We are still in WAY early days so I am not too spun up yet and there are tests yet to come... not sure if I will be able to get TOO expensive this year again... we borrowed money from one daughter to pay off the house that we had to borrow against to help another daughter. Now the wife is more eager to pay off the daughter than she was the bank ... oh well ...

I already have a 1K plus PSU so, if I see a deal on an i7 I might get the urge and splurge ... then I could retire one of the Dells and move the GPU from there ...

But, I think I will just hang in with what I have at the moment as 100K plus a day is good enough for the moment. Heck, I even like the new Apple with 16 virtual CPUs ...

Profile Stefan Ledwina
Joined: 16 Jul 07
Posts: 464
Credit: 135,911,881
RAC: 892
Message 7443 - Posted: 14 Mar 2009 | 18:00:02 UTC - in response to Message 7440.

...
But, I think I will just hang in with what I have at the moment as 100K plus a day is good enough for the moment. Heck, I even like the new Apple with 16 virtual CPUs ...


Yeah, the new MacPro looks like a pretty good cruncher... If I had enough money I sure would get one. ;)
The only downside is that there are no GPU apps for Macs so far.
____________

pixelicious.at - my little photoblog

Wolfram1
Joined: 24 Aug 08
Posts: 45
Credit: 3,431,862
RAC: 0
Message 7448 - Posted: 14 Mar 2009 | 19:17:42 UTC - in response to Message 7438.

Sadly, I only have one old box that has PCI-e in it, the other two I have are AGP ...



Paul,

As I recall, some of the compute capable cards from ATI were produced in AGP (e.g., Radeon HD 2600 XT, Radeon HD 3650, Radeon HD 3850, Radeon HD 3870), though I don't know if these will work with Milkyway@home...but for other future projects???




It is off topic here, but the author of the optimized Milkyway GPU application wrote in the German forum that AGP is OK for the application; only a small amount of data goes from GPU to CPU.

Temujin
Joined: 12 Jul 07
Posts: 100
Credit: 21,848,502
RAC: 0
Message 7449 - Posted: 14 Mar 2009 | 19:27:59 UTC - in response to Message 7438.

As I recall, some of the compute capable cards from ATI were produced in AGP (e.g., Radeon HD 2600 XT, Radeon HD 3650, Radeon HD 3850, Radeon HD 3870), though I don't know if these will work with Milkyway@home...but for other future projects???

I have an AGP HD3850 running milkyway; it was a bit of a fudge installing the drivers but it runs fine, producing ~25,000 credits per day :)

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7451 - Posted: 14 Mar 2009 | 19:35:46 UTC

Yes, well, that is problem B ... my cards are not in that league. Those systems were dedicated BOINC crunchers in their day so the video cards are $29 specials ... no memory and no real GPU ... and they are off line ...

If I were to put another system on, I have one last dual AMD 4400 that has a PCI-e slot, so that would be the next system if I were that hot to trot to add another system to my pool ... keeping one fed on MW is tough enough so I am just content to wait a little bit before taking another dive into the pool.

Fun to talk about though ...

uBronan
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Message 7480 - Posted: 15 Mar 2009 | 17:03:15 UTC

Well, as I was reading this thread I see many people forget how much processing power these ATI cards really have.
A simple 4830 already has 640 units with loads of processing power.
If I compare the raw FLOPS ratings of each comparable video card, the ATI beats the strongest Nvidia in double precision calculations,
while in single precision the Nvidia seems to be a bit better, but time will tell.
And if we look at the new GPU science cards ATI beats them all: the best Tesla has 633 GFlops while the strongest ATI card has a staggering 1.3 Tflops,
that's more than double the power.
But more important is that this enormous ATI power comes with less power consumption than the Nvidia cards.
In my country, the Netherlands, the price of electricity is one of the highest in the world compared to other countries.
So when I see that such power comes fairly cheap and uses less power, I think the choice when you are into distributed computing is not so hard.
As to the points, I don't think they are that high; as you can see it's 0.7 points per unit done in 9 seconds on the fastest cards.
Excluding the science cards, since I don't know anybody who has bought one of these cards yet, other than for doing science at their jobs ;)
But I guess this will change in the near future, also because the prices of these cards are dropping very nicely.
The Tesla costs around $1500 and the fastest ATI about $950 at newegg.

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 7504 - Posted: 15 Mar 2009 | 21:15:16 UTC - in response to Message 7480.

And if we look at the new GPU science cards ATI beats them all: the best Tesla has 633 GFlops while the strongest ATI card has a staggering 1.3 Tflops,
that's more than double the power.


Hmmm...your figures appear to be substantially incorrect. The best single card Tesla solution (the C1060) actually has 936 GFlops. The best internal single card NVIDIA solution is actually the GTX 295, which falls just short of 1800 GFlops. The best single card ATI solution is the 4870 X2, which falls around an estimated 2400 GFlops. None of these is anywhere near the best Tesla from NVIDIA, which is an external rackmount system that connects via either 2 PCIe slots (s1070) or via a single PCIe slot (s1075) -- 4300+ GFlops.


uBronan
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Message 7544 - Posted: 17 Mar 2009 | 10:49:54 UTC - in response to Message 7504.
Last modified: 17 Mar 2009 | 11:10:01 UTC

You talk about the normal video cards and other non-normal products.
I am referring to the ATI science cards .. I am keeping myself to the knowledge provided by these facts.
And the comparable Tesla C1060 is sold to be competition for the ATI's; the given 633 GFlops are provided by newegg.

Also, to make matters simple: for the price of 1 Nvidia Tesla C1060 I can buy almost 2 ATI 9250's with 2 x 1.3 Tflops of computation power.

I am not going into debating which combinations give more performance .. too risky ;)

My buddy who works with them for computations says the Tesla can't keep up with the ATI's at all; they are used for computations for nuclear science and are placed 2 each in 1 dual core machine.
The ATI performs the same calculations in 1/3 the time needed by the Tesla.
Since these are very complex computations it is of course only possible to compare because both need to check each other on the same results.
They also run the same project multiple times to make sure the result is confirmed; the funny thing is a super mainframe seems to calculate them again as a check :D

And then ATI reports a new monster is coming to the market: FireStream

I guess if you have the money ATI can provide a proper response to the external products too, though I guess no prices are available for these little toys ;)

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 7548 - Posted: 17 Mar 2009 | 13:01:31 UTC - in response to Message 7544.

Okay...not trying to pick a fight here (I like both NVIDIA and ATI cards), but I just don't get where you are getting your numbers from?

You talk about the normal video cards and other non-normal products.
I am referring to the ATI science cards .. I am keeping myself to the knowledge provided by these facts.
And the comparable Tesla C1060 is sold to be competition for the ATI's; the given 633 GFlops are provided by newegg.


The primary difference between "science cards" and the mass market gaming equipment is special optimizations for particular science applications, not necessarily more raw power. NewEgg does not sell the Tesla C1060 (or at least I cannot find it). NVIDIA lists the specification for the C1060 as having up to 933 GFLOPS single-precision and up to 78 GFLOPS double-precision.


Also, to make matters simple: for the price of 1 Nvidia Tesla C1060 I can buy almost 2 ATI 9250's with 2 x 1.3 Tflops of computation power.


There is no debating the price difference...the ATI card is much cheaper (around $850 US) compared to the C1060 (around $1500 US). Still, I do not understand where you are getting the 1.3 TFLOPS figure; ATI lists the card specifications as up to 1.2 TFLOPS single-precision and up to 240 GFLOPS double-precision.


I am not going into debating which combinations give more performance .. too risky ;)


Well, I'd argue that it is just an impossible debate. With some cards including optimizations for particular software (e.g., CAD, etc.), this will vary greatly even within a single company's line of products. More importantly, the fundamental structural differences between how ATI and NVIDIA have organized the unified shaders in each of their product lines means that, even given identical theoretical GFLOP/TFLOP performance, a given application can be better performed by one or the other design.


My buddy who works with them for computations says the Tesla can't keep up with the ATI's at all; they are used for computations for nuclear science and are placed 2 each in 1 dual core machine.
The ATI performs the same calculations in 1/3 the time needed by the Tesla.
Since these are very complex computations it is of course only possible to compare because both need to check each other on the same results.


This is to be expected since physics calculations require a great deal of double-precision operations (and as noted above the 240 DP GFLOPS of the 9250 are almost exactly 3 times the 78 DP GFLOPS of the C1060). With single precision calculations, this difference (while still in favor of the ATI card) would be considerably less.

I'd also add two last points. First, the C1060 has been out now for quite some time, while the 9250 is just appearing. Thus, it should not be surprising that the much newer ATI card has the better performance numbers. Why NVIDIA has not lowered its price (the costs are very similar to when the C1060 was introduced) is a mystery. Maybe the introduction of the 9250 will force their hand on this...

Second, your suggestion (in the earlier post) that the power consumption for the ATI card is less does not appear to be correct. Both the 9250 and C1060 are listed as having standard power consumption numbers of 160 watts, but the Tesla actually has lower peak power consumption (200 watts) than the 9250 (220 watts).


The bottom line to all of this (at least from the perspective of using a GPU in BOINC) is that the particular project application will drive which manufacturer has better performance. The Milkyway@home app. is heavy on the double-precision calculations (as I understand it), so the ATI cards dominate performance. The brief discussions at PrimeGrid have suggested the possible use of GPUs for the sieving applications where single-precision is key. In that case (if it ever happens), the performance difference may be in the opposite direction.


Of course, all of this is mostly irrelevant since I cannot afford any of these cards. ;)
Maybe we can convince Paul Buck to buy a couple and test them. :)







uBronan
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Message 7551 - Posted: 17 Mar 2009 | 14:43:37 UTC

Well yes, I have again just been stating what is posted on the sites where these cards are sold, so if they lie ... I obviously do also.

And yes, you're right, it's basically huge double precision calculations; he stated that the machine with the Tesla uses more real power than the ATI machine.
So if anyone would be rich enough to buy these toys and test them ..... (" Paul ;) ")

Anyway, we won't go any further on the subject since we need Paul to buy them and of course test them all for us ;)


Gipsel
Joined: 17 Mar 09
Posts: 12
Credit: 0
RAC: 0
Message 7553 - Posted: 17 Mar 2009 | 16:26:09 UTC - in response to Message 7548.

The brief discussions at PrimeGrid have suggested the possible use of GPUs for the sieving applications where single-precision is key. In that case (if it ever happens), the performance difference may be in the opposite direction.

That's true. In principle the sieving could really be accelerated quite easily using a GPU (the sieving kernels are just a few lines, much shorter than even MW for instance). But actually one would like to implement it with a mix of double precision floating point and 64Bit integer arithmetic (like on CPUs). Unfortunately it isn't that easy on GPUs yet.

If you look at the capabilities of current cards it may be best to restrict it to a mix of single precision floating point and 32-bit integer arithmetic for the time being. Given the fact that GPUs are much faster using single precision, most probably that would even result in a faster app when compared to a double precision/64-bit implementation (as opposed to the CPU version).

But I guess they are still in the initial planning stage as the layout of the app probably has to be changed a bit to accomodate for the requirements of a GPU implementation.
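Purely as an illustration of how small such a kernel can be (a toy sketch, not PrimeGrid's or anyone's actual sieve code), striking out multiples of a handful of small primes in a window using nothing but 32-bit integer arithmetic could look like this in CUDA:

    // Toy sketch only: each thread strikes out the multiples of one small prime
    // inside the window [base, base + window), using 32-bit integer arithmetic.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void sieve_window(const unsigned int *primes, int num_primes,
                                 unsigned int base, unsigned int window,
                                 unsigned char *is_composite)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= num_primes) return;
        unsigned int p = primes[i];
        unsigned int start = ((base + p - 1) / p) * p;   // first multiple of p >= base
        for (unsigned int n = start; n < base + window; n += p)
            is_composite[n - base] = 1;                  // benign race: every writer stores 1
    }

    int main()
    {
        const unsigned int base = 1000000u, window = 1024u;
        std::vector<unsigned int> primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31};

        unsigned int *d_primes; unsigned char *d_comp;
        cudaMalloc(&d_primes, primes.size() * sizeof(unsigned int));
        cudaMalloc(&d_comp, window);
        cudaMemcpy(d_primes, primes.data(), primes.size() * sizeof(unsigned int),
                   cudaMemcpyHostToDevice);
        cudaMemset(d_comp, 0, window);

        sieve_window<<<1, 32>>>(d_primes, (int)primes.size(), base, window, d_comp);

        std::vector<unsigned char> comp(window);
        cudaMemcpy(comp.data(), d_comp, window, cudaMemcpyDeviceToHost);

        int survivors = 0;   // window entries not divisible by any of the small primes
        for (unsigned int j = 0; j < window; ++j) survivors += (comp[j] == 0);
        printf("sieve survivors in [%u, %u): %d\n", base, base + window, survivors);

        cudaFree(d_primes); cudaFree(d_comp);
        return 0;
    }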

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7567 - Posted: 17 Mar 2009 | 21:35:48 UTC - in response to Message 7551.
Last modified: 17 Mar 2009 | 21:40:17 UTC

So if anyone would be rich enough to buy these toys and test them ..... (" Paul ;) ")


Tesla Card at $1,700 and listed as 933 GFLOPS

Well, after I pay my income tax ...

But, there is no pressing need for me to add to the 9800GT, GTX280, GTX295 (2 each) and HD4870 I have working for me right now ... even more importantly, I would have to pull a card to try anything ... and there is no pressing need to do that ...

As I posted in Milky Way, several projects are mumbling about GPU applications ... but, there is nothing definite in the wind. So, again, no pressing need to consider upgrading ... though I would like to replace the two old dells I have running ...

{edit}

Sigh, forgot the rest of what I was going to say looking for comparable cards...

The issue of which is faster is moot because there is no project that runs on both platforms. Meaning, who cares if the ATI card is faster if you want to do GPU Grid ... Or the Nvidia card if you want to do MW?

ATI and Nvidia have swapped bragging rights for as long as I have been buying computers. Now, it is more interesting to me in that I never did run games that really stressed any video card I bought.

IF and when the first project comes out with applications for more than one of the three APIs, then we can start playing who is faster. But, that will only be relevant for that specific project and those specific applications ... over time that may change ...

uBronan
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Message 7577 - Posted: 18 Mar 2009 | 1:39:47 UTC
Last modified: 18 Mar 2009 | 2:06:06 UTC

Hihihi, poor us, we don't seem to be able to get Paul to buy these cards yet ;)
Ah heck, it's just a matter of time if we poor guys whine long enough :D

On the other side, I don't seem to be able to get my buddy to donate their old 16 core Opteron server to me either, so I need to find more excuses for him to do so :D
A shame his company has a non-DPC policy or we could have tested DPC projects on those machines.

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7586 - Posted: 18 Mar 2009 | 9:59:05 UTC - in response to Message 7577.

Hihihi, poor us, we don't seem to be able to get Paul to buy these cards yet ;)

Not that I don't want to ...

It is just that I can't see any reason to make a change or addition ...

I cannot keep the machine with the ATI card stocked with a full queue while running MW full time. So, what would be the point of adding more GPUs to that system (even were it possible). And do I want to buy a 1060 card when there is likely a 1070 card coming out? :)

Heck, we have not even had EaH release its application yet, though that looks like it is coming soon to a GPU near you ... the problem is that I don't know which one yet ...

Worse, we don't know what the impact will be when the GPU version comes out and people like me start to run EaH alongside GPU Grid ... will it be better than the disaster with SaH and GPU Grid with task death on SaH corrupting the GPU to error out all tasks? Ugh ...

Heck, I need to visit the tech specs to see if the C1060 card is really that much faster than say the GTX 295 card and if so, by how much?

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 7588 - Posted: 18 Mar 2009 | 12:17:38 UTC - in response to Message 7586.

And do I want to buy a 1060 card when there is likely a 1070 card coming out? :)


The S1070 is already out; it is a rackmount system (basically it is 4 GTX280's with a slightly higher shader clock).


Heck, I need to visit the tech specs to see if the C1060 card is really that much faster than say the GTX 295 card and if so, by how much?


For GPUGRID, the GTX295 is considerably more powerful than the C1060, the latter essentially being a specialized version of the GTX280.




Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 7601 - Posted: 18 Mar 2009 | 18:44:19 UTC - in response to Message 7588.

For GPUGRID, the GTX295 is considerably more powerful than the C1060, the latter essentially being a specialized version of the GTX280.

Which is interesting news ... now if only they were not having troubles stocking the GTX 295 ... :)

It is no matter; I am not in the market at the moment. And perhaps the 285 is decent enough ... of course, if I wait 6 months perhaps I can expect to see a 305 or something ...

Clownius
Send message
Joined: 19 Feb 09
Posts: 37
Credit: 30,657,566
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwat
Message 7613 - Posted: 18 Mar 2009 | 23:56:13 UTC - in response to Message 7601.

My GTX 295 OC took over a week to come into Australia. If someone hadn't failed to pick theirs up in time, I would have been on back order for another 3 weeks ... hope my next one doesn't take that long; I want to run SLI for games and 4 GPU WUs at once.

uBronan
Avatar
Send message
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 7635 - Posted: 19 Mar 2009 | 14:15:21 UTC

Thanks for explaining, Scott; so we need to wait till something really fast comes out.
OK, I'll stop bugging Paul for now until something faster is presented ;)

Scott Brown
Send message
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwatwat
Message 7638 - Posted: 19 Mar 2009 | 15:40:12 UTC - in response to Message 7635.

...and thanks back to you, Webbie, for pointing out the new ATI card. I really need to find more time to keep up with developments from both companies. I think NVIDIA will come out with some modest updates to the Teslas soon, since all the numbers so far seem to be for the older 65nm process, while the GTX 285 and 295 are using the newer 55nm chips...

Profile Zydor
Send message
Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 7670 - Posted: 20 Mar 2009 | 17:50:15 UTC - in response to Message 7638.

The successor to the 295 (GT212) is already on the stocks and due for full production shortly - about 2 months from now - with 380 shaders amongst a whole raft of other stuff. It will doubtless hang the ATI out to dry - for now - until ATI's new beast comes out a short while later.

And so it goes on ... it'll never change; they will always leapfrog each other at their respective cutting edges. I decided a while back to stay with NVidia, but frankly it's just as effective a real-world choice to go ATI.

Competition is a wonderful thing :)

Regards
Zy

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 7694 - Posted: 21 Mar 2009 | 11:17:00 UTC - in response to Message 7670.

And so it goes on ... it'll never change; they will always leapfrog each other at their respective cutting edges. I decided a while back to stay with NVidia, but frankly it's just as effective a real-world choice to go ATI.

Well, for ordinary video you are right ...

But in the BOINC world, at the moment, it is a little more complicated. I suspect that in 2-3 years this will not be the case and sticking to one side or the other will be a lot of "who cares" ... but, for the moment, the manufacturer matters depending on which projects you wish to run. If you want to run Milky Way you have no choice but ATI, or running it on the CPU. For most other projects it is Nvidia.

It is looking like The Lattice Project will be the next to release a GPU application, though it is not clear from the announcements whether it is going to target Nvidia or ATI cards (most likely Nvidia).

But the debates will not end ... it is like the Ford v. Chevy debates of my youth ...

uBronan
Avatar
Send message
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 10008 - Posted: 20 May 2009 | 15:56:15 UTC
Last modified: 20 May 2009 | 15:58:20 UTC

Well, I don't think there is a choice really; if you look at MW, the official version they are making will be CUDA (Nvidia).
If you ask why, we don't get an answer, but I guess it has to do with money.
Anyway, the MilkyWay application on ATI is built by Cluster Physik, who made it from the available source of the MW application.
Of course you can see this the same as the KWSN applications built for SETI: they are accepted but officially not supported.
Sadly the whole BOINC community is somehow not very nice towards the ATI community, and I guess it's going to stay that way for a long time; Mr. Anderson has already stated that only CUDA will be the mainline for BOINC as well.
So maybe in the next 20 years we will see support for ATI cards, but again I say maybe ... in a far, far away fairytale.
But hell, I no longer care; I bought 2 ATI cards, so if BOINC is not going to support them, that's their problem.
I don't want to be pushed towards what Mr. Anderson wants; I want to choose myself >.<

Profile Paul D. Buck
Send message
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 10010 - Posted: 20 May 2009 | 16:22:16 UTC

ATI support is supposed to be coming "real soon now" ... I agree that the introduction of GPU processing has not gone as I would have liked, and I think they have made major mistakes along the way. With Snow Leopard rapidly approaching, they are going to have these same issues and questions on the Mac platform ... though they are also going to be bugged about OpenCL, which SL will support natively.

OpenCL should support both cards, though I wonder if there will be two executables for each platform, one for each card. I thumbed through the OpenCL site and could not find a clear answer to that simple question ... so, I wait and see ...

But, OpenCL will have almost the same issues as did CUDA and as does the ATI connection.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 10028 - Posted: 21 May 2009 | 11:31:29 UTC

uBronan,

It's not about "the community not being nice". ATI support was planned for BOINC right after the 6.6.x code tree with its scheduler changes was debugged ... which takes a *little* longer than *some people* expected.

You're not seeing widespread adoption of ATIs by projects because they're even harder to program than CUDA. CUDA is a modified C, something developers know and like (I don't, but that's a different topic ;). So they "only" have to worry about somehow adapting their algorithm to run efficiently on GPUs. This alone is so hard that you don't see widespread adoption of it.
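
For illustration only - this is a generic, made-up sketch and has nothing to do with the actual MW code - here is roughly what "a modified C" means in practice. The __global__ qualifier, the built-in thread indices and the <<< >>> launch syntax are the CUDA additions; everything else is plain C:

#include <cstdio>
#include <cuda_runtime.h>

// Each thread scales one array element. blockIdx, blockDim and threadIdx
// are CUDA built-ins; the rest is ordinary C.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // <<<blocks, threads per block>>> is the CUDA launch syntax
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[10] = %f\n", host[10]);   // expect 20.0
    return 0;
}

The hard part is not this syntax; it's restructuring a real algorithm so that thousands of threads like this actually have useful, independent work to do.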

Now there's ATI CAL. Here you have to somehow adapt your algorithm to run efficiently on GPUs, and you have to do this in a language which is more like assembler than C. That's a nightmare if you need to debug or change it. I think I only ever wrote one assembler program, a small and simple one that would have been just a few lines of code in C. But, boy, did we make a ton of stupid errors, and it took quite a while to figure them out, even with an excellent debugger...

Some training helps, but really: you don't want to do this with complex apps. With MW we're lucky that the actual code, the hot loop, is relatively simple. That's why Cluster Physik was able to do this in his spare time.

Paul,

I didn't bother with OpenCL much yet, but I'm sure there'll be one app for all. The instructions therein go to the video card driver, which does the low-level work for you. Similar to DirectX and OpenGL. In practice there may be different apps because people have to work around different bugs in the drivers, or because they may have to use completely different algorithms / data structures to get good performance from different GPUs.

MrS
____________
Scanning for our furry friends since Jan 2002

Mr. Hankey
Send message
Joined: 2 Apr 09
Posts: 2
Credit: 100,749,476
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 11134 - Posted: 14 Jul 2009 | 16:28:58 UTC - in response to Message 6676.

Mixing video card brands in the same box only really works in Vista and Windows 7 atm. Don't even think of trying it in XP of any flavor. It won't be happy with two different display drivers fighting each other behind the scenes from what I've read.


This is not true. I run both an ATI 4870 for MW and an Nvidia 8800GT for SETI/GPUGRID/AQUA, all of this under 32-bit Windows XP SP3.

Profile Sabroe_SMC
Send message
Joined: 30 Aug 08
Posts: 24
Credit: 500,287,085
RAC: 504,368
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 11135 - Posted: 14 Jul 2009 | 16:33:17 UTC - in response to Message 11134.

How did you do this? Please explain.

Mr. Hankey
Send message
Joined: 2 Apr 09
Posts: 2
Credit: 100,749,476
RAC: 0
Level
Cys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 11137 - Posted: 14 Jul 2009 | 16:45:38 UTC - in response to Message 11135.

How did you do this? Please explain.


I started by reading around and found these:

http://tinyurl.com/czgvyr

http://tinyurl.com/dcekgu

I then just bought a 4870 and added it in. I made the following observations:

1. When I plugged in the ATI card, it became the default video for the BIOS / boot, since the free PCIe slot I had was lower on the bus. If I wanted to change that, I would just have to swap the ATI's position with the Nvidia's.

2. Windows, however, still used my Nvidia card as the primary OS card. I did have to connect a second monitor (or a second input on the same monitor) to the ATI card and extend my desktop to it under Windows to get the OS to load the driver. You would need to extend the desktop as well even if you used a dummy VGA connector.

3. For MW I am using the Catalyst 8.12 drivers, as I was having issues with the 9.1 driver which caused the WUs to error out after a while. I am also using the .19e version of the MW GPU app from zslip.com.

Basically I have been crunching happily since April with this configuration.

Profile Sabroe_SMC
Send message
Joined: 30 Aug 08
Posts: 24
Credit: 500,287,085
RAC: 504,368
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 11152 - Posted: 16 Jul 2009 | 9:30:06 UTC - in response to Message 11137.

Thanks for the quick response. I will give it a try after my holidays.
