
Message boards : Graphics cards (GPUs) : GeForce GTX Titan launching the 18th

Author Message
flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 28588 - Posted: 18 Feb 2013 | 8:20:12 UTC

There's an interesting article here

http://videocardz.com/39536/nvidia-geforce-gtx-titan-to-be-released-on-february-18th

I guess it will be a paper launch, but maybe we'll see some benchmarks once the NDAs are lifted.

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28589 - Posted: 18 Feb 2013 | 8:36:32 UTC - in response to Message 28588.

Click instead

http://videocardz.com/39536/nvidia-geforce-gtx-titan-to-be-released-on-february-18th

____________
BOINC <<--- credit whores, pedants, alien hunters

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 28590 - Posted: 18 Feb 2013 | 9:05:39 UTC

Here are some pictures of it:

http://www.techpowerup.com/180297/NVIDIA-GeForce-GTX-Titan-Graphics-Card-Pictured-in-Full.html

Sorry for being lazy, just got excited.

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28592 - Posted: 18 Feb 2013 | 12:03:38 UTC - in response to Message 28590.

Looks nice, love the window. I think I'd put it on a hinge or a slider like a sunroof to make dust blow outs easier.
____________
BOINC <<--- credit whores, pedants, alien hunters

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 28593 - Posted: 18 Feb 2013 | 13:06:00 UTC

Now they're saying it's delayed until the 19th; here's a whole new batch of pictures.

http://videocardz.com/39618/nvidia-geforce-gtx-titan-pictured

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28598 - Posted: 18 Feb 2013 | 17:40:19 UTC

According to its specifications, it will be ~39.5% faster than a standard GTX 680, and it will probably be more power-efficient.
I've read that the supply of the GeForce Titan will be very limited (10,000 cards).
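That ~39.5% can be roughly sanity-checked from raw shader throughput (cores × clock). This is only a back-of-the-envelope sketch; the clock values below are my assumptions (published base clocks), not figures from this thread:

```python
# Back-of-the-envelope SP throughput comparison from published specs.
# Clock assumptions (base clocks, MHz) are mine, not from this thread.
titan = {"cores": 2688, "clock_mhz": 837}    # GTX Titan (GK110)
gtx680 = {"cores": 1536, "clock_mhz": 1006}  # GTX 680 (GK104)

def sp_gflops(card):
    # 2 FLOPs per core per cycle (fused multiply-add)
    return 2 * card["cores"] * card["clock_mhz"] / 1000

speedup = sp_gflops(titan) / sp_gflops(gtx680) - 1
print(f"Titan: {sp_gflops(titan):.0f} GFLOPS, "
      f"GTX 680: {sp_gflops(gtx680):.0f} GFLOPS, "
      f"~{speedup:.0%} faster on paper")
```

On base clocks this lands nearer 46%; assume the GTX 680's boost clock (~1058 MHz) instead and it drops to roughly 38%, bracketing the quoted ~39.5%.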

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28601 - Posted: 18 Feb 2013 | 21:41:53 UTC - in response to Message 28598.

At that price point 10k might actually be enough :D

MrS
____________
Scanning for our furry friends since Jan 2002

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28607 - Posted: 19 Feb 2013 | 1:31:29 UTC - in response to Message 28601.

Good looks and limited edition aren't worth that much to me, maybe in a car/truck but not a video card.
____________
BOINC <<--- credit whores, pedants, alien hunters

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28608 - Posted: 19 Feb 2013 | 6:27:27 UTC - in response to Message 28607.
Last modified: 19 Feb 2013 | 6:28:18 UTC

mostly agree, it's a component... specs matter the most. however sometimes it's worth it for peripherals (keyboard, mouse, things you actually SEE and TOUCH on a regular basis :P)

however for a gpu/cpu/etc that's just going to be outdated in a year or less, why pay the premium?
____________
XtremeSystems.org - #1 Team in GPUGrid

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28616 - Posted: 19 Feb 2013 | 18:49:32 UTC - in response to Message 28608.
Last modified: 19 Feb 2013 | 18:57:16 UTC

NVIDIA's GeForce GTX Titan, Part 1: Titan For Gaming, Titan For Compute
by Ryan Smith, anandtech.com

I guess disabling FP64 (running at 1/24) would be better for here, but that depends on the apps and WU's.

What if anything would Dynamic Parallelism, Hyper-Q and more Registers/Thread bring to this project?

I see it's CC3.5
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28619 - Posted: 19 Feb 2013 | 19:58:47 UTC - in response to Message 28616.

I guess disabling FP64 (running at 1/24) would be better for here, but that depends on the apps and WU's.

GPU-Grid uses exactly zero FP64, so disabling it won't hurt. And leaving FP64 at 1/24 (i.e. not disabling Turbo) will benefit GPU-Grid, depending on what other manual OC you might have in place instead. Crippling FP64 to get Turbo s*cks hard for people who want to run a DP backup project, though.

Generally I think nVidia is being stupid forcing this decision on us. Sure, the chip will consume more power if fully utilized... but they've got power consumption monitoring and temperature control in place, as well as fine-grained Turbo modes with voltage adjustment. I can't see any harm in leaving Turbo and FP64 at 1/3 active and just throttling down to base clock & voltage if really necessary. That sure beats starting at base clock & voltage regardless of the actual workload.

What if anything would Dynamic Parallelism, Hyper-Q and more Registers/Thread bring to this project?

More registers per thread avoid slowing down more complex code. However, if the code is written to perform well on regular Keplers (and older), then it should automatically stay out of the region where this would matter.

The other 2 features have to be explicitly programmed for, so they are not going to be used here unless all GPUs get them.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28622 - Posted: 20 Feb 2013 | 0:10:53 UTC - in response to Message 28619.

Unless the researchers can find something special hidden within this card I'm writing it off as a niche GPU; not for general users, gamers or 'normal' GPU crunchers. It's better suited to other things, mainly due to the price. While the Titan is essentially a Tesla for ~£830 (which will be excellent for some researchers), I cannot see GPUGrid needing 6GB of GDDR5, and I can't see the uptake of this card being high enough for GPUGrid to develop specifically for it, especially if there will only be 10,000 of them. If GPUGrid started down that route, the project would need about 7 different apps at any one time (probably based on CC). We really need to see what the performance actually is for GPUGrid (and other projects) before drawing any solid conclusions. That said, it's clear the price puts it in exclusive territory.

2688 was a little unexpected. Perhaps lesser (and more reasonably priced) versions will turn up (2496, 2304 and/or possibly 2112). If they don't then that suggests this is the end of the line for GK110. So this might be the launch of a different range of GPU's, or the end of a high end line. Either way I would be extremely surprised if NVidia are not already well into the design of a new supercomputer GPU.

If the Titan turns out to be 40% faster than a GTX680, for here, then it's still ~36% slower than a GTX690, which costs ~£750 (presently £80 less). At only 50W more (TDP) that would make the GTX690 cheaper to buy and more economical to run, in terms of performance per Watt. However, as the Titan really has two profiles (SP and DP) the actual performance per Watt would need to be assessed on a project by project basis.
At 250W I think it would be a tough ask to squeeze two such GPUs onto the one card, but if you dropped the CUDA core count and limited it to SP it might be more doable. More likely in a revamp - the GTX480 never emerged in such a format, but did turn up in the revamped GTX590.

The biggest advantage I see for the GTX Titan is the exhaust system. It would better facilitate 3 or 4 GPUs in the one case:
3 GTX Titans would be faster and cooler than 2 GTX690s (4 GK104 GPU cores), so better for crunching and gaming (so long as you are into £4K+ main system units).
While 4 GTX Titans would roughly match 3 GTX690s, the Titans' exhaust cooling would be much better. In fact 3 GTX690s is a bit impractical.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28623 - Posted: 20 Feb 2013 | 1:22:36 UTC - in response to Message 28622.

The biggest advantage I see for the GTX Titan is the exhaust system. It would better facilitate 3 or 4 GPU's in the one case:
3 GTX Titan's would be faster and cooler than 2 GTX690's (4 GK104 GPU cores), so better for crunching and gaming (so long as you are into £4K+ main system units).
While 4 GTX Titan's would roughly match 3 GTX690's, the Titan's exhaust cooling would be much better. In fact 3 GTX690's is a bit impractical.


What makes Titan's exhaust system better?

Why is 3 GTX690s impractical? Impractical from a cooling/exhaust perspective? If so why? (I plan on ordering a mobo and single GTX690 at month's end and plan on eventually adding 3 more GTX690 to the same mobo. If you think it's a bad idea from an exhaust/cooling perspective or any other perspective I'd appreciate hearing about it now before I make the initial purchase.)

____________
BOINC <<--- credit whores, pedants, alien hunters

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28632 - Posted: 20 Feb 2013 | 7:08:01 UTC - in response to Message 28623.

The GTX 690 exhausts inside the PC case (at least half of its air, anyway); most other designs (including the 680/Titan) exhaust outside the PC case. This only refers to the reference designs though; many 3rd-party manufacturers change the cooling to something completely different.

So if you have multiple 690s in one case, the exhaust will heat up the inside of the case and the other cards quite noticeably, and you would need a lot more airflow in and out of the PC case, whereas multiple 680s/Titans would exhaust out the rear of the case and less attention could be paid to case airflow.
____________
XtremeSystems.org - #1 Team in GPUGrid

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28642 - Posted: 20 Feb 2013 | 20:04:23 UTC

Sure, this card is a niche product. It's for people who can't (or don't want to) afford a Tesla, make a living from crunching on one, and either need massive FP64 or can't parallelize their task well. Or who just want an über-gaming rig and don't care about the money :p

Regarding the stock cooler vs. GTX690: the latter one exhausts lots of hot air to the case front, so you'll need a huge exhaust fan there and some other intakes, probably best in the side panel. Or run caseless if the crunching monster is in some distant room anyway.

MrS
____________
Scanning for our furry friends since Jan 2002

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28662 - Posted: 21 Feb 2013 | 18:31:15 UTC - in response to Message 28616.
Last modified: 21 Feb 2013 | 18:31:50 UTC

NVIDIA's GeForce GTX Titan, Part 1: Titan For Gaming, Titan For Compute
by Ryan Smith, anandtech.com

I guess disabling FP64 (running at 1/24) would be better for here, but that depends on the apps and WU's.

What if anything would Dynamic Parallelism, Hyper-Q and more Registers/Thread bring to this project?

I see it's CC3.5


NVIDIA’s GeForce GTX Titan Review, Part 2: Titan's Performance Unveiled - compute performance benchmarks. Well written by a CS PhD student with a specialization in parallel computing and GPUs.

Unrivaled compute performance here in both DP and SP.

IMHO, nVidia got wise with this card. I bet it flies off the shelves to people who want this kind of compute performance at a fraction of the cost of a Tesla.

If only my wallet were large enough...
____________

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28663 - Posted: 21 Feb 2013 | 18:34:10 UTC - in response to Message 28622.

Unless the researchers can find something special hidden within this card I'm writing it off as a niche GPU; not for general users, gamers and 'normal' GPU crunchers.

Agreed.

____________

Profile microchip
Avatar
Send message
Joined: 4 Sep 11
Posts: 110
Credit: 326,102,587
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28664 - Posted: 21 Feb 2013 | 18:54:16 UTC

http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled
____________

Team Belgium

Profile Gattorantolo [Ticino]
Avatar
Send message
Joined: 29 Dec 11
Posts: 44
Credit: 251,211,525
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwat
Message 28673 - Posted: 22 Feb 2013 | 17:03:20 UTC

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?
____________
Member of Boinc Italy.

mynis
Send message
Joined: 31 May 12
Posts: 8
Credit: 12,361,387
RAC: 0
Level
Pro
Scientific publications
watwatwat
Message 28676 - Posted: 22 Feb 2013 | 21:25:16 UTC

I might end up saving up and grabbing one for gaming. I have a single GTX 670 right now and it's not quite cutting it. Some of the games I play don't really scale very well with multiple GPUs, and upgrading to a 680 isn't worth the time or trouble. If I was patient I'd just wait for the new chips to come out in 2014 but that's not the case. Getting insane compute performance out of it is just an added bonus!

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28677 - Posted: 22 Feb 2013 | 22:10:23 UTC - in response to Message 28673.

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

Two Titans would be more expensive and do less work (in theory). The benefit of the Titan is its reference exhaust cooling system over the cooler used by the GTX690, which blows air out of the back of the case but also into the case.

A single GTX690 is fine, but multiple GTX690s would require a very large side-panel intake fan and exhaust from both the rear and front of the ATX case. This is probably doable for two GTX690 cards, but for 3 or 4 cards it's more difficult. Also, if you intend to use Linux you could in theory have 8 GPUs (four GTX690 cards), but in Windows I don't think you can. Basically, GPUs don't scale well; adding more of them requires greater skill and isn't something for the amateur cruncher.

If anyone is seriously considering 3 or more GTX690's you should primarily be thinking about cooling. Something like this case (with 4 side panel fans) could be used to blow cool air onto the GPU's, and then it could be drawn out from the rear and front. I'm sure this sort of case could handle two GTX690's but I would want to know the temperatures before adding a third, and would only add them one by one. I also like the one very large side panel fan on this case, but not the drive bays.

This open frame case looks very interesting, if you don't mind dusting.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28678 - Posted: 22 Feb 2013 | 22:54:04 UTC - in response to Message 28673.

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

From the performance per price point of view the GTX690 is better than a Titan, as BOINC loads scale pretty much perfectly with GPU count (except POEM). However, if you're running into limits with the amount of GPUs the Titans might be a better option. Not for 4 vs. 2 chips, though.

And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. Whether the power consumption will still justify running the card once it's in danger of becoming too slow for the bonus... who knows!

MrS
____________
Scanning for our furry friends since Jan 2002

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28680 - Posted: 23 Feb 2013 | 3:14:37 UTC - in response to Message 28673.
Last modified: 23 Feb 2013 | 3:18:15 UTC

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?


I was wondering the same thing and it's confusing for newbies like me when there are statements that refer to Titan's massive compute power as well as statements like "nVIDIA finally got it right with Titan". If I understand correctly, Titan has greater double-precision FP ability than a 690 and if that's true then one certainly can and perhaps should say things like "massive compute power" and "finally got it right". But when you consider that GPUgrid doesn't need double-precision FP then Titan's advantage doesn't mean much if all it's gonna do is crunch GPUgrid tasks.

As for exhaust problems on the 690... piece o' cake. I'm going to put four 690s on one mobo, if Linux, drivers and BOINC will permit, and show y'all how it's done, no fancy case required. Picture it: 4 GTX690s, all the same model, nicely lined up in a row, 4 exhaust ports perfectly lined up one above the other, a manifold made from a $6 heat-vent boot (unless I can find one in a scrap pile somewhere first) that fits nicely over all four exhaust ports and transitions into 1 collector connected to a suck fan. Once done it's gonna have the highest RAC here for a looooong time. In fact maybe I won't show y'all, because then you'll build one too.
____________
BOINC <<--- credit whores, pedants, alien hunters

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28694 - Posted: 23 Feb 2013 | 7:07:02 UTC - in response to Message 28678.

And one more point for the Titan: it will be able to crunch GPU-Grid tasks within the bonus time for longer. If the power consumption still allows to run the card when it's in danger of becoming too slow for the bonus.. who knows!
MrS

The GeForce GTX 285 was released just over 4 years ago. Until the 3.1 app was deprecated it could still manage to return tasks for the full bonus. Now, using the slower 4.2 app (slower just for the 200-series cards), it would depend on the WU. I expect Toni's tasks to return in time, but NOELIA's long tasks might not. They would still get the 25% bonus though... So probably good for ~4 years.

If I understand correctly, Titan has greater double-precision FP ability than a 690 and if that's true then one certainly can and perhaps should say things like "massive compute power" and "finally got it right". But when you consider that GPUgrid doesn't need double-precision FP then Titan's advantage doesn't mean much if all it's gonna do is crunch GPUgrid tasks.

Exactly. FP64 is not needed here and hasn't been in the past, so its FP64 compute benefit isn't applicable here.
I expect the card would still be faster at POEM than a GTX680 or GTX690, but the issue there is the CPU and PCIe over-usage, so you are never going to quite get the most out of the card.
It will probably shine at MW, but against the top ATI cards I don't think it's going to be anything special; it's just an expensive alternative.
If an FP64 fluid-dynamics CUDA-based project suddenly appeared then the Titan would be the 'bees knees'.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28697 - Posted: 23 Feb 2013 | 11:42:47 UTC

Another point I forgot yesterday: Titan can provide more registers per warp and might differ internally in cache sizes and such. I think this has manifested itself in compute performance 2 to 3 times that of a GTX680 in some benchmarks - much higher than the raw horsepower implies (Anandtech Compute Bench Part 1 and the next page). We cannot yet say whether anything like this is going to happen at GPU-Grid as well.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28699 - Posted: 23 Feb 2013 | 12:16:18 UTC - in response to Message 28697.

I think they would need to develop specifically towards finding improvements from the registers-per-warp increase. If it's likely that improvements can be gained from this I'm sure they will try, if they get one to test on. As the Titan is CC3.5, any finds could be used in the one app (as it can identify the card's compute capability). The issue I see is the price and limited availability. I can't see a big uptake of the card here, which suggests any such development would be a waste of time, but perhaps lesser versions of the card will appear, making it worthwhile.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 28700 - Posted: 23 Feb 2013 | 12:21:38 UTC - in response to Message 28697.
Last modified: 23 Feb 2013 | 13:03:47 UTC

We are now trying to get our hands on a Titan to optimize the application for it.

As usual it's very difficult to find them over here; if anyone is willing to donate one, please contact us.

In terms of performance, we are expecting a speed-up of 50% over a GTX680 for normal WUs. For large jobs, this could be close to 100%. ACEMD reduces its speed when the molecular system is large, but that shouldn't happen on a Titan thanks to the 6GB of memory.

Extra registers could also provide a further boost, but we don't know yet. Stay tuned.

gdf

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28705 - Posted: 23 Feb 2013 | 15:41:44 UTC

From the benchmarks posted I would expect the Titan's production/$ to be lower than some other solutions (660 TI, 650 TI, etc.).

Profile Gattorantolo [Ticino]
Avatar
Send message
Joined: 29 Dec 11
Posts: 44
Credit: 251,211,525
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwat
Message 28708 - Posted: 23 Feb 2013 | 17:14:41 UTC - in response to Message 28673.

What will be better...2xGTX690 (4 GPU total) or 2xGTX Titan?

With 2 GTX690s it is possible to crunch 4 WUs at the same time; with 2 GTX Titans, "only" 2 WUs at the same time...

____________
Member of Boinc Italy.

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28710 - Posted: 23 Feb 2013 | 18:07:12 UTC

CUDA Core Count win goes to the 2*690
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28712 - Posted: 23 Feb 2013 | 19:16:14 UTC - in response to Message 28700.

In terms of performance, we are expecting a speed-up of 50% over a gtx680 for normal wu. For large jobs, this could be close to 100% faster. ACEMD reduces the speed when the molecular system is large but not on a titan due to the 6GB of memory.

I'm missing a part of the picture here.
According to GPU-Z, a GPUGrid task on a GTX 670 uses 384~534MB (depending on the task type) of the 2GB GPU memory. Could you please explain why three times more memory on the Titan would help, when a task uses only a quarter of the 2GB on a GTX 670/680?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 28719 - Posted: 24 Feb 2013 | 0:22:40 UTC - in response to Message 28718.

acemd uses two types of algorithm: one faster and more memory-hungry, and one slower that uses less memory. The decision is made dynamically depending on the GPU memory you have available.

Most of the simulations these days are small enough to use the faster algorithm with 2GB of memory, but some might still end up higher.
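The selection logic described above can be pictured as a simple runtime check. This is a hypothetical sketch: the function name, parameter names, and threshold handling are mine, not ACEMD's actual code:

```python
# Hypothetical sketch of memory-based algorithm selection, as described above.
# Names are illustrative only, not ACEMD's real internals.

def pick_algorithm(free_gpu_mem_mb: int, fast_algo_mem_mb: int) -> str:
    """Choose the faster, memory-expensive path only when it fits in GPU memory."""
    if fast_algo_mem_mb <= free_gpu_mem_mb:
        return "fast"    # faster but memory-expensive algorithm
    return "compact"     # slower algorithm that uses less memory

# A large system that overflows a 2GB card would still fit on a 6GB Titan:
print(pick_algorithm(free_gpu_mem_mb=2048, fast_algo_mem_mb=3000))  # compact
print(pick_algorithm(free_gpu_mem_mb=6144, fast_algo_mem_mb=3000))  # fast
```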

Anyway, as soon as we get one, we will report the performance.

gdf

Profile Mumak
Avatar
Send message
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 28759 - Posted: 25 Feb 2013 | 12:56:17 UTC

Tom's Hardware article about TITAN says:

As we know, though, Nvidia limits those units to 1/8 clock rates by default—not to be nefarious, but to create more thermal headroom for higher clock rates. That’s why, if you want the card’s full compute potential, you need to toggle a driver switch. Doing this, in my experience so far, basically disables GPU Boost, limiting your games to the card’s base clock rate.


Does anyone know which driver switch that is? Has anyone tried it, and what was the result?

gianni
Send message
Joined: 8 Feb 13
Posts: 5
Credit: 6,750
RAC: 0
Level

Scientific publications
wat
Message 28761 - Posted: 25 Feb 2013 | 14:56:55 UTC - in response to Message 28759.

Does anyone know what driver switch is that? Has anyone tried that and what would be the result ?


On Linux systems with K20s, these settings are controlled using the "nvidia-smi" program. Hopefully the same will be true for the Titans.
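For reference, on the K20 the knob in question is the "GPU Operation Mode". A sketch of how it is queried and set with nvidia-smi (flag names taken from K20-era nvidia-smi documentation; whether the Titan exposes the same interface is exactly the open question here):

```shell
# Query the current and pending GPU Operation Mode on GPU 0 (Tesla K20 era):
nvidia-smi -i 0 --query-gpu=gom.current,gom.pending --format=csv

# Switch between full-speed FP64 and the low-DP mode
# (the pending mode takes effect only after a reboot):
nvidia-smi -i 0 --gom=ALL_ON   # full FP64 throughput, less clock headroom
nvidia-smi -i 0 --gom=LOW_DP   # reduced FP64, favours higher clocks
```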

MJH

Profile Mumak
Avatar
Send message
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 28762 - Posted: 25 Feb 2013 | 15:24:30 UTC - in response to Message 28761.
Last modified: 25 Feb 2013 | 15:26:28 UTC

Thanks.
So this works on workstation models only?
Do you know how to change this setting using that tool?

Does anyone know what driver switch is that? Has anyone tried that and what would be the result ?


On Linux systems with K20s, these settings are controlled using the "nvidia-smi" program. Hopefully that will also be so for the Titans.

MJH

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28767 - Posted: 25 Feb 2013 | 18:58:18 UTC

It's a simple driver option in the control panel on Windows; it's only shown if a Titan is present in the system.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28772 - Posted: 25 Feb 2013 | 19:56:45 UTC - in response to Message 28767.

It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system.

So that enhancement is disabled in all other consumer cards?

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28775 - Posted: 25 Feb 2013 | 21:41:27 UTC - in response to Message 28772.
Last modified: 25 Feb 2013 | 22:00:17 UTC

It's a simple driver option in the control panel in windows. It's only being shown if a Titan is present in the system.

So that enhancement is disabled in all other consumer cards?

The rest of the consumer cards are GK104 (which has only low FP64 capability: 1/24th of FP32, with 8 FP64 units per SMX block), while the Titan is GK110 (up to 1/3rd of FP32, with 64 FP64 units per SMX block). On the Titan, FP64 defaults to a low level (the FP64 units run at 1/8th speed), and to run FP64 faster you just crank it up. As this is controlled by NVIDIA's System Management Interface (nvidia-smi), apps could turn it up and down. I wonder if this is stepped? I think the GK104 cards just perform FP64 at a speed which rises and falls with the rest of the GPU.
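Those per-SMX unit counts translate into very different theoretical FP64 rates. A rough sketch (the clock values are my assumptions from published base-clock specs, not figures from this thread):

```python
# Rough theoretical FP64 throughput from per-SMX FP64 unit counts.
# Clock values are assumed base clocks, for illustration only.
def fp64_gflops(smx_blocks: int, fp64_units_per_smx: int, clock_mhz: float) -> float:
    # 2 FLOPs per unit per cycle (fused multiply-add)
    return 2 * smx_blocks * fp64_units_per_smx * clock_mhz / 1000

gtx680 = fp64_gflops(smx_blocks=8,  fp64_units_per_smx=8,  clock_mhz=1006)  # GK104
titan  = fp64_gflops(smx_blocks=14, fp64_units_per_smx=64, clock_mhz=837)   # GK110, FP64 uncapped
print(f"GTX 680: ~{gtx680:.0f} GFLOPS FP64, Titan: ~{titan:.0f} GFLOPS FP64")
```

That's roughly a tenfold gap with the Titan's FP64 uncapped, which is why the driver switch matters so much for DP projects.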

Profile Mumak
Avatar
Send message
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Level
Leu
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 28776 - Posted: 25 Feb 2013 | 22:46:01 UTC - in response to Message 28775.


The rest of the consumer cards are GK104 (only have low FP64 ability, 1/24th of FP32; 8 FP64 units per SMX block) while Titan is GK110 (up to 1/3rd FP32; 64 FP64 units per SMX block). For Titan, FP64 is set by default to a low level (1/8th speed) and to use FP64 faster you just crank it up. As this is controlled by NVIDIA's System Management Interface (nvidia-smi), apps could turn it up and down. I wonder if this is stepped? I think the GK104 cards just perform FP64 at a speed which increases and decreases with the rest of the GPU.


Thanks, that makes it clear.

Bedrich Hajek
Send message
Joined: 28 Mar 09
Posts: 467
Credit: 8,186,021,966
RAC: 10,556,774
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28780 - Posted: 26 Feb 2013 | 1:38:44 UTC - in response to Message 28719.

acemd uses two types of algorithm: one faster and more memory-hungry, and one slower that uses less memory. The decision is made dynamically depending on the GPU memory you have available.

Most of the simulations these days are small enough to use the faster algorithm with 2GB of memory, but some might still end up higher.

Anyway, as soon as we get one, we will report the performance.

gdf


We already have a rating system for video cards that is broken down into 4 categories: most recommended, highly recommended, recommended and not recommended. So, shouldn't we have the same for video memory size?

See example:

most recommended : 4 GB +
highly recommended : 2 to 4 GB
recommended: 1 to 2 GB
not recommended : less than 1 GB
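A minimal sketch of how such a VRAM-based rating could be expressed (the GB thresholds are Bedrich's example numbers, not official project figures):

```python
def rate_vram(gb):
    """Map card memory (GB) to the example recommendation tiers above."""
    if gb >= 4:
        return "most recommended"
    if gb >= 2:
        return "highly recommended"
    if gb >= 1:
        return "recommended"
    return "not recommended"

print(rate_vram(6))      # a 6GB Titan: most recommended
print(rate_vram(0.768))  # a 768MB GTX 460: not recommended
```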


ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28784 - Posted: 26 Feb 2013 | 8:58:39 UTC - in response to Message 28772.

So that enhancement is disabled in all other consumer cards?

I'd rather say "this functionality is not present in other consumer cards".

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28785 - Posted: 26 Feb 2013 | 9:04:36 UTC - in response to Message 28780.

We already have a rating system for video cards that is broken down into 4 categories: most recommended, highly recommended, recommended and not recommended. So, shouldn't we have the same for video memory size?

most recommended : 4 GB +
highly recommended : 2 to 4 GB
recommended: 1 to 2 GB
not recommended : less than 1 GB

That is a good idea. I'll suggest the following classification:

- sufficient to run all current tasks at maximum speed
- sufficient to run short runs at maximum speed
- sufficient to run short runs
- insufficient

These boundaries would change with apps and WUs and we'd have a hard time figuring them out, so the developers would have to take care of updating this table.

MrS
____________
Scanning for our furry friends since Jan 2002

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28789 - Posted: 26 Feb 2013 | 11:25:34 UTC - in response to Message 28719.

acemd uses two types of algorithms: one that is faster but more memory-hungry, and one that is slower but uses less memory. The decision is made dynamically, depending on the GPU memory you have available.

These days most of the simulations are small enough to use the faster algorithm with 2GB of memory, but some might still need more.

Anyway, as soon as we get one, we will report the performance.

gdf


I can give you a remote access to my gtx titan if it helps.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28806 - Posted: 26 Feb 2013 | 19:50:51 UTC - in response to Message 28785.

We already have a rating system for video cards that is broken down into 4 categories: most recommended, highly recommended, recommended and not recommended. So, shouldn't we have the same for video memory size?

most recommended : 4 GB +
highly recommended : 2 to 4 GB
recommended: 1 to 2 GB
not recommended : less than 1 GB

That is a good idea. I'll suggest the following classification:

- sufficient to run all current tasks at maximum speed
- sufficient to run short runs at maximum speed
- sufficient to run short runs
- insufficient

Yikes, it looks like this project might be getting a little rich for my pocketbook. Or are you saying that we'd need 4GB cards to do the long runs? Maybe you're not making a 1-to-1 relationship between the 2 classifications above. Right now the long run TONI WUs do OK on the GTX 460 768MB cards (about 12.5 hours). The older long run NATHANs were slow, but the last one I received was smaller and ran fine. The 650 Ti 1GB card seems to run everything fine currently.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28811 - Posted: 26 Feb 2013 | 20:41:31 UTC - in response to Message 28806.

Sorry if this wasn't clear: Bedrich gave an example of what he had in mind. I proposed different categories, also 4 by accident, totally unrelated to the numbers he wrote down for his categories. I didn't give any numbers because I don't know them, but I'm sure they'd be smaller than what Bedrich had in mind.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28822 - Posted: 26 Feb 2013 | 23:30:27 UTC - in response to Message 28811.

The analyst and cruncher:
In theory you could run one app across 4 Titans and use up to 24GB, and at the other extreme you could run more than one task on a 1GB (or less) card. It's really down to the research parameters. While I expect some in-house usage to be more exploratory, the live apps need to accommodate the main resources (our GPUs). For this reason the researchers work out what cards are available and how many are attached... With that in mind they develop the apps and perform research that utilizes the majority of what's available.

If there are enough 3GB cards available then perhaps some research will be designed to utilize from 2GB up to 3GB, and the tasks will prefer 3GB cards. Just how much they will prefer 3GB is a question in itself, and an unknown. The performance difference between a 2GB card and a 3GB card might be large or negligible, and the difference in price plus running cost may or may not make the 3GB card the better choice. If there are enough 4GB or 6GB cards, ditto...

There is a ~£30 difference between the GTX660Ti 2GB and 3GB: ~£180 vs ~£210. It's unknown whether tasks would be ~15% faster (the price difference) just because of the extra video memory. More GDDR might also require more power, which would mean slightly less power available to the rest of the card (capped at 150W), so the GPU speed might be a bit lower. So the real difference to make up might be 20% from the GPU-only point of view. Add the system overheads and price tag, and the perspective might change again.
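The price gap above works out slightly differently depending on which card you take as the baseline (a quick sketch with skgiven's ~£180/£210 figures; the "~15%" in the post sits between the two):

```python
price_2gb, price_3gb = 180.0, 210.0
diff = price_3gb - price_2gb              # the ~£30 gap

pct_vs_2gb = diff / price_2gb * 100       # extra cost relative to the 2GB card
pct_vs_3gb = diff / price_3gb * 100       # the same gap as a share of the 3GB price

print(round(pct_vs_2gb, 1))  # 16.7
print(round(pct_vs_3gb, 1))  # 14.3
```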

The Scientist:
I'm all for using more GDDR5; it would expand the scientific boundaries (the scientific goal) and we could be looking at larger molecular interactions, or the same interactions in more detail. This expansion of science obviously outweighs the project's performance in terms of steps, or the performance differences between 2GB and 3GB. It should thus be a key project goal.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28847 - Posted: 27 Feb 2013 | 22:53:57 UTC

First test ends with error for short and long 4.2

<message>
- exit code -9 (0xfffffff7)
</message>
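For what it's worth, the two numbers in that message are the same value: 0xfffffff7 is just -9 viewed as an unsigned 32-bit integer. A quick check:

```python
# Interpret a 32-bit unsigned exit status as a signed value (two's complement).
def to_signed32(u):
    return u - 2**32 if u >= 2**31 else u

print(to_signed32(0xFFFFFFF7))  # -9
```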

Operator
Send message
Joined: 15 May 11
Posts: 108
Credit: 297,176,099
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28849 - Posted: 28 Feb 2013 | 0:14:19 UTC - in response to Message 28847.

OD !!

You're saying WUs are not running on your Titan?

Mine's in the mail.

This doesn't bode well....

Operator
____________

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28858 - Posted: 28 Feb 2013 | 9:20:25 UTC - in response to Message 28849.

Yes. Maybe the application has to be updated for Titan.

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 28860 - Posted: 28 Feb 2013 | 9:41:25 UTC - in response to Message 28858.

It's possible that it won't run until we get one here.

gdf

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Scientific publications
watwat
Message 28863 - Posted: 28 Feb 2013 | 10:12:16 UTC - in response to Message 28847.

Ah, -9, my favourite error message. It's an ongoing mystery where this comes from, since there's nothing in the application source that causes a termination with that exit code.

HA-SOFT: Since your machine with the Titan seems to be new (wasn't crunching before with an older GPU), my guess is that there is something about its setup that is causing a problem for the app (or possibly the BOINC client) making it immediately abort. I have no idea what that might be, but if you are able to get it working by playing about with the configuration, please do let me know.

MJH

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28866 - Posted: 28 Feb 2013 | 11:28:42 UTC - in response to Message 28863.
Last modified: 28 Feb 2013 | 12:15:22 UTC

I have tested 7.0.28, 7.0.52, project reset, no luck.

boinc log:

GPUGRID | [cpu_sched_debug] Ann145_r2-TONI_AGGd8-13-100-RND9865_0 sched state 0 next 2 task state 0
28.2.2013 13:10:10 | GPUGRID | Starting task Ann145_r2-TONI_AGGd8-13-100-RND9865_0 using acemdlong version 618 (cuda42) in slot 10
28.2.2013 13:10:10 | GPUGRID | [css] running Ann145_r2-TONI_AGGd8-13-100-RND9865_0 (0.81 CPUs + 1 NVIDIA GPU)
28.2.2013 13:10:10 | | [cpu_sched_debug] enforce_schedule: end
28.2.2013 13:10:10 | | [network_status] status: online
28.2.2013 13:10:10 | | [slot] removed file slots/10/init_data.xml
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-LICENSE to slots/10/LICENSE
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-COPYRIGHT to slots/10/COPYRIGHT
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-pdb_file to slots/10/structure.pdb
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-psf_file to slots/10/structure.psf
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-par_file to slots/10/parameters
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-conf_file_enc to slots/10/input
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_0 to slots/10/progress.log
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_1 to slots/10/output.coor
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_2 to slots/10/output.vel
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_3 to slots/10/output.idx
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_4 to slots/10/output.dcd
28.2.2013 13:10:10 | GPUGRID | [slot] linked ../../projects/www.gpugrid.net/Ann145_r2-TONI_AGGd8-13-100-RND9865_0_8 to slots/10/output.vel.dcd
28.2.2013 13:10:11 | GPUGRID | [sched_op] Deferring communication for 2 min 56 sec
28.2.2013 13:10:11 | GPUGRID | [sched_op] Reason: Unrecoverable error for task Ann145_r2-TONI_AGGd8-13-100-RND9865_0
28.2.2013 13:10:11 | | [slot] cleaning out slots/10: handle_exited_app()
28.2.2013 13:10:11 | | [slot] removed file slots/10/acemd.2865P.exe
28.2.2013 13:10:11 | | [slot] removed file slots/10/boinc_lockfile
28.2.2013 13:10:11 | | [slot] removed file slots/10/COPYRIGHT
28.2.2013 13:10:11 | | [slot] removed file slots/10/cudart32_42_9.dll
28.2.2013 13:10:11 | | [slot] removed file slots/10/cufft32_42_9.dll
28.2.2013 13:10:11 | | [slot] removed file slots/10/init_data.xml
28.2.2013 13:10:11 | | [slot] removed file slots/10/input
28.2.2013 13:10:11 | | [slot] removed file slots/10/input.coor
28.2.2013 13:10:11 | | [slot] removed file slots/10/input.idx
28.2.2013 13:10:11 | | [slot] removed file slots/10/input.vel
28.2.2013 13:10:11 | | [slot] removed file slots/10/LICENSE
28.2.2013 13:10:11 | | [slot] removed file slots/10/META_INP
28.2.2013 13:10:11 | | [slot] removed file slots/10/output.coor
28.2.2013 13:10:11 | | [slot] removed file slots/10/output.dcd
28.2.2013 13:10:11 | | [slot] removed file slots/10/output.idx
28.2.2013 13:10:11 | | [slot] removed file slots/10/output.vel
28.2.2013 13:10:11 | | [slot] removed file slots/10/output.vel.dcd
28.2.2013 13:10:11 | | [slot] removed file slots/10/parameters
28.2.2013 13:10:11 | | [slot] removed file slots/10/progress.log
28.2.2013 13:10:11 | | [slot] removed file slots/10/stderr.txt
28.2.2013 13:10:11 | | [slot] removed file slots/10/structure.pdb
28.2.2013 13:10:11 | | [slot] removed file slots/10/structure.psf
28.2.2013 13:10:11 | | [slot] removed file slots/10/tcl85.dll
28.2.2013 13:10:11 | | [cpu_sched_debug] Request CPU reschedule: application exited
28.2.2013 13:10:11 | GPUGRID | Computation for task Ann145_r2-TONI_AGGd8-13-100-RND9865_0 finished

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28873 - Posted: 28 Feb 2013 | 18:02:54 UTC - in response to Message 28847.

First test ends with error for short and long 4.2
<message>
- exit code -9 (0xfffffff7)
</message>

Ouch, bleeding edge strikes again. Sorry to hear it and hope there's a resolution soon.

nucleon
Send message
Joined: 18 Dec 11
Posts: 10
Credit: 172,348,621
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwat
Message 28881 - Posted: 28 Feb 2013 | 21:08:21 UTC

Installed my titan last night - no luck with gpugrid.

I loaded up GPU-Z and the CUDA box isn't ticked. Sounds like no CUDA drivers yet.


-- Craig

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28883 - Posted: 28 Feb 2013 | 21:15:59 UTC - in response to Message 28881.

314.09 is CUDA 5 capable.

Are you using remote desktop?

nucleon
Send message
Joined: 18 Dec 11
Posts: 10
Credit: 172,348,621
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwat
Message 28889 - Posted: 1 Mar 2013 | 8:18:00 UTC - in response to Message 28883.

The Titan card is installed in a machine with a GTX580. I wonder if that's the issue. Can I embed a screenshot in this forum?

Not using remote desktop.

-- Craig

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28891 - Posted: 1 Mar 2013 | 8:39:18 UTC - in response to Message 28889.

This may be the case. Driver 314.09 is currently the only one that supports the Titan; older drivers do not.
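A hedged sketch of the check implied here: compare the installed driver against 314.09 as a minimum (the parsing is illustrative; the version string would come from e.g. the driver control panel or `nvidia-smi` output):

```python
def parse_version(s):
    """Turn a dotted driver version like '314.09' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def supports_titan(driver_version, minimum="314.09"):
    # Per the thread, Titan needs at least driver 314.09; older drivers lack support.
    return parse_version(driver_version) >= parse_version(minimum)

print(supports_titan("314.09"))  # True
print(supports_titan("310.90"))  # False
```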

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28892 - Posted: 1 Mar 2013 | 8:40:16 UTC - in response to Message 28889.

Can I embed a screenshot in this forum?


If you don't have your own HTML server or a website such as most ISPs include with their service, you can upload the image to a free image-hosting service like Photobucket. They will give you a URL to the image. Then put the URL in a post here. You can embed it using BBCode tags, which are similar to HTML tags. While composing the message, find the link that says "Use BBCode tags to format your text" to the left of the edit box and up a little bit to get a summary of what the various tags do. If you can't make the BBCode tags work then just post the URL; we can copy 'n' paste it into a browser address bar.
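For example, once the host gives you a URL, embedding the screenshot looks like this (the URL below is just a placeholder for whatever the image host gives you):

```
[img]http://example.com/your-screenshot.png[/img]
```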


____________
BOINC <<--- credit whores, pedants, alien hunters

Operator
Send message
Joined: 15 May 11
Posts: 108
Credit: 297,176,099
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28903 - Posted: 1 Mar 2013 | 15:03:24 UTC - in response to Message 28889.
Last modified: 1 Mar 2013 | 15:04:06 UTC

nucleon;

I will be installing my Titan in the same machine my Tesla is in.

I have already downloaded the Titan specific driver (314.09) from Nvidia.

Did you manage to get yours working along with your GTX 580?

Operator
____________

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Scientific publications
watwat
Message 28904 - Posted: 1 Mar 2013 | 17:41:27 UTC - in response to Message 28903.

We need to refresh the app for Titan. Probably get this done on Monday.

MJH

nucleon
Send message
Joined: 18 Dec 11
Posts: 10
Credit: 172,348,621
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwat
Message 28907 - Posted: 2 Mar 2013 | 3:54:57 UTC - in response to Message 28904.

We need to refresh the app for Titan. Probably get this done on Monday.

MJH


Cheers, I think that's the problem now.

It also seems there's a possible bug in GPU-Z: if I set the Titan as primary, the CUDA box is checked; if it's the secondary GPU, CUDA is deselected, even though other CUDA apps work.

Looks like my issue. Oops.

-- Craig

Profile anthropisches
Send message
Joined: 9 Nov 08
Posts: 1
Credit: 319,663,947
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28997 - Posted: 6 Mar 2013 | 6:55:43 UTC
Last modified: 6 Mar 2013 | 6:56:22 UTC

Hi,

Is there any news on TITAN? My GPU errors out on GPUGRID (long and short queue).

Mathi

HA-SOFT, s.r.o.
Send message
Joined: 3 Oct 11
Posts: 100
Credit: 5,879,292,399
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29012 - Posted: 6 Mar 2013 | 15:55:54 UTC - in response to Message 28997.

Hi,

Is there any news on TITAN? My GPU errors out on GPUGRID (long and short queue).

Mathi


I think MJH is working on an update.

Operator
Send message
Joined: 15 May 11
Posts: 108
Credit: 297,176,099
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29023 - Posted: 7 Mar 2013 | 2:22:35 UTC - in response to Message 29012.
Last modified: 7 Mar 2013 | 2:23:10 UTC

I installed my new Titan this evening and every single WU is failing within 20 seconds (all NOELIAs).

This was with the Tesla in the same machine. (ID 146106).

So now it's just the Titan by itself and still the same result.

I'd hate to have to put the older cards back in....

Operator
____________

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 29026 - Posted: 7 Mar 2013 | 3:12:26 UTC - in response to Message 29023.

I installed my new Titan this evening and every single WU is failing within 20 seconds (all NOELIAs).

This was with the Tesla in the same machine. (ID 146106).

So now it's just the Titan by itself and still the same result.

I'd hate to have to put the older cards back in....

Operator


You already knew that was going to happen from your previous posts and others who rushed out and bought the card. Why didn't you buy a GTX690 instead? It has more CUDA cores, costs the same, and works right out of the box.

Operator
Send message
Joined: 15 May 11
Posts: 108
Credit: 297,176,099
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29027 - Posted: 7 Mar 2013 | 3:42:19 UTC - in response to Message 29026.

I thought this card would run cooler than a GTX 690 and didn't want to spend the extra money to water-cool a card that will be on its way out in less than a year.

I already did that once...

I'm sure it won't be that long before the Titan problem is sorted, and I can put my older cards back in if it comes to it.

Operator
____________

Post to thread

Message boards : Graphics cards (GPUs) : GeForce GTX Titan launching the 18th
