Message boards : Graphics cards (GPUs) : 750TI-650TI Combo on Linux

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36841 - Posted: 15 May 2014 | 13:30:33 UTC - in response to Message 36840.

There are two slots between the two cards (one small PCIE and one PCI), so I think there's enough room for air. The side fan gets to be in a somewhat strange place: its center meets the lower card. This seemed awkward at first, but I'm thinking it's probably better, since both cards' front sides (bearing their fans) are directly in front of the side-fan's blades, theoretically getting air flow exactly where they need it.

In the end, I believe the factory-OCed 750 is just plain hotter than the stock-clocked 650. If the 650 were OCed as well, it would definitely run hotter, although maybe not as hot as the 750. Also, I think ASUS's choice of heatsink for the 750 was a fail, the card clearly needs a beefier heatsink. No wonder the Gigabyte 750 I was also looking at has the Windforce heat-piped heatsink!
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 36843 - Posted: 16 May 2014 | 8:15:07 UTC - in response to Message 36841.
Last modified: 16 May 2014 | 8:29:45 UTC

The Maxwells are not superscalar, so here they would be somewhat better utilized (and hotter) than a theoretical superscalar Maxwell.
The fans on some small cards are poor, barely adequate.
Most small cards are not designed to be used alongside other cards. Neither the GTX750Ti nor the GTX650Ti is SLI capable, so they are not designed to scale well.
You can, and should, use Coolbits to control fan speed.
If you are lazy (like me), put the hottest card in the top slot and just configure fan control for that card :)
Most GTX750Tis come with a 6-pin power connector, though only 25W can be drawn through it (that means power is only coming in through 2 pins). Even a molex converter would probably suffice.
While some boards cannot supply the full 75W over PCIE, that tends to affect the 3rd or 4th PCIE slot, not the first.
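For reference, enabling Coolbits on Linux is an X server configuration change. A sketch of the relevant xorg.conf fragment (the Identifier name is arbitrary; the bit value that unlocks fan control varies across driver generations, so check your driver's README):

```
# /etc/X11/xorg.conf -- Device section sketch, assuming one NVidia GPU.
# Option value 4 enables manual fan control on drivers of this era;
# restart X after editing.
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "Coolbits" "4"
EndSection
```

After X restarts, the fan-control attributes become writable through nvidia-settings (the exact attribute names differ between driver versions, so query them on your own system first).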
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1386
Credit: 3,479,133,183
RAC: 450,087
Message 36846 - Posted: 16 May 2014 | 9:26:42 UTC - in response to Message 36843.

I got a couple of Gainward 750 Ti 'Golden Sample' - rated at 1281 MHz boost, with no external power needed. One is in host 45218.

I believe those cards which have a 6-pin connector primarily use it to power the fans, not the main graphics chip.

Sorry Stefan, I was already drafting when you posted. Feel free to move the post.

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36848 - Posted: 16 May 2014 | 10:12:07 UTC

Continuing the discussion here from the "Number Crunching / New SANTIs 100% fail rate" thread.

skgiven wrote:
The Maxwells are not superscalar, so here they would be somewhat better utilized (and hotter) than a theoretical superscalar Maxwell.
The fans on some small cards are poor, barely adequate.
Most small cards are not designed to be used alongside other cards. Neither the GTX750Ti nor the GTX650Ti is SLI capable, so they are not designed to scale well.
You can, and should, use Coolbits to control fan speed.
If you are lazy (like me), put the hottest card in the top slot and just configure fan control for that card :)
Most GTX750Tis come with a 6-pin power connector, though only 25W can be drawn through it (that means power is only coming in through 2 pins). Even a molex converter would probably suffice.
While some boards cannot supply the full 75W over PCIE, that tends to affect the 3rd or 4th PCIE slot, not the first.


Both my cards are from ASUS and they have a better-than-low-end large-ish heatsink with two fans. ASUS advertises both better cooling and quieter operation (of course...) and one year crunching full-time with my 650 leaves me no room for complaining. Even during the hot Greek summer in an old box with no side fan (although otherwise built for good airflow), the 650 barely touched 70C.

The direct successor 750 TI card uses the exact same heatsink (by all appearances), but it seems to operate ~10C hotter when crunching the same type of tasks.

Anyway, after a couple of days crunching on both cards, I've had 7 failed tasks and 2 that stalled and that I aborted myself. Of the 7 that failed, 3 did so with the message below:
SWAN : FATAL : Cuda driver error 715 in file 'swanlibnv2.cpp' in line 1963.
# SWAN swan_assert -57

The other 4 failed with:
The simulation has become unstable. Terminating to avoid lock-up


The 2 tasks that stalled were of the GERARD_A2ART4E_adapt1 and GERARD_A2ART4E_adapt2 types. The symptom was the GPU temperature falling to almost idle levels and completion estimations rocketing.

I am suspecting the driver. The NVidia site said that I had to use 334.21 with the 750TI, so I updated to that. I have now downgraded to 331.49, which I've been using without issue for some time now (almost since it was released) and the 750TI seems to be working with it just fine! So I will let it work for a couple of days with that and see what happens.
____________

mikey
Joined: 2 Jan 09
Posts: 286
Credit: 567,463,276
RAC: 65,018
Message 36852 - Posted: 16 May 2014 | 11:26:22 UTC - in response to Message 36848.


I am suspecting the driver. The NVidia site said that I had to use 334.21 with the 750TI, so I updated to that. I have now downgraded to 331.49, which I've been using without issue for some time now (almost since it was released) and the 750TI seems to be working with it just fine! So I will let it work for a couple of days with that and see what happens.


The 'company' ALWAYS wants you to use the latest and greatest thing they just put their money into developing; whether that is best for users is a different story. I have several 760's and am using version 327.23 of the drivers at PrimeGrid. They work for me and I am not getting any errors. I know some workunits have certain driver requirements and that needs to be taken into account. But as long as it is working I rarely upgrade, unless I have to for the workunits or because someone comes in and says driver x is much faster at crunching.

Now, I also know some people game with their GPUs while I do not. Gamers can often use the latest and greatest drivers as it makes the game better, i.e. they don't die as quickly in the shoot-em-up games, or the graphics are MUCH better in other games. Sometimes you have to compromise between the two; as I said, I don't game, so I am good using the older ones that work for me.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 36853 - Posted: 16 May 2014 | 12:15:35 UTC - in response to Message 36846.

I got a couple of Gainward 750 Ti 'Golden Sample' - rated at 1281 MHz boost, with no external power needed. One is in host 45218.

I believe those cards which have a 6-pin connector primarily use it to power the fans, not the main graphics chip.


With a TDP of 60W the cards still have 20% headroom before they reach the 75W that is normally supported by a PCIE slot, so having power cables really isn't needed. It's more of a gimmick to make people think the card can do something it can't.
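The headroom arithmetic above can be checked directly (numbers from the post: 60W reference TDP, 75W slot budget):

```python
# Slot power budget vs. card TDP, using the figures quoted in the post.
SLOT_LIMIT_W = 75   # power a PCIe x16 slot is normally specced to supply
CARD_TDP_W = 60     # GTX 750 Ti reference TDP

headroom_w = SLOT_LIMIT_W - CARD_TDP_W        # spare watts before the slot limit
headroom_frac = headroom_w / SLOT_LIMIT_W     # as a fraction of the slot budget

print(headroom_w, headroom_frac)  # 15 0.2, i.e. the 20% headroom mentioned
```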

That said, it does make some sense to take the fan power off the card and let the PSU power the fans directly - I've had fans fail on ~5 small cards (without 6-pin power connectors) over the years, and replacing the fan only worked once. Mostly GT240's (69W TDP), so less room to play with. I was able to mount small fans powered off the mainboard or Molex cables and use them for a while.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36854 - Posted: 16 May 2014 | 15:49:55 UTC - in response to Message 36852.

mikey said:
The 'company' ALWAYS wants you to use the latest and greatest thing they just put their money into developing; whether that is best for users is a different story. I have several 760's and am using version 327.23 of the drivers at PrimeGrid. They work for me and I am not getting any errors. I know some workunits have certain driver requirements and that needs to be taken into account. But as long as it is working I rarely upgrade, unless I have to for the workunits or because someone comes in and says driver x is much faster at crunching.

I have the same mindset with drivers, I only upgrade when it's needed. But hey, when the manufacturer says you need a newer driver to support a new card, the logical thing to do is to use that driver.

Now, in my relatively short time GPU crunching (a bit over a year) I have been bitten by NVidia drivers once or twice, that's why I decided to downgrade and test. The good thing is, the 750TI seems to work just fine with the "old" driver and judging by the almost half-crunched SDOERR_BARNA2, the performance is the same. I hope it's stable as well!

skgiven said:
With a TDP of 60W the cards still have 20% headroom before they reach the 75W that is normally supported by a PCIE slot, so having power cables really isn't needed. It's more of a gimmick to make people think the card can do something it can't.

Well, ASUS specifies a power consumption of "up to 150W", requiring a 6-pin power cable, which did seem strange to me with the GPU rated at just 60W by NVidia.
____________

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36879 - Posted: 20 May 2014 | 12:48:39 UTC

I went ahead and swapped the cards' positions, placing the 750 in the lower slot and the 650 in the upper slot. Temperature-wise, things are much better now, with values much closer to each other: a 5-6C difference with the 650 being hotter, versus a 10-11C difference with the 750 being hotter. The card in the upper slot is always the hotter one. While the two cards' coolers appear to be identical, in practice the 650's is much more efficient than the 750's!

Neither the position swap nor the driver change seems to have eliminated the errors though! :( Each and every GERARD_A2ART4E I get on the 750 stalls, each and every SANTI_marsalWTbound I get on the 750 fails, and even some of the new NOELIA_BI tasks fail on it! :((

I think something must be wrong with the 750Ti-Linux64bit combination (although I've seen some tasks fail on 750s on Windows as well). Is it possible for someone from the project to take a look into this? I'll gladly provide log files etc to assist! I think it's pretty important to resolve this, as this is one of the first Maxwells (the GPU type, not my card) being used on GPUGRID and they won't be getting any fewer in the future!
____________

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 36884 - Posted: 21 May 2014 | 16:53:43 UTC - in response to Message 36879.
Last modified: 21 May 2014 | 16:57:02 UTC

I think something must be wrong with the 750Ti-Linux64bit combination (although I've seen some tasks fail on 750s on Windows as well). Is it possible for someone from the project to take a look into this? I'll gladly provide log files etc to assist! I think it's pretty important to resolve this, as this is one of the first Maxwells (the GPU type, not my card) being used on GPUGRID and they won't be getting any fewer in the future!

It might be the 750Ti-Linux64bit combination, or it might be a bad card or a problem with your system. Can't help with the Linux part, but I'm running 5 750Ti GPUs: 2 PNY OC, 2 EVGA Superclocked and 1 EVGA ACX. All but the ACX have pretty wimpy HS/fans, and the 3 EVGA cards run cooler than my 650Ti cards. The PNYs, which have the highest clocks and tiny HS/fans, run about the same temp as the 650Ti cards they replaced. They also draw 100 watts less power (according to my Kill-a-watt) than the 460 and 560 cards. They're all factory OCed with memory OCed to 6000 MHz, are running Win7-64, and there have been no errors. They're all running about 40% faster than my 650 Ti cards.

Another reason I suspect either a bad card or a problem with your setup is that, even running Linux, your completion times for the 750Ti are much slower than I'm seeing in Win7-64. That shouldn't be. If you can, check the 750Ti in a Windows box and see if you still get errors. If so, send the card back ASAP.

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36885 - Posted: 21 May 2014 | 17:16:47 UTC - in response to Message 36884.

Thanks for your post, Beyond. I took a look at a few of your tasks and indeed my card is somewhat to considerably slower than your 750Tis. NOELIA_BIs seem to be much slower on my card than SDOERR_BARNA5s.

I noticed that your tasks mention application version 8.41, while mine say 8.21. I think I'm going to reset the project on the machine and see if I get the newer application version.
____________

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 36958 - Posted: 30 May 2014 | 4:42:05 UTC

Umm... the Linux app version is only 8.21. Please see the GPUGrid app page: http://www.gpugrid.net/apps.php

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 36976 - Posted: 31 May 2014 | 19:25:50 UTC - in response to Message 36958.

Yes, I figured that out pretty soon after writing that last post!
____________

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 37070 - Posted: 17 Jun 2014 | 12:25:10 UTC

I am continuing the discussion started in High Failure Rate of SANTI Tasks here, since it appears it is not just SANTIs failing on my 750Ti on Linux, but all sorts of tasks!

So, in my attempts to make the card behave, I made an interesting observation: GPUGRID tasks running on the 750Ti somehow interfere with WCG tasks running on other cores! Specifically, when running WCG tasks on the other cores of my i7-870, the acemd process's CPU usage drops significantly, even by half for some WUs - probably by no coincidence SANTIs! ;)

Once I suspended WCG and let just GPUGRID run, the card seems to be running MUCH more stable!

Now, the really weird part is, even running fewer WCG tasks, leaving one full physical core to each GPUGRID task, the acemd process CPU usage drops!

So, it seems there is some other resource contention occurring when running GPUGRID together with WCG tasks, either having to do with the processor as a whole (L3 cache perhaps?), or with some other part of the system. Bear in mind that the PCIEx16 slot the 750Ti is on gets its 4 lanes from the chipset, not the CPU, maybe this makes some difference (?).

I will let GPUGRID run solo for the next few days, but I'd be interested to read your thoughts (or better, experiences) on this.
____________

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 37078 - Posted: 19 Jun 2014 | 14:01:08 UTC - in response to Message 37070.

So, it seems there is some other resource contention occurring when running GPUGRID together with WCG tasks, either having to do with the processor as a whole (L3 cache perhaps?), or with some other part of the system. Bear in mind that the PCIEx16 slot the 750Ti is on gets its 4 lanes from the chipset, not the CPU, maybe this makes some difference (?).

I will let GPUGRID run solo for the next few days, but I'd be interested to read your thoughts (or better, experiences) on this.

Here's some random thoughts:

1) I know you have heat problems but as a rule of thumb the faster GPU should be in the more capable PCIe slot.

2) You may have to open your case and aim a fan at it to alleviate your heat problems.

3) I personally have had bad luck with ASUS GPUs, more so than with any other brand. That said, I've had decent luck with their motherboards. In general, 750 Ti GPUs seem to be very solid (at least in Win7-64). If you can't get your 750 Ti to work, consider RMAing it.

4) The only problem I've had with a 750 Ti is on my one Intel box (Celeron 1037u) which ran a 650 Ti fine (only 1 GPU on that machine). The 750 Ti for some reason wouldn't run NOELIA TRPS WUs on that box. Switched that PNY 750 TI OC to an AMD box and it's been running perfectly (including many NOELIA TRPS WUs). Incidentally the AMD box has 2 GPUs.

Don't know if any of this is helpful to you, but I'm throwing out some thoughts on the subject as you've been fighting this problem for a long time. The experiences with ASUS and Intel represent very small samples (2 out of 3 ASUS GPU failures, only 1 Intel box), but I list them as something at least to consider. Maybe the Celeron doesn't have enough CPU power to support the 750 Ti? Seems unlikely, but perhaps. I've put a different 750 Ti in the Celeron box and it's running OK so far, except that it hasn't yet received a NOELIA TRPS, so it's hard to say if the problem is still there...

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 37117 - Posted: 21 Jun 2014 | 18:57:40 UTC - in response to Message 37070.
Last modified: 21 Jun 2014 | 18:59:43 UTC

So, in my attempts to make the card behave, I made an interesting observation: GPUGRID tasks running on the 750Ti somehow interfere with WCG tasks running on other cores! Specifically, when running WCG tasks on the other cores of my i7-870, the acemd process's CPU usage drops significantly, even by half for some WUs - probably by no coincidence SANTIs! ;)

Once I suspended WCG and let just GPUGRID run, the card seems to be running MUCH more stable!

As a general observation, GPUGrid performance and stability are often improved by running fewer CPU tasks, but both stability and performance also depend on drivers, apps, WU's, the system and indeed the GPU.

Now, the really weird part is, even running fewer WCG tasks, leaving one full physical core to each GPUGRID task, the acemd process CPU usage drops!

So, it seems there is some other resource contention occurring when running GPUGRID together with WCG tasks, either having to do with the processor as a whole (L3 cache perhaps?), or with some other part of the system. Bear in mind that the PCIEx16 slot the 750Ti is on gets its 4 lanes from the chipset, not the CPU, maybe this makes some difference (?).

When utilizing the CPU to a greater extent for WCG tasks, the CPU usage for GPUGrid work could increase because there is the added burden of fetching the GPUGrid CPU work after processing WCG work. Per physical core, the 870 has only 32KB of L1 data cache and 256KB of L2 cache, and the 8MB of L3 cache is shared. Anything over that is stored in system memory.

Non-CPU-based PCIE controllers tend to be a bit slower. Only having 4 PCIE2 lanes might also slow it down slightly. The system RAM is probably not very fast, and my instinct is that your WCG tasks might be writing to disk a lot (checkpointing).

FAQ - Best configurations for GPUGRID. The thread is a bit dated, some parts are obsolete and might need an overhaul/re-write, but it's still worth a look.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 37187 - Posted: 30 Jun 2014 | 10:01:33 UTC

So, I spent a week working on my crunching rig, trying to make my 750 Ti crunch in a nice, stable manner. Thankfully, work was slow last week!

First thing I did was to install Windows Server 2008 R2 on an SSD I had sitting around (it was in my previous work laptop). I then installed the drivers needed for my system, including the latest NVidia driver for Windows.

I tried to find good tools to monitor and manage my GPUs and system, and ended up installing EVGA Precision X, NVidia Inspector, GPU-Z and SpeedFan. I tried the Asus Tweak Tool that came bundled with my 750 Ti, but honestly, it looks and works like CRAP! Then I installed BOINC 7.2.42, attached to GPUGRID, and the torture started!

While my 650 Ti continued crunching like a BOSS, the 750 Ti continued to fail! So, I started fooling around with the (numerous) NVidia-related options available in Windows: performance settings, power settings, frequencies, etc. Nothing worked! Plenty of errors and, when it didn't error out, pretty poor performance with very low GPU utilization ~65%!

So, as a last resort, I decided to swap the positions of the cards, placing the 750 on the first PCIE slot and the 650 on the second. Immediately the 750's utilization and performance went to the levels they should be, with reported utilization of ~90%! In the meantime, the 650 Ti continued to crunch Like A BOSS with ZERO performance drop! What a card!!

Placing the 750 Ti on the first slot seemed to make the stability problems go away, but then other problems surfaced: high temperatures, leading to throttling and in some cases errors. So I started fooling around with downclocking, temperature targets, custom fan profiles and case mods, trying to stabilize the card. The recent heat wave that hit Athens, with ambient temps of 38-40C didn't help one bit!

This card, at least my own sample, doesn't like working above ~72C. Once it reaches such temps, I start seeing "simulation unstable" messages and simulation restarts. So, I had to find a way to keep the card below 70C.

Finally, I think I have come up with a stable combination: GPU clock down to 1000MHz (from the 1214MHz it boosts to), memory clock down to 2600MHz (from its default 2700), a prioritized temperature target of 72C, and a fan profile that increases fan speed linearly up to 70C (fans at 70%) and ramps faster from that point on (85% at 72C and 90% at 75C, although the temp target ensures it will never reach that temp). Also, I removed the dust filter from in front of the side fan! It turns out it is amazingly restrictive: removing it lowered temps by ~5C!!
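The fan profile described above is just a piecewise-linear curve. A sketch of it as code (the breakpoints 70% at 70C, 85% at 72C and 90% at 75C are from the post; the 30% floor at 30C and the interpolation between points are my own assumptions):

```python
# Fan curve breakpoints as (temperature C, fan duty %).
# The 70C/72C/75C points are from the post; the 30C floor is assumed.
POINTS = [(30.0, 30.0), (70.0, 70.0), (72.0, 85.0), (75.0, 90.0)]

def fan_speed(temp_c: float) -> float:
    """Interpolate fan duty (%) for a GPU temperature, clamped at both ends."""
    if temp_c <= POINTS[0][0]:
        return POINTS[0][1]
    for (t0, s0), (t1, s1) in zip(POINTS, POINTS[1:]):
        if temp_c <= t1:
            # Linear interpolation within the segment [t0, t1].
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)
    return POINTS[-1][1]  # hotter than the last point: hold max duty

print(fan_speed(70), fan_speed(72), fan_speed(75))  # 70.0 85.0 90.0
```

Note how the slope steepens after 70C: 1 point of duty per degree below 70C, then 7.5 points per degree up to 72C, which is what keeps the card pinned under its 72C limit.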

All the above keeps the card running at ~90% utilization and at 62-66C, depending on ambient temp. I could perhaps raise the GPU clock to 1100MHz, but I prefer to play it safe for the summer and increase it come autumn and its pleasantly lower temps!

I hope I have reached the end of my saga! One word of caution to those looking out to buy a 750Ti: DON'T go for the ASUS one, it's VERY poorly cooled!
____________

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 37200 - Posted: 1 Jul 2014 | 17:47:08 UTC

Just wanted to post something that really helps, so others know as well.

NVidia Inspector provides one amazingly helpful function: Resetting the graphics driver!

One heat-related problem one may have to deal with is the card massively down-clocking. Frequency drops to some absurdly low value and stays there. Suspending / resuming the task, stopping / restarting BOINC do no good. The card stubbornly remains down-clocked. It seems all that remains to do is to reboot the machine...

Thankfully, you can use NVidia Inspector's -restartDisplayDriver command to restart / reset the driver and, magically, your card will start working maxed-out once more! Note that you may need to restart BOINC and / or log out of your session and back in. For some this may amount to a reboot, but for others, who have other things running on their machine as well, it is a valuable time saver!
____________

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Message 37201 - Posted: 1 Jul 2014 | 17:50:15 UTC - in response to Message 37187.

Dust filters are important though. I clean them once a week with a vacuum cleaner.

Asus Tweak tool works sort of okay if you get the hang of it. However, when running rigs 24/7, the program gets issues after a week, and the graphs stop working after a few days of continuous running. And it does not always do what it is told.

EVGA Precision is the best tool I think; I am using it too. I have set the power target to 105% and the temp target to 75°C, but for a 780Ti with dual fans from EVGA. Not my first choice, but the ones with one radial fan are all sold out in the Netherlands from EVGA, and I want that brand as they are among the best of the best. Also more expensive. But that 780Ti is running at 68°C 24/7, so EVGA Precision is doing a good job here. The downside of this card is that the case becomes too warm and the CPU runs warmer too. I already have heat issues and it is still cold here. I know Greece will become a lot hotter. So good luck with your heat issues, Vagelis.


____________
Greetings from TJ

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 37209 - Posted: 2 Jul 2014 | 16:55:58 UTC - in response to Message 37201.

Thanks for your message TJ! :)

I agree with you about dust filters, I've been using them for years with my previous case and they really did wonders! But the ones that came with my new Corsair case seem to be too restrictive. An increase of almost 5C when using the dust filter with the side fan is amazing! It makes the difference between good performance and poor performance with stability problems!

So, at least until the hottest part of the summer is behind us, the dust filter on the side fan is off. I know, I will probably find the inside of the computer a little on the hairy side next time I open it for cleaning, but I do hope the positive static pressure my fans build up in the case will not let things get too scary!
____________

Profile Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 37216 - Posted: 3 Jul 2014 | 15:08:53 UTC

MSI Afterburner for me on all machines. Works on all GPUs (both NVidia and AMD), has many controls, a reset function, controls GPUs individually or en masse (for similar models). It has great fan controls, is light on resources, and now even provides ram and CPU core usage. Rock solid.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 37218 - Posted: 3 Jul 2014 | 19:58:16 UTC - in response to Message 37216.
Last modified: 3 Jul 2014 | 20:18:29 UTC

MSI Afterburner for me on all machines. Works on all GPUs (both NVidia and AMD), has many controls, a reset function, controls GPUs individually or en masse (for similar models). It has great fan controls, is light on resources, and now even provides ram and CPU core usage. Rock solid.

MSI Afterburner is simply the best... better than all the rest...
I like their motherboards too, less expensive and haven't seen one fail yet...

Vagelis, 35 to 40°C room temps really isn't cool!
As a general rule of thumb if the ambient case temperature goes over 50°C expect hardware failures.
As tasks worked with the side of the case off, it seems likely that the ambient case temperature was the issue and a component on the GTX750Ti is susceptible to ambient temperature.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Message 37220 - Posted: 4 Jul 2014 | 8:39:46 UTC - in response to Message 37218.

MSI hardware is just as good as anybody else's in my book, through my PC building / upgrading years I've tried many of them, Asus, MSI, Gigabyte, DFI, QDI (remember them?), etc. I've never had any quality complaints, although one Athlon motherboard I had blew a capacitor on me! :D

Generally speaking, over the years they've all upped their quality standards, using better parts, better designs, better power delivery components, etc.

All that said, I do tend to prefer Asus parts, I've associated them in my mind with "better quality", but on second evaluation I can always just pick a part from someone else.

As for my card, yes it seems heat causes many problems. It seems to prefer to work below 70C. I am kind of disappointed, not so with Asus - they've just produced a gaming card, not a computing GPU that is supposed to work at 90+% 24/7 - but rather with myself. I should have picked the Gigabyte model with the heatpipe-powered Windforce HS.

Good thing is, I never had to have the side panel off and have all 6 fans spinning right next to my ear! I just had to remove the side fan's dust filter. This is better than one might think, because my case's panels have thick noise-insulating material that reduces noise greatly, so you definitely don't want to have them off!
____________

GoodFodder
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Message 37225 - Posted: 5 Jul 2014 | 10:08:16 UTC
Last modified: 5 Jul 2014 | 10:09:59 UTC

If you have a case fan blowing directly on your cards, have you tried taking the shroud off your 750 Ti?

I have a Gigabyte 750 Ti card and have it running passively with a 650 Ti at around 70C on a mATX motherboard.
Incidentally, I found that adding a U-shaped shroud made from cardboard over my 120mm side case fan helps distribute the air more evenly to my outer 650 Ti card - it all depends, of course, on your layout.
