Message boards : Number crunching : What to build in 2014?

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 34458 - Posted: 24 Dec 2013 | 9:24:52 UTC

We've been having some excellent discussions about building our own systems lately. Thanks to the advice and feedback we receive here from our intrepid gurus, my own thoughts on what to build have changed several times. My goals are unchanged:


  • small, or else big but with high component density as in several cards on a single mobo
  • scalable (in a physical or spatial sense)
  • inexpensive
  • future proof


After a lot of thought, the first new system I build in 2014, probably at the end of January, will likely use something very similar to this BIOSTAR NM70I-847 Intel Celeron 847 1.1GHz 2C/2T BGA1023 Intel NM70 Mini ITX Motherboard/CPU Combo.

Its mini-ITX form factor (17 cm X 17 cm, 6" X 6") satisfies my first criterion. Picture one of these with a video card in the slot and think of it as an L shape. Now imagine 2 L's arranged with one foot on the bottom and the other foot at the top, directly above the first, both legs vertical, something like this:

(ASCII sketch lost in formatting: two L's stacked foot-to-foot, with the PSU, marked #, between the vertical legs)


The # represents a single PSU that feeds both mobo/GPU combos. Let's call it an "LPSUL module". Now imagine several LPSUL modules side by side in a "rack". If the 2 L's and PSU are spaced far enough apart and/or the ambient air temp is kept low enough they'll cool well.

Before I go into details of the rest of the grand plan... what do you guys think of this mobo and CPU combo? The CPU has integrated Intel 2000 graphics, which I think is NOT the kind we can use with OpenCL, correct? One thing I don't like about it is that its single PCIe x16 slot runs at x8, but I'm convinced that if the CPU can keep up, the PCIe bandwidth will be sufficient for GPUGrid for some time to come. Is the CPU going to be too weak? Opinions, please?

Unless the mobo gets bad ratings here, in January I think I'll buy one, add 1 GB of RAM and one of my GTX 670s, and see how it performs. If it works I'll try a GTX 780 or perhaps a 780Ti later in the year, maybe even a Maxwell, maybe a dual-GPU card (GTX 690, for example).

If the mobo works as well as I hope, it would allow me to buy many small, inexpensive pieces slowly over time rather than requiring a big initial expenditure to get up and running.

For the "rack" I have an old wooden (particle board) bookshelf with adjustable shelves. It was headed for the recycle bin, so I am free to cut and modify it any way I want or toss it in the bin if it doesn't pan out. The shelves are 72 cm long X 30 cm deep (28.5" X 11.75"). One mobo/GPU combo will attach to the top of a shelf while its sister hangs like a bat from the shelf above. In my mind I see 4 of the double-L modules per shelf. The unit has 6 shelves at the moment. I see myself pulling a 30 amp feed into the room in the near future.

I already have the bookshelf-cum-"cruncher rack" positioned up against a window and I'm measuring it up for custom add-on ducts, doors and fans, all of which will be top-quality junk I find here and there, preferably free. I'll take pics and put them in an online gallery as Dag's Mongo Cruncher Rack progresses. I promised pics of my previous "wind tunnel" thing, but after I realized it was a good start but flawed, I scrapped the idea and the pics. I'll definitely share pics of the mongo cruncher rack.

____________
BOINC <<--- credit whores, pedants, alien hunters

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 34460 - Posted: 24 Dec 2013 | 13:14:38 UTC - in response to Message 34458.
Last modified: 24 Dec 2013 | 13:15:29 UTC

On the plus side this could be very power efficient with a good PSU, but my gut feeling is that it's more suited to a GTX 650Ti than a GTX 770 or above.
SFF motherboards often use restricted controllers, reduced PCIe lanes and lower RAM frequencies, so read the details. I expect these would limit a high-end GPU somewhat (certainly by a few %).

That motherboard (and ones like it) is probably not designed to deliver 75 W of power through the board continuously, so don't be surprised if it fails within 18 months. It's nice to have a 3-year manufacturer's warranty, but the return costs might be prohibitive, and if you factor in your time it's rarely worth the bother.

The main weakness might well be the CPU. For GPUGrid work, Keplers presently rely on the CPU for some processing, and a fast turnaround is needed. I can't say what this costs as a percentage today, but from past experience I expect it would be more than a 10% hit with a 1.1 GHz Celeron compared to a high-end CPU, and that's on its own.
The top Intel processors now have the PCIe controller on the CPU. Without this you are going to get a slight reduction in performance, something that's probably more noticeable with 2 or 3 GPUs, but still a small factor.
I was never a great BIOSTAR fan, but maybe they have improved?
The fact that it's PCIe 2.0 x8 isn't inherently significant; the 1333 MHz DDR3 limit is probably the real bottleneck.

There is no mention of Linux support, so I would look into that before jumping.
As far as I know that Intel GPU won't bring OpenGL/OpenCL, not that you can use it on Linux anyway, and not all the Intel processors that support OpenCL are usable by BOINC – my Xeon E3-1265L V2 processor can't be used with Einstein even though it's got the same Intel graphics as an i7-3770. The drivers are installed, BOINC sees it and says what it is, but it doesn't get used?!?

I suggest you use 4 GB RAM rather than 1 GB (1 GB is probably not enough for some tasks and leaves no headroom). With a 670 you are likely to hit bottlenecks, and I think a 780Ti would struggle badly. You could easily start running into downclocking and may never get these cards to boost to their full potential.

Performance-wise, Maxwells are as yet unknown and untested. We do know some things about the card's design, such as that it will have its own CPU, but how useful this is to GPUGrid remains unknown. If it transpires that the integrated ARM CPU is sufficient to support GPUGrid work, then the need for a powerful host CPU might vanish. I suspect that ACEMD would need some surgery, though. Having just read up on the unified memory, it's still reliant on PCIe bandwidth, so that might be an issue for systems with lesser PCIe bandwidth. These GPUs are supposed to be a bit faster, a bit more power efficient, more independent from the CPU, and should scale better, but I think the main improvement will be facilitating a greater coding audience (CUDA 6). All ifs, buts and maybes though…

My 2.9 GHz G2020 system (which was really a proof-of-concept project) hosted 2 high-end GPUs (GTX 670s) and used <400 W while crunching GPUGrid. It was easy enough to keep temperatures down. While the motherboard would use more power than the Biostar, it supports two GPUs. Any time I have had a system with 3 or 4 GPUs I've had trouble, and usually lots of it. For me two-GPU systems are a happy medium: they do a lot more work than a single-card system, without the hassle or expense that comes with 3- or 4-GPU systems (high-end everything).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Message 34492 - Posted: 28 Dec 2013 | 4:33:36 UTC - in response to Message 34460.

Thanks, those are all good points. After Googling around I found several reports of people running Linux on that mobo/CPU combo, so no problems there.

As for restricted PCIe lanes: yes, the specs state PCIe x16 (running at x8), but it's PCIe 2.0 so it might suffice. I don't know whether it restricts RAM frequency.

The ability to continuously deliver 75 watts across the PCIe bus is a big consideration I hadn't thought of. What factors limit the mobo's ability to deliver that power? Is it simply narrow copper trace widths? If that's the only cause, then cutting the trace(s) and replacing them with heavier copper wire might fix the problem. Trust me, I can solder things that small, no problem. If it's a matter of capacitance and other components on the board then I likely can't fix it.

I also noticed lots of negative comments about Biostar's quality while Googling around. I'm going to look into the warranty and replacement policy before purchasing.

skgiven
Message 34554 - Posted: 2 Jan 2014 | 17:09:35 UTC - in response to Message 34492.

While I would not be comfortable getting a Biostar board, ITX boards are really not designed for discrete GPUs and would be more suited to the likes of a GT 620 than a GTX 770. Some don't even have a PCIe slot. I don't think they are up to running 24/7 for years, and that's without a GTX 770. The solder tracks are probably cheap, but so is everything else, including capacitor quality and count, and it's not worth rebuilding a board when you can simply get a better board. I'm not even sure it's possible to retrace the solder or rewire; some tracks may be hidden within the motherboard (which is basically about 6 layers of plenum-grade resin stuck together).
I expect that CPU isn't powerful enough and you would lose >10% performance because of it. The cumulative loss from the CPU performance, lower RAM speeds, reduced controller performance and reduced PCIe lanes @ 2.0 might tot up to a 20% performance loss compared to a mid-range desktop system. 20% of the cost of a $300 GPU is $60, which would be enough to cover the extra cost of a G2020 + dual-PCIe-slot motherboard rather than the Biostar. That system would be 20% faster, last longer and allow you to add a second GPU, which would make it more power efficient. Of course there might be better options than a G2020-based rig now, but at least you wouldn't be caught out by some unknown restriction found in integrated CPUs. For example the embedded J1900 looks great with its 4 cores, 2.42 GHz frequency and 10 W TDP, but it's limited to supporting 4 PCIe lanes! Would the card even work using 4 PCIe lanes? Would it downclock or just continuously fail work?
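The cost argument above can be sketched as a quick back-of-the-envelope calculation; the 20% figure and the $300 price are estimates from the post, not measurements:

```python
# Rough value lost to a ~20% GPU performance hit (figures are
# estimates quoted in the post above, not measurements).
gpu_price = 300.00   # mid-range GPU price (estimate)
perf_loss = 0.20     # cumulative loss: CPU + RAM + controller + PCIe lanes

wasted_value = gpu_price * perf_loss
print(f"Effective value lost: ${wasted_value:.0f}")  # $60 -- roughly the
# price step up to a G2020 + dual-PCIe-slot motherboard
```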
I honestly think the best design is a two-GPU mainstream system; one GPU has a lot of overhead, while 3 or more GPUs require expensive, high-end components, come with serious cooling problems and are too much hassle to be bothered with, especially if you can build 2 or 3 systems for a lot less.
I do still like the idea of being able to design and build a reasonably efficient 1-GPU system on the cheap (something that could be used as a media centre system), but it's that 'potential' 20% hit that I don't like.


Dagorath
Message 34658 - Posted: 15 Jan 2014 | 9:11:35 UTC - in response to Message 34554.

OK, scrap the Biostar/Celeron combo board. It was a bad idea, plus they're all sold out. Now here's the bomb...

MSI H61M-P31/W8 LGA 1155 Intel H61 Micro ATX Intel Motherboard with UEFI BIOS
Intel Celeron G1610 Ivy Bridge 2.6GHz LGA 1155 55W Dual-Core Desktop Processor BX80637G1610

The above combination is only $10 more than the Biostar/Celeron combo board, but it has PCIe 3.0 x16 as opposed to PCIe 2.0 x8, a better brand-name mobo and double the CPU speed. Four of the above combos total ~$400.

An extended-ATX mobo with 4 PCIe 3.0 slots that all run at x16 when all 4 slots are occupied sells for far more than $400, plus you need a very expensive CPU to run all the slots at x16. The Celeron G1610 may not have enough lanes for x16, but for another $20 I can get an Ivy Bridge i3 that I am quite sure does.

The mobo is slightly larger than the Biostar mobo, but I think I could cram 4 of them, 4 GPUs plus 2 PSUs into the volume of an E-ATX sized case. Add custom on/off/reset switches and power LEDs, keep the stock fan and heatsink on the CPU and GPU, and add custom air cooling based on huge noisy fans and a sound-proofed custom rack/case, maybe even mount the fans outside the house and duct cool air in and hot air out. No HDD or optical disk; boot from PXE, use NAS and go headless to eliminate the video cable and decrease RAM overhead, and use a minimal Linux OS to reduce RAM overhead even further (shoot for 2 GB).





Retvari Zoltan
Joined: 20 Jan 09
Posts: 2335
Credit: 16,178,080,749
RAC: 0
Message 34664 - Posted: 15 Jan 2014 | 11:35:47 UTC - in response to Message 34658.

An extended-ATX mobo with 4 PCIe 3.0 slots that all run at x16 when all 4 slots are occupied sells for far more than $400, plus you need a very expensive CPU to run all the slots at x16.

You don't need to think in extremes. Look for the Gigabyte GA-Z87X-OC motherboard; it should be around $220. It has a more expensive version called the GA-Z87X-OC Force (around $400), which can run two PCIe 3.0 cards at x16, or four at PCIe 3.0 x8. PCIe 3.0 x8 is sufficient for the current GPUGrid client. I don't recommend putting four cards in a single system, though.

The Celeron G1610 may not have enough lanes for x16, but for another $20 I can get an Ivy Bridge i3 that I am quite sure does.

Ivy Bridge i3 processors don't have PCIe 3.0; they are PCIe 2.0 only. See the i3-3240. Only the i5 and i7 series have PCIe 3.0 in the Ivy Bridge line. However, the Haswell series i3 processors do have PCIe 3.0. See the i3-4130.

Dagorath
Message 34669 - Posted: 15 Jan 2014 | 12:59:01 UTC - in response to Message 34664.

You recommended the Gigabyte GA-Z87X-OC to tomba a few days ago, so I checked it out. All the vendors I like to deal with are out of stock on that board and don't plan to stock more. The reason seems to be that it has trouble running more than 2 PCIe cards. It sounds like the GA-Z87X-OC Force is intended to correct the deficiencies of the GA-Z87X-OC. Well, I want at least 3 more GPUs, so the GA-Z87X-OC is crossed off my list.

I don't recommend putting four cards in a single system, though.


Why? Cooling issues? I have the heat beat, not a problem here.

Over a year ago you said there were driver issues with 4 cards on one mobo, but I read in the NVIDIA forums that the newer drivers don't have a problem with 4. Is there another reason?


Retvari Zoltan
Message 34673 - Posted: 15 Jan 2014 | 14:03:11 UTC - in response to Message 34669.

I don't recommend putting four cards in a single system, though.

Why? Cooling issues? I have the heat beat, not a problem here.

It depends on what kind of GPUs we are talking about.
Four non-high-end GPUs: it's better to have two high-end cards than four non-high-end ones, even though they are more expensive, because of cooling problems, and because every GPU processes a separate WU, so a high-end card (GTX 780Ti) is more future-proof. Today's fastest GPU could process a long workunit in under 24 hours, even in the future when workunits will be 5 times longer than today's long ones.
Four high-end GPUs (~250 W/GPU): it is difficult to silently dissipate ~1 kW from a PC case, so it's recommended not to put the PC in a case at all, or to use water cooling (though that can have reliability problems).

Power problems: four high-end GPUs could consume 75 W each from the PCIe slots, that is 300 W (in the worst case even more, if you overclock them). A regular motherboard powers its PCIe slots through the 24-pin ATX power connector, which has only two 12 V pins. 300 W / 12 V = 25 A, that is 12.5 A on each pin (plus the power for the chipset, the memory and the coolers). I assure you, those pins will burn. Not in flames, hopefully. In the end the OS will crash, and a GPU could be damaged as well (my GTX 690 broke that way). So if there are four high-end GPUs on a single MB, that MB should have extra power connectors for its PCIe slots, and they must be connected to the PSU (without converters). It is highly recommended not to skimp on any component when building a PC for crunching with four high-end GPUs.

And this host should not run Windows Vista, 7, 8 or 8.1, because of the WDDM overhead (maybe it won't be that bad with the Maxwell GPUs, but we don't know yet).
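The slot-power arithmetic above can be reproduced as a quick sanity check; the 75 W-per-slot and two-pin figures are from the post, and the per-pin rating in the comment is a commonly quoted ballpark for Mini-Fit Jr. terminals, not a measured value:

```python
# Sanity check of the PCIe slot power arithmetic described above.
gpus = 4
watts_per_slot = 75    # PCIe x16 slot power limit per card
volts = 12.0
pins_12v = 2           # 12 V pins on a 24-pin ATX connector

total_watts = gpus * watts_per_slot      # 300 W drawn through the board
total_amps = total_watts / volts         # 25 A on the 12 V rail
amps_per_pin = total_amps / pins_12v     # 12.5 A per pin

print(f"{total_watts} W -> {total_amps:.0f} A -> {amps_per_pin:.1f} A per pin")
# Mini-Fit Jr. style ATX pins are commonly rated in the 6-9 A range,
# so 12.5 A per pin is well past spec -- hence the burned connectors.
```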

Over a year ago you said there are driver issues with 4 cards on one mobo but I read in the NVIDIA forums the newer drivers don't have a problem with 4. Is there another reason?

You are right about that, but – as far as I can recall – that discussion was about putting four GTX 690s on a single MB, that is, 8 GPUs in a single system.

Dagorath
Message 34686 - Posted: 15 Jan 2014 | 23:13:56 UTC - in response to Message 34673.

So if there are four high-end GPUs in a single MB, that MB should have extra power connectors for its PCIe slots


Aha, now I understand. Now that I think about it, I have a very old mobo with only 2 PCIe slots, and it has an extra 4-pin Molex connector just for PCIe slot power.

Yes, the discussion was about putting 4 GTX 690s on one mobo. I've scaled that ambition back to 4 GTX 780Tis. Now that you've done the math for me (thanks), I agree 12.5 A on those pins is a lot, and that is probably why I have read elsewhere that the non-Force Gigabyte board is inadequate for 4 high-demand cards. I know Gigabyte's literature for the Force model mentions something like "it meets PCIe slot power requirements in extreme conditions better than previous models". I'll definitely investigate that before putting 4 high-end cards on a Force board. I think I'll email Gigabyte and ask them whether it will handle 4 high-end cards and what they estimate the life expectancy to be.

I still like the 4 micro-ATX mobo scheme. It allows a lot of flexibility for physically arranging all the components into a compact yet easily cooled custom case. Also, if 1 board fails it's a smaller loss compared to a $220 board. I agree about dissipating 1 kW from a case silently, but I don't want the expense and problems that come with water-block, pump and radiator style liquid cooling. I know I can do it with air alone, and I know I can make it very quiet as long as I think outside the box.

I am still considering liquid submersion cooling, but not with mineral oil; it's too thick. My friend who works for an electric utility told me he will bring me a free 20 liter pail of the cooling oil they use in big transformers. He guarantees it will not affect any of the components on a motherboard, and it's environmentally safe. He says the fumes are safe to breathe, but the aroma is not the kind you want in your home. He says all it would need is a reasonably air-tight seal on the container (cheap and easy to do) and a small (10 mm) vent hose to the outdoors. Most important is the fact that it is thinner than mineral oil, more like diesel fuel, so it will circulate well by convection alone. If it circulates well enough, it might not need a pump, just a properly designed convection system.


Dagorath
Message 34753 - Posted: 21 Jan 2014 | 20:03:34 UTC - in response to Message 34673.

A regular motherboard powers its PCIe slots through the 24-pin ATX power connector, which has only two 12 V pins. 300 W / 12 V = 25 A, that is 12.5 A on each pin (plus the power for the chipset, the memory and the coolers). I assure you, those pins will burn. Not in flames, hopefully. In the end the OS will crash, and a GPU could be damaged as well (my GTX 690 broke that way). So if there are four high-end GPUs on a single MB, that MB should have extra power connectors for its PCIe slots, and they must be connected to the PSU (without converters).


It turns out both the Gigabyte GA-Z87X-OC and the GA-Z87X-OC Force have that extra power connector for the PCIe slots; Gigabyte's name for the feature is OC Peg.

Another nice feature both boards have is OC Brace, which appears to be a metal bracket that holds up to 4 cards in place perpendicular to the mobo in case-less or custom-case applications like mine will be. Excellent feature! I had just finished laying out such a bracket on a piece of sheet metal and was about to start drilling and cutting. Instead I'll order a GA-Z87X-OC Force and use their bracket.

Unless someone points to some negative aspect, I'll be ordering a GA-Z87X-OC Force and modest i5 Haswell at month's end.

BTW, I don't see cooling 4 cards side by side on 1 board as a problem. I'll handle that by populating the 4 PCIe slots with 4 cards that have radial (blower) fans. Unlike axial fans, which come in all sorts of configurations, radial fans are very similar in size and location regardless of brand or model, which will make it easy to design and fabricate a custom duct that carries cool air from outside the case directly into the cards' fan intakes. The fans will line up one behind the other almost perfectly, give or take a few millimeters, even closer if the cards are all of one brand and model. The radial fans will then push the hot exhaust into a duct leading outdoors, to keep the case temp and ambient temp low.


Dagorath
Message 34841 - Posted: 28 Jan 2014 | 23:04:17 UTC - in response to Message 34753.
Last modified: 28 Jan 2014 | 23:10:51 UTC

Pounced on the GA-Z87X-OC Force mobo at Newegg Canada. It's priced at $419.99, but with the Newegg discount I'm getting it for $394.99 CDN, plus $10 shipping plus 5% tax on (the price of the board + shipping). Then I get a $100 manufacturer's rebate, so the board will end up costing $294.99.
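For the record, the arithmetic works out as below; prices are as quoted in the post, and the $294.99 figure is the board price after rebate, before shipping and tax:

```python
# Landed cost of the GA-Z87X-OC Force as described above (CDN$).
board = 394.99     # Newegg price after discount
shipping = 10.00
tax_rate = 0.05    # 5% charged on board + shipping
rebate = 100.00    # manufacturer's mail-in rebate

at_checkout = (board + shipping) * (1 + tax_rate)
after_rebate = at_checkout - rebate
print(f"Paid at checkout:         ${at_checkout:.2f}")   # $425.24
print(f"Net after rebate:         ${after_rebate:.2f}")  # $325.24
print(f"Board alone after rebate: ${board - rebate:.2f}")  # $294.99
```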

Think I'll wait on the CPU and pounce on a great price on a high-end Haswell when I find one. I'd like ~$100 off on the CPU too. Because I deserve it. A good price... that's what changes *my* game and makes *my* jaw drop. I'm looking for $100 off on each of 4 780Tis too, or maybe I'll just wait and pay full tilt for some new Maxwells.

Dagorath
Message 34851 - Posted: 31 Jan 2014 | 3:23:02 UTC - in response to Message 34841.

Good price... that's what changes *my* game and makes *my* jaw drop.


What makes me do the forehead palm and say "Doh!" is when I realize that in spite of being warned 101 times by skgiven and possibly others, I fell into the trap anyway. I got a great deal on the mobo, and it's up to the job with respect to slot spacing and power capacity, but it's socket 1150. CPUs for socket 1150 have only 16 PCIe lanes, when I want a minimum of 32 and preferably 64. Doh!

That's why I like tables and charts, and precisely why I should have made one to summarize all the advice I was hearing, as well as the implications. That's what ya have to do when you get this old or ya forget. Now my jaw-droppingly "inexpensive for the features it has" mobo will run at most 3 video cards, if I'm lucky, and they will run at (x16, x4, x4) if they run at all.
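The table in question boils down to a few lines. A sketch, with CPU-provided lane counts as summarized on Intel's ARK pages for desktop parts of this era (treat it as a summary, not an exhaustive chart):

```python
# PCIe lanes provided by the CPU, per socket (desktop parts, circa 2013).
# Figures summarized from Intel ARK listings.
lanes_from_cpu = {
    "LGA 1155 (Sandy/Ivy Bridge)": 16,
    "LGA 1150 (Haswell)":          16,
    "LGA 2011 (SNB-E / IVB-E)":    40,
}
for socket, lanes in lanes_from_cpu.items():
    print(f"{socket:30s} {lanes} lanes")
```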

It is guaranteed to run 2 cards at (x8, x8). I don't think it can do 4 cards at (x4, x4, x4, x4), though if it did, that config just might be fast enough since it is PCIe 3.0. If 3 cards work, then if the GTX 690 has some sort of on-card bus arbitration, perhaps 3 GTX 690s or 790s (if they ever appear) is a good plan B, now that I've screwed up my own game.

Would someone please pass the salt, this crow tastes flat.


a1kabear
Joined: 19 Oct 13
Posts: 15
Credit: 578,770,199
RAC: 0
Message 34858 - Posted: 1 Feb 2014 | 15:25:17 UTC

ouch :( i was under the impression that the oc force board could do 4 x16 lanes too :/

it sure looks pretty though :D

Dagorath
Message 34859 - Posted: 1 Feb 2014 | 20:50:39 UTC - in response to Message 34858.

It's not the board itself or the chipset that is limited; the limitation is in the CPUs that can be used on the board. The board and chipset are capable of providing 8 lanes to each of 4 slots, a total of 32 lanes. I would be quite content with that, as they are PCIe 3.0 lanes, which would provide data throughput roughly equivalent to 16 lanes per slot at PCIe 2.0 speed. An x16 2.0 slot is said to be sufficient even for a GTX 690.
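That "roughly equivalent" claim checks out on paper. A quick per-lane throughput comparison from the published line rates and encodings (this ignores protocol overhead beyond line coding):

```python
# Per-lane usable bandwidth: PCIe 2.0 runs 5 GT/s with 8b/10b encoding,
# PCIe 3.0 runs 8 GT/s with 128b/130b encoding.
def lane_gbytes(gt_per_s, payload_bits, line_bits):
    # GT/s scaled by encoding efficiency, divided by 8 bits per byte
    return gt_per_s * payload_bits / line_bits / 8

pcie2 = lane_gbytes(5.0, 8, 10)      # 0.5 GB/s per lane
pcie3 = lane_gbytes(8.0, 128, 130)   # ~0.985 GB/s per lane

print(f"PCIe 2.0 x16: {pcie2 * 16:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.0 x8 : {pcie3 * 8:.2f} GB/s")   # 7.88 GB/s
```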

The limitation is due to the fact that the board is CPU socket 1150. Info on the Intel website clearly states that all Intel socket 1150 CPUs are limited to 16 PCIe lanes, a point skgiven kindly made several times in this thread as well as others, and a point I forgot when I ordered the board. It came back to me while I was rummaging around the Intel website trying to pick out a CPU for the board.

From a crunching perspective, it doesn't make sense to build a board with 4 x8 slots and then limit the CPU selection to models that can power at best 2 x8 slots, but that's what they did. I suppose they did it for gamers, who can run 4 cards in SLI and perhaps not be restricted by the PCIe bus. Or maybe they just don't realize they're "lane restricted".

I should return it and pick something more suitable, but damn, it IS a sexy looking board!! The orange and black Hallowe'en theme and the way the heat pipe curves just like a woman's hips make me think the people who designed this board have been taking lessons from the people designing automobile bodies. Hey, wait a minute! Now I see what made me cough up the dough and forget all the advice I received!! Damn those sneaky marketing people :-)

I should have gone with 4 micro-ATX boards. I would have ended up with more total PCIe capacity for less money. On the other hand, this mobo might do very well with 3 Maxwell cards on it. It might even be sufficient for the 2 GTX 670s and 1 GTX 660Ti I have now.


Retvari Zoltan
Message 34860 - Posted: 2 Feb 2014 | 0:43:37 UTC - in response to Message 34859.

I should return it [the GA-Z87X-OC Force] and pick something more suitable...

As you said in this post, the PCIe lanes are CPU-limited. So if you want something more suitable, you should look for a socket 2011 board (and a socket 2011 CPU), but that would be overkill. There is no significant GPU crunching performance gain from having 4x PCIe 3.0 x16 instead of 4x PCIe 3.0 x8. You can spend your money more wisely if you want the best bang for the buck, so I still recommend a socket 1150 based system. You won't find a better S1150 board than this one, because a better one would need three PCIe switches, and even other high-end S1150 motherboards have only one. For example: the ASUS Maximus VI Extreme, or the ASUS Z87-WS.

I should have gone with 4 micro-ATX boards. I would have ended up with more total PCIe capacity for less money.

PCIe capacity is not the Holy Grail of GPU computing. You need a decent CPU to properly utilize the bandwidth of PCIe 3.0, so in the end you would spend more money on CPUs than is optimal.

On the other hand this mobo might do very well with 3 Maxwell cards on it. It might even be sufficient for the 2 GTX 670 and 1 GTX 660Ti I now have.

True. I think it could do very well even with four GTX 780Tis.

Dagorath
Message 34863 - Posted: 2 Feb 2014 | 11:57:37 UTC - in response to Message 34860.

Thanks for the reassurance, Retvari. I will keep it. I like the way it's laid out; it's going to work very well in my custom case.

For the CPU, I think I'll go with an i5-4440. It's not an easily OC'd K model, but I want VT-d and the K models don't have it. It runs stock at 3.1 GHz, 3.3 GHz in turbo boost. That should be fast enough. It'll be a dedicated cruncher running a minimal Linux OS: no web surfing or video streaming, just crunching.

On the other hand if I run across a great deal on an i7 Haswell I'll jump on it.




Dagorath
Message 34886 - Posted: 3 Feb 2014 | 19:26:27 UTC - in response to Message 34863.

Nope, this board was a mistake. I'm glad I'm getting a $100 rebate, because it's going to take at least 2 hours of additional labor and $50 in parts to make it work. The problem is the cards are too close together to allow sufficient air into the fans. I'll solve it by removing the 4 fans and replacing them with 1 big fan and a duct that pushes air into the back of all 4 cards. Actually, maybe that's a good thing, as it will allow me to crop 25 or 30 mm off the tail of each card, which will leave room for a thicker sound/heat barrier on the cabinet door.

