Message boards : Graphics cards (GPUs) : GTX580 specifications
Not confirmed. | |
ID: 19145 | Rating: 0 | rate: / Reply Quote | |
This may just be a paper launch to deflect attention from ATI's 6000 series cards, due out later in November. Even the change to the 500 series suggests they want people to think they have released a new series of cards. I think it will be a while (January) until we see how these perform on the new ACEMD 3.2 app. | |
ID: 19159 | Rating: 0 | rate: / Reply Quote | |
So far it looks like the same architecture (which is fine) with a general overhaul of the manufacturing, improving speed and/or leakage (that's how they manage slightly higher clocks at slightly reduced power consumption) and reliability (otherwise they still couldn't provide a 512 SP part). TSMC's 40 nm process should be almost mature by now, and nVidia should have had enough experience to handle it. | |
ID: 19167 | Rating: 0 | rate: / Reply Quote | |
GTX 580 Specs: | |
ID: 19283 | Rating: 0 | rate: / Reply Quote | |
That looks like everything the GTX480 should have been :) | |
ID: 19297 | Rating: 0 | rate: / Reply Quote | |
Techpowerup are giving these preliminary specs, | |
ID: 19354 | Rating: 0 | rate: / Reply Quote | |
They also said "about 3 billion" transistors for GF100 prior to the launch, so I wouldn't read more than "between 3.2 and 3.49 billion" into this. Apart from that and the clock speeds there's not much difference between these sources, or did I miss anything? | |
ID: 19358 | Rating: 0 | rate: / Reply Quote | |
The "gone" pre-review did say 3.0 billion, and that they trimmed some leakage. | |
ID: 19359 | Rating: 0 | rate: / Reply Quote | |
| |
ID: 19398 | Rating: 0 | rate: / Reply Quote | |
The memory chips are probably at least as high-quality as the previously used ones. But the memory controller is the same, and that's what's limiting the OC here. | |
ID: 19434 | Rating: 0 | rate: / Reply Quote | |
I just started running a couple of the new GTX 580s here; they haven't finished any WUs yet, though. The WUs already finished on that host are from a couple of GTX 460s that were in the box. | |
ID: 19571 | Rating: 0 | rate: / Reply Quote | |
It's great to have someone crunching with the new top GPU here so soon! | |
ID: 19572 | Rating: 0 | rate: / Reply Quote | |
I tried them but didn't like what I was seeing; my CPU temps jumped by 10-13°C, so that's a no-no. I turned HT back off but am still running SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU core when it needs to that way ... | |
ID: 19575 | Rating: 0 | rate: / Reply Quote | |
Run the GPU's at reference frequencies (stock), and keep the fans high. | |
ID: 19576 | Rating: 0 | rate: / Reply Quote | |
Run the GPU's at reference frequencies (stock), and keep the fans high. The temps on the GPU didn't go up any; it was the CPU that took a jump. The i7 920s don't like anything over 4 cores, I've found, at least not at 3.80 GHz on air. | |
ID: 19577 | Rating: 0 | rate: / Reply Quote | |
I tried them but didn't like what I was seeing, my CPU Temps jumped by 10c-13c so that's a no no. I turned HT back off but am still running the SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU Core when it needs to that way ... This is interesting, a test of SWAN_SYNC on the very fastest NVidia GPU. Looking at your results there are 2 WU types that completed with SWAN_SYNC both on and off. On one type the speedup was 8% and on the other 21%. It would be interesting to see results with SWAN_SYNC off and the priority boosted to High or even just to Above Normal. The GPUGRID process can be boosted automatically with a program such as eFMer Priority 64: http://www.efmer.eu/boinc/download.html | |
ID: 19581 | Rating: 0 | rate: / Reply Quote | |
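The 8% and 21% speedups quoted above can be reproduced from paired runtimes; a minimal sketch (the runtimes below are hypothetical placeholders, not the actual task times from those results):

```python
def speedup(t_off: float, t_on: float) -> float:
    """Percent reduction in runtime with SWAN_SYNC on vs. off."""
    return (t_off - t_on) / t_off * 100.0

# Hypothetical runtimes in seconds for two WU types (placeholders, not real data):
print(round(speedup(10000.0, 9200.0), 1))  # 8.0  -> the ~8% WU type
print(round(speedup(10000.0, 7900.0), 1))  # 21.0 -> the ~21% WU type
```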
The i7 920's don't like anything over 4 Cores I've found Well.. I'd say that's because HT actually increases hardware utilization. There's no free lunch here, though: throughput at the same clock speed increases, but you have to pay for it in terms of energy. Running GPU-Grid it's a little different, as the CPU is not directly doing any crunching, it's just asking the GPU "are you done yet?" all the time. MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 19582 | Rating: 0 | rate: / Reply Quote | |
This is the fastest result ever seen on GPUGRID, 3.9 ms/step on the DHFR workunit: Run the GPU's at reference frequencies (stock), and keep the fans high. | |
ID: 19583 | Rating: 0 | rate: / Reply Quote | |
Okay, I'll try some of the ideas out when I bring the 580 back over here to GPU Grid, right now I'm running the PrimeGrid Sieve Wu's with it ... | |
ID: 19596 | Rating: 0 | rate: / Reply Quote | |
Hello - | |
ID: 19689 | Rating: 0 | rate: / Reply Quote | |
I just finished my first WU via the 580. Run time 9176 sec. I am not sure how good that is or not. This is good. If you want maximum performance, you should dedicate one CPU core to feeding the GPU. Use the SWAN_SYNC=0 environment variable to achieve this. (Control Panel -> System -> Advanced tab -> Environment Variables -> New (System variables) -> variable name: SWAN_SYNC, value: 0 -> restart Windows) | |
ID: 19690 | Rating: 0 | rate: / Reply Quote | |
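As a quick sanity check after the restart, any new process should see the variable; a small sketch (Python is used here purely as a convenient way to read the environment, it is not part of the GPUGRID setup):

```python
import os

# After setting SWAN_SYNC=0 system-wide and restarting Windows,
# every newly started process should see the variable.
value = os.environ.get("SWAN_SYNC")
if value == "0":
    print("SWAN_SYNC is set; the app will dedicate a CPU core to the GPU")
else:
    print("SWAN_SYNC is not set (or not '0')")
```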
I just finished my first WU via the 580. Run time 9176 sec. I am not sure how good that is or not. Thanks for the tip. I just added this environment variable. | |
ID: 19702 | Rating: 0 | rate: / Reply Quote | |
This is the fastest result ever seen on GPUGRID 3.9 ms/step on the DHFR workunit: How about 3.747 ms/step? I'll paste it here before it's gone:
<core_client_version>6.10.58</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 580"
# Clock rate: 1.61 GHz
# Total amount of global memory: 1610153984 bytes
# Number of multiprocessors: 16
# Number of cores: 128
# Device 1: "GeForce GTX 480"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1610285056 bytes
# Number of multiprocessors: 15
# Number of cores: 120
SWAN: Using synchronization method 0
MDIO ERROR: cannot open file "restart.coor"
# Time per step (avg over 2000000 steps): 3.747 ms
# Approximate elapsed time for entire WU: 7494.516 s
called boinc_finish
</stderr_txt>
]]>
I think it's very hard to reach 3.3 ms/step, because my GTX 580 already runs at 95-96% GPU usage. Maybe we can gain another 0.1-0.2 ms/step with heavy overclocking of the GPU (a bigger GPU cooler would be required, and maybe the new Core i3, i5, i7 processors which will be released in 2011/Q1). | |
ID: 19899 | Rating: 0 | rate: / Reply Quote | |
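The two figures in that stderr output are mutually consistent; a quick arithmetic check:

```python
# Cross-check the per-step time against the reported elapsed time:
ms_per_step = 3.747
steps = 2_000_000
elapsed_s = ms_per_step * steps / 1000.0  # milliseconds -> seconds
print(round(elapsed_s, 1))  # 7494.0 s, matching the reported ~7494.516 s
```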
Your 3.747 is the fastest result I have seen, and I did check a few systems. | |
ID: 19906 | Rating: 0 | rate: / Reply Quote | |
3.614 ms/step | |
ID: 19945 | Rating: 0 | rate: / Reply Quote | |
Excellent, you just shaved off another 0.133 ms/step, raised the bar a notch, and showed that the GTX580 can operate the most demanding tasks (GIANNI_DHFR1000) while overclocked by 10.7%. A very good card. | |
ID: 19949 | Rating: 0 | rate: / Reply Quote | |
I've made a little calculation: | |
ID: 19953 | Rating: 0 | rate: / Reply Quote | |
Your 4% loss from 1h shutdown would be offset by a day or two's increased performance ;) | |
ID: 19954 | Rating: 0 | rate: / Reply Quote | |
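The break-even estimate can be sketched numerically, using the 3.747 -> 3.614 ms/step improvement reported in this thread as the per-day gain (illustrative figures only):

```python
# One hour of downtime costs 1/24 of a day's output (the ~4% loss mentioned).
downtime_loss_days = 1.0 / 24.0
# Per-day throughput gain from the 3.747 -> 3.614 ms/step improvement:
daily_gain = 1.0 - 3.614 / 3.747
print(round(downtime_loss_days / daily_gain, 1))  # ~1.2 days to break even
```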
Could well be that your GPU utilisation dropped a little upon further OC'ing. Or that GPU memory is holding you back. Generally I would have expected a much higher improvement going from 1.6 to 1.7 GHz. | |
ID: 19982 | Rating: 0 | rate: / Reply Quote | |
I've removed my PC from its case in order to make a new ventilation inlet at the bottom (the lower GPU cooler could not breathe in enough air, because it's right next to the bottom of the case). The temps went down, of course, so I raised the GTX 580's clock to 900MHz. It failed a couple of tasks, so I raised the GPU's voltage by 13mV (to 1.063V). It seems to be OK since then. It completed a _KASHIF_HIVPR_n1_unbound_ task in 2 hours (7.203 ms/step). The max temp was 65°C at 95% GPU usage (standard GTX 580 cooler @78%, plus a 12cm fan placed right at the top of the two cards). I hope I'll receive a GIANNI_DHFR task soon. | |
ID: 19990 | Rating: 0 | rate: / Reply Quote | |
I've set a new world record in crunching GIANNI_DHFRs: 3.441ms/step | |
ID: 20002 | Rating: 0 | rate: / Reply Quote | |
"How fast do you want to get the wrong result?" is not the question we're trying to answer ;) | |
ID: 20008 | Rating: 0 | rate: / Reply Quote | |
Records are made to be Broken at any Cost ... lol | |
ID: 20010 | Rating: 0 | rate: / Reply Quote | |
Perhaps the 3.441ms/step record will fall when someone uses a Sandy Bridge CPU on Linux, with some light OC'ing. | |
ID: 20011 | Rating: 0 | rate: / Reply Quote | |
EVGA has a 2-slot water-cooled 580 out; street price in Akihabara, Tokyo is ¥70,000, or over $700, a ¥20,000/$200 premium. A water-cooled GeForce GTX card has debuted for the 580: EVGA's "GeForce GTX 580 FTW Hydro Copper 2 (015-P3-1589-KR)" was released. http://www.evga.com/products/moreinfo.asp?pn=015-P3-1589-KR | |
ID: 20033 | Rating: 0 | rate: / Reply Quote | |
It looks really good, and should give good performance, but the connectors are vertically mounted, meaning that the tubing will protrude into the next slot. Pity they did not top/rear mount the connectors. pic | |
ID: 20035 | Rating: 0 | rate: / Reply Quote | |
I just installed a new GTX 580 and ran a task to see how it performs. | |
ID: 20108 | Rating: 0 | rate: / Reply Quote | |
Hi zombie67, that is FAR too slow for a GTX580. Something is very wrong! | |
ID: 20111 | Rating: 0 | rate: / Reply Quote | |
Hi zombie67, that is FAR too slow for a GTX580. Something is very wrong! Using only 1 GB (of 8 GB) of RAM. 70 GB of 146 GB free disk space. Right click on the desktop, click NVidia control panel, Manage 3D settings, Global Settings, Power management mode, Prefer Maximum Performance. Restart the system. I tried that, but it is not possible. On the Global Settings tab, power management is not listed as one of the options. On the Program Settings tab, there are two problems: 1) you have to select the program, and none of BOINC/CUDA/GPUGRID/ACEMD2 are listed. 2) Even if it were listed, the only choice under Power Management is "Adaptive". In any case, according to GPU-Z, the GPU is not being throttled. It is running at full speed, and full load. If you are running some very CPU intensive apps, consider freeing another CPU thread; some CPU tasks want more CPU time than they can get, so in BOINC check the difference between CPU time and elapsed time, in case you are running some very hungry CPU tasks. I actually ran most of that task using only 6 of 8 CPU cores. There were plenty of idle CPU cycles. Did you install the driver after the card and then do a restart? Yes. Before installing the card, I uninstalled the old card's driver, then ran Driver Sweeper, then installed the latest driver from nVidia's site (263.09). After each step, I rebooted. FWIW, on other CUDA projects, the card runs at equivalent speeds to other machines with the GTX 580. It is only this project where I am having this problem. Edit: I think the differing values for CPU time are a clue. I just don't know enough about the app to understand what it means. ____________ Reno, NV Team: SETI.USA | |
ID: 20112 | Rating: 0 | rate: / Reply Quote | |
I still think the card is throttling back. | |
ID: 20115 | Rating: 0 | rate: / Reply Quote | |
I still think the card is throttling back. Okay. Solution? And why only on this project? ____________ Reno, NV Team: SETI.USA | |
ID: 20116 | Rating: 0 | rate: / Reply Quote | |
What monitoring/tuning software tools are you using? I have a generic GTX570 and use | |
ID: 20117 | Rating: 0 | rate: / Reply Quote | |
Try the latest Beta driver; GPUZ might be misreporting the actual clocks. | |
ID: 20123 | Rating: 0 | rate: / Reply Quote | |
I now suspect the PSU. I have a new one on order, and will give it another shot when it arrives. Stay tuned. | |
ID: 20124 | Rating: 0 | rate: / Reply Quote | |
Did you check the GPU temps, as kts suggested? You may need to up the fan speed. | |
ID: 20125 | Rating: 0 | rate: / Reply Quote | |
Did you check the GPU temps, as kts suggested? You may need to up the fan speed. I run all my GTX boxes with the fans on auto and haven't noticed any of them throttle back yet ... | |
ID: 20126 | Rating: 0 | rate: / Reply Quote | |
Manually raising the fan speed increases noise, but reduces temperatures in the card and the system. This increases longevity and may reduce power consumption: hot cards leak more energy, so they draw more amps. | |
ID: 20127 | Rating: 0 | rate: / Reply Quote | |
The speeds are stock and the fans on auto. Temps are 68-70c. | |
ID: 20128 | Rating: 0 | rate: / Reply Quote | |
We've discussed PSU, overheating, throttling, optimizing the driver, RAM, CPU usage, etc. You say the card works fine (like a working GTX580 should) on other projects but not here... Total speculation, but could the access pattern of GPUGRID be finding bad video RAM, causing error correction that just doesn't happen with other projects? What benchmark/testing tools are there to verify whether a video card is operating properly in hardware? If not hardware, then software? What is the proper troubleshooting sequence for a video card? | |
ID: 20131 | Rating: 0 | rate: / Reply Quote | |
We've discussed PSU, overheating, throttling, optimizing the driver, RAM, CPU usage, etc. You say the card works fine (like a working GTX580 should) on other projects but not here... total speculation, but could the access pattern of GPUGRID be finding bad video RAM, causing error correction that just doesn't happen with other projects? What benchmark/testing tools are there to verify whether a video card is operating properly in hardware? If not hardware, then software? What is the proper troubleshooting sequence for a video card? Correction. It ran the other project at top speed for a few test tasks. Running overnight, the same slowdown became obvious. That is why I am suspecting the PSU... even though GPU-Z says it is running full out. I suspect it is not. ____________ Reno, NV Team: SETI.USA | |
ID: 20132 | Rating: 0 | rate: / Reply Quote | |
Ah! Looks like I was on the right track with the PSU. | |
ID: 20133 | Rating: 0 | rate: / Reply Quote | |
Ah! Looks like I was on the right track with the PSU. It's nearly normal. I think your CPU limits the performance of your GTX 580, or it still may be the PSU. See this task (processed on my overclocked GTX 580) for speed reference :) Or this same type of task processed on my overclocked GTX 480. You can find other GTX 580s for reference on the 'top hosts' list. My old 750W PSU has 3x 6-pin PCIe connectors. The 580 needs a 6-pin and an 8-pin, so I had to use a Y connector which takes two 6-pins and makes an 8-pin. This could be dangerous if your PSU has separate 12V rails and you connect them together with this Y cable converter. I tried switching the three of them around with no change. I had a new thought tonight. With all my 5870s, they each included a Y connector to convert two old IDE plugs to a 6-pin PCIe plug. So a double Y from several IDE cables into the Y 8-pin connector seems to be working! This is the same dangerous method as the previous one with different 12V rails. It's not recommended to use cable converters for power connectors (especially for high-current connectors like the PCIe or CPU ones); they add unnecessary contact resistance in the path of high currents, causing voltage loss and hot (even burning) connectors. The PCIe rails in my PSU are dead. Looking forward to my new PSU to clean up this cable mess. The point is, it's the PSU, and the PCIe rails FAIL. That's right. By the way, 750W should be enough for a GTX 580. | |
ID: 20136 | Rating: 0 | rate: / Reply Quote | |
Right. My point was to demonstrate that the PCI rails were the problem, by using different rails. Proof of concept. I agree that the various Y methods are wrong. | |
ID: 20138 | Rating: 0 | rate: / Reply Quote | |
Hi zombie67, good to hear all the details and that the PSU replacement resolved the problem. Your times are spot on now. | |
ID: 20139 | Rating: 0 | rate: / Reply Quote | |
Glad to hear you tracked down the issue for your card and all is well now.
This information added to the recent Selecting a PSU for dual GTX570 / 580 use thread fills in more important details for system building. Your problem has now been Y-converted to a useful solution for others. (Still looking for recipes ;) ) | |
ID: 20141 | Rating: 0 | rate: / Reply Quote | |
I had to put a new power supply in my system when I got the GTX570. That thing will take a LOT of amps just by itself when you load it up. I don't remember the numbers, but I think mine was asking for 190 watts from zero to full processor loading on the card. If I am not mistaken, the card recommends a 550 watt supply at the very minimum. Now don't forget any other stuff you have in your system: RAM disk, multiple HDDs, a lot of RAM, a 6-core processor... all that stuff EATS power quickly. | |
ID: 20146 | Rating: 0 | rate: / Reply Quote | |
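For a rough sense of scale, the 190 W figure recalled above translates into the following currents and wall draw (the 85% PSU efficiency is an assumption for illustration, not a number from this thread):

```python
gpu_delta_w = 190.0    # poster's recalled idle-to-load increase for the GTX 570
rail_v = 12.0          # PCIe auxiliary power is delivered on the 12 V rail
print(round(gpu_delta_w / rail_v, 1))       # 15.8 A on 12 V for the card alone
psu_efficiency = 0.85  # assumed efficiency, not a figure from the thread
print(round(gpu_delta_w / psu_efficiency))  # ~224 W drawn at the wall
```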
It looks like my hunch was wrong. New 1000w PSU, and still no joy. I am down to thinking that the card is throttling itself. According to anandtech: Much like GDDR5 EDC complicated memory overclocking, power throttling would complicate overall video card overclocking, particularly since there’s currently no way to tell when throttling kicks in. On AMD cards the clock drop is immediate, but on NVIDIA’s cards the drivers continue to report the card operating at full voltage and clocks. We suspect NVIDIA is using a NOP or HLT-like instruction here to keep the card from doing real work, but the result is that it’s completely invisible even to enthusiasts. At the moment it’s only possible to tell if it’s kicking in if an application’s performance is too low. It goes without saying that we’d like to have some way to tell if throttling is kicking in if NVIDIA fully utilizes this hardware. Maybe that is what's happening here. ____________ Reno, NV Team: SETI.USA | |
ID: 20158 | Rating: 0 | rate: / Reply Quote | |
Forgot to finish my thought in the previous post. I have RMA'd the card. Let's see if I have better luck with the replacement. | |
ID: 20159 | Rating: 0 | rate: / Reply Quote | |
It looks like my hunch was wrong. New 1000w PSU, and still no joy. I am down to thinking that the card is throttling itself. According to anandtech: It would be much easier to give you more useful advice if you were more specific about your component types. I am using a 1000W PSU (LC-Power Legion X2) for a dual-GPU configuration (GTX 480 + GTX 580), and I have no such problems, even when I overclocked my GTX 580 to 900MHz. Now it's running at 850MHz at factory voltage (1.050V). So if the cause of the slowness is the protective throttling, it's too sensitive on your GPU only, and a replacement should work fine. It's designed to protect GPUs from overloads caused by GPU stress-test utilities such as FurMark; 'real' GPU applications (including GPUGRID) cannot cause that much power draw and should not trigger this throttling. But if the new one is also slow, it must be some other (hardware or software) component we can't think of. Maybe a screensaver. BOINC CPU tasks run at 'low' priority, while GPU tasks run at 'below normal' priority (higher than 'low'), so if you run other CPU-demanding applications, those will run at 'normal' priority (higher than both) and will hold up BOINC CPU and GPU tasks (or slow them down a bit). There are tools for changing priority levels (I'm using eFMer Priority). Raising priority levels, however, can make your computer less responsive, or even unresponsive. You can monitor your GPU with MSI Afterburner 2.1 beta 5. KASHIF_HIVPR type tasks should produce 90-95% GPU usage. | |
ID: 20161 | Rating: 0 | rate: / Reply Quote | |
I was not clear. I RMA'd the GPU, not the PSU. The new PSU is a 1000w Cooler Master. And no, no screensaver is being used. Also, this is a dedicated cruncher; no other tasks are running. | |
ID: 20162 | Rating: 0 | rate: / Reply Quote | |