Message boards : Graphics cards (GPUs) : New nvidia beta application
Dear all, | |
ID: 14283 | Rating: 0 | rate:
![]() ![]() ![]() | |
Excellent news. That should blow the cobwebs off the server :) | |
ID: 14284 | Rating: 0 | rate:
![]() ![]() ![]() | |
Sounds good. Is this 60% improvement estimate an average across numerous card types, or should only a subset of users experience this? | |
ID: 14292 | Rating: 0 | rate:
![]() ![]() ![]() | |
I hope it will be a fantastic step toward reducing CPU usage. | |
ID: 14293 | Rating: 0 | rate:
![]() ![]() ![]() | |
Does this beta build address the CUDA/FFT bug issues? That is, should those of us with the 65nm GTX 260 try them? | |
ID: 14294 | Rating: 0 | rate:
![]() ![]() ![]() | |
How can I recognise whether it's one of the new units when it's downloaded? | |
ID: 14296 | Rating: 0 | rate:
![]() ![]() ![]() | |
From what was said in other threads, | |
ID: 14297 | Rating: 0 | rate:
![]() ![]() ![]() | |
From what was said in other threads,

What was said was that the new app is about 60% faster when compiled for compute capability 1.3 (double precision) and somewhat less fast when compiled for 1.1 (single precision). What was not said was which version or versions would actually be released.

I would expect some rise in temperatures for many cards, so keep an eye on that.

There's been nothing from the project saying this. If you have more information, please share it.

GPUGRID currently runs at 77% utilization on my GTX 280. Of the projects I currently run, Milkyway@Home has the highest utilization, approximately 90%. The temperature difference between GPUGRID and Milkyway is only 2 or 3 degrees, and that is with the GPU running some 25 degrees Centigrade below its maximum temperature and the fan well below full speed. Even running at 100% utilization, it's unlikely the running temperature would change significantly.

On older architectures (which are less massively parallel than the G200/G200b) the application is more likely to already be running closer to 100% utilization than it is on the G200s -- it's harder to keep a large number of parallel processors busy than a small number. So, most likely, the 77% utilization figure I'm seeing on the G200 is close to the lowest number you would see on any GPU. On older cards there should be less room for improvement simply by increasing GPU utilization.

The bottom line is that on the G200-based cards an increase in utilization probably won't raise the temperature a lot, and they have a lot of headroom. On older cards you're not likely to raise the utilization (since they're probably closer to 100% to start with), so there's unlikely to be any rise in temperature.

And all of that makes one huge assumption: that the increase in performance comes from improving the parallelization to raise GPU utilization. That's not necessarily true. The new application may use normal optimization techniques to perform fewer (or faster) calculations for the same result, which would not increase the operating temperature.

In any event, we'll see soon enough. We can replace the speculation and assumptions with real observations as soon as the beta apps are released.
____________
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.
ID: 14300 | Rating: 0 | rate:
![]() ![]() ![]() | |
Michael, you're not seeing much higher temperatures at MW because it uses double precision and GT200 has only 1 dp unit for every 8 normal sp shader units. If GPU Grid went from 77 to 90% utilization the temperatures would surely increase quite a bit. | |
ID: 14302 | Rating: 0 | rate:
![]() ![]() ![]() | |
Interesting. Thanks for the information. | |
ID: 14303 | Rating: 0 | rate:
![]() ![]() ![]() | |
Unless the GPUGrid team starts writing different applications for different cards, there will only be one release: it should increase performance by around 60% on CC1.3 cards and by slightly less on CC1.1 cards. | |
ID: 14306 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi, | |
ID: 14307 | Rating: 0 | rate:
![]() ![]() ![]() | |
Thanks for clearing that up. | |
ID: 14308 | Rating: 0 | rate:
![]() ![]() ![]() | |
I thought CC1.0 applied to the now obsolete G80, CC1.1 is G92, 1.3 is G200, and the CC1.2 cards use G215 and G216 cores

Yes, I was playing fast and loose with terms there. Here's the deal: with the exception of the very first G80 cards (8800GTX 768MB, IIRC), all of the G8x and G9x are compute 1.1 cards. As far as the CUDA programmer is concerned, all of the 1.1 silicon has pretty much the same performance characteristics and capabilities.

The second-generation G2xx silicon is rather more capable and has more features that are very useful to us[1], which is why we care to make the distinction for our app. Initially all of those devices described themselves as compute 1.3, but recently some low-to-mid-range GPUs have been released that appear as compute 1.2. In practice, these seem to have the same performance characteristics as the original 1.3 devices, minus double-precision support (we've not had any of these in the lab to test, though, so don't quote me).

Matt

[1] More on-chip registers so kernels can be more complex, relaxed memory access rules, better atomic memory ops and double precision, to name the highlights.
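(For anyone curious how an application can tell these generations apart at run time: the standard CUDA runtime reports each device's compute capability. The sketch below is only a generic illustration of that query, not a description of how ACEMD itself is written; the mapping in the comments simply restates Matt's description above.)

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        /* prop.major/prop.minor give the compute capability:
           1.1 = G8x/G9x, 1.2 = low/mid G21x, 1.3 = G200/G200b */
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               dev, prop.name, prop.major, prop.minor, prop.multiProcessorCount);

        if (prop.major == 1 && prop.minor >= 3)
            printf("  double precision available (CC 1.3 code path)\n");
        else
            printf("  single precision only (CC 1.1/1.2 code path)\n");
    }
    return 0;
}
```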
ID: 14312 | Rating: 0 | rate:
![]() ![]() ![]() | |
We are now testing the Windows build on our local server. Possibly in the afternoon we will upload some WUs with the new application. | |
ID: 14340 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi, I have an NVIDIA 9800GT 512MB; will it be able to crunch the new WUs? | |
ID: 14351 | Rating: 0 | rate:
![]() ![]() ![]() | |
One more question: in "Maximum CPU % for graphics 0 ... 100", which parameter should I write? And what does changing this % affect when computing the WU?

That parameter applies only to projects that have a screensaver (SETI, Einstein, CPDN, Rosetta, Docking, etc.). Also, as far as I know, only CPU-based tasks have screensavers and there are no GPU applications with screensavers. The parameter says how much of the CPU may be dedicated to running the screensaver graphics (taking it away from the number crunching). For GPUGRID, this setting has no effect.
ID: 14355 | Rating: 0 | rate:
![]() ![]() ![]() | |
Maximum CPU for graphics should be set to 100% | |
ID: 14360 | Rating: 0 | rate:
![]() ![]() ![]() | |
On Windows it does not work well. We will come out with Linux first. | |
ID: 14545 | Rating: 0 | rate:
![]() ![]() ![]() | |
We have uploaded the new application for Windows and Linux with some 50 workunits to test. First come first served. | |
ID: 14790 | Rating: 0 | rate:
![]() ![]() ![]() | |
Does that mean the Windows version now works well? | |
ID: 14798 | Rating: 0 | rate:
![]() ![]() ![]() | |
What are these workunits called? | |
ID: 14800 | Rating: 0 | rate:
![]() ![]() ![]() | |
We actually uploaded only the Windows one now. | |
ID: 14801 | Rating: 0 | rate:
![]() ![]() ![]() | |
They are called something like L*-TEST. | |
ID: 14803 | Rating: 0 | rate:
![]() ![]() ![]() | |
Sorry guys. I let my ION try to pick up tasks and it downloaded and spat out 13 tests. It's now set to not pick up any more tasks. Not sure why I could not pick up any tests on my GTS 250 or GTX 260, perhaps just timing. | |
ID: 14805 | Rating: 0 | rate:
![]() ![]() ![]() | |
L15-TONI_TEST2901-0-10-RND2988 (state: In progress) | |
ID: 14807 | Rating: 0 | rate:
![]() ![]() ![]() | |
4 more tasks completed and validated: | |
ID: 14810 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX 260 OC 696/1500/999, BOINC 6.10.25, XP64 | |
ID: 14811 | Rating: 0 | rate:
![]() ![]() ![]() | |
The CPU utilization should have remained the same. We will look into it. | |
ID: 14812 | Rating: 0 | rate:
![]() ![]() ![]() | |
I have two of these WUs running on my Windows 7 64-bit Pro, i7 920 with two GTX260 (216 core, 55nm) GPUs. After they were about 66% complete I noticed that they were each using 100% of one of the eight multithreaded cores. I closed down BOINC Manager and switched off multithreading. On restarting BOINC Manager, each WU was now using 100% of one of the four non-multithreaded cores. The elapsed timer in BOINC Manager started again at the point at which it had previously stopped (3.5 hours). The time to completion looks to be about the same as when previously using hyperthreading (approximately 5 hours). I will forward details when completed. | |
ID: 14816 | Rating: 0 | rate:
![]() ![]() ![]() | |
The result of one of the WU's which completed after approximately five hours and seven minutes. The other is similar. Temperatures normal. | |
ID: 14817 | Rating: 0 | rate:
![]() ![]() ![]() | |
I picked up 2 so far. Links are here and here | |
ID: 14820 | Rating: 0 | rate:
![]() ![]() ![]() | |
L6-TONI_TEST2901-0-10-RND2222_2 | |
ID: 14822 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX285, XP32/ i7-920 HT ON @4.0 GHz. | |
ID: 14825 | Rating: 0 | rate:
![]() ![]() ![]() | |
Kaboom, in the middle of the night. | |
ID: 14826 | Rating: 0 | rate:
![]() ![]() ![]() | |
9800 GT, 607MHz/1517MHz/900MHz (512MB) driver: 19062 / QX9650 @3.66GHz / Vista64 | |
ID: 14827 | Rating: 0 | rate:
![]() ![]() ![]() | |
L45-TONI_TEST2901-0-10-RND5880_0 is 43% complete after 5h, so it should complete in around 6 more hours (though the estimated time to finish is 12h). This is on a 2.2GHz Opteron quad with a GT 240. On that system typical task turnaround is about 17 or 18h, so it is performing about 60% faster on that CC1.2 card. | |
ID: 14828 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX 260 OC 696/1500/999, BOINC 6.10.25, XP64

8800GT, BOINC v6.10.29, Win7-64, AMD X2

Running with 2 instances of Wieferich@Home, the beta runs a bit slower than the old app, the machine is barely responsive, and 1 instance of Wieferich stalls. If I free up one complete core (close 1 instance of Wieferich) the beta runs faster and the machine becomes responsive. With the old app everything ran fine with an instance of Wieferich on each core. The 100% CPU core utilization of the beta is a problem...
ID: 14832 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX 295, 701MHz/1509MHz/1086MHz (896MB) driver: 19062 / i7-860 HT @3.8 GHz/ Vista64 | |
ID: 14833 | Rating: 0 | rate:
![]() ![]() ![]() | |
9800GTX Stock no o/c Phenom2 o/c 3.2Ghz 8Gb RAM Vista64 | |
ID: 14834 | Rating: 0 | rate:
![]() ![]() ![]() | |
Looks like this ACEMD beta 6.05 workunit has a severe underestimate of how much CPU time it uses: | |
ID: 14835 | Rating: 0 | rate:
![]() ![]() ![]() | |
I have just uploaded acemdbeta 6.06, which should use less CPU than before. | |
ID: 14837 | Rating: 0 | rate:
![]() ![]() ![]() | |
L36_TONI_TEST2901-1-10-RND9113_0 | |
ID: 14838 | Rating: 0 | rate:
![]() ![]() ![]() | |
The earlier Beta, L45-TONI_TEST2901-0-10-RND5880_0 completed as expected in 11h 34m. | |
ID: 14839 | Rating: 0 | rate:
![]() ![]() ![]() | |
The shorter tasks are all reported. | |
ID: 14840 | Rating: 0 | rate:
![]() ![]() ![]() | |
............ If it is a lot slower, what if we also credit for the usage of the CPU? In such case, we would ask for 1 CPU and 1 GPU of course.

That could get expensive in pure credit terms, and therefore go headlong into a lot of controversy. For that to work, the CPU element would need to compensate for the loss of one core at the highest rate given by projects for CPU work. Arguably that's Aqua, and currently their high-end app on my Phenom II runs for 32 hrs at 32,000 credits - an i7 does it in around a quarter of that time. The problem comes when lower-rated machines run GPUGRID-capable cards. A fixed rate would bring a howl of protest as the lower-capacity machine gets the higher rate for CPU. I've used extremes to illustrate the issue.

As Aqua found out to its cost a few months ago, it's a foolish person who ignores this kind of credit issue. Whatever anyone thinks of credits and the reality that they are worthless, the S%$T storm a high fixed rate will cause is going to slam the project into the deck. To get out of that one, the CPU element would need to be calculated using the classic BOINC sliding scale and the whole crazy world of cobblestones, benchmarks and built-in elements to the app to make it work.

If you take the view that going for a mid-level fixed CPU rate would be "fair", you will just get it in the neck both ways: "low-end machines get too much" or "high-end machines don't get enough". Yup, it's a silly nightmare, but it will happen; don't even dream you'll avoid it, because you will not. The only way round it all is the classic sliding scale based on the "power" of the machine. However, do the latter, and GPU based Projects will scream blue murder that a GPU project is giving out far too many credits...

The answer? I have no idea; it will depend how much flak you are prepared to take... Turn left you're dead, turn right you're dead, go straight ahead you're probably dead... :)

Ain't BOINC fun :)

Regards
Zy
ID: 14841 | Rating: 0 | rate:
![]() ![]() ![]() | |
I have just uploaded acemdbeta 6.06, which should use less CPU than before.

Thanks gdf and others for what you're doing. Here is my POV.

Usually, a lot of people really like using their GPU on one specific project and their CPU on another (I do). I don't really like seeing 100% of a CPU core used just for feeding a GPU. Well, if it really is slower otherwise and we don't have the choice, then I will accept that. However, crediting CPU usage is a fair approach but I dislike it, as 1k/day is almost nothing on this project but a lot on others. Usually when a project has a GPU app, I stop running my CPUs on that project and put them on others. Anyway, I cannot compare, as I didn't receive any beta WU on my gentoobox (GTX275).

EDIT: By the way, with this new app, I guess tasks are running faster and we will get the same amount of points per WU, so I deduce we will have a higher RAC, right? I'm not sure whether this is the right topic or not, but even if I have to admit I'm crunching with the GPU a bit for the points (not solely, otherwise I would have been on Collatz by now, which I did anyway, but I guess I won't), I'm not sure if it is really correct to do so... It would be very easy for a new project to play with the credit to attract people with an easy method: first release a "crappy" GPU app, set a fair credit/WU, and then improve the app and thereby increase the global RAC (if the new app is 10 times faster than the first one, then 10 times more credits... this is not good for BOINC I think). I really don't mean you are doing that! (Not at all!!) It's because I know this can happen that I prefer to explain this opinion here. Maybe GPUGrid can once again lead the way xD. A fair credit system would not increase the credit 10 times (in my example), but far less, just to stay consistent.
ID: 14842 | Rating: 0 | rate:
![]() ![]() ![]() | |
My long Beta tasks stayed at zero % for a while, but one is now at 9%; the other 2 are still at zero. | |
ID: 14843 | Rating: 0 | rate:
![]() ![]() ![]() | |
Anyone done an L36_TONI_TEST2901 task? Mine has been running for 7hrs 36mins with the countdown clock not working, showing 0% for ages - I have a feeling it's a bust, but I don't want to sell it short if in fact it's likely to run through. | |
ID: 14845 | Rating: 0 | rate:
![]() ![]() ![]() | |
Just started L42-TONI_TEST2901-3-10-RND1170_0 acemdbeta version 606 on a 9800GTX, it'll run overnight | |
ID: 14846 | Rating: 0 | rate:
![]() ![]() ![]() | |
All my Beta tests show 0% until they're finished. I'm going through my 4th & 5th ones if I'm not mistaken. | |
ID: 14847 | Rating: 0 | rate:
![]() ![]() ![]() | |
If a CPU is used to facilitate a 60% increase in performance of a GPU it is well worth the sacrifice even without any CPU points.

That is quite logical. But some people will value their CPU projects much higher than GPU-Grid and are thus not fine with that sacrifice.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 14849 | Rating: 0 | rate:
![]() ![]() ![]() | |
Just had a 6.05 WU that refused to start. Instead the machine downloaded a new WU and left the 6.05 WU sitting "ready to start". I aborted it and it was immediately sent back out as a 6.06 WU. Hopefully 6.06 is better. IMO the old app (6.71) was far preferable to 6.05. The CPU usage really needs to be improved before this gets released. Think what will happen when the beta in its current form hits a single core machine. | |
ID: 14850 | Rating: 0 | rate:
![]() ![]() ![]() | |
That is quite logical. But some people will value their CPU projects much higher than GPU-Grid and are thus not fine with that sacrifice.

I agree ETA. The high CPU usage is a show stopper IMO.
ID: 14851 | Rating: 0 | rate:
![]() ![]() ![]() | |
Let me get this right ... a GIANNI_BIND used to take 8 hours on my GPU and I get 6118 points ... times 3 per day and I get 18354. Now if I throw in a CPU I can run the same WU in 4.8 hours. So that means I can do 5 WUs per day for 30590 points and you guys are bi^&chin about using 1 CPU core to get the additional 12236??? | |
ID: 14852 | Rating: 0 | rate:
![]() ![]() ![]() | |
Wow, I'm on almost 20 hours with L6-TONI_TEST2901-1-10-RND2222_0 & 18 hours with L13-TONI_TEST2901-1-10-RND4450_1 | |
ID: 14854 | Rating: 0 | rate:
![]() ![]() ![]() | |
Mixed results with 6.06. Had two long units complete with the 60% improvement in performance and even less cpu utilisation than a 6.71 unit. Both were awarded 6000 or so points. | |
ID: 14855 | Rating: 0 | rate:
![]() ![]() ![]() | |
There were 2 sets of Betas released, containing short Betas and long Betas. The short Betas lasted about 5 minutes. CPU time usage was fixed for the second release of the long Betas, going by the posts. Run times for the long Betas:

37,082.82 s on a GTS 250 (about 10.5 h) - CC1.1
29,239.43 s on a GTX 260 216sp (about 8.5 h) - CC1.3
45,470.80 s on a GT 240 (about 12.5 h) - CC1.2
41,658.85 s on a GT 240 (about 11.5 h) - CC1.2
ID: 14857 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi | |
ID: 14858 | Rating: 0 | rate:
![]() ![]() ![]() | |
The application with low cpu usage works well, so no problem. | |
ID: 14860 | Rating: 0 | rate:
![]() ![]() ![]() | |
GDF ... you and your team are the best !!! | |
ID: 14861 | Rating: 0 | rate:
![]() ![]() ![]() | |
Could you guys check if 6.06 is slower? For some reason the approx. elapsed time does not get printed. | |
ID: 14865 | Rating: 0 | rate:
![]() ![]() ![]() | |
OK, I had this all nice but then lost it all as I had timed out :-( | |
ID: 14866 | Rating: 0 | rate:
![]() ![]() ![]() | |
Any guesstimates available for how long an L42-TONI_TEST2901-3-10-RND1170_0 acemdbeta version 606 task on a 9800GTX would run? It's been going for 19hrs 26mins so far. | |
ID: 14867 | Rating: 0 | rate:
![]() ![]() ![]() | |
... and you guys are bi^&chin about using 1 CPU core to get the additional 12236???

Not me, I'd take the increase in GPU speed any time. But we've had this discussion before, in the pioneer time of GPU-Grid when they still needed to figure out how to use less than 1 core. From that I know the CPU power is important to many participants. If this proves difficult to fix, what if there was a user preference for:

- maximum GPU performance, use 1 CPU
- use a minor amount of CPU and crunch what you can on the GPU

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 14868 | Rating: 0 | rate:
![]() ![]() ![]() | |
ETA ... read up a few posts ... GDF fixed it !!! | |
ID: 14869 | Rating: 0 | rate:
![]() ![]() ![]() | |
Zydor ... I find that the beta is taking about 65% of the time it takes me to process a GIANNI_BIND. Not sure what a 9800 takes, but there have not been reports of these betas crashing (in fact they appear to be remarkably stable). I would suggest that if it is version 6.05 you abort it, because we have already moved on to 6.06, which uses much less CPU time. If you want the points, just let it run. | |
ID: 14870 | Rating: 0 | rate:
![]() ![]() ![]() | |
This one ran for 28 hours on a 8800GT with v6.05, went to 100% and then was back to 17% when I aborted it. CPU time was also > 26 hours: | |
ID: 14871 | Rating: 0 | rate:
![]() ![]() ![]() | |
Could you guys check if 6.06 is slower? For some reason the approx. elapsed time does not get printed.

One is running on one of my 9800GTs. There is no progress shown, so I have to wait for completion. But the GPU load looks very good (low CPU load).
ID: 14872 | Rating: 0 | rate:
![]() ![]() ![]() | |
You have to compare speed between 6.05 and 6.06 which are all run on a TONI- workunit (there are two types, one short and one long). What I would like to know is the speed on a fully CPU loaded host, with all cores busy. | |
ID: 14873 | Rating: 0 | rate:
![]() ![]() ![]() | |
I have a L7-TONI_TEST WU here. (6.06) | |
ID: 14874 | Rating: 0 | rate:
![]() ![]() ![]() | |
I think so. | |
ID: 14875 | Rating: 0 | rate:
![]() ![]() ![]() | |
Running L42-TONI_TEST2901-3-10-RND1170_0 acemdbeta version 606 on a 9800GTX - no progress bar or % counter. Its up to 19hrs 55mins run time at present. | |
ID: 14876 | Rating: 0 | rate:
![]() ![]() ![]() | |
You have to compare speed between 6.05 and 6.06 which are all run on a TONI- workunit (there are two types, one short and one long). What I would like to know is the speed on a fully CPU loaded host, with all cores busy.

The results I posted above were run on an i7-920, HT ON, fully loaded with 8 threads of WCG's HFCC subproject. I am running that unit headless so I don't know if the progress bar was displayed properly.
____________
Thanks - Steve
ID: 14877 | Rating: 0 | rate:
![]() ![]() ![]() | |
Our tests show that under full load (all CPUs used), the application is very slow. This is probably true for the standard application as well, a bit less so because the kernels are longer there. | |
ID: 14879 | Rating: 0 | rate:
![]() ![]() ![]() | |
sorry, but this will be a very bad move imo... | |
ID: 14880 | Rating: 0 | rate:
![]() ![]() ![]() | |
If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save 1 CPU core! | |
ID: 14881 | Rating: 0 | rate:
![]() ![]() ![]() | |
Previously it worked with comparatively little CPU support. Does this method not work any more due to the algorithm changes? | |
ID: 14882 | Rating: 0 | rate:
![]() ![]() ![]() | |
I think that we have a big hit on the previous application as well. | |
ID: 14883 | Rating: 0 | rate:
![]() ![]() ![]() | |
If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save 1 CPU core!

Makes sense from a performance perspective. 0.6 or thereabouts would be fine, as by implication dual GPUs are involved, which by and large means a more PC/technically aware individual who will realise the sense in the trade-off. 0.6 should also allow access to that CPU by other apps for single-GPU owners, and that will help fend off yelling about credits and interference with other projects.

It's a very obvious statement to say the priority is the needs of the app - but I will repeat it to prevent nugatory spinning off into credit rhetoric here. This is about how to launch this one and the real issues it will face in BOINC Land, not credit rhetoric. Utilisation of the CPU/GPU in this way is relatively new in BOINC - SETI does a version of it in one of their apps, but the credit spin there is another world as they are relatively low on credits anyway, and this GPUGRID app is engineered from the ground up - a different ball game.

The fact remains, whatever the rhetoric, that a configuration of this nature will open doors to the chattering classes, and if it's not approached and "marketed" correctly, the droves of BOINC crunchers who love to repeat dramatic rumour will feed momentum of bad news at its launch - that's the last thing that's needed when it's all there for the right reasons. Therefore I would strongly suggest a brief "time out" at some point to put together the case for it, why the app has ended up the way it has, and why the credits are the way they end up, posting it prominently as a sticky. GPUGRID members can then refer/link to it in their travels around BOINC Land and nip rumours in the bud.

We can all make a grand case in theory that such a precaution is not required, as all true and righteous crunchers naturally march forth to the golden light giving selfless assistance to the goodness of mankind. However, we all know life in BOINC Land is not so simplistic - unfortunately :) It's not going to take much effort, and it will go a long way toward deflecting critics by being open and transparent. There will still be some moaning no doubt, but hopefully most will be nipped off by such a move.

Regards
Zy
ID: 14884 | Rating: 0 | rate:
![]() ![]() ![]() | |
Please, please, please go to 1GPU + 1 CPU! | |
ID: 14885 | Rating: 0 | rate:
![]() ![]() ![]() | |
I think that we have a big hit on the previous application as well.

On my systems v6.71 running with a CPU core idle did not speed up the application. V6.71 played very well with every other project, and I currently run more than a dozen on various machines. I wish people would quit talking about points gained or lost. For many of us it's not about points. It's about being able to run work for other projects too. If ANY project makes that difficult it becomes expendable. I really like GPUGRID and want to keep running it, so if this app makes running other projects more difficult please give us the option of running the old app (via app_info.xml if necessary).
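(For anyone who has never used the anonymous-platform mechanism mentioned here, a minimal app_info.xml sketch follows. The application and executable names are assumptions pieced together from the version numbers quoted in this thread, not values published by the project; check the file names already in your projects\www.gpugrid.net folder and keep a backup, since a wrong app_info.xml will discard cached work.)

```xml
<app_info>
    <app>
        <name>acemd</name>   <!-- assumed application name -->
    </app>
    <file_info>
        <name>acemd_6.71_windows_intelx86__cuda.exe</name>   <!-- hypothetical; use the file already on disk -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>acemd</app_name>
        <version_num>671</version_num>
        <avg_ncpus>0.64</avg_ncpus>   <!-- matches the "0.64 CPUs + 1 GPU" BOINC shows for 6.71 (see later post) -->
        <max_ncpus>1</max_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>acemd_6.71_windows_intelx86__cuda.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```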
ID: 14886 | Rating: 0 | rate:
![]() ![]() ![]() | |
On a long-term prognosis I would agree re 6 cores etc. - however, the transition to multi-core nirvana has a bear trap. | |
ID: 14887 | Rating: 0 | rate:
![]() ![]() ![]() | |
There's lots of speculation and AFAIK few machines that have run many v6.05 & v6.06 WUs. Well here's one: | |
ID: 14889 | Rating: 0 | rate:
![]() ![]() ![]() | |
Our tests show that under full load (all CPUs used), the application is very slow.

When you say under full load, what kind of load are you talking about? My load, and likely that of other people who are concerned about dedicating a full CPU for each GPU WU, is with other BOINC projects, but they all share pretty well and I got very good results with the 6.06 version. If you would like more detailed testing, I have a GTX295 and a GTX285 that I can configure, set and run any way you would like, to help make the best decision for the project.
____________
Thanks - Steve
ID: 14890 | Rating: 0 | rate:
![]() ![]() ![]() | |
Snow Crash, that was your GTX 285 I used for the comparison above. Hope that's OK. | |
ID: 14891 | Rating: 0 | rate:
![]() ![]() ![]() | |
Within 2 months, 6-core systems with 12 threads will be available. You will even be able to get dual-socket versions. | |
ID: 14892 | Rating: 0 | rate:
![]() ![]() ![]() | |
Snow Crash, that was your GTX 285 I used for the comparison above. Hope that's OK.

I noticed that :-) and that's fine by me. If I didn't want anyone to see them I would make them hidden.

It looks like v6.06 was running well, even better than the current v6.71 app.

Definitely much better ... could I have more please?
____________
Thanks - Steve
ID: 14893 | Rating: 0 | rate:
![]() ![]() ![]() | |
Beyond, There's lots of speculation and AFAIK few machines that have run many v6.05 & v6.06 WUs. Well here's one: | |
ID: 14895 | Rating: 0 | rate:
![]() ![]() ![]() | |
Definitely much better ... could I have more please?

Hi, run the 6.06 application together with, say, SETI/Aqua running on all your CPU cores and let's see what the performance is like.

GDF
ID: 14896 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi GDF, Beyond, | |
ID: 14897 | Rating: 0 | rate:
![]() ![]() ![]() | |
I own the GTX285 that I posted results for. | |
ID: 14898 | Rating: 0 | rate:
![]() ![]() ![]() | |
If the hit on the GPU app is so high, then it does not make any sense to have less than 0.1 CPU driving the GPU. You might lose up to 100% of the performance of 256 GPU cores to save 1 CPU core!

Folding@Home also used 1 CPU core for their ATI client. They worked around it using flush intervals, buffering and dual-buffering as environment variables. I don't know if this could be applied to CUDA...

I get your point with the performance, but as a whole I think this shouldn't stay this way. I think at the moment virtually no one runs GPUGRID exclusively. Say I have a quad-core with 1 GPU; my objective is to maximize my multi-project participation. To illustrate, let's say I run WCG on the CPU and GPUGRID on the GPU. I'm currently able to contribute 4 cores to WCG and "1 core" (the GPU) to GPUGRID. This allows me to have "5 cores" in my quad-core computer. I was able to run 4 WUs of WCG at a time, and when I found out about GPUGRID, I was able to contribute to this project on top of my former regular contribution to WCG.

Now, it would mean that I'm still contributing to the 2 projects, but it would degrade my WCG production. I'll end up with "4 cores" (4 WUs running on a quad + GPU). I'm not talking about credits - in that perspective the new solution would outweigh the old - but in terms of output, compared to the old situation and to other GPU projects. GPU crunching is still in an early stage, but it has until now been seen as an additional source of production: it works on top of what was possible before, as a "5th core". As somebody before me said, the other GPU projects all use almost no CPU power. If you have a computer dedicated to GPUGRID, the new solution is of course the best one, as you're not viewing it from a multi-project point of view.
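(The CUDA-side knob that usually decides whether a GPU app burns a whole core or almost none is how the host thread waits for the GPU: spin-polling gives the shortest per-step latency, while blocking sync frees the core at the cost of a wake-up delay on every short kernel. The sketch below only illustrates that generic trade-off with made-up names; it is an assumption about the kind of choice involved, not a description of how ACEMD is actually implemented.)

```c
#include <stdio.h>
#include <cuda_runtime.h>

/* Hypothetical kernel standing in for one MD time step. */
__global__ void md_step(float *state, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        state[i] += 0.001f * state[i];   /* placeholder work */
}

int main(void)
{
    /* How the CPU thread waits for the GPU:
       cudaDeviceScheduleSpin          : busy-wait, ~100% of one core, lowest latency.
       cudaDeviceScheduleBlockingSync  : sleep until the GPU signals, near-zero CPU,
                                         but each short kernel pays a wake-up cost. */
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    const int n = 1 << 20;
    float *d_state;
    cudaMalloc(&d_state, n * sizeof(float));
    cudaMemset(d_state, 0, n * sizeof(float));

    for (int step = 0; step < 1000; ++step) {
        md_step<<<(n + 255) / 256, 256>>>(d_state, n);
        cudaDeviceSynchronize();   /* this wait is where the CPU core goes (or doesn't) */
    }

    cudaFree(d_state);
    printf("done\n");
    return 0;
}
```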
ID: 14899 | Rating: 0 | rate:
![]() ![]() ![]() | |
When running a 6.71 application Boinc says 0.64CPUs + 1GPU. | |
ID: 14901 | Rating: 0 | rate:
![]() ![]() ![]() | |
If you set Boinc to use 3 of the 4 CPUs, this actually applies to GPUGrid tasks as well. So Boinc will try to run three WCG CPU tasks, and use one of these CPUs for the GPU tasks, leaving one CPU free (useless).

This is not the case based on my most recent experience. Last night I set BOINC to use 75% of available cores. Turned it loose and I got 100% load on 3 out of 4 cores and an average of about 30% on the fourth with no background tasks active. If I suspend activity on all projects except GPUgrid I get the same result: one core with a 30% load, the rest with nothing. That's with two 6.71 GPUgrid apps running. BOINC 6.10.18, Win7 Ult64, C2Q @ 3.83 GHz, and two GTX 260-216. I'm waiting for a few more WUs to finish before I check to see if there's a performance advantage to running this way.
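(For reference, the "use 75% of processors" experiment above corresponds to the multiprocessor percentage in BOINC's computing preferences; it can also be set directly in global_prefs_override.xml in the BOINC data directory. The tag names below are quoted from memory of the standard preference set, so treat them as an assumption and prefer the Manager's preferences dialog if in doubt.)

```xml
<global_preferences>
   <!-- use at most 75% of the processors, i.e. 3 of 4 cores on a quad -->
   <max_ncpus_pct>75.0</max_ncpus_pct>
   <!-- leave the CPU time slice at 100% so the remaining tasks are not throttled -->
   <cpu_usage_limit>100.0</cpu_usage_limit>
</global_preferences>
```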
ID: 14902 | Rating: 0 | rate:
![]() ![]() ![]() | |
I've been aborting v6.71 tasks on my GTX 260 trying to get a v6.06 WU to test, no luck. | |
ID: 14903 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX 295, 701MHz, 1509MHz, 1086MHz (896MB) driver: 19062 , i7-860@3.8 GHz/, Vista64 | |
ID: 14905 | Rating: 0 | rate:
![]() ![]() ![]() | |
I just ran this beta test. It errored out instantly, as it did with my wingmen. | |
ID: 14906 | Rating: 0 | rate:
![]() ![]() ![]() | |
My first test 6.06 WU choked after running for about 1.5 hours. Saw the issue where the progress bar never moved, but saw the message below after it errored out. | |
ID: 14908 | Rating: 0 | rate:
![]() ![]() ![]() | |
Jeremy wrote:
If you set Boinc to use 3 of the 4 CPUs, this actually applies to GPUGrid tasks as well. So Boinc will try to run three WCG CPU tasks, and use one of these CPUs for the GPU tasks, leaving one CPU free (useless).

I second that. Running MW on a HD4870 and a C2Q: if I set BOINC 6.10.29 to use 75% CPU it launches 3 CPU tasks and one MW@ATI. This way performance is much better than at 4 CPU + 1 MW, even though MW itself uses little CPU. The catch here is that it needs CPU support often and at precise intervals. So effectively you have to dedicate one core here as well... or live with a slower GPU.

Tom Philippart wrote:
I'm not talking about credits, in that perspective the new solution would outweigh the old, but I'm talking in terms of output.

You're measuring with two different gauges here. A reduction of your WCG output by a factor of 1.33 does count, but a GPU-Grid output increase by a factor of 1.66 does not count? You must not count it purely in terms of "cores", as that can be quite misleading. Or is one core of a Celeron 266 MHz worth as much as one of an i7? Or a GTX260 as much as a GTX380?

Zydor wrote:
... and a second or more app(s) could be there using one cpu and one gpu. Place a test on the faster app to allow access to Dual and Quad core only, and denying download to single core.

Don't deny it, just don't make it the default. Otherwise dedicated cruncher boxes will hate you ;)

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 14909 | Rating: 0 | rate:
![]() ![]() ![]() | |
You might lose up to 100% of the performance of 256 GPU cores to save 1 CPU core!

The reason the CPU performance is important to many of us is, well, look down at my sig. We run other applications, most of which don't run on GPUs. Many, many problems simply don't lend themselves to parallelization very well. While helping with medical research is fine and noble, so are other tasks such as preventing the spread of malaria or keeping an asteroid from falling on my head.

One of the criteria I use to decide whether or not to run a project is whether it will interfere with other projects I run. A GPU-based project is already consuming a highly valuable resource. I don't want it to also gobble up a CPU core that would otherwise be used by projects that can't run on the GPU. Likewise, I won't run something like Einstein@Home's ABP1/2 GPU task because it wastes the GPU, which could be put to better use by projects such as GPUGRID.
____________
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.
ID: 14910 | Rating: 0 | rate:
![]() ![]() ![]() | |
Running MW on a HD4870 and a C2Q: if I set BOINC 6.10.29 to use 75% CPU it launches 3 CPU tasks and one MW@ATI. This way performance is much better than at 4 CPU + 1 MW, even though MW itself uses little CPU. The catch here is that it needs CPU support often and at precise intervals. So effectively you have to dedicate one core here as well... or live with a slower GPU.

This hasn't been my experience at all. I'm running 6 HD4770 ATI cards on 4 machines. MW tasks reliably take 210 seconds CPU time and 212 seconds elapsed time no matter what project is running on the 4 CPU cores (Athlon II 620, Win7-64, ATI v9.12). So I thought I'd give it a test to check out your theory. The machine I tested is running 4 instances of SIMAP on the CPU (10 WUs run each way). Times:

MW + 4 instances SIMAP: 3:30 CPU time --- 3:32 elapsed time
MW + 0 instances SIMAP: 3:29 CPU time --- 3:31 elapsed time

There you have it. The dedicated quad with nothing running but the OS saves 1 second, or about 0.47%, not much at all. As a trade-off, each CPU core pumps out a SIMAP WU every 28 minutes. If your results differ much from these you might try adding -b to your MW app_info.xml commandline parameters :-)
ID: 14911 | Rating: 0 | rate:
![]() ![]() ![]() | |
Good news. We have found the problem of the hanging. | |
ID: 14912 | Rating: 0 | rate:
![]() ![]() ![]() | |
Should we abort the v6.06 WUs that haven't started yet? | |
ID: 14913 | Rating: 0 | rate:
![]() ![]() ![]() | |
There was a problem with 6.07, so we removed it for now. | |
ID: 14915 | Rating: 0 | rate:
![]() ![]() ![]() | |
Beyond, thanks for that post. My tests were done a few months back; I can't remember if it was 0.20 or 0.19. Now I'm running a C2Q, Win7 64, Cat 9.11 and MW 0.21 without app_info. | |
ID: 14916 | Rating: 0 | rate:
![]() ![]() ![]() | |
I have been trying to track the various beta WUs | |
ID: 14919 | Rating: 0 | rate:
![]() ![]() ![]() | |
The timings of 6.05 and 6.06 could be wrong, as there was a problem with restarting. Only the fast times are correct. | |
ID: 14921 | Rating: 0 | rate:
![]() ![]() ![]() | |
Beyond, thanks for that post. My tests were done a few months back; I can't remember if it was 0.20 or 0.19. Now I'm running a C2Q, Win7 64, Cat 9.11 and MW 0.21 without app_info.

You should have a PM... Also, sorry, I made a typo in the previous post. The commandline should be:

<cmdline>b-1</cmdline>

not -b. That should get you a bit more GPU usage :-)
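(For anyone wiring that parameter into Milkyway's app_info.xml, the <cmdline> element sits inside the <app_version> block, as in the fragment below. The app and file names shown are hypothetical placeholders rather than values given in this thread; keep whatever your existing file already uses.)

```xml
<app_version>
    <app_name>milkyway</app_name>                 <!-- assumed name; use what your file already has -->
    <version_num>21</version_num>                 <!-- MW 0.21, as mentioned above -->
    <cmdline>b-1</cmdline>                        <!-- the corrected parameter from the post above -->
    <file_ref>
        <file_name>milkyway_0.21_ATI.exe</file_name>   <!-- hypothetical executable name -->
        <main_program/>
    </file_ref>
</app_version>
```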
ID: 14925 | Rating: 0 | rate:
![]() ![]() ![]() | |
A problem with an ACEMD beta version 6.06 (cuda) workunit: | |
ID: 14927 | Rating: 0 | rate:
![]() ![]() ![]() | |
6.08 is looking very nice!!! | |
ID: 14931 | Rating: 0 | rate:
![]() ![]() ![]() | |
As I said, since nearly no one is running GPUGRID as their only project, I want to run it on top of my regular production: stay with the normal output of my primary project and take only minor losses to contribute with the GPU. This is a higher output than running nothing on the GPU. My point is that I want to run it "on top" of my primary project. I don't want to touch the output of my primary focus. That's also the reason why I didn't run the Folding ATI client until they built the workaround to reduce CPU usage.

Anyway, I think aiming to reduce CPU usage, for instance by using buffering techniques (as far as that's possible), is the best solution. Einstein@home is also working on reducing the CPU usage of their app.
ID: 14933 | Rating: 0 | rate:
![]() ![]() ![]() | |
Had two 6.08 Beta's fail within 3 seconds: Beta WU 1 and Beta WU 2 | |
ID: 14934 | Rating: 0 | rate:
![]() ![]() ![]() | |
Please go ahead with the test. 6.08 is looking very nice!!! | |
ID: 14935 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX260 (754 MHz, Shader: 1568 MHz, Speicher: 1211 MHz) (WinXP_32, Kentsfield, 4xSpinhenge@home) | |
ID: 14936 | Rating: 0 | rate:
![]() ![]() ![]() | |
I got a 6.08 result which, unfortunately, was consuming a significant portion of a C2Q core (7% on task manager, so about 28% of the CPU time of one core.) Typical usage is 0 to 1 percent in Task Manager. | |
ID: 14937 | Rating: 0 | rate:
![]() ![]() ![]() | |
CPU usage is not important anymore. Just look at the time/step with and without load. So far, it seems very good. | |
ID: 14938 | Rating: 0 | rate:
![]() ![]() ![]() | |
Running three 6.08 units and eight WCG units together on one machine using 275's and looks fine. CPU load for 6.08 units is 17-18% and WCG units average 89%. GPUgrid tasks showing 60% performance increase. | |
ID: 14939 | Rating: 0 | rate:
![]() ![]() ![]() | |
I got a 6.08 result which, unfortunately, was consuming a significant portion of a C2Q core (7% on task manager, so about 28% of the CPU time of one core). Typical usage is 0 to 1 percent in Task Manager.

I need to correct my own statement. Either I'm misremembering how each project behaves, or something is different on this box today. Both are fairly likely. :) The production (6.71) application is showing 5 to 6 percent CPU utilization in Task Manager, so the difference in CPU usage between the two is insignificant.
ID: 14943 | Rating: 0 | rate:
![]() ![]() ![]() | |
Fully loaded with 8 CPU WUs, one 6.71 GPU and one 6.08 GPU. | |
ID: 14944 | Rating: 0 | rate:
![]() ![]() ![]() | |
Running 2 beta 6.08 WUs at the same time on a GTX295 takes longer than they would if you processed them with 6.71. | |
ID: 14945 | Rating: 0 | rate:
![]() ![]() ![]() | |
Could somebody with one of those failing GTX260 see if the beta app works for them? | |
ID: 14949 | Rating: 0 | rate:
![]() ![]() ![]() | |
Just started running one IBUCH task using the 6.08 Beta application on a GTX260 sp216: | |
ID: 14950 | Rating: 0 | rate:
![]() ![]() ![]() | |
Was this GTX260 causing problems for the FFT bug before? | |
ID: 14953 | Rating: 0 | rate:
![]() ![]() ![]() | |
Could somebody with one of those failing GTX260 see if the beta app works for them?

Picked up 4 IBUCH_1000smd on my machine with dual GTX260's (65nm). Current estimate is 3 hours 40 minutes to completion. Will let you know in a few hours how they go. Link to host here.
____________
BOINC blog
ID: 14954 | Rating: 0 | rate:
![]() ![]() ![]() | |
GTX260 (754 MHz, Shader: 1568 MHz, Speicher: 1211 MHz) (WinXP_32, Kentsfield) | |
ID: 14957 | Rating: 0 | rate:
![]() ![]() ![]() | |
Was this GTX260 causing problems for the FFT bug before?

No. The FFT bug is not a problem on my 55nm GTX 260, as it uses the G200b revision. The problem was seen on the earlier 65nm G200 versions of the GTX 260 - both the 216sp and 192sp card versions. I no longer have one of those versions. Still, the 55nm card can act as a comparison reference. MarkJ has an earlier 65nm (G200) GTX 260 card, which would have been subject to intermittent FFT errors, and is running a Beta.

Siegfried Niklas, which version is your card - GTX 260 sp216 55nm, GTX 260 sp216 65nm, or GTX 260 sp192 65nm?

For reference, my 144-IBUCH_1000smd_pYEEI_100202-0-10-RND5155_1 task completed in 5h 34min.
http://www.gpugrid.net/result.php?resultid=1810596
Completed and validated: Run Time 20,202.83, CPU Time 5,240.67 (Credit claimed 3,977.21, Credit granted 5,369.23).

My GTX 260 (G200b), 216 shaders: GPU 625MHz, Memory 1100MHz (X2), Shaders 1348MHz (factory clocked). The NVidia reference clock rates for the GTX 260 cards are GPU 576MHz, Memory 1998MHz, Shaders 1242MHz. Most cards are somewhat factory, if not user, overclocked compared to these.

As the GTX 280 also uses the 65nm G200 core technology, it too was presumably subject to the FFT bug.
ID: 14962 | Rating: 0 | rate:
![]() ![]() ![]() | |
It is a GTX 260 sp216 55nm Rev. B1.

I run a GTX 260 sp216 65nm Rev. A2 on another host (same OC). No significant difference at "process priority: below normal". (I had only one "TONI_TEST - ACEMD beta version v6.08" on the "65nm Rev. A2" card. I never had problems with the "FFT bug".)
ID: 14965 | Rating: 0 | rate:
![]() ![]() ![]() | |
Sorry to ask this stupid question: | |
ID: 14970 | Rating: 0 | rate:
![]() ![]() ![]() | |
dsred, you have done all you have to do. | |
ID: 14971 | Rating: 0 | rate:
![]() ![]() ![]() | |
Picked up 4 IBUCH_1000smd on my machine with dual GTX260's (65nm). Current estimate is 3 hours 40 minutes to completion. Will let you know in a few hours how they go. Link to host here.

1st wu (RND8704_2) is on 89% and thinks another 40 mins to complete.
2nd wu (RND4096_1) stuck at 7.12%. Suspend/resume in BOINC seems to have got it going again and it's now up to 9%.
3rd and 4th wu waiting to run.

Host is also running 8 Einstein GW searches on the cpu, so that might have slowed things down a bit.
____________
BOINC blog
ID: 14972 | Rating: 0 | rate:
![]() ![]() ![]() | |
Sorry to ask this stupid question:

It is just for Windows now.

gdf
ID: 14974 | Rating: 0 | rate:
![]() ![]() ![]() | |
Does anyone know if running a gtx295 as one card | |
ID: 14975 | Rating: 0 | rate:
![]() ![]() ![]() | |
1st wu (RND8704_2) is on 89% and thinks another 40 mins to complete.

1st wu completed successfully. Link to wu here.
2nd wu failed. Had 2 popup windows on the console saying the app has stopped responding (seems to be new with Win 7). Link to wu here.
3rd and 4th wu now running.

Cards are GTX260 (65nm) 216sp revision A2.
____________
BOINC blog
ID: 14976 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi, I would test it, but: | |
ID: 14979 | Rating: 0 | rate:
![]() ![]() ![]() | |
I'm afraid the GeForce 9600M GT (laptop) is not suitable for the project... thanks anyway. | |
ID: 14982 | Rating: 0 | rate:
![]() ![]() ![]() | |
I'm afraid the GeForce 9600M GT (laptop) is not suitable for the project... thanks anyway.

Hi.. thanks for the information. But I had crunched several WUs up to now (before switching to the Beta) with the normal CUDA app?!

Best, RS
ID: 14984 | Rating: 0 | rate:
![]() ![]() ![]() | |
you can still run another gpu project like folding@home | |
ID: 14985 | Rating: 0 | rate:
![]() ![]() ![]() | |
you can still run another gpu project like folding@home

Hm... I understand... but I HAD crunched here at GPUGRID with THIS GPU! Is my card now no longer supported at all, or only not by the new beta app?

Best, RS (sorry for my poor English)
ID: 14986 | Rating: 0 | rate:
![]() ![]() ![]() | |
Your card IS supported -- but is considered too slow to reliably return WUs quickly enough to make the WU deadline. | |
ID: 14990 | Rating: 0 | rate:
![]() ![]() ![]() | |
Your card IS supported -- but is considered too slow to reliably return WUs quickly enough to make the WU deadline.

I had been successful so far... meaning within the deadline. Because the card is not really fast I wanted to try out the new beta app - but all WUs crash with an error within 2 seconds. Maybe my card isn't supported by the new beta app?

Best, RS
ID: 14991 | Rating: 0 | rate:
![]() ![]() ![]() | |
It *would* help if you unhid your computers. Otherwise we're just guessing. :) | |
ID: 14992 | Rating: 0 | rate:
![]() ![]() ![]() | |
It *would* help if you unhid your computers. Otherwise we're just guessing. :)

Done: http://www.gpugrid.net/hosts_user.php?userid=424

Best, RS
ID: 14994 | Rating: 0 | rate:
![]() ![]() ![]() | |
3rd and 4th wu now running.

Machine was off during the day due to weather. I suspended all the Einstein wu. This makes the wu progress a lot faster, so sharing a cpu core is clearly a major performance issue.

3rd wu completed successfully. Link to wu here.
4th wu still running. However it too froze. A suspend/resume in BOINC got it going again. Also the machine rebooted for no apparent reason. According to the Win7 event log it refers to an application error. Details are:

acemdbeta_6.08_windows_intelx86__cuda 0.0.0.0 4b680f5f
acemdbeta_6.08_windows_intelx86__cuda 0.0.0.0 4b680f5f
40000015 0003274d a74 01caa5930a0dce07
C:\ProgramData\BOINC\projects\www.gpugrid.net\acemdbeta_6.08_windows_intelx86__cuda
C:\ProgramData\BOINC\projects\www.gpugrid.net\acemdbeta_6.08_windows_intelx86__cuda
c5ce415c-119f-11df-bbfe-00248c1ddc91
____________
BOINC blog
ID: 14995 | Rating: 0 | rate:
![]() ![]() ![]() | |
It *would* help if you unhid your computers. Otherwise we're just guessing. :)

One thing I noticed about your machine:

Measured integer speed 1536.79 million ops/sec

You're running a Core2 CPU at a higher clock rate than I am, and my machine gets around 5000 for that benchmark. Same version of Windows, too. So, why do you see about 25% of what I do on that benchmark? 25% happens to be the speed that most laptops cut the CPU to when going into power-saving mode. Since the benchmarks will only run when BOINC is running, this means BOINC is running during the time that the laptop is taking steps to conserve power and/or reduce heat (pretty much the same thing). It's possible, therefore, that the reason the WUs are failing is the power-saving mode affecting the GPU. To test this, go into the Windows power settings, make sure the laptop is always running at full power, and then try running one of the beta WUs again. This might be completely wrong, of course. But unless someone from the project says "nope, that card is not supported", at least this is something you could try.

Second observation: the most recent stable version of the BOINC client is 6.10.18. Not sure if this has anything to do with your problem. I don't think it does, but it's a possibility. I'd install 6.10.18 anyway.

Third observation: there are more recent versions of the video drivers. People have been saying that the 195 and 196 versions are slower, but 191 should be good. I'm running 191.07. Again, I don't think this is the cause of the problem.

P.S. That's a pretty fast CPU for a laptop, and a pretty good mobile GPU too. Nice machine you have there!
____________
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.
ID: 14996 | Rating: 0 | rate:
![]() ![]() ![]() | |
Santa: I suspect you need a newer driver. | |
ID: 14997 | Rating: 0 | rate:
![]() ![]() ![]() | |
thx @ all. | |
ID: 15002 | Rating: 0 | rate:
![]() ![]() ![]() | |
UPDATE: | |
ID: 15004 | Rating: 0 | rate:
![]() ![]() ![]() | |
How soon before 6.08 becomes the standard application? | |
ID: 15007 | Rating: 0 | rate:
![]() ![]() ![]() | |
For stability it seems fine. We have to check the results. Probably next week we will create a new application, acemd2, which is not beta, to go alongside the old one. | |
ID: 15008 | Rating: 0 | rate:
![]() ![]() ![]() | |
4th wu still running. However it too froze. A suspend/resume in BOINC got it going again. Also the machine rebooted for no apparent reason. According to the Win7 event log it refers to an application error. Details are:

Well, the 4th one finished. As above, it caused a reboot plus a couple of "not responding" popups. It managed to validate, despite the error messages. The wu can be found here.
____________
BOINC blog
ID: 15014 | Rating: 0 | rate:
![]() ![]() ![]() | |
This is the CUDA FFT bug on 260 cards. | |
ID: 15020 | Rating: 0 | rate:
![]() ![]() ![]() | |
On Windows it does not work well. We will come out with Linux first.

Well, how about the Linux beta app now?
____________
From Siberia with love!
ID: 15021 | Rating: 0 | rate:
![]() ![]() ![]() | |
Is this beta app for Windows only? I've been trying for 3 days to get a beta WU on Linux. | |
ID: 15028 | Rating: 0 | rate:
![]() ![]() ![]() | |
I was out of town. | |
ID: 15029 | Rating: 0 | rate:
![]() ![]() ![]() | |
I was out of town.

Let me know when it will be ready :-) GTX275 waiting for it :-)
____________
ID: 15030 | Rating: 0 | rate:
![]() ![]() ![]() | |
I've had a WU failing today on the new app: | |
ID: 15031 | Rating: 0 | rate:
![]() ![]() ![]() | |
This is the CUDA FFT bug on 260 cards.

Well, I guess it didn't fix that then :-)
____________
BOINC blog
ID: 15033 | Rating: 0 | rate:
![]() ![]() ![]() | |
So far have run 19 v6.08 WUs on 5 different GPUs (GTX 260 / GT 240 / 9600GSO). 18 were successful. The 1 failure was on a GT 240. | |
ID: 15035 | Rating: 0 | rate:
![]() ![]() ![]() | |
I've had a WU failing today on the new app:

As you can see, it is a 9600GT on Vista x64.
ID: 15036 | Rating: 0 | rate:
![]() ![]() ![]() | |
Hi, | |
ID: 15042 | Rating: 0 | rate:
![]() ![]() ![]() | |
Update. | |
ID: 15043 | Rating: 0 | rate:
![]() ![]() ![]() | |
That's normal. I believe it's stating that it can't find a checkpoint file, therefore it's starting from the beginning. This will always happen when starting a WU for the first time. | |
ID: 15044 | Rating: 0 | rate:
![]() ![]() ![]() | |
The beta is running great here on 4 different types of cards: GTX 260, GT 240, 8800 GT and 9600GSO. | |
ID: 15047 | Rating: 0 | rate:
![]() ![]() ![]() | |
We will move this beta to a standard acemd2 application as soon as possible. | |
ID: 15050 | Rating: 0 | rate:
![]() ![]() ![]() | |
We will move this beta to a standard acemd2 application as soon as possible.

Fine by me: reporting successful completion of an IBUCH_1000smd_pYEEI with the Beta app and a 9800GTX+.
ID: 15053 | Rating: 0 | rate:
![]() ![]() ![]() | |
Completed 2 successfully on a GTX275 with 196.21 drivers. | |
ID: 15056 | Rating: 0 | rate:
![]() ![]() ![]() | |
MarkJ - can you tell us a little about the machine / OS / drivers / anything else going on on your PC with the 295? Looking at the processing time / time per step for the two WUs you posted was painful ...

Time per step: 95.802 ms
ID: 15059 | Rating: 0 | rate:
![]() ![]() ![]() | |
Linux beta application uploaded. | |
ID: 15067 | Rating: 0 | rate:
![]() ![]() ![]() | |
Linux beta application uploaded.

What do I need to do? I changed "Run test applications?" to "yes" but I still get WUs for the 6.70 app.
____________
ID: 15069 | Rating: 0 | rate:
![]() ![]() ![]() | |
Wait... for the right one. Linux beta application uploaded. | |
ID: 15070 | Rating: 0 | rate:
![]() ![]() ![]() | |
Wait... for the right one. ok, let's wait ____________ ![]() | |
ID: 15071 | Rating: 0 | rate:
![]() ![]() ![]() | |
The beta is running great here on 4 different types of cards: GTX 260, GT 240, 8800 GT and 9600GSO.

Now up to 33 successful v6.08 WUs on 5 cards (GTX 260, GT 240, 8800 GT, 9600GSO and another GT 240). Still only the single error mentioned above, and I think that was caused by something not related to GPUGRID (another project had put out some bad WUs and the bad CPU WU crashed pretty much everything, including the GPUGRID WU). I think v6.08 is more stable than v6.71, and that wasn't bad.
ID: 15072 | Rating: 0 | rate:
![]() ![]() ![]() | |
Linux beta application uploaded.

Got one. 19 min = 0.704%. Is it OK? GTS250, Ubuntu 9.10 x64, 195.30.
____________
From Siberia with love!
ID: 15074 | Rating: 0 | rate:
![]() ![]() ![]() | |
I also got one. Later today there will be results. | |
ID: 15075 | Rating: 0 | rate:
![]() ![]() ![]() | |
1h01m - 2.704% only. Is it ok? | |
ID: 15077 | Rating: 0 | rate:
![]() ![]() ![]() | |
For a GTX275 it seems quite slow. | |
ID: 15078 | Rating: 0 | rate:
![]() ![]() ![]() | |
For a GTX275 it seems quite slow.

2h41m - 9.024%. Something is wrong... my GTX275 is OC'd @ 702/1584/1260.

Update: sorry, I cancelled that WU... 3 hours and 10%...
____________
ID: 15080 | Rating: 0 | rate:
![]() ![]() ![]() | |
8 hr, 12% (GTS250). Will it be finished before the deadline?.. | |
ID: 15083 | Rating: 0 | rate:
![]() ![]() ![]() | |