Message boards : Graphics cards (GPUs) : Run times
Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12554 - Posted: 19 Sep 2009 | 4:13:03 UTC

Has the run time been creeping up while I was not looking? I recall run times of about 6:30 on most of my cards (with some plus-or-minus slop), but now it seems I am seeing 7:30 to 8:00 run times.

I can even see some that have projected run times of over 9 hours...

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 12562 - Posted: 19 Sep 2009 | 17:40:35 UTC - in response to Message 12554.

There is a new bug in BOINC (6.6.36) which assigns all WUs to the same GPU if you have multiple GPUs. Since they time-share it, they take much longer.
I am not sure whether this is your case.

Can anyone suggest a good BOINC version which is new but without major flaws?

gdf

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12563 - Posted: 19 Sep 2009 | 18:07:47 UTC - in response to Message 12562.

There is a new bug in BOINC (6.6.36) which assigns all WUs to the same GPU if you have multiple GPUs. Since they time-share it, they take much longer.
I am not sure whether this is your case.

Can anyone suggest a good BOINC version which is new but without major flaws?

gdf

GDF,

As far as I know, the bug that assigns all tasks to GPU 0 is a Linux-only bug ... and of course that is what you run ... :)

At the moment I am running mostly 6.6.3x versions, but I have been pretty happy with 6.10.3, which does not have the two major issues of 6.10.4 and .5 ... 6.10.6 fixed a couple of issues but still leaves uncorrected some problems with the order in which it processes GPU tasks for some people (introduced in 6.10.4).

Also note that I think I just uncovered a new bug / situation with task ordering on the GPU with multiple projects that, in essence, causes Resource Share to be ignored. I do not know how far back in versions this bug extends. For me it is new only in that, until now, there was no pressure to run multiple projects, for the simple reason that effectively there were no other projects to run ...

Now that we are ramping up more and more projects with GPU capabilities ... well ...

Anyway, my suggestions are still 6.5.0, 6.6.36, or 6.10.3; as I said, these are versions I have run extensively or am running now ...

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12565 - Posted: 19 Sep 2009 | 19:35:35 UTC

I just realized that you neatly sidestepped my original question... :)

Are the tasks longer now, or is it my imagination? I am not talking about longer run times caused by bugs, just normal run times...

Temujin
Joined: 12 Jul 07
Posts: 100
Credit: 21,848,502
RAC: 0
Message 12566 - Posted: 19 Sep 2009 | 20:04:00 UTC - in response to Message 12562.

There is a new bug in BOINC (6.6.36) which assigns all WUs to the same GPU if you have multiple GPUs. Since they time-share it, they take much longer.

Aha, that'll be why my GTX295 has started working, albeit slowly

JackOfAll
Joined: 7 Jun 09
Posts: 40
Credit: 24,377,383
RAC: 0
Message 12569 - Posted: 20 Sep 2009 | 0:55:57 UTC - in response to Message 12562.
Last modified: 20 Sep 2009 | 1:00:09 UTC

There is a new bug in BOINC (6.6.36) which assigns all WUs to the same GPU if you have multiple GPUs. Since they time-share it, they take much longer.
I am not sure whether this is your case.

Can anyone suggest a good BOINC version which is new but without major flaws?


GPU scheduling seems to be fubar'd for Linux in one way or another in pretty much all releases. There is the everything-gets-assigned-to-'--device 0' bug in the 6.6.3x series (because coproc_cmdline() is called after the fork()) and the preempt problems with 6.10.x.
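
Why does the call site matter? After fork(), the parent and child no longer share memory, so any "reservation" the child records never reaches the client's scheduler state in the parent, and the next task gets handed the same device. A minimal standalone sketch of that failure mode (hypothetical demo code, not from the BOINC sources):

#include <iostream>
#include <sys/wait.h>
#include <unistd.h>

int reserved_device = -1;   // stands in for the client's coproc bookkeeping

int main() {
    pid_t pid = fork();
    if (pid == 0) {             // child: the app-launch path
        reserved_device = 0;    // "reserve" GPU 0 after the fork
        return 0;               // the parent never sees this write
    }
    waitpid(pid, nullptr, 0);
    // parent still sees -1, so the next task is also given --device 0
    std::cout << "parent sees reserved_device = " << reserved_device << "\n";
    return 0;
}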

I'm running the 6_6a branch (equivalent to an unreleased 6.6.39) plus the following patch (r18836 from trunk), which resolves the '--device 0' issue. It seems pretty solid.


--- boinc_core_release_6_6_39/client/app_start.cpp.orig	2009-09-15 11:18:45.000000000 +0100
+++ boinc_core_release_6_6_39/client/app_start.cpp	2009-09-15 11:52:34.000000000 +0100
@@ -104,8 +104,10 @@
 }
 #endif
 
-// for apps that use coprocessors, reserve the instances,
-// and append "--device x" to the command line
+// For apps that use coprocessors, reserve the instances,
+// and append "--device x" to the command line.
+// NOTE: on Linux, you must call this before the fork(), not after.
+// Otherwise the reservation is a no-op.
 //
 static void coproc_cmdline(
     COPROC* coproc, ACTIVE_TASK* atp, int ninstances, char* cmdline
@@ -793,6 +795,13 @@
 
     getcwd(current_dir, sizeof(current_dir));
 
+    sprintf(cmdline, "%s %s",
+        wup->command_line.c_str(), app_version->cmdline
+    );
+    if (coproc_cuda && app_version->ncudas) {
+        coproc_cmdline(coproc_cuda, this, app_version->ncudas, cmdline);
+    }
+
     // Set up core/app shared memory seg if needed
     //
     if (!app_client_shm.shm) {
@@ -924,10 +933,6 @@
         }
     }
 #endif
-    sprintf(cmdline, "%s %s", wup->command_line.c_str(), app_version->cmdline);
-    if (coproc_cuda && app_version->ncudas) {
-        coproc_cmdline(coproc_cuda, this, app_version->ncudas, cmdline);
-    }
     sprintf(buf, "../../%s", exec_path );
     if (g_use_sandbox) {
         char switcher_path[100];

Send me a PM if you want a link to the RPMs and SRPM for a Fedora 11 build.

Profile Jet
Joined: 14 Jun 09
Posts: 25
Credit: 5,835,455
RAC: 0
Message 12570 - Posted: 20 Sep 2009 | 7:05:39 UTC - in response to Message 12554.

I have to agree with you, Paul.
Unfortunately, I couldn't find records earlier than 31 July (when the software was updated to CUDA 2.2 capability) to confirm your observation, but I am sure you are right: WUs have become longer.
New WUs last longer (from 6-6:30 hours to complete, up to 7:30+ hours), going by my experience. Previously my station could complete at least 3.5-4 WUs/day per GPU; now I am happy with 3 WUs/day. Granted, to keep the station more stable I downclocked the GPUs a bit (from 1.63 GHz to 1.57 GHz), but that was not the main reason.
Running rock-stable GTX260 x3 under BOINC 6.10.0, 190.38 driver, on Win Server 2008.

BTW, do you run other projects on your farms' CPUs? If yes, it could be that a neighbouring project running on the CPUs consumes a bit of the CPU power the GPUs need to be serviced (data feed and output, etc.). That could be one of the reasons as well, I think. Right now I'm checking this on my system.



Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 12571 - Posted: 20 Sep 2009 | 7:21:10 UTC - in response to Message 12570.

No, WUs are not longer. They are designed to last 1/4 of a day on a fast card.

gdf

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 12572 - Posted: 20 Sep 2009 | 7:23:33 UTC - in response to Message 12571.

I have updated the recommended client to 6.10.3.

thanks, gdf

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 12576 - Posted: 20 Sep 2009 | 11:29:03 UTC - in response to Message 12571.

No, WUs are not longer. They are designed to last 1/4 of a day on a fast card.

gdf


Looking through my last lot of results, the shortest seem to be 7 hours 30 mins and the majority around 8 hours 30 mins. That was taken from the "approximate elapsed time" shown in the WU results.

These were run on a GTX295 and a GTX275, so by no means slow cards, although they do run at standard speeds. One machine (the GTX275) has BOINC 6.6.37 and the other is currently running 6.10.3 under Windows.
____________
BOINC blog

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12577 - Posted: 20 Sep 2009 | 15:43:14 UTC - in response to Message 12571.

No, WUs are not longer. They are designed to last 1/4 of a day on a fast card.

Well, your design is bent ...

I used to get timings in the range of 6 hours and change on my GTX295 cards... sadly I cannot prove this, as the task list is truncated at about the first of September and I am thinking back to much earlier.

If your intent is to run for about 1/4 of a day, or 6 hours, well, you are overshooting that on GTX260, GTX285, and GTX295 cards ... the more common time seems to be up around 28,000 seconds rather than down at 21K seconds.

This does seem task dependent.

I am only pointing this out because it seems strange that before, most tasks came in under 7 hours, and now more and more are running up to 9 hours.

And you don't seem to be aware of the increase in run times ...

A minor point, then, is that you are shading the credit grant ... :)

But most important to me is that you do not seem to be aware that you are overrunning your execution time targets ...

Off to see the football ...

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 12578 - Posted: 20 Sep 2009 | 16:22:36 UTC - in response to Message 12577.
Last modified: 20 Sep 2009 | 16:23:17 UTC

Ok, let's say that it is between 1/4 and 1/3 of a day.
The calculation is approximate; it is not designed to be exact.

gdf

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12581 - Posted: 21 Sep 2009 | 6:08:00 UTC - in response to Message 12578.

Ok, let's say that it is between 1/4 and 1/3 of a day.
The calculation is approximate; it is not designed to be exact.

Ok ...

But I am not sure that you are seeing the point of my question...

Are you aware that the time is growing? The only reason I really noticed was that for a couple of months lately I was not able to pay attention to GPUGrid (notice the lack of posting), and it was a bit of a shock to see that my run times are now almost always over 7 hours and running as high as 9, where before they were really consistently clustered around 6.5 hours ...

Not to put too fine a point on it, but if this is the case, the low-end hardware recommendation needs revision ...

Tom Philippart
Joined: 12 Feb 09
Posts: 57
Credit: 23,376,686
RAC: 0
Message 12583 - Posted: 21 Sep 2009 | 8:41:35 UTC - in response to Message 12581.

I had a similar increase in runtime just after I upgraded to the 190 drivers. Completely removing them and reinstalling them fixed it for me!

SuperViruS
Joined: 18 Aug 08
Posts: 8
Credit: 127,707,074
RAC: 0
Message 12606 - Posted: 22 Sep 2009 | 6:00:54 UTC

The increased run time may be due to a bug in the 190.xx drivers which puts the GPU in 2D mode; more information in this post.

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12607 - Posted: 22 Sep 2009 | 6:57:49 UTC - in response to Message 12606.

The increased run time may be due to a bug in the 190.xx drivers which puts the GPU in 2D mode; more information in this post.

I could have sworn we were told we needed to update to the 190-series drivers. Did I misunderstand? I thought I had all my systems running the 190.62 drivers now ... actually, no: two are on 190.62 and one is on 190.38 ...

The thing is, since I don't turn my systems off and they run 24/7, I don't see how they would get back into 3D mode if the issue is down-shifting to 2D mode... I would think that once it was down it could not, or at least would not, readjust upward for the next task.

That is why I have trouble thinking this is that kind of problem. I have not done a survey, though my quick look seemed to hint that it is more likely task-type dependent ... that is, some of the tasks (by task name class) are now running longer than the norms ...

RalphEllis
Joined: 11 Dec 08
Posts: 43
Credit: 2,216,617
RAC: 0
Message 12649 - Posted: 23 Sep 2009 | 2:55:04 UTC - in response to Message 12607.

With the new Linux CUDA 2.2 application, my work units are running faster and producing more credit per day. I am using the 190.32 Nvidia drivers.
When I was running the 190 drivers in Windows, the work units were not running faster, but they were more stable and used less CPU time.

Profile Jet
Joined: 14 Jun 09
Posts: 25
Credit: 5,835,455
RAC: 0
Message 12691 - Posted: 23 Sep 2009 | 19:03:29 UTC - in response to Message 12606.

I don't think a sudden switch to 2D mode could be the reason. I run GPU-Z almost all the time to monitor the core/memory frequencies as well as the core temps. All three GTX 260s run at full load. Additionally, the main details, including the core frequency, are shown in the <stderr_txt> file. Here is a sample:

# Using CUDA device 0
# Device 0: "GeForce GTX 260"
# Clock rate: 1.59 GHz
# Total amount of global memory: 939524096 bytes
# Number of multiprocessors: 27
# Number of cores: 216
# Driver version 2030
# Runtime version 2020
# Device 1: "GeForce GTX 260"
# Clock rate: 1.59 GHz
# Total amount of global memory: 939524096 bytes
# Number of multiprocessors: 27
# Number of cores: 216
# Driver version 2030
# Runtime version 2020
# Device 2: "GeForce GTX 260"
# Clock rate: 1.59 GHz
# Total amount of global memory: 939524096 bytes
# Number of multiprocessors: 27
# Number of cores: 216
# Driver version 2030
# Runtime version 2020
MDIO ERROR: cannot open file "restart.coor"
# Time per step: 51.394 ms
# Approximate elapsed time for entire WU: 32121.105 s
called boinc_finish

</stderr_txt>

No sign of a fall to 2D mode.


Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 12714 - Posted: 24 Sep 2009 | 11:47:34 UTC - in response to Message 12691.

Another consideration is that the amount of CPU time has risen sharply, which slows down other projects.
GPUGrid was my project of choice for the GPU; however, it appears to consume a hefty amount of CPU, more than I would expect, although I am aware it needs to use some CPU time.
____________
Radio Caroline, the world's most famous offshore pirate radio station.
Great music since April 1964. Support Radio Caroline Team -
Radio Caroline

JackOfAll
Joined: 7 Jun 09
Posts: 40
Credit: 24,377,383
RAC: 0
Message 12715 - Posted: 24 Sep 2009 | 12:41:47 UTC - in response to Message 12714.

Another consideration is that the amount of CPU time has risen sharply, which slows down other projects. GPUGrid was my project of choice for the GPU; however, it appears to consume a hefty amount of CPU, more than I would expect, although I am aware it needs to use some CPU time.


I have noticed that v670 of the Linux app uses approximately 10% more CPU than v666 did. I wonder whether that is by design or an unwelcome side effect?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 12722 - Posted: 24 Sep 2009 | 17:00:21 UTC - in response to Message 12715.

It might depend on the WU. The new application uses somewhat more CPU for some runs.

gdf

Profile Jet
Joined: 14 Jun 09
Posts: 25
Credit: 5,835,455
RAC: 0
Message 12936 - Posted: 30 Sep 2009 | 13:50:40 UTC - in response to Message 12578.

I should state again that WUs are becoming longer and longer.
I have not made any changes to the system, neither software nor hardware, but my RAC is decreasing step by step, and the reason is the long-running WUs.
Checking the run times shows that most WUs now last much longer, up to 8.5-9 hours. As I understand it, this also means some kind of inflation of the granted credit points, doesn't it?
If somebody could explain this in more detail, it would be greatly appreciated!

BarryAZ
Joined: 16 Apr 09
Posts: 163
Credit: 920,275,294
RAC: 0
Message 12973 - Posted: 2 Oct 2009 | 5:39:50 UTC - in response to Message 12570.

I've seen this as well -- it is quite noticeable with the slower cards I run (9600GT to GTS 250). In fact, with the "run time creep" that seemed to start over the summer, I found that 9400GT cards which could just squeak in under the 120-hour run limit no longer had any chance to complete. I'm running Collatz on those.

On a 9800GT I used to see 1 WU per 12 to 13 hours. Now it is more like 16 to 18 hours. Same credit for it.

What I've taken to doing is splitting between GPUGrid and Collatz -- which has forced a move to the 6.10 series of the client (Collatz requires CUDA 2.3 support). One big thing with Collatz: *if* (a relatively big if) you can live with the 6.10.x client, it supports not only slower CUDA cards -- all the way down to an 8400GS I pulled out of the "give to the needy" collection I have -- but also ATI GPUs, including the embedded 3100/3200/3300/4200.

If GPUGrid considers moving to CUDA 2.3 (which will force the 6.10 client as well as driver updates), then in my view, along with that stick, they ought to offer the carrot of ATI GPU support as well.


I have to agree with you, Paul.
Unfortunately, I couldn't find records earlier than 31 July (when the software was updated to CUDA 2.2 capability) to confirm your observation, but I am sure you are right: WUs have become longer.
New WUs last longer (from 6-6:30 hours to complete, up to 7:30+ hours), going by my experience. Previously my station could complete at least 3.5-4 WUs/day per GPU; now I am happy with 3 WUs/day. Granted, to keep the station more stable I downclocked the GPUs a bit (from 1.63 GHz to 1.57 GHz), but that was not the main reason.
Running rock-stable GTX260 x3 under BOINC 6.10.0, 190.38 driver, on Win Server 2008.


Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 12977 - Posted: 2 Oct 2009 | 11:24:14 UTC

Though it is not a "cure" and I have not done it universally yet... I have started to run MW on one of my CUDA-equipped machines. With the longer run times here, I hate to say it, but based on my thumbnail calculations the pay is better with MW ... I have not yet tried Collatz on the CUDA cards to see what the time is ... but ... with the time creeping higher and higher ...

Anyway, you can tell that the run times are longer by looking at the queued tasks, where the DCF shows the expected run times; for 260 cards that is now 7:40 versus the prior 6:00 ...
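
As a rough illustration of where those estimates come from (my simplified reading of the mechanism, not the client's exact code): BOINC scales the server's raw runtime estimate by the host's duration correction factor (DCF), which drifts upward as tasks keep finishing late.

#include <iostream>

// Hypothetical sketch: displayed estimate = raw server estimate * DCF.
double estimated_runtime_s(double raw_estimate_s, double dcf) {
    return raw_estimate_s * dcf;
}

int main() {
    // 6:00 = 21,600 s raw; a DCF around 1.28 gives ~27,650 s (~7:40),
    // which matches the creep described above.
    std::cout << estimated_runtime_s(21600.0, 1.28) << " s\n";
    return 0;
}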

Stable versions of 6.10 are 6.10.3, 6.10.7 and 6.10.11 ... I have run, was running, or am running those versions. There are issues with work fetch and resource share when running MW ... You have to trim your cache settings back to about 0.1 to have Resource Share work properly ...

The problem has been reported ...

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13087 - Posted: 9 Oct 2009 | 6:31:44 UTC - in response to Message 12572.
Last modified: 9 Oct 2009 | 6:57:54 UTC

I have updated the recommended client to 6.10.3.

thanks, gdf


I found that, at least for the 64-bit Windows versions, simply upgrading from 6.6.36 to 6.10.3 produces a large increase in the initial expected time to completion for GPUGRID workunits. Everything I've had since then has run in much less time than the initial expected value, though, so the initial expected value is gradually coming down: now from about 120 hours to about 31 hours for me, compared to the 20 hours they typically take.

Also, I've found that at least this 6.10.3 version has a problem with ignoring the memory limit: when a memory-demanding CPU project has enough workunits in the queue that they cannot all meet their deadlines using only the number of CPU cores usable within the memory limit, it immediately switches more workunits than that number into high-priority mode, ignoring the limit.

Could you let me know if you've found a 64-bit Windows BOINC version that does not have similar problems and is still recent enough to use with GPUGRID?

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 13088 - Posted: 9 Oct 2009 | 6:48:48 UTC - in response to Message 13087.

I have updated the recommended client to 6.10.3.

thanks, gdf


I found that, at least for the 64-bit Windows versions, simply upgrading from 6.6.36 to 6.10.3 produces a large increase in the initial expected time to completion for GPUGRID workunits. Everything I've had since then has run in much less time than the initial expected value, though, so the initial expected value is gradually coming down: now from about 120 hours to about 31 hours for me, compared to the 20 hours they typically take.

Also, I've found that at least this 6.10.3 version has a problem with ignoring the memory limit: when a memory-demanding CPU project has enough workunits in the queue that they cannot all meet their deadlines using only the number of CPU cores usable within the memory limit, it immediately switches more workunits than that number into high-priority mode, ignoring the limit.

Could you let me know if you've found a 64-bit Windows BOINC version that does not have similar problems and is still recent enough to use with GPUGRID?


You will find that later BOINC versions don't react as badly. A bug was introduced back in February where the <on_frac> value didn't get updated; 6.10.3 fixed this. The <on_frac> value tells BOINC how much time your computer is on during a day, so it can take this figure into account when estimating how long things will take.
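
To see how a stale <on_frac> inflates the first estimate, here is a toy calculation (my assumption of the arithmetic, simplified from the real client): the wall-clock estimate is roughly the raw runtime divided by the fraction of the day the host is available.

#include <iostream>

int main() {
    const double raw_estimate_h = 20.0;  // what the tasks actually take
    double on_frac = 1.0 / 6.0;          // stale value that never updated
    std::cout << raw_estimate_h / on_frac << " h\n";  // ~120 h, as reported above
    on_frac = 1.0;                       // once the value converges
    std::cout << raw_estimate_h / on_frac << " h\n";  // ~20 h
    return 0;
}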

As I don't have 64-bit Windows, I can't comment on it not honouring the memory limits. I haven't seen any reports about it on the Boinc_Alpha mailing list. I found 6.10.11 to be stable, for me anyway. The current BOINC version is 6.10.13 which, unless a major bug is found, will probably become the "release" version soon.
____________
BOINC blog

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13089 - Posted: 9 Oct 2009 | 7:12:28 UTC - in response to Message 13088.

Sounds like it's a good time for me to start preparing to check the 64-bit Windows 6.10.11 version for similar problems, then.

I may have to wait for The Lattice Project to release another batch of their workunits that ask for 1 GB each before I can check the memory-limit problem; I'll let them know, so their alpha testers have a better chance of checking for this problem before deciding whether to report it to the BOINC alpha testers group.

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13133 - Posted: 11 Oct 2009 | 5:10:18 UTC

Could you modify the procedure for creating new posts in this thread so that I don't often have to wait until the day after the last post before I can see it without the graphics and ads at the bottom of the thread overlaying it and hiding part of it? A number of other threads need this as well.

Since the button to add another post to the thread is visible this time, I've decided to try that and see if it helps.

Note - I finished uninstalling 6.10.3 and started looking for 6.10.11 to install. Surprise - 6.10.11 is no longer available at any site I've found.
Finding this out and downloading 6.10.13 instead took long enough that I plan to wait until tomorrow to do the installation.

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13134 - Posted: 11 Oct 2009 | 5:12:52 UTC - in response to Message 13133.

Since the button to add another post to the thread is visible this time, I've decided to try that and see if it helps.


It did help this time. That button isn't always visible in such cases, though.

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 13135 - Posted: 11 Oct 2009 | 5:15:34 UTC - in response to Message 13133.

Note - I finished uninstalling 6.10.3 and started looking for 6.10.11 to install. Surprise - 6.10.11 is no longer available at any site I've found.
Finding this out and downloading 6.10.13 instead took long enough that I plan to wait until tomorrow to do the installation.


You can find all of them here.
____________
BOINC blog

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13151 - Posted: 12 Oct 2009 | 13:45:45 UTC - in response to Message 13135.

Note - I finished uninstalling 6.10.3 and started looking for 6.10.11 to install. Surprise - 6.10.11 is no longer available at any site I've found.
Finding this out and downloading 6.10.13 instead took long enough that I plan to wait until tomorrow to do the installation.


You can find all of them here.


Thanks. I've just downloaded 6.10.11 and am about to start installing it.

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13154 - Posted: 13 Oct 2009 | 0:01:48 UTC - in response to Message 13151.

I have finished installing it and found the following:

Under 64-bit Vista, it is not adequately compatible with Norton Internet Security 2009, which prevents boinc.exe and boincmgr.exe from communicating with each other. That version offers some capability to give certain programs more internet access, but when I tried it, I found that it will let you do this only for 32-bit programs.

I was able to handle this by upgrading to Norton Internet Security 2010 instead, although that version took more Vista restarts than I expected before it stopped causing error messages every time I restarted 6.10.11. This version starts out with boinc.exe (64-bit version) able to reach the internet.

The first GPUGRID workunit after that would not start until I adjusted the local preferences to allow GPU use while I use the computer. It then ran about 5 hours before failing with a computation error, and I have been unable to download another GPUGRID workunit since, even though the GPU is still waiting for its next workunit. BOINC keeps asking all the projects I participate in for GPU workunits, even though none of the others are supposed to have any.

MarkJ
Volunteer moderator
Volunteer tester
Joined: 24 Dec 08
Posts: 738
Credit: 200,909,904
RAC: 0
Message 13158 - Posted: 13 Oct 2009 | 10:46:45 UTC - in response to Message 13154.

The first GPUGRID workunit after that would not start until I adjusted the local preferences to allow GPU use while I use the computer. It then ran about 5 hours before failing with a computation error, and I have been unable to download another GPUGRID workunit since, even though the GPU is still waiting for its next workunit. BOINC keeps asking all the projects I participate in for GPU workunits, even though none of the others are supposed to have any.


Re: The checking for GPU work

This is normal behaviour from the BOINC 6.6.xx versions onwards. The client doesn't know which projects have GPU apps and which have CPU apps, so it asks for each sort. It has backoff mechanisms, so it will cut back on how frequently it asks each project if it doesn't get any work. One of the recent changes allows projects to tell it not to bother asking for particular types of work (CPU, ATI, or Nvidia) for up to 28 days.
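
For readers curious what such a backoff looks like, here is a hedged sketch (illustrative only; the real client's constants and policy differ):

#include <algorithm>
#include <iostream>

int main() {
    double backoff_s = 60;                // first retry after a minute
    const double max_backoff_s = 86400;   // some cap; the real one may be days
    for (int empty_replies = 1; empty_replies <= 6; ++empty_replies) {
        std::cout << "after " << empty_replies << " empty replies, wait "
                  << backoff_s << " s before asking again\n";
        backoff_s = std::min(backoff_s * 2, max_backoff_s);  // double each time
    }
    return 0;
}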

As for the error, can you give us a link to the work unit in question, and then we can try to see what it's doing.
____________
BOINC blog

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 13166 - Posted: 13 Oct 2009 | 18:12:44 UTC - in response to Message 13158.

The first GPUGRID workunit after that would not start until I adjusted the local preferences to allow GPU use while I use the computer. It then ran about 5 hours before failing with a computation error, and I have been unable to download another GPUGRID workunit since, even though the GPU is still waiting for its next workunit. BOINC keeps asking all the projects I participate in for GPU workunits, even though none of the others are supposed to have any.


As for the error, can you give us a link to the work unit in question and then we can try and see what its doing.


This workunit:

http://www.gpugrid.net/result.php?resultid=1374844


BOINC finally downloaded and started another workunit, after leaving the GPU idle for several hours.

superempie
Joined: 17 May 08
Posts: 3
Credit: 46,454,132
RAC: 0
Message 14105 - Posted: 3 Jan 2010 | 17:23:59 UTC

Talking about runtimes: I'm crunching my first Nvidia WU for GPUGRID; how long will it take my XFX 9600GT 512MB to finish the WU? It has now run for almost 9 hours, with progress showing 22%. Other projects are running on the CPU (POEM@Home and WCG).

Profile Michael Goetz
Joined: 2 Mar 09
Posts: 124
Credit: 46,573,744
RAC: 140,965
Message 14106 - Posted: 3 Jan 2010 | 17:47:09 UTC - in response to Message 14105.

Talking about runtimes: I'm crunching my first Nvidia WU for GPUGRID; how long will it take my XFX 9600GT 512MB to finish the WU? It has now run for almost 9 hours, with progress showing 22%. Other projects are running on the CPU (POEM@Home and WCG).


My guess would be around 18-30 hours if it's running continuously. Recent work units have been running between 6 and 9 hours on my GTX280, and judging by this table, your GPU should be about 1/3 the speed of mine. However, my card is factory overclocked, so it's a bit faster than the reference numbers in that table.

My last few WUs have been at the high end of the spectrum, so I'm guessing you'll see closer to 30 hours than to 18 hours.
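
The estimate is just a linear scaling; a quick worked version of the same arithmetic (the 1/3 speed ratio is the table's number, the rest is my back-of-the-envelope):

#include <iostream>

int main() {
    const double gtx280_low_h = 6.0, gtx280_high_h = 9.0;  // recent WU range
    const double relative_speed = 1.0 / 3.0;               // 9600GT vs GTX280
    std::cout << gtx280_low_h / relative_speed << " to "
              << gtx280_high_h / relative_speed << " hours\n";  // 18 to 27,
    // in the same ballpark as the 18-30 hour guess above
    return 0;
}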

____________
Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

superempie
Joined: 17 May 08
Posts: 3
Credit: 46,454,132
RAC: 0
Message 14110 - Posted: 3 Jan 2010 | 18:43:28 UTC
Last modified: 3 Jan 2010 | 18:43:45 UTC

Thanks for the info Michael.

Will see how long it takes. Estimated time of completion in BOINC is about 30 hours as of now (25% after 10 hours).

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 14130 - Posted: 5 Jan 2010 | 23:37:48 UTC - in response to Message 14110.

Thanks for the info Michael.

Will see how long it takes. Estimated time of completion in BOINC is about 30 hours as of now (25% after 10 hours).

Just so you know, running other tasks on the CPU has a negligible effect on GPU tasks in virtually all projects. It is nice to now have a selection of projects for GPUs (though the ATI selection is still weak), so that if one goes down there are others to fill in ...

You may also be able to run Collatz and/or PrimeGrid on your card; it's been a long time since I ran such a low-end Nvidia, so you will have to try it, or wait till someone tells me I am all wet ... :)

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 23,749
Message 14131 - Posted: 6 Jan 2010 | 0:10:33 UTC
Last modified: 6 Jan 2010 | 0:20:23 UTC

I've recently run both Collatz and PrimeGrid successfully on a G105M Nvidia card, which is near the low end of the Nvidia cards now available.

They both appear to require CUDA 2.3 or higher and compute capability 1.1 or higher, but have no other special requirements.
