Message boards : Number crunching : New simulations soon

Author Message
Stefan
Volunteer moderator
Project developer
Project scientist
Joined: 5 Mar 13
Posts: 258
Credit: 0
RAC: 0
Message 46541 - Posted: 23 Feb 2017 | 11:06:44 UTC

I know we have been a bit down on simulations lately. Right now we are setting up multiple systems with Adria, though, so we should be sending out new WUs either by the end of this week or the beginning of next.

It simply takes some time to find proteins we think are interesting to simulate, prepare them, and write the software necessary for doing adaptive sampling on them, since we changed the method a bit. We will take a final look at it today with Gianni.

PappaLitto
Joined: 21 Mar 16
Posts: 240
Credit: 970,319,881
RAC: 3,543,850
Message 46542 - Posted: 23 Feb 2017 | 11:37:44 UTC

Thank you for keeping us updated, Stefan.

Rion Family
Joined: 13 Jan 14
Posts: 15
Credit: 5,411,955,715
RAC: 9,234,559
Message 46551 - Posted: 23 Feb 2017 | 23:57:01 UTC

Yes, thank you for the update; it is appreciated! We understand the complexity of your work and look forward to helping you succeed.



Stefan
Volunteer moderator
Project developer
Project scientist
Joined: 5 Mar 13
Posts: 258
Credit: 0
RAC: 0
Message 46564 - Posted: 27 Feb 2017 | 10:18:01 UTC
Last modified: 27 Feb 2017 | 10:18:54 UTC

Haha I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. Requires some communications with other groups as well though. It's really not as easy as it might sound :)

Bedrich Hajek
Joined: 28 Mar 09
Posts: 332
Credit: 3,759,688,409
RAC: 392,906
Message 46565 - Posted: 28 Feb 2017 | 0:10:24 UTC - in response to Message 46564.
Last modified: 28 Feb 2017 | 0:11:21 UTC

Haha I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. Requires some communications with other groups as well though. It's really not as easy as it might sound :)


Good, take your time and get it right. That's more important than haste. And see if you can give these WUs high GPU usage as well. I wouldn't mind if you throw in some ultra-long WUs, provided you put them in a separate category.

So how many WUs will there be in these simulations, approximately?

will
Joined: 17 Jan 17
Posts: 6
Credit: 8,713,100
RAC: 0
Message 46566 - Posted: 28 Feb 2017 | 0:11:51 UTC - in response to Message 46564.

Haha I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. Requires some communications with other groups as well though. It's really not as easy as it might sound :)


Update is most appreciated! In the meantime I've got my Titan XP grinding away at a Genefer n=22 prime on PrimeGrid, going for that world record ;)

GDB
Joined: 24 Oct 11
Posts: 3
Credit: 158,822,270
RAC: 205,132
Message 46575 - Posted: 2 Mar 2017 | 11:55:46 UTC - in response to Message 46566.



Update is most appreciated! In the meantime I've got my Titan XP grinding away at a Genefer n=22 prime on PrimeGrid, going for that world record ;)


A Genefer n=22 prime found now would only be 2nd largest, not world record!

Logan Carr
Joined: 12 Aug 15
Posts: 193
Credit: 25,979,525
RAC: 19
Message 46576 - Posted: 3 Mar 2017 | 0:52:45 UTC - in response to Message 46564.

Haha I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. Requires some communications with other groups as well though. It's really not as easy as it might sound :)



Hi Stefan!

Once you are done working on the simulations, will there be tasks available at all times for a bit, or are they only going to be in phases at a time?

I'm just wondering so I can plan ahead. Thank you :)
____________
Cruncher/Learner in progress.

PappaLitto
Joined: 21 Mar 16
Posts: 240
Credit: 970,319,881
RAC: 3,543,850
Message 46577 - Posted: 3 Mar 2017 | 13:49:04 UTC

It seems that we are finally saturating the hosts with work! I have paused all folding to continue with GPUGrid now that my GPUs can do something.

will
Joined: 17 Jan 17
Posts: 6
Credit: 8,713,100
RAC: 0
Message 46578 - Posted: 3 Mar 2017 | 14:03:01 UTC
Last modified: 3 Mar 2017 | 14:06:04 UTC

GPU is humming! Looking at ~70% utilization on a Titan X Pascal, with an estimated completion time of 1 hr 7 min.

I do, however, find it interesting that the power draw is only hovering around 40% of TDP...

JoergF
Joined: 20 Apr 15
Posts: 189
Credit: 224,425,186
RAC: 373,030
Message 46583 - Posted: 3 Mar 2017 | 21:00:03 UTC - in response to Message 46578.
Last modified: 3 Mar 2017 | 21:01:00 UTC

GPU is humming! Looking at ~70% utilization on a Titan X Pascal, with an estimated completion time of 1 hr 7 min.

I do, however, find it interesting that the power draw is only hovering around 40% of TDP...


I would run 2 concurrent jobs on this GPU, as 70% utilization is very low, even by Win10 standards. Your CPU is a good one, but its single-core performance is not high enough to feed that Pascal monster on its own. 2 jobs/GPU should yield >90% load right away, despite the nasty WDDM handbrake.
____________
Die Liebe allein versteht das Geheimnis, andere zu beschenken und dabei selbst reich zu werden. [Clemens von Brentano]
Only love understands the secret of giving and getting richer at the same time [Clemens of Brentano]
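
JoergF's two-tasks-per-GPU suggestion is normally done with BOINC's app_config.xml mechanism. The sketch below is a minimal, hedged example: the app name "acemdlong" is an assumption (GPUGrid app names have changed over time), so check the <app> entries in your client_state.xml for the actual name before using it.

```xml
<!-- app_config.xml, placed in the GPUGrid project directory,
     e.g. .../projects/www.gpugrid.net/
     NOTE: the app name below is an assumption; verify it against
     the <app> entries in client_state.xml. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task lets the BOINC scheduler run
           2 tasks concurrently on each GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve one CPU core per task to feed the GPU -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, use Options -> Read config files in BOINC Manager (or restart the client) for the change to take effect.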

Logan Carr
Joined: 12 Aug 15
Posts: 193
Credit: 25,979,525
RAC: 19
Message 46584 - Posted: 4 Mar 2017 | 4:50:38 UTC - in response to Message 46583.

How can I make my BOINC client ask GPUGrid for new tasks multiple times per minute?

Regardless of how long I leave my PC on, it just doesn't get any tasks.
____________
Cruncher/Learner in progress.

Jacob Klein
Joined: 11 Oct 08
Posts: 1049
Credit: 1,062,986,064
RAC: 881,068
Message 46586 - Posted: 4 Mar 2017 | 5:40:41 UTC - in response to Message 46584.
Last modified: 4 Mar 2017 | 5:43:46 UTC

Logan, I know the answer to your question, but I refuse to answer it, because it sounds like you basically want to hammer the server with that knowledge. The whole reason BOINC backoffs exist is to prevent DDoS-style hammering.

How about you set yourself up with some 0-resource-share backup GPU projects and just let BOINC get work from GPUGrid when it can? It was designed so you don't have to babysit it.

Now, if you believe you are having a legitimate problem, where the server has tasks available, yet your work fetch got 0 tasks, then please open that discussion in a new thread, and I'd be more than happy to help you with it. I helped David design the current work fetch algorithms, and they work pretty well, but are not perfect.

If you feel like investigating solo, then use Options -> Event Log Options -> work_fetch_debug, and then look at Tools -> Event Log. Lots of fun info in there, if you're willing to learn.
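
The work_fetch_debug flag Jacob mentions can also be enabled without the Manager GUI, via the client's cc_config.xml log flags. A minimal sketch:

```xml
<!-- cc_config.xml in the BOINC data directory; equivalent to ticking
     work_fetch_debug under Options -> Event Log Options.
     The extra lines land in the Event Log (Tools -> Event Log). -->
<cc_config>
  <log_flags>
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```

Use Options -> Read config files (or restart the client) to apply it, and set the flag back to 0 afterwards, as the debug output is verbose.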

Logan Carr
Joined: 12 Aug 15
Posts: 193
Credit: 25,979,525
RAC: 19
Message 46587 - Posted: 4 Mar 2017 | 5:55:44 UTC - in response to Message 46586.

Logan, I know the answer to your question, but I refuse to answer it, because it sounds like you basically want to hammer the server with that knowledge. The whole reason BOINC backoffs exist is to prevent DDoS-style hammering.

How about you set yourself up with some 0-resource-share backup GPU projects and just let BOINC get work from GPUGrid when it can? It was designed so you don't have to babysit it.

Now, if you believe you are having a legitimate problem, where the server has tasks available, yet your work fetch got 0 tasks, then please open that discussion in a new thread, and I'd be more than happy to help you with it. I helped David design the current work fetch algorithms, and they work pretty well, but are not perfect.

If you feel like investigating solo, then use Options -> Event Log Options -> work_fetch_debug, and then look at Tools -> Event Log. Lots of fun info in there, if you're willing to learn.



Hi,

I have no interest in DDoSing anything, nor do I care to even learn how. This was a serious problem. But it's OK; I changed a couple of settings on my own and I now have a single task, which is all I wanted.

And my apologies, I will make a new thread next time. I should've done that, but I honestly have a bad habit of asking in other threads.

Thank you

-Logan

Jacob Klein
Joined: 11 Oct 08
Posts: 1049
Credit: 1,062,986,064
RAC: 881,068
Message 46588 - Posted: 4 Mar 2017 | 6:03:02 UTC - in response to Message 46587.

I pretty much already knew that you didn't intentionally want to burden the server - most people here are friendly :) But, in reality, if you bypass the BOINC backoffs, that is exactly what you are doing - burdening the server unnecessarily.

That being said... I have recently employed a method to change the backoff maximum limit to 1 hour instead of 1 day, for one of my projects, which is why I said "I know the answer". If you follow me on other projects, you'll see why I did what I did. Let's just say, I have the capability to run a task for 500 days without it crashing :)

That project isn't GPUGrid. And GPUGrid wouldn't benefit at all, from us trying to get tasks at a rate that is faster than standard BOINC backoffs.

I'm glad to see "New simulations soon" in this thread's title, and I'm glad we're seeing SOME new work, but I too have GPUs (5 of them, in 2 PCs!) that sometimes don't get GPUGrid work... and when that happens, they get single one-off tasks from the 0-resource-share backup projects - SETI, Einstein, Asteroids, etc. I totally recommend you set those up, to keep your GPUs from going idle! :)

Logan Carr
Joined: 12 Aug 15
Posts: 193
Credit: 25,979,525
RAC: 19
Message 46589 - Posted: 4 Mar 2017 | 6:18:21 UTC - in response to Message 46588.

Okay, I will add PrimeGrid as a backup project and set its resource share to 0. Thank you, and I'm glad we reached an understanding of the situation.
____________
Cruncher/Learner in progress.

PappaLitto
Joined: 21 Mar 16
Posts: 240
Credit: 970,319,881
RAC: 3,543,850
Message 46613 - Posted: 9 Mar 2017 | 14:36:54 UTC

Very high GPU utilization on these new ADRIA WUs on Windows 10. Very impressed, keep up the good work!

Stefan
Volunteer moderator
Project developer
Project scientist
Joined: 5 Mar 13
Posts: 258
Credit: 0
RAC: 0
Message 46614 - Posted: 9 Mar 2017 | 14:42:25 UTC
Last modified: 9 Mar 2017 | 14:42:36 UTC

And we are on :D Finally a nice big batch of simulations, hehe. They will decrease a bit over the next few days, but I think it should be a relatively stable supply now that we have the adaptive sampling going on those proteins.
Enjoy!

JoergF
Joined: 20 Apr 15
Posts: 189
Credit: 224,425,186
RAC: 373,030
Message 46617 - Posted: 9 Mar 2017 | 15:49:27 UTC

Nice! If it goes on like this I really would have to raid my piggy bank and buy the new 1080 Ti...
____________
Die Liebe allein versteht das Geheimnis, andere zu beschenken und dabei selbst reich zu werden. [Clemens von Brentano]
Only love understands the secret of giving and getting richer at the same time [Clemens of Brentano]
