
Message boards : Number crunching : my deadline is getting closer, please give me more time

Author Message
PaoloNasca
Send message
Joined: 1 Jan 20
Posts: 2
Credit: 1,231,081
RAC: 0
Level
Ala
Scientific publications
wat
Message 58446 - Posted: 6 Mar 2022 | 0:54:44 UTC

Advanced molecular dynamics simulations for GPUs 2.19 (cuda1121) is my first WU, and it is running fine.

The GPUGrid server gives me 5 days to crunch the WU.
I need more days to finish it.
I'm at 75% after 3.5 days, and the deadline is 2022-03-07 06:21 UTC.

I'm afraid of losing the time I donate to your projects.

Please give me more time to finish the WU.

https://www.gpugrid.net/workunit.php?wuid=27110336
https://www.gpugrid.net/show_host_detail.php?hostid=593709
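Assuming linear progress (a simplification; real progress rates vary), a quick back-of-the-envelope estimate from the figures in this post suggests the task would finish just under the wire:

```python
from datetime import datetime, timedelta

# Figures from the post above: 75% done after 3.5 days (84 hours),
# posted 6 Mar 2022 00:54 UTC, deadline 7 Mar 2022 06:21 UTC.
progress = 0.75
elapsed_h = 3.5 * 24

# Assume a constant crunching rate.
rate = progress / elapsed_h               # fraction completed per hour
remaining_h = (1.0 - progress) / rate     # hours still needed: 28.0

posted = datetime(2022, 3, 6, 0, 54)
deadline = datetime(2022, 3, 7, 6, 21)
finish = posted + timedelta(hours=remaining_h)

print(f"Estimated finish: {finish:%Y-%m-%d %H:%M} UTC")  # 2022-03-07 04:54 UTC
print("Makes deadline:", finish <= deadline)             # True
```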

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 520
Credit: 2,275,558,465
RAC: 227
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 58447 - Posted: 6 Mar 2022 | 8:37:33 UTC - in response to Message 58446.

I have also processed this kind of task on a GTX 750 Ti GPU.
If I want them to finish inside their 5-day deadline, I have to heavily overclock the GPU...
But exceeding the deadline doesn't necessarily mean that you won't receive any credit for your job: the deadline is in fact "extended" by a variable number of hours, or even days. In a way, it's like a bet.
It depends on the computer that catches your re-sent task after its deadline.
If the task reaches a high-end computer, the minimum deadline extension you might expect is about 7 hours, for an RTX 3080 Ti, for example.
If the other computer sends a valid result for that task before you do, your task won't receive any reward (Oops!).
But if your task is finished and reported first, you'll get your credits (Hooray!)... and the second computer will lose any bonus (Ooops!! :-)

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 520
Credit: 2,275,558,465
RAC: 227
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 58448 - Posted: 6 Mar 2022 | 8:53:07 UTC

Related: Overclocking to wacky (?) limits

PaoloNasca
Send message
Joined: 1 Jan 20
Posts: 2
Credit: 1,231,081
RAC: 0
Level
Ala
Scientific publications
wat
Message 58449 - Posted: 6 Mar 2022 | 11:30:47 UTC

I'm looking forward to it, and I hope to receive the automatic deadline extension.

Thank you for the suggestion about GPU overclocking. I'll read the post, but I'm not interested in doing any overclocking.

If I can't finish the crunch in 5 days, I'll do it in 7 or 10 days ( 😊 ).

I hope the project administrators can give the volunteers enough flexibility.

The final race lap is really interesting and wacky. At my deadline, a new copy of the WU is assigned to another volunteer, but I'm not out of the game 😊. I can still earn the whole loot if I'm the first to cross the finish line.

Thanks for your time.

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 744
Credit: 4,943,798,494
RAC: 524,854
Level
Arg
Scientific publications
wat
Message 58450 - Posted: 6 Mar 2022 | 17:11:41 UTC

Invest in a faster GPU if you want to reliably contribute here. The 750 Ti is very low-power compared to more modern offerings, and the project as a whole is trending towards more and more demanding tasks. It's unreasonable to expect a project to support certain GPUs forever; at some point you just need to upgrade the hardware.

Soon the 750 Ti will be essentially unsupported anyway. The new acemd4 application will require more than 2 GB (up to 3 GB) of GPU memory, and your 750 Ti won't have enough to crunch any tasks.

If you want to keep using your 750 Ti, it would be best to use it on a project that has much smaller tasks, like Milkyway, Einstein (gamma-ray only), or World Community Grid's OpenPandemics - COVID-19 (when they come back in a few months). But in my opinion, cards in the range of the 750 Ti or slower are not suitable for GPUGRID anymore.
____________

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58451 - Posted: 7 Mar 2022 | 1:33:22 UTC

As Ian suggested, you will need to move to newer and more powerful hardware.

The project is moving to the cutting edge of machine learning/artificial intelligence, with both the CPU and GPU needed to compute the tasks.

This is going to require fast CPUs and GPUs with LOTS of memory. Just the nature of machine learning, I'm afraid.

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58554 - Posted: 24 Mar 2022 | 14:12:59 UTC

I was getting 3-4 hour tasks for a few days, and things were going great. Then I got one that had been running for about 17 hours and was only about 20% done, which I aborted. Is there a way to request only smaller tasks?

Thanks,
Neal

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58555 - Posted: 24 Mar 2022 | 16:19:40 UTC - in response to Message 58554.

Not at this time. Maybe later the admins will get back to the previous method of segregating "long" and "short" tasks.

The current "short" ones are only test tasks and are not typical of what the project normally sends for research data.

What is considered normal is 12-14 hour tasks on modern, high-end hardware. Middle- to low-end hardware will be pushing up against the 5-day deadline.

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58556 - Posted: 24 Mar 2022 | 17:11:28 UTC - in response to Message 58555.
Last modified: 24 Mar 2022 | 17:28:06 UTC

Thanks. My hardware is probably not up to this project in that case.

What kind of GPU card is good? I found a list in the forum, but it looks to be several years old. I would like one that's energy-efficient and has a low overall power consumption.

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 744
Credit: 4,943,798,494
RAC: 524,854
Level
Arg
Scientific publications
wat
Message 58557 - Posted: 24 Mar 2022 | 19:50:15 UTC - in response to Message 58556.
Last modified: 24 Mar 2022 | 19:52:59 UTC

anything from the Turing or Ampere generation will be the most energy efficient in terms of work done per watt.

GTX 16 series
RTX 20 series
RTX 30 series

low power consumption is relative. what is "low" to you? what's the max TDP you are comfortable with? do you have an available 8-pin PCIe/VGA power connection from your PSU if needed?
____________

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58558 - Posted: 24 Mar 2022 | 21:02:13 UTC - in response to Message 58557.
Last modified: 24 Mar 2022 | 21:02:54 UTC

Thanks for your help.

Ideally it would be bus powered. My current card is 70 or 75 watts, and that is great.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58559 - Posted: 24 Mar 2022 | 21:47:00 UTC - in response to Message 58558.

Sticking to bus-powered-only cards limits you to cards not really suitable for this project.
I suggest you try using your GPU on other projects with less need for modern hardware.

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58560 - Posted: 24 Mar 2022 | 22:47:50 UTC - in response to Message 58559.

Hypothetically, if you were going to get a GPU just to do GPUGRID, which one would you get?

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58567 - Posted: 25 Mar 2022 | 17:10:42 UTC - in response to Message 58560.

As Ian mentioned, any from this list of candidates, and whatever you can afford.

GTX 16 series
RTX 20 series
RTX 30 series

I've been purchasing new RTX 30 series cards at MSRP lately. They are still expensive even at MSRP.

Maybe a used one from the RTX 20 series, if you can find one for a decent price.

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 520
Credit: 2,275,558,465
RAC: 227
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 58570 - Posted: 27 Mar 2022 | 21:50:39 UTC - in response to Message 58556.
Last modified: 27 Mar 2022 | 22:27:31 UTC

I would like one that's energy efficient and has a low overall power consumption.

Regarding your specific question, I'm very satisfied with the GTX 16XX-based cards that I'm using.
Strictly meeting your requirements:
GTX 1650-based cards. Their rated power consumption is 75 Watts, and many commercial models are PCIe bus-powered.
They are able to process the currently most GPU-demanding ACEMD 3 ADRIA tasks with noticeable efficiency in about 42 hours, thus easily getting the mid (+25%) bonus for returning the result before 48 hours.

Meeting my own criterion of 120 Watts maximum power consumption for my cards:
The GTX 1660 Ti is my current highest-performance card.
At its 120 W rated power consumption, it processes the same kind of ADRIA tasks in less than 23 hours, thus getting the full bonus (+50%) most of the time.

I've checked, and both models are still available at my local supplier.

I found a list in the forum, but it looks to be several years old.

Another, somewhat less old, thread is Low power GPUs performance comparative.
In its first post you'll find your current GTX 750 Ti graphics card's performance compared to the cards mentioned above (and other low-power-consumption cards).
And in this other post, an efficiency comparison is presented.
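The bonus tiers mentioned above (full +50% within 24 hours, mid +25% within 48 hours) can be sketched as a small helper. This is only an illustration of the scheme as described in this thread, not project code; the function name and the assumption of a 1.0 base multiplier are mine:

```python
def credit_multiplier(turnaround_hours: float) -> float:
    """Credit multiplier for a result returned after `turnaround_hours`,
    per the bonus tiers discussed in this thread:
    +50% within 24 h, +25% within 48 h, base credit otherwise."""
    if turnaround_hours <= 24:
        return 1.50
    if turnaround_hours <= 48:
        return 1.25
    return 1.00

# The two cards discussed above:
print(credit_multiplier(23))   # GTX 1660 Ti, ~23 h -> 1.5 (full bonus)
print(credit_multiplier(42))   # GTX 1650,  ~42 h -> 1.25 (mid bonus)
```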

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58579 - Posted: 29 Mar 2022 | 15:05:04 UTC - in response to Message 58570.

Thanks for your detailed answer.

I'm pretty new to gpu compute. My main card right now is a Quadro M2000 [1] that I picked up used for $100. I've decided to see what I can do with this before spending more money. I'm getting quite good results on Einstein@Home and the Binary Pulsar Search application. I've picked up ~790,000 points in about 5 days [2].

I'd rather be curing diseases than looking for pulsars, because I think curing diseases is more important. That may have to wait until World Community Grid comes back and I can try Open Pandemics.


[1] https://www.techpowerup.com/gpu-specs/quadro-m2000.c2837
[2] https://www.boincstats.com/stats/5/user/detail/1039969/lastDays

tullio
Send message
Joined: 8 May 18
Posts: 190
Credit: 104,426,808
RAC: 0
Level
Cys
Scientific publications
wat
Message 58580 - Posted: 29 Mar 2022 | 18:57:59 UTC
Last modified: 29 Mar 2022 | 19:01:49 UTC

My GTX 1060 is faster than my GTX 1650. This was explained to me on a SETI@home message board: the GTX 1650, with 4 GB of video RAM, has a 128-bit bus, while the GTX 1060, with 3 GB of video RAM, has a 192-bit bus.
Tullio
____________

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58583 - Posted: 31 Mar 2022 | 0:03:36 UTC

The GTX 1660 Ti definitely looks good. The GTX 1660 Super looks even slightly better according to the specs. Is that right?

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58584 - Posted: 31 Mar 2022 | 1:33:16 UTC - in response to Message 58583.
Last modified: 31 Mar 2022 | 1:34:11 UTC

The GTX 1660 Ti definitely looks good. The GTX 1660 Super looks even slightly better according to the specs. Is that right?

No. The GTX 1660 Super has fewer CUDA cores than the Ti (Ti: 1536, Super: 1408).

Both have the same amount of memory (6 GB) and the same memory bus width (192-bit).
Both use the same TU116 chip, but the Super is more cut down.

The Ti will get more work done, as it can run more parallel computations.

https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-ti.c3364

https://www.techpowerup.com/gpu-specs/geforce-gtx-1660-super.c3458

You can also compare the Theoretical GFLOPS performance on those pages.
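The theoretical FP32 figure on those pages is just cores × boost clock × 2 (two FLOPs per core per cycle via fused multiply-add). A quick sketch using the boost clocks listed by TechPowerUp (1770 MHz for the Ti, 1785 MHz for the Super; treat those clocks as assumptions, since board partners ship different values):

```python
def fp32_gflops(cuda_cores: int, boost_mhz: float) -> float:
    # 2 FLOPs per core per clock (fused multiply-add)
    return 2 * cuda_cores * boost_mhz / 1000

ti = fp32_gflops(1536, 1770)   # GTX 1660 Ti:    ~5437 GFLOPS
sup = fp32_gflops(1408, 1785)  # GTX 1660 Super: ~5027 GFLOPS
print(f"1660 Ti:    {ti:.0f} GFLOPS")
print(f"1660 Super: {sup:.0f} GFLOPS")
print(f"Ti advantage: {100 * (ti / sup - 1):.1f}%")  # ~8% on paper
```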

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58585 - Posted: 31 Mar 2022 | 13:53:44 UTC - in response to Message 58584.

The GTX 1660 Ti definitely looks good. The GTX 1660 Super looks even slightly better according to the specs. Is that right?

No. The GTX 1660 Super has fewer CUDA cores than the Ti (Ti: 1536, Super: 1408).


I already ordered a Super on ebay. According to techpowerup, the difference is 2%.

nealburns5
Send message
Joined: 7 Nov 21
Posts: 8
Credit: 15,232,439
RAC: 30
Level
Pro
Scientific publications
wat
Message 58586 - Posted: 31 Mar 2022 | 14:58:05 UTC

The GTX 1660 Super specs say its memory runs at 14 Gbps, versus 12 Gbps for the Ti. Tullio's post makes it sound like memory speed may be a bottleneck.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1070
Credit: 1,450,990,714
RAC: 426,047
Level
Met
Scientific publications
watwatwatwatwat
Message 58587 - Posted: 31 Mar 2022 | 16:59:37 UTC - in response to Message 58586.

If the CUDA core count were the same, I would say yes, the Super would be faster because of its higher-clocked memory.

But CUDA core count comes first in determining the speed of the applications here at GPUGrid.

Second would be the PCIe bandwidth, and as long as you run a card at x8 or faster, the difference is marginal.

Ian&Steve C.
Avatar
Send message
Joined: 21 Feb 20
Posts: 744
Credit: 4,943,798,494
RAC: 524,854
Level
Arg
Scientific publications
wat
Message 58589 - Posted: 31 Mar 2022 | 19:19:53 UTC - in response to Message 58586.

The GTX 1660 Super specs say that it has 14 Gbps of memory bandwidth and the Ti has 12 Gbps. Tullio's post makes it sound like memory speed may be a bottleneck.


Memory is not a big bottleneck for GPUGRID. Tullio is incorrect.
____________
