Enigma Cuda 1.08 runtime 7 minutes - now 1.09 only around a few seconds ?? |
Message boards : Number crunching : Enigma Cuda 1.08 runtime 7 minutes - now 1.09 only around a few seconds ??
Author | Message |
---|---|
San-Fernando-Valley Send message Joined: 16 Jul 17 Posts: 4 Credit: 12,501,649 RAC: 0 |
Something has changed enormously between the two versions mentioned in the title. Enigma Cuda 1.08 (cuda_fermi) WUs used to take around 7 to 8 MINUTES with credits over 2000. The 1.09 WUs, since yesterday, run a maximum of 16 SECONDS and give around 90 credits. I am STOPPING GPU/NVIDIA crunching until this "problem" is explained to me. "Not everything that can be counted counts, and not everything that counts can be counted" - Albert Einstein (1879 - 1955) |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
Shorter workunits are from a different batch and there is nothing wrong with them, other than the fact that they're too short for fast GPUs. There are a couple of workunit lengths, as this was the first release and I was testing various options. M4 Project homepage M4 Project wiki |
San-Fernando-Valley Send message Joined: 16 Jul 17 Posts: 4 Credit: 12,501,649 RAC: 0 |
OK - thanks for your quick response! So, when will you be finished testing? I would like to start crunching "normal" WUs. The short ones are "killing" my speed index. I was up to over 34,000.000, and after unknowingly running the short WUs I am now under 2,800.000 !? Have a nice day. |
San-Fernando-Valley Send message Joined: 16 Jul 17 Posts: 4 Credit: 12,501,649 RAC: 0 |
... back up to 29,000.000 and over after I started to crunch (shorties) again ... It's like magic ... "Not everything that can be counted counts, and not everything that counts can be counted" - Albert Einstein (1879 - 1955) |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
These WUs are real ones, just shorter. Also, the workunit length is not constant; it fluctuates a bit, and the runtime depends on the initial machine settings. M4 Project homepage M4 Project wiki |
JugNut Send message Joined: 16 Mar 13 Posts: 24 Credit: 125,506,046 RAC: 0 |
Yeah, besides the poorer credit for the G3's versus the G4's, the main thing that annoys me is having the G4's and G3's mixed together. IMHO the G4's run better by themselves, while the G3's run more efficiently 2 or 3 at a time. Since they both have the same plan class and both use the same app, I can't configure my app_config to treat them differently, i.e. to run G3's 3 at a time and G4's 1 at a time. Perhaps they could come in batches one after the other, or be changed daily? Although I can easily imagine this may not be practical. The long G4's give a decent amount more credit than the G3's do; maybe try giving 120 credits instead of the 90 given now for the smallies, and maybe 1050 instead of 900 for the larger G3's. This may not seem like much, but when you're doing thousands of WUs a day it quickly adds up (or down, as it is now). This would at least bring the G4's and G3's closer to being in line with each other credit-wise, and stop people cherry picking and just moving off when they see the smallies start to flow through. Some people dislike the smallies anyway, so giving them even more reason to dislike them by giving them less credit is probably not a good idea. Of course this is your show and this is just my opinion. Cheers |
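For readers unfamiliar with the mechanism being discussed: BOINC's per-project `app_config.xml` controls how many tasks share one GPU, but only per application (or per app version), which is why mixed short and long batches under the same app and plan class can't be tuned separately. A minimal sketch, assuming the application is named `enigma_cuda` (the real name for your host is listed in `client_state.xml`):

```xml
<!-- app_config.xml: place in the project's directory under the BOINC data folder -->
<app_config>
  <app>
    <!-- Application name as reported in client_state.xml (assumed here) -->
    <name>enigma_cuda</name>
    <gpu_versions>
      <!-- 0.33 GPU per task lets three tasks run on one GPU; the setting
           applies to ALL workunits of this app, short and long alike -->
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, "Options → Read config files" in BOINC Manager applies it without restarting the client.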
europe64 Send message Joined: 9 Dec 15 Posts: 3 Credit: 100,265,550 RAC: 0 |
Hi, my opinion is that credit should be given based on computing power. |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
With all the new batches of work I'll be aiming for a workunit length somewhere around the current g4* workunits, or slightly longer. I can't make them too long, as the app's performance scales with GPU speed and the processing time would be painfully long on mid-range GPUs. The BOINC server has a mechanism that allows sending longer workunits to faster hosts, and at some point I'll look into this. In the meantime, I have disabled the g3_alqfi87_2 and g3_alqfi87_3 batches and replaced them with the same batches but with workunits 10x and 20x longer. M4 Project homepage M4 Project wiki |