New M4 batch - U-534 P1030680
Message boards : News : New M4 batch - U-534 P1030680
Author | Message |
---|---|
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
The work generator will be restarted today, running a new M4 batch on the VROL NMKA naval message. At first there will be only a short test batch to check the server backend; after the tests the server will resume in auto mode with lots of work. It's also possible that the server will go down for a couple of hours over the next few days, as the system hard drive needs to be replaced. M4 Project homepage M4 Project wiki |
Peciak Send message Joined: 27 Aug 09 Posts: 9 Credit: 117,918,807 RAC: 0 |
With great joy, the whole crew of the Polish National Team welcomes the project back among the "living". We are starting to crunch. |
zombie67 [MM] Send message Joined: 2 Sep 07 Posts: 25 Credit: 15,424,373 RAC: 0 |
Based on the current run-rate, what is the project duration of this batch? Just a rough estimate would be fine. Thanks! Dublin, CA Team SETI.USA |
Aurel Send message Joined: 26 Sep 12 Posts: 18 Credit: 921,616 RAC: 0 |
So, I see we have a lot of new workunits. More than 91 million WUs have to be computed now, but why? A few days ago it was "only" 22 million WUs. |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
There is no way to guess when the batch will end. I hope we won't have to go through all the workunits. The # of workunits changed because at first I added only 1/4 of the machine settings to the queue; there was a glitch which made adding certain settings impossible. M4 Project homepage M4 Project wiki |
Aurel Send message Joined: 26 Sep 12 Posts: 18 Credit: 921,616 RAC: 0 |
There is only one way to be ready: compute, compute and compute. ;) |
Aurel Send message Joined: 26 Sep 12 Posts: 18 Credit: 921,616 RAC: 0 |
I see that m4_vroln72_3 is being run too. On the server status page we only see the m4_vroln72_1 tasks. Will the status for m4_vroln72_3 be added as well? |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
Server status will be fixed soon; extracting the data actually stresses the server and I have to think about possible solution(s). The workunit info will probably be cached for 6-12 hours, with additional info added. A bit of progress info: an average of 20 restarts were done on m4_vroln72_1 (the 'naval' dictionaries), with a minimum of 4. This batch suffered a bit from a server bug: the workunit distribution was very chaotic at first, and some of the combinations went as high as 1400+ restarts. An average of 11 restarts were done on m4_vroln72_3 (the 'u534' dictionaries), with a minimum of 5. M4 Project homepage M4 Project wiki |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
Stats from today: m4_vroln72_3 - 16.5 avg restarts, minimum 10; m4_vroln72_1 - 20.6, minimum 5. I tweaked the fetcher code a bit for even smoother workunit distribution; it now slightly boosts the priority of blocks which have 0 results in progress. Btw, yesterday I upgraded the BOINC server code to the latest revision due to possible security bugs. Badges are not displayed because my old code is incompatible; this will be fixed soon. M4 Project homepage M4 Project wiki |
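For readers curious how such a fetcher tweak might look, here is a minimal sketch of the idea under assumed data structures; the struct fields, boost value and function names are illustrative only, not the project's actual code:

```c
#include <stdio.h>

struct block {
    int    results_in_progress;  /* results currently out on clients */
    double priority;             /* scheduling priority of this block */
};

/* Blocks with no results in progress get a small priority boost, which
 * keeps the workunit distribution across the keyspace more even. */
static void boost_idle_blocks(struct block *blocks, int n, double boost)
{
    for (int i = 0; i < n; i++)
        if (blocks[i].results_in_progress == 0)
            blocks[i].priority += boost;   /* slight boost for idle blocks */
}

int main(void)
{
    struct block queue[3] = { {0, 1.0}, {5, 1.0}, {0, 1.0} };
    boost_idle_blocks(queue, 3, 1.0);      /* boost value is an assumption */
    for (int i = 0; i < 3; i++)
        printf("block %d: priority %.1f\n", i, queue[i].priority);
    return 0;
}
```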
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
I added stats info to the server status page, now for both batches. Please don't be scared by the huge number of workunits listed there; that's only because I set the target # of results to 2000. This does not mean that all the workunits have to be processed. M4 Project homepage M4 Project wiki |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
Both batches are near 30 restarts (with some blocks lagging behind, as usual). Based on my findings from test runs, I've upgraded the server with an option to temporarily boost the priority of a group of workunits every time a result with a score near the current top score (~0.95 of it or higher) is received (this only works once for each machine setup, so duplicate results won't trigger it again). That's because very often a partial decrypt is sitting somewhere around the top results and it's very hard to notice until there's a plaintext to compare it to. M4 Project homepage M4 Project wiki |
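A hedged sketch of that trigger logic, assuming a per-setup "already boosted" flag, a 0.95 threshold and an arbitrary boost value (none of which come from the project's code):

```c
#include <stdbool.h>
#include <stdio.h>

struct setup {
    bool   already_boosted;   /* has this machine setup triggered a boost yet? */
    double priority;
};

static void on_result(struct setup *s, double score, double top_score)
{
    const double threshold = 0.95;             /* "score near (~0.95+) the top" */
    if (score >= threshold * top_score && !s->already_boosted) {
        s->priority += 10.0;                   /* temporary boost, value assumed */
        s->already_boosted = true;             /* duplicates won't re-trigger it */
    }
}

int main(void)
{
    struct setup s = { false, 1.0 };
    on_result(&s, 1790000.0, 1850000.0);       /* ~0.97 of top: boosts once */
    on_result(&s, 1790000.0, 1850000.0);       /* duplicate result: no second boost */
    printf("priority %.1f, boosted %d\n", s.priority, s.already_boosted);
    return 0;
}
```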
TBREAKER Send message Joined: 26 Sep 11 Posts: 29 Credit: 0 RAC: 0 |
Cryptanalysis is hard work. Don't be impatient! Maybe some people have no idea about the complexity of the work. An example for a single CPU (Enigma M4 hillclimbing): 4·336·26·26·26·26 (positions) · 26·26 (rings) · TIME (maybe 50 ms) = 20,759,140,147 s = 658.2 years! Now you can divide this time by the number of participating CPUs. Success? No guarantee... The hillclimbing algorithm has to test several thousand plug settings at every single ring/position. In comparison, brute force would need 150,738,274,937,250 plug tests at every single ring/position!!! --> Not feasible... @TJM: Can you tell us the time your software needs for a single hillclimb at one position/ring? Maybe you can calculate the average from several runs... In other words: how much time does the enigma@home project need for the complete keyspace? All the best Michael -=> Breaking German Navy Ciphers - The U534 Enigma messages <=- |
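The single-CPU estimate in this post can be reproduced with a few lines of C; note that the 50 ms per hillclimb is the poster's own guess:

```c
#include <stdio.h>

int main(void)
{
    /* 4 = 2 greek wheels x 2 thin reflectors, 336 = 8*7*6 wheel orders. */
    unsigned long long wheel_orders = 4ULL * 336;
    unsigned long long positions    = 26ULL * 26 * 26 * 26;   /* start positions */
    unsigned long long rings        = 26ULL * 26;             /* middle + right ring */
    double sec_per_hillclimb        = 0.050;                  /* the assumed 50 ms */

    unsigned long long keys = wheel_orders * positions * rings;
    double seconds = (double)keys * sec_per_hillclimb;

    printf("keys: %llu\n", keys);                             /* 415,182,802,944 */
    printf("single CPU: %.0f s = %.1f years\n",
           seconds, seconds / (365.0 * 86400.0));             /* ~658 years */
    return 0;
}
```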
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
1000 passes over a single key for a 72-letter ciphertext take around 2 seconds (Q9450 @ default clock), or 1.4 s when running the optimized app (gcc 4.3.5, -march=core2 -mtune=core2). A decent quad-core machine running the compiler-optimized app can surely do at least 1 full walk over the keyspace per year when using 4 cores 24/7. Modern i7-based CPUs are even faster; I'd say even 2 walks per year would be possible on top models. That's a lot of time, especially considering the fact that short texts usually require lots of restarts. I'm currently looking into possible solutions for a CUDA-accelerated app. On average, when running a 72-letter text, the project does a full restart (a walk through the entire M4 keyspace) in less than 24 hours. This is split between two separate batches with different dictionaries assigned: the first set uses Stefan Krah's naval dictionary, which was used in the M4 project; the second runs a set based on the decoded U-534 messages. If you'd like to take a look at the server output data, let me know. The server updates lots of info in real time as the results are returned; this includes the current key range distribution, the full result list sorted by score, the current work queue and some additional stats/diagnostic info. Unfortunately, due to the massive number of workunits I can't make the 'live' scripts public, as it surely would kill the server. EDIT: visualisation of project speed: M4 Project homepage M4 Project wiki |
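As a rough illustration only, the benchmark figures in this post can be turned into a per-hillclimb time; the core-scaling below is a naive assumption (perfect scaling, 24/7 uptime) and ignores scheduling and memory effects:

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the post: 1000 hillclimb passes over one key on a Q9450. */
    double sec_per_1000_stock = 2.0;          /* default build */
    double sec_per_1000_opt   = 1.4;          /* gcc 4.3.5, -march=core2 */
    int    cores              = 4;            /* assumed quad core, running 24/7 */

    /* seconds per 1000 passes is numerically the same as ms per single pass */
    double ms_per_pass_stock = sec_per_1000_stock;
    double ms_per_pass_opt   = sec_per_1000_opt;

    printf("per hillclimb: %.1f ms stock, %.1f ms optimized\n",
           ms_per_pass_stock, ms_per_pass_opt);
    printf("hillclimbs per day on %d cores (optimized): %.0f\n",
           cores, cores * 86400.0 * 1000.0 / ms_per_pass_opt);
    return 0;
}
```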
TBREAKER Send message Joined: 26 Sep 11 Posts: 29 Credit: 0 RAC: 0 |
Thank you very much for the information. 2 walks per year seems very fast for a single machine. I'm very impressed by the speed of the project (24 h). I still have trouble understanding what a "pass" is. Does 1000 passes mean 1000 plug tests? Or maybe 1000 restarts of the plug algorithm? Nvidia's CUDA is very interesting, but it is hard to parallelize software which was originally written for a single-CPU system. All the best Michael -=> Breaking German Navy Ciphers - The U534 Enigma messages <=- |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
A single "restart" is a walk over a subset of the machine's key range, doing a single hillclimb on each of the possible wheel settings. All the current workunits do one pass only; however, it's possible to assign "n" passes to a workunit. The app then iterates through all wheel settings and, upon reaching the end of the given key range, restarts from the beginning, decreases "n" by 1 and does the next pass; this is repeated until n=0. M4 Project homepage M4 Project wiki |
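A minimal sketch of that pass logic, with the key range simplified to a plain integer index and a placeholder hillclimb; this is not the actual application code:

```c
#include <stdio.h>

/* Placeholder for the real hillclimb over one wheel setting. */
static void hillclimb(long key) { (void)key; }

/* The workunit gets "n" passes over its key range: walk all wheel settings
 * in the range, then restart from the beginning with n decremented, until
 * n reaches 0. */
static void run_workunit(long range_start, long range_end, int n)
{
    while (n > 0) {
        for (long key = range_start; key < range_end; key++)
            hillclimb(key);            /* one hillclimb per wheel setting */
        n--;                           /* pass finished, restart the range */
    }
}

int main(void)
{
    run_workunit(0, 676, 1);           /* current workunits do one pass only */
    puts("done");
    return 0;
}
```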
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
One more thing: because everything here runs in asynchronous mode and I have little to no control over workunits which are already in progress (in the 'sent' state), I also added the second unbroken text to the queue. This is because BOINC has no reliable mechanism for cancelling work that has already been sent. I guess Enigma@Home is the only project (or one of very few; the other one might be the distributed.net wrapper) where the solution may be found at any time and the rest of the workunits from a batch will not be needed anymore. The work generator stops every time the top score changes and the server waits for a decision on what to do next. In the worst case it will either run dry for a while (until I remove the stop flag) or it'll send out some workunits which are not needed anymore (usually a few hundred). The real problem sits in the 'in progress' workunits, because even if I kill them on the server, there is no guarantee they'll be killed on the clients. A client has to contact the server to notice that the workunit state has changed, and most of the time there is a massive number of workunits in progress. For example, at this moment there are nearly 110k workunits on the clients, which translates to roughly 2.5 walks over the M4 key range. Running two texts in parallel surely slows things down by a factor of 2 (from a single text's point of view), but in case one of the texts is broken, it'll save some CPU power (50% fewer workunits to abort). M4 Project homepage M4 Project wiki |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
The current status for VROLN is 37+39 full keyspace walks. During the short maintenance today (I was testing a new UPS and its software, and eventually I managed to shut down the server accidentally) I took a snapshot of the top results; if anyone would like to take a look, they are here: http://www.enigmaathome.net/static/3278xxyvui/vroln721.txt http://www.enigmaathome.net/static/3279x7v5d3/vroln723.txt M4 Project homepage M4 Project wiki |
TBREAKER Send message Joined: 26 Sep 11 Posts: 29 Credit: 0 RAC: 0 |
Thank you for sharing the top results! I had a look at them too... All the best Michael -=> Breaking German Navy Ciphers - The U534 Enigma messages <=- |
TJM Project administrator Project developer Project scientist Send message Joined: 25 Aug 07 Posts: 843 Credit: 267,994,998 RAC: 0 |
I noticed a very serious problem that affects at least some texts. For example, P1030655 is unbreakable when using the "naval" dictionary. On a single key it breaks after 180 retries (worst case) with a score of 1.47M. However, when running with an unknown key it will never break, because 1.47M is lower than the average output score for a 72-letter text, which is around 1.6M. Even if the result is found, it's overwritten by garbles with a higher score; the highest-scoring random outputs are around 1.8M. The same text is broken after just 3 retries when using the "u534" dictionary; there the top score is around 1.85M with an average score around 1.2M. This shows that a good trigram dictionary is critical when attacking short texts, and it's not just a single case -> https://docs.google.com/spreadsheet/ccc?key=0AhS-kPmFI4OxdGVwd3VSZHk3cUx4SkZoR2FFMzRWS1E#gid=0 TBREAKER, do you have any bigram/trigram (or trigram-only) dictionary that could be used? M4 Project homepage M4 Project wiki |
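For readers unfamiliar with why the dictionary matters so much, here is a minimal sketch of trigram scoring as hillclimbers commonly do it (not necessarily this project's exact scoring): each candidate decrypt is scored by summing per-trigram weights, so whether the true plaintext beats high-scoring garbles depends entirely on how well the weight table matches the message's language and domain.

```c
#include <stdio.h>
#include <string.h>

/* tri[a][b][c] holds the (log-)frequency weight of trigram "abc" (A..Z).
 * Filling this table from a matching corpus (naval traffic vs. the decoded
 * U-534 messages) is exactly what decides the outcome described above. */
static long tri[26][26][26];

static long score_trigrams(const char *text)
{
    long   score = 0;
    size_t len   = strlen(text);
    for (size_t i = 0; i + 2 < len; i++)
        score += tri[text[i] - 'A'][text[i + 1] - 'A'][text[i + 2] - 'A'];
    return score;
}

int main(void)
{
    tri['E' - 'A']['I' - 'A']['N' - 'A'] = 100;    /* toy weight for "EIN" */
    printf("%ld\n", score_trigrams("EINEINEIN"));  /* "EIN" occurs 3 times -> 300 */
    return 0;
}
```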