Introduction to Cryptology: Remarks on the Dictionary Attack project

There are no real technical difficulties in this project. You do need to form a strategy for generating candidate passwords from a word list, though. I have two points I want to make: one practical and the other theoretical.
  1. Don't think this is a toy project; it has practical value. Here is a true story from your TA. I was once handed an HP9000 725/100 (a venerable line of workstations made by HP) that was dedicated to operating a certain apparatus. The problem was that we got the machine without the root password, which prevented us from upgrading the software for the apparatus. The only person who was supposed to remember the password couldn't recall it; he only hinted that it was a simple word. Since it was a stand-alone (i.e., not networked) machine, no security precautions had been applied to it, and I gained access through an account without a password (back then, every /etc/passwd shipped with more than one such account). The workstation ran the good old HP/UX 9.0 and the passwd file wasn't shadowed, so I grabbed the passwd file and mounted a dictionary attack, just like you did for the project (a sketch of such an attack appears right after this list). Within a couple of hours the root password was found. It was a word, but not a common English word: it was the logo of the company that made the apparatus. You may some day face the exact same problem I did, and your college-day Perl script will save the day (and maybe earn you a promotion).
  2. That's some (boring) story. Now here's some math, which explains why a dictionary attack is feasible at all (yes, I know you'd say people are lazy and imprudent, but why?). We need some theory from the great Claude Shannon (BTW, he's on the celeb list and his passwd is Info), namely entropy and unicity distance. I'm giving the conclusions here and will show a little math later on. Unicity distance is not that relevant to this project; it makes more sense to talk about it in a cryptanalysis project, but we'll mention it here anyway.

    Put simply, the entropy of a password measures how much information, or how much uncertainty, it contains. The unit of entropy is the bit; one bit conveys the minimum possible amount of information, namely on or off, yes or no. The way we compute entropy is to count how many possibilities there are and how many bits are needed to accommodate all of them (this is oversimplified; we'll give a better version later). Look at Unix crypt(): it takes up to 8 characters, and in real life each character can be any symbol on a (US) keyboard. That's 94 possibilities (ASCII 33 to 126; ASCII 32 is space), so the maximum entropy is log2(94^8), a little more than 52 bits. Well, we know that 8 characters occupy 64 bits, so there are about 12 bits of redundancy, but that redundancy is not the issue here: it comes from the coding (only the low 7 bits of each byte are used, and only 94 of those 128 values are printable). And 52 bits is anything but small for a 2 GHz P4 (it's small to the NSA, of course).

    The issue is that for real-life passwords the entropy is much smaller than that. Some combinations, e.g. "$%T3)^l;?", are not very likely to be a (human-chosen) password. If passwords were all English words, the entropy would be about log2(100,000), roughly 17 bits. I'm assuming people choose from a 100,000-word vocabulary (Winston Churchill was believed to command such a vocabulary; if you do, maybe you can win a Nobel Prize too). I'm also assuming each word is equally likely, which is of course not the case: "CB" is much more likely to be our professor's password than, say, "Yoknapatawpha" is. So the effective entropy of a real-life password is smaller still. That's why we can launch a dictionary attack: we could handle 2^17 possibilities with an Apple IIe. (The few lines of Perl at the end of this note redo this arithmetic, if you want to check it.)

    Now here's the other side of the story: why do people choose from only 100,000 words in the first place? (Make it 1 million, counting all sorts of variations: different capitalization, digits before and after the word, and so on. That adds only about 3 more bits of entropy, or turns one day of work into ten.) It's entropy again. Our brains are not able (or willing?) to deal with content of high entropy. What's our trick for memorizing stuff? We associate it with things we already remember; that's our brain compressing away entropy. To memorize the word "stoichiometry", we break it into three (or two) familiar parts, so it no longer carries 13 characters' worth of entropy, but more like 3. That's why people can't, and aren't willing to, remember "good" passwords.

    This context dependency introduces the notion of unicity distance. In the sentence you just read, the word following 'this' is far more likely to be a noun like 'context' than a verb like 'listen' or a preposition like 'at'. We've already seen that 2nd-order (digram) and 3rd-order (trigram) statistics characterize a language quite well. What if we go to 4th order, or 100th?
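
Here is a minimal sketch of the kind of attack in point 1, in Perl since that's what you used for the project. I'm assuming a classic unshadowed passwd file with old-style DES crypt() hashes (two-character salt); the file names passwd.txt and words.txt, and the handful of capitalization variants tried for each word, are made up for illustration.

    #!/usr/bin/perl
    # dict_crack.pl -- try every word in a word list against every hash in a
    # classic (unshadowed) passwd file.  Illustrative sketch only.
    use strict;
    use warnings;

    my ($passwd_file, $wordlist) = ('passwd.txt', 'words.txt');   # placeholder names

    # Collect user:hash pairs from the passwd file (fields 1 and 2).
    my %hash_of;
    open my $pw, '<', $passwd_file or die "cannot open $passwd_file: $!";
    while (<$pw>) {
        my ($user, $hash) = (split /:/)[0, 1];
        next unless defined $hash && length $hash > 1;   # skip empty or locked accounts
        $hash_of{$user} = $hash;
    }
    close $pw;

    # Try each word, plus a few cheap capitalization variants, against every hash.
    open my $words, '<', $wordlist or die "cannot open $wordlist: $!";
    while (my $word = <$words>) {
        chomp $word;
        for my $guess ($word, lc $word, uc $word, ucfirst lc $word) {
            for my $user (keys %hash_of) {
                my $salt = substr $hash_of{$user}, 0, 2;   # DES crypt: salt = first 2 chars
                if (crypt($guess, $salt) eq $hash_of{$user}) {
                    print "$user : $guess\n";
                    delete $hash_of{$user};                # don't try this account again
                }
            }
        }
        last unless %hash_of;                              # everything cracked
    }
    close $words;

Against old DES crypt this runs plenty fast; the real work in the project is deciding which variations of each word are worth trying.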


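And here, just to double-check the arithmetic in point 2, are the entropy figures computed directly; the inputs (94 symbols, 8 characters, 100,000 words or 1,000,000 variations, a factor-of-ten increase in work) are simply the numbers from the discussion above.

    #!/usr/bin/perl
    # Back-of-the-envelope entropy figures from point 2.
    use strict;
    use warnings;

    sub bits { return log($_[0]) / log(2) }    # log base 2

    printf "8 chars from 94 symbols     : %4.1f bits\n", 8 * bits(94);     # ~52.4
    printf "one of 100,000 words        : %4.1f bits\n", bits(100_000);    # ~16.6
    printf "one of 1,000,000 variations : %4.1f bits\n", bits(1_000_000);  # ~19.9
    printf "10 times more work          : %4.1f bits\n", bits(10);         # ~3.3

Nothing fancy; it just saves you from doing the logs by hand.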