More Statistics

Recall the notion of a sample space W = {w_1, w_2, ...} of atomic events, with associated probabilities p(w_1), p(w_2), ... A "random variable" associates some value x_i = X(w_i) with each of these outcomes. Note that it is possible for x_i = x_j for i != j. We can associate a probability with a certain value x of X:

    P(X = x) = Sum (over all i such that X(w_i) = x) p(w_i)

which effectively defines a new sample space over the possible values of X (at least for discrete X).

For real-valued random variables, it is useful to define a couple of quantities.

* The "expected value" E(X) is defined as Sum_i p(w_i)X(w_i). The idea is that this represents an "average" value that we would approach if we averaged X over an increasing number of trials.

* If we have two random variables X and Y, we can define a "sum", a new random variable denoted and defined by (X + Y)(w_i) = X(w_i) + Y(w_i). It is easy to see that E(X + Y) = E(X) + E(Y).

* Similarly, we can define a scalar multiple (cX)(w_i) = c * X(w_i), and E(cX) = c * E(X).

* We could also try defining a product (XY)(w_i) = X(w_i)Y(w_i). In this case, however, it is NOT always true that E(XY) = E(X)E(Y). This does hold, however, if X and Y are "independent" random variables, that is, if P(X = x and Y = y) = P(X = x) * P(Y = y) for all x, y. Note that independence of two random variables is a stronger condition than independence of the underlying atomic events. (Consider two random variables on a coin-toss space: X = 1 if heads, 0 if tails; and Y = 0 if heads, 1 if tails. These are not independent, even though successive coin tosses are.)

  Proof:

    E(XY) = Sum_w p(w)(XY)(w) = Sum_w p(w)X(w)Y(w)

  Rewriting as a sum over the set of (x, y) VALUES, where w' ranges over all w with X(w) = x AND Y(w) = y:

    = Sum_{x,y} Sum_{w'} p(w')X(w')Y(w')
    = Sum_{x,y} P(X = x and Y = y) * x * y

  which under the independence assumption we can pull apart as the product

    = (Sum_x P(X = x) * x) * (Sum_y P(Y = y) * y) = E(X)E(Y)

It is useful to know the expected values for certain common distributions. For example, consider the probability space of all n-tuples from {0,1}, where 1s occur with probability p and 0s with probability q = 1 - p. Let X be the random variable whose value on a tuple is the number of 1s in that tuple. This is sometimes referred to as the "binomial distribution". In this case E(X) = pn, as is easily seen by considering X to be a "sum" random variable: X = X_1 + ... + X_n, where X_i is the value in the i-th position and E(X_i) = p. Garrett gives another proof, which you can look at if you want.

A second important measure of a random variable is its "variance".

* For a random variable X with expected value (or mean) E(X) = u,

    Variance(X) = sigma^2(X) = E((X - u)^2)

* This gives some indication of how much variation there is in a random variable (hence the name).

* The "standard deviation" sigma is the square root of the variance.

With a little algebra we can obtain the useful relation

    E((X - u)^2) = E(X^2 - 2Xu + u^2) = E(X^2) - 2uE(X) + u^2 = E(X^2) - 2u*u + u^2 = E(X^2) - u^2

For the binomial distribution, we can obtain the important formula

    variance = sigma^2 = p(1-p)n = (1-p)u
    sigma = sqrt((1-p)u)

This implies that for small p, the standard deviation is approximately the square root of the mean, which is a useful rule of thumb. Even for p as high as .5, this is only off by a factor of sqrt(2)/2. The limit as p goes to 0 (with the mean u = pn held fixed) is known as the "Poisson distribution" and is an important model of lots of processes (including character counts in text). For high means, the Poisson distribution looks like a normal distribution with the same standard deviation. Actually, this is true for any binomial where the standard deviation is several times the discrete increment.
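As a quick sanity check on E(X) = pn and sigma^2 = p(1-p)n, here is a short simulation sketch. (The notes don't prescribe a language; Python and the function names below are my own choices.)

    import random

    def binomial_sample(n, p):
        # Number of 1s in a random n-tuple whose entries are 1 with probability p.
        return sum(1 for _ in range(n) if random.random() < p)

    n, p, trials = 100, 0.1, 10000
    xs = [binomial_sample(n, p) for _ in range(trials)]
    mean = sum(xs) / trials
    var = sum((x - mean) ** 2 for x in xs) / trials
    print("observed mean     %.3f  vs  p*n       = %.3f" % (mean, p * n))
    print("observed variance %.3f  vs  p*(1-p)*n = %.3f" % (var, p * (1 - p) * n))

With p = 0.1 the standard deviation sqrt((1-p)u) = sqrt(9) = 3 is already close to sqrt(u) = sqrt(10) ~ 3.16, illustrating the rule of thumb above.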
For low means (u << 1.0), the Poisson distribution looks like an exponential distribution.

Knowing the variance places certain hard bounds on the probability of a random variable being more than a set distance away from the mean. In particular, "Chebyshev's inequality" states that for X with mean u and standard deviation s (abbreviating sigma with s), and for any t > 1,

    P(|X - u| >= t*s) <= 1/t^2

In other words, the probability of X being more than a certain multiple of the standard deviation away from the mean can only be so high. This holds for ANY distribution. Basic argument:

* If we are stuck with a certain mean and variance, and want to get as much probability as possible at or beyond t*s from the mean, the best we can do is to put two dollops of 1/(2*t^2) each at u +- t*s, and the rest at E(X) = u.
* Clearly this still has mean u, and the variance is (t*s)^2/t^2 = s^2.
* If we move anything further out, the variance will tend up, so we have to move more probability toward u to counter the tendency. This is the situation that is formalized by the proof in Garrett.

Using Chebyshev's inequality in conjunction with the formula for the variance of the binomial distribution produces a weak version of the "law of large numbers" for binomial distributions. In particular, it is easy to show that as the number of samples becomes large, the probability of the count deviating from its mean by more than any fixed fraction of n approaches zero:

    lim_{n->inf} P(|X - p*n| > epsilon*n) = 0

(Chebyshev gives P(|X - p*n| > epsilon*n) <= p(1-p)n/(epsilon*n)^2 = p(1-p)/(epsilon^2 * n), which goes to 0 as n -> inf.) Or, put another way, P(|X - u| > epsilon*u) goes to 0 as n -> inf (where we absorbed an extra factor of p into the epsilon).

* Actually, the bound on the binomial is MUCH tighter than this would suggest, being approximated by the cumulative normal distribution of the same variance, which decreases in a super-exponential fashion.

-----------------------------------------------------------------------
GCDs, LCMs, and the Euclidean algorithm
Also unique factorization
-----------------------------------------------------------------------

The Hill Cipher (Lester Hill, 1929)

* Block cipher, with a simple mathematical basis.
* The math provides both the original strength and a method of attack.
* Influential in that it caused mathematics to play a deeper role in encryption.
* Seems to have actually been used in WWII.

Basic encryption is matrix multiplication (a code sketch follows the list of issues below).

* Consider blocks of n consecutive characters to be a vector y.
* The key is an n x n matrix K of integers mod 26.
* Encryption is performed by matrix multiplication (mod 26):

    E(y) = K * y

* Decryption is by multiplication by the matrix inverse of K:

    D(y) = K^-1 * y

Issues:

* Need to be able to find an inverse mod 26.
* Number theorists are able to count these, and show that (over) 1/4 of such mod-26 matrices DO have inverses, hence there are on the order of (1/4) * 26^(n^2) potential keys.
* Neither pure substitution, nor pure transposition. Every character in an output block is affected by every character in the corresponding input block.
* Specifically, every character in the output block is some linear combination of the characters in the input block.
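To make the block arithmetic concrete, here is a minimal sketch of Hill encryption. (Python again; the helper name, the A=0 ... Z=25 encoding, and padding with 'X' are my assumptions, not part of the notes.) Decryption is the same routine called with K^-1 mod 26 in place of K.

    def hill_encrypt(K, text):
        # K is an n x n key matrix (lists of ints mod 26); text is uppercase A-Z.
        n = len(K)
        nums = [ord(c) - ord('A') for c in text]
        while len(nums) % n != 0:            # pad the final block (with 'X' here)
            nums.append(ord('X') - ord('A'))
        out = []
        for b in range(0, len(nums), n):
            block = nums[b:b + n]            # the vector y
            out.extend(sum(K[i][j] * block[j] for j in range(n)) % 26
                       for i in range(n))    # K * y, mod 26
        return ''.join(chr(x + ord('A')) for x in out)

    K = [[3, 3], [2, 5]]                     # a standard invertible 2x2 example key
    print(hill_encrypt(K, "HELP"))           # -> HIAT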
Attacks on the Hill cipher

* Based on linearity.
* Note that if we have several vectors x_1, x_2, ... that we want to encrypt, and we write them as the columns of a matrix X, then the encryption of each vector is the corresponding column of a matrix Y, where Y = K * X (just because of the way matrix multiplication works).
* The goal of the attack is to recover the matrix K.
* A chosen plaintext attack is easy. If we pick X equal to the identity matrix, then the resulting Y is exactly K (since Y = K * I = K). This is done by picking the text 10000...01000...00100.... If the block size is not known, 100000....... will (usually) yield the block size just by looking at how far the non-zero values extend in the output.
* Known plaintext is slightly more difficult, but doable because of the solvability of linear systems. Basic approach: for an NxN matrix K, let x_1, x_2, ..., x_N be plaintext vectors, and y_1, y_2, ..., y_N be the corresponding ciphertext vectors. Let X and Y be the square matrices formed using these vectors as columns. Then we know that Y = K * X. If X^-1 exists, then K = Y * X^-1. For any particular X, there is only about a 25% chance that the inverse exists. However, if we have more than N corresponding pairs, we can try again with a different set of N, until we find one that works. (Or there is fancier algebra.)
* Ciphertext-only attack. Trickier, but the known plaintext attack suggests a possible approach using a "crib", a string that is guessed to be in the text somewhere. In particular, if we know a string of length at least N^2 + N - 1 is in the text, then SOME set of N contiguous blocks lies completely beneath the string, allowing N correspondences to be guessed. E.g., if N = 3 and "COLONELJACK" is known to be in the message, then one of the block combinations COL ONE LJA, or OLO NEL JAC, or LON ELJ ACK must occur. We check whether any of these combinations leads to an invertible X, and then do solutions and trial decryptions for all possible offsets of the crib within the message (there are at most about m of these, where m is the message length). Overall, we have about a one-in-four chance of getting the key. If the known string is longer, we have additional possibilities to try.
* With somewhat more work, this sort of approach can be employed using shorter fragments. Suppose N = 3 and we don't have a crib, but we have a longish message. We might guess that "the", "and", and "ing" all occur at least once in the message, aligned with the block boundaries. Assuming that the X arising from these three trigrams is invertible (if not, pick another triple), we make trial decryptions employing all possible locations for the three trigrams. There are on the order of m^3 of these, so we would need a computer, but we could crunch through them. To put some numbers on this, note that a message of length 1000 has a pretty good chance of having all three trigrams at the right alignment somewhere. There are about 333 positions to try out for each word, so a total of 333^3 = 3.7 * 10^7 possibilities. So a lot of work, but less than trying out all 26^9 / 4 = 1.4 * 10^12 possible keys (and a lot fewer matrix inversions, since we only do ours once).
* Similar approaches can be formulated using fragmentary cribs and other sorts of information, and more sophisticated algebra can be used to reduce the search space even for cribs shorter than N.

Note on finding multiplicative inverses for matrices mod n:

Basically, the technique based on Gaussian reduction for standard linear systems (the Gauss-Jordan method) carries over, using the multiplicative inverse mod 26 in place of 1/x to do the row reductions. Such an inverse exists only for numbers relatively prime to 26, which is why the matrix inverse fails to exist in some cases.
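As a concrete illustration, here is a sketch of such an inverse routine (Python, with helper names of my own choosing). Rather than the Gauss-Jordan reduction described above, it uses the classical adjugate formula K^-1 = det(K)^-1 * adj(K), which is simple for small matrices and makes the existence condition explicit: the inverse exists exactly when det(K) is relatively prime to 26.

    def inv_mod(a, m=26):
        # Multiplicative inverse of a mod m via the extended Euclidean
        # algorithm; returns None when gcd(a, m) != 1.
        def egcd(a, b):
            if b == 0:
                return a, 1, 0
            g, x, y = egcd(b, a % b)
            return g, y, x - (a // b) * y
        g, x, _ = egcd(a % m, m)
        return x % m if g == 1 else None

    def det_mod(M, m=26):
        # Determinant mod m by cofactor expansion along the first row
        # (fine for the 3x3 and 4x4 matrices considered here).
        if len(M) == 1:
            return M[0][0] % m
        return sum((-1) ** j * M[0][j] *
                   det_mod([row[:j] + row[j + 1:] for row in M[1:]], m)
                   for j in range(len(M))) % m

    def matrix_inverse_mod26(K):
        # Returns K^-1 mod 26, or None if it does not exist.
        n, m = len(K), 26
        d = inv_mod(det_mod(K))
        if d is None:
            return None                  # det(K) shares a factor with 26
        inv = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # (i, j) entry of the adjugate: the (j, i) cofactor.
                minor = [row[:i] + row[i + 1:]
                         for r, row in enumerate(K) if r != j]
                inv[i][j] = (-1) ** (i + j) * det_mod(minor) * d % m
        return inv

For the homework below, a "good" key is one for which such a routine succeeds, i.e. gcd(det(K), 26) = 1.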
Encryption Homework

Write a program to find the multiplicative inverse, mod 26, of a square matrix if it exists, and report that it does not exist if that is the case.

Use this routine to write a program that encrypts and decrypts messages using the Hill cipher.

Generate 2 (good) 4x4 keys, and use them to encrypt two pieces of text at least 256 characters long. Place the encryptions, along with a 30-character crib, in the files xxxx_hill_4x4_1.txt and xxxx_hill_4x4_2.txt in the directory ~davidahn/hill

Also generate 2 3x3 keys, and use them to encrypt two pieces of text at least 1800 characters long. Place these, without cribs, in xxxx_hill_3x3_1.txt and xxxx_hill_3x3_2.txt

DO NOT REUSE TEXT FROM A PREVIOUS ASSIGNMENT.

Due Thursday, Oct 3, 2002