Weiss Ch. 2.

How is the running time of algorithms described in a
machine-independent manner? Algorithm growth rates.
Calculating growth rates from code. Leads to idea of provably fast,
slow, or unusably slow algorithms,
where we take speed to be measured by the number of instructions
executed
as a function of problem size. Problems with only impossibly slow
algorithms are
"intractable" problems (e.g. "NP-complete" problems like traveling
salesman, knapsack...). Bad: brute-force search of things that
get exponentially larger with N (e.g. the number of N-digit
integers is 10^{N}). Or binary trees: 20 decisions means
a million possible outcomes.

Chapter has nice friendly set of definitions and examples of
complexity
classes (different growth rates) and the "Big-Oh" notation. Then
illustrates how to relate programming constructs like `for loops` to the
complexity analysis. Then an extended study of four
algorithms for the maximum subsequence problem with different
complexities (O(N^{3}), O(N^{2}), O(N log N), O(N)).
Motivates appreciation for logarithmic algorithms like binary search
and motivates our search for quick graph and tree algorithms for,
say, search trees, dictionaries, etc.

Big-Oh notation. Usually you see: "this algorithm is obviously O(f(N))", or "O(g(N)) is preferred to O(f(N))", etc.

For us, the f(N) and g(N) functions are a small set of simple
functions like f(N) = N, or
N^{power}, Nlog(N), log(N), 2^{N}. Think of them as
growth rates. The definitions mean that, up to a constant of
multiplication, you can show your function grows slower, faster, or
the same as some other function (practically, the f() and g()'s above.)

Weiss says that his T(N)'s are functions, which could lead us to think
of them as what their algorithms compute. He means they are the
**time functions** of the associated algorithms: their running times.

FOCS, rather more coherently, says T(N) are the
growth-rate functions OF the algorithms of interest. Subtle
difference,
but Weiss *treats* his T(N) like FOCS does but seems to mis-identify
them, so we may get an apples and oranges feeling...

- Function T(N) is O(f(N)) if there are c, n_{0} > 0 ("witnesses") s.t. T(N) ≤ cf(N) when N ≥ n_{0}. Your function grows no faster than f. f is an *upper bound* on T. (Contrariwise, T is a *lower bound* on f.)

- Function T(N) is Ω(g(N)) if there are c, n_{0} > 0 s.t. T(N) ≥ cg(N) when N ≥ n_{0}. Your function grows no slower than g. g is a *lower bound* on T (maybe at the same rate).

- Function T(N) is Θ(h(N)) iff T(N) is O(h(N)) and T(N) is Ω(h(N)). Your function grows at the same rate as h.

- Function T(N) is o(p(N)) iff T(N) is O(p(N)) and T(N) is not Ω(p(N)). Your function definitely grows slower than p.

Weiss writes T(N) = O(f(N)).

The computational model has a unit of operation, or instruction, like an addition, a multiplication, a comparison, or an inner loop, often determined by the problem. It is defined to take unit time, since we don't care about the absolute size of the units.

Assume infinite memory, no access time.

So the measure of the "cost" of an algorithm, or the time it takes, is the number of elementary operations it requires, taken to be its "running time".

*Worst-case* running times are usually used even if pessimistic:
they are easier to compute. *Average-case* running times are of
interest but sometimes not obvious how to define average, and they're
hell to compute. *Best case* is sometimes amusing (e.g. for a sorting method,
what's the best-case input order if any).

Other measures exist, like area for circuits, models including memory access, etc. Realistic computer and systems models are another story.

How do we measure input size? Good question. You can imagine algorithms that work with the numerical value of the input N, but also those that work on N itself as a sequence of bits, so the "length of N" is at issue, not the "value of N". Usually it's obvious.

Proofs that T(n) is O(f(n)) are all alike: From the formal
definitions above, we "need a c and an n_{0} such that..."
So--

- Find and state the two specific constants: a positive c and a nonnegative n_{0}, which are *witnesses* to your claim.
- Use algebra to show that T(n) ≤ cf(n) for n ≥ n_{0}, using your n_{0} and c.

E.g. Suppose we show our running time is T(n) = (n+1)^{2}.
By staying awake in lecture, we observe that this is a *quadratic*
function, O(n^{2}). How to prove it? Choose witnesses n_{0} = 1 (a VERY
common choice for reasons we'll see) and c = 4.

This c-guessing, rabbit-from-hat effect is good drama, but we
actually compute what our witness c needs to be for our n_{0} using the
reasoning
below first, then appeal to it later.

First, the obvious approach: n_{0} = 1 means n ≥ 1. So we want a c such that

(n+1)^{2} = n^{2} + 2n + 1 ≤ cn^{2}, i.e. (c-1)n^{2} - 2n - 1 ≥ 0.

Finding exactly where equality holds is a bit of a mess, headed toward the quadratic formula.

Use our powerful inequalities -- to hell with exact solutions.

n_{0} = 1 means n ≥ 1. So now we need to prove
(n+1)^{2} ≤ 4n^{2} provided n ≥ 1, or
n^{2} + 2n + 1 ≤ 4n^{2} as before.

Now n ≥ 1 implies both n ≤ n^{2} and 1 ≤ n^{2}, so

n^{2} + 2n + 1 ≤ n^{2} + 2n^{2} + n^{2} = 4n^{2}, *QED*.

Wait! How'd we guess that c=4 would work? By **jumping to the last
line immediately,** given n_{0} = 1 and that
you see the basic trick: all the powers of n may be changed to
n^{2}.

This n_{0} = 1 trick and associated reasoning lets you just glance at the claim

5n^{3} + 100n^{2} + 36n + 1095 ≤ cn^{3}

and say: for n_{0} = 1, c = 2000 will work, and the smallest c
would be 1236. You'll see that if n_{0} ≠ 1, we're talking
about computing the intersection of two cubics, given we've somehow
picked a c. Very messy and doesn't scale.

The usual complexity classes (growth rates): constant, logarithmic, log-squared (log^{2}N), N log N, linear, quadratic, cubic, polynomial, power (e.g. N^{1.57}), exponential.

Useful Facts or Rules:

- If T(N) = O(f(N)) and S(N) = O(g(N)), then
  - T(N) + S(N) = O(f(N) + g(N)) = O(max(f(N), g(N))).
  - T(N) * S(N) = O(f(N) * g(N)).
- If T(N) is a polynomial of degree k, then T(N) = Θ(N^{k}).
- log^{k}N [= (log(N))^{k}] = O(N) for any constant k. So the log grows slowly.

Never say "O(3N)" or "O(2N^{2} + 4)", for example. Never write "f(N) ≤ O(g(N))" (the ≤ is already implied) or "f(N) ≥ O(g(N))" (nonsense).

"Just look at the derivatives?" Yeah, in fact l'Hopital's rule; how-to in Weiss p. 32. Usually unnecessary. N log N vs. N^{1.5}? You know it if you know log N vs. N^{0.5}. Or log^{2}N vs. N, which by the third rule above means N grows faster.

*for loops:* running time is at most the number of statements in the
loop times the number of iterations. For a single un-nested loop with
no function calls and N iterations, the work per iteration is some
constant, so it goes away and we get O(N).

*Nested loops:* analyze inside-out.

*Consecutive statements* (or loops, function calls, etc.). Their time
adds, which by one of our three early rules means the answer is
the Order (Big Oh) of the maximum-time one.

*Conditionals:* Running time is at most the time of the test plus the
largest running time among the "then", "else", or "case" branches.

*Function Calls:* Consider the function an algorithm and
recursively apply these rules!

*Generally*, need to work from inside out (from function calls,
innermost loops, etc.).
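
For example, here is a minimal hypothetical Java sketch (not from Weiss; the method and array names are made up) showing the loop, consecutive-statement, and conditional rules composing from the inside out:

```java
// Hypothetical example: apply the rules inside-out.
public static int rulesDemo(int[] a) {
    int n = a.length;
    int sum = 0;
    for (int i = 0; i < n; i++)          // N iterations
        for (int j = 0; j < n; j++)      // N iterations each
            sum += a[i] * a[j];          // O(1) body, so the nested loops are O(N^2)

    for (int i = 0; i < n; i++) {        // a second, consecutive loop: O(N)
        if (a[i] > 0)                    // conditional: test plus the larger branch
            sum += a[i];                 // O(1)
        else
            sum -= a[i];                 // O(1)
    }
    return sum;                          // O(N^2) + O(N) = O(N^2): the maximum dominates
}
```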

*(Really) Recursive Calls:* Tail recursion is like a for loop, so easy.
Otherwise, analysis leads to *recurrence relations*, which are a
topic to themselves (for later).
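
A quick hypothetical illustration of the tail-recursion point (my own sketch; Java doesn't actually optimize tail calls, but the instruction counting is the same):

```java
// Tail recursion: one call per element, O(1) work each => O(N), just like a for loop.
public static int sumFrom(int[] a, int i, int acc) {
    if (i == a.length) return acc;
    return sumFrom(a, i + 1, acc + a[i]);
}

// "Really" recursive: T(N) = T(N-1) + T(N-2) + O(1), a recurrence (exponential here).
public static long fib(int n) {
    return n <= 1 ? n : fib(n - 1) + fib(n - 2);
}
```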

Weiss 2.4.3 = Bentley *Prog. Pearls* Column 8.

Problem: find the largest sum of contiguous elements in a given N-vector of numbers. If they're all negative the answer's 0, for "no elements in the sum". E.g. the answer for [3 -4 10] is 10, and for [-4 3 10] it's 13.

1: Cubic Time: triply-nested for loop; the outer two pick all possible
left and right endpoints of the subvector, the inner adds all elements
between them. Each loop has a maximum of N repetitions, which means O(N^{3}).
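
A sketch of the cubic version (my own Java, following the description above):

```java
// Algorithm 1: try every (left, right) pair, and add up the elements between them.
public static int maxSubSumCubic(int[] a) {
    int maxSum = 0;                                           // empty sum is 0
    for (int left = 0; left < a.length; left++)               // at most N choices of left end
        for (int right = left; right < a.length; right++) {   // at most N choices of right end
            int thisSum = 0;
            for (int k = left; k <= right; k++)                // at most N elements to add
                thisSum += a[k];
            if (thisSum > maxSum)
                maxSum = thisSum;
        }
    return maxSum;                                             // three nested O(N) loops: O(N^3)
}
```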

2: Quadratic Time: smarter ways to compute sum in inner loop. Just update
the sum with the "current element" or pre-compute the N cumulative
sums of elements (O(N) to do this) and subtract two cum-sums to get
any subsequence sum. Thus inner loop becomes O(1). *Morals:
inner loops repay scrutiny, keeping subresults is often smart*.
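
A sketch of the quadratic "just update the sum" version (my own Java):

```java
// Algorithm 2: keep a running sum as the right end moves, so the inner work is O(1).
public static int maxSubSumQuadratic(int[] a) {
    int maxSum = 0;
    for (int left = 0; left < a.length; left++) {
        int thisSum = 0;
        for (int right = left; right < a.length; right++) {
            thisSum += a[right];             // update, don't recompute
            if (thisSum > maxSum)
                maxSum = thisSum;
        }
    }
    return maxSum;                           // two nested O(N) loops: O(N^2)
}
```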

3: N log N time: Typical divide-and-conquer approach. Cut the problem in half, get the maxSS of each half, AND in O(N) time compute the max sum of subsequences that slop over the boundary you used. Report the max of these three subproblems up the recursive chain. This yields the famous Quicksort recurrence T(1) = 1, T(N) = 2T(N/2) + O(N), whose solution is O(N log N) -- we'll visit this again, next time in our overview of Trees.
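
A sketch of the divide-and-conquer version (my own rendering of the outline above):

```java
// Algorithm 3: best of (left half, right half, a subsequence straddling the middle).
public static int maxSubSumDivideConquer(int[] a) {
    return a.length == 0 ? 0 : maxSumRec(a, 0, a.length - 1);
}

private static int maxSumRec(int[] a, int left, int right) {
    if (left == right)                                   // base case: one element
        return Math.max(a[left], 0);
    int center = (left + right) / 2;
    int maxLeft = maxSumRec(a, left, center);            // T(N/2)
    int maxRight = maxSumRec(a, center + 1, right);      // T(N/2)

    // Best sum ending exactly at 'center', scanning leftward: O(N).
    int sum = 0, bestLeftBorder = 0;
    for (int i = center; i >= left; i--) {
        sum += a[i];
        bestLeftBorder = Math.max(bestLeftBorder, sum);
    }
    // Best sum starting at 'center + 1', scanning rightward: O(N).
    sum = 0;
    int bestRightBorder = 0;
    for (int i = center + 1; i <= right; i++) {
        sum += a[i];
        bestRightBorder = Math.max(bestRightBorder, sum);
    }
    // T(1) = 1, T(N) = 2T(N/2) + O(N)  =>  O(N log N).
    return Math.max(Math.max(maxLeft, maxRight), bestLeftBorder + bestRightBorder);
}
```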

4: Linear time: Some actual thought (!!) leads to vast improvements in the first quadratic algorithm: e.g. "can't have a (seq of) negative number(s) as first member(s) of maxSS". Get an O(N) "scanning" algorithm that zips thru the sequence once.
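
A sketch of the linear scanning version (my own Java; the "drop a negative prefix" observation is the one above):

```java
// Algorithm 4: one pass. A running sum that goes negative can never begin the best
// subsequence, so reset it to 0 and keep scanning.
public static int maxSubSumLinear(int[] a) {
    int maxSum = 0, thisSum = 0;
    for (int j = 0; j < a.length; j++) {
        thisSum += a[j];
        if (thisSum > maxSum)
            maxSum = thisSum;
        else if (thisSum < 0)
            thisSum = 0;        // drop the negative prefix
    }
    return maxSum;              // O(1) work per element: O(N)
}
```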

Why do we care? Weiss doesn't but Bentley does give some dramatic
statistics and tables: he implements, times the results, figures out
the multiplicative constants for the O(f(N)) formulae, and finds:

For N = 1M, the times are:

1: 41 years. 2: 1.7 weeks. 3: 11 secs. 4: 0.48 secs.

In a minute, how big a problem (N) can be solved?

1: 3,600. 2: 10,000. 3: 1M. 4: 2.1x10^{7}.

Divide-and-conquer algorithms, as in Quicksort say, are often O(N log N) (the log often comes from splitting (sub)problems in half).

*Binary Search* is common in dictionaries or phone books. Open in middle,
look, go to L or R half and repeat. 20 questions means you can find one
of a million possible answers. An O(log N) algorithm for lookup. But
requires O(N) for insertion (in an array representation). Of course
might need a sort to be able to use Binary Search, so that would have
to be counted (or *amortized* as we DS professionals say.)
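
A minimal binary-search sketch (standard sorted-array version, my own code):

```java
// Binary search in a sorted array: halve the candidate range each probe => O(log N).
public static int binarySearch(int[] a, int x) {
    int low = 0, high = a.length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;    // "open in the middle"
        if (a[mid] < x)
            low = mid + 1;             // go to the right half
        else if (a[mid] > x)
            high = mid - 1;            // go to the left half
        else
            return mid;                // found it
    }
    return -1;                         // not present
}
```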

*Euclid's algorithm* is an Oldie but Goodie, a very clever way to find
the greatest common divisor (largest integer dividing both) of two
integers by repeatedly calculating remainders. They get smaller, and
Weiss proves (Th. 2.1) that the size of the remainder goes down at
worst by half every two iterations of the algorithm (one of which
swaps the operands).

Th 2.1: if M>N, M mod N < M/2. Easy proof by cases N < M/2 and otherwise. So another O(logN) algorithm: divide size of problem by two every 2 iterations at worst.
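
A minimal sketch of Euclid's algorithm (my own code, following the description above):

```java
// Euclid: gcd(M, N) = gcd(N, M mod N). By Th 2.1 the remainder at least halves
// every two iterations, so the loop runs O(log N) times.
public static long gcd(long m, long n) {
    while (n != 0) {
        long rem = m % n;
        m = n;              // swap the operands...
        n = rem;            // ...and replace the smaller by the remainder
    }
    return m;
}
```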

*Fast exponentiation.* This is the trick of using previously
computed powers (aka *Dynamic Programming*):
so x^2 is one multiplication, times itself is x^4,
ditto is x^8, so there's x^8 computed with 3 = log(8) multiplications, not 7.
Nice little recursive algorithm (Fig. 2.11) and analysis. At most two
multiplies are needed to get a problem half as big. Also Weiss
discusses some coding improvements and snares in this little 9-liner,
worth a look.
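
A minimal sketch of the idea (my own code, in the spirit of Weiss's Fig. 2.11):

```java
// Fast exponentiation by repeated squaring: x^N from x^(N/2), so at most two
// multiplications halve the problem => O(log N) multiplications overall.
public static long pow(long x, int n) {
    if (n == 0) return 1;
    long half = pow(x, n / 2);
    if (n % 2 == 0)
        return half * half;        // even exponent: (x^(n/2))^2
    else
        return half * half * x;    // odd exponent: one extra multiply
}
```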

```
x = 150;                        % O(1)
N = scanf(input);               % Size of one array dimension: O(1)
Arr1 = MakeRandArr(N, 2);       % make 2-D NxN random array: O(N^2)
%******* insert line here
if Sum(Arr1) < x                % O(N^2)
    x++;                        % O(1)
else
    Arr2 = MatMult(Arr1, Arr1)  % O(N^3)
    PrintArr(Arr2);             % O(N^2)
```

Looks O(N^{3}) to me, and Ω(N^{2}). Is that right?
What changes with O(N), Ω(N), and Θ(N) if we also made a 3-D array by adding line

```
...
%******* insert line here
Arr3 = MakeRandArr(N, 3);
...
```

- Visualizing and Organizing Complex Systems
  -- .jar files
  -- Scheduling
  -- (Computer) inventories
  -- Filesharing
- Games
  -- Video
  -- Board (chess)
- Embedded systems
  -- Smart house, motion, temperature sensing, remote control
  -- Robots (e.g. blimp)
  -- Reverse engg?
- Web Programming
  -- Optimizing Javascript
  -- Reverse engg?
- AI
  -- Nat. Lang. Understanding: Chatbot
  -- Other NLU Apps
  -- Machine learning
- Geog. Info. Systems
  -- GoogleMaps and UR Campus
- Music
  -- Analysis (genre, name-that-tune, filtering, effects...)
  -- Synthesis (micro-tone scales, new timbres (instruments), modeling existing instruments (ukelele? ocarina?), composition (fractals, random, natural data, new instruments)).
  -- Soundtrack for animated video (see below)
- Graphics
  -- And AI: smart game mods? e.g. Quagents
  -- Visualization and intelligent agents (W. Gibson meets TRAINS?)
  -- Visualizing big data (VISTA Collaboratorium, Gd. floor Carlson)
  -- Make animated video
  -- Rendering (173 Raytracing Project?)
- Theory
  -- CS200 type project (Lane Hemaspaandra Advisor)

Last update: 9/11/14