Lecture notes for CSC 252, Tues. Apr. 7, 2009

Announcements
    A6 due next Monday evening
    Read chapter 9

--------------------------------
Measuring time

Two main approaches
    interval counting (portable, coarse)
    cycle counting (less portable, difficult to virtualize w/out OS help)

------------
Interval counting

The kernel uses regular timer interrupts to trigger context switches,
deliver SIGALRM signals, track time of day (literally, by counting
timer ticks), schedule bookkeeping tasks, etc.

The kernel keeps with every process a count of how many times the timer
interrupt handler found it (the process) in (a) user mode, or (b)
kernel (system) mode.  These counts work like the statistical profiling
of gprof to give you a good sense, averaged over a long period of time,
of what fraction of the time a process was running, and what fraction
of that it spent in the kernel (system).

The time command in the shell collects and reports these times:
    a  user time
    b  system time
    c  wall-clock time (including when other things were running)
    d  % of CPU (= (a+b)/c)

Can also get them with the "times" syscall:

    #include <sys/times.h>
    clock_t times(struct tms *buf);

Returns timer intervals ("clock" ticks) since the system started.  The
buf reference parameter gets stuffed with the elapsed user and system
times for the process and its reaped children.

Can also get the total time used by the current process (not children):

    #include <time.h>
    clock_t clock(void);

Sadly, while the return types are ostensibly the same, the units may
not be.  Use sysconf(_SC_CLK_TCK) for times, CLOCKS_PER_SEC for clock.

Because the granularity of the statistical sampling is roughly
equivalent to a scheduling quantum (~5-20ms), interval timing is
terribly inaccurate for times below about 100ms.  Even beyond that,
it's only good to within about 10%.  It also tends to charge processes
for some of the overhead of processing timer interrupts.  The authors
report that on their Linux system this overestimates consumed time by
4-5%.

So... we'd like something with higher resolution and more accuracy
than timer interrupt ticks.

------------
Cycle counting

Many (NOT all) machines provide a cycle counter: a special register
that is automatically incremented (by hardware) once every cycle.

Recent Pentiums have one; it's 64 bits wide.  The "rdtsc" (read time
stamp counter) instruction moves the high 32 bits into %edx and the
low 32 bits into %eax.  The rdtsc instruction is not privileged; you
can execute it in user space.

The authors give wrapper code on p. 461.  It uses in-line assembler:

    asm("rdtsc; movl %%edx, %0; movl %%eax, %1"
        : "=r" (high), "=r" (low)   /* results */
        :                           /* no arguments */
        : "%edx", "%eax");          /* registers trashed */

The Sparc V9 has a similar counter.  Whether reading it is privileged
depends on the value of a control register, access to which is always
privileged.  Solaris 7 makes it privileged; Solaris 8 and later allow
it to be read in user mode.  The gethrtime() library routine (Solaris
only, not Linux) returns the value.  Under Solaris 7 it requires a
syscall, and is thus kind of slow; under Solaris 8 and later it
doesn't, and is much faster.

----------------
Problem: Context switches (and execution of other programs) bloat the
cycle count.  If you want to time something that takes longer than a
quantum, you basically can't use cycle counters, unless you're running
on an otherwise unloaded machine, and even then you'll be including
certain kernel overhead.

For short (sub-quantum) timing, use the K-best scheme, as described in
the book: run enough tests that the K (e.g. 3) fastest runs are within
a tiny percentage of each other, and assume the other runs included
unwanted activity by the kernel and/or other processes.
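A minimal sketch of the K-best idea (x86 + GCC only).  The helper names
(read_cycle_counter, kbest_cycles), the 1% tolerance, and the trial
counts are arbitrary choices for illustration, not the book's exact
interface; read_cycle_counter is just a compact variant of the rdtsc
wrapper shown above.

    /* K-best timing sketch: re-measure until the K fastest samples
       agree to within EPSILON, then report the fastest. */
    #include <stdlib.h>

    #define K        3         /* how many samples must converge        */
    #define EPSILON  0.01      /* 1% spread allowed among the K fastest */
    #define MAXTRIES 300       /* give up after this many measurements  */

    static unsigned long long read_cycle_counter(void) {
        unsigned int hi, lo;
        asm volatile("rdtsc" : "=d" (hi), "=a" (lo));
        return ((unsigned long long) hi << 32) | lo;
    }

    static int cmp_ull(const void *a, const void *b) {
        unsigned long long x = *(const unsigned long long *) a;
        unsigned long long y = *(const unsigned long long *) b;
        return (x > y) - (x < y);
    }

    /* Time f() repeatedly; return the smallest converged cycle count,
       or 0 if the K fastest never agree within EPSILON. */
    unsigned long long kbest_cycles(void (*f)(void)) {
        unsigned long long samples[MAXTRIES];
        int n;
        for (n = 0; n < MAXTRIES; n++) {
            unsigned long long start = read_cycle_counter();
            f();
            samples[n] = read_cycle_counter() - start;
            if (n + 1 < K)
                continue;
            qsort(samples, n + 1, sizeof samples[0], cmp_ull);
            if (samples[K-1] <= samples[0] * (1.0 + EPSILON))
                return samples[0];      /* K fastest runs agree */
        }
        return 0;                       /* never converged */
    }

The rationale is that interference from the kernel or other processes
only ever adds cycles, so the fastest mutually consistent runs are the
trustworthy ones.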
For long (> quantum) timing ON A LIGHTLY LOADED MACHINE, you can
correct for the overhead of timer interrupts by (a) measuring the
minimum time that one of them takes, (b) counting how many occurred
(via times()), and (c) subtracting out the product.  Task (a) isn't
adequately explained in the book (p. 676).  It's something like this:

    /* run on an otherwise unloaded machine */
    #include <sys/times.h>
    #include <limits.h>

    long long shortest = LLONG_MAX;
    int i;
    for (i = 0; i < 1000000; i++) {     /* trial count is arbitrary */
        struct tms t;
        clock_t ticks1 = times(&t);
        long long cycles1 = read_cycle_counter();
        long long cycles2 = read_cycle_counter();
        clock_t ticks2 = times(&t);
        long long duration = cycles2 - cycles1;
        /* a timer tick landed between the two cycle-counter reads, and
           the gap is too big to be just back-to-back counter reads */
        if (ticks2 > ticks1 && duration > 1000 && duration < shortest)
            shortest = duration;
    }
    return shortest;

For a heavily loaded machine you're generally stuck using an interval
timer.  The OS can help by virtualizing the cycle counter -- basically
saving and restoring it on context switches.  Solaris does this (man
gethrvtime), but Linux, sadly, doesn't.

All Unix systems provide a gettimeofday() call that fills in a struct
with the number of seconds and microseconds since Jan 1, 1970.  On
some systems (e.g. Linux and Solaris 8) this call has low latency and
high resolution, so it can stand in for access to the cycle counter.
On other systems, however, it has high latency and low resolution.  So
code that uses this call is more portable than direct use of a cycle
counter, in the sense that it will compile and run anywhere, but it
won't necessarily do what you want.

----------------
NB: Context switches also tend to mess up cache and TLB "footprint",
reducing the efficiency of a process at the beginning of each quantum
on a multiprogrammed system.  Depending on what you're trying to
measure, you may or may not want to count that.

Even within an uninterrupted run of a program, cache effects can
distort the cycle count a lot (though much less than context switches).
If you want to factor out cache effects, you can "warm up" the cache by
doing something twice and timing the second occurrence (assuming the
cache is big enough for the working set).  Conversely, you can measure
cold-start effects by first accessing a very large amount of data, to
flush everything else out of the cache.

Practice problem 9.6:

Suppose we run an experiment 6 times in a row, using two different
(identical) copies of the code and three different (identical) copies
of the data, and we get the following times:

        call     cycles
    1   C1(d1)   399
    2   C1(d2)   132
    3   C1(d3)   134
    4   C1(d1)   100
    5   C2(d1)   317
    6   C2(d2)   100

Estimate
    c   time to execute the code with perfect cache behavior
    m   time to load the measurement code into the cache
    p   time to load the to-be-measured code into the cache
    d   time to load the data into the cache

    1   399 = c + m + p + d
    2   132 = c + d
    3   134 = c + d
    4   100 = c
    5   317 = c + p
    6   100 = c

From 4 and 6, c = 100.  From 2 and 3, c + d averages 133, so d ~= 33.
From 5 we deduce that p ~= 317 - 100 = 217.
Then, from 1, m ~= 399 - 100 - 217 - 33 = 49.
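As a closing aside, here is a minimal sketch tying two of the points
above together: gettimeofday() as the portable (if sometimes coarse)
timer, and a warm-up run to factor cold-cache effects out of the
measurement.  The workload do_work() is a made-up placeholder, and
warming up with a single extra call is just the simplest form of the
trick.

    /* Portable timing sketch: gettimeofday() plus a cache warm-up run. */
    #include <stdio.h>
    #include <sys/time.h>

    static volatile long sink;          /* so the compiler can't discard the result */

    static void do_work(void) {         /* placeholder workload */
        long i, s = 0;
        for (i = 0; i < 1000000; i++)
            s += i;
        sink = s;
    }

    static double now_sec(void) {       /* seconds + microseconds, as a double */
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void) {
        double t0, t1;

        do_work();                      /* warm-up: load code and data into the cache */

        t0 = now_sec();
        do_work();                      /* the run we actually measure */
        t1 = now_sec();

        printf("elapsed: %.6f seconds\n", t1 - t0);
        return 0;
    }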