ArsDigita University, Structure and Interpretation of Computer Programs

Lecture Notes for Lecture 4 -- October 2000

Topics covered in today's lecture: orders of growth, recursive and iterative processes, and the Towers of Hanoi.

The code for today's lecture is in orders-growth.scm.

Orders of Growth

n is a parameter that measures the size of the problem.

R(n) is the amount of resources required. We will look at space, S(n), and time, T(n).

R(n) has order of growth Theta(f(n)), written R(n) = Theta(f(n)), if and only if there exist positive constants k1 and k2, independent of n, such that

k1*f(n) <= R(n) <= k2*f(n)

for any sufficiently large value of n.
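As a quick sanity check of the definition (the example function and constants here are ours, chosen for illustration): R(n) = 3n + 5 is Theta(n), since taking k1 = 3 and k2 = 4 gives 3n <= 3n + 5 <= 4n for all n >= 5.

```scheme
;; Check the Theta(n) bounds for R(n) = 3n + 5 with k1 = 3, k2 = 4.
;; (Example function and constants are ours, for illustration only.)
(define (R n) (+ (* 3 n) 5))

(define (bounds-hold? n)
   (and (<= (* 3 n) (R n))      ; lower bound: k1 * n <= R(n)
        (<= (R n) (* 4 n))))    ; upper bound: R(n) <= k2 * n

;; (bounds-hold? n) returns #t for every n >= 5
```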

Recursion and Iteration

When multiplying, we can add the first operand to itself the number of times given by the second operand:

(mult 3 4) => 3 + 3 + 3 + 3
We can use recursion to write this procedure. In recursion, the problem is broken up into parts then built back up. Recursion is sometimes called "wishful thinking", where we say that we know what to do for a problem operating on n if only we could solve the problem for (- n 1).

Using this idea, (mult 3 4) will be equal to 3 + (mult 3 3) and (mult 3 3) is going to be equal to 3 + (mult 3 2). Finally, we'll get to (mult 3 0), which will be called the base case. This is the point at which we stop calling mult recursively and return a single value.

(define mult
   (lambda (x y)
      (if (= y 0)
          0
          (+ x
             (mult x (- y 1))))))
Let's follow this through, using a slightly simplified substitution model:

(mult 3 4)
(+ 3 (mult 3 3))
(+ 3 (+ 3 (mult 3 2)))
(+ 3 (+ 3 (+ 3 (mult 3 1))))
(+ 3 (+ 3 (+ 3 (+ 3 (mult 3 0)))))
(+ 3 (+ 3 (+ 3 (+ 3 0))))
(+ 3 (+ 3 (+ 3 3)))
(+ 3 (+ 3 6))
(+ 3 9)
This procedure has T(n) = Theta(n) and S(n) = Theta(n). T(n) is the order of growth in terms of time, which is linear. S(n) is the order of growth in terms of space, which is also linear.

Observe the shape of the substitution model steps above: it arcs out then back in. There are deferred operations that must wait until we reach the base case. This is a recursive process: deferred operations are stored on the stack.

A process with no deferred operations is called an iterative process. We can rewrite our mult procedure iteratively:

(define mult
   (lambda (x y)
      (define mult-helper
         (lambda (y ans)
            (if (= y 0)
                ans
                (mult-helper (- y 1) (+ x ans)))))
      (mult-helper y 0)))
Note: up until now, we have been using lambda when defining procedures. We will now start to use the sugared version for defining procedures. We rewrite the iterative version of mult here to show the difference:

(define (mult x y)
   (define (mult-helper y ans)
      (if (= y 0)
          ans
          (mult-helper (- y 1) (+ x ans))))
   (mult-helper y 0))
Let's follow through the evaluation of (mult 3 4), again using a simplified substitution model:

(mult 3 4)
(mult-helper 4 0)
(mult-helper 3 3)
(mult-helper 2 6)
(mult-helper 1 9)
(mult-helper 0 12)
Again, note the shape of this computation, which is a straight line. There are no deferred operations, making it an iterative process. Iterative processes use constant space, which we write as S(n) = Theta(1). The procedure still requires linear time: T(n) = Theta(n).

  • A recursive process has delayed operations.
  • An iterative process uses state variables and has no delayed operations.
Both are recursive procedures, since they call themselves. The process describes whether operations are deferred or not.

Scheme is tail-recursive: it executes an iterative process in constant space. C and Pascal implementations are typically not tail-recursive; memory grows with the number of active procedure calls, even for an iterative process.
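To see the distinction concretely, here is a sketch contrasting the two processes (the names mult-rec and mult-iter are ours, used so both definitions can coexist):

```scheme
;; Recursive process: each call defers an addition on the stack.
(define (mult-rec x y)
   (if (= y 0)
       0
       (+ x (mult-rec x (- y 1)))))

;; Iterative process: all state lives in the arguments, so a
;; tail-recursive Scheme runs it in constant space.
(define (mult-iter x y)
   (define (helper y ans)
      (if (= y 0)
          ans
          (helper (- y 1) (+ x ans))))
   (helper y 0))

;; In Scheme, (mult-iter 3 1000000) completes in constant space;
;; (mult-rec 3 1000000) can exhaust the stack in a language without
;; tail-call support.
```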

Now let's write a faster version of our procedure to multiply by adding:

(define (double x) (+ x x))   ; helper procedures, assumed from
(define (halve x) (/ x 2))    ; orders-growth.scm

(define (fast-mult a b)
   (cond ((= b 0) 0)
         ((even? b) (fast-mult (double a) (halve b)))
         (else (+ a (fast-mult a (- b 1))))))
Let's follow the computation of (fast-mult 10 12):

(fast-mult 10 12) 
(fast-mult 20 6)
(fast-mult 40 3)
(+ 40 (fast-mult 40 2))
(+ 40 (fast-mult 80 1))
(+ 40 (+ 80 (fast-mult 80 0)))
(+ 40 (+ 80 0))
(+ 40 80)
For this procedure, T(n)=Theta(log n) and S(n)=Theta(log n). You can characterize Theta(log n) by thinking of the problem size being cut in half at each step.
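One way to see the Theta(log n) bound is to count the recursive calls. This instrumented variant (the name fast-mult-steps and its counter argument are ours) returns the number of calls instead of the product:

```scheme
(define (double x) (+ x x))
(define (halve x) (/ x 2))

;; Same recursion as fast-mult, but returns the number of recursive
;; calls made instead of the product (instrumented variant, ours).
(define (fast-mult-steps a b n)
   (cond ((= b 0) n)
         ((even? b) (fast-mult-steps (double a) (halve b) (+ n 1)))
         (else (fast-mult-steps a (- b 1) (+ n 1)))))

;; For b = 1024 the count is 11: ten halvings plus one final
;; decrement -- about log2(b) steps overall, and never more than
;; roughly twice that for any b.
```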

And another version of multiplying by adding:

(define (fast-mult-2 a b)
   (define (times-iter a b result)
      (cond ((= b 0) result)
            ((even? b) (times-iter (double a) (halve b) result))
            (else (times-iter a (- b 1) (+ a result)))))
   (times-iter a b 0))
Let's follow through the evaluation of (fast-mult-2 10 12):

(fast-mult-2 10 12)
(times-iter 10 12 0)
(times-iter 20 6 0)
(times-iter 40 3 0)
(times-iter 40 2 40)
(times-iter 80 1 40)
(times-iter 80 0 120)
This procedure has T(n)=Theta(log n) and S(n)=Theta(1).

Towers of Hanoi

In the Towers of Hanoi, a tower of disks stacked in descending size with the smallest on top needs to be moved from one peg to another peg, using a third peg as temporary storage. The disks can only be moved one at a time and no disk may be placed on top of a disk that is smaller than it. See the videotape of the lecture for a demonstration of moving disks from one peg to another.

This problem can be solved using the principle of "wishful thinking." For a tower of 4 disks, if we could move the top 3 to the extra peg, then we could move the bottom disk to the destination peg and move the tower of size 3 to the destination peg from the extra peg.

(define (move-tower size from to extra)
   (cond ((= size 0) 'done)
         (else
            (move-tower (- size 1) from extra to)
            (print-move from to)
            (move-tower (- size 1) extra to from))))

(define (print-move from to)
   (display "Move top disk from ")
   (display from)
   (display " to ")
   (display to)
   (newline))
This code is Theta(2^n) in terms of time and Theta(n) in terms of space (the maximum depth of the recursion).
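The Theta(2^n) time bound follows from the move count: moves(n) = 2*moves(n-1) + 1, which solves to 2^n - 1. A small counter (the name count-moves is ours) mirrors move-tower's recursion:

```scheme
;; Count the disk moves made by move-tower for a tower of the given
;; size, following the same recursive structure.
(define (count-moves size)
   (if (= size 0)
       0
       (+ (count-moves (- size 1))    ; move the top of the tower aside
          1                           ; move the bottom disk
          (count-moves (- size 1))))) ; move the tower back on top

;; (count-moves 4) => 15, which is 2^4 - 1
```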

Orders of Growth: Why does it matter?

Why do we care? Compare the approximate number of operations for problems of size n = 2, 10, and 100:

                              n=2       n=10      n=100
Constant:     Theta(1)          1          1          1
Logarithmic:  Theta(log n)      1       3.33       6.66
Linear:       Theta(n)          2         10        100
Quadratic:    Theta(n^2)        4        100     10,000
Exponential:  Theta(2^n)        4      1,024     ~1.26 x 10^30

At 1 billion operations per second, if you were to run an exponential time algorithm in the lab on a data set of size n=100, you would be waiting for approximately 4 x 10^11 centuries for the code to finish running!
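Scheme's exact integer arithmetic lets us check that figure. This is a rough calculation (variable names are ours): 2^100 operations at 10^9 operations per second, converted to centuries.

```scheme
(define ops (expt 2 100))              ; ~1.26 x 10^30 operations
(define seconds (/ ops (expt 10 9)))   ; at 10^9 operations per second
(define centuries
   (/ seconds (* 100 365 24 60 60)))   ; seconds in 100 years

;; (exact->inexact centuries) is about 4.0 x 10^11
```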

Space and Time

When we talk about order of growth in terms of time, we are counting the number of operations as a function of the input size.

When we talk about order of growth in terms of space, we are counting the maximum number of deferred operations held at any one time (the depth of the stack) as a function of the input size.

Iterative vs. Recursive Processes

By definition, an iterative process uses constant space. A process is recursive if it is not iterative (that is, it uses more than constant space).

Note that procedures that give rise to iterative processes may be recursive procedures (that is, procedures that call themselves).

Last update: October 2000