Tutorial

Question 1

Review the definitions of the basic notations such as Θ, O, Ω, o, and ω.
O-notation
For a given function g(n), we denote by O(g(n)) the set of functions:
O(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_0 }.
Ω-notation
For a given function g(n), we denote by Ω(g(n)) the set of functions:
Ω(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n_0 }.
Θ-notation
For a given function g(n), we denote by Θ(g(n)) the set of functions:
Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n ≥ n_0 }.
o-notation
For a given function, g(n), we denote by o(g(n)) the set of functions.
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n_0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n_0 }.
The implication is that f(n) becomes insignificant relative to g(n) as n approaches infinity:

lim_{n→∞} f(n)/g(n) = 0

ω-notation
For a given function, g(n), we denote by ω(g(n)) the set of functions.
ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n_0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n_0 }.
The implication of this is that g(n) becomes insignificant relative to f(n) as n approaches infinity:

lim_{n→∞} f(n)/g(n) = ∞

Question 2

What is the difference between O and o as well as Ω and ω?
The difference between O and o is that O denotes an upper bound which may or may not also be the tight bound, while o denotes an upper bound that is definitely not tight (a strict upper bound).
For example, 2n = o(n^2), but 2n^2 ≠ o(n^2).
Similarly, Ω denotes a lower bound which may or may not be tight, while ω denotes a lower bound that is definitely not tight.
For example, n^2/2 = ω(n), but n^2/2 ≠ ω(n^2).
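These limits can be illustrated numerically (a quick sanity check, not a proof; the helper `ratio` is my own name, and a large n stands in for the limit):

```python
# 2n = o(n^2): the ratio 2n / n^2 shrinks toward 0 as n grows.
# n^2/2 = ω(n): the ratio (n^2/2) / n blows up as n grows.

def ratio(f, g, n):
    """Evaluate f(n) / g(n) for a concrete n."""
    return f(n) / g(n)

for n in (10, 10**3, 10**6):
    r_small = ratio(lambda n: 2 * n, lambda n: n**2, n)    # tends to 0
    r_large = ratio(lambda n: n**2 / 2, lambda n: n, n)    # tends to infinity
    print(n, r_small, r_large)
```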

Question 3

Rank the following functions in order of growth: n^2 log n, n^{3/2} log^4 n, 3^{0.7n}, log* n · log_2 n, n^{1/log n}, √(n^{1/3} log n), n^{log log n}, √n.
First, work out that the following is actually a constant:

n^{1/log_2 n} = (2^{log_2 n})^{1/log_2 n} = 2^{(log_2 n)/(log_2 n)} = 2^1 = 2

In increasing order of growth (the right-hand side gives the dominant behaviour of each function):

(5) n^{1/log n} = 2 (constant)
(4) log* n · log_2 n ≈ log n
(6) √(n^{1/3} log n) ≈ n^{1/6}
(8) √n = n^{1/2}
(2) n^{3/2} log^4 n ≈ n^{3/2}
(1) n^2 log n ≈ n^2
(7) n^{log log n} (super-polynomial)
(3) 3^{0.7n} (exponential)
Note that log* n ("log star of n") is the iterated logarithm:
log* n = min { i ≥ 0 : log^{(i)} n ≤ 1 }
where log^{(i)} denotes the logarithm applied i times.
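The definition translates directly into code (a small sketch; the function name is mine):

```python
import math

def log_star(n):
    """Iterated logarithm: how many times log2 must be applied
    before the value drops to <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# log_star(16) = 3 because 16 -> 4 -> 2 -> 1,
# and log_star(65536) = 4 because 65536 -> 16 -> 4 -> 2 -> 1.
```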

Question 4

Use the divide-and-conquer paradigm to explain the basic steps of the quicksort algorithm.
The divide-and-conquer strategy divides a problem into a number of subproblems of roughly equal size. Each subproblem is then solved recursively; the base case is a subproblem small enough to solve without further division. Finally, the solutions of the subproblems are combined to obtain the solution to the original problem.
For the case of quicksort:
  1. (DIVIDE) Pick an element x = a_i to be the pivot element. Usually the first element a_1 is picked.
  2. (DIVIDE) Partition the remaining elements into two disjoint subsequences:
    S_1 = { a_i : a_i ≤ x }, and
    S_2 = { a_j : a_j > x }.
  3. (CONQUER) Sort S_1 and S_2 recursively using quicksort.
  4. (MERGE) Concatenate the sorted S_1, {x}, and S_2. The original sequence is now sorted.
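The steps above can be sketched as follows (a minimal sketch, not the in-place partition; it follows the S_1/S_2 formulation directly):

```python
def quicksort(seq):
    # Base case: a sequence of length 0 or 1 is already sorted.
    if len(seq) <= 1:
        return list(seq)
    x = seq[0]                              # DIVIDE: pivot = first element
    s1 = [a for a in seq[1:] if a <= x]     # S_1: elements <= pivot
    s2 = [a for a in seq[1:] if a > x]      # S_2: elements > pivot
    # CONQUER: sort each part recursively; MERGE: concatenate.
    return quicksort(s1) + [x] + quicksort(s2)
```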

Question 5

Review the four techniques for bounding summations. Use one of the introduced techniques to bound Σ_{k=1}^{∞} k/5^k.
The four techniques for bounding summations are as follows.
  1. Mathematical induction
    What is mathematical induction? For example, consider the series 1 + 2 + 3 + ... + n.

    P(n) says: 1 + 2 + 3 + ... + n = n(n+1)/2

    If P(n) is true, is P(n+1) true?

    P(n+1) says: 1 + 2 + 3 + ... + n + (n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2 = (n+1)((n+1)+1)/2

    Therefore P(n) ⇒ P(n+1).
    Now, show the base case: P(1) is true because 1(1+1)/2 = 1. And since P(1) is true, P(2) must be true, so P(3) is true, and so on.
  2. Splitting summations
    For example, consider bounding Σ_{k=1}^{n} k.
    Replacing every term k by the largest term n gives an upper bound:

    Σ_{k=1}^{n} k ≤ Σ_{k=1}^{n} n = n^2 = O(n^2)

    To find the lower bound we split the summation:

    Σ_{k=1}^{n} k = Σ_{k=1}^{n/2} k + Σ_{k=n/2+1}^{n} k ≥ Σ_{k=1}^{n/2} 0 + Σ_{k=n/2+1}^{n} (n/2) = n^2/4 = Ω(n^2)

  3. Approximation by integrals
    For example, if we have Σ_{x=p}^{q} f(x), the sum is essentially the area under the curve f(x) for p ≤ x ≤ q. We can use integration to find the area under a curve, so the summation is approximately equal to ∫_p^q f(x) dx.
  4. Bounding terms
    We now apply the bounding-terms technique to find an upper bound for

    Σ_{k=1}^{∞} k/5^k

    Let a_k = k/5^k.
    Then the ratio of consecutive terms is:

    a_{k+1}/a_k = ((k+1)/5^{k+1}) × (5^k/k) = (k+1)/(5k)

    This ratio is largest when k = 1, so every ratio r satisfies:

    r ≤ (1+1)/(5×1) = 2/5

    Given that the first term is a_1 = 1/5 and each subsequent term is at most 2/5 of the previous one, Σ_{k=1}^{∞} k/5^k can be bounded by the sum to infinity of a geometric series:

    Σ_{k=1}^{∞} k/5^k ≤ (1/5) × 1/(1 - 2/5) = (1/5) × (5/3) = 1/3
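All four techniques can be spot-checked numerically (my examples; the exact value 5/16 of the last sum comes from the standard identity Σ k·x^k = x/(1-x)^2 at x = 1/5):

```python
import math

# 1. Mathematical induction: check the closed form 1 + 2 + ... + n = n(n+1)/2.
for n in range(1, 100):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

# 2. Splitting summations: n^2/4 <= sum_{k=1}^{n} k <= n^2.
n = 1000
s = sum(range(1, n + 1))
assert n * n // 4 <= s <= n * n

# 3. Approximation by integrals: for the decreasing f(x) = 1/x,
#    integral_1^{n+1} dx/x <= sum_{k=1}^{n} 1/k <= 1 + integral_1^{n} dx/x.
h = sum(1.0 / k for k in range(1, n + 1))
assert math.log(n + 1) <= h <= 1 + math.log(n)

# 4. Bounding terms: sum_{k=1}^{inf} k/5^k is exactly 5/16 = 0.3125,
#    comfortably below the geometric-series bound of 1/3.
s5 = sum(k / 5**k for k in range(1, 200))   # tail beyond k = 200 is negligible
assert abs(s5 - 5 / 16) < 1e-12
assert s5 <= 1 / 3
```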

Question 6

Review the two approaches for solving recurrences, and apply them to solve the following recurrences.

Question 6(a)

T(n) = T(n/3) + n^{4/3}
Using the iterative method...

T(n) = n^{4/3} + T(n/3)
     = n^{4/3} + (n/3)^{4/3} + T(n/3^2)
     = n^{4/3} + (n/3)^{4/3} + (n/3^2)^{4/3} + T(n/3^3)
     = ...
     = n^{4/3} + (n/3)^{4/3} + (n/3^2)^{4/3} + ... + (n/3^k)^{4/3}


T(n) = n^{4/3} Σ_{i=0}^{k} (1/3^i)^{4/3} = n^{4/3} Σ_{i=0}^{k} (1/3^{4/3})^i < n^{4/3} Σ_{i=0}^{∞} (1/3^{4/3})^i = n^{4/3} × 1/(1 - 1/3^{4/3})

Let the constant c = 1/(1 - 1/3^{4/3}). Then:

T(n) < c·n^{4/3} = O(n^{4/3})

If you want to find k explicitly:

n/3^k = 1 ⇒ k = log_3 n
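The bound can be sanity-checked by iterating the recurrence directly (a sketch assuming a hypothetical base case T(n) = 0 for n < 1):

```python
def T(n):
    """Iterate T(n) = T(n/3) + n^(4/3), with assumed base case T(n) = 0 for n < 1."""
    total = 0.0
    while n >= 1:
        total += n ** (4 / 3)
        n /= 3
    return total

# The geometric-series constant from the derivation above.
c = 1 / (1 - 3 ** (-4 / 3))

# The finite sum is strictly below c * n^(4/3) for every n tested.
for n in (9, 729, 3**10):
    assert T(n) < c * n ** (4 / 3)
```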

Question 6(b)

T(n) ≤ T(n/4) + T(n/5 + 57) + log_2 n
Using the substitution method, first...

n/4 ≥ n/5 + 57 ⇔ n ≥ 1140

Since n/4 ≥ n/5 + 57 when n ≥ 1140, T(n/4) dominates T(n/5 + 57) for n ≥ 1140. So we simplify the recurrence to:

T(n) ≤ 2T(n/4) + log_2 n

GUESS 1: Now we guess that the solution is:

T(n) ≤ c√n

Assuming that this holds for T(n/4):

T(n/4) ≤ c√(n/4) = (1/2)c√n

Substitute this into our original T(n):

T(n) ≤ 2((1/2)c√n) + log_2 n = c√n + log_2 n

This doesn't agree with our guess: the extra log_2 n term remains.
GUESS 2: Revise our guess to T(n) ≤ c√n + b·log_2 n.
Assume T(n/4) ≤ c√(n/4) + b·log_2(n/4).
Substitute:

T(n) ≤ 2(c√(n/4) + b·log_2(n/4)) + log_2 n
     = c√n + 2b·log_2 n - 2b·log_2 4 + log_2 n
     = c√n + (2b + 1)·log_2 n - 4b

This still doesn't agree with our guess.
GUESS 3: Revise our guess to T(n) ≤ c√n + b·log_2 n + a.
Assume T(n/4) ≤ c√(n/4) + b·log_2(n/4) + a.
Substitute:

T(n) ≤ 2(c√(n/4) + b·log_2(n/4) + a) + log_2 n
     = c√n + 2b·log_2 n - 2b·log_2 4 + log_2 n + 2a
     = c√n + (2b + 1)·log_2 n + 2a - 4b

Since this has the same form as our guess, we solve for b and a:

2b + 1 = b ⇒ b = -1
2a - 4b = a ⇒ a = 4b = -4

Thus, the inequality holds when b = -1 and a = -4.
For all n < 1140, let T(n) be bounded by a constant R.
Solve for c when n = 4096 (note that 4096/4 = 1024 < 1140, so the subproblem falls under the base bound):

We need T(4096) ≤ c√4096 - log_2 4096 - 4. Since T(4096) ≤ 2T(1024) + log_2 4096 ≤ 2R + 12, it suffices that

2R + 12 ≤ 64c - 12 - 4, i.e. c ≥ (2R + 28)/64

Thus, 0 ≤ T(n) ≤ c√n - log_2 n - 4 when c ≥ (2R + 28)/64 for all n ≥ 1140, and therefore
T(n) = O(√n)
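As a rough numeric sanity check (my construction; R = 1 is an arbitrary stand-in for the base-case bound), iterating the original recurrence shows growth far below linear, consistent with O(√n):

```python
import math

def T(n):
    """Iterate T(n) = T(n/4) + T(n/5 + 57) + log2(n),
    with an assumed base-case bound R = 1 for n < 1140."""
    if n < 1140:
        return 1.0
    return T(n / 4) + T(n / 5 + 57) + math.log2(n)

n = 10**8
assert T(n) < 50 * math.sqrt(n)   # within a constant factor of sqrt(n)
assert T(n) < n / 100             # emphatically not Θ(n)
```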

Question 6(c) iterative

T(n) = T(√n) + 3n
Using the iterative method...

T(n) = 3n + T(n^{1/2})
     = 3n + 3n^{1/2} + T(n^{1/4})
     = 3n + 3n^{(1/2)} + 3n^{(1/2)^2} + ... + 3n^{(1/2)^k}


k = the number of times we can take the square root of n until it reaches 2:

n^{(1/2)^k} = 2
(1/2)^k · lg n = lg 2
lg n = 2^k
lg lg n = k · lg 2 = k
k = lg lg n


T(n) ≤ 3n × (k + 1) = 3n × (lg lg n + 1) = 3n·lg lg n + 3n = O(n lg lg n)
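The count k = lg lg n can be confirmed by taking square roots directly (a small sketch; `sqrt_steps` is my own name):

```python
import math

def sqrt_steps(n):
    """Count how many times we take the square root of n until it drops to 2."""
    count = 0
    while n > 2:
        n = math.sqrt(n)
        count += 1
    return count

# n = 2^16: 65536 -> 256 -> 16 -> 4 -> 2, i.e. 4 steps,
# and lg lg 65536 = lg 16 = 4.
assert sqrt_steps(2**16) == 4
assert sqrt_steps(2**16) == round(math.log2(math.log2(2**16)))
```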

Question 6(c) changing variables

T(n) = T(√n) + 3n
An alternative method is to use "changing variables" together with the iterative method.
Let n = 2^m.
Substituting this into the recurrence, we get:

T(2^m) = T(2^{m/2}) + 3×2^m

Note that taking the square root halves the exponent: √(2^m) = 2^{m/2}.

Rename S(m) = T(2^m):

S(m) = S(m/2) + 3×2^m

This works because we have changed the "problem size" variable to m. Note that the work done at each level of the recursion is still 3×2^m = 3n. Using the iterative method to solve S(m):

S(m) = S(m/2) + 3×2^m
     = 3×2^m + 3×2^{m/2} + S(m/2^2)
     = 3×2^m + 3×2^{m/2} + 3×2^{m/2^2} + ... + 3×2^{m/2^k}
     = 3 Σ_{i=0}^{k} 2^{m/2^i}

Find k:

2^{m/2^k} = 2 ⇒ m/2^k = 1 ⇒ k = log_2 m

Since the largest term in the summation is the i = 0 term, we can bound the summation by replacing each 2^{m/2^i} with 2^m.

S(m) ≤ 3×2^m × (log_2 m + 1) = O(2^m log_2 m)

T(n) = T(2^m) = S(m) = O(2^m log_2 m) = O(n log_2 log_2 n)



