# Introduction to Algorithms

An algorithm is any well-defined computational procedure that takes some value or set of values as input, and produces some value or set of values as output.

## 1  Why study algorithms?

As processor speeds increase, performance is often said to matter less than other software quality characteristics (e.g. security, extensibility, reusability). However, large problem sizes are commonplace in computational science, which makes performance a very important factor: longer computation times mean, among other things, slower results, less thorough research, and a higher cost of computation (if buying CPU hours from an external party).
The study of algorithms therefore gives us a language to express performance as a function of problem size. The same approach can express resource requirements (such as memory) as a function of problem size. It is important to keep resource requirements in check, as exceeding them easily results in poorer performance.

## 2  Notation

The representation of the problem, the solution, and the algorithm that obtains the solution are all greatly influenced by the state of technology. In this day and age, the basic model is a sequential processor with access to random-access memory. Even a simple mathematical problem such as "find the minimum of a set of integers" requires us to consider two points:
1. How is the set represented?
2. How to find the minimum (depends on how the set is represented)?
To keep things simple ${}^{1}$, let us assume that the integers are stored in a sequential array, and that we know the number of integers to be $n$. Since any one of the integers in the array may be the minimum, the solution is simply to examine each of them and report the smallest. The following algorithm implements this strategy.
```
FIND-MINIMUM(A, n)
1   if n = 0
2       then return NIL    /* No minimum in an empty array. */
3   m ← A[1]
4   i ← 2
5   while i ≤ n
6       do if A[i] < m
7              then m ← A[i]
8          i ← i + 1
9   return m
```
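The pseudocode translates almost line for line into a real language; a minimal sketch in Python (the function name `find_minimum` is our own, and Python lists are 0-indexed where the pseudocode is 1-indexed):

```python
def find_minimum(A):
    """Return the smallest element of A, or None for an empty list."""
    n = len(A)
    if n == 0:
        return None          # no minimum in an empty array
    m = A[0]                 # pseudocode's A[1]
    i = 1                    # pseudocode's i = 2
    while i < n:
        if A[i] < m:
            m = A[i]
        i += 1
    return m
```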
The FIND-MINIMUM algorithm reflects the state of present-day computer technology:
• The computer executes one line at a time.
• There may be branches (if).
• There may be iterations (while, for, repeat/until).
• There may be assignments ( $m$ $←$  $A\left[1\right]$).
• There may be random memory accesses (Using $A\left[i\right]$ to access the $i$th element of $A$).

## 3  Analysis

The basic idea is that the more instructions a computer has to process, the longer it will take to complete processing. Thus, the aim of analysing algorithms is to describe how much work a computer has to do. Since the amount of work usually depends on the problem size, we are mostly interested in how an algorithm scales with problem size.
For starters, let us examine the FIND-MINIMUM algorithm introduced earlier. Lines 1 to 4 can be completed in some constant amount of time ${c}_{1}$. Due to the while-loop at line 5, lines 5 to 8 take ${c}_{2}\left(n-1\right)$ time. Lastly, line 9 takes ${c}_{3}$ time. Total time taken is therefore the sum of all these:

$$
\begin{aligned}
 & c_1 + c_2(n-1) + c_3 \\
={} & c_1 + c_2 n - c_2 + c_3 \\
={} & c_2 n + c_4 \qquad \text{where } c_4 = c_1 - c_2 + c_3
\end{aligned}
$$

Therefore, each element in the input array contributes ${c}_{2}$ time to the total runtime.
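We can make that per-element cost visible by instrumenting the loop with a counter (a sketch; the counter is ours, not part of the algorithm). The loop body runs exactly $n-1$ times, which is the part of the cost that grows with $n$:

```python
def find_minimum_counted(A):
    """Return (minimum, iterations); the loop body runs n - 1 times."""
    iterations = 0
    m = A[0]
    for x in A[1:]:          # corresponds to the while-loop over i = 2 .. n
        iterations += 1
        if x < m:
            m = x
    return m, iterations
```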

## 4  Order of growth

How an algorithm scales in relation to its input problem size is termed its order of growth. For example, say we have an algorithm whose running time is ${n}^{3}+{n}^{2}+n+4$. The following table shows us the significance of each term:
| Value of $n$ | ${n}^{3}$ | ${n}^{2}$ | ${n}^{1}$ | $4$ |
|---:|---:|---:|---:|---:|
| 1 | 1 | 1 | 1 | 4 |
| 10 | 1000 | 100 | 10 | 4 |
| 100 | 1000000 | 10000 | 100 | 4 |
| 1000 | 1000000000 | 1000000 | 1000 | 4 |
As can be seen, as $n$ approaches infinity, all the other terms fade in significance next to ${n}^{3}$. Therefore the order of growth of the algorithm is ${n}^{3}$.
So applying this vocabulary to FIND-MINIMUM, whose cost we found to be ${c}_{4}+{c}_{2}n$, we say that the order of growth is $n$.
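The dominance argument behind the table can be checked numerically: as $n$ grows, the cubic term accounts for nearly all of the total (a throwaway sketch; `term_share` is our own name):

```python
def term_share(n):
    """Fraction of n^3 + n^2 + n + 4 contributed by the leading term."""
    total = n**3 + n**2 + n + 4
    return n**3 / total

# The cubic term's share approaches 1 as n grows, so the other
# terms fade in significance and the order of growth is n^3.
```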

## 5   Insertion-Sort Example

A sorting algorithm arranges the elements of a list into a certain order. A simple example is the INSERTION-SORT algorithm. The strategy is to divide the list into a set of sorted elements and a set of unsorted elements. We then take an element from the unsorted set and insert it into the correct position in the sorted set until the unsorted set is empty.
```
INSERTION-SORT(A)                             Cost   Times
1   for j ← 2 to length[A]                    c1     n
2       do key ← A[j]                         c2     n − 1
3          i ← j − 1                          c3     n − 1
4          while i > 0 and A[i] > key         c4     Σ_{j=2..n} t_j
5              do A[i+1] ← A[i]               c5     Σ_{j=2..n} (t_j − 1)
6                 i ← i − 1                   c6     Σ_{j=2..n} (t_j − 1)
7          A[i+1] ← key                       c7     n − 1
```
Here ${t}_{j}$ denotes the number of times the while-loop test on line 4 executes for that value of $j$.
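A direct Python transcription of the pseudocode (0-indexed, so $j$ runs from 1; comments map each statement back to the pseudocode lines):

```python
def insertion_sort(A):
    """Sort list A in place using insertion sort; return A."""
    for j in range(1, len(A)):        # line 1: j = 2 .. length[A]
        key = A[j]                    # line 2
        i = j - 1                     # line 3
        while i >= 0 and A[i] > key:  # line 4 (i > 0 becomes i >= 0 when 0-indexed)
            A[i + 1] = A[i]           # line 5: shift larger elements right
            i -= 1                    # line 6
        A[i + 1] = key                # line 7: insert into the sorted prefix
    return A
```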
To compute the running time of this algorithm, we sum the products of the cost and times columns. The worst case occurs when the array is in reverse sorted order, since then each new element must be compared against every element of the sorted prefix, giving ${t}_{j}=j$. Noting that:

 $\sum _{j=2}^{n}j=\frac{n\left(n+1\right)}{2}-1$

the worst case running time is

 $T\left(n\right)=\left(\frac{{c}_{4}}{2}+\frac{{c}_{5}}{2}+\frac{{c}_{6}}{2}\right){n}^{2}+\left({c}_{1}+{c}_{2}+{c}_{3}+\frac{{c}_{4}}{2}-\frac{{c}_{5}}{2}-\frac{{c}_{6}}{2}+{c}_{7}\right)n-\left({c}_{2}+{c}_{3}+{c}_{4}+{c}_{7}\right)$

Hence the order of growth is ${n}^{2}$.
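The ${n}^{2}$ behaviour can be confirmed by counting how often the shift on line 5 executes. On a reverse-sorted array of $n$ elements it runs $n(n-1)/2$ times; a sketch with our own counter added:

```python
def insertion_sort_shifts(A):
    """Insertion-sort A in place; return how many times line 5 (the shift) ran."""
    shifts = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]           # line 5 of the pseudocode
            shifts += 1
            i -= 1
        A[i + 1] = key
    return shifts
```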

## 6  Design of Algorithms

As with all areas of computing, there are many different approaches to designing an algorithm to solve any given problem. Over the course of this unit, we will look at a few of these approaches, and how to recognise which approach is best for which problem.
Some of the popular approaches include:
• Divide-and-conquer
• Greedy approach
• Dynamic programming
• Linear programming
• Branch-and-bound
• $\alpha -\beta$ pruning

## 7  Divide-and-Conquer Algorithms

Divide-and-Conquer is a recursive approach, which involves the following steps:
1. Divide the problem into a number of subproblems.
2. Conquer the subproblems by solving them recursively. If however the subproblems are small enough, simply solve them.
3. Combine the solutions of the subproblems into the solution for the original problem.
Consider the sorting problem. Insertion sort uses an incremental approach: it sorts the first item, then the first two items, then the first three items, and so on, until it has sorted all $n$ items of the array.
Another approach is merge sort. It uses the divide-and-conquer approach, as follows:
1. Divide the $n$-element sequence into two subsequences of $n/2$ elements each.
2. Conquer the two subsequences by recursively using the merge sort on each of them.
3. Merge the two sorted subsequences into a sorted sequence. See Figure 1.
Figure 1: Merging an eight integer sequence.
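The three steps map directly onto code; a minimal Python sketch (this version returns a sorted copy rather than sorting in place):

```python
def merge_sort(A):
    """Return a sorted copy of A using divide-and-conquer."""
    if len(A) <= 1:                    # small enough: already sorted
        return A
    mid = len(A) // 2                  # 1. Divide into two halves
    left = merge_sort(A[:mid])         # 2. Conquer each half recursively
    right = merge_sort(A[mid:])
    merged = []                        # 3. Merge the sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # one side is exhausted;
    merged.extend(right[j:])           # append the remainder of the other
    return merged
```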

## 8  Analysis of Merge Sort

The running time of an algorithm that makes a recursive call to itself is naturally described by a "recurrence". We can analyse the running time of merge sort by looking at each of its three steps.
1. Divide - Dividing the array requires simply finding the mid-point of the array, this takes constant time, thus $D\left(n\right)=1$.
2. Conquer - Here the algorithm makes two calls to itself, each one with an order of input of $n/2$. Thus $C\left(n\right)=2T\left(n/2\right)$.
3. Merge - The process of merging the two sorted arrays can be done in $M\left(n\right)=n$ time.
Combining the above, and remembering that an order of growth of 1 is insignificant next to an order of growth of $n$, we get the following recurrence:

$$
T(n) = \begin{cases} 1 & \text{if } n = 1 \\ 2T(n/2) + n & \text{if } n > 1 \end{cases}
$$
Using the above formula, we can build a tree (Figure 2) depicting how much time is spent at each level of the recursion. The summation of all the costs of each level shows that the running time of merge sort is $n\mathrm{lg}n$.
Figure 2: The breakdown of the cost of a Merge Sort. Each level of the tree represents a level of recursion in the algorithm; thus there are $\mathrm{lg}n$ levels. The cost of each level is shown on the right of the tree. Intuitively, the cost of the whole algorithm is the sum of all the level costs. In this case, it is $n\mathrm{lg}n$.
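For $n$ a power of two, unrolling the recurrence gives $T(n) = n\,\mathrm{lg}\,n + n$, which agrees with the tree argument (the extra $+n$ term does not change the order of growth). A quick sanity check:

```python
def T(n):
    """Evaluate the merge-sort recurrence T(1) = 1, T(n) = 2T(n/2) + n,
    for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k this evaluates to n*k + n, i.e. n*lg(n) + n.
```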

### Footnotes:

${}^{1}$Readers who frequently need to extract the minimum/maximum value of a set should take a look at the "heap" data structure covered in these notes.
