# All Pairs Shortest Paths

Given a directed, connected, weighted graph $G=(V,E)$, each edge $⟨u,v⟩\in E$ has an associated weight $w(u,v)$. The all-pairs shortest paths (APSP) problem is to find a shortest path from $u$ to $v$ for every pair of vertices $u,v\in V$.

## 1  The representation of $G$

The input is an $n×n$ matrix $W=\left({w}_{\mathrm{ij}}\right)$.

 $w(i,j)=\begin{cases}0 & \text{if } i=j\\ \text{the weight of the directed edge } ⟨i,j⟩ & \text{if } i\ne j \text{ and } ⟨i,j⟩\in E\\ \infty & \text{if } i\ne j \text{ and } ⟨i,j⟩\notin E\end{cases}$
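As a concrete illustration, the weight matrix can be assembled from an edge list; `weight_matrix` below is our own helper name, not part of the text:

```python
import math

def weight_matrix(n, edges):
    """Build the n x n weight matrix W from weighted directed edges
    (u, v, w) with 1-based vertex labels: w_ij = 0 on the diagonal,
    the edge weight where <i,j> is an edge, and infinity elsewhere."""
    W = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        W[u - 1][v - 1] = w
    return W
```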

## 2  Algorithms for the APSP problem

• Matrix Multiplication / Repeated Squaring
• The Floyd-Warshall Algorithm
• Transitive Closure of a Graph

## 3  Matrix Multiplication Algorithm

The algorithm is based on dynamic programming: each iteration of its main loop invokes an operation that is very similar to matrix multiplication. Following the DP strategy, the structure of an optimal solution is, for any two vertices $u$ and $v$,
1. if $u=v$, then the shortest path from $u$ to $v$ has weight 0 (it contains no edges).
2. otherwise, decompose a shortest path $p$ from $u$ to $v$ containing at most $k$ edges into $u\stackrel{p'}{\to}x\to v$, where $p'$ is a shortest path from $u$ to $x$ containing at most $k-1$ edges.
This suggests a recursive solution for the APSP problem. Let ${d}_{ij}^{(k)}$ be the minimum weight of any path from $i$ to $j$ that contains at most $k$ edges.
1. If $k=0$, then

 $d_{ij}^{(0)}=\begin{cases}0 & \text{if } i=j\\ \infty & \text{if } i\ne j\end{cases}$

2. Otherwise, for $k\ge 1$, ${d}_{ij}^{(k)}$ can be computed from the ${d}_{ij}^{(k-1)}$ values and the weight matrix $W$:
$d_{ij}^{(k)}=\min\left\{d_{ij}^{(k-1)},\ \min_{1\le l\le n}\left\{d_{il}^{(k-1)}+w_{lj}\right\}\right\}=\min_{1\le l\le n}\left\{d_{il}^{(k-1)}+w_{lj}\right\}$
(The two expressions are equal because $w_{jj}=0$, so the choice $l=j$ already accounts for the first term $d_{ij}^{(k-1)}$.)
SPECIAL-MATRIX-MULTIPLY $\left(A,B\right)$
1     $n$ $←$  $\text{rows}\left[A\right]$
2     $C$ $←$ new $n×n$ matrix
3    for  $i$ $←$  $1$ to  $n$
4             do for  $j$ $←$  $1$ to  $n$
5                            do  ${c}_{\mathrm{ij}}$ $←$  $\infty$
6                                  for  $k$ $←$  $1$ to  $n$
7                                           do  ${c}_{\mathrm{ij}}$ $←$  $min\left({c}_{\mathrm{ij}},{a}_{\mathrm{ik}}+{b}_{\mathrm{kj}}\right)$
.                                                    /* Here's where this algorithm */
.                                                    /* differs from MATRIX-MULTIPLY. */
8    return  $C$
The optimal solution can be computed by calling SPECIAL-MATRIX-MULTIPLY $\left({D}^{\left(k\right)},W\right)$ for $1\le k\le n-2$, each call producing ${D}^{\left(k+1\right)}$. We only need to run to $k=n-2$ because that gives us ${D}^{\left(n-1\right)}$, which contains all the shortest-path weights using at most $n-1$ edges (a shortest path never repeats a vertex, so it uses at most $n-1$ edges to connect $n$ vertices). Since each call to SPECIAL-MATRIX-MULTIPLY takes $O\left({n}^{3}\right)$ time and it is called $n-2$ times, the total running time is $O\left({n}^{4}\right)$.
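A direct Python transcription of SPECIAL-MATRIX-MULTIPLY might look as follows (0-based indices; `math.inf` plays the role of $\infty$):

```python
import math

def special_matrix_multiply(A, B):
    """Min-plus 'multiplication': c_ij = min_k (a_ik + b_kj).
    Structurally identical to ordinary matrix multiplication,
    with + replaced by min and * replaced by +."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # This min() is where the algorithm differs
                # from ordinary MATRIX-MULTIPLY.
                C[i][j] = min(C[i][j], A[i][k] + B[k][j])
    return C
```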

### 3.1  The repeated squaring method

Since each ${D}^{\left(k\right)}$ matrix contains the shortest paths of at most $k$ edges, and $W$ is really ${D}^{\left(1\right)}$, all we were doing in the earlier solution was asking: "Given the shortest paths of at most $k$ edges, and the shortest paths of at most $1$ edge, what are the shortest paths of at most $k+1$ edges?"
This situation is ripe for improvement. The repeated squaring method rephrases the question as: "Given the shortest paths of at most $k$ edges, what are the shortest paths of at most $k+k$ edges?" The correctness of this approach lies in the observation that the shortest paths of at most $m$ edges are the same as the shortest paths of at most $n-1$ edges for all $m>n-1$. Thus:
${D}^{(1)}=W$
${D}^{(2)}=W^{2}=W\cdot W$
${D}^{(4)}=W^{4}=W^{2}\cdot W^{2}$
$\vdots$
${D}^{\left(2^{\lceil\log(n-1)\rceil}\right)}=W^{2^{\lceil\log(n-1)\rceil}}=W^{2^{\lceil\log(n-1)\rceil-1}}\cdot W^{2^{\lceil\log(n-1)\rceil-1}}$
Using repeated squaring, we only need to run SPECIAL-MATRIX-MULTIPLY $⌈{\mathrm{log}}_{2}\left(n-1\right)⌉$ times. Hence the running time of the improved matrix multiplication algorithm is $O\left({n}^{3}\mathrm{log}n\right)$.
ALL-PAIRS-SHORTEST-PATHS $\left(W\right)$
1     $n$ $←$  $\text{rows}\left[W\right]$
2     ${D}^{\left(1\right)}$ $←$  $W$
3     $m$ $←$  $1$
4    while  $m < n-1$
5                  do  ${D}^{\left(2m\right)}$ $←$  SPECIAL-MATRIX-MULTIPLY $\left({D}^{\left(m\right)},{D}^{\left(m\right)}\right)$
6                         $m$ $←$  $2m$
7    return  ${D}^{\left(m\right)}$
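Putting the pieces together, a runnable sketch of ALL-PAIRS-SHORTEST-PATHS with repeated squaring (function names are ours, 0-based indices):

```python
import math

def special_matrix_multiply(A, B):
    """Min-plus product: c_ij = min_k (a_ik + b_kj)."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] = min(C[i][j], A[i][k] + B[k][j])
    return C

def all_pairs_shortest_paths(W):
    """Repeated squaring: D^(2m) = D^(m) . D^(m), stopping once
    m >= n-1, since allowing more edges cannot shorten any path."""
    n = len(W)
    D = W
    m = 1
    while m < n - 1:
        D = special_matrix_multiply(D, D)
        m *= 2
    return D
```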

### 3.2  Repeated Squaring Example

Figure 1: Example graph for the Repeated Squaring method.
Take the graph in Figure 1 for example. The weights for this graph in matrix form...

 $W=\begin{pmatrix}0 & 5 & \infty & \infty\\ \infty & 0 & 1 & \infty\\ 8 & \infty & 0 & 3\\ 2 & \infty & \infty & 0\end{pmatrix}={D}^{(1)}$

Repeatedly squaring this gives us...

 ${D}^{(1)}\cdot{D}^{(1)}=\begin{pmatrix}0 & 5 & 6 & \infty\\ 9 & 0 & 1 & 4\\ 5 & 13 & 0 & 3\\ 2 & 7 & \infty & 0\end{pmatrix}={D}^{(2)}$

 ${D}^{(2)}\cdot{D}^{(2)}=\begin{pmatrix}0 & 5 & 6 & 9\\ 6 & 0 & 1 & 4\\ 5 & 10 & 0 & 3\\ 2 & 7 & 8 & 0\end{pmatrix}={D}^{(4)}$

${D}^{\left(4\right)}$ contains the all-pairs shortest-path weights. It is interesting to note that in ${D}^{\left(2\right)}$, the shortest path from $2$ to $1$ has weight 9, using the path $⟨2,3,1⟩$. Since the final solution ( ${D}^{\left(4\right)}$) allows up to $4$ edges to be used, a shorter path $⟨2,3,4,1⟩$ is found, with weight 6.
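The squaring steps above can be replayed in a few lines of Python; `min_plus_square` is our own shorthand for one SPECIAL-MATRIX-MULTIPLY of a matrix with itself:

```python
import math

INF = math.inf

def min_plus_square(D):
    """One repeated-squaring step: (D . D)_ij = min_l (d_il + d_lj)."""
    n = len(D)
    return [[min(D[i][l] + D[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

# Weight matrix for the graph in Figure 1 (vertices 1..4).
W = [[0, 5, INF, INF],
     [INF, 0, 1, INF],
     [8, INF, 0, 3],
     [2, INF, INF, 0]]

D2 = min_plus_square(W)    # shortest paths of at most 2 edges
D4 = min_plus_square(D2)   # shortest paths of at most 4 edges
# D2[1][0] is 9 (path <2,3,1>); D4[1][0] drops to 6 (path <2,3,4,1>).
```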

## 4  The Floyd-Warshall Algorithm

Floyd-Warshall's algorithm is based on the observation that a path linking any two vertices $u$ and $v$ may have zero or more intermediate vertices. The algorithm begins by disallowing all intermediate vertices; in this case, the partial solution is simply the initial weight matrix of the graph, with $\infty$ where there is no edge.
The algorithm proceeds by allowing one additional intermediate vertex at each step. When a new intermediate vertex $x$ is introduced, the shortest-path estimate between any pair of vertices $u,v\in V$ becomes the minimum of the previous best estimate $\delta \left(u,v\right)$ and the combination of the paths $u\to x$ and $x\to v$:

 $\delta \left(u,v\right)←\mathrm{min}\left(\delta \left(u,v\right),\delta \left(u,x\right)+\delta \left(x,v\right)\right)$

Let the directed graph be represented by a weighted matrix $W$.
FLOYD-WARSHALL $\left(W\right)$
1     $n$ $←$  $\text{rows}\left[W\right]$
2     ${D}^{\left(0\right)}$ $←$  $W$
3    for  $k$ $←$  $1$ to  $n$
4             do for  $i$ $←$  $1$ to  $n$
5                            do for  $j$ $←$  $1$ to  $n$
6                                           do  ${d}_{\mathrm{ij}}^{\left(k\right)}$ $←$  MIN $\left({d}_{\mathrm{ij}}^{\left(k-1\right)},{d}_{\mathrm{ik}}^{\left(k-1\right)}+{d}_{\mathrm{kj}}^{\left(k-1\right)}\right)$
7    return  ${D}^{\left(n\right)}$
The time complexity of the algorithm above is $O\left({n}^{3}\right)$.
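A direct Python transcription of FLOYD-WARSHALL (0-based indices; the update can safely be done in place because row $k$ and column $k$ do not change during iteration $k$):

```python
import math

def floyd_warshall(W):
    """After iteration k, D[i][j] is the weight of the shortest
    i -> j path whose intermediate vertices all lie in {0, ..., k}."""
    n = len(W)
    D = [row[:] for row in W]  # copy so W is left untouched
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```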

### 4.1  Floyd-Warshall Example

Figure 2: Example graph for FLOYD-WARSHALL.
Let us now work through the steps of the FLOYD-WARSHALL algorithm when it is applied to the graph depicted in Figure 2.

 $W=\begin{pmatrix}0 & 1 & 8 & \infty\\ \infty & 0 & \infty & 2\\ \infty & \infty & 0 & \infty\\ \infty & \infty & 3 & 0\end{pmatrix}$

1. Allowable intermediate vertices $\left\{\right\}$: ${D}^{\left(0\right)}=W$
2. Allowable intermediate vertices $\left\{1\right\}$: ${D}^{\left(1\right)}$ no change
3. Allowable intermediate vertices $\left\{1,2\right\}$:

 ${D}^{(2)}=\begin{pmatrix}0 & 1 & 8 & \mathbf{3}\\ \infty & 0 & \infty & 2\\ \infty & \infty & 0 & \infty\\ \infty & \infty & 3 & 0\end{pmatrix}$

Where $3=\text{min}\left\{\infty ,1+2\right\}$, using paths $1\to 2$ and $2\to 4$.
4. Allowable intermediate vertices $\left\{1,2,3\right\}$: ${D}^{\left(3\right)}$ no change
5. Allowable intermediate vertices $\left\{1,2,3,4\right\}$:

 ${D}^{(4)}=\begin{pmatrix}0 & 1 & \mathbf{6} & 3\\ \infty & 0 & \mathbf{5} & 2\\ \infty & \infty & 0 & \infty\\ \infty & \infty & 3 & 0\end{pmatrix}$

Where $6=\text{min}\left\{8,3+3\right\}$, using paths $1\to 4$ and $4\to 3$.
Where $5=\text{min}\left\{\infty ,2+3\right\}$, using paths $2\to 4$ and $4\to 3$.
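The five steps above can be replayed one intermediate vertex at a time; `fw_step` is our own helper performing a single iteration of the outer loop:

```python
import math

INF = math.inf

# Weight matrix for the graph in Figure 2 (vertices 1..4).
W = [[0, 1, 8, INF],
     [INF, 0, INF, 2],
     [INF, INF, 0, INF],
     [INF, INF, 3, 0]]

def fw_step(D, k):
    """One FLOYD-WARSHALL iteration: additionally allow vertex k+1
    (1-based) as an intermediate vertex."""
    n = len(D)
    return [[min(D[i][j], D[i][k] + D[k][j]) for j in range(n)]
            for i in range(n)]

D1 = fw_step(W, 0)    # allow vertex 1: no change
D2 = fw_step(D1, 1)   # allow vertex 2: d_14 becomes min(inf, 1 + 2) = 3
D3 = fw_step(D2, 2)   # allow vertex 3: no change
D4 = fw_step(D3, 3)   # allow vertex 4: d_13 -> 6, d_23 -> 5
```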

## 5  The transitive closure of a directed graph

The transitive closure of a directed graph $G=\left(V,E\right)$ tells us, for each pair of vertices $i,j\in V$, whether there is a path from $i$ to $j$. It is itself another graph ${G}^{*}=\left(V,{E}^{*}\right)$, where ${E}^{*}=\{⟨i,j⟩: \text{there is a path from } i \text{ to } j \text{ in } G\}$. The transitive closure of a directed graph $G$ can be computed by the following algorithm.
TRANSITIVE-CLOSURE $\left(G\right)$
1     $n$ $←$  $|V|$
2    for  $i$ $←$  $1$ to  $n$
3             do for  $j$ $←$  $1$ to  $n$
4                            do if  $i=j$ or $\left(i,j\right)\in E$
5                                        then  ${t}_{\mathrm{ij}}^{\left(0\right)}$ $←$  $1$
6                                        else   ${t}_{\mathrm{ij}}^{\left(0\right)}$ $←$  $0$
7    for  $k$ $←$  $1$ to  $n$
8             do for  $i$ $←$  $1$ to  $n$
9                            do for  $j$ $←$  $1$ to  $n$
10                                           do  ${t}_{\mathrm{ij}}^{\left(k\right)}$ $←$  ${t}_{\mathrm{ij}}^{\left(k-1\right)}\vee \left({t}_{\mathrm{ik}}^{\left(k-1\right)}\wedge {t}_{\mathrm{kj}}^{\left(k-1\right)}\right)$
11    return  ${T}^{\left(n\right)}$
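A direct Python transcription of TRANSITIVE-CLOSURE; the edge list in the usage below is read off ${T}^{\left(0\right)}$ in the example that follows, and 1-based vertex labels are shifted to 0-based indices internally:

```python
def transitive_closure(n, edges):
    """Boolean variant of Floyd-Warshall: T[i][j] ends up True iff
    there is a path from i+1 to j+1. edges uses 1-based labels."""
    T = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        T[u - 1][v - 1] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T
```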

### 5.1  Transitive Closure Example

Figure 3: Example graph for Transitive Closure.
The matrices below illustrate the state of the TRANSITIVE-CLOSURE algorithm as it operates on the graph depicted in Figure 3.

 $T^{(0)}=\begin{pmatrix}1 & 1 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 1\end{pmatrix}\quad T^{(1)}=\begin{pmatrix}1 & 1 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 1\end{pmatrix}\quad T^{(2)}=\begin{pmatrix}1 & 1 & 1 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\end{pmatrix}$

 $T^{(3)}=\begin{pmatrix}1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\end{pmatrix}\quad T^{(4)}=\begin{pmatrix}1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1\\ 0 & 1 & 1 & 1\\ 0 & 1 & 1 & 1\end{pmatrix}$
