Single Source Shortest Paths

Given a weighted directed graph G = (V, E), each edge (u, v) ∈ E has an associated weight w(u, v). The single-source shortest paths (SSSP) problem is to find a shortest path from a given source s to every other vertex v ∈ V − {s}. The weight (length) of a path p = ⟨v0, v1, …, vk⟩ is the sum of the weights of its constituent edges:

w(p) = ∑_{i=1}^{k} w(v_{i−1}, v_i).

The weight of a shortest path from u to v is defined by δ(u,v) = min{w(p) : p is a path from u to v}, with δ(u,v) = ∞ if no path from u to v exists.

1  Negative Cycles

In some applications, graphs may contain edges with negative weights. Such edges may form negative cycles (e.g. Figure 1). If there is a negative-weight cycle reachable from the source s, then the weight δ(s,v) is undefined for every vertex v on the cycle (conventionally taken to be −∞).
Figure 1: Shortest paths from root vertex s. Negative cycles are formed by g and h, as well as by b, c, and f. δ(s,g), δ(s,h), and δ(s,i) are −∞ because of the negative cycle formed by g and h. The weights for b, c, and f are ∞, despite the negative cycle, because they are unreachable from root vertex s.

2  Representation of a shortest path tree

The shortest path tree rooted at s is a directed subgraph G' = (V', E') with V' ⊆ V and E' ⊆ E such that every v ∈ V' is reachable from s, and for all v ∈ V', the unique simple path in G' from s to v is a shortest path from s to v in G. In practice, we represent this tree by maintaining, for each vertex v ∈ V, a predecessor p(v) that is either a vertex or NIL. Thus, the output for a single-source shortest paths problem is a tree rooted at the source.
Figure 2: Shortest Path Tree. The shortest path tree rooted at s has its edges in bold. Two variants (a) and (b) are shown for the same graph. In (a) vertex t is reached through the edge (q,t), whereas in (b), edge (p,t) is used instead.
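Given the predecessor values p(v), the shortest path from the source to any reached vertex can be recovered by walking the pointers back from v. A minimal Python sketch (the dict-based representation of p and the name path_to are illustrative assumptions, not part of the notes):

```python
def path_to(v, p):
    """Recover the path from the source to v by following predecessor
    pointers back from v. Assumes v was reached, i.e. following p[.]
    from v eventually hits the source (whose predecessor is None)."""
    path = []
    while v is not None:
        path.append(v)
        v = p[v]
    path.reverse()          # we collected vertices from v back to the source
    return path
```

For the tree in Figure 2(a), for example, p would map t to q, and path_to(t, p) would return the path from s to t through q.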

3  Initialization

The algorithms introduced here make use of a common initialization step:

INITIALIZE (G, s)
 1  d[s] ← 0
 2  p[s] ← NIL
 3  for all v ∈ V − {s}
 4      do d[v] ← ∞
 5         p[v] ← NIL

4  Shortest Paths and Relaxation

The main technique used by the shortest path algorithms introduced here is relaxation, a method that repeatedly decreases an upper bound on the length of an actual shortest path for each vertex until the upper bound equals the length of the shortest path.
For each vertex v, we maintain an attribute d[v] which is an upper bound on the length of a shortest path from source s to v. d[v] is also called the shortest path estimate.
RELAX (u, v, w)
 1  if d[v] > d[u] + w(u,v)
 2      then d[v] ← d[u] + w(u,v)
 3           p[v] ← u
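The two procedures translate directly into Python. A minimal sketch, assuming a graph represented as a dict mapping each vertex to a list of (neighbor, weight) pairs (this representation, and the use of dicts for d and p, are assumptions for illustration):

```python
import math

def initialize(graph, s):
    """INITIALIZE(G, s): every estimate starts at infinity and every
    predecessor at None (NIL), except d[s] = 0."""
    d = {v: math.inf for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    return d, p

def relax(u, v, w, d, p):
    """RELAX(u, v, w): tighten the upper bound d[v] via the edge (u, v)
    of weight w, recording u as v's predecessor if the bound improves."""
    if d[v] > d[u] + w:
        d[v] = d[u] + w
        p[v] = u
```

Relaxing an edge never increases an estimate, which is why the algorithms below may relax the same edge repeatedly without harm.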

5  Dijkstra's Algorithm

Dijkstra's algorithm assumes that all the edge weights in G are non-negative. The algorithm maintains a set S of vertices whose final shortest path weights from the source s have already been determined. That is, for all vertices v ∈ S, we have d[v] = δ(s,v).
The algorithm repeatedly selects the vertex u ∈ V − S with the minimum shortest-path estimate, adds u to S, and relaxes all the edges leaving u. A priority queue Q containing all the vertices in V − S is maintained, keyed by their d values. Figure 3 shows the execution of the DIJKSTRA-SHORT algorithm listed below.
DIJKSTRA-SHORT (G, w, s)
 1  INITIALIZE (G, s)
 2  S ← ∅
 3  Q ← V
 4  while Q ≠ ∅
 5      do u ← EXTRACT-MIN (Q)
 6         S ← S ∪ {u}
 7         for each vertex v ∈ Adj[u]
 8             do RELAX (u, v, w)
Figure 3: Execution of DIJKSTRA-SHORT. Step (a) shows the state of the graph after initialisation. In step (b), vertex s (the source) is extracted (shaded gray) from Q and the adjacent vertices a and e relaxed. Since d[a]>d[s]+w(s,a), d[a] is updated and p[a] set to s (arrow from s to a is made bold). Ditto for vertex e. In step (c), vertex e is extracted (and shaded) as it is Q's minimum. As before, its adjacent vertices are relaxed. Step (d) illustrates the RELAX mechanism. Here a is extracted and b and f relaxed. In doing so, f's parent is updated to a (formerly e). This execution carries on until Q is empty.
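A sketch of DIJKSTRA-SHORT in Python, again assuming the dict-of-adjacency-lists representation. Python's heapq has no DECREASE-KEY operation, so instead of updating Q in place this version pushes a fresh entry on every successful relaxation and discards stale entries on extraction (a common workaround, not something specified in the notes):

```python
import heapq
import math

def dijkstra(graph, s):
    """graph maps each vertex to a list of (neighbor, weight) pairs;
    all weights must be non-negative. Returns (d, p)."""
    d = {v: math.inf for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    pq = [(0, s)]                  # priority queue Q, keyed on d values
    done = set()                   # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)  # EXTRACT-MIN
        if u in done:
            continue               # stale entry: u already finished
        done.add(u)
        for v, w in graph[u]:      # relax every edge leaving u
            if d[v] > du + w:
                d[v] = du + w
                p[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, p
```

With a binary heap this runs in O((n + m) log n) time; the duplicate entries only inflate the heap to at most m elements.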

6  The Bellman-Ford Algorithm

Like Dijkstra's algorithm, the Bellman-Ford algorithm uses the technique of relaxation, progressively decreasing an estimate d[v] on the weight of a shortest path from a source s to each other vertex v ∈ V until it reaches the actual shortest path weight δ(s,v). The algorithm is capable of detecting negative cycles and returns TRUE if and only if the graph contains no negative cycles that are reachable from the source. If BELLMAN-FORD returns TRUE, then we have d[v] = δ(s,v) for all vertices v. Also, for all v ∈ V not reachable from s, δ(s,v) = ∞. Figure 4 shows the execution of the BELLMAN-FORD algorithm listed below on an example graph.
BELLMAN-FORD (G, w, s)
 1  INITIALIZE (G, s)
 2  for i ← 1 to |V| − 1
 3      do for each edge (u,v) ∈ E
 4             do RELAX (u, v, w)
 5  for each edge (u,v) ∈ E
 6      do if d[v] > d[u] + w(u,v)
 7             then return FALSE
 8  return TRUE
Figure 4: Executing BELLMAN-FORD. Step (a) shows the state of the graph after initialisation. In each of the |V|-1 iterations, we assume that the edges are visited in a left to right order (s,a), (s,b), (s,e), (a,f), ... (q,r). Due to this visiting order, within iteration 1, vertex q is first relaxed through edge (f,q), setting d[q] to 8. Soon after, edge (p,q) is considered and d[q] set to 7. Accordingly, p[q] is also updated.
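The pseudocode maps onto Python directly. This sketch takes the edge list explicitly as (u, v, w) triples, matching the "for each edge (u,v) ∈ E" loops (the triple encoding and function name are illustrative choices):

```python
import math

def bellman_ford(vertices, edges, s):
    """vertices is an iterable of vertex names; edges is a list of
    (u, v, w) triples. Returns (d, p, ok) where ok is False iff a
    negative cycle is reachable from s."""
    d = {v: math.inf for v in vertices}
    p = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(d) - 1):     # |V| - 1 passes over all edges
        for u, v, w in edges:
            if d[u] + w < d[v]:     # RELAX(u, v, w)
                d[v] = d[u] + w
                p[v] = u
    for u, v, w in edges:           # a still-relaxable edge means a
        if d[u] + w < d[v]:         # reachable negative cycle
            return d, p, False
    return d, p, True
```

Note that math.inf + w compares as infinity, so edges leaving unreachable vertices never trigger a relaxation. The running time is O(nm).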

7  Single-Source Shortest Paths in DAGs

By relaxing the edges of a DAG in topological order of its vertices, we can achieve Θ(n+m) time complexity, where n = |V| and m = |E|.
DAG-SHORTEST (G, w, s)
 1  topologically sort the vertices of G
 2  INITIALIZE (G, s)
 3  for each vertex u, taken in topologically sorted (increasing) order
 4      do for each v ∈ Adj[u]
 5             do RELAX (u, v, w)
Figure 5 shows an example execution of DAG-SHORTEST on a DAG.
Figure 5: Execution of DAG-SHORTEST. Step (a) shows the state of the graph after initialisation. Following this, the vertices are considered in topological order: c, s, i, and then t. As each vertex u is considered, the vertices in Adj[u] are relaxed.

8  Difference constraints and shortest paths

Linear Programming: Given an m×n matrix A, an m-vector b, and an n-vector c, we wish to find a vector x of n elements that maximises the objective function ∑_{i=1}^{n} c_i x_i subject to the m constraints given by Ax ≤ b.
Rather than applying a general linear-programming algorithm, the special case described next can be solved efficiently using single-source shortest paths.

9  Systems of difference constraints

Systems of difference constraints are a special case of linear programming: each row of the matrix A contains one 1 and one −1, and all other entries of A are 0. Thus, each constraint can be expressed as

xj − xi ≤ bk

where 1 ≤ i, j ≤ n and 1 ≤ k ≤ m.
For example, a set of four unknowns {x1, x2, x3, x4} and 6 difference constraints:

x1 − x2 ≤ 6
x1 − x3 ≤ −4
x2 − x3 ≤ −10
x3 − x4 ≤ 14
x4 − x2 ≤ −4
x4 − x1 ≤ −10

... can be represented in the form Ax ≤ b as ...

( 1 −1  0  0 )               (   6 )
( 1  0 −1  0 )   ( x1 )      (  −4 )
( 0  1 −1  0 )   ( x2 )  ≤   ( −10 )
( 0  0  1 −1 )   ( x3 )      (  14 )
( 0 −1  0  1 )   ( x4 )      (  −4 )
(−1  0  0  1 )               ( −10 )

Given a system Ax ≤ b of difference constraints, the corresponding constraint graph is a weighted directed graph G = (V, E) where

V = {v0, v1, v2, …, vn}

... where n is the number of unknowns, v0 is a special source vertex that is created, and vi corresponds to xi for 1 ≤ i ≤ n. The set of edges is

E = {(vi, vj) : xj − xi ≤ bk is a constraint} ∪ {(v0, vi) : i = 1, …, n}

where each edge (vi, vj) has weight bk and each edge (v0, vi) has weight 0.

Theorem: Given a system Ax ≤ b of difference constraints, let G be the corresponding constraint graph. If G contains no negative-weight cycles, then

x = (δ(v0, v1), δ(v0, v2), …, δ(v0, vn))

is a feasible solution for the system. Figure 6 shows how this is done for the example with four unknowns above.
Figure 6: A constraint graph constructed from our ongoing example using simple rules: (1) Each unknown xi maps onto a vertex vi. (2) For each constraint xj − xi ≤ bk, there is an edge (vi, vj) with weight bk. (3) Add a vertex v0 with edges of weight 0 from it to all other vertices. Then use one of the SSSP algorithms to solve for d[vi], 0 ≤ i ≤ n, using v0 as the source.
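The whole construction fits in a few lines. This sketch encodes each constraint xj − xi ≤ bk as a (j, i, b) triple (an illustrative encoding), builds the constraint graph of Figure 6, and runs Bellman-Ford-style relaxation from v0; since the zero-weight edges from v0 may combine with negative constraint weights, Dijkstra's algorithm would not be applicable here:

```python
import math

def solve_difference_constraints(n, constraints):
    """constraints is a list of (j, i, b) triples meaning x_j - x_i <= b,
    with 1-based variable indices. Returns a feasible [x_1, ..., x_n],
    or None if the constraint graph has a negative cycle (infeasible)."""
    # Each constraint x_j - x_i <= b becomes an edge (v_i, v_j) of weight b;
    # v0 gets a zero-weight edge to every other vertex.
    edges = [(i, j, b) for (j, i, b) in constraints]
    edges += [(0, i, 0) for i in range(1, n + 1)]

    d = [math.inf] * (n + 1)
    d[0] = 0
    for _ in range(n):                 # |V| - 1 = n relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:              # still relaxable => negative cycle
        if d[u] + w < d[v]:
            return None
    return d[1:]                       # x_i = delta(v0, v_i)

# the six constraints of the running example
example = [(1, 2, 6), (1, 3, -4), (2, 3, -10),
           (3, 4, 14), (4, 2, -4), (4, 1, -10)]
x = solve_difference_constraints(4, example)
```

Any constant can be added to all components of the returned x and the differences, and hence the constraints, remain satisfied.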
