Dijkstra’s algorithm is a method to find the shortest paths between nodes in a graph. In particular, Dijkstra’s algorithm solves the single-source shortest path problem: find the shortest path from a single node to all other nodes in the graph. We assume in the discussion below that the graph
G on which we operate is connected and undirected, and that each edge has a nonnegative weight. We can think of the graph as a collection of major cities (nodes) connected by highways of a certain distance (edges).
The intuition behind Dijkstra’s algorithm is simple. If we wanted to find the shortest path between two cities A and B, we could just try all possible paths between A and B. Say that once we arrive at a node, we can clone ourselves and send a clone along each outgoing edge. When a clone reaches a node, it checks whether another clone has already been there. If so, the clone stops: it was not the fastest to reach the node and therefore took a longer path. If no other clone has been to the node, this clone’s path to the node was the fastest (and, therefore, the shortest). This is the idea behind Dijkstra’s algorithm, except that we use edge distances instead of time to keep track of the shortest path.
The input to Dijkstra’s algorithm is a graph
G and a source node (or root)
v. The algorithm works by assigning a cost value to each node and then updating it until it becomes optimal. Initially, the cost of the starting node is 0, and all others are infinity. The algorithm is then:
1. Select the node n from the unvisited nodes which has the least weight.
2. For each node x adjacent to n, check whether path_weight[x] > path_weight[n] + edge_distance[(n, x)]. This is determining: can we get a path from the root to x with smaller distance by going through n instead of the current best path? If so, we update the weight of x to the new distance.
3. Remove n from the set of unvisited nodes.
We repeat these steps until every node has been visited.
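The steps above can be sketched in Python. This is a minimal sketch, not a definitive implementation: the graph is assumed to be an adjacency list mapping each node to a list of (neighbor, weight) pairs, and the function and variable names (`dijkstra`, `path_weight`) are our own.

```python
import heapq

def dijkstra(graph, source):
    """Return the shortest path distance from source to every node.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    # Initially the cost of the source is 0 and all others are infinity.
    path_weight = {node: float("inf") for node in graph}
    path_weight[source] = 0
    visited = set()
    # Priority queue of (distance, node) pairs; it may hold stale
    # entries for nodes whose weight was later improved.
    pq = [(0, source)]
    while pq:
        # Step 1: pull out the unvisited node with the least weight.
        dist, n = heapq.heappop(pq)
        if n in visited:
            continue  # stale entry: n was already finalized
        # Step 3: mark n as visited (removed from the unvisited set).
        visited.add(n)
        # Step 2: try to improve each neighbor's weight through n.
        for x, w in graph[n]:
            new_dist = path_weight[n] + w
            if new_dist < path_weight[x]:
                path_weight[x] = new_dist
                heapq.heappush(pq, (new_dist, x))
    return path_weight
```

Rather than decreasing a key in place, this sketch pushes a fresh entry and skips stale ones on extraction, which is the common idiom with Python's `heapq`.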
Starting at the current location, we search for unexplored points, which become candidates for our next location. We choose the next location by finding the candidate node with the least-cost path from the root, and in the process we gradually expand the set of visited nodes for which we’ve found the shortest path.
The result of Dijkstra’s algorithm starting at node
v is a shortest path tree rooted at
v, such that the path from root
v to any other node in the tree is the shortest path distance. The shortest path tree is a spanning tree, meaning that it is a subgraph which includes all the vertices of
To pull out the least weight node in step 1 of the algorithm, we use a data structure called a Priority Queue, which allows us to retrieve an object with the minimum associated key. It acts like a queue except that it supports the ability to efficiently extract the highest priority item instead of simply the oldest. Priority queues are often implemented with a heap, so check out that article for more info!
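In Python, for example, the standard library’s `heapq` module provides a binary min-heap that can serve as this priority queue:

```python
import heapq

pq = []                       # an empty list is an empty heap
heapq.heappush(pq, (5, "c"))  # entries are (priority, item) pairs
heapq.heappush(pq, (1, "a"))
heapq.heappush(pq, (3, "b"))
# heappop always removes the entry with the smallest key,
# regardless of insertion order.
print(heapq.heappop(pq))  # (1, 'a')
print(heapq.heappop(pq))  # (3, 'b')
```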
The worst-case runtime of Dijkstra’s is
O(|E| + |V| log|V|), where
E and V are the edge and node sets, respectively. This is because we check each edge of the graph in the traversal, requiring
O(|E|) time. We must also pull out each node from the priority queue, where each extract operation takes
O(log|V|) time. So the total time is
O(|E| + |V| log|V|). (Strictly, this bound requires a priority queue with constant-time decrease-key, such as a Fibonacci heap; with an ordinary binary heap the runtime is O((|E| + |V|) log|V|).)
Conceptually, we can think of breadth-first search as Dijkstra’s algorithm where each edge has weight 1. In BFS, since we consider all edges to have the same weight, we can simply use a normal queue instead of a priority queue, since the nodes will obey the first-in-first-out (FIFO) ordering.
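A sketch of that special case, with a plain FIFO queue standing in for the priority queue (the function name `bfs_distances` is our own; the unweighted graph is assumed to be an adjacency list mapping each node to its neighbors):

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest path distances in an unweighted graph: effectively
    Dijkstra's algorithm with every edge weight equal to 1, so a
    FIFO queue suffices in place of a priority queue."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        n = queue.popleft()
        for x in graph[n]:
            if x not in dist:          # first visit is the shortest path
                dist[x] = dist[n] + 1
                queue.append(x)
    return dist
```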