Minh Le

743. Network Delay Time - Breadth-First Search Approach

function networkDelayTime(times: number[][], n: number, k: number): number {

    // Build the adjacency list: each key is a node, and the value is an array
    // of [targetNode, weight] pairs for its outgoing edges.
    const adjList: Record<string, [number, number][]> = {};

    for (let i = 0; i < times.length; i++) {
        const [source, target, weight] = times[i];
        if (adjList[String(source)] === undefined) {
            adjList[String(source)] = [[target, weight]];
        } else {
            adjList[String(source)].push([target, weight]);
        }
    }

    // minTimes[node] holds the shortest known time from k to node
    // (index 0 is unused because nodes are numbered from 1).
    const minTimes = Array(n + 1).fill(Infinity);
    minTimes[k] = 0;

    // Nodes whose outgoing edges will be examined in the next round.
    let nextNodes = [k];

    while (nextNodes.length !== 0) {
        const temp: number[] = [];

        for (let iN = 0; iN < nextNodes.length; iN++) {
            const curNode = nextNodes[iN];
            const curEdges = adjList[String(curNode)];

            // Skip nodes with no outgoing edges.
            if (!curEdges) continue;

            for (let iE = 0; iE < curEdges.length; iE++) {
                const [target, weight] = curEdges[iE];

                // Relax the edge: if going through curNode is faster, record
                // the new distance and queue the target node for the next round.
                if (minTimes[target] > minTimes[curNode] + weight) {
                    minTimes[target] = minTimes[curNode] + weight;
                    temp.push(target);
                }
            }
        }

        nextNodes = temp;
    }

    // The answer is the largest of the shortest times. If any node is still
    // unreachable (Infinity), return -1.
    let max = 0;

    for (let i = 1; i < minTimes.length; i++) {
        if (minTimes[i] === Infinity)
            return -1;
        else if (max < minTimes[i])
            max = minTimes[i];
    }

    return max;
};
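
As a quick sanity check, here is a call with the small graph from the first example in the problem statement; the expected result is 2:

console.log(networkDelayTime([[2, 1, 1], [2, 3, 1], [3, 4, 1]], 4, 2)); // 2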

This is the breadth-first search approach to solving this problem. It is much faster than the depth-first search approach I discussed earlier (it beats 83.18% of TypeScript submissions). Well, who am I kidding? That percentage is kind of meaningless since it can vary (a lot) from one submission of the same code to another. But if the DFS approach only beats somewhere around 9.7% of users, the difference must mean something, right?

So, in this approach, I use an array of nodes called nextNodes to keep track of which nodes need to be processed in the next round. At first, it contains only the source node k, because the first round has to start from the source. Then we enter the while loop and keep going round after round until there are no more nodes to process (nextNodes.length === 0).
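
To make the rounds concrete, here is how nextNodes evolves for the sample call above; this trace is just an illustration of what the loop does:

// round 1: nextNodes = [2]    -> relax 2->1 and 2->3, temp = [1, 3]
// round 2: nextNodes = [1, 3] -> node 1 has no outgoing edges; relax 3->4, temp = [4]
// round 3: nextNodes = [4]    -> node 4 has no outgoing edges; temp = [], so the loop ends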

At each round of the while loop, we go through the nodes in nextNodes one by one. For each of them, we look up its edges in the adjacency list (which we build first: adjList maps each node to an array of [target, weight] pairs, where weight is the cost of the edge from that node to target). For each edge, we check whether going through the current node gives a shorter path from the source to the edge's target than the best distance recorded so far. If it does, we update minTimes for that target and push the target into temp so it gets processed in the next round.
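
For the sample call above, the adjacency list comes out like this (nodes 1 and 4 have no outgoing edges, so they never get a key):

const adjList = {
    "2": [[1, 1], [3, 1]],
    "3": [[4, 1]]
};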

Finally, we iterate through minTimes to find the maximum value and return it. If any element is still Infinity, it was never updated, which means that node can't be reached from the source. In that case, we return -1.
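
With the sample call above, minTimes ends up as [Infinity, 1, 0, 1, 2] (index 0 is unused), so the scan over indices 1 through 4 returns 2.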
