
YugabyteDB: how does a master deal with HA

The Masters in a YugabyteDB cluster are a crucial component and contain crucial (meta)data. Therefore they form a RAFT group, which protects that data. The RAFT group elects a LEADER: the master LEADER performs all master duties, while the master FOLLOWERS simply replicate the data the LEADER persists.

Master servers do not move around, but the role does

There is a big difference between the RAFT groups of the tablets and the RAFT group of the Masters. The replicas in a tablet RAFT group can be moved between the tablet servers in the cluster, which is a crucial component of the load balancing mechanism. The Masters can only change role, which means that the LEADER role can move to another Master, but a master server cannot automatically move to another node: it has to be manually taken down, moved to another node, and reconfigured in the cluster, for example as sketched below.
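
Such a move is not demonstrated in this post, but a minimal sketch with yb-admin would look as follows. The hostnames, the default master RPC port 7100, and the new node yb-4.local are assumptions for illustration; the new yb-master process must already be running on yb-4.local, and the --master_addresses settings of the other servers have to be updated as well:

➜ yb-admin -master_addresses yb-1.local:7100,yb-2.local:7100,yb-3.local:7100 change_master_config ADD_SERVER yb-4.local 7100
➜ yb-admin -master_addresses yb-1.local:7100,yb-2.local:7100,yb-4.local:7100 change_master_config REMOVE_SERVER yb-3.local 7100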

Master role switch

The role of each master can be seen in the master web UI at port 7000. This information can also be shown via yb_stats:

➜ yb_stats --print-masters
42c97ba1e7784e81a85bb3e2c83ebea5 FOLLOWER Placement: local.local.local
                                 Seqno: 1676380739263049 Start time: 1676380739263049
                                 RPC addresses: ( yb-1.local:7100 )
                                 HTTP addresses: ( yb-1.local:7000 )
225d3fc307da4e6e9d33a2d2b42a240f LEADER Placement: local.local.local
                                 Seqno: 1676377578485290 Start time: 1676377578485290
                                 RPC addresses: ( yb-2.local:7100 )
                                 HTTP addresses: ( yb-2.local:7000 )
98d7c66bd81c4d068552e78908861e34 FOLLOWER Placement: local.local.local
                                 Seqno: 1676380706013680 Start time: 1676380706013680
                                 RPC addresses: ( yb-3.local:7100 )
                                 HTTP addresses: ( yb-3.local:7000 )

This is an RF3 cluster, so there are 3 master servers, of which one is LEADER (on server yb-2.local).
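
The same information can also be requested with the yb-admin utility (output not shown; the master addresses are this cluster's):

➜ yb-admin -master_addresses yb-1.local:7100,yb-2.local:7100,yb-3.local:7100 list_all_masters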

If a single, random master server is stopped, the masters will continue functioning, because a majority of the masters in the RAFT group is still available:

➜ yb_stats --print-masters
42c97ba1e7784e81a85bb3e2c83ebea5 FOLLOWER Placement: local.local.local
                                 Seqno: 1676380739263049 Start time: 1676380739263049
                                 RPC addresses: ( yb-1.local:7100 )
                                 HTTP addresses: ( yb-1.local:7000 )
225d3fc307da4e6e9d33a2d2b42a240f LEADER Placement: local.local.local
                                 Seqno: 1676377578485290 Start time: 1676377578485290
                                 RPC addresses: ( yb-2.local:7100 )
                                 HTTP addresses: ( yb-2.local:7000 )
98d7c66bd81c4d068552e78908861e34 UNKNOWN_ROLE Placement: -.-.-
                                 Seqno: 0 Start time: 0
                                 RPC addresses: ( yb-3.local:7100 )
                                 HTTP addresses: ( )
AppStatusPB {
    code: NETWORK_ERROR,
    message: Some(
        "Unable to get registration information for peer ([yb-3.local:7100]) id (98d7c66bd81c4d068552e78908861e34): recvmsg error: Connection refused",
    ),
    error_codes: None,
    source_file: Some(
        "../../src/yb/util/net/socket.cc",
    ),
    source_line: Some(
        540,
    ),
    errors: Some(
        "\u{1}o\0\0\0\0",
    ),
}

One FOLLOWER and one LEADER, together a majority, and one master with an unknown state (UNKNOWN_ROLE).
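
For reference: "stopping a master" here simply means stopping the yb-master process on that node. How to do that depends on how the cluster was deployed; the systemd unit name below is an assumption, adjust it to your installation:

➜ sudo systemctl stop yb-master    # if the master runs as a systemd service (unit name assumed)
➜ pkill -f yb-master               # or stop the process directly in a test setup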

If another master is stopped, there is no majority anymore. The web UI of the surviving master will throw an error:

Not found (yb/master/master-path-handlers.cc:205): Unable to locate leader master to redirect this request: /
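
The same error can be seen on the command line by fetching the root page of the surviving master's web server (that yb-2.local is the surviving master is an assumption, based on the output further below):

➜ curl -s http://yb-2.local:7000/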

yb_stats will not show the master status when requested in the normal way, because it tries to find the master LEADER, which is not available when the remaining masters form a minority.

However, we can obtain the status of the surviving master if we ask yb_stats not to look for the leader, and simply print everything:

➜ yb_stats --print-masters --details-enable
192.168.66.81:7000 42c97ba1e7784e81a85bb3e2c83ebea5 UNKNOWN_ROLE Placement: -.-.-
192.168.66.81:7000                                  Seqno: 0 Start time: 0
192.168.66.81:7000                                  RPC addresses: ( yb-1.local:7100 )
192.168.66.81:7000                                  HTTP addresses: ( )
192.168.66.81:7000 AppStatusPB {
    code: NETWORK_ERROR,
    message: Some(
        "Unable to get registration information for peer ([yb-1.local:7100]) id (42c97ba1e7784e81a85bb3e2c83ebea5): recvmsg error: Connection refused",
    ),
    error_codes: None,
    source_file: Some(
        "../../src/yb/util/net/socket.cc",
    ),
    source_line: Some(
        540,
    ),
    errors: Some(
        "\u{1}o\0\0\0\0",
    ),
}
192.168.66.81:7000 225d3fc307da4e6e9d33a2d2b42a240f FOLLOWER Placement: local.local.local
192.168.66.81:7000                                  Seqno: 1676377578485290 Start time: 1676377578485290
192.168.66.81:7000                                  RPC addresses: ( yb-2.local:7100 )
192.168.66.81:7000                                  HTTP addresses: ( yb-2.local:7000 )
192.168.66.81:7000 98d7c66bd81c4d068552e78908861e34 UNKNOWN_ROLE Placement: -.-.-
192.168.66.81:7000                                  Seqno: 0 Start time: 0
192.168.66.81:7000                                  RPC addresses: ( yb-3.local:7100 )
192.168.66.81:7000                                  HTTP addresses: ( )
192.168.66.81:7000 AppStatusPB {
    code: NETWORK_ERROR,
    message: Some(
        "Unable to get registration information for peer ([yb-3.local:7100]) id (98d7c66bd81c4d068552e78908861e34): recvmsg error: Connection refused",
    ),
    error_codes: None,
    source_file: Some(
        "../../src/yb/util/net/socket.cc",
    ),
    source_line: Some(
        540,
    ),
    errors: Some(
        "\u{1}o\0\0\0\0",
    ),
}

This shows that two masters have UNKNOWN_ROLE, as seen from 192.168.66.81, and that we indeed have a surviving master node, but that its role has switched to FOLLOWER. This is not surprising: with 3 masters a majority requires 2 votes, and since a majority of the masters is down, either:

  • a FOLLOWER is left, which cannot elect a new LEADER because it is in a minority.
  • a LEADER is left, which will time out on its LEADER term, is then unable to get elected as LEADER again, and becomes a FOLLOWER.

In both cases the remaining minority will consist of FOLLOWERS only.

The tablet servers

The tablet servers are essentially loosely coupled with the Masters. This means that they are started independently of the Masters, and do not become unavailable when a master LEADER doesn't respond. This is not as strange as it may seem at first: when the masters switch LEADER, the tablet servers will keep heartbeating to the former LEADER for some time, until the new master LEADER has broadcast the master change to the tablet servers. If the tablet servers had to react to a master change immediately, such a change would be disruptive. This does not impact data consistency: that is arranged via the tablet RAFT groups, which function independently of the masters.
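
To see what the master LEADER knows about the tablet servers that heartbeat to it, the registered tablet servers and their status can be listed with yb-admin (output not shown; the master addresses are this cluster's):

➜ yb-admin -master_addresses yb-1.local:7100,yb-2.local:7100,yb-3.local:7100 list_all_tablet_servers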
