This article provides a detailed explanation of the technical concepts and principles behind GBase 8s clusters. GBase 8s offers three instance-level cluster modes: the Shared Storage Cluster (SSC), the High Availability Cluster (HAC) with one primary and one standby node for local disaster recovery, and the Remote High Availability Cluster (RHAC) with one primary and multiple standby nodes for remote disaster recovery.
SSC Cluster: Shared Storage
The GBase 8s Shared Storage High Availability Cluster (SSC) uses shared disks to achieve node availability. Data is stored only once, making efficient use of hardware resources and avoiding duplicate storage. The cluster operates in a peer-to-peer management mode: queries are served from each node's local cache without network overhead, giving excellent linear scalability. The cluster can scale up to 128 nodes, all of which can perform read and write operations. When the primary node fails, an auxiliary node can be promoted to primary, keeping the system available. An SSC cluster is simple to operate and cost-effective, requiring no additional hardware.
Each node in an SSC cluster maintains its own independent cache, so the cached data on different nodes is not necessarily the same. When the primary node updates data, it notifies the auxiliary nodes by transmitting the affected page numbers; the auxiliary nodes then refresh those pages in their caches, so no stale data pages are served. Because all cached data is local, reads from the cache incur no network overhead.
SSC shared storage supports raw devices and shared file systems such as NFS and GFS. The SSC cluster can also be combined with HAC and RHAC clusters to build a two-city, three-center deployment. SSC suits large-scale business systems, providing a high degree of data consistency across nodes because all nodes operate on a single copy of the data. It is also well suited to transaction systems that require high availability, high performance, and load balancing.
SSC Cluster Replication Mechanism
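The page-invalidation flow described above can be sketched conceptually: the primary writes the single shared copy of a page and broadcasts only the page number, and each auxiliary node drops its cached copy so the next read reloads the current version from shared storage. This is a minimal illustration, not GBase 8s internals; the class and method names (`SharedStorage`, `Node`, `invalidate`, and so on) are hypothetical.

```python
# Conceptual sketch of SSC cache coherence: one shared copy of the data,
# per-node independent caches, and page-number invalidation messages.
# All names here are illustrative, not GBase 8s APIs.

class SharedStorage:
    """Single copy of every data page, visible to all nodes."""
    def __init__(self):
        self.pages = {}

    def write(self, page_no, data):
        self.pages[page_no] = data

    def read(self, page_no):
        return self.pages.get(page_no)

class Node:
    """A cluster node with its own local buffer cache."""
    def __init__(self, storage):
        self.storage = storage
        self.cache = {}

    def read(self, page_no):
        # Local cache hit: no network or storage access needed.
        if page_no not in self.cache:
            self.cache[page_no] = self.storage.read(page_no)
        return self.cache[page_no]

    def invalidate(self, page_no):
        # Drop the stale copy; the next read reloads it from shared storage.
        self.cache.pop(page_no, None)

class Primary(Node):
    def __init__(self, storage, auxiliaries):
        super().__init__(storage)
        self.auxiliaries = auxiliaries

    def update(self, page_no, data):
        self.cache[page_no] = data
        self.storage.write(page_no, data)   # single shared copy on disk
        for aux in self.auxiliaries:        # notify by page number only
            aux.invalidate(page_no)

storage = SharedStorage()
aux = Node(storage)
primary = Primary(storage, [aux])

primary.update(101, "v1")
print(aux.read(101))        # "v1", loaded from shared storage
primary.update(101, "v2")   # aux's cached copy is invalidated
print(aux.read(101))        # "v2", reloaded; no stale page is served
```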
HAC Cluster: Local Disaster Recovery
The HAC cluster is a high-availability solution with one primary and one standby node that uses log synchronization to keep the nodes in sync. The primary node handles read and write operations, while the standby node can serve read-only queries. HAC features simple installation, a transparent application experience, automatic failover, and no extra costs. To improve performance, log records are synchronized through memory buffers before being replayed on the standby node. Three log synchronization modes are available: fully synchronous, semi-synchronous, and asynchronous.
HAC Cluster Replication Mechanism
The contents of the primary server's logical log buffer are copied to the data replication buffer in shared memory and then flushed to disk. If the primary server is running in fully synchronous or semi-synchronous mode, it must receive confirmation from the HAC standby server before the log flush can complete. The primary server runs a log sending thread that transfers the data replication buffer to the standby server's log receiving thread, which writes the data into the receive buffer in the standby's shared memory. The redo log thread copies the receive buffer into the recovery buffer, and the standby server's log recovery thread replays the logs. The two nodes monitor the connection between the servers by exchanging heartbeat messages.
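A highly simplified model of that pipeline is sketched below, with each buffer represented as a queue and each thread as a function. The real engine uses shared-memory buffers and dedicated threads; the names used here (`primary_flush`, `log_send`, `log_recv`, and so on) are illustrative, not actual GBase 8s thread names.

```python
# Simplified model of the HAC log-shipping pipeline described above.
# Buffers are plain queues; this is an illustration only.
from queue import Queue

logical_log_buffer = Queue()       # primary: filled by transactions, flushed to disk
data_replication_buffer = Queue()  # primary: staged for shipping to the standby
receive_buffer = Queue()           # standby: filled by the log receiving thread
recovery_buffer = Queue()          # standby: consumed by log replay

def primary_flush(log_record):
    # Copy the logical log record into the replication buffer
    # as part of flushing the primary's own logical log.
    logical_log_buffer.put(log_record)
    data_replication_buffer.put(log_record)

def log_send():
    # Primary-side sender: ships replication buffers to the standby.
    while not data_replication_buffer.empty():
        record = data_replication_buffer.get()
        log_recv(record)           # stands in for the network transfer

def log_recv(record):
    # Standby-side receiver: lands records in the receive buffer.
    receive_buffer.put(record)

def redo_copy_and_replay(apply):
    # Standby: copy receive buffer -> recovery buffer, then replay.
    while not receive_buffer.empty():
        recovery_buffer.put(receive_buffer.get())
    while not recovery_buffer.empty():
        apply(recovery_buffer.get())

primary_flush({"txn": 1, "op": "INSERT"})
log_send()
redo_copy_and_replay(lambda rec: print("replayed on standby:", rec))
```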
The primary database server can use three modes to replicate data to the HAC standby server (a sketch of the commit behavior in each mode follows the list):
- Fully Synchronous Mode
In this mode, a transaction cannot complete until the HAC standby server confirms that it has completed the transaction as well. This mode offers the highest data integrity but may reduce system performance.
- Semi-Synchronous Mode
Transactions require acknowledgment of receipt from the HAC standby server before completing. This mode provides better performance than fully synchronous mode while still ensuring higher data integrity compared to asynchronous mode.
- Asynchronous Mode
Transactions can complete without waiting for acknowledgment of receipt or completion from the HAC standby server. This mode offers the best system performance but carries a risk of data loss in the event of a server failure.
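The practical difference between the three modes is what the primary waits for before a transaction is allowed to complete. The sketch below illustrates that difference only; the function and callback names are made up for the example and are not GBase 8s interfaces.

```python
# Illustrative comparison of the three replication modes: what the primary
# waits for before a transaction may complete. Not product code.

def commit(txn, mode, send_to_standby, standby_received, standby_completed):
    """Returns once the transaction may complete on the primary."""
    send_to_standby(txn)

    if mode == "fully_synchronous":
        # Wait until the standby confirms it has *completed* the transaction:
        # highest data integrity, highest commit latency.
        standby_completed(txn)
    elif mode == "semi_synchronous":
        # Wait only until the standby confirms it has *received* the log:
        # better performance than fully synchronous, stronger integrity
        # than asynchronous.
        standby_received(txn)
    elif mode == "asynchronous":
        # Do not wait at all: best performance, but a primary failure can
        # lose transactions that were committed locally but not yet shipped.
        pass
    else:
        raise ValueError(f"unknown mode: {mode}")

commit(
    txn={"id": 1},
    mode="semi_synchronous",
    send_to_standby=lambda t: print("shipped", t),
    standby_received=lambda t: print("standby received", t),
    standby_completed=lambda t: print("standby completed", t),
)
```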
RHAC Cluster: Remote Disaster Recovery
The Remote High Availability Cluster (RHAC) is architecturally similar to the local two-center HAC cluster. The main difference lies in the synchronization between the send buffer and the receive buffer, which uses Server Multiplexor Group (SMX) technology. SMX is a communication interface that supports multiplexed network connections between servers in a high-availability environment, providing reliable, high-performance communication between database server instances.
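The general idea behind a multiplexed connection can be sketched as follows: several logical streams share one physical connection, with each message tagged by the stream it belongs to, so a single network pipe can carry log shipping, acknowledgments, and other traffic concurrently. This is a generic illustration of multiplexing, not the actual SMX wire protocol, and the stream names are invented for the example.

```python
# Generic sketch of connection multiplexing (the idea behind SMX):
# several logical streams share one physical connection, each frame
# tagged with the stream it belongs to. Not the real SMX protocol.
from collections import defaultdict

class MultiplexedConnection:
    def __init__(self):
        self.wire = []                             # stands in for one TCP connection

    def send(self, stream_id, payload):
        self.wire.append((stream_id, payload))     # tag each frame with its stream

    def receive_all(self):
        streams = defaultdict(list)
        for stream_id, payload in self.wire:
            streams[stream_id].append(payload)     # demultiplex on arrival
        return dict(streams)

conn = MultiplexedConnection()
conn.send("log_shipping", b"log page 1")
conn.send("acknowledgments", b"ack up to page 1")
conn.send("log_shipping", b"log page 2")
print(conn.receive_all())
```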
RHAC Cluster Replication Mechanism
Once the primary node has verified its connection to the standby node, the log sending thread copies pages from disk or from the logical log buffer into the data replication buffer. The log sending thread then uses SMX to send the data replication buffer to the standby node's log receiving thread. The log receiving thread writes the data into the receive buffer, and the redo log thread copies the receive buffer into the recovery buffer. Unlike the fully synchronous and semi-synchronous modes in HAC, the RHAC primary node does not need an acknowledgment from the standby node before sending the next buffer: it can send up to 32 unacknowledged data replication buffers before it must wait for confirmation from the standby node.
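This flow control behaves like a sliding window: the primary keeps sending ahead of acknowledgments but stalls once 32 buffers are outstanding. The sketch below illustrates that idea under those assumptions; the class, parameter, and callback names are hypothetical, and reconnection or resend handling is omitted.

```python
# Sketch of RHAC-style flow control: the primary ships replication buffers
# without waiting for individual acknowledgments, but stalls once 32 are
# outstanding. Names and structure are illustrative only.
MAX_UNACKED = 32

class ReplicationSender:
    """Ships replication buffers ahead of acknowledgments, up to a limit."""
    def __init__(self, transmit, wait_for_ack):
        self.transmit = transmit          # ships one buffer to the standby
        self.wait_for_ack = wait_for_ack  # blocks until one ack arrives
        self.unacked = 0

    def send(self, buffer):
        # Stall only when the window of unacknowledged buffers is full.
        while self.unacked >= MAX_UNACKED:
            self.wait_for_ack()
            self.unacked -= 1
        self.transmit(buffer)
        self.unacked += 1

sender = ReplicationSender(
    transmit=lambda buf: None,       # in reality: ship the buffer over SMX
    wait_for_ack=lambda: None,       # in reality: block until an ack arrives
)
for i in range(40):                  # from the 33rd send on, each send waits for one ack
    sender.send(f"replication-buffer-{i}")
```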
Because remote replication must cope with lower bandwidth, higher latency, and more variable network quality than local replication, the challenge for RHAC is transmitting logs reliably while preserving transaction integrity. RHAC is therefore designed around an asynchronous, high-reliability transmission mechanism.
RHAC clusters support up to 256 standby nodes, are easy to install and deploy, and incur no additional costs. Users can freely add or remove RHAC nodes as needed, and RHAC and HAC node types can be converted into each other. RHAC clusters have lenient bandwidth and latency requirements, making them suitable for business systems whose nodes are spread over large distances or connected by lower-bandwidth links. Combining SSC, HAC, and RHAC clusters yields a two-city, three-center solution.
The GBase 8s database cluster technology provides flexible and reliable database solutions for financial businesses. With a two-city, three-center deployment, GBase 8s not only meets the high security and availability demands of financial businesses but also offers excellent performance scalability. In this data-driven era, GBase 8s database cluster technology will be a solid foundation for financial operations.