Backup Background
PolarDB-X is a distributed database with an integrated centralized-distributed architecture. As the business grows, its Data Nodes can be expanded step by step from a centralized deployment to a distributed one, scaling out linearly and splitting data horizontally online without interruption.
Database backup and restoration play an important role in ensuring data security. Based on data protection and disaster recovery requirements, PolarDB-X provides various capabilities to support point-in-time recovery (PITR). For more technical details, please refer to Safeguarding Your Data with PolarDB-X: Backup and Restoration.
As shown in the preceding figure, PolarDB-X performs PITR by combining a full backup with incremental binlogs, which together meet the requirement of recovering to any point in time.
In addition, since the data node (DN) of PolarDB-X uses a multi-replica architecture, daily host operation and maintenance triggers dynamic migration and rebuilding of replicas. This relies on replica rebuilding technology: a full backup is first taken from the primary, and the multi-replica Consensus Log is then used to incrementally synchronize the replica data.
This article focuses on the design of full backup for the data node, the component responsible for online data storage. To meet data protection and disaster recovery requirements, the DN needs to support flexible, customizable backup policies. At the same time, PolarDB-X processes and stores huge data volumes (data sets can reach 20 TB), and completing a backup of that size within a short window (for example, 3 to 4 hours) requires the backup SLA to be optimized both for its impact on online business and for its parallelism strategy.
Backup Challenges of Traditional MySQL
Open-source MySQL stores data with the InnoDB storage engine, which follows the traditional WAL architecture and records data changes in physical redo logs. The server layer uses the binlog as a logical log to record all changes to a database instance, including transactional data modifications, non-transactional data modifications, and schema modifications. The binlog plays three important roles:
1. Two-phase Commit Coordination Logs
In MySQL's multi-storage-engine architecture, the binlog acts as the coordinator of two-phase commit (2PC) and participates in both the commit and recovery processes (a minimal sketch of this coordination follows the list).
2. Data Change Logs
The binlog records all changes to the database instance. By replaying this change log, a logical image of the data can be rebuilt to construct various kinds of downstream clusters.
3. Executed Set Logs
The binlog records the unique identifier of each change, the Global Transaction Identifier (GTID), which is used to construct the executed set.
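The following is a minimal sketch, not MySQL source code, of how the binlog coordinates 2PC; every function here is a hypothetical stand-in for the real engine and binlog calls:

```cpp
// Illustrative sketch: the binlog as the 2PC coordinator. Writing the
// transaction to the binlog is the effective commit point; on crash recovery,
// a prepared engine transaction is committed only if it was durably recorded
// in the binlog, otherwise it is rolled back.
struct Txn { unsigned long long xid; };

bool engine_prepare(const Txn &)        { return true; }  // phase 1: persist txn as "prepared"
bool binlog_write_and_sync(const Txn &) { return true; }  // coordinator log write: the commit point
void engine_commit(const Txn &)         {}                // phase 2: commit inside the storage engine
void engine_rollback(const Txn &)       {}

bool commit_transaction(const Txn &txn) {
  if (!engine_prepare(txn)) return false;
  if (!binlog_write_and_sync(txn)) { engine_rollback(txn); return false; }
  engine_commit(txn);
  return true;
}

// Crash recovery: the binlog decides the fate of each prepared transaction.
void recover_prepared(const Txn &txn, bool found_in_binlog) {
  found_in_binlog ? engine_commit(txn) : engine_rollback(txn);
}
```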
Ensuring backup consistency between the binlog and the InnoDB storage engine is therefore a challenge. Keeping the binlog and the storage engine participants consistent requires a logical point in time, and obtaining such a point generally requires a global lock, which seriously affects the running of the online instance and, in turn, user business.
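As an illustration of the traditional approach (assuming libmysqlclient; this is not PolarDB-X code), a physical backup tool typically freezes writes with a global lock while it records the binlog position as the consistency point:

```cpp
// The global lock guarantees that the recorded binlog position and the engine
// data describe the same logical point in time, at the cost of blocking writes.
#include <mysql/mysql.h>
#include <cstdio>

int main() {
  MYSQL *conn = mysql_init(nullptr);
  if (!mysql_real_connect(conn, "127.0.0.1", "backup_user", "backup_pwd",
                          nullptr, 3306, nullptr, 0)) {
    return 1;
  }

  // Global lock: blocks all writes on the instance until UNLOCK TABLES.
  mysql_query(conn, "FLUSH TABLES WITH READ LOCK");

  // Record the binlog file/position (or GTID set) as the consistency point.
  mysql_query(conn, "SHOW MASTER STATUS");
  MYSQL_RES *res = mysql_store_result(conn);
  if (res) {
    MYSQL_ROW row = mysql_fetch_row(res);
    if (row) std::printf("binlog checkpoint: %s:%s\n", row[0], row[1]);
    mysql_free_result(res);
  }

  // ... finish copying data files / redo logs here; writes stay blocked ...

  mysql_query(conn, "UNLOCK TABLES");  // only now can user business resume
  mysql_close(conn);
  return 0;
}
```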
Lizard Lock-free Full Backup Design
The PolarDB-X storage engine developed the Lizard distributed transaction system to replace the traditional InnoDB standalone transaction system. To overcome the disadvantages of the InnoDB transaction system, Lizard introduces an SCN-based standalone transaction system and a GCN-based distributed transaction system, effectively supporting distributed database capabilities.
At the same time, to overcome the limitations of traditional MySQL physical backup, PolarDB-X introduces the newly designed Lizard lock-free backup based on the Lizard transaction engine, making the backup process transparent to user business and truly imperceptible.
The core of Lizard lock-free backup is to drop the binlog from the physical backup and retain only the backup of the storage engine participants. The three key responsibilities of the binlog are taken over as follows:
1. Two-phase Commit Coordination Logs
All storage engine participants still in the prepared state of the first phase are handled by rolling them back.
2. Change Logs
When a DN uses a backup set to restore a new instance, the instance is forced to join the cluster as a follower or learner and receives binlogs sent by the cluster leader through the X-Paxos protocol.
3. Executed Set
PolarDB-X DN introduces a new component, the Lizard Transaction Slot (LTS), as a mandatory participant engine in all MySQL changes, including transactional, non-transactional, and structural changes. The GTID executed set is maintained by LTS to achieve idempotent control.
Together, these methods replace the responsibilities of the binlog and enable the Lizard lock-free full backup capability.
Lizard Lock-free Full Backup Architecture
Lizard lock-free full backup introduces an important new component, the Transaction Slot, which serves as a persistent structure for recording database changes. Its architecture diagram is as follows:
Transaction Slot Component
As a persistent structure, Transaction Slot records the transactional changes of the database, including:
1. Transaction State
The state field records the state of a transaction, including active, prepare, commit, and rollback. During recovery, all transactions still in the active or prepared state are rolled back based on this field.
2. Transaction Commit Version
The SCN field records the internal version number of the transaction commit or rollback, that is, the System Commit Number.
3. Global Transaction Identifier
The GTID field records the unique identifier of the transaction's commit or rollback, that is, the Global Transaction Identifier. To ensure record integrity, all types of changes to the database are covered:
Data Manipulation Language (DML)
Data Definition Language (DDL)
Data Control Language (DCL)
All of these changes are forcibly allocated Transaction Slots so that every GTID is recorded, thereby ensuring idempotent binlog application after the instance is restored (a minimal sketch of such a slot record follows).
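As a rough illustration (a simplified sketch, not the actual PolarDB-X source; field names and types are assumptions), a Transaction Slot record and the recovery-time rollback of unfinished transactions might look like this:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Assumed transaction states recorded in a slot.
enum class TxnState : uint8_t { ACTIVE, PREPARED, COMMITTED, ROLLED_BACK };

// One persistent Transaction Slot record (simplified).
struct TransactionSlot {
  TxnState state;    // consulted at recovery: ACTIVE/PREPARED slots are rolled back
  uint64_t scn;      // System Commit Number: internal commit/rollback version
  std::string gtid;  // Global Transaction Identifier of the DML/DDL/DCL change
};

// Recovery over a backup set without binlogs: any transaction that did not
// reach the committed state is simply rolled back; its changes will be
// re-delivered later through X-Paxos binlog replication.
void recover_backup_set(std::vector<TransactionSlot> &slots) {
  for (auto &slot : slots) {
    if (slot.state == TxnState::ACTIVE || slot.state == TxnState::PREPARED) {
      // rollback_in_engine(slot);  // hypothetical engine rollback routine
      slot.state = TxnState::ROLLED_BACK;
    }
  }
}
```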
Lizard Consistency Recovery
With the Transaction Slot component, PolarDB-X DN no longer needs a global lock to obtain a checkpoint, achieving lock-free full backup. During recovery from the full backup set, however, three core checkpoints, captured loosely during the backup, have to be processed: the Recovery checkpoint, the Consensus checkpoint, and the Apply checkpoint. Together they guarantee eventual consistency.
1. Recovery Checkpoint
The physical Recovery checkpoint takes the position of the last redo log record copied during the physical backup as the final recovery location. All redo logs after the checkpoint location are copied during the backup and reapplied during recovery, and all transactions still in the active or prepared state are then rolled back.
2. Consensus Checkpoint
Since the DN uses the X-Paxos consensus protocol, whose core is achieving a majority commit during the commit process, a majority binlog checkpoint is obtained as the Consensus checkpoint before the backup ends. During instance recovery, the instance rejoins the cluster as a Learner and receives binlogs starting from this checkpoint.
3. Apply Checkpoint
Without the protection of a global lock, an Apply checkpoint that is strictly consistent between the binlog and the storage engine cannot be obtained. Therefore, an approximate Apply checkpoint is recorded before the backup ends. After the instance is restored, the GTID executed set constructed from the Transaction Slots is used for idempotent control during binlog application, allowing the instance to catch up with the cluster checkpoint; a minimal sketch of this catch-up step is shown below.
The acquisition and application logic of the above checkpoints ensures that after the backup set rejoins the original cluster, the protocol layer, replication layer, and data layer are reconciled, finally achieving strict consistency among all parts of the system.
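As a minimal sketch of the idempotent catch-up step (the types and the apply routine are hypothetical, not the PolarDB-X implementation):

```cpp
// Transactions whose GTID is already in the executed set rebuilt from the
// Transaction Slots are skipped, so nothing is applied twice while the
// restored instance catches up with the cluster.
#include <set>
#include <string>
#include <vector>

struct BinlogTxn {
  std::string gtid;  // GTID carried by the binlog transaction
  // ... row/statement events omitted ...
};

void catch_up(const std::vector<BinlogTxn> &stream,
              std::set<std::string> &executed /* rebuilt from Transaction Slots */) {
  for (const auto &txn : stream) {
    if (executed.count(txn.gtid)) continue;  // already applied before the backup ended
    // apply_transaction(txn);               // hypothetical apply routine
    executed.insert(txn.gtid);
  }
}
```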
Originally published at https://www.alibabacloud.com.