From 21cc7e15af503c468fa633515f4f23b1e4d30257 Mon Sep 17 00:00:00 2001 From: Patrick Birch <48594400+patrickbirch@users.noreply.github.com> Date: Mon, 29 Dec 2025 04:57:39 -0600 Subject: [PATCH] PXB-3639 [Docs] add --parallel option for incremental prepare 8.0 modified: docs/encrypted-innodb-tablespace-backups.md modified: docs/prepare-incremental-backup.md modified: docs/xbcloud-binary-overview.md modified: docs/xtrabackup-option-reference.md --- docs/accelerate-backup-process.md | 8 +- docs/encrypted-innodb-tablespace-backups.md | 69 +++++++++------ docs/prepare-incremental-backup.md | 83 ++++++++++-------- docs/release-notes/8.0/8.0.35-33.0.md | 2 +- docs/xbcloud-binary-overview.md | 93 ++++++++++++--------- docs/xtrabackup-option-reference.md | 15 +++- 6 files changed, 161 insertions(+), 109 deletions(-) diff --git a/docs/accelerate-backup-process.md b/docs/accelerate-backup-process.md index e4530907c..756fc1f49 100644 --- a/docs/accelerate-backup-process.md +++ b/docs/accelerate-backup-process.md @@ -23,8 +23,8 @@ If the data is stored on a single file, this option will have no effect. To use this feature, simply add the option to a local backup, for example: -```{.bash data-prompt="$"} -$ xtrabackup --backup --parallel=4 --target-dir=/path/to/backup +```shell +xtrabackup --backup --parallel=4 --target-dir=/path/to/backup ``` By using the *xbstream* in streaming backups, you can additionally speed up the @@ -34,8 +34,8 @@ compression. The default value for this option is 1. To use this feature, simply add the option to a local backup, for example: -```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream --compress --compress-threads=4 --target-dir=./ > backup.xbstream +```shell +xtrabackup --backup --stream=xbstream --compress --compress-threads=4 --target-dir=./ > backup.xbstream ``` Before applying logs, compressed files will need to be uncompressed. diff --git a/docs/encrypted-innodb-tablespace-backups.md b/docs/encrypted-innodb-tablespace-backups.md index b2df85991..9d45e7852 100644 --- a/docs/encrypted-innodb-tablespace-backups.md +++ b/docs/encrypted-innodb-tablespace-backups.md @@ -79,8 +79,7 @@ keyring used when the backup was taken and prepared. ## Use `keyring_vault` plugin -Keyring vault plugin settings are -described [here](https://www.percona.com/doc/percona-server/LATEST/security/using-keyring-plugin.html#using-keyring-plugin). +How to use the keyring vault plugin is described [in this document](https://docs.percona.com/percona-server/8.0/using-keyring-plugin.html). ### Create a backup with the `keyring_vault` plugin @@ -112,7 +111,7 @@ $ xtrabackup --prepare --target-dir=/data/backup \ --keyring-vault-config=/etc/vault.cnf ``` -Review [using the keyring vault plugin](https://www.percona.com/doc/percona-server/LATEST/security/using-keyring-plugin.html#using-keyring-plugin) for a description of keyring vault plugin settings. +How to use the keyring vault plugin is described [in this document](https://docs.percona.com/percona-server/8.0/using-keyring-plugin.html). After *xtrabackup* completes the action, the following message confirms the action: @@ -336,18 +335,38 @@ $ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \ --keyring-file-data=/var/lib/mysql-keyring/keyring ``` -The backup should be prepared with the keyring file and type that was used when backup was being taken. 
This means that if the keyring has been rotated, or you have upgraded from a plugin to a component between the base and incremental backup that you must use the keyring that was in use when the first incremental backup has been taken.
+!!! note
+
+    If you have many InnoDB Data (IBD) files, speed up the prepare phase for incremental backups by using the `--parallel` option. This option lets you process multiple delta files simultaneously. When using `--parallel` in the prepare phase, always specify a numeric value. The recommended minimum value is 4 (for example, `--parallel=4`).
+
+    An example of the prepare command using the `--parallel` option:
+
+    ```shell
+    xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
+    --incremental-dir=/data/backups/inc1 \
+    --keyring-file-data=/var/lib/mysql-keyring/keyring --parallel=4
+    ```
+
+Prepare the backup using the same keyring file and type that were used when the backup was created. If the keyring has changed or you upgraded from a plugin to a component between the base and incremental backup, use the keyring from when the first incremental backup was made. If the original keyring is missing or has changed, recover or replace it before restoring. If you cannot recover the keyring, restore from a backup that matches the most recent available keyring.

 Preparing the second incremental backup is a similar process: apply the deltas
-to the (modified) base backup, and you will roll its data forward in
+to the (modified) base backup, and you will roll the base backup's data forward in
 time to the point of the second incremental backup:

-```{.bash data-prompt="$"}
-$ xtrabackup --prepare --target-dir=/data/backups/base \
+```shell
+xtrabackup --prepare --target-dir=/data/backups/base \
 --incremental-dir=/data/backups/inc2 \
 --keyring-file-data=/var/lib/mysql-keyring/keyring
 ```
+
+You can also use the `--parallel` option here to speed up the process:
+
+```shell
+xtrabackup --prepare --target-dir=/data/backups/base \
+--incremental-dir=/data/backups/inc2 \
+--keyring-file-data=/var/lib/mysql-keyring/keyring --parallel=4
+```

 Use `--apply-log-only` when merging all incremental backups except the last one. That’s why the previous line does not contain the `--apply-log-only` option. Even if the `--apply-log-only` was used on the last step, backup would still be consistent but in that case server would perform the rollback phase.

 The backup is now prepared and can be restored with `--copy-back` option.
@@ -391,8 +410,8 @@ The `--transition-key=` option should be used to make it possible fo

 The following example illustrates how the backup can be created in this case:

-```{.bash data-prompt="$"}
-$ xtrabackup --backup --user=root -p --target-dir=/data/backup \
+```shell
+xtrabackup --backup --user=root -p --target-dir=/data/backup \
 --transition-key=MySecretKey
 ```

@@ -404,8 +423,8 @@ xtrabackup scrapes `--transition-key` so that its value is not visible in the `p

 The same passphrase should be specified for the prepare command:

-```{.bash data-prompt="$"}
-$ xtrabackup --prepare --target-dir=/data/backup \
+```shell
+xtrabackup --prepare --target-dir=/data/backup \
 --transition-key=MySecretKey
 ```

@@ -417,16 +436,16 @@ because *xtrabackup* does not talk to the keyring in this case.

 When restoring a backup you will need to generate a new master key.
Here is the example for `keyring_file` plugin or component: -```{.bash data-prompt="$"} -$ xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ +```shell +xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ --transition-key=MySecretKey --generate-new-master-key \ --keyring-file-data=/var/lib/mysql-keyring/keyring ``` In case of `keyring_vault`, it will look like this: -```{.bash data-prompt="$"} -$ xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ +```shell +xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ --transition-key=MySecretKey --generate-new-master-key \ --keyring-vault-config=/etc/vault.cnf ``` @@ -442,8 +461,8 @@ In this scenario, the three stages of the backup process look as follows. * Backup - ```{.bash data-prompt="$"} - $ xtrabackup --backup --user=root -p --target-dir=/data/backup \ + ```shell + xtrabackup --backup --user=root -p --target-dir=/data/backup \ --generate-transition-key ``` @@ -451,15 +470,15 @@ In this scenario, the three stages of the backup process look as follows. - `keyring_file` variant: - ```{.bash data-prompt="$"} - $ xtrabackup --prepare --target-dir=/data/backup \ + ```shell + xtrabackup --prepare --target-dir=/data/backup \ --keyring-file-data=/var/lib/mysql-keyring/keyring ``` - `keyring_vault` variant: - ```{.bash data-prompt="$"} - $ xtrabackup --prepare --target-dir=/data/backup \ + ```shell + xtrabackup --prepare --target-dir=/data/backup \ --keyring-vault-config=/etc/vault.cnf ``` @@ -467,14 +486,14 @@ In this scenario, the three stages of the backup process look as follows. - `keyring_file` variant: - ```{.bash data-prompt="$"} - $ xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ + ```shell + xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ --generate-new-master-key --keyring-file-data=/var/lib/mysql-keyring/keyring ``` - `keyring_vault` variant: - ```{.bash data-prompt="$"} - $ xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ + ```shell + xtrabackup --copy-back --target-dir=/data/backup --datadir=/data/mysql \ --generate-new-master-key --keyring-vault-config=/etc/vault.cnf ``` diff --git a/docs/prepare-incremental-backup.md b/docs/prepare-incremental-backup.md index f3a8ed370..773035a1a 100644 --- a/docs/prepare-incremental-backup.md +++ b/docs/prepare-incremental-backup.md @@ -1,30 +1,16 @@ # Prepare an incremental backup -The `--prepare` step for incremental backups is not the same -as for full backups. In full backups, two types of operations are performed -to -make the database consistent: committed transactions are replayed from the -log -file against the data files, and uncommitted transactions are rolled back. -You -must skip the rollback of uncommitted transactions when preparing an -incremental backup, because transactions that were uncommitted at the time -of -your backup may be in progress, and it’s likely that they will be committed -in -the next incremental backup. You should use the -`--apply-log-only` option to prevent the rollback phase. +The `--prepare` step for incremental backups differs from full backups. For full backups, committed transactions are replayed from the log file to the data files, and uncommitted transactions are rolled back to ensure consistency. When preparing an incremental backup, you must skip the rollback of uncommitted transactions, as these may still be in progress and could be committed in a subsequent incremental backup. 
Use the `--apply-log-only` option when preparing the base backup and each incremental backup except the last one to prevent the rollback phase.

 !!! warning

-    **If you do not use the** `--apply-log-only` **option to prevent the rollback phase, then your incremental backups will be useless**. After transactions have been rolled back, further incremental backups cannot be applied.
+    If you do not use the `--apply-log-only` option to prevent the rollback phase, then your incremental backups are unusable. After transactions have been rolled back, further incremental backups cannot be applied.

-Beginning with the full backup you created, you can prepare it, and then
-apply
-the incremental differences to it. Recall that you have the following
-backups:
+Start by preparing the full backup, then apply the incremental differences to that backup.

-```
+For example, you could have the following backups:
+
+```text
 /data/backups/base
 /data/backups/inc1
 /data/backups/inc2
@@ -33,11 +19,11 @@ backups:
 To prepare the base backup, you need to run `--prepare` as usual, but
 prevent the rollback phase:

-```{.bash data-prompt="$"}
+```shell
 $ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
 ```

-The output should end with text similar to the following:
+The output should end with text similar to the following, and the log sequence number should match the `to_lsn` of the base backup:

 ??? example "Expected output"

@@ -46,25 +32,22 @@ The output should end with text similar to the following:
     161011 12:41:04 completed OK!
     ```

-The log sequence number should match the `to_lsn` of the base backup, which
-you saw previously.
-
 !!! warning

-    This backup is actually safe to restore as-is now, even though the rollback phase has been skipped. If you restore it and start *MySQL*, *InnoDB* will detect that the rollback phase was not performed, and it will do that in the background, as it usually does for a crash recovery upon start. It will notify you that the database was not shut down normally.
+    This backup is actually safe to restore as-is now, even though the rollback phase has been skipped. If you restore the backup and start the server, InnoDB detects that the rollback phase was not performed, and completes the rollback in the background, as InnoDB usually does for a crash recovery. InnoDB notifies you that the database was not shut down normally.

 To apply the first incremental backup to the full backup, run the following
 command:

-```{.bash data-prompt="$"}
+```shell
 $ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
 --incremental-dir=/data/backups/inc1
 ```

-This applies the delta files to the files in `/data/backups/base`, which
-rolls them forward in time to the time of the incremental backup. It then
-applies the redo log as usual to the result. The final data is in
-`/data/backups/base`, not in the incremental directory. You should see
-an output similar to:
+This command applies the delta files to the files in `/data/backups/base`, which
+rolls them forward to the point in time of the first incremental backup. The redo log is then applied as usual. The final data is in
+`/data/backups/base`, not in the incremental directory.
+
+You should see output similar to:

 ??? example "Expected output"

@@ -84,20 +67,46 @@ Again, the LSN should match what you saw from your earlier inspection of the
 first incremental backup. If you restore the files from
 `/data/backups/base`, you should see the state of the database as of the
 first incremental backup.

-!!! warning
+
+### Faster prepare step with --parallel
+
+For incremental backups with many InnoDB Data (IBD) files, you can significantly reduce prepare time by using the `--parallel` option. The `--parallel` option enables the concurrent processing of multiple delta files, thereby maximizing storage bandwidth. The option is especially beneficial when there are many IBD files, even if the IBD files didn't change between backups, as empty delta files are processed quickly in parallel.
+
+!!! note "Version history"
+
+    Before Percona XtraBackup 8.0.35-33 and 8.4.0-3, the `--parallel` option didn't have any effect on the prepare phase.
+
+    Starting with Percona XtraBackup 8.0.35-33 and 8.4.0-3, using `--parallel=X` has an effect on the prepare phase. The prepare step now uses X threads to apply the changes from `.delta` files to the IBD files. When using `--parallel` in the prepare phase, always specify a numeric value. The recommended minimum value is 4 (for example, `--parallel=4`).
+
+    Note that each thread operates on a single file. If you have a large delta file, there is still only one thread that processes that `.delta` file. Parallelization occurs at the file level, not within individual files.
+
+An example command with the `--parallel` option:
+
+```shell
+$ xtrabackup --prepare --parallel=4 --apply-log-only --target-dir=/data/backups/base \
+--incremental-dir=/data/backups/inc1
+```
+
+### Prepare a second incremental backup

-    *Percona XtraBackup* does not support using the same incremental backup directory to prepare two copies of backup. Do not run `--prepare` with the same incremental backup directory (the value of –incremental-dir) more than once.
+Percona XtraBackup does not support using the same incremental backup directory to prepare two copies of a backup. Do not run `--prepare` with the same incremental backup directory (the value of `--incremental-dir`) more than once.

 Preparing the second incremental backup is a similar process: apply the deltas
-to the (modified) base backup, and you will roll its data forward in time to the point of the second incremental backup:
+to the (modified) base backup, and you will roll the base backup's data forward in time to the point of the second incremental backup:

-```{.bash data-prompt="$"}
+```shell
 $ xtrabackup --prepare --target-dir=/data/backups/base \
 --incremental-dir=/data/backups/inc2
 ```

+You can also use the `--parallel` option here to speed up the process:
+
+```shell
+$ xtrabackup --prepare --parallel=4 --target-dir=/data/backups/base \
+--incremental-dir=/data/backups/inc2
+```
+
 !!! note

-    `--apply-log-only` should be used when merging the incremental backups except the last one. That’s why the previous line does not contain the `--apply-log-only` option. Even if the `--apply-log-only` was used on the last step, backup would still be consistent but in that case server would perform the rollback phase.
+    Use `--apply-log-only` when merging the incremental backups except for the last one. This is why the previous command does not include the `--apply-log-only` option. If `--apply-log-only` is used on the last step, the backup remains consistent, but the server performs the rollback phase.

 Once prepared incremental backups are the same as the full backups, and they can be [restored](restore-a-backup.md) in the same way.
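+
+For example, once the final `--prepare` step has completed, you can copy the prepared backup back to the server with a command such as the following (the data directory shown is only an example; use your server's `datadir`, which must be empty, with the server shut down):
+
+```shell
+$ xtrabackup --copy-back --target-dir=/data/backups/base --datadir=/data/mysql
+```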
\ No newline at end of file diff --git a/docs/release-notes/8.0/8.0.35-33.0.md b/docs/release-notes/8.0/8.0.35-33.0.md index 3f04be4c3..f8bc2fbfc 100644 --- a/docs/release-notes/8.0/8.0.35-33.0.md +++ b/docs/release-notes/8.0/8.0.35-33.0.md @@ -16,7 +16,7 @@ We recommend that you download the Percona XtraBackup for the same platform as t ## Improvements -* [PXB-3427]: Percona XtraBackup now prepares incremental backups faster. The `--prepare` command directly applies the `.delta` files. To speed up this process, use the `--parallel=X` option, replacing `X` with the number of threads you want to use simultaneously. This option applies the delta files concurrently. +* [PXB-3427]: Percona XtraBackup now prepares incremental backups faster. The `--prepare` command directly applies the `.delta` files. To speed up this process, use the [`--parallel=X`](../../xtrabackup-option-reference.md#parallel) option, replacing `X` with the number of threads you want to use simultaneously. This option applies the delta files concurrently. For more information, see [Prepare an incremental backup](../../prepare-incremental-backup.md#faster-prepare-step-with---parallel). * [PXB-3199]: The `xbcloud put` operations were updated to include support for [ObjectLock-enabled AWS S3 buckets] (Thanks to volver for contributing the fix for this issue.). diff --git a/docs/xbcloud-binary-overview.md b/docs/xbcloud-binary-overview.md index caab952be..fbf9c3d74 100644 --- a/docs/xbcloud-binary-overview.md +++ b/docs/xbcloud-binary-overview.md @@ -13,8 +13,8 @@ needing a local storage. ${PIPESTATUS[x]} array parameter returns the exit code for each binary in the pipe string. - ```{.bash data-prompt="$"} - $ xtrabackup --backup --stream=xbstream --target-dir=/storage/backups/ | xbcloud put [options] full_backup + ```shell + xtrabackup --backup --stream=xbstream --target-dir=/storage/backups/ | xbcloud put [options] full_backup ... $ ${PIPESTATUS[x]} 0 0 @@ -85,8 +85,8 @@ In addition to OpenStack Object Storage (Swift), which has been the only option The following sample command creates a full backup: -```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream --target-dir=/storage/backups/ --extra-lsndirk=/storage/backups/| xbcloud \ +```shell +xtrabackup --backup --stream=xbstream --target-dir=/storage/backups/ --extra-lsndirk=/storage/backups/| xbcloud \ put [options] full_backup ``` @@ -94,33 +94,33 @@ An incremental backup only includes the changes since the last backup. 
The last The following sample command creates an incremental backup: -```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream --incremental-basedir=/storage/backups \ +```shell +xtrabackup --backup --stream=xbstream --incremental-basedir=/storage/backups \ --target-dir=/storage/inc-backup | xbcloud put [options] inc_backup ``` To prepare an incremental backup, you must first download the full backup with the following command: -```{.bash data-prompt="$"} -$ xbcloud get [options] full_backup | xbstream -xv -C /tmp/full-backup +```shell +xbcloud get [options] full_backup | xbstream -xv -C /tmp/full-backup ``` You must prepare the full backup: -```{.bash data-prompt="$"} -$ xtrabackup --prepare --apply-log-only --target-dir=/tmp/full-backup +```shell +xtrabackup --prepare --apply-log-only --target-dir=/tmp/full-backup ``` After the full backup has been prepared, download the incremental backup: -``` +```shell xbcloud get [options] inc_backup | xbstream -xv -C /tmp/inc-backup ``` The downloaded backup is prepared by running the following command: -```{.bash data-prompt="$"} -$ xtrabackup --prepare --target-dir=/tmp/full-backup --incremental-dir=/tmp/inc-backup +```shell +xtrabackup --prepare --target-dir=/tmp/full-backup --incremental-dir=/tmp/inc-backup ``` You do not need the full backup to restore only a specific database. You can specify only the tables to be restored: @@ -191,14 +191,14 @@ three distinct parameters (–storage, –s3-bucket, and backup name per se). In this example s3 refers to a storage type, operator-testing is a bucket name, and bak22 is the backup name. - ```{.bash data-prompt="$"} - $ xbcloud get s3://operator-testing/bak22 ... + ```shell + xbcloud get s3://operator-testing/bak22 ... ``` This shortcut expands as follows: - ```{.bash data-prompt="$"} - $ xbcloud get --storage=s3 --s3-bucket=operator-testing bak22 ... + ```shell + xbcloud get --storage=s3 --s3-bucket=operator-testing bak22 ... ``` You can supply the mandatory parameters on the command line, @@ -211,8 +211,8 @@ type. The `--md5` parameter computes the MD5 hash value of the backup chunks. The result is stored in files that following the `backup_name.md5` pattern. -```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream \ +```shell +xtrabackup --backup --stream=xbstream \ --parallel=8 2>backup.log | xbcloud put s3://operator-testing/bak22 \ --parallel=8 --md5 2>upload.log ``` @@ -222,8 +222,8 @@ header with the server side encryption while specifying a customer key. An example of using the ``--header`` for AES256 encryption. 
-```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream --parallel=4 | \ +```shell +xtrabackup --backup --stream=xbstream --parallel=4 | \ xbcloud put s3://operator-testing/bak-enc/ \ --header="X-Amz-Server-Side-Encryption-Customer-Algorithm: AES256" \ --header="X-Amz-Server-Side-Encryption-Customer-Key: CuStoMerKey=" \ @@ -239,8 +239,8 @@ permissions: `--header="x-amz-acl: bucket-owner-full-control` First, you need to make the full backup on which the incremental one is going to be based: -```{.bash data-prompt="$"} -$ xtrabackup --backup --stream=xbstream --extra-lsndir=/storage/backups/ \ +```shell +xtrabackup --backup --stream=xbstream --extra-lsndir=/storage/backups/ \ --target-dir=/storage/backups/ | xbcloud put \ --storage=swift --swift-container=test_backup \ --swift-auth-version=2.0 --swift-user=admin \ @@ -251,8 +251,8 @@ full_backup Then you can make the incremental backup: -```{.bash data-prompt="$"} -$ xtrabackup --backup --incremental-basedir=/storage/backups \ +```shell +xtrabackup --backup --incremental-basedir=/storage/backups \ --stream=xbstream --target-dir=/storage/inc_backup | xbcloud put \ --storage=swift --swift-container=test_backup \ --swift-auth-version=2.0 --swift-user=admin \ @@ -265,55 +265,70 @@ inc_backup To prepare a backup you first need to download the full backup: -```{.bash data-prompt="$"} -$ xbcloud get --swift-container=test_backup \ +```shell +xbcloud get --swift-container=test_backup \ --swift-auth-version=2.0 --swift-user=admin \ --swift-tenant=admin --swift-password=xoxoxoxo \ --swift-auth-url=http://127.0.0.1:35357/ --parallel=10 \ full_backup | xbstream -xv -C /storage/downloaded_full ``` -Once you download the full backup it should be prepared: +Once you download the full backup, the full backup should be prepared: -```{.bash data-prompt="$"} -$ xtrabackup --prepare --apply-log-only --target-dir=/storage/downloaded_full +```shell +xtrabackup --prepare --apply-log-only --target-dir=/storage/downloaded_full ``` After the full backup has been prepared you can download the incremental backup: -```{.bash data-prompt="$"} -$ xbcloud get --swift-container=test_backup \ +```shell +xbcloud get --swift-container=test_backup \ --swift-auth-version=2.0 --swift-user=admin \ --swift-tenant=admin --swift-password=xoxoxoxo \ --swift-auth-url=http://127.0.0.1:35357/ --parallel=10 \ inc_backup | xbstream -xv -C /storage/downloaded_inc ``` -Once the incremental backup has been downloaded you can prepare it by running: +Once the incremental backup has been downloaded, you can prepare the incremental backup by running: -```{.bash data-prompt="$"} -$ xtrabackup --prepare --apply-log-only \ +```shell +xtrabackup --prepare --apply-log-only \ --target-dir=/storage/downloaded_full \ --incremental-dir=/storage/downloaded_inc +``` +and -$ xtrabackup --prepare --target-dir=/storage/downloaded_full +```shell +xtrabackup --prepare --target-dir=/storage/downloaded_full ``` +!!! note + + Accelerate the prepare phase for incremental backups with many InnoDB Data (IBD) files by using the `--parallel` option to process delta files concurrently. When using `--parallel` in the prepare phase, always specify a numeric value. The recommended minimum value is 4 (for example, `--parallel=4`). 
+
+    ```shell
+    xtrabackup --prepare --apply-log-only \
+    --target-dir=/storage/downloaded_full \
+    --incremental-dir=/storage/downloaded_inc --parallel=4
+
+    xtrabackup --prepare --target-dir=/storage/downloaded_full --parallel=4
+    ```
+
 ### Partial download of the cloud backup

 If you do not want to download the entire backup to restore the specific
 database you can specify only the tables you want to restore:

-```{.bash data-prompt="$"}
-$ xbcloud get --swift-container=test_backup
+```shell
+xbcloud get --swift-container=test_backup \
 --swift-auth-version=2.0 --swift-user=admin \
 --swift-tenant=admin --swift-password=xoxoxoxo \
 --swift-auth-url=http://127.0.0.1:35357/ full_backup \
 ibdata1 sakila/payment.ibd \
 > /storage/partial/partial.xbs

-$ xbstream -xv -C /storage/partial < /storage/partial/partial.xbs
+xbstream -xv -C /storage/partial < /storage/partial/partial.xbs
 ```

 [xbcloud command line options]: xbcloud-options.md
\ No newline at end of file
diff --git a/docs/xtrabackup-option-reference.md b/docs/xtrabackup-option-reference.md
index 324aae6a1..9bd5f7e1f 100644
--- a/docs/xtrabackup-option-reference.md
+++ b/docs/xtrabackup-option-reference.md
@@ -794,13 +794,22 @@ The maximum number of file descriptors to reserve with setrlimit().

 Usage: `--parallel=#`

-This option specifies the number of threads to use to copy multiple data
-files concurrently when creating a backup. The default value is 1 (i.e., no
-concurrent transfer). In Percona XtraBackup 2.3.10 and newer, this option
+The `--parallel` option specifies the number of threads to use to copy multiple data
+files concurrently when creating a backup. The default value is 1 (that is, no
+concurrent transfer). In Percona XtraBackup 2.3.10 and newer, the `--parallel` option
 can be used with the `--copy-back` option
 to copy the user data files in parallel (redo logs and system
 tablespaces are copied in the main thread).
+
+Before Percona XtraBackup 8.0.35-33 and 8.4.0-3, the `--parallel` option didn't have any effect on the prepare phase.
+
+Starting with [Percona XtraBackup 8.0.35-33](release-notes/8.0/8.0.35-33.0.md) and 8.4.0-3, using `--parallel=X` has an effect on the prepare phase. The prepare step now uses X threads to apply the changes from `.delta` files to the IBD files. This option processes multiple delta files simultaneously, improving storage performance and accelerating the prepare step for incremental backups, particularly with numerous InnoDB Data (IBD) files. The option remains effective even if IBD files are unchanged between backups and efficiently handles empty delta files.
+
+When using `--parallel` in the prepare phase, always specify a numeric value. The recommended minimum value is 4 (for example, `--parallel=4`).
+
+Note that each thread operates on a single file. If you have a large delta file, there is still only one thread that processes that `.delta` file. Parallelization occurs at the file level, not within individual files.
+
 ### password

 Usage: `--password=PASSWORD`