@ytooyama
Last active June 9, 2025 06:51
How to Build xCluster Replication with YugabyteDB


To set up xCluster Replication in YugabyteDB, follow the steps below. Here, we introduce a general setup method (semi-automatic/automatic mode) using the yugabyted command.


1. Preparing the Clusters

  • Prepare two clusters: Primary (source) and Standby (target).

    • Start each cluster's yugabyted nodes with --backup_daemon=true to enable the backup/restore agent. Multi-node clusters are recommended in production, but we will use single-node clusters for testing here.
    • If you do not perform backup or restore, xCluster Replication works without the --backup_daemon=true option.
  • Prepare Linux hosts with the following hostnames and IP addresses beforehand:

    • xcluster-east 172.17.28.46

    • xcluster-west 172.17.28.47
Using the --backup_daemon=true option requires a small setup step. Copy the ybc-2.0.0.0-b19-linux-x86_64.tar.gz package found in the yugabyte-2.25.1.0/share/ directory into the yugabyte-2.25.1.0 directory. If YugabyteDB is installed under /opt/yugabyte/yugabyte-2.25.1.0/, copy the binary into /opt/yugabyte/yugabyte-2.25.1.0/ybc/bin.
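The copy step above can be sketched as a small shell helper. The install path and archive name are taken from the example above; the layout inside the archive is an assumption, so verify that the binaries actually land in ybc/bin on your system:

```shell
#!/usr/bin/env bash
# Sketch: stage the ybc backup agent so yugabyted can run with
# --backup_daemon=true. Paths follow the example install
# (/opt/yugabyte/yugabyte-2.25.1.0); adjust for your environment.
set -euo pipefail

stage_ybc() {
  local install_dir="$1"   # e.g. /opt/yugabyte/yugabyte-2.25.1.0
  local tarball
  tarball=$(ls "${install_dir}"/share/ybc-*-linux-x86_64.tar.gz | head -n1)

  # Copy the package into the install directory, then unpack the
  # binaries into ybc/bin (archive layout is an assumption here).
  cp "${tarball}" "${install_dir}/"
  mkdir -p "${install_dir}/ybc/bin"
  tar -xzf "${tarball}" -C "${install_dir}/ybc/bin"
}
```

For the example hosts, this would be invoked as `stage_ybc /opt/yugabyte/yugabyte-2.25.1.0` on each node before starting yugabyted.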

Build the Primary (source) node. Add more nodes as needed.

sudo yugabyted start --advertise_address=172.17.28.46 --base_dir=/home/yugabyte/data --backup_daemon=true

Build the Standby (target) node. Add more nodes as needed.

sudo yugabyted start --advertise_address=172.17.28.47 --base_dir=/home/yugabyte/data --backup_daemon=true

On both nodes, create the database and an empty table.

ysqlsh -h 172.17.28.4x -U yugabyte -d yugabyte
CREATE DATABASE testdb;
\c testdb
CREATE TABLE demo
(
  number SERIAL,
  name VARCHAR(128) NOT NULL,
  PRIMARY KEY (number)
);
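Since the checkpoint step below expects the database and schema to already exist on both clusters, the same DDL can be applied to each node in a loop. A sketch, using the example host IPs from above:

```shell
#!/usr/bin/env bash
# Sketch: create the same database and table on both the Primary and the
# Standby, since xCluster setup expects the schema on the target as well.
# Host IPs follow the example setup above.
set -euo pipefail

apply_schema() {
  local host="$1"
  ysqlsh -h "${host}" -U yugabyte -d yugabyte -c "CREATE DATABASE testdb"
  ysqlsh -h "${host}" -U yugabyte -d testdb <<'SQL'
CREATE TABLE demo
(
  number SERIAL,
  name VARCHAR(128) NOT NULL,
  PRIMARY KEY (number)
);
SQL
}

# Guarded so the script is a no-op where ysqlsh is not installed.
if command -v ysqlsh >/dev/null 2>&1; then
  for host in 172.17.28.46 172.17.28.47; do
    apply_schema "${host}"
  done
fi
```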

2. Creating a Checkpoint (on Primary Side)

./bin/yugabyted xcluster create_checkpoint \
    --replication_id <replication_id> \
    --databases <comma-separated-database-names>
  • This command creates a snapshot of the databases to be replicated and determines whether bootstrap (data copy) is needed.

  • Follow the output instructions to perform backup & restore as needed (see below for details).

  • For the --replication_id, specify a string that uniquely identifies this xCluster replication. This ID is used to manage replication settings between the source and target clusters. For example, you can use any name like my_xcluster_repl.

  • For more information: https://docs.yugabyte.com/stable/reference/configuration/yugabyted/#set-up-xcluster-replication-between-clusters

Executed Commands

The actual command executed was as follows. Ensure that the specified database exists, has at least one table, and that you include the --base_dir option.

sudo yugabyted xcluster create_checkpoint \
    --base_dir=/home/yugabyte/data \
    --replication_id my_xcluster_repl \
    --databases testdb

+---------------------------------------------------------------------------------------------------------------------------------------------+
|                                                                  yugabyted                                                                  |
+---------------------------------------------------------------------------------------------------------------------------------------------+
| Status               : xCluster create checkpoint success.                                                                                  |
| Bootstrapping        : Before running the xcluster setup command, Database `testdb` and schema needs to be applied on the target cluster.   |
+---------------------------------------------------------------------------------------------------------------------------------------------+

3. Backup & Restore as Needed

  • If data already exists, perform backup on the primary and restore on the standby.
    • Skipped for this test environment.
# On Primary
./bin/yugabyted backup --cloud_storage_uri <storage-URI> --database <database_name> --base_dir <base_dir>
# On Standby
./bin/yugabyted restore --cloud_storage_uri <storage-URI> --database <database_name> --base_dir <base_dir>

4. Enabling Point-in-Time Recovery (PITR) on Both Clusters (east and west)

./bin/yugabyted configure point_in_time_recovery \
    --enable \
    --retention <retention-period> \
    --database <database_name>
  • The retention period should be long enough to allow for recovery or failover if the primary goes down.
  • The retention period is specified in days.

Executed Commands

The actual commands run were:

[almalinux@xcluster-east ~]$ sudo yugabyted configure point_in_time_recovery \
    --enable \
    --base_dir=/home/yugabyte/data \
    --retention 2 \
    --database testdb
✅ Verified Point-In-Time Recovery configs.   
✅ Successfully enabled Point-In-Time recovery for database testdb.

+--------------------------------------+
|              yugabyted               |
+--------------------------------------+
| Status                  : Success    |
| Database                : testdb     |
| Retention Period        : 2 days     |
| Interval                : 24 hours   |
+--------------------------------------+

Note that setting retention to 1 causes an error:

Error: Following errors found while enabling point-in-time recovery:
* Retention period must be a positive integer greater than 1. Please specify a valid postive integer greater than 1 to enable point-in-time recovery.

After correcting the value:

[almalinux@xcluster-west ~]$ sudo yugabyted configure point_in_time_recovery \
    --enable \
    --base_dir=/home/yugabyte/data \
    --retention 2 \
    --database testdb
+--------------------------------------+
|              yugabyted               |
+--------------------------------------+
| Status                  : Success    |
| Database                : testdb     |
| Retention Period        : 2 days     |
| Interval                : 24 hours   |
+--------------------------------------+
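The constraint from the error message above (the retention value must be a positive integer greater than 1) can be checked before running the command. A minimal sketch:

```shell
#!/usr/bin/env bash
# Sketch: pre-flight check for the PITR retention value, encoding the
# yugabyted error above ("positive integer greater than 1").
set -euo pipefail

valid_retention() {
  local value="$1"
  # Must be all digits and strictly greater than 1.
  [[ "${value}" =~ ^[0-9]+$ ]] && [ "${value}" -gt 1 ]
}
```

Here `valid_retention 1` fails and `valid_retention 2` succeeds, matching the behavior observed above.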

5. Setting Up xCluster Replication

Reference: https://docs.yugabyte.com/preview/reference/configuration/yugabyted/#set-up

./bin/yugabyted xcluster set_up \
    --target_address <IP-of-target-cluster-node> \
    --replication_id <replication_id> \
    --bootstrap_done

Use the same replication_id as when creating the checkpoint (e.g., my_xcluster_repl). You may also need to specify --base_dir. Run this on the source node, specifying the target node's IP.

Executed Commands

On the source node (172.17.28.46), targeting 172.17.28.47:

sudo yugabyted xcluster set_up \
    --target_address 172.17.28.47 \
    --replication_id my_xcluster_repl \
    --base_dir=/home/yugabyte/data \
    --bootstrap_done

+-----------------------------------------------+
|                   yugabyted                   |
+-----------------------------------------------+
| Status        : xCluster set-up successful.   |
+-----------------------------------------------+
  • When set up correctly, you will see "xCluster set-up successful."

6. Testing

Let's test the replication!

$ ysqlsh -h 172.17.28.46  -U yugabyte -d testdb -c "INSERT INTO demo (name) VALUES ('1')"
$ ysqlsh -h 172.17.28.46  -U yugabyte -d testdb -c "SELECT * FROM demo"
 number | name 
--------+------
      1 | 1
(1 row)

$ ysqlsh -h 172.17.28.47  -U yugabyte -d testdb -c "SELECT * FROM demo"
 number | name 
--------+------
      1 | 1
(1 row)
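The manual check above can be wrapped in a small script that inserts a row on the primary and verifies the standby caught up. A sketch; the hosts and table follow the example setup, and since xCluster replication is asynchronous a short wait is included:

```shell
#!/usr/bin/env bash
# Sketch: insert on the primary, then confirm the row count matches on the
# standby. Hosts, database, and table names follow the example setup above.
set -euo pipefail

count_rows() {
  local host="$1"
  # -At: unaligned, tuples-only output (ysqlsh inherits these psql flags)
  ysqlsh -h "${host}" -U yugabyte -d testdb -At -c "SELECT count(*) FROM demo"
}

smoke_test() {
  local primary="$1" standby="$2"
  ysqlsh -h "${primary}" -U yugabyte -d testdb \
    -c "INSERT INTO demo (name) VALUES ('replication-check')"
  sleep 5   # replication is asynchronous; give it a moment
  [ "$(count_rows "${primary}")" -eq "$(count_rows "${standby}")" ]
}
```

For the example clusters, run `smoke_test 172.17.28.46 172.17.28.47`.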

Notes

  • For detailed steps and differences between automatic and semi-automatic modes, see the official documentation.



What kind of URI should be specified with yugabyted backup --cloud_storage_uri?

Use a URI for cloud storage or NFS where the backup data will be stored. Examples include:

  • For AWS S3
    s3://[bucket_name]

  • For NFS
    /nfs-dir

  • For GCP or other cloud storage
    gs://[bucket_name]

Make sure the URI points to accessible storage. For details, see the official documentation.
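As a sketch, the accepted forms above could be sanity-checked before running the backup. Note the scheme list here is an assumption based only on the examples in this note, not an exhaustive list of what yugabyted accepts:

```shell
#!/usr/bin/env bash
# Sketch: rough sanity check for a --cloud_storage_uri value. The accepted
# forms (s3://, gs://, absolute NFS path) are assumptions taken from the
# examples above.
set -euo pipefail

valid_storage_uri() {
  case "$1" in
    s3://*|gs://*) return 0 ;;   # cloud object storage
    /*)            return 0 ;;   # NFS / local mount path
    *)             return 1 ;;
  esac
}
```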

Note:
You must launch yugabyted with --backup_daemon=true to perform backup/restore. Backup/restore is not supported on macOS.
