@tegila
Last active March 25, 2022 20:25
ssh x1

lxc profile device remove default root
lxc profile device remove default eth0
lxc storage delete local
lxc config unset core.https_address

lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=localhost]: x3
What IP address or DNS name should be used to reach this node? [default=127.0.0.1]: 10.8.0.1
Are you joining an existing cluster? (yes/no) [default=no]: 
Setup password authentication on the cluster? (yes/no) [default=yes]: 
Trust password for new clients: 
Again: 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty disk or partition? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=5GB]: 20GB
Do you want to configure a new remote storage pool? (yes/no) [default=no]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
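The wizard answers above can also be supplied non-interactively with a preseed. A sketch of the equivalent preseed, assuming the same names and addresses; the trust password is a placeholder, the macvlan NIC config is an assumption about how LXD maps "existing host interface: eth0", and the exact keys should be checked against "lxd init --dump" on your LXD version:

```shell
# Non-interactive equivalent of the bootstrap answers above (sketch only;
# verify keys against "lxd init --dump" before relying on it)
cat <<'EOF' | lxd init --preseed
config:
  core.https_address: 10.8.0.1:8443
  core.trust_password: CHANGE_ME    # placeholder, not from the original
cluster:
  server_name: x3
  enabled: true
storage_pools:
- name: local
  driver: btrfs
  config:
    size: 20GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: local
      type: disk
    eth0:                # assumed macvlan mapping for a host interface
      name: eth0
      nictype: macvlan
      parent: eth0
      type: nic
EOF
```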
ssh x2

lxc profile device remove default root
lxc profile device remove default eth0
lxc storage delete local
lxc config unset core.https_address

lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=localhost]: x2
What IP address or DNS name should be used to reach this node? [default=127.0.0.1]: 10.8.0.2
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.8.0.1
Cluster fingerprint: b9d2523a4935474c4a52f16ceb8a44e80907143e219a3248fbb9f5ac5d53d926
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password: 
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local": 
Choose "size" property for storage pool "local": 20GB
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
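Joining can likewise be preseeded instead of answered interactively. A sketch under the assumption that password-based joining is in use (as above); the certificate block is a placeholder for the bootstrap node's cluster certificate, and key names follow the LXD clustering docs and may vary by version:

```shell
# Non-interactive join equivalent (sketch; certificate and password are
# placeholders, keys taken from the LXD clustering documentation)
cat <<'EOF' | lxd init --preseed
cluster:
  enabled: true
  server_name: x2
  server_address: 10.8.0.2:8443
  cluster_address: 10.8.0.1:8443
  cluster_password: CHANGE_ME       # placeholder
  cluster_certificate: |
    -----BEGIN CERTIFICATE-----
    (certificate of the existing cluster node goes here)
    -----END CERTIFICATE-----
  member_config:
  - entity: storage-pool
    name: local
    key: size
    value: 20GB
EOF
```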
> lxc cluster list
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
| NAME  |            URL             | DATABASE | STATE  |      MESSAGE      | ARCHITECTURE | FAILURE DOMAIN |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
|  x1   | https://10.166.11.235:8443 | YES      | ONLINE | fully operational | aarch64      |                |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
|  x2   | https://10.166.11.92:8443  | YES      | ONLINE | fully operational | aarch64      |                |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
|  x3   | https://10.166.11.200:8443 | YES      | ONLINE | fully operational | aarch64      |                |
+-------+----------------------------+----------+--------+-------------------+--------------+----------------+
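For scripting, the same health check can be done without eyeballing the table. A sketch that parses CSV output and flags any member not ONLINE; the inlined sample mirrors the table above, and the column order is an assumption that should be checked against `lxc cluster list --format csv` on your version:

```shell
# Script-friendly check that every cluster member reports ONLINE.
# A captured sample stands in for "lxc cluster list --format csv" so the
# parsing can be followed without a live cluster; column order assumed.
sample='x1,https://10.166.11.235:8443,YES,ONLINE,fully operational
x2,https://10.166.11.92:8443,YES,ONLINE,fully operational
x3,https://10.166.11.200:8443,YES,ONLINE,fully operational'

# Against a live cluster, replace the printf with: lxc cluster list --format csv
offline=$(printf '%s\n' "$sample" | awk -F, '$4 != "ONLINE" {print $1}')
if [ -z "$offline" ]; then
  echo "all members online"
else
  echo "offline: $offline"
fi
```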


> lxc remote add my-cluster x1

Certificate fingerprint: b9d2523a4935474c4a52f16ceb8a44e80907143e219a3248fbb9f5ac5d53d926
ok (y/n)? y
Admin password for my-cluster: 
Client certificate stored at server:  my-cluster

> lxc remote switch my-cluster

lxc launch images:alpine/edge c1
lxc launch images:archlinux c2
lxc launch images:ubuntu/18.04 c3
lxc launch images:ubuntu/20.04/cloud v1 --vm
lxc launch images:fedora/32/cloud v2 --vm
lxc launch images:debian/11/cloud v3 --vm
To back up an instance (here called "blah"), snapshot it, publish the snapshot as an image, and export that image:

lxc snapshot blah backup
lxc publish blah/backup --alias blah-backup
lxc image export blah-backup .
lxc image delete blah-backup

This will get you a tarball in your current directory.

To restore and create a container from it, you can then do:

lxc image import TARBALL-NAME --alias blah-backup
lxc launch blah-backup some-container-name
lxc image delete blah-backup
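The backup steps above can be wrapped in a small helper. A sketch, assuming `lxc` is on PATH; the function name is a hypothetical placeholder, and it additionally deletes the temporary snapshot, which the original steps leave behind:

```shell
# Helper combining the snapshot/publish/export steps shown above.
# "backup_instance" is a hypothetical name; it leaves a tarball named
# after the image alias in the current directory.
backup_instance() {
  inst="$1"
  alias="${inst}-backup"
  lxc snapshot "$inst" backup
  lxc publish "$inst/backup" --alias "$alias"
  lxc image export "$alias" .
  lxc image delete "$alias"
  lxc delete "$inst/backup"   # also remove the temporary snapshot
}

# Usage: backup_instance c1
```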
