Moved to a repo at https://github.com/Geczy/coolify-migration
I tried something very similar, but must have missed something, as I spent an entire day trying to figure out some errors in coolify. Thank you sincerely for posting this. As far as I can tell, everything works exactly as it did on my original VPS and it took literally seconds to do.
Here is an updated version with support for multiple volumes and some improvements
#!/bin/bash
# This script will backup your Coolify instance and move everything to a new server,
# including Docker volumes, Coolify database, and SSH keys.
# Configuration - Modify as needed
sshKeyPath="/home/user/.ssh/key" # Key to the destination server
destinationHost="192.168.1.1"
sshPort=22 # SSH port for the destination server
# -- Shouldn't need to modify anything below --
backupSourceDir="/data/coolify/"
backupFileName="coolify_backup.tar.gz"
# Ensure the script is run as root
if [ "$EUID" -ne 0 ]; then
echo "β Please run the script as root"
exit 1
fi
# Check if the source directory exists
if [ ! -d "$backupSourceDir" ]; then
echo "β Source directory $backupSourceDir does not exist"
exit 1
fi
echo "β
Source directory exists"
# Check if the SSH key file exists
if [ ! -f "$sshKeyPath" ]; then
echo "β SSH key file $sshKeyPath does not exist"
exit 1
fi
echo "β
SSH key file exists"
# Check if Docker is installed and running
if ! command -v docker >/dev/null 2>&1; then
echo "β Docker is not installed"
exit 1
fi
if ! systemctl is-active --quiet docker; then
echo "β Docker is not running"
exit 1
fi
echo "β
Docker is installed and running"
# Check if we can SSH to the destination server
if ! ssh -p "$sshPort" -i "$sshKeyPath" -o "StrictHostKeyChecking no" -o "ConnectTimeout=5" root@"$destinationHost" "exit"; then
echo "β SSH connection to $destinationHost failed"
exit 1
fi
echo "β
SSH connection successful"
# Get the names of all running Docker containers
containerNames=$(docker ps --format '{{.Names}}')
# Initialize an array to hold the volume paths
volumePaths=()
# Loop over the container names and get their volumes
for containerName in $containerNames; do
volumeNames=$(docker inspect --format '{{range .Mounts}}{{.Name}} {{end}}' "$containerName")
for volumeName in $volumeNames; do
if [ -n "$volumeName" ]; then
volumePaths+=("/var/lib/docker/volumes/$volumeName/_data")
fi
done
done
# Calculate and print the total size of the volumes and the source directory
totalSize=$(du -csh "${volumePaths[@]}" 2>/dev/null | grep total | awk '{print $1}')
echo "β
Total size of volumes to migrate: ${totalSize:-0}"
backupSourceDirSize=$(du -csh "$backupSourceDir" 2>/dev/null | grep total | awk '{print $1}')
echo "β
Size of the source directory: ${backupSourceDirSize:-0}"
# Check if the backup file already exists and create it if it does not
if [ ! -f "$backupFileName" ]; then
echo "πΈ Backup file does not exist, creating..."
# Optionally stop Docker before creating the backup
echo "πΈ It's recommended to stop all Docker containers before creating the backup. Do you want to stop Docker? (y/n)"
read -rp "Answer: " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
systemctl stop docker && systemctl stop docker.socket
echo "β
Docker stopped"
else
echo "πΈ Docker not stopped, continuing with the backup"
fi
# Create the backup tarball with progress feedback
tar --exclude='*.sock' -Pczf "$backupFileName" -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" "${volumePaths[@]}" --checkpoint=.1000
if [ $? -ne 0 ]; then
echo "β Backup file creation failed"
exit 1
fi
echo "β
Backup file created"
else
echo "πΈ Backup file already exists, skipping creation"
fi
# Define the remote commands to be executed
remoteCommands="
if systemctl is-active --quiet docker; then
if ! systemctl stop docker; then
echo '❌ Docker stop failed';
exit 1;
fi
echo '✅ Docker stopped';
else
echo 'ℹ️ Docker is not a service, skipping stop command';
fi
cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys_backup;
if ! tar -Pxzf - -C /; then
echo '❌ Backup file extraction failed';
exit 1;
fi
echo '✅ Backup file extracted';
cat ~/.ssh/authorized_keys_backup ~/.ssh/authorized_keys | sort | uniq > ~/.ssh/authorized_keys_temp;
mv ~/.ssh/authorized_keys_temp ~/.ssh/authorized_keys;
chmod 600 ~/.ssh/authorized_keys;
echo '✅ Authorized keys merged';
if ! curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash; then
echo '❌ Coolify installation failed';
exit 1;
fi
echo '✅ Coolify installed';
"
# SSH to the destination server, execute the remote commands
if ! ssh -p "$sshPort" -i "$sshKeyPath" -o "StrictHostKeyChecking no" root@"$destinationHost" "$remoteCommands" <"$backupFileName"; then
echo "β Remote commands execution or Docker restart failed"
exit 1
fi
echo "β
Remote commands executed successfully"
# Clean up - Ask the user for confirmation before removing the local backup file
echo "Do you want to remove the local backup file? (y/n)"
read -rp "Answer: " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! rm -f "$backupFileName"; then
echo "β Failed to remove local backup file"
exit 1
fi
echo "β
Local backup file removed"
else
echo "πΈ Local backup file not removed"
fi
I'm getting a message saying I should be logged in as azureuser:
azureuser@Coolifyserver:~$ sudo ./CoolifyBackup.sh
✅ Source directory exists
✅ SSH key file exists
✅ Docker is installed and running
Please login as the user "azureuser" rather than the user "root".
❌ SSH connection to failed
azureuser@Coolifyserver:~$
@rakithat20 it seems that the root user is disabled by default in Azure.[1] You may need to amend the script (i.e. Ctrl+F and replace root with azureuser or another user) as well as grant that user passwordless sudo permissions.[2]
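For illustration, a rough sketch of those two changes, assuming the script is saved as migrate.sh and the Azure login user is azureuser (both names are assumptions, adjust to your setup):
# On the source server: point every ssh/scp call at the non-root user
sed -i 's/root@/azureuser@/g' migrate.sh
# On the destination server: grant that user passwordless sudo
echo 'azureuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/azureuser
sudo chmod 440 /etc/sudoers.d/azureuser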
Hey @AlejandroAkbal I modified your script for those who want to backup Coolify to a GCP Bucket using rclone
#!/bin/bash
# This script will backup your Coolify instance and upload the backup to a GCP bucket
# Configuration - Modify as needed
GCP_BUCKET="gcpbucketalias:bucket-name/backups/coolify/"
# -- Shouldn't need to modify anything below --
BACKUP_SOURCE_DIR="/data/coolify/"
BACKUP_FILE_NAME="coolify_backup_$(date +'%Y-%m-%d').tar.gz"
# Ensure the script is run as root
if [ "$EUID" -ne 0 ]; then
printf "β Please run the script as root\n" >&2
exit 1
fi
# Check if the source directory exists
if [ ! -d "$BACKUP_SOURCE_DIR" ]; then
printf "β Source directory %s does not exist\n" "$BACKUP_SOURCE_DIR" >&2
exit 1
fi
printf "β
Source directory exists\n"
# Check if Docker is installed and running
if ! command -v docker >/dev/null 2>&1; then
printf "β Docker is not installed\n" >&2
exit 1
fi
if ! systemctl is-active --quiet docker; then
printf "β Docker is not running\n" >&2
exit 1
fi
printf "β
Docker is installed and running\n"
# Get the names of all running Docker containers
container_names=$(docker ps --format '{{.Names}}')
# Initialize an array to hold the volume paths
volume_paths=()
# Loop over the container names and get their volumes
for container_name in $container_names; do
volume_names=$(docker inspect --format '{{range .Mounts}}{{.Name}} {{end}}' "$container_name")
for volume_name in $volume_names; do
if [ -n "$volume_name" ]; then
volume_paths+=("/var/lib/docker/volumes/$volume_name/_data")
fi
done
done
# Calculate and print the total size of the volumes and the source directory
total_size=$(du -csh "${volume_paths[@]}" 2>/dev/null | grep total | awk '{print $1}')
printf "β
Total size of volumes to migrate: %s\n" "${total_size:-0}"
backup_source_dir_size=$(du -csh "$BACKUP_SOURCE_DIR" 2>/dev/null | grep total | awk '{print $1}')
printf "β
Size of the source directory: %s\n" "${backup_source_dir_size:-0}"
# Create the backup tarball with progress feedback
printf "πΈ It's recommended to stop all Docker containers before creating the backup. Do you want to stop Docker? (y/n)\n"
read -rp "Answer: " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
systemctl stop docker && systemctl stop docker.socket
printf "β
Docker stopped\n"
else
printf "πΈ Docker not stopped, continuing with the backup\n"
fi
tar --exclude='*.sock' -Pczf "$BACKUP_FILE_NAME" -C / "$BACKUP_SOURCE_DIR" "$HOME/.ssh/authorized_keys" "${volume_paths[@]}" --checkpoint=.1000
if [ $? -ne 0 ]; then
printf "β Backup file creation failed\n" >&2
exit 1
fi
printf "β
Backup file created\n"
# Transfer the backup file to GCP bucket
if ! rclone --gcs-bucket-policy-only copy "$BACKUP_FILE_NAME" "$GCP_BUCKET"; then
printf "β Backup file transfer to GCP bucket failed\n" >&2
exit 1
fi
printf "β
Backup file transferred to GCP bucket\n"
# Clean up - Ask the user for confirmation before removing the local backup file
printf "Do you want to remove the local backup file? (y/n)\n"
read -rp "Answer: " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! rm -f "$BACKUP_FILE_NAME"; then
printf "β Failed to remove local backup file\n" >&2
exit 1
fi
printf "β
Local backup file removed\n"
else
printf "πΈ Local backup file not removed\n"
fi
# Optionally start Docker again
printf "Do you want to start Docker again? (y/n)\n"
read -rp "Answer: " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
systemctl start docker && systemctl start docker.socket
printf "β
Docker started\n"
else
printf "πΈ Docker not started\n"
fi```
This seems to be the only way to migrate from one machine to another. I was able to back up and restore to the same Coolify image. I suspect that if I had updated Coolify before restoring the file.dmp, I wouldn't be able to back this system up again with that file.dmp. I have also tried changing the .env file so it would match the original system, yet a 500 error occurred.
Anyway, my only question is: how can I test this migrate.sh?
Awesome script, worked perfectly on first try!
You didn't, by chance, write another one that runs from a clean machine, checks the storage bucket for backups, and restores a chosen one?
glad everyone likes the script <3
@dreadedhamish you can probably ask chatgpt to do that no?
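For reference, a minimal, untested sketch of what such a restore could look like on a clean machine, assuming the backup was created by the rclone variant above (remote alias and file name are placeholders):
rclone lsf gcpbucketalias:bucket-name/backups/coolify/    # list available backups
rclone copy gcpbucketalias:bucket-name/backups/coolify/coolify_backup_2024-01-01.tar.gz .
systemctl stop docker docker.socket                       # stop Docker before restoring
tar -Pxzf coolify_backup_2024-01-01.tar.gz -C /           # restores /data/coolify and the Docker volumes
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash   # reinstall Coolify on top of the restored data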
Here is the script, modified by o1, to handle an SSH key passphrase using ssh-agent:
#!/bin/bash
# This script will backup your Coolify instance and move everything to a new server. Docker volumes, Coolify database, and ssh keys
# 1. Script must run on the source server
# 2. Have all the containers running that you want to migrate
# Configuration - Modify as needed
sshKeyPath="$HOME/.ssh/your_private_key" # Key to destination server
destinationHost="server.example.com" # destination server IP or domain
# -- Shouldn't need to modify anything below --
backupSourceDir="/data/coolify/"
backupFileName="coolify_backup.tar.gz"
# Function to initialize ssh-agent and add the SSH key
initialize_ssh_agent() {
# Check if ssh-agent is already running
if [ -z "$SSH_AGENT_PID" ] || ! ps -p "$SSH_AGENT_PID" > /dev/null 2>&1; then
echo "π Starting ssh-agent..."
eval "$(ssh-agent -s)"
if [ $? -ne 0 ]; then
echo "β Failed to start ssh-agent"
exit 1
fi
echo "β
ssh-agent started"
else
echo "β
ssh-agent is already running"
fi
# Add the SSH key to the agent
echo "π Adding SSH key to ssh-agent"
ssh-add "$sshKeyPath"
if [ $? -ne 0 ]; then
echo "β Failed to add SSH key. Please ensure the passphrase is correct."
exit 1
fi
echo "β
SSH key added to ssh-agent"
}
# Initialize ssh-agent and add the SSH key
initialize_ssh_agent
# Check if the source directory exists
if [ ! -d "$backupSourceDir" ]; then
echo "β Source directory $backupSourceDir does not exist"
exit 1
fi
echo "β
Source directory exists"
# Check if the SSH key file exists
if [ ! -f "$sshKeyPath" ]; then
echo "β SSH key file $sshKeyPath does not exist"
exit 1
fi
echo "β
SSH key file exists"
# Check if we can SSH to the destination server, ignore "The authenticity of host can't be established." errors
if ! ssh -o "StrictHostKeyChecking no" -o "ConnectTimeout=5" root@"$destinationHost" "exit"; then
echo "β SSH connection to $destinationHost failed"
exit 1
fi
echo "β
SSH connection successful"
# Get the names of all running Docker containers
containerNames=$(docker ps --format '{{.Names}}')
# Initialize an empty string to hold the volume paths
volumePaths=""
# Loop over the container names
for containerName in $containerNames; do
# Get the volumes for the current container
volumeNames=$(docker inspect --format '{{range .Mounts}}{{.Name}}{{end}}' "$containerName")
# Loop over the volume names
for volumeName in $volumeNames; do
# Check if the volume name is not empty
if [ -n "$volumeName" ]; then
# Add the volume path to the volume paths string
volumePaths+=" /var/lib/docker/volumes/$volumeName"
fi
done
done
# Calculate the total size of the volumes
# shellcheck disable=SC2086
totalSize=$(du -csh $volumePaths 2>/dev/null | grep total | awk '{print $1}')
# Print the total size of the volumes
echo "β
Total size of volumes to migrate: $totalSize"
# Print size of backupSourceDir
backupSourceDirSize=$(du -csh "$backupSourceDir" 2>/dev/null | grep total | awk '{print $1}')
echo "β
Size of the source directory: $backupSourceDirSize"
# Check if the backup file already exists
if [ ! -f "$backupFileName" ]; then
echo "πΈ Backup file does not exist, creating"
# Recommend stopping docker before creating the backup
echo "πΈ It's recommended to stop all Docker containers before creating the backup"
read -rp "Do you want to stop Docker? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! systemctl stop docker; then
echo "β Docker stop failed"
exit 1
fi
echo "β
Docker stopped"
else
echo "πΈ Docker not stopped, continuing with the backup"
fi
# shellcheck disable=SC2086
if ! tar --exclude='*.sock' -Pczf "$backupFileName" -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" $volumePaths; then
echo "β Backup file creation failed"
exit 1
fi
echo "β
Backup file created"
else
echo "πΈ Backup file already exists, skipping creation"
fi
# Define the remote commands to be executed
remoteCommands="
# Check if Docker is a service
if systemctl is-active --quiet docker; then
# Stop Docker if it's a service
if ! systemctl stop docker; then
echo '❌ Docker stop failed';
exit 1;
fi
echo '✅ Docker stopped';
else
echo 'ℹ️ Docker is not a service, skipping stop command';
fi
echo '🔸 Saving existing authorized keys...';
cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys_backup;
echo '🔸 Extracting backup file...';
if ! tar -Pxzf - -C /; then
echo '❌ Backup file extraction failed';
exit 1;
fi
echo '✅ Backup file extracted';
echo '🔸 Merging authorized keys...';
cat ~/.ssh/authorized_keys_backup ~/.ssh/authorized_keys | sort | uniq > ~/.ssh/authorized_keys_temp;
mv ~/.ssh/authorized_keys_temp ~/.ssh/authorized_keys;
chmod 600 ~/.ssh/authorized_keys;
echo '✅ Authorized keys merged';
if ! curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash; then
echo '❌ Coolify installation failed';
exit 1;
fi
echo '✅ Coolify installed';
"
# SSH to the destination server, execute the remote commands
if ! ssh root@"$destinationHost" "$remoteCommands" < "$backupFileName"; then
echo "β Remote commands execution or Docker restart failed"
exit 1
fi
echo "β
Remote commands executed successfully"
# Clean up - Ask the user for confirmation before removing the local backup file
echo "Do you want to remove the local backup file? (y/n)"
read -r answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! rm -f "$backupFileName"; then
echo "β Failed to remove local backup file"
exit 1
fi
echo "β
Local backup file removed"
else
echo "πΈ Local backup file not removed"
fi
# Kill ssh-agent if it was started by this script
if [ -n "$SSH_AGENT_PID" ]; then
echo "π Stopping ssh-agent..."
eval "$(ssh-agent -k)"
echo "β
ssh-agent stopped"
fi
Thanks everyone for helping with this.
Thanks a lot , Great work
Thanks very much for everyone's input. Yesterday, I had to migrate a live self-hosted admin and spent the better part of a day trying different approaches. In the end, I found the variation provided by @AlejandroAkbal to be spot-on for my needs, and it worked on the first run.
Does this script work for v3? I am using Coolify v3 and I want to migrate it to a new VPS. I guess moving it to v4 isn't possible automatically, so does this work for v3?
Does this script work for 4.0.0 beta?
It works like a breeze! Thank you very much! I think Coolify should adopt this as an official extension.
I used Codeium Windsurf to create this version that moves a service from the main Coolify server to another server managed by Coolify.
Hope this is useful for someone else.
#!/bin/bash
# Exit on any error
set -e
# This script will backup specific running services identified by Coolify service uuid on the [Origin] Server (that is running Coolify) and move them to a [Destination] Server managed by Coolify Application
# You will need to manually collect and modify data from Coolify's Postgres Database (running in the container coolify-db) on the [Origin] Server.
# To do this you can install the NocoDB application on your [Origin] Server with the "Connect To Predefined Network" option activated.
# Then in NocoDB you can create a Postgres database connection to Host address "coolify-db" with the username and password you can find in Coolify UI Settings/Backup page.
# Finally in NocoDB you can create a data source using the connection you just created to see all the tables of the "coolify" database.
# PRIVATE_KEY_UUID can be found in the table "private_keys" and Coolify service uuid in the table "services"
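# Alternatively, you can usually query the coolify database directly from the [Origin] Server; the container
# name, user, database, and column names below are assumptions, so adjust them to your installation:
#   docker exec -it coolify-db psql -U coolify -d coolify -c 'SELECT uuid, name FROM services;'
#   docker exec -it coolify-db psql -U coolify -d coolify -c 'SELECT uuid, name FROM private_keys;'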
# Configuration - Modify as needed
sshKeyPath="/data/coolify/ssh/keys/ssh_key@PRIVATE_KEY_UUID" # PRIVATE KEY to connect to [Destination] Server. The corresponding Public Key should be added to authorized_keys on [Destination] Server
destinationHost="my.server.com" # [Destination] Server IP or Domain
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo "β [Origin] Please run as root"
exit 1
fi
# Check if service suffixes were provided
if [ $# -eq 0 ]; then
echo "β [Origin] Please provide at least one Coolify service uuid"
echo "[Origin] Usage: $0 <coolify_service_uuid1> [coolify_service_uuid2 ...]"
exit 1
fi
# Set up directories and filenames
backupDir="/data/coolify/services_backup"
destinationDir="/data/coolify/services_backup"
servicesBackupFile="services_backup.tar.gz"
volumesBackupFile="volumes_backup.tar.gz"
# Create backup directories if they don't exist
mkdir -p "$backupDir"
# Start ssh-agent and add key
echo "π [Origin] Starting ssh-agent..."
eval $(ssh-agent)
echo "β
[Origin] ssh-agent started"
echo "π [Origin] Adding SSH key to ssh-agent"
if ! ssh-add "$sshKeyPath"; then
echo "β [Origin] Failed to add SSH key to ssh-agent"
kill "$SSH_AGENT_PID"
exit 1
fi
echo "β
[Origin] SSH key added to ssh-agent"
# Test SSH connection
if ! ssh -o "StrictHostKeyChecking no" root@"$destinationHost" "exit"; then
echo "β SSH connection to Destination Server failed"
kill "$SSH_AGENT_PID"
exit 1
fi
echo "β
SSH connection to Destination Server successful"
# Ensure backup directory exists on destination server
echo "π [Destination] Creating backup directory..."
if ! ssh -o "StrictHostKeyChecking no" root@"$destinationHost" "mkdir -p '$destinationDir'"; then
echo "β [Destination] Failed to create backup directory"
exit 1
fi
echo "β
[Destination] Backup directory created"
# Process each service suffix to gather information
declare -a containers_to_stop=()
declare -A volume_dict
for service_suffix in "$@"; do
echo "π [Origin] Analyzing service: $service_suffix"
# Validate service suffix
if ! [[ $service_suffix =~ ^[a-zA-Z0-9_-]+$ ]]; then
echo "β [Origin] Invalid service suffix: $service_suffix. Only alphanumeric characters, hyphens, and underscores are allowed."
exit 1
fi
# Check service directory
serviceSourceDir="/data/coolify/services/${service_suffix}"
if [ ! -d "$serviceSourceDir" ]; then
echo "β [Origin] Service directory not found: $serviceSourceDir"
exit 1
fi
servicePaths+=" $serviceSourceDir"
echo "β
[Origin] Service directory found: $serviceSourceDir"
# Get container IDs for this service
container_ids=($(docker ps --filter "name=${service_suffix}" --format "{{.ID}}"))
if [ ${#container_ids[@]} -eq 0 ]; then
echo "β [Origin] No running containers found for service: $service_suffix"
exit 1
fi
all_container_ids+=("${container_ids[@]}")
containers_to_stop+=("${container_ids[@]}")
echo "β
[Origin] Found ${#container_ids[@]} container(s) for service: $service_suffix"
# Get volumes for containers
for container_id in "${container_ids[@]}"; do
# Verify container state
container_state=$(docker inspect --format '{{.State.Status}}' "$container_id")
if [ "$container_state" != "running" ]; then
echo "β [Origin] Container $container_id is not running (status: $container_state)"
exit 1
fi
# Get volumes and process them line by line
while IFS= read -r volumeName; do
if [ -n "$volumeName" ] && [ -z "${volume_dict[$volumeName]}" ]; then
volume_dict[$volumeName]=1
if [ ! -d "/var/lib/docker/volumes/$volumeName/_data" ]; then
echo "β [Origin] Volume directory not found: /var/lib/docker/volumes/$volumeName/_data"
exit 1
fi
volumePaths+=" /var/lib/docker/volumes/$volumeName"
echo " β
[Origin] Found volume: $volumeName"
fi
done < <(docker inspect --format='{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}}{{println}}{{end}}{{end}}' "$container_id")
done
done
# Calculate total size
total_size=0
for path in $servicePaths $volumePaths; do
size=$(du -sb "$path" 2>/dev/null | cut -f1)
total_size=$((total_size + size))
done
echo "β
[Origin] Total size to backup: $(numfmt --to=iec-i --suffix=B $total_size)"
# Create services backup first (without stopping containers)
echo "π [Origin] Creating services backup..."
echo "[Origin] Service paths to backup:"
for path in $servicePaths; do
echo " - $path"
done
services_list_file=$(mktemp)
for path in $servicePaths; do
echo "$path" >> "$services_list_file"
done
if ! tar --exclude='*.sock' -Pczf "${backupDir}/${servicesBackupFile}" -T "$services_list_file"; then
echo "β [Origin] Services backup file creation failed"
rm -f "$services_list_file"
exit 1
fi
rm -f "$services_list_file"
echo "β
[Origin] Services backup file created"
# Transfer services backup while containers are still running
echo "π Transferring services backup to Destination Server..."
if ! scp "${backupDir}/${servicesBackupFile}" "root@${destinationHost}:${destinationDir}/${servicesBackupFile}"; then
echo "β Transfer to Destination Server failed"
exit 1
fi
# Execute Phase 1 on destination (prepare volumes) while origin containers are still running
echo "π Executing Phase 1 (Services restoration and volume preparation)..."
phase1Commands="
# Check if backup file was transferred successfully
if [ ! -f '${destinationDir}/${servicesBackupFile}' ]; then
echo 'β [Destination] Services backup file not found after transfer'
echo '[Destination] Expected location: ${destinationDir}/${servicesBackupFile}'
ls -la '${destinationDir}'
exit 1
fi
echo '✅ [Destination] Services backup file found'
# Clean up existing volumes
echo '🔄 [Destination] Cleaning up existing volumes...'
for volume_name in \$(docker volume ls --format '{{.Name}}' | grep '^${@}'); do
echo \"Removing volume: \$volume_name\"
if ! docker volume rm -f \"\$volume_name\" 2>/dev/null; then
echo \"⚠️ [Destination] Could not remove volume \$volume_name, it might be in use\"
# Try to find and stop containers using this volume
containers=\$(docker ps -a --filter volume=\$volume_name --format '{{.ID}}')
if [ -n \"\$containers\" ]; then
echo \"Found containers using volume \$volume_name, stopping them...\"
echo \"\$containers\" | xargs docker rm -f
# Try removing the volume again
if ! docker volume rm -f \"\$volume_name\"; then
echo \"β [Destination] Failed to remove volume \$volume_name even after stopping containers\"
exit 1
fi
fi
fi
echo \"β
[Destination] Removed volume: \$volume_name\"
done
echo 'β
[Destination] Volume cleanup completed'
# Extract services backup
echo 'π [Destination] Extracting services backup...'
if ! tar -Pxzf '${destinationDir}/${servicesBackupFile}' -C /; then
echo 'β [Destination] Services backup extraction failed'
exit 1
fi
echo '✅ [Destination] Services backup extracted'
# Verify service paths
echo '🔄 [Destination] Verifying service paths...'
for service_path in $servicePaths; do
if [ ! -d \"\$service_path\" ]; then
echo \"❌ [Destination] Service directory not found after extraction: \$service_path\"
exit 1
fi
if [ ! -f \"\$service_path/docker-compose.yml\" ]; then
echo \"β [Destination] docker-compose.yml not found in: \$service_path\"
exit 1
fi
echo \"β
[Destination] Found service directory and compose file: \$service_path\"
done
# Create Docker networks for each service
echo '🔄 [Destination] Creating Docker networks...'
for suffix in ${@}; do
echo \"Creating network for \$suffix\"
network_name=\"\$suffix\"
if docker network inspect \"\$network_name\" >/dev/null 2>&1; then
echo \"⚠️ [Destination] Network \$network_name already exists, skipping\"
else
echo \"Creating network \$network_name\"
if ! docker network create \"\$network_name\"; then
echo \"❌ [Destination] Failed to create network \$network_name\"
exit 1
else
echo \"✅ [Destination] Created network \$network_name\"
fi
fi
done
echo '✅ [Destination] Network setup completed'
# Start containers to create volumes
echo '🔄 [Destination] Starting containers to initialize volumes...'
for service_path in $servicePaths; do
echo \"Starting containers in: \$service_path\"
cd \"\$service_path\"
if ! docker compose up -d; then
echo \"❌ [Destination] Failed to start containers in: \$service_path\"
exit 1
fi
done
# Wait for containers to be running
echo '🔄 [Destination] Waiting for containers to initialize...'
sleep 30 # Give containers time to start
# Stop all containers
echo '🔄 [Destination] Stopping containers...'
for service_path in $servicePaths; do
echo \"Stopping containers in: \$service_path\"
cd \"\$service_path\"
if ! docker compose down; then
echo \"β [Destination] Failed to stop containers in: \$service_path\"
exit 1
fi
done
echo 'β
[Destination] All containers stopped'
# Cleanup services backup
rm -f '${destinationDir}/${servicesBackupFile}'
"
if ! ssh -o "StrictHostKeyChecking no" root@"$destinationHost" "$phase1Commands"; then
echo "β Phase 1 remote commands execution failed"
exit 1
fi
echo "β
Phase 1 completed successfully"
# Now handle volume backup (requires stopping containers)
echo "π [Origin] It's recommended to stop the service containers before creating the volume backup"
read -p "[Origin] Do you want to stop the containers? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
# Stop containers
echo "π [Origin] Stopping service containers..."
for container_id in "${containers_to_stop[@]}"; do
container_name=$(docker inspect --format '{{.Name}}' "$container_id" | sed 's/^\///')
echo "π [Origin] Stopping container: $container_name"
echo "$container_id"
if ! docker stop "$container_id"; then
echo "β [Origin] Failed to stop container: $container_name"
exit 1
fi
done
echo "β
[Origin] All service containers stopped"
fi
# Create volumes backup
echo "π [Origin] Creating volumes backup..."
echo "[Origin] Volume paths to backup:"
for path in $volumePaths; do
echo " - $path"
done
volumes_list_file=$(mktemp)
for path in $volumePaths; do
echo "$path" >> "$volumes_list_file"
done
if ! tar --exclude='*.sock' -Pczf "${backupDir}/${volumesBackupFile}" -T "$volumes_list_file"; then
echo "β [Origin] Volumes backup file creation failed"
rm -f "$volumes_list_file"
exit 1
fi
rm -f "$volumes_list_file"
echo "β
[Origin] Volumes backup file created"
# Start containers back up on origin server if they were stopped
if [[ "$answer" =~ ^[Yy]$ ]]; then
echo "π [Origin] Starting service containers..."
for container_id in "${containers_to_stop[@]}"; do
container_name=$(docker inspect --format '{{.Name}}' "$container_id" | sed 's/^\///')
echo "[Origin] Starting container: $container_name"
if ! docker start "$container_id"; then
echo "β οΈ [Origin] Failed to start container: $container_name"
fi
done
# Wait for containers to be running
echo "π [Origin] Waiting for containers to be ready..."
sleep 10 # Give containers time to initialize
# Verify containers are running
for container_id in "${containers_to_stop[@]}"; do
container_name=$(docker inspect --format '{{.Name}}' "$container_id" | sed 's/^\///')
if ! docker ps --format '{{.Names}}' | grep -q "^${container_name}$"; then
echo "β οΈ [Origin] Container $container_name did not restart"
else
echo "β
[Origin] Container $container_name is running"
fi
done
fi
# Transfer volumes backup after containers are back up
echo "π Transferring volumes backup to Destination Server..."
if ! scp "${backupDir}/${volumesBackupFile}" "root@${destinationHost}:${destinationDir}/${volumesBackupFile}"; then
echo "β Transfer to Destination Server failed"
exit 1
fi
# Execute Phase 2 on destination (restore volume data)
echo "π Executing Phase 2 (Volume data restoration)..."
phase2Commands="
# Check if backup file was transferred successfully
if [ ! -f '${destinationDir}/${volumesBackupFile}' ]; then
echo 'β [Destination] Volumes backup file not found after transfer'
echo '[Destination] Expected location: ${destinationDir}/${volumesBackupFile}'
ls -la '${destinationDir}'
exit 1
fi
echo '✅ [Destination] Volumes backup file found'
# Extract volumes backup
echo '🔄 [Destination] Extracting volumes backup...'
if ! tar -Pxzf '${destinationDir}/${volumesBackupFile}' -C /; then
echo '❌ [Destination] Volumes backup extraction failed'
exit 1
fi
echo '✅ [Destination] Volumes backup extracted'
# Verify volume paths
echo '🔄 [Destination] Verifying volume data...'
for volume_path in $volumePaths; do
if [ ! -d \"\$volume_path/_data\" ]; then
echo \"❌ [Destination] Volume data directory not found: \$volume_path/_data\"
exit 1
fi
if [ -z \"\$(ls -A \"\$volume_path/_data\" 2>/dev/null)\" ]; then
echo \"β οΈ [Destination] Warning: Volume directory is empty: \$volume_path/_data\"
else
echo \"β
[Destination] Found volume data: \$volume_path/_data\"
fi
done
# Start containers with restored data
echo '🔄 [Destination] Starting containers with restored data...'
for service_path in $servicePaths; do
echo \"[Destination] Starting containers in: \$service_path\"
cd \"\$service_path\"
if ! docker compose up -d; then
echo \"❌ [Destination] Failed to start containers in: \$service_path\"
exit 1
fi
done
# Wait for containers to be running
echo '🔄 [Destination] Waiting for containers to initialize...'
sleep 30 # Give containers time to start
# Verify containers are running
echo '🔄 [Destination] Verifying container status...'
for service_path in $servicePaths; do
cd \"\$service_path\"
if ! docker compose ps | grep -q 'Up'; then
echo \"β οΈ [Destination] Warning: Some containers may not be running in: \$service_path\"
docker compose ps
else
echo \"β
[Destination] Containers are running in: \$service_path\"
fi
done
# Final shutdown of containers
echo '🔄 [Destination] Shutting down containers...'
for service_path in $servicePaths; do
echo \"[Destination] Stopping containers in: \$service_path\"
cd \"\$service_path\"
if ! docker compose down; then
echo \"⚠️ [Destination] Warning: Failed to stop containers in \$service_path\"
else
echo \"✅ [Destination] Stopped containers in \$service_path\"
fi
done
# Cleanup volumes backup
rm -f '${destinationDir}/${volumesBackupFile}'
# Create network names list for summary
network_names=""
for suffix in ${@}; do
network_names+=\"\$suffix \"
done
echo '
🧹 [Destination] Cleanup Summary:
- Removed temporary file: ${destinationDir}/${volumesBackupFile}
- Stopped all service containers
- Networks remain for future use: '\"\${network_names}\"'
- Volumes remain intact for future use
'
"
if ! ssh -o "StrictHostKeyChecking no" root@"$destinationHost" "$phase2Commands"; then
echo "β Phase 2 remote commands execution failed"
exit 1
fi
echo "β
Phase 2 completed successfully"
# Clean up local backup
rm -rf "$backupDir"
# Clean up ssh-agent
kill "$SSH_AGENT_PID"
echo "
✅ Service transfer completed successfully
🧹 [Origin] Cleanup Summary:
- Removed temporary backup directory: $backupDir
- Terminated SSH agent (PID: $SSH_AGENT_PID)
- All containers are running
- Original volumes remain intact
🧹 [Destination] Cleanup Summary:
- Service files transferred and extracted
- Volumes transferred and restored
- All containers verified and stopped
- Temporary files cleaned up
- Networks remain for future use: $(echo "$@")
- Volumes ready for use
💡 Next Steps:
Assuming you manage your [Origin] and [Destination] servers with the same Coolify Application
- Create a test application on the [Destination] server, this will be used to get specific values for 'environment_id', 'server_id', 'destination_id'
- In Coolify UI, for each service you have transferred: Stop the service => It will be stopped on the [Origin] Server
- Manual intervention required in Coolify Postgres Database on [Origin] Server
- In the database 'coolify', table 'services', find the lines for the services you have transferred
- Update the values of columns 'environment_id', 'server_id', 'destination_id' with the same values as the test application you created directly on the [Destination] server
- In Coolify UI, for each service you have transferred: Deploy the service => It will be deployed on the [Destination] Server with all your persisted data
- Monitor logs for any issues during startup
- Please note that cleanup of unused Docker volumes and service folders on the [Origin] Server is not done by this script for security reasons
"
Here's a modified version of @AspireOne's ssh-agent version. It fixes the volume names being run together without a separator when extracting them from a container.
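For reference, the change is in the docker inspect format string: the earlier version concatenated the volume names with nothing between them, while this one prints them newline-separated so they can be read line by line.
docker inspect --format '{{range .Mounts}}{{.Name}}{{end}}' "$containerName"                # old: names run together
docker inspect --format '{{range .Mounts}}{{.Name}}{{print "\n"}}{{end}}' "$containerName"  # fixed: one name per line
The full script: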
#!/bin/bash
# This script will backup your Coolify instance and move everything to a new server. Docker volumes, Coolify database, and ssh keys
# 1. Script must run on the source server
# 2. Have all the containers running that you want to migrate
# Configuration - Modify as needed
sshKeyPath="$HOME/.ssh/your_private_key" # Key to destination server
destinationHost="server.example.com" # destination server IP or domain
# -- Shouldn't need to modify anything below --
backupSourceDir="/data/coolify/"
backupFileName="coolify_backup.tar.gz"
# Function to initialize ssh-agent and add the SSH key
initialize_ssh_agent() {
# Check if ssh-agent is already running
if [ -z "$SSH_AGENT_PID" ] || ! ps -p "$SSH_AGENT_PID" > /dev/null 2>&1; then
echo "π Starting ssh-agent..."
eval "$(ssh-agent -s)"
if [ $? -ne 0 ]; then
echo "β Failed to start ssh-agent"
exit 1
fi
echo "β
ssh-agent started"
else
echo "β
ssh-agent is already running"
fi
# Add the SSH key to the agent
echo "π Adding SSH key to ssh-agent"
ssh-add "$sshKeyPath"
if [ $? -ne 0 ]; then
echo "β Failed to add SSH key. Please ensure the passphrase is correct."
exit 1
fi
echo "β
SSH key added to ssh-agent"
}
# Initialize ssh-agent and add the SSH key
initialize_ssh_agent
# Check if the source directory exists
if [ ! -d "$backupSourceDir" ]; then
echo "β Source directory $backupSourceDir does not exist"
exit 1
fi
echo "β
Source directory exists"
# Check if the SSH key file exists
if [ ! -f "$sshKeyPath" ]; then
echo "β SSH key file $sshKeyPath does not exist"
exit 1
fi
echo "β
SSH key file exists"
# Check if we can SSH to the destination server, ignore "The authenticity of host can't be established." errors
if ! ssh -o "StrictHostKeyChecking no" -o "ConnectTimeout=5" root@"$destinationHost" "exit"; then
echo "β SSH connection to $destinationHost failed"
exit 1
fi
echo "β
SSH connection successful"
# Get the names of all running Docker containers
containerNames=$(docker ps --format '{{.Names}}')
# Initialize an empty string to hold the volume paths
volumePaths=""
for containerName in $containerNames; do
# Use a delimiter to separate the volume names
volumeNames=$(docker inspect --format '{{range .Mounts}}{{.Name}}{{print "\n"}}{{end}}' "$containerName")
# Now, we process each line (volume name) from the output
while IFS= read -r volumeName; do
# Check if the volumeName is not empty
if [ -n "$volumeName" ]; then
echo "Adding path: /var/lib/docker/volumes/$volumeName"
volumePaths="$volumePaths /var/lib/docker/volumes/$volumeName"
fi
done <<< "$volumeNames"
done
echo "Final volumePaths: $volumePaths"
# Calculate the total size of the volumes
# shellcheck disable=SC2086
totalSize=$(du -csh $volumePaths 2>/dev/null | grep total | awk '{print $1}')
# Print the total size of the volumes
echo "β
Total size of volumes to migrate: $totalSize"
# Print size of backupSourceDir
backupSourceDirSize=$(du -csh "$backupSourceDir" 2>/dev/null | grep total | awk '{print $1}')
echo "β
Size of the source directory: $backupSourceDirSize"
# Check if the backup file already exists
if [ ! -f "$backupFileName" ]; then
echo "πΈ Backup file does not exist, creating"
# Recommend stopping docker before creating the backup
echo "πΈ It's recommended to stop all Docker containers before creating the backup"
read -rp "Do you want to stop Docker? (y/n): " answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! systemctl stop docker; then
echo "β Docker stop failed"
exit 1
fi
echo "β
Docker stopped"
else
echo "πΈ Docker not stopped, continuing with the backup"
fi
# shellcheck disable=SC2086
if ! tar --exclude='*.sock' -Pczf "$backupFileName" -C / "$backupSourceDir" "$HOME/.ssh/authorized_keys" $volumePaths; then
echo "β Backup file creation failed"
exit 1
fi
echo "β
Backup file created"
else
echo "πΈ Backup file already exists, skipping creation"
fi
# Define the remote commands to be executed
remoteCommands="
# Check if Docker is a service
if systemctl is-active --quiet docker; then
# Stop Docker if it's a service
if ! systemctl stop docker; then
echo '❌ Docker stop failed';
exit 1;
fi
echo '✅ Docker stopped';
else
echo 'ℹ️ Docker is not a service, skipping stop command';
fi
echo '🔸 Saving existing authorized keys...';
cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys_backup;
echo '🔸 Extracting backup file...';
if ! tar -Pxzf - -C /; then
echo '❌ Backup file extraction failed';
exit 1;
fi
echo '✅ Backup file extracted';
echo '🔸 Merging authorized keys...';
cat ~/.ssh/authorized_keys_backup ~/.ssh/authorized_keys | sort | uniq > ~/.ssh/authorized_keys_temp;
mv ~/.ssh/authorized_keys_temp ~/.ssh/authorized_keys;
chmod 600 ~/.ssh/authorized_keys;
echo '✅ Authorized keys merged';
if ! curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash; then
echo '❌ Coolify installation failed';
exit 1;
fi
echo '✅ Coolify installed';
"
# SSH to the destination server, execute the remote commands
if ! ssh root@"$destinationHost" "$remoteCommands" < "$backupFileName"; then
echo "β Remote commands execution or Docker restart failed"
exit 1
fi
echo "β
Remote commands executed successfully"
# Clean up - Ask the user for confirmation before removing the local backup file
echo "Do you want to remove the local backup file? (y/n)"
read -r answer
if [[ "$answer" =~ ^[Yy]$ ]]; then
if ! rm -f "$backupFileName"; then
echo "β Failed to remove local backup file"
exit 1
fi
echo "β
Local backup file removed"
else
echo "πΈ Local backup file not removed"
fi
# Kill ssh-agent if it was started by this script
if [ -n "$SSH_AGENT_PID" ]; then
echo "π Stopping ssh-agent..."
eval "$(ssh-agent -k)"
echo "β
ssh-agent stopped"
fi
At this point we're ironically pummeling this with unmanaged versions on a version control platform :P
Time to make this a repo and accept PRs? I could do it but it's only right that you do it @Geczy considering you took the initiative :)
Good idea, I've made the repo here! https://github.com/Geczy/coolify-migration
Also updated this gist with a readme linking to the above
Thanks everyone for your contributions! Feel free to open a PR to manage these wonderful changes that are being suggested.
Just wanted to say thanks for this perfect script.
Great script, thanks.
I created a Coolify DB restore for when backing up locally.
Long term, we need a baseline CLI, similar to what Plesk provides, for installation, updates, and recovery.
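For anyone wanting to do something similar, a hedged sketch of restoring a locally stored Coolify database dump (file name, container name, and credentials are assumptions; this presumes a pg_dump custom-format .dmp like the one discussed earlier in the thread):
docker cp ./coolify-db-backup.dmp coolify-db:/tmp/restore.dmp
docker exec coolify-db pg_restore --clean --if-exists -U coolify -d coolify /tmp/restore.dmp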
Also reporting that the original script works perfectly with the expected setup on v4.0.0-beta.406
Thanks a lot for the script