
@Jip-Hop
Created February 28, 2021 17:01
Autorun Synology Hyper Backup and Integrity Check with Email Notifications
#!/bin/bash
# This script is to be used in combination with Synology Autorun:
# - https://github.com/reidemei/synology-autorun
# - https://github.com/Jip-Hop/synology-autorun
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /usr/syno/etc/synobackup.conf
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night)
# But beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup completes, the integrity check will start.
# Unfortunately in DSM you can't choose to receive email notifications of the integrity check results.
# So there's a little workaround, at the end of this script, to send an (email) notification.
# The results of the integrity check are taken from the synobackup.log file.
#
# In DSM -> Control Panel -> Notification I enabled email notifications,
# I changed its Subject to %TITLE% and the content to:
# Dear user,
#
# Integrity check for %TASK_NAME% is done.
#
# %OUTPUT%
#
# This way I receive email notifications with the results of the Integrity Check.
#
# Credits:
# - https://github.com/Jip-Hop
# - https://bernd.distler.ws/archives/1835-Synology-automatische-Datensicherung-mit-DSM6.html
# - https://www.beatificabytes.be/send-custom-notifications-from-scripts-running-on-a-synology-new/
task_id=6 # Hyper Backup task id, get it with command: more /usr/syno/etc/synobackup.conf
task_name="USB3 3TB Seagate" # Only used for the notification
/bin/echo 2 > /dev/ttyS1 # Beep on start
startTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
device=$2 # e.g. sde1, passed to this script as second argument
# Backup
/usr/syno/bin/synobackup --backup $task_id --type image
while sleep 60 && /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device
do
:
done
# Check integrity
/var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -t -f -g
# Wait a bit before detect_monitor is up and running
sleep 60
# Wait until check is finished, poll every 60 seconds
/var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -p 60
# Send results of integrity check via email (from last lines of log file)
IFS=''
output=""
title=
NL=$'\n'
while read line
do
# Compute the seconds since epoch for the start date and time
t1=$(date --date="$startTime" +%s)
# Date and time in log line (second column)
dt2=$(echo "$line" | cut -d$'\t' -f2)
# Compute the seconds since epoch for log line date and time
t2=$(date --date="$dt2" +%s)
# Compute the difference in dates in seconds
let "tDiff=$t2-$t1"
# echo "Approx diff b/w $startTime & $dt2 = $tDiff"
# Stop reading log lines from before the startTime
if [[ "$tDiff" -lt 0 ]]; then
break
fi
text=`echo "$line" | cut -d$'\t' -f4`
# Get rid of [Local] prefix
text=`echo "$text" | sed 's/\[Local\]//'`
if [ -z "${title}" ]; then
title=$text
fi
output="$output${NL}$text"
done <<< "$(tac /var/log/synolog/synobackup.log)"
# Hijack the ShareSyncError event to send custom message.
# This event is free to reuse because I don't use the Shared Folder Sync (rsync) feature.
# More info on sending custom (email) notifications: https://www.beatificabytes.be/send-custom-notifications-from-scripts-running-on-a-synology-new/
/usr/syno/bin/synonotify "ShareSyncError" "{\"%OUTPUT%\": \"${output}\", \"%TITLE%\": \"${title}\", \"%TASK_NAME%\": \"${task_name}\"}"
# Sleep a bit more before unmounting the disk
sleep 60
# Unmount the disk
exit 100
@gersteba

gersteba commented Aug 7, 2023

Hi!

I finally found a chance to try your script.

It succeeds until the integrity check (lines 53 and 57). There I get a 'Permission denied' error. I guess this happens because I have password-protected the backup.
Do you have a hint for how I can adapt these lines to pass the password along as well?

And what about line 42:
Do I have to replace the '$2' with anything?

Thank you very much in advance for any hint!

Best regards,
Gerald

@Jip-Hop

Jip-Hop commented Aug 7, 2023

I'd try to run the /var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -t -f -g command manually from a terminal and experiment with how to get it working. You'd need to specify the task_id manually in this case, of course. If you get it working, you can adapt the script accordingly.

There's no need to set the $2 on line 42 to anything when using the script in combination with synology-autorun. It will pass several arguments to autorun.sh.
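For example, a quick manual test could look like this (just a sketch; 6 is a placeholder task id, substitute your own, and run it with sudo since the tool needs root rights):

sudo /var/packages/HyperBackup/target/bin/detect_monitor -k 6 -t -f -g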

@gersteba

gersteba commented Aug 7, 2023

Thank you for your fast response!

When I run sudo /var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -t -f -g manually in the terminal, I am asked for the password. But how can I run this command within the script and pass the password automatically?

I am not using autorun - isn't it the case that it no longer works with DSM 7? (see https://bernd.distler.ws/archives/1835-Synology-automatische-Datensicherung-mit-DSM6.html)
Instead I am running the script using the Task Scheduler. Does anything in your script need to be adapted for that?

@Jip-Hop

Jip-Hop commented Aug 7, 2023

Hyper Backup jobs can already be scheduled from the Synology GUI, right? Maybe it's easier to do that.

autorun.sh was made specifically to be used in combination with synology-autorun. If you're not using that, you'd somehow have to pass the device name of the backup disk as the second argument to autorun.sh. But I don't think hardcoding this value will work well, as it may differ depending on the number of disks and the order in which they were attached...
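If you wanted to avoid hardcoding, something along these lines might work as a rough sketch (it assumes the USB share is always mounted at /volumeUSB1/usbshare; adjust the mount point for your setup):

# Look up the partition backing the USB share and strip the /dev/ prefix, e.g. sde1
device=$(df /volumeUSB1/usbshare | awk 'NR==2 {print $1}' | sed 's|^/dev/||')
echo "Using backup device: $device"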

@gersteba

gersteba commented Aug 7, 2023

Unfortunately Hyper Backup still does not support running these steps one after the other:

  1. Password-encrypted backup to external USB drive
  2. Integrity check
  3. Unmount external USB drive

For this I'm still looking for a scripting solution.

I'm just afraid I have too little experience in scripting to achieve that.
But your script makes me confident that it is possible.
Do you have a hint for how to automatically pass/enter the password needed for the integrity check?

@Jip-Hop

Jip-Hop commented Aug 7, 2023

My use case was similar (encrypted backup) and I didn't need to enter/pass the encryption password. Since Hyper Backup allows scheduling encrypted backups, the encryption password has to be stored on the Synology NAS itself (so there should be no need to enter/pass it again).

I'd ask for help in a Synology user forum if I were you. I can't be of further help since I no longer own Synology devices. Good luck!

@gersteba

gersteba commented Aug 7, 2023

Thank you anyway for taking some of your time for me 🤗

EDIT:
Just in case you're interested: the problem was not the encryption password but the owner of the script. As soon as I changed the owner to root, the 'Permission denied' error no longer appeared.
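For reference, that change amounts to something like this (the script path is only an example):

chown root:root /volume1/Scripts/autobackup.sh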

EDIT 2:
DSM now has the Task Scheduler, which makes it even easier to send notification emails.
Therefore I adapted your script a bit so that /usr/syno/bin/synonotify is no longer needed.

It works like a charm - thank you for all your input!

@Jip-Hop

Jip-Hop commented Aug 11, 2023

Great job! Glad to hear you got it working. 😄

@gersteba

gersteba commented Oct 5, 2023

Hello!

The backup automation is still working fine.
I just noticed something strange concerning the integrity check:
It seems as if two integrity checks are running at the same time.

Here is the log:

2023/10/01 02:21:57  [MyBackup] Backup integrity check is finished. No error was found.
2023/10/01 02:21:05  [MyBackup] Backup integrity check is finished. No error was found.
2023/10/01 01:31:57  [MyBackup] Data integrity check finished. 2684.8 GB data checked this time, accounting for 100.0% of the total data (2684.8 GB data in total, 100.0% checked already).
2023/10/01 01:31:16  [MyBackup] Data integrity check finished. 2684.8 GB data checked this time, accounting for 100.0% of the total data (2684.8 GB data in total, 100.0% checked already).
2023/09/30 16:01:55  [MyBackup] Backup integrity check has started.
2023/09/30 16:01:34  [usbshare1] Version rotation completed from ID [MyBackup.hbk].
2023/09/30 16:01:34  [usbshare1] Rotate version [2023-09-02 02:00:13] from ID [MyBackup.hbk].
2023/09/30 14:10:25  [usbshare1] Version rotation started from ID [MyBackup.hbk].
2023/09/30 14:10:21  [MyBackup] Trigger version rotation.
2023/09/30 14:10:20  [MyBackup] Backup task finished successfully. [596833 files scanned] [35 new files] [7 files modified] [596791 files unchanged]
2023/09/30 02:00:18  [MyBackup] Backup task started.

The integrity check now takes much longer than it did before I used the automation script.

Here is the script I am using, based on yours:

task_id=12 # Hyper Backup task id, get it with command: more /usr/syno/etc/synobackup.conf

startTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time

USBDRV=/volumeUSB1/usbshare
device=sdq1

# Backup
/usr/syno/bin/synobackup --backup $task_id --type image

while sleep 60 && /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device
do
    :
done

# Check integrity
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard
# Wait a bit before detect_monitor is up and running
sleep 60
# Wait until check is finished, poll every 60 seconds
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --polling 60

# Get results of integrity check (from last lines of log file)

IFS=''
output=""
NL=$'\n'

while read line
do
    
    # Compute the seconds since epoch for the start date and time
    t1=$(date --date="$startTime" +%s)
    
    # Date and time in log line (second column)
    dt2=$(echo "$line" | cut -d$'\t' -f2)
    # Compute the seconds since epoch for log line date and time
    t2=$(date --date="$dt2" +%s)
    
    # Compute the difference in dates in seconds
    let "tDiff=$t2-$t1"
    
    # echo "Approx diff b/w $startTime & $dt2 = $tDiff"
    
    # Stop reading log lines from before the startTime
    if [[ "$tDiff" -lt 0 ]]; then
        break
    fi
    
    text=`echo "$line" | cut -d$'\t' -f4`
    # Get rid of [Local] prefix
    text=`echo "$text" | sed 's/\[Local\]//'`
    # Add date and time
	text=`echo "${dt2}  ${text}"`
    
    output="$output${NL}$text"
    
done <<<$(tac /var/log/synolog/synobackup.log)

echo "${NL}$output${NL}"

# Sleep a bit more before unmounting the disk
sleep 60

# Unmount the disk
sync
sleep 10
umount $USBDRV
/usr/syno/bin/synousbdisk -umount $device; >/tmp/usbtab

exit 0

Do you have any idea how I can find out why the integrity check is producing duplicate log lines?

@Jip-Hop

Jip-Hop commented Oct 5, 2023

Sorry, I have no idea and I no longer own any Synology hardware.

@JimMeLad

JimMeLad commented Nov 8, 2023

I think the double log entry issue might be down to the double call to 'detect_monitor' in the script.

On my system (DSM 7.2, HB 4.1.?), if I launch the HB integrity check from the GUI I get these processes running:

/var/packages/HyperBackup/target/bin/synoimgbkptool -r /volumeUSB1/usbshare -t .hbk -k 6 -z i -T 1699459407
/var/packages/HyperBackup/target/bin/detect_monitor -k 6 -tm
/var/packages/HyperBackup/target/bin/detect_monitor -k 6 -p 3

So there are already two active 'detect_monitor' instances, and the additional call in the script above adds at least one more.

If the second call to 'detect_monitor' is replaced by something that waits for one of the three processes above to finish before proceeding to unmount the disk then I only get one set of log entries as expected.

There may be some combination of options passed to 'detect_monitor' that would make the code in the script above work without doubling up the logs, but in the absence of any documentation that I know of, I can't really be sure.
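For reference, a minimal sketch of that kind of replacement, assuming synoimgbkptool from the process list above is the one doing the actual check:

# Wait until the integrity check worker has exited, polling every 60 seconds
while pidof -x /var/packages/HyperBackup/target/bin/synoimgbkptool > /dev/null
do
    sleep 60
done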

@gersteba

gersteba commented Dec 8, 2023

Hi!

Thank you for your thoughts.
To be sure I understand you right:

I use these lines to start the integrity check and then wait for it to finish:

# Check integrity
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard
# Wait a bit before detect_monitor is up and running
sleep 60
# Wait until check is finished, poll every 60 seconds
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --polling 60

Which one of the detect_monitor calls should be replaced?

@JimMeLad

JimMeLad commented Dec 8, 2023

Hi,
The second call is the one I don't use. I have instead chosen to wait until the 'synoimgbkptool' process is no longer running before moving on.

However, that's not the end of the story, because the 'dsmbackup --running-on-dev' command reports that it is finished BUT, if version rotation is configured, it then starts and can take some time to complete.

If the integrity check is allowed to start whilst the version rotation is active it will fail, so an additional wait needs to be factored in between the end of the backup and the start of the integrity check.

I’ve done some extensive testing on this and I’m sort of coming to the conclusion that without access to proper technical documentation, there’s likely to be something that will trip you up.

It’s up to you to decide how critical these likely failures are to you. I’ve decided to try and optimise my task timings and take the occasional failure.

@gersteba

Hi!
Thank you for your input.

Actually the second call of detect_monitor is just polling the progress for the integrity check every 60 seconds.
But I will give this code a try instead of the second detect_monitor call:

while pidof /var/packages/HyperBackup/target/bin/detect_monitor > /dev/null
do
	sleep 60
done

What is also confusing:
there is just one log entry "Backup integrity check has started.", but two each of "Data integrity check finished. ..." and "Backup integrity check is finished. ...".
Can anyone explain this?

@JimMeLad

Hi,
Yes, I realise that the second 'detect_monitor' call is polling for progress, but I believe that it is also writing log entries, just like the first 'detect_monitor' call.

I have chosen instead to monitor the 'synoimgbkptool' command but the result should be the same.

The issue is, I think, that we're trying to use some internal commands written by Synology that are not intended for public consumption, so consequently we have no access to the documentation.

As I said in my previous post, the trial-and-error approach is just taking too much time, as there appears to be no easy way to determine the process dependencies other than by observation. My integrity check takes over 6 hours to complete, so I have got bored trying to second-guess what Hyper Backup is going to do next.

@gersteba

gersteba commented Dec 13, 2023

Wouldn't I just need these additional lines between the backup and integrity check calls to wait for the rotation to finish:

# Wait until version rotation is finished, poll every 60 seconds
sleep 60
while pidof /var/packages/HyperBackup/target/bin/synoimgbkptool > /dev/null
do
	sleep 60
done

And is synoimgbkptool also running during the integrity check?

@JimMeLad

I don't know what process runs the rotation, it could be 'synoimgbkptool' but I'm not sure.
The other problem is the atomic nature of these processes. Let's say that you have monitored the backup and can see that it has finished. If you move straight on to look for the version rotation process how long do you wait for it to start before you look for it? If you look straight away, it might not yet be running so if you now move to the integrity check then it will fail once rotation starts.
If you decide to wait for the rotation task, how long do you wait for? If you wait too long then the rotation may start and end whilst you're waiting in which case your process will stall whilst you wait for a process to end that has already ended.

These are just some of the issues I can think of; without access to proper documentation, I have a feeling that whatever I write will end up too unreliable for my liking where backups are concerned. I've decided to try and time things so the tasks don't collide, and I'll have to accept the occasional failure if some task overruns its time slot.

I guess you could monitor the backup log for the relevant activity entries but that too seems a bit clunky.
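A very rough sketch of that log-watching idea (it assumes the English log text seen elsewhere in this thread, and it would wait forever if no rotation happens for a given run, so it really is only an illustration of how clunky this gets):

# Poll the backup log until a rotation-completed entry shows up (illustration only;
# an old entry from a previous run could also match)
while ! tail -n 20 /var/log/synolog/synobackup.log | grep -q "Version rotation completed"
do
    sleep 60
done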

@gersteba

gersteba commented Dec 20, 2023

Hi!
Thank you very much for your input.
I think I figured it out now and cleaned up the whole script.
It now also writes some entries of its own to the synobackup.log file, so the same lines appear in the log as well as in the email notification.

#!/bin/sh

# This script is to be called by a Scheduler task as root user,
# having 'Run command / User-defined script' filled in with your script's path.
# i. e. /bin/bash /volume1/Scripts/autobackup.sh
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /var/packages/HyperBackup/etc/synobackup.conf
# You also need to change the location of USB device and name of the block device associated with 
# the filesystem partition on the USB disk. Find out with command 'df' having the USB device attached.
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night)
# But beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup and the version rotation complete, the integrity check will start. 
# If you like to receive the log entries in an e-mail after this script finished,
# check 'Send run details by email' and fill in 'Email' in the Scheduler task settings.
#
# Tested with DSM 7.2-64570 Update 3 and Hyper Backup 4.1.0-3718.
#
# Credits:
# - https://github.com/Jip-Hop
# - https://bernd.distler.ws/archives/1835-Synology-automatische-Datensicherung-mit-DSM6.html
# - https://derwebprogrammierer.at/

task_id=12 # Hyper Backup task id
task_name="[Omnia Auto-Backup]" # Only used for log entries

# Location of USB device and name of the block device associated with the filesystem partition on the USB disk. Find out with command 'df'.
USBDRV=/volumeUSB1/usbshare # See column 'Mounted on' in df result
device=sdq1 # See column 'Filesystem' in df result

/bin/echo 2 > /dev/ttyS1 # Beep on start

startTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${startTime}\tSYSTEM:\t${task_name} Started." >> /var/log/synolog/synobackup.log

# Backup - Begin
/usr/syno/bin/synobackup --backup $task_id --type image
# Wait until backup is finished, poll every 60 seconds
while sleep 60 && /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device
do
    :
done
# Backup - End

## Version rotation - Begin
# Wait until version rotation is finished, poll every 60 seconds
while sleep 60 && [ "$(pidof /var/packages/HyperBackup/target/bin/synoimgbkptool)" != "" ];
do
    :
done
## Version rotation - End

## Check integrity - Begin
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard

# Wait until check is finished, poll every 60 seconds
#/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --polling 60   # Produces double log entries
while sleep 60 && [ "$(pidof /var/packages/HyperBackup/target/bin/detect_monitor)" != "" ];
do
    :
done
## Check integrity - End

# Sleep a bit more before unmounting the disk
sleep 60

## Unmount USB device - Begin
sync
sleep 10
umount $USBDRV
umountResult=$(/usr/syno/bin/synousbdisk -umount $device; >/tmp/usbtab)
currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${umountResult}" >> /var/log/synolog/synobackup.log
## Unmount USB device - End

currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Finished." >> /var/log/synolog/synobackup.log

## Get results of auto backup (from last lines of log file) - Begin
IFS=''
output=()
NL=$'\n'

while read line
do
    
    # Compute the seconds since epoch for the start date and time
    t1=$(date --date="$startTime" +%s)
    
    # Date and time in log line (second column)
    dt2=$(echo "$line" | cut -d$'\t' -f2)
    # Compute the seconds since epoch for log line date and time
    t2=$(date --date="$dt2" +%s)
    
    # Compute the difference in dates in seconds
    let "tDiff=$t2-$t1"
    
    # Stop reading log lines from before the startTime
    if [[ "$tDiff" -lt 0 ]]; then
        break
    fi
    
    text=`echo "$line" | cut -d$'\t' -f4`
    # Get rid of [Local] prefix
    text=`echo "$text" | sed 's/\[Local\]//'`
    #GBA Add date and time
	text=`echo "${dt2}  ${text}"`
    
	output+=("$text")
    
done <<<$(tac /var/log/synolog/synobackup.log)

n=${#output[*]}
for (( i = n-1; i >= 0; i-- ))
do
    
	echo ${output[i]}
	
done
## Get results ... - End

exit 0

I am just wondering what this line is for:
IFS=''

And does anybody have any suggestions for improvements?

@JimMeLad

I’m glad you got it working, happy to have helped!

As for improvements, you could move the process monitor into its own function to save you having to repeat the code twice, whilst at the same time just tightening up the ‘pidof’ command to only request a single PID. If you are unsure about this then let me know.

As for the IFS='', that statement changes the Internal Field Separator (IFS) from its default value to an empty string, so the text read from the log is brought into the program as a complete line rather than being broken up into 'words'.
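A quick illustration of the difference, runnable in any bash shell:

# With the default IFS, read strips leading/trailing whitespace from the line;
# with IFS='' the line is kept exactly as it appears in the log.
printf '   indented line\n' | { read -r line; printf '[%s]\n' "$line"; }        # -> [indented line]
printf '   indented line\n' | { IFS='' read -r line; printf '[%s]\n' "$line"; } # -> [   indented line]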

@keiranlovett

I found this because I am looking for a script to do the following:

  • Check for upcoming scheduled backups
  • If Backup is scheduled soon, mount External USB.
  • Run Backup, Integrity, etc
  • Unmount External USB

I see this script will unmount, but it would be good to have a step that mounts the USB as well.
Another suggestion is to allow for multiple tasks; in my particular case I spread my backup tasks over different folders, each with its own backup cadence and schedule.

@JimMeLad

Hi,
Mounting the USB is pretty easy, but as I've said in an earlier post, stepping too far into the way Synology have constructed their OS and its packages risks becoming too fragile, in my view, for something as important as a backup.

I have my backups set to run at a specific time of day, so I have scheduled a 'mount USB drive' task to run a few minutes beforehand. Similarly, I have a rough idea of how long the backup normally takes, so I have timed my integrity check to start at 'the usual backup end time plus 10 mins', figuring that I can cope with the very occasional occurrence of the backup overrunning the integrity check start time.

I have then written a script that starts about 15 minutes before the integrity check usually completes; it hangs around until the integrity check is done, unmounts the USB drive, then harvests the integrity check results from the log and emails them to me so that I can easily see the results without needing to log onto the NAS.
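In case it helps, the scheduled 'mount USB drive' task can be as simple as something like this (a sketch using the partition and mount point from elsewhere in this thread; it uses plain mount, so check how that interacts with DSM's own USB handling on your system):

device=sdq1                  # partition on the USB disk (assumed)
USBDRV=/volumeUSB1/usbshare  # mount point (assumed)
mkdir -p "$USBDRV"
mount "/dev/$device" "$USBDRV"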

@keiranlovett

Great to know your approach there, thanks!

In my particular approach I've moved to backing up all my volumes individually, plus some critical folders. Some of them are >5 GB backups run daily, while other tasks are 100-500 GB backups run weekly or monthly. In this case the integrity check duration wouldn't be a reliable guide, correct?

@JimMeLad

On my system, the integrity check seems to be the most consistent in respect of the time it takes to run, typically around six and a quarter hours.

The backup time is usually consistent, unless I make a bulk change (e.g. adding or changing tags on my music files). Then the backup end time is unknown/unknowable.

I think the whole lot is controlled by a daemon task that monitors the backup and therefore knows when to kick off the version rotation, but it doesn't seem to know about any scheduled integrity check, which is why, if the latter kicks off whilst version rotation is happening, the integrity check fails.
Because the events are essentially black boxes, all you can hope to do is inspect what's happening and react, but as I said in an earlier post, for me that approach is too fragile to risk with a backup.

@gersteba

gersteba commented Jan 2, 2024

As for improvements, you could move the process monitor into its own function to save you having to repeat the code twice, whilst at the same time just tightening up the ‘pidof’ command to only request a single PID. If you are unsure about this then let me know.

Hi!

Thank you for that information.
What do you mean by the part quoted above?

@JimMeLad

JimMeLad commented Jan 2, 2024

Hi,
I don't know how much coding experience you have, so sorry if I tell you stuff that you already know :-)
The 'while/sleep' loop appears twice, with the only difference between the two loops being the name of the process you're trying to get the PIDs for. You could move the 'pidof' bit into its own function and pass the process name as an argument, e.g.

(and yes, I know 'function' and '()' are not both needed; it's just my preferred style)

function process_is_active() {
    local _process_name="$1"
    # Find one PID only (-s) across all shells (-x)
    pidof -s -x /var/packages/HyperBackup/target/bin/"${_process_name}" > /dev/null
    return $?
}

Then use as:
while process_is_active 'synoimgbkptool'; do
    sleep 60
done

and

while process_is_active 'detect_monitor'; do
    sleep 60s
done

or in your current style:

while sleep 60 && process_is_active 'detect_monitor'; do
    :
done

The function 'process_is_active()' will return 0 (zero) if there is at least one process running with the passed name, 1 if not.
This also has the (minor) advantage of only looking for one instance of a process, not all of them (you don't care how many are running, you just need to know if at least one is).

To my mind this construction has the advantages of adhering to the DRY principle (Don't Repeat Yourself) and of replacing a bit of slightly obscure code with a more readable name.

As a personal preference, I'm not a fan of the while-loop format:
while sleep 60 && <some unrelated command>

Whilst I know it doesn't (necessarily) apply in this particular instance, I still try to write code that suits general cases, so the reason I'm not a fan of the style above is this:
in the sleep 60 && pidof... construct, if the process being checked by 'pidof...' has already ended, you'll incur a needless 60-second delay before you find out, as the 'sleep' command runs before the 'pidof' is evaluated.

Under the same circumstances in the examples I've given above, where the sleep has been moved into the loop body, the loop will be bypassed completely if the monitored process has already ended.

If you do decide to experiment with this, you need to insert the function code at the beginning of your script, i.e. before the line that reads:
task_id=12
Hope that explains things a bit; happy to answer any other questions.

@gersteba

gersteba commented Jan 14, 2024

Hi!

Now I have reworked the script, trying to follow your advice:

#!/bin/sh

# This script is to be called by a Scheduler task as root user,
# having 'Run command / User-defined script' filled in with your script's path.
# i. e. /bin/bash /volume1/Scripts/autobackup.sh
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /var/packages/HyperBackup/etc/synobackup.conf
# You also need to change the location of USB device and name of the block device associated with 
# the filesystem partition on the USB disk. Find out with command 'df' having the USB device attached.
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night)
# But beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup and the version rotation complete, the integrity check will start. 
# If you like to receive the log entries in an e-mail after this script finished,
# check 'Send run details by email' and fill in 'Email' in the Scheduler task settings.
#
# Tested with DSM 7.2-64570 Update 3 and Hyper Backup 4.1.0-3718.
#
# Credits:
# - https://gist.github.com/Jip-Hop/b9ddb2cc124302a5558659e1298c36ec
# - https://derwebprogrammierer.at/

function process_is_active() {
	local _process_name="$1"
	# Find one PID only (-s) across all shells (-x)
	pidof -s -x "/var/packages/HyperBackup/target/bin/${_process_name}" > /dev/null
	return $?
}

task_id=12 # Hyper Backup task id
task_name="[Omnia Auto-Backup]" # Only used for log entries

# Location of USB device and name of the block device associated with the filesystem partition on the USB disk. Find out with command 'df'.
USBDRV=/volumeUSB1/usbshare # See column 'Mounted on' in df result
device=sdq1 # See column 'Filesystem' in df result

#/bin/echo 2 > /dev/ttyS1 # Beep on start

startTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${startTime}\tSYSTEM:\t${task_name} Started." >> /var/log/synolog/synobackup.log

# Backup - Begin
currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Backup start ..." >> /var/log/synolog/synobackup.log
/usr/syno/bin/synobackup --backup $task_id --type image

sleep 60
while /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device; do
	sleep 60
done
# Backup - End

## Version rotation - Begin
sleep 60
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Rotation start ..." >> /var/log/synolog/synobackup.log
while process_is_active 'synoimgbkptool'; do
	sleep 60
done
## Version rotation - End

## Check integrity - Begin
sleep 60
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Integrity check start ..." >> /var/log/synolog/synobackup.log
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard

sleep 60
while process_is_active 'detect_monitor'; do
	sleep 60
done
## Check integrity - End

# Sleep a bit more before unmounting the disk
sleep 60

## Unmount USB device - Begin
sync
sleep 10
umount $USBDRV
umountResult=$(/usr/syno/bin/synousbdisk -umount $device; >/tmp/usbtab)
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${umountResult}" >> /var/log/synolog/synobackup.log
## Unmount USB device - End

currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Finished." >> /var/log/synolog/synobackup.log

## Get results of auto backup (from last lines of log file) - Begin
IFS=''
output=()
NL=$'\n'

while read line
do
    
    # Compute the seconds since epoch for the start date and time
    t1=$(date --date="$startTime" +%s)
    
    # Date and time in log line (second column)
    dt2=$(echo "$line" | cut -d$'\t' -f2)
    # Compute the seconds since epoch for log line date and time
    t2=$(date --date="$dt2" +%s)
    
    # Compute the difference in dates in seconds
    let "tDiff=$t2-$t1"
    
    # Stop reading log lines from before the startTime
    if [[ "$tDiff" -lt 0 ]]; then
        break
    fi
    
    #text=`echo "$line" | cut -d$'\t' -f4`
    text=$(echo "$line" | cut -d$'\t' -f4)
    # Get rid of [Local] prefix
    text=$(echo "$text" | sed 's/\[Local\]//')
    # Add date and time
	text=$(echo "${dt2}  ${text}")
    
	output+=("$text")
    
done <<<$(tac /var/log/synolog/synobackup.log)

n=${#output[*]}
for (( i = n-1; i >= 0; i-- ))
do
    
	echo "${output[i]}"
	
done
## Get results ... - End

exit 0

Seems to work fine.
The only thing I don't understand yet:
Why does the rotation already start while dsmbackup is still running?
Compare the script with the following log entries:

2024/01/13 02:00:01  [Omnia Auto-Backup] Started.
2024/01/13 02:00:01  [Omnia Auto-Backup] Backup start ...
2024/01/13 02:00:15  [Omnia Backup] Backup task started.
2024/01/13 16:55:18  [Omnia Backup] Backup task finished successfully. [599281 files scanned] [71 new files] [9 files modified] [599201 files unchanged]
2024/01/13 16:55:19  [Omnia Backup] Trigger version rotation.
2024/01/13 16:55:57  [usbshare1] Version rotation started from ID [Omnia.hbk].
2024/01/13 16:56:57  [Omnia Auto-Backup] Rotation start ...
2024/01/13 18:16:29  [usbshare1] Rotate version [2023-12-09 02:00:39] from ID [Omnia.hbk].
2024/01/13 18:16:30  [usbshare1] Version rotation completed from ID [Omnia.hbk].
2024/01/13 18:16:59  [Omnia Auto-Backup] Integrity check start ...
2024/01/13 18:17:10  [Omnia Backup] Backup integrity check has started.
2024/01/14 06:48:30  [Omnia Backup] Data integrity check finished. 3574.4 GB data checked this time, accounting for 100.0% of the total data (3574.4 GB data in total, 100.0% checked already).
2024/01/14 07:34:47  [Omnia Backup] Backup integrity check is finished. No error was found.
2024/01/14 07:36:12  Unmount USB device sdq1 succeeded.
2024/01/14 07:36:12  [Omnia Auto-Backup] Finished.

When I tried to remove the
while process_is_active 'synoimgbkptool';
check, the integrity check failed because the rotation was not finished yet.
So this check is also necessary.

And:
How can I find out whether the backup, the rotation and the integrity check each finished successfully, and write a different text in the last log entry accordingly,
e.g. 'Finished successfully.' or 'Failed during rotation.'?
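One idea I have not tried yet (just a sketch): since everything ends up in synobackup.log anyway, the lines written since startTime could be scanned for error entries and the final status text chosen from that. This assumes error lines carry an 'err' level in the first column, the same way the script's own entries use 'info':

# Collect the log lines written since this run started (the timestamps are
# zero-padded, so a plain string comparison works) and look for error entries.
recent=$(awk -F'\t' -v start="$startTime" '$2 >= start' /var/log/synolog/synobackup.log)
if echo "$recent" | grep -q $'^err\t'; then
    status="Finished with errors - please check the log."
else
    status="Finished successfully."
fi
echo -e "info\t$(date +"%Y/%m/%d %H:%M:%S")\tSYSTEM:\t${task_name} ${status}" >> /var/log/synolog/synobackup.log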

@Jip-Hop

Jip-Hop commented Jan 14, 2024

Hi @gersteba, I suggest you fork this gist and continue the development (and the discussion) there. Good luck with the script! 🙂

@gersteba

I'd prefer to keep your script as it is - my adaptation is tailored to my very specific needs.
But maybe others can benefit from the additional input and thoughts, too.
Thank you again for your very valuable starting point!

@Jip-Hop

Jip-Hop commented Jan 14, 2024

Sorry, but that's exactly the reason I urge you to fork this gist (or create a new one), so you can work on it for your own specific needs. I have no interest in following the conversation any longer, but I receive an email for each comment on this gist and I try to reduce the noise in my inbox. I'd prefer not to delete this gist so others may still find it in the future. So please take the development and discussion elsewhere. You can post the new location here and ask the other participants to join you there.

@gersteba

gersteba commented Jan 17, 2024

For further discussion on my adapted script please subscribe here:
https://gist.github.com/gersteba/6b07be49aa94c8df1bb88e7db615987d

@Jip-Hop: Sorry for any inconvenience.
