#!/bin/bash
# This script is to be used in combination with Synology Autorun:
# - https://github.com/reidemei/synology-autorun
# - https://github.com/Jip-Hop/synology-autorun
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /usr/syno/etc/synobackup.conf
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night).
# But a beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup completes, the integrity check will start.
# Unfortunately, in DSM you can't choose to receive email notifications with the integrity check results.
# So there's a little workaround at the end of this script to send an (email) notification.
# The results of the integrity check are taken from the synobackup.log file.
#
# In DSM -> Control Panel -> Notification I enabled email notifications,
# changed the Subject to %TITLE% and the content to:
# Dear user,
#
# Integrity check for %TASK_NAME% is done.
#
# %OUTPUT%
#
# This way I receive email notifications with the results of the integrity check.
#
# Credits:
# - https://github.com/Jip-Hop
# - https://bernd.distler.ws/archives/1835-Synology-automatische-Datensicherung-mit-DSM6.html
# - https://www.beatificabytes.be/send-custom-notifications-from-scripts-running-on-a-synology-new/
task_id=6 # Hyper Backup task id, get it with command: more /usr/syno/etc/synobackup.conf
task_name="USB3 3TB Seagate" # Only used for the notification
/bin/echo 2 > /dev/ttyS1 # Beep on start
startTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
device=$2 # e.g. sde1, passed to this script as second argument
# Backup
/usr/syno/bin/synobackup --backup $task_id --type image
# Wait until backup is finished, poll every 60 seconds
while sleep 60 && /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device
do
:
done
# Check integrity
/var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -t -f -g
# Wait a bit until detect_monitor is up and running
sleep 60
# Wait until check is finished, poll every 60 seconds
/var/packages/HyperBackup/target/bin/detect_monitor -k $task_id -p 60
# Send results of integrity check via email (from last lines of log file)
IFS=''
output=""
title=
NL=$'\n'
while read line
do
# Compute the seconds since epoch for the start date and time
t1=$(date --date="$startTime" +%s)
# Date and time in log line (second column)
dt2=$(echo "$line" | cut -d$'\t' -f2)
# Compute the seconds since epoch for the log line date and time
t2=$(date --date="$dt2" +%s)
# Compute the difference in seconds
let "tDiff=$t2-$t1"
# echo "Approx diff b/w $startTime & $dt2 = $tDiff"
# Stop reading log lines from before the startTime
if [[ "$tDiff" -lt 0 ]]; then
break
fi
text=$(echo "$line" | cut -d$'\t' -f4)
# Get rid of [Local] prefix
text=$(echo "$text" | sed 's/\[Local\]//')
if [ -z "${title}" ]; then
title=$text
fi
output="$output${NL}$text"
done <<< "$(tac /var/log/synolog/synobackup.log)"
# Hijack the ShareSyncError event to send a custom message.
# This event is free to reuse because I don't use the Shared Folder Sync (rsync) feature.
# More info on sending custom (email) notifications: https://www.beatificabytes.be/send-custom-notifications-from-scripts-running-on-a-synology-new/
/usr/syno/bin/synonotify "ShareSyncError" "{\"%OUTPUT%\": \"${output}\", \"%TITLE%\": \"${title}\", \"%TASK_NAME%\": \"${task_name}\"}"
# Sleep a bit more before unmounting the disk
sleep 60
# Unmount the disk
exit 100
Hi,
Yes, I realise that the second 'detect_monitor' call is polling for progress, but I believe it also writes log entries, just like the first 'detect_monitor' call.
I have chosen instead to monitor the 'synoimgbkptool' command, but the result should be the same.
The issue, I think, is that we're trying to use internal commands written by Synology that are not intended for public consumption, so consequently we have no access to the documentation.
As I said in my previous post, the trial-and-error approach is just taking too much time, as there appears to be no easy way to determine the process dependencies other than by observation. My integrity check takes over 6 hours to complete, so I have got bored trying to second-guess what Hyper Backup is going to do next.
Wouldn't I just need these additional lines between the backup and integrity check calls, to wait for the rotation to finish:
# Wait until version rotation is finished, poll every 60 seconds
sleep 60
while pidof /var/packages/HyperBackup/target/bin/synoimgbkptool > /dev/null
do
sleep 60
done
And is synoimgbkptool also running during the integrity check?
I don't know what process runs the rotation, it could be 'synoimgbkptool' but I'm not sure.
The other problem is the atomic nature of these processes. Let's say that you have monitored the backup and can see that it has finished. If you move straight on to look for the version rotation process, how long do you wait for it to start before you look for it? If you look straight away, it might not yet be running, so if you then move on to the integrity check, it will fail once rotation starts.
If you decide to wait for the rotation task, how long do you wait? If you wait too long, the rotation may start and end whilst you're waiting, in which case your script will stall waiting for a process to end that has already ended.
These are just some of the issues I can think of. Without access to proper documentation, I have a feeling that whatever I write will end up too unreliable for my liking where backups are concerned. I've decided instead to time things so they safely follow one another, and will have to accept the occasional failure if some task overruns its time slot.
I guess you could monitor the backup log for the relevant activity entries, but that too seems a bit clunky.
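One hedged way to soften the race described above is to wait for the rotation process to *appear*, but only up to a timeout, so the script can never stall forever on a process that already came and went. This is only a sketch: the process name `synoimgbkptool` and the timings are assumptions taken from this thread, untested against Hyper Backup itself.

```shell
#!/bin/bash
# Wait up to $2 seconds for a process named $1 to appear.
# Returns 0 as soon as it is seen, 1 if the timeout expires first,
# so the caller can decide to move on instead of stalling forever.
wait_for_process_start() {
    local name="$1" timeout="${2:-300}" waited=0
    until pidof -s -x "$name" > /dev/null; do
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 5
        waited=$((waited + 5))
    done
    return 0
}

# Usage sketch: give the rotation at most 5 minutes to show up, then
# wait for it to finish; otherwise assume it already came and went.
# if wait_for_process_start synoimgbkptool 300; then
#     while pidof -s -x synoimgbkptool > /dev/null; do sleep 60; done
# fi
```

This doesn't remove the race entirely (the rotation could still start just after the timeout expires), it only bounds how long the script can be wrong.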
Hi!
Thank you very much for your input.
I think I figured it out now and cleaned up the whole script.
It now also writes its own entries to the synobackup.log file, so the same lines appear in the log as well as in the email notification.
#!/bin/bash
# This script is to be called by a Scheduler task as root user,
# having 'Run command / User-defined script' filled in with your script's path.
# e.g. /bin/bash /volume1/Scripts/autobackup.sh
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /var/packages/HyperBackup/etc/synobackup.conf
# You also need to change the location of the USB device and the name of the block device associated
# with the filesystem partition on the USB disk. Find them with the 'df' command while the USB device is attached.
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night)
# But beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup and the version rotation complete, the integrity check will start.
# If you'd like to receive the log entries in an e-mail after this script finishes,
# check 'Send run details by email' and fill in 'Email' in the Scheduler task settings.
#
# Tested with DSM 7.2-64570 Update 3 and Hyper Backup 4.1.0-3718.
#
# Credits:
# - https://github.com/Jip-Hop
# - https://bernd.distler.ws/archives/1835-Synology-automatische-Datensicherung-mit-DSM6.html
# - https://derwebprogrammierer.at/
task_id=12 # Hyper Backup task id
task_name="[Omnia Auto-Backup]" # Only used for log entries
# Location of USB device and name of the block device associated with the filesystem partition on the USB disk. Find out with command 'df'.
USBDRV=/volumeUSB1/usbshare # See column 'Mounted on' in df result
device=sdq1 # See column 'Filesystem' in df result
/bin/echo 2 > /dev/ttyS1 # Beep on start
startTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${startTime}\tSYSTEM:\t${task_name} Started." >> /var/log/synolog/synobackup.log
# Backup - Begin
/usr/syno/bin/synobackup --backup $task_id --type image
# Wait until backup is finished, poll every 60 seconds
while sleep 60 && /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device
do
:
done
# Backup - End
## Version rotation - Begin
# Wait until version rotation is finished, poll every 60 seconds
while sleep 60 && [ "$(pidof /var/packages/HyperBackup/target/bin/synoimgbkptool)" != "" ];
do
:
done
## Version rotation - End
## Check integrity - Begin
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard
# Wait until check is finished, poll every 60 seconds
#/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --polling 60 # Produces double log entries
while sleep 60 && [ "$(pidof /var/packages/HyperBackup/target/bin/detect_monitor)" != "" ];
do
:
done
## Check integrity - End
# Sleep a bit more before unmounting the disk
sleep 60
## Unmount USB device - Begin
sync
sleep 10
umount $USBDRV
umountResult=$(/usr/syno/bin/synousbdisk -umount $device; >/tmp/usbtab)
currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${umountResult}" >> /var/log/synolog/synobackup.log
## Unmount USB device - End
currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Finished." >> /var/log/synolog/synobackup.log
## Get results of auto backup (from last lines of log file) - Begin
IFS=''
output=()
NL=$'\n'
while read line
do
# Compute the seconds since epoch for the start date and time
t1=$(date --date="$startTime" +%s)
# Date and time in log line (second column)
dt2=$(echo "$line" | cut -d$'\t' -f2)
# Compute the seconds since epoch for log line date and time
t2=$(date --date="$dt2" +%s)
# Compute the difference in dates in seconds
let "tDiff=$t2-$t1"
# Stop reading log lines from before the startTime
if [[ "$tDiff" -lt 0 ]]; then
break
fi
text=$(echo "$line" | cut -d$'\t' -f4)
# Get rid of [Local] prefix
text=$(echo "$text" | sed 's/\[Local\]//')
# Add date and time
text="${dt2} ${text}"
output+=("$text")
done <<< "$(tac /var/log/synolog/synobackup.log)"
n=${#output[*]}
for (( i = n-1; i >= 0; i-- ))
do
echo "${output[i]}"
done
## Get results ... - End
exit 0
I am just wondering what this line is for:
IFS=''
And does anybody have any suggestions for improvements?
I’m glad you got it working, happy to have helped!
As for improvements, you could move the process monitor into its own function to save you having to repeat the code twice, whilst at the same time just tightening up the ‘pidof’ command to only request a single PID. If you are unsure about this then let me know.
As for the IFS='', that statement changes the Internal Field Separator (IFS) from its default value to an empty string, so the text read from the log is brought into the program as a complete line, not broken up into 'words'.
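The effect is easy to see in a small, self-contained experiment (plain bash, no Synology pieces involved; the log line is a made-up example in the same tab-separated format):

```shell
#!/bin/bash
logline=$'info\t2024/01/13 02:00:01\tSYSTEM:\t[Omnia Auto-Backup] Started.'

# Default IFS: read splits on blanks and tabs, so the tab layout is lost.
read -r first _ <<< "$logline"
echo "$first"                       # prints: info

# Empty IFS: no field splitting, the complete line survives intact.
IFS='' read -r whole <<< "$logline"
[ "$whole" = "$logline" ] && echo "line preserved intact"
```

Note that writing `IFS='' read ...` as a prefix assignment changes IFS only for that one `read`, which is why the script can also set it once globally before the loop.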
I found this because I am looking for a script to do the following:
- Check for upcoming scheduled backups
- If Backup is scheduled soon, mount External USB.
- Run Backup, Integrity, etc
- Unmount External USB
I see this script will unmount but would be good to have a condition to mount the USB.
Another suggestion is to allow for multiple tasks, in my particular case I spread my backup tasks over different folders, each with its own backup cadency and schedule.
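For the mounting side of that wish list, a minimal sketch could look like the following. The mount point and partition name are assumptions copied from the script above, and DSM may already auto-mount USB disks via its own mechanisms, so verify the behaviour on your NAS before relying on it:

```shell
#!/bin/bash
# Assumed values, matching the script above; check yours with 'df'.
USBDRV=/volumeUSB1/usbshare
device=sdq1

# Mount the partition only if the mount point is not already in use,
# and report failure so the caller can abort instead of backing up
# into an empty directory.
ensure_mounted() {
    local mnt="$1" dev="$2"
    if grep -qs " ${mnt} " /proc/mounts; then
        return 0    # already mounted
    fi
    mount "/dev/${dev}" "${mnt}"
}

# Usage sketch (run before starting the backup):
# ensure_mounted "$USBDRV" "$device" || exit 1
```

Running one such wrapper script per Hyper Backup task, each with its own `task_id` and schedule, would also cover the multiple-task suggestion without changing the core logic.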
Hi,
Mounting the USB is pretty easy, but as I've said in an earlier post, stepping too deeply into the way Synology have constructed their O/S and its packages risks becoming too fragile, in my view, for something as important as a backup.
I have my backups set to run at a specific time of day, so have scheduled a ‘mount USB drive’ task to run a few minutes beforehand. Similarly I have a rough idea of how long the backup normally takes so have timed my integrity check to start at ‘the usual backup end time plus 10 mins’ figuring that I can cope with the very occasional occurrence of the backup overrunning the integrity check start time.
I have then written a script that starts about 15 minutes before the integrity check usually completes, which hangs around until the integrity check is done, unmounts the USB drive, and then harvests the integrity check results from the log and emails them to me, so that I can easily see the results without needing to log onto the NAS.
Great to know your approach there, thanks!
In my particular approach I've moved to backing up all my volumes individually, plus some critical folders. Some of them will be >5 GB backups run daily, while other tasks are 100-500 GB backups run weekly/monthly. In this case the integrity check duration wouldn't be reliable, correct?
In my system, the integrity check seems to be the most consistent in respect of the time it takes to run, typically around six and a quarter hours.
The backup time is usually consistent, unless I make a bulk change (e.g. adding or changing tags on my music files). Then the backup end time is unknown/unknowable.
I think the whole lot is controlled by a daemon task that monitors the backup and therefore knows when to kick off the version rotation, but it doesn’t seem to know about any scheduled integrity check, which is why, if the latter kicks off whilst version rotation is happening, the itg.check fails.
Because the events are essentially black boxes, all you can hope to do is inspect what's happening and react, but as I said in an earlier post, for me that approach is too fragile to risk with a backup.
As for improvements, you could move the process monitor into its own function to save you having to repeat the code twice, whilst at the same time just tightening up the ‘pidof’ command to only request a single PID. If you are unsure about this then let me know.
Hi!
Thank you for that information.
What do you mean by the quote above?
Hi,
I don't know how much coding experience you have so sorry if I tell you stuff that you already know :-)
The 'while/sleep' loop appears twice, and the only difference between the two loops is the name of the process you're trying to get the PID for. You could move the 'pidof' bit into its own function and pass the process name as an argument, e.g.
(and yes, I know 'function' and '()' are not both needed, it's just my preferred style)
function process_is_active() {
local _process_name="$1"
# Find one PID only (-s) across all shells (-x)
pidof -s -x /var/packages/HyperBackup/target/bin/"${_process_name}" > /dev/null
return $?
}
Then use as:
while process_is_active 'synoimgbkptool'; do
sleep 60
done
and
while process_is_active 'detect_monitor'; do
sleep 60s
done
or in your current style:
while sleep 60 && process_is_active 'detect_monitor'; do
:
done
The function 'process_is_active()' will return 0 (zero) if there is at least one process running with the passed name, and 1 if not.
This also has the (minor) advantage of only looking for one instance of a process, not all of them (you don't care how many are running, you just need to know if at least one is).
To my mind this construction has the advantages of adhering to the DRY principle (Don't Repeat Yourself) and of replacing a bit of slightly obscure code with a more readable name.
As a personal preference, I'm not a fan of the while-loop format:
while sleep 60 && <some unrelated command>
Whilst I know it doesn't (necessarily) apply in this particular instance, I still try to write code that suits general cases, so the reason I'm not a fan of the style above is this:
In the `sleep 60 && pidof ...` construct, if the process being checked by 'pidof' has already ended, you'll incur a needless 60-second delay before you find out, as the 'sleep' command runs before the 'pidof' is evaluated.
Under the same circumstances in the examples I've given above, where the sleep has been moved into the loop body, the loop will be bypassed completely if the monitored process has already ended.
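The bypass behaviour is simple to demonstrate with a process name that certainly isn't running (an arbitrary made-up name here):

```shell
#!/bin/bash
proc=definitely_not_running_xyz

start=$SECONDS
# Body-sleep style: the condition is evaluated first, so a process
# that has already ended costs no delay at all.
while pidof -s "$proc" > /dev/null; do
    sleep 60
done
echo "exited after $((SECONDS - start))s"   # prints: exited after 0s
```

With the `sleep 60 && pidof ...` style, the same situation would print roughly 60 seconds instead.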
If you do decide to experiment with this, you need to insert the function code at the beginning of your script, ie before the line that reads
task_id=12
Hope that explains things a bit, happy to answer any other questions
Hi!
Now I reworked the script trying to adapt your advice:
#!/bin/bash
# This script is to be called by a Scheduler task as root user,
# having 'Run command / User-defined script' filled in with your script's path.
# e.g. /bin/bash /volume1/Scripts/autobackup.sh
#
# You need to change the task_id to match your Hyper Backup task.
# Get it with command: more /var/packages/HyperBackup/etc/synobackup.conf
# You also need to change the location of the USB device and the name of the block device associated
# with the filesystem partition on the USB disk. Find them with the 'df' command while the USB device is attached.
#
# I like to keep "Beep at start and end" disabled in Autorun, because I don't
# want the NAS to beep after completing (could be in the middle of the night)
# But beep at start is a nice way to confirm the script has started,
# so that's why this script starts with a beep.
#
# After the backup and the version rotation complete, the integrity check will start.
# If you'd like to receive the log entries in an e-mail after this script finishes,
# check 'Send run details by email' and fill in 'Email' in the Scheduler task settings.
#
# Tested with DSM 7.2-64570 Update 3 and Hyper Backup 4.1.0-3718.
#
# Credits:
# - https://gist.github.com/Jip-Hop/b9ddb2cc124302a5558659e1298c36ec
# - https://derwebprogrammierer.at/
function process_is_active() {
local _process_name="$1"
# Find one PID only (-s) across all shells (-x)
pidof -s -x "/var/packages/HyperBackup/target/bin/${_process_name}" > /dev/null
return $?
}
task_id=12 # Hyper Backup task id
task_name="[Omnia Auto-Backup]" # Only used for log entries
# Location of USB device and name of the block device associated with the filesystem partition on the USB disk. Find out with command 'df'.
USBDRV=/volumeUSB1/usbshare # See column 'Mounted on' in df result
device=sdq1 # See column 'Filesystem' in df result
#/bin/echo 2 > /dev/ttyS1 # Beep on start
startTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${startTime}\tSYSTEM:\t${task_name} Started." >> /var/log/synolog/synobackup.log
# Backup - Begin
currTime=$(date +"%Y/%m/%d %H:%M:%S") # Current date and time
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Backup start ..." >> /var/log/synolog/synobackup.log
/usr/syno/bin/synobackup --backup $task_id --type image
sleep 60
while /var/packages/HyperBackup/target/bin/dsmbackup --running-on-dev $device; do
sleep 60
done
# Backup - End
## Version rotation - Begin
sleep 60
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Rotation start ..." >> /var/log/synolog/synobackup.log
while process_is_active 'synoimgbkptool'; do
sleep 60
done
## Version rotation - End
## Check integrity - Begin
sleep 60
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Integrity check start ..." >> /var/log/synolog/synobackup.log
/var/packages/HyperBackup/target/bin/detect_monitor --task-id $task_id --trigger --full --guard
sleep 60
while process_is_active 'detect_monitor'; do
sleep 60
done
## Check integrity - End
# Sleep a bit more before unmounting the disk
sleep 60
## Unmount USB device - Begin
sync
sleep 10
umount $USBDRV
umountResult=$(/usr/syno/bin/synousbdisk -umount $device; >/tmp/usbtab)
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${umountResult}" >> /var/log/synolog/synobackup.log
## Unmount USB device - End
currTime=$(date +"%Y/%m/%d %H:%M:%S")
echo -e "info\t${currTime}\tSYSTEM:\t${task_name} Finished." >> /var/log/synolog/synobackup.log
## Get results of auto backup (from last lines of log file) - Begin
IFS=''
output=()
NL=$'\n'
while read line
do
# Compute the seconds since epoch for the start date and time
t1=$(date --date="$startTime" +%s)
# Date and time in log line (second column)
dt2=$(echo "$line" | cut -d$'\t' -f2)
# Compute the seconds since epoch for log line date and time
t2=$(date --date="$dt2" +%s)
# Compute the difference in dates in seconds
let "tDiff=$t2-$t1"
# Stop reading log lines from before the startTime
if [[ "$tDiff" -lt 0 ]]; then
break
fi
#text=`echo "$line" | cut -d$'\t' -f4`
text=$(echo "$line" | cut -d$'\t' -f4)
# Get rid of [Local] prefix
text=$(echo "$text" | sed 's/\[Local\]//')
# Add date and time
text="${dt2} ${text}"
output+=("$text")
done <<< "$(tac /var/log/synolog/synobackup.log)"
n=${#output[*]}
for (( i = n-1; i >= 0; i-- ))
do
echo "${output[i]}"
done
## Get results ... - End
exit 0
Seems to work fine.
The only thing I do not understand yet:
Why does the rotation already start while dsmbackup is still running?
Compare script with following log entries:
2024/01/13 02:00:01 [Omnia Auto-Backup] Started.
2024/01/13 02:00:01 [Omnia Auto-Backup] Backup start ...
2024/01/13 02:00:15 [Omnia Backup] Backup task started.
2024/01/13 16:55:18 [Omnia Backup] Backup task finished successfully. [599281 files scanned] [71 new files] [9 files modified] [599201 files unchanged]
2024/01/13 16:55:19 [Omnia Backup] Trigger version rotation.
2024/01/13 16:55:57 [usbshare1] Version rotation started from ID [Omnia.hbk].
2024/01/13 16:56:57 [Omnia Auto-Backup] Rotation start ...
2024/01/13 18:16:29 [usbshare1] Rotate version [2023-12-09 02:00:39] from ID [Omnia.hbk].
2024/01/13 18:16:30 [usbshare1] Version rotation completed from ID [Omnia.hbk].
2024/01/13 18:16:59 [Omnia Auto-Backup] Integrity check start ...
2024/01/13 18:17:10 [Omnia Backup] Backup integrity check has started.
2024/01/14 06:48:30 [Omnia Backup] Data integrity check finished. 3574.4 GB data checked this time, accounting for 100.0% of the total data (3574.4 GB data in total, 100.0% checked already).
2024/01/14 07:34:47 [Omnia Backup] Backup integrity check is finished. No error was found.
2024/01/14 07:36:12 Unmount USB device sdq1 succeeded.
2024/01/14 07:36:12 [Omnia Auto-Backup] Finished.
When I tried to remove the
while process_is_active 'synoimgbkptool';
check, the integrity check failed because the rotation was not finished yet.
So this check is also necessary.
And:
How can I find out whether the backup, the rotation, and the integrity check worked fine or not, and create different texts in the last log entry accordingly?
e.g. 'Finished successfully.' or 'Failed during rotation.'
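One hedged way to answer this is to grep the log lines captured since `startTime` for the known success and progress markers. The matched phrases below are taken from the log excerpt earlier in this thread; Synology may change the wording between versions, so treat them as assumptions and check them against your own synobackup.log first:

```shell
#!/bin/bash
# Classify a chunk of synobackup.log lines (passed as one string) and
# print a short status text for the final log entry.
classify_run() {
    local log="$1"
    if echo "$log" | grep -q "Backup integrity check is finished. No error"; then
        echo "Finished successfully."
    elif echo "$log" | grep -q "integrity check"; then
        echo "Failed during integrity check."
    elif echo "$log" | grep -q "Version rotation completed"; then
        echo "Failed after rotation."
    else
        echo "Failed during backup or rotation."
    fi
}

# Usage sketch, right before writing the final log entry (the lexicographic
# date comparison works because of the YYYY/MM/DD HH:MM:SS format):
# recent=$(awk -F'\t' -v s="$startTime" '$2 >= s' /var/log/synolog/synobackup.log)
# status=$(classify_run "$recent")
# echo -e "info\t$(date +'%Y/%m/%d %H:%M:%S')\tSYSTEM:\t${task_name} ${status}" >> /var/log/synolog/synobackup.log
```

The branch order matters: the explicit success line is checked first, so any remaining mention of the integrity check indicates it started but did not report success.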
Hi @gersteba I suggest you fork this gist and continue the development (and the discussion) in there. Good luck with the script! 🙂
I'd prefer to keep your script as it is; my adaptation is targeted at my very specific needs.
But maybe others can benefit from the additional input and thoughts, too.
Thank you again for your very valuable starting basis!
Sorry but that's exactly the reason I urge you to fork this gist (or create a new one) so you can work on it for your own very special needs. I have no interest in following the conversation any longer but I receive an email for each comment on this gist and I try to reduce the noise in my inbox. I'd prefer not to delete this gist so others may still find it in the future. So please take the development and discussion elsewhere. You can comment the new location here and ask the other participants to join at the new location.
For further discussion on my adapted script please subscribe here:
https://gist.github.com/gersteba/6b07be49aa94c8df1bb88e7db615987d
@Jip-Hop: Sorry for any inconvenience.
Hi!
Thank you for your input.
Actually, the second call of detect_monitor just polls the progress of the integrity check every 60 seconds.
But I will give this code a try, instead of the second detect_monitor call:
What is also confusing:
There is just one log entry "Backup integrity check has started.", but two each of "Data integrity check finished. ..." and "Backup integrity check is finished. ...".
Does anybody understand this?