# MOVED to public repo: https://github.com/catchdave/ssl-certs/blob/main/replace_synology_ssl_certs.sh
So I updated to the latest DSM 7.1.1-42962 Update 4, and I'm still facing the same issues.
After running the script, basically everything stops working, and I have to manually import the certificate files in Control Panel --> Security to restore functionality.
I even changed the script back to copy the files to the default directory first; I thought it could make a difference.
I really don't get what I'm doing wrong compared to your positive experiences.
Running the task as root:
#!/bin/bash
DEBUG= # Set to any non-empty value to turn on debug mode
error_exit() { echo "[ERROR] $1"; exit 1; }
warn() { echo "[WARN ] $1"; }
info() { echo "[INFO ] $1"; }
debug() { [[ "${DEBUG}" ]] && echo "[DEBUG ] $1"; }
# 1. Initialization
# =================
[[ "$EUID" -ne 0 ]] && error_exit "Please run as root" # Script only works as root
certs_src_dir="/volume1/exch-cert" # where the new certificate files are located
certs_default_dir="/usr/syno/etc/certificate/system/default"
services_to_restart=("nmbd" "avahi" "ldap-server" "ftpd")
packages_to_restart=(
"FQDN"
"HyperBackup"
"HyperBackupVault"
"LogCenter"
"PrestoServer"
"ReplicationService"
"ScsiTarget"
"SynologyDrive"
"VPNCenter"
"WebDAVServer"
"SurveillanceStation"
)
# "ActiveBackup" # ActiveBackup uses a self signed certificate.
target_cert_dirs=(
"/usr/syno/etc/certificate/system/FQDN"
"/usr/syno/etc/certificate/smbftpd/ftpd"
"/usr/local/etc/certificate/HyperBackupVault/HyperBackupVault"
"/usr/local/etc/certificate/LogCenter/pkg-LogCenter"
"/usr/local/etc/certificate/PrestoServer/PrestoServer"
"/usr/local/etc/certificate/ReplicationService/snapshot_receiver"
"/usr/local/etc/certificate/ScsiTarget/pkg-scsi-plugin-server"
"/usr/local/etc/certificate/SynologyDrive/SynologyDrive"
"/usr/local/etc/certificate/VPNCenter/OpenVPN"
"/usr/local/etc/certificate/WebDAVServer/webdav"
)
# "/usr/local/etc/certificate/ActiveBackup/ActiveBackup" # ActiveBackup uses a Long Term self signed certificate.
# Check files exist at source dir
for f in privkey.pem cert.pem fullchain.pem; do
    if [[ -e "${certs_src_dir}/${f}" ]]; then
        info "ok ${f} exists in source dir."
    else
        error_exit "${f} not found in source dir. Exiting with no changes."
    fi
done
# Find the default certificate directory
default_dir_name=$(</usr/syno/etc/certificate/_archive/DEFAULT)
if [[ -n "$default_dir_name" ]]; then
target_cert_dirs+=("/usr/syno/etc/certificate/_archive/${default_dir_name}")
debug "Default cert directory found: '/usr/syno/etc/certificate/_archive/${default_dir_name}'"
else
warn "No default directory found. Probably unusual? Check: 'cat /usr/syno/etc/certificate/_archive/DEFAULT'"
fi
# Add Reverse Proxy App directories
for proxy in /usr/syno/etc/certificate/ReverseProxy/*/; do
debug "Found ReverseProxy dir: ${proxy}"
target_cert_dirs+=("${proxy}")
done
# Add AppPortal directories
for proxy in /usr/syno/etc/certificate/AppPortal/*/; do
debug "Found AppPortal dir: ${proxy}"
target_cert_dirs+=("${proxy}")
done
[[ "${DEBUG}" ]] && set -x
# 2. Copy certificates to the system default dir and give root ownership
# =======================================================================
cp "${certs_src_dir}/"{privkey,fullchain,cert}.pem "${certs_default_dir}/" || error_exit "Halting because of error copying files"
chown root:root "${certs_default_dir}/"{privkey,fullchain,cert}.pem || error_exit "Halting because of error chowning files"
info "Certs copied and chowned at system default dir."
# 3. Copy certificates to target directories if they exist
# ========================================================
for target_dir in "${target_cert_dirs[@]}"; do
if [[ ! -d "$target_dir" ]]; then
debug "Target cert directory '$target_dir' not found, skipping..."
continue
fi
info "Copying and chowning certificates to '$target_dir'"
    if ! { cp "${certs_default_dir}/"{privkey,fullchain,cert}.pem "$target_dir/" && \
           chown root:root "$target_dir/"{privkey,fullchain,cert}.pem; }; then
        warn "Error copying or chowning certs to ${target_dir}"
    fi
done
# Remove cert files from source folder (-f so a missing last-cert.pem on the first run isn't an error)
rm -f "${certs_src_dir}/privkey.pem"
rm -f "${certs_src_dir}/fullchain.pem"
rm -f "${certs_src_dir}/last-cert.pem"
# Rename cert.pem to last-cert.pem
mv "${certs_src_dir}/cert.pem" "${certs_src_dir}/last-cert.pem"
# 4. Restart services & packages
# ==============================
info "Rebooting all the things..."
for service in "${services_to_restart[@]}"; do
/usr/syno/bin/synosystemctl restart "$service"
done
for package in "${packages_to_restart[@]}"; do # Restart packages that are installed & turned on
/usr/syno/bin/synopkg is_onoff "$package" 1>/dev/null && /usr/syno/bin/synopkg restart "$package"
done
# Restart nginx !!! WARNING !!! this may behave unexpectedly (for instance restarts running VMs / Docker images)
# /usr/syno/bin/synosystemctl restart nginx
# In DSM7 to avoid docker and VMM to restart when restarting nginx use:
# synow3tool --gen-all && systemctl reload nginx
# source: https://www.reddit.com/r/synology/comments/olve56/comment/h5hsogq/
# Faster nginx reload (if certs don't appear to be refreshing, change to synosystemctl)
if ! { /usr/syno/bin/synow3tool --gen-all && systemctl reload nginx; }; then
    warn "nginx failed to reload"
fi
info "Completed"
Been playing with this quite a bit; note I'm using the DSM 7.2 beta right now. It was working "ok", but refreshing the certs remained a problem for me unless I did the full restart, which restarted Docker and everything - less than optimal.
Here's my modified section 4, which seems to work and which I can consistently reproduce. (My test is to manually update the cert with an old one through the UI; once I've validated the date of the cert, I fully close the browser, run the script on the command line, wait 20 seconds for all restarts to 'take', and then re-open DSM in the browser and re-check the certificate date.) So far it's working well.
# 4. Restart services & packages
# ==============================
info "Rebooting all the things..."
for service in "${services_to_restart[@]}"; do
/usr/syno/bin/synosystemctl restart "$service"
done
for package in "${packages_to_restart[@]}"; do # Restart packages that are installed & turned on
/usr/syno/bin/synopkg is_onoff "$package" 1>/dev/null && /usr/syno/bin/synopkg restart "$package"
done
# Faster nginx restart (if certs don't appear to be refreshing, change to synosystemctl)
if ! /usr/syno/bin/synow3tool --gen-all ; then
warn "synow3tool --gen-all failed"
fi
if ! /usr/syno/bin/synow3tool --nginx=reload ; then
warn "/usr/syno/bin/synow3tool --nginx=reload failed"
fi
if ! /usr/syno/bin/synow3tool --restart-dsm-service; then
warn "/usr/syno/bin/synow3tool --restart-dsm-service failed"
fi
info "Completed"
If anyone is still watching this thread / script... I did some research on what exactly happens when you run synow3tool --gen-all.
Basically this command takes the certs from the /usr/syno/etc/certificate/_archive folder and syncs all of the other appropriate folders with that certificate. It:
- Reads the cert files in /usr/syno/etc/certificate/_archive/{randomchars}
- Re-creates all of the folders in /usr/syno/etc/certificate/ReverseProxy as completely new folders containing the certs, and removes the old ones
- Creates a new version of the /usr/syno/etc/certificate/system/FQDN folder (naming it something like FQDN.temp)
- Updates the certs in /usr/syno/etc/certificate/system/FQDN.temp with the ones from _archive
- Deletes /usr/syno/etc/certificate/system/FQDN
- Renames /usr/syno/etc/certificate/system/FQDN.temp to /usr/syno/etc/certificate/system/FQDN
- Creates /usr/syno/etc/certificate/system/default.temp
- Copies all of the certs from /usr/syno/etc/certificate/_archive/{randomchars} to /usr/syno/etc/certificate/system/default.temp
- Renames /usr/syno/etc/certificate/system/default.temp to /usr/syno/etc/certificate/system/default
...So... basically it looks like the much simpler option is to drop your new certs in the /usr/syno/etc/certificate/_archive/{randomchars} folder and then run synow3tool --gen-all as root. There's no need to do the ReverseProxy folders or the FQDN folders, for example.
Because I have no services installed like HyperBackupVault etc., I can't tell whether those are taken care of by synow3tool as well or not; I'll do more research there.
In my case, the only additional folders I have that don't seem to be taken care of by the synow3tool sync are:
- /usr/syno/etc/certificate/kmip/kmip (Key Manager)
- /usr/syno/etc/certificate/smbftpd/ftpd (FTPS)
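A minimal sketch of that simpler flow, assuming the renewed privkey.pem / cert.pem / fullchain.pem are already somewhere on the NAS (the source path below just reuses the one from the script above; adjust to yours):
# Run as root. The {randomchars} folder id is machine-specific and is read from the DEFAULT file.
new_certs_dir="/volume1/exch-cert"
archive_dir="/usr/syno/etc/certificate/_archive/$(cat /usr/syno/etc/certificate/_archive/DEFAULT)"
cp "${new_certs_dir}/"{privkey,cert,fullchain}.pem "${archive_dir}/" && chown root:root "${archive_dir}/"*.pem
/usr/syno/bin/synow3tool --gen-all && systemctl reload nginx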
@telnetdoogie this is still not working for me. But at least now I got more descriptive errors...
[WARN ] synow3tool --gen-all failed
(...)
nginx: [emerg] SSL_CTX_use_PrivateKey("/usr/syno/etc/www/certificate/system_default/b4d6e608-ec11-4368-8219-db76840ed58f.pem") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
(...)
/usr/syno/bin/synow3tool: unrecognized option '--restart-dsm-service'
[EDIT] Deleting stuff under /usr/syno/etc/www/certificate/[...] messed up my DSM installation, so my recommendation is: don't do it.
If I remove everything inside /usr/syno/etc/www/certificate/system_default/, then synow3tool --gen-all will generate the necessary certificates in those folders.
Also, the SurveillanceStation folder that was missing from my configuration was also located there, under /usr/syno/etc/www/certificate/.
However, I still get this error: "Generated nginx tmp config is not valid", which only happens if I change the certificates in the other folders.
I still don't know what I'm missing here.
@footswitch The error message hints that the key you are copying doesn't match the underlying CSR of the certificate. It's not an issue with the script.
@fhemberger THANK YOU for making me recheck everything. It was painful but I got it now.
DSM was misleading me into using the wrong "chain" file all along.
My certificate files are generated with win-acme, which outputs four files:
- ...-key.pem --> privkey.pem
- ...-crt.pem --> cert.pem
- ...-chain-only.pem --> fullchain.pem (wrong)
- ...-chain.pem -- NOT USED
When I replace the certificate in the UI (Control Panel --> Security --> Certificates), I select the files above, with ...-chain-only.pem being the intermediate certificate, and the certificate was getting replaced with no errors.
But in order for the script to work, the files must be:
- ...-key.pem --> privkey.pem
- ...-crt.pem --> cert.pem
- ...-chain-only.pem -- NOT USED
- ...-chain.pem --> fullchain.pem (right)
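One way to catch this kind of mismatch before running the script is to compare the public key inside cert.pem with the one derived from privkey.pem; this is plain openssl, nothing Synology-specific:
# The two digests must be identical, otherwise nginx fails with "key values mismatch"
openssl x509 -noout -pubkey -in cert.pem | openssl dgst -sha256
openssl pkey -pubout -in privkey.pem | openssl dgst -sha256
# And fullchain.pem should start with the same leaf certificate as cert.pem
openssl x509 -noout -subject -in fullchain.pem
openssl x509 -noout -subject -in cert.pem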
After reading your last comment, specifically the part about win-acme...
you (and others) may benefit from using certbot in Docker directly on your Synology.
I use the certificates only for the reverse proxy (in conjunction with Pomerium) to access my Docker containers and the DSM interface.
I created the certificates with:
docker run -it --rm --name certbot -v "/volume1/Primero/Certs/certbot/etc/letsencrypt:/etc/letsencrypt" -v "/volume1/Primero/Certs/certbot/var/lib/letsencrypt:/var/lib/letsencrypt" -v "/volume1/Primero/Certs/certbot/log/:/var/log" -v "/volume1/Primero/Certs/certbot/certs_to_syno:/certs" certbot/certbot certonly --agree-tos --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --preferred-challenges dns --debug-challenges -d *.carXXXXXX.xyz -d carXXXXXX.xyz
And then I have a scheduled task in Synology's Task Scheduler that runs every X days to renew them with:
docker run --rm --name certbot -v "/volume1/Primero/Certs/certbot/etc/letsencrypt:/etc/letsencrypt" -v "/volume1/Primero/Certs/certbot/var/lib/letsencrypt:/var/lib/letsencrypt" -v "/volume1/Primero/Certs/certbot/log/:/var/log/letsencrypt/" -v "/volume1/Primero/Certs/certbot/certs_to_syno:/certs" certbot/certbot renew --no-random-sleep-on-renew --deploy-hook "cp /etc/letsencrypt/live/caXXXXXX.xyz/*.pem /certs"
With the last part of the renewal command (--deploy-hook "cp...."), if the certificates were renewed (meaning it was time to do so; the standard is 30 days before expiration), they are copied to a specific directory and then (not shown above) copied to the corresponding directory inside the Synology certificate tree (in my case: /usr/syno/etc/certificate/_archive/u0rjgL). I could copy them directly to the final Synology directory, but I use this bridge to detect any errors in the process.
Then, for me it is enough to have in the script:
synow3tool --gen-all && systemctl reload nginx
to have the new certificates "activated" for the reverse proxy.
With this script I have the full end-to-end process automated, from renewal to activation.
The script has to be run with root privileges.
The complete script does a lot of validations that may be redundant. I am not sharing it here because it does not have a good "coding standard" (i.e. I use hard-coded values instead of variables) and the comments are in Spanish, but I'm happy to share more info if required.
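A rough sketch of the step he describes but doesn't show (copying from the bridge directory into the Synology certificate tree and activating); the bridge path is the /certs volume mounted into the certbot container above, and the _archive id is his example:
#!/bin/bash
# Sketch only; run as root from a Synology Task Scheduler task after the renew task
bridge_dir="/volume1/Primero/Certs/certbot/certs_to_syno"
archive_dir="/usr/syno/etc/certificate/_archive/u0rjgL"   # read yours from /usr/syno/etc/certificate/_archive/DEFAULT
cp "${bridge_dir}/"{privkey,cert,fullchain}.pem "${archive_dir}/" || exit 1
chown root:root "${archive_dir}/"*.pem
/usr/syno/bin/synow3tool --gen-all && systemctl reload nginx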
@carmatana thank you for your input.
In my case we have the certs in Windows because we need them there to begin with.
In this discussion we all have the same starting point: our certificates aren't managed by DSM.
The only thing I didn't test is whether it's enough to copy to just a single default folder and run:
synow3tool --gen-all && systemctl reload nginx
or whether we need to copy to every folder and restart every service.
@footswitch dropping the certs, named appropriately, in the …/_archive/{randomletters referred to in DEFAULT} folder and then running the three commands:
/usr/syno/bin/synow3tool --gen-all
/usr/syno/bin/synow3tool --nginx=reload
/usr/syno/bin/synow3tool --restart-dsm-service
should take care of most needs. But as the script above shows, there are some additional folders where certificates are also stored that the above three commands won’t sync to. My latest thoughts are that it might be a good idea to just replace these folders with symlinks to the default folder so it’s easier to keep everything synced, versus copying the certs into more locations. But I haven’t tried that, nor am I sure that those other locations are actually needed or referenced anywhere.
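He hasn't tried it, but a sketch of what that symlink swap could look like for one of the extra locations (the path is just the ftpd example from earlier in the thread; back up the original folder and verify the owning service still starts):
# Untested idea, run as root
default_dir="/usr/syno/etc/certificate/system/default"
extra_dir="/usr/syno/etc/certificate/smbftpd/ftpd"   # example location not synced by synow3tool
mv "${extra_dir}" "${extra_dir}.bak" && ln -s "${default_dir}" "${extra_dir}"
/usr/syno/bin/synosystemctl restart ftpd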
I’ve been keeping my own tweaked version of this awesome script Here. Similar to @carmatana, I generate certs with certbot in Docker on the Synology on a schedule. I don’t use the scp portion to copy; I just schedule this script to run every week, so I added logic to check whether the latest certs differ from those already installed for DSM, and only do the updates and sync if they differ.
@telnetdoogie essentially, your "external" source of certs is the Docker container.
It's still external; the difference is you have kinda direct access to the Synology filesystem.
Some people have SMB access to the filesystem.
Some have scp access to the filesystem.
It's all good. :-D
The big picture for this:
- We all generate the certs somewhere, potentially for use in a number of places on our network.
- We need to get the certs regularly and automagically installed and running in a Synology
So we
- copy updated certs into the Syno
- run this script which puts them in all the necessary places
- AND it restarts necessary packages/services
Sorry, but I think acmesh perfectly fits this use case. That said, of course, this is not the same as the case where your Synology device has Internet access and you can use Let's Encrypt directly from the DSM web interface.
Many of us use exactly that somewhere else. The whole purpose of this script is properly injecting the resulting certs into DSM :)
What I meant was that acmesh has a deploy hook for Synology DSM. Read some info here.
Highly suggest checking out https://github.com/reddec/syno-cli/tree/master
It uses the same API as acmesh, but it's an easy-to-consume CLI for folks to leverage.
What I meant was that acmesh has a deploy hook for Synology DSM. Read some info here.
According to those documents, this deploy hook does install the cert into DSM, but does NOT install it into the various packages and services ("Certificate should now show up in "Control Panel" -> "Security" -> "Certificates" and can be assigned to Services..."). This script does it all rather nicely.
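For reference, invoking that acme.sh deploy hook looks roughly like this; the SYNO_* variable names are the ones documented for acme.sh's synology_dsm hook, so double-check them against your acme.sh version, and note (as said above) it only installs the cert into DSM, not into the extra package folders this script covers:
# Sketch, assuming acme.sh has already issued a certificate for example.com
export SYNO_Scheme="http"          # or https
export SYNO_Hostname="localhost"   # run on the NAS itself, or point at its address
export SYNO_Port="5000"
export SYNO_Username="cert-admin"  # a DSM administrator account (example name)
export SYNO_Password="********"
export SYNO_Certificate=""         # description of the cert to replace; "" targets the default one
acme.sh --deploy -d example.com --deploy-hook synology_dsm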
I love the exploring being done by everyone! It's true:
- Those who can use acme.sh deploy hooks can take care of the transfer to DSM more easily that way
- If synow3tool --gen-all takes care of deploying into all packages and services, then the rest of the work can be avoided
I'll do some exploring on both for my use case, when I get a few more round 'tuits. ;)
(My use case: certs generated in pfSense on another host+VM.)
- If synow3tool --gen-all takes care of deploying into all packages and services, then the rest of the work can be avoided
it doesn’t do ALL of them, but does seem to take care of the majority. I added a check script that shows all the locations that still have differing certs and eventually incorporated that into my modified version.
I had a hard time understanding whether the ones it “leaves out” are even really used / needed, but my use cases aren’t that complex (I don’t use ftp etc.)
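For anyone who wants to spot those leftover locations themselves, a rough stand-in for such a check (not @telnetdoogie's actual script) is to compare every cert.pem on the box against the fingerprint of the system default one:
# Run as root; prints any cert that still differs from the system default
default_fp=$(openssl x509 -noout -fingerprint -sha256 -in /usr/syno/etc/certificate/system/default/cert.pem)
find /usr/syno/etc/certificate /usr/local/etc/certificate -name cert.pem 2>/dev/null | while read -r f; do
    [[ "$(openssl x509 -noout -fingerprint -sha256 -in "$f")" != "$default_fp" ]] && echo "differs: $f"
done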
BTW @catchdave thank you SO much for this script! It’s made ssl renewals and applications incredibly simple and reliable. I’m all about “no human interaction” automation and this and the derivatives have been key to that for me!
Thank you very much for this script, which is a great help!!
I managed to make it work on my NAS from an NGINX Linux server that is used as a reverse proxy for all my services. The wildcard Let's Encrypt certificate for my domain is generated from that server too, which explains why this script is perfect for me.
I have a question (hope it's not too stupid): how did you manage to avoid having to type the password of the user created on the Synology each time the commands are sent?
When using this command from my nginx server, the password for the Synology user is requested twice (for the 2 actions it does):
sudo bash -c scp ${CERT_DIRECTORY}/{privkey,fullchain,cert}.pem $USER@$SYNOLOGY_SERVER:/tmp/ \
&& ssh $USER@$SYNOLOGY_SERVER 'sudo ./replace_synology_ssl_certs.sh'
I would like to find a way to do it passwordless so that I can put it into a crontab on the nginx server.
How did you guys manage to fully automate it (if some of you did)?
I also noticed that the single quotes are missing in that command, before scp and after /tmp/; it should be like this. Before finding that, the command was not working for me.
sudo bash -c 'scp ${CERT_DIRECTORY}/{privkey,fullchain,cert}.pem $USER@$SYNOLOGY_SERVER:/tmp/' \
&& ssh $USER@$SYNOLOGY_SERVER 'sudo ./replace_synology_ssl_certs.sh'
It was too beautiful to be real... When I execute the script, it resets (hard reboots) all my VMs running on VMM, which is a big problem for me :(
I see that footswitch mentioned using "reload" for nginx, but this is already implemented in the script provided by catchdave.
In any case, I tried commenting this out in the script, but it still resets all VMs.
I found out that it is the restart of the ScsiTarget package that does it...
Do you think it is important to restart that package once the certificate is changed?
If you have any idea, I would be very grateful, since this issue is blocking me from using this amazing script.
Do you think it is important to restart that package once the certificate is changed?
I'm not convinced that those apps need to be restarted. As a check, I'd recommend you try the script with services_to_restart and packages_to_restart empty, as well as emptying target_cert_dirs, and then run this script once everything is done. It will identify the folders you need to add back to target_cert_dirs that weren't handled by the Synology OS, and then you'll be able to populate the target folders and the packages / services to restart based only on those that it finds. Then you can test the apps you depend on to ensure they're working properly.
Regarding your SSH login: you should be able to import your public SSH key from the host you're copying FROM into the ~/.ssh/authorized_keys file on your Synology (for whichever user you're logging in as) and use key authentication, which won't require a password prompt.
For example, if you're copying the files from linuxhost as user bob:
- log in to linuxhost as bob and copy the contents of the file ~/.ssh/id_rsa.pub
If you're logging INTO the admin user of the Synology:
- log in to the Synology as admin and edit the file ~/.ssh/authorized_keys
- paste the contents of bob's public key at the bottom of the authorized_keys file (it should be a single line; don't add any additional line breaks etc.)
Then, when you attempt to ssh into the Synology from linuxhost, it should use the key instead of a password prompt. Do it manually the first time to get the prompt to add the machine to known_hosts, and once that's done, you won't have to use a password any more.
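A minimal sketch of that setup with stock OpenSSH tooling (host and user names are just the example ones above; DSM's sshd is strict about permissions, hence the chmod line):
# On linuxhost, as bob:
ssh-keygen -t ed25519              # skip if you already have a key pair
ssh-copy-id admin@your-synology    # appends your public key to ~/.ssh/authorized_keys on the NAS
# On the Synology, as admin (key auth silently falls back to password if these are too open):
chmod 755 "$HOME" && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
# Back on linuxhost:
ssh admin@your-synology            # should no longer ask for the account password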
I added the script I use to renew my SSL and copy the cert everywhere here: https://gist.github.com/catchdave/3f6f412bbf0f0cec32469fb0c9747295
Could be a useful starting place for folks to create their own (it won't be usable out of the box, since you will have different combinations of services & servers). This specific one is controlled by a cronjob on the server that Plex runs on.
Thank you very much for the quality of your answer, telnetdoogie!
I managed to make passwordless login work very well, and I ended up writing my own script to replace the certificate on the Synology. My need was not to replace the default certificate for everything, but just the one I use for Synology Drive, because port 6690 used by Drive cannot be proxied by nginx (it is not an HTTPS protocol). So for this port I do port forwarding, while all the rest goes through my nginx server, which has the certbot certificate automatically renewed. This explains why I needed a script to send the certificate to the NAS each time it is renewed on the nginx server.
So my need was just to use this certificate for Drive and leave the default one for all the rest.
I do the same with my firewall, which is also used for the SSL VPN: I use a script that calls the firewall's API to send and update the certificate each time it is renewed on the nginx server.
Thank you for the great job that has been done here. It helped me a lot!
Here is my script, if it can help anyone.
First, I manually uploaded a certificate to the Synology so that it creates the cert folder with the random-character name. I also configured "Synology Drive Server" to use this certificate so that the link is made on the system.
Then I just updated the script to use that folder name; the script simply updates the cert and restarts the "Synology Drive" package.
I made another script on the nginx server that copies that script to the Synology NAS and executes it.
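A sketch of what that Drive-only update script could look like; since the thread doesn't show which folder port 6690 actually reads, this copies to both the random-character _archive folder he mentions and the Drive package folder from the script above (both placeholders/examples), then restarts the package:
#!/bin/bash
# Sketch only, run as root; assumes the nginx-side script already dropped the renewed files in /tmp
archive_dir="/usr/syno/etc/certificate/_archive/xxxxxx"              # placeholder: the folder created for the Drive cert
drive_dir="/usr/local/etc/certificate/SynologyDrive/SynologyDrive"   # Drive package cert folder, from the script above
for d in "${archive_dir}" "${drive_dir}"; do
    cp /tmp/{privkey,cert,fullchain}.pem "${d}/" && chown root:root "${d}/"*.pem
done
/usr/syno/bin/synopkg restart SynologyDrive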
In my case too, it was enough to run synow3tool and do an nginx restart; thanks for the hints.
Here is my snippet, which finds the correct folder and gives me a chance to check the nginx config before restarting the server:
cp ~/certs/* /usr/syno/etc/certificate/_archive/$(cat /usr/syno/etc/certificate/_archive/DEFAULT)
/usr/syno/bin/synow3tool --gen-all
nginx -t
read -p "nginx config ok?"
systemctl restart nginx
Hi Catchdave,
I'm looking for a solution for my DS1010+ with DSM 5.2-5967 Update 9.
I found many articles on the net, but nothing has helped me yet.
Maybe you already have an idea.
Best regards,
DJG.
@DJGauthier: I have never had a system with a 5.x OS, so I simply don't know what it looks like. Fundamentally, you need to find where the SSL certs are stored and then find the right way to restart the associated services so they pick up the new certs when you upload them. Perhaps the locations are similar to the 6.x version of this script (look at the gist's history). You can probably find the certs easily using a find command once you SSH in.
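For example, something along these lines should surface the certificate locations on an older DSM (the paths will differ from the 7.x ones above, so treat the output as your map):
# Run as root over SSH; lists likely certificate and key files
find /usr/syno /usr/local/etc -type f \( -name '*.pem' -o -name '*.crt' -o -name '*.key' \) 2>/dev/null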
FWIW, I just did a deeper dive on why platforms like pfSense don't have bash-type shells at all. It's because the security "attack surface" is too large for such capable shells. So they don't have it and discourage installing it! I can understand that... and in this case, the cost is simply to pre-expand those file lists. :)
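In other words, where the commands above lean on bash brace expansion, a plain POSIX sh just needs the file list written out, e.g.:
# bash brace-expansion version:
#   scp "${CERT_DIRECTORY}"/{privkey,fullchain,cert}.pem "$USER@$SYNOLOGY_SERVER:/tmp/"
# pre-expanded POSIX version:
scp "${CERT_DIRECTORY}/privkey.pem" "${CERT_DIRECTORY}/fullchain.pem" "${CERT_DIRECTORY}/cert.pem" "$USER@$SYNOLOGY_SERVER:/tmp/"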