Sync SSL certificates from Linux to a Synology NAS

Preamble

I have a wildcard SSL certificate issued by Let’s Encrypt on a Debian Linux-based webserver, which also covers hostnames I reverse proxy with a Synology NAS. Until recently, I had been manually copying the SSL certificate across after each renewal. With the upcoming reduction in permitted validity for SSL certificates, it’s time to automate!

Synology DiskStation Manager (DSM) doesn’t make this easy.

The DSM API to add certificates isn’t documented in any useful way. This has resulted in some (IMHO) pretty messy scripts that mostly seem to overwrite files in various directories, restart services and hope for the best. If you also need to routinely copy an SSL certificate to a Synology NAS, here’s an approach that uses the DSM API instead:

How it works

There are two scripts:

  1. A script on the Linux server that publishes the issued certificate into a specific directory
  2. A script on the Synology NAS that uses scp to retrieve the certificates, then installs them via the DSM API

The user on the NAS needs to be able to SSH into the server using a passphrase-less key, with group access to the directory containing the certificates.

Checklist

There are quite a few things to check before the scripts will work:

Linux server

  • Internet-facing, or at least accessible to the NAS via SSH
  • SSH configuration that permits key authentication
  • Group on the server called certsync – this helps keep the certificate files secure
  • Directory on the server called /var/lib/certsync – root is the owner, certsync is the group, 750 for the directory permissions (i.e., no world access)
  • Non-root user account – needs to be a member of the certsync group, and must be able to SSH in (a setup sketch follows this list)
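
A minimal sketch of that setup, assuming the non-root account is called certuser (substitute your own username) – run as root on the Linux server:

groupadd certsync
install -d -o root -g certsync -m 0750 /var/lib/certsync
usermod -aG certsync certuser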

This also assumes you’re using certbot to issue Let’s Encrypt certificates. If you are using a different tool or a different CA, you will need to make a bunch of changes to how the script works.

Synology NAS

  • Running some flavour of DSM 7.x (I’ve tested this with DSM 7.1.1, but it should work with others)
  • SSH service enabled (Control Panel > Terminal & SNMP > Terminal)
  • User account that is a member of the administrators group
  • User home service enabled (Control Panel > User & Group > Advanced)
  • Optional: email notifications enabled (Control Panel > Notification > Email)

After confirming that the NAS user account can SSH in successfully, you need to create an RSA-based SSH key with no passphrase, and transfer the public key to the user account on the Linux server. The easiest way to do this is with the ssh-keygen and ssh-copy-id commands.

This guide doesn’t cover that process in detail, but you ultimately need to check that you can log into the Linux server via SSH from the Synology NAS without being prompted for a password.
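
For reference, the key setup looks roughly like this when run on the NAS as the NAS user (the username and hostname here match the example configuration further down – substitute your own):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub linuxuser@linux.server.host.name
ssh -i ~/.ssh/id_rsa linuxuser@linux.server.host.name

If the last command logs you in without a password prompt, you’re set. If ssh-copy-id isn’t available on the NAS, append the contents of id_rsa.pub to the Linux user’s ~/.ssh/authorized_keys by hand.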

Scripts

Linux server

This script runs as a certbot deploy hook after any certificate has been renewed; if the renewed certificate is in the list of certificates to copy, it copies all of the relevant files atomically to a staging directory.

There are three configuration parameters – you probably only need to adjust LINEAGE_SUFFIXES (an array of certificate suffixes – check the directory structure in /etc/letsencrypt/live for what to use here).
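
If you’re not sure what the lineage names are, certbot can list them for you (the directory names under /etc/letsencrypt/live are what the script matches against):

sudo certbot certificates
sudo ls /etc/letsencrypt/live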

#!/usr/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/synology-sync.sh
# Certbot deploy hook to sync PEM files for Synology DSM pull

set -euo pipefail

# === Configuration parameters  - edit these ===
STAGE_GROUP="certsync"
STAGE_DIR="/var/lib/certsync"
LINEAGE_SUFFIXES=("example.com-wildcard" "example.com.au-wildcard")

# === Stop editing ===

# Determine whether renewed lineage needs to be copied to the staging dir
lineage_allowed() {
  # Allow all if no filters specified
  if [[ ${#LINEAGE_SUFFIXES[@]} -eq 0 ]]; then
    return 0
  fi
  local lineage_path="$1"
  local base; base="$(basename -- "$lineage_path")"
  local sfx
  for sfx in "${LINEAGE_SUFFIXES[@]}"; do
    # Match either by full last component, or by path suffix
    if [[ "$base" == "$sfx" ]] || [[ "$lineage_path" == */"$sfx" ]]; then
      return 0
    fi
  done
  return 1
}

err() { echo "ERROR: $*" >&2; }

LINEAGE="${RENEWED_LINEAGE:-}"
if [[ -z "$LINEAGE" ]]; then
  err "RENEWED_LINEAGE is not set (this script is meant to run as a certbot deploy hook)."
  exit 1
fi

if ! lineage_allowed "$LINEAGE"; then
  echo "Skipping lineage not in LINEAGE_SUFFIXES: $LINEAGE"
  exit 0
fi

SRC_PRIVKEY="${LINEAGE}/privkey.pem"
SRC_CERT="${LINEAGE}/cert.pem"
SRC_CHAIN="${LINEAGE}/chain.pem"

[[ -f "$SRC_PRIVKEY" && -f "$SRC_CERT" && -f "$SRC_CHAIN" ]] || {
  err "Missing certificate files in $LINEAGE"
  exit 1
}

# Create staging dir if needed
install -d -o root -g "${STAGE_GROUP}" -m 0750 "${STAGE_DIR}"

# Create subdirectory for this specific lineage
LINEAGE_NAME="$(basename -- "$LINEAGE")"
LINEAGE_STAGE_DIR="${STAGE_DIR}/${LINEAGE_NAME}"
install -d -o root -g "${STAGE_GROUP}" -m 0750 "${LINEAGE_STAGE_DIR}"

# Stage atomically: copy into a temp dir, then move into place
TMP="$(mktemp -d "${STAGE_DIR}.tmp.XXXXXX")"
trap 'rm -rf "$TMP"' EXIT

# Copy with correct permissions (group-read for sync group; no world access to private key)
install -m 0640 "$SRC_PRIVKEY" "${TMP}/privkey.pem"
install -m 0644 "$SRC_CERT" "${TMP}/cert.pem"
install -m 0644 "$SRC_CHAIN" "${TMP}/chain.pem"

# Version marker: Subject + NotAfter + SHA256 fingerprint
CERTLOG="$(openssl x509 -in "$SRC_CERT" -noout -subject -enddate -fingerprint -sha256)"
printf '%s\n\n' "$CERTLOG" > "${TMP}/version.txt"
chmod 0644 "${TMP}/version.txt"

# Finalize atomically (keep stable file names expected by the NAS)
mv -f "${TMP}/privkey.pem"  "${LINEAGE_STAGE_DIR}/privkey.pem"
mv -f "${TMP}/cert.pem"     "${LINEAGE_STAGE_DIR}/cert.pem"
mv -f "${TMP}/chain.pem"    "${LINEAGE_STAGE_DIR}/chain.pem"
mv -f "${TMP}/version.txt"  "${LINEAGE_STAGE_DIR}/version.txt"

# Ensure ownership/perms
chown root:"${STAGE_GROUP}" "${LINEAGE_STAGE_DIR}/privkey.pem"
chmod 0640                  "${LINEAGE_STAGE_DIR}/privkey.pem"
chown root:"${STAGE_GROUP}" "${LINEAGE_STAGE_DIR}/cert.pem" "${LINEAGE_STAGE_DIR}/chain.pem" "${LINEAGE_STAGE_DIR}/version.txt"
chmod 0644                  "${LINEAGE_STAGE_DIR}/cert.pem" "${LINEAGE_STAGE_DIR}/chain.pem" "${LINEAGE_STAGE_DIR}/version.txt"

echo "Staged cert for $(basename -- "$LINEAGE") at ${LINEAGE_STAGE_DIR}"

Once edited, the script needs to be placed in /etc/letsencrypt/renewal-hooks/deploy – call it synology-sync.sh, and give it execute permissions (chmod a+x synology-sync.sh).
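
Because the script keys off the RENEWED_LINEAGE environment variable, you can test it by hand before the next real renewal – for example, using the example.com-wildcard lineage from the configuration above:

sudo env RENEWED_LINEAGE=/etc/letsencrypt/live/example.com-wildcard \
  /etc/letsencrypt/renewal-hooks/deploy/synology-sync.sh
sudo ls -l /var/lib/certsync/example.com-wildcard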

Synology NAS

This script is designed to run via Task Scheduler in the Control Panel, and handles one certificate per Task Scheduler entry. If you have multiple certificates, you need to customise and create multiple Task Scheduler entries!

The script logs into the Linux server on a schedule (I recommend daily to start, then move to weekly if everything is working), pulls the certificate files down, checks whether they differ from the installed version and, if so, installs them.

There are several configuration parameters that you’ll need to customise to taste. You definitely need to adjust the following, but review all options down to where it says stop editing:

  • CERT_NAME – the name of the certificate in the DSM UI
  • LINUX_HOST – the FQDN of the Linux server
  • LINUX_USER – the user on the Linux server you’re SSH’ing in as
  • LINUX_LINEAGE_SUFFIX – the suffix used in the Let’s Encrypt directory structure
  • LOCAL_USER – the NAS user account that you’re SSH’ing from

#!/bin/sh
# synology-import-cert.sh — DSM 7.1.1: pull PEMs from a Linux server, validate, import, and reload nginx

set -eu
umask 077
export PATH="/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:$PATH"

# === Configuration parameters - edit these (one cert per Task Scheduler entry) ===

# Name / description of the certificate entry in the DSM to import
CERT_NAME="*.example.com"

# Set this certificate as the default certificate for DSM (true/false)
SET_AS_DEFAULT="true"

# Linux server details and user account
LINUX_HOST="linux.server.host.name"
LINUX_USER="linuxuser"
LINUX_BASE_PATH="/var/lib/certsync"
LINUX_LINEAGE_SUFFIX="example.com-wildcard" # lineage directory name (e.g., from /etc/letsencrypt/live/...)
LINUX_PATH="${LINUX_BASE_PATH}/${LINUX_LINEAGE_SUFFIX}" # contains privkey.pem, cert.pem, chain.pem, version.txt

# (Optional but recommended) Strict host key checking — make sure the Linux server's host key is already in known_hosts for the account scp runs as (LOCAL_USER).
SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=yes"

# Local DSM user that owns the SSH key used for scp
LOCAL_USER="synouser"
LOCAL_HOME="/var/services/homes/${LOCAL_USER}"
LOCAL_SSH_KEY="${LOCAL_HOME}/.ssh/id_rsa"

# === Stop editing ===

# Working/state locations on DSM
WORKDIR="/tmp/certsync/${LINUX_LINEAGE_SUFFIX}" #transient
STATEDIR="/var/packages/CertSync/var/${LINUX_LINEAGE_SUFFIX}" # persistent
LOGFILE="${STATEDIR}/run.log"

# Find command paths
JQ="/bin/jq"
SCP="/bin/scp"
OPENSSL="/bin/openssl"
SYNOWEBAPI="/usr/syno/bin/synowebapi"

# Set up directories
install -d -m 0755 "$STATEDIR"
install -d -o "${LOCAL_USER}" -g users -m 0700 "$WORKDIR"

# Logging
touch "$LOGFILE"
log() { printf '%s %s\n' "$(date -u +'%Y-%m-%dT%H:%M:%SZ')" "$*" | tee -a "$LOGFILE" ; }

# Pull into a temp dir then install atomically
TMPDIR="$(mktemp -d "${WORKDIR}/.pull.XXXXXX")"
cleanup() { rm -rf "$TMPDIR" ; }
trap cleanup EXIT

# Let LOCAL_USER write into TMPDIR (scp runs as that user)
chown "${LOCAL_USER}:users" "$TMPDIR"
chmod 0700 "$TMPDIR"

log "Pulling certs from ${LINUX_USER}@${LINUX_HOST}:${LINUX_PATH}"

pull_file() {
  _remote="$1"
  _dest="$2"
  _error="$(/bin/su -s /bin/sh "${LOCAL_USER}" -c \
    "$SCP -i '${LOCAL_SSH_KEY}' ${SSH_OPTS} '${LINUX_USER}@${LINUX_HOST}:${LINUX_PATH}/${_remote}' '${_dest}'" 2>&1)" || {
    log "ERROR: scp failed for ${_remote}"
    log "SCP error output: ${_error}"
    return 1
  }
  log "Successfully pulled: ${_remote}"
}

pull_file "privkey.pem"  "${TMPDIR}/privkey.pem"
pull_file "cert.pem"     "${TMPDIR}/cert.pem"
pull_file "chain.pem"    "${TMPDIR}/chain.pem"
pull_file "version.txt"  "${TMPDIR}/version.txt"

for f in privkey.pem cert.pem chain.pem version.txt; do
  [ -f "${TMPDIR}/${f}" ] || { log "ERROR: failed to pull ${f}"; exit 1; }
done

# Sanity checks
$OPENSSL pkey -in "${TMPDIR}/privkey.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid privkey.pem'; exit 1; }
$OPENSSL x509 -in "${TMPDIR}/cert.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid cert.pem'; exit 1; }
$OPENSSL x509 -in "${TMPDIR}/chain.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid chain.pem'; exit 1; }

# cert ↔ key match (key-type agnostic: SPKI hash)
CERT_SPKI="$(
  $OPENSSL x509 -in "${TMPDIR}/cert.pem" -noout -pubkey |
  $OPENSSL pkey -pubin -outform der 2>/dev/null |
  $OPENSSL dgst -sha256 | awk '{print $2}'
)"
KEY_SPKI="$(
  $OPENSSL pkey -in "${TMPDIR}/privkey.pem" -pubout -outform der 2>/dev/null |
  $OPENSSL dgst -sha256 | awk '{print $2}'
)"
[ "$CERT_SPKI" = "$KEY_SPKI" ] || { log "ERROR: public key mismatch (SPKI)"; exit 1; }

# Skip if unchanged
if [ -f "${STATEDIR}/version.txt" ] && cmp -s "${STATEDIR}/version.txt" "${TMPDIR}/version.txt"; then
  log "No new certificate (version unchanged)."
  exit 0
fi

# Install with correct permissions
install -m 0600 "${TMPDIR}/privkey.pem"  "${WORKDIR}/privkey.pem"
install -m 0644 "${TMPDIR}/cert.pem"     "${WORKDIR}/cert.pem"
install -m 0644 "${TMPDIR}/chain.pem"    "${WORKDIR}/chain.pem"
install -m 0644 "${TMPDIR}/version.txt"  "${WORKDIR}/version.txt"

# Find certificate ID using CERT_NAME
CERT_LIST="$("$SYNOWEBAPI" --exec api=SYNO.Core.Certificate.CRT method=list version=1)"
CERT_ID="$(echo "$CERT_LIST" | $JQ -r --arg desc "$CERT_NAME" '.data.certificates[] | select(.desc == $desc) | .id')"

if [ -z "$CERT_ID" ]; then
  log "ERROR: Certificate '${CERT_NAME}' not found in DSM"
  exit 1
fi

log "Certificate ID: ${CERT_ID}"

# Set whether certificate is configured as default
if [ "$SET_AS_DEFAULT" = "true" ]; then
  AS_DEFAULT_PARAM='as_default="true"'
  IMPORT_ACTION="Certificate updated and set as default."
else
  AS_DEFAULT_PARAM='as_default="false"'
  IMPORT_ACTION="Certificate updated."
fi

# Install certificate and restart nginx
"$SYNOWEBAPI" --exec api=SYNO.Core.Certificate method=import version=1 \
  key_tmp="\"${WORKDIR}/privkey.pem\"" \
  cert_tmp="\"${WORKDIR}/cert.pem\"" \
  inter_cert_tmp="\"${WORKDIR}/chain.pem\"" \
  id="\"${CERT_ID}\"" \
  desc="\"${CERT_NAME}\"" \
  $AS_DEFAULT_PARAM || { log "ERROR: import for certificate name: ${CERT_NAME} failed"; exit 1; }

# Persist version marker
cp -f "${WORKDIR}/version.txt" "${STATEDIR}/version.txt"
log "${IMPORT_ACTION}"
exit 0

Once edited, you need to create a Task Scheduler entry in the DSM (Control Panel > Task Scheduler > Create > Scheduled Task > User-defined script).

On the General tab, give the task a descriptive name and select the root user.

On the Schedule tab, set it to run daily. Once working, I recommend backing this off to weekly.

On the Task Settings tab, configure Notifications to your taste (if you have email notifications running), then paste the script in the User-defined script box.

Click OK, then click OK on the warning window.
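
Once the task exists, I suggest running it once by hand (select the task, then click Run) and checking the log it writes – for the example.com-wildcard lineage used above, over SSH to the NAS:

sudo cat /var/packages/CertSync/var/example.com-wildcard/run.log

You should see either a successful import or a "No new certificate" message; the DSM certificate screen should also show the new expiry date after a successful run.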

Home Internet, 2025 edition

Beginnings of Broadband

In June of 2002, I signed up for my first broadband internet plan. I had very little money with which to buy equipment – and broadband internet at home was an extravagance at the time – but I couldn’t have been more excited! Only $70 a month for blistering 256 kbit/s DSL with Internode, using a PCI DSL modem with drivers that crashed the computer on a semi-regular basis .. but hey, at least it wasn’t dialup.

Fibre for the Few

Home fibre internet services have been available in certain parts of Australia since 2010, with progressively more areas coming online over the following years. Through a series of questionable government decisions, the Australian national wholesale broadband provider NBNCo expanded the network by using a mix of different technologies – mostly VDSL and HFC.

Compared to these access technologies, fibre is significantly more capable and more resilient. In some ways, it’s a situation of the “haves vs. have-nots” – with the haves getting faster and more reliable home internet connections.

In the late 2010s, NBNCo made it possible to pay for an upgrade to fibre. The quotes were astronomical (into the six figures in some cases), so not many people went through with the upgrade. Reports at the time suggested there had been hundreds of applications, though only tens of actual upgrades.

Waiting for the NBN

When the NBN rollout finally reached me in April of 2020, I was lucky enough to live in an area being activated with the second-best technology type: HFC. I was finally able to get myself a “real” broadband plan – 100 Mbit/s with Aussie Broadband! Not the 1 Gbit/s the government had promised many years earlier, but a significant improvement on the increasingly unreliable 10 Mbit/s DSL I had been stuck using:

I moved home in 2021, to an apartment with a VDSL service – once again limited to no more than 100 Mbit/s. NBNCo wanted just over $8000 for the upgrade to fibre. I genuinely thought about it for more than a few seconds, but ended up politely declining..

Finally on Fibre

In late 2024, NBNCo productised a process to upgrade apartment buildings using certain types of VDSL services to fibre. It was a silly amount of money – though not nearly as silly as the $8000 figure provided to me a few years prior. I talked myself into it in early 2025, and signed on the dotted line to get the process underway.

The whole thing took four months and far too many follow-up emails .. but I ended up with an ugly GPON fibre box on the wall of my apartment, and a much much faster connection speed than the previous VDSL service – 1 Gbit/s:

Shortly after I had fibre installed, NBNCo made good on their promise to release multi-gigabit services. All of a sudden, what I’d just upgraded to was not fast enough! 🤓

Fast forward to today: a lovely tech from NBNCo replaced my fibre box with a brand new, XGS-PON capable one (though the service still uses the GPON standard at the moment) – and the speeds speak for themselves:

2 Gbit/s! Not quite the 10 Gbit/s dream that the new fibre box promises, but getting there. Still, a casual 8000 times faster than that first broadband connection I had all those years ago.

Terms

Just in case some of the acronyms don’t make sense –

  • DSL – Digital Subscriber Line. Used for running slower broadband internet services over phone lines, while permitting phone calls at the same time.
  • VDSL – Very-high-bit-rate Digital Subscriber Line. Still uses phone lines, but much faster than standard DSL. Can be just as unreliable too!
  • HFC – Hybrid Fibre Coaxial. Rather than phone lines, it uses the same cabling as cable TV to more reliably offer high speed broadband internet.
  • GPON – Gigabit Passive Optical Network. A standard for fibre to the home internet services; allows for up to 2.5 Gbit/s.
  • XGS-PON – 10 Gigabit Symmetrical Passive Optical Network. A newer standard that offers up to 10 Gbit/s symmetrical speeds over fibre.

Plex Media Server on Debian Bookworm, Synology NAS

After mistakes were made with a previous installation, I had to completely reinstall the Linux server that I use to run Plex Media Server. For the sake of familiarity, I am using the latest version of Debian (at time of writing: Debian 12.9 “bookworm”). All media is stored on a Synology NAS, shared to the Linux server via a few NFS mounts – this adds a few extra complications that are worth being aware of.

Here are the notes I took during set up – sharing them here in case they are useful to anyone else:

Networking

Make sure that both the Linux server and the Synology NAS have fixed IP addresses and are able to communicate with each other. I’ve got mine set up on the same subnet using fixed DHCP leases, but whatever works for you.

Synology NFS sharing

Make sure the Synology NAS has the folders with your media shared via NFS. There are other options available, but NFS is probably the easiest. First, enable NFS:

Control Panel > File Services > NFS

  • Check Enable NFS service
  • Maximum NFS protocol: NFSv4.1

Then, for each of the folders you want Plex Media Server to have access to:

Control Panel > Shared Folder > select folder > Edit > NFS Permissions > Create

  • Enter IP of the Linux server
  • Check Enable asynchronous
  • Check Allow connections from non-privileged ports
  • Take note of the Mount path at the bottom of the window
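
Once the exports are set up, you can confirm what the NAS is offering from the Linux server with showmount (part of nfs-common, installed in the next section) – substitute the NAS’s IP address:

showmount -e 192.168.x.x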

Debian Install

Assuming you’re working from the base install – you’ll need a couple of packages (install command after this list):

  • Install the nfs-common package
  • Install the gpg package (needed for automatically updating Plex)
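
Both can be installed in one go:

sudo apt install nfs-common gpg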

Linux NFS mounts

Next, you need the NFS mounts to come up on boot. Keep in mind this is Debian 12 with no GUI – if you’re on another distribution, or if you have a GUI, there may be a better way to do this.

Create the relevant directories in /media – I used /media/nfs/Movies and /media/nfs/TV Shows

Edit the /etc/fstab file and add in the following (you’ll need to make changes to suit your environment)

# NFS mounts for Plex Media Server
192.168.x.x:/volume1/Movies /media/nfs/Movies nfs defaults 0 0
192.168.x.x:/volume1/TV\040Shows /media/nfs/TV\040Shows nfs defaults 0 0

A few notes:

  • The IP address before the : is the Synology NAS
  • The folder after the : is the mount point you noted earlier
  • If you have spaces in the directory name, use \040 in place of the space character
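
Before worrying about boot ordering, it’s worth confirming the entries themselves work – mount everything from fstab and check that the sizes look right:

sudo mount -a
df -h /media/nfs/Movies '/media/nfs/TV Shows'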

I had a huge amount of trouble getting the NFS mounts to come up on boot. The mount process was attempting to mount them before the DHCP client had received an IP address – even with all the various incantations to have fstab wait for the network to be available before attempting to mount!

If you also use DHCP to assign an IP address, the simple solution is to have an “exit script” for the DHCP client that runs when the IP address is bound – these scripts live in /etc/dhcp/dhclient-exit-hooks.d. Add a file with the following:

# e.g. /etc/dhcp/dhclient-exit-hooks.d/nfs-mounts
if [ "$reason" = "BOUND" ]; then
    mount -t nfs /media/nfs/Movies
    mount -t nfs '/media/nfs/TV Shows'
fi

Reboot, make sure that you have all of the media mounted and accessible in the relevant folders before continuing.

Find out what the systemd unit names for the mounts are – run the following command:

sudo systemctl list-units -t mount

That should show you something like the following:

  UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
  media-nfs-Movies.mount          loaded active mounted /media/nfs/Movies
  media-nfs-TV\x20Shows.mount     loaded active mounted /media/nfs/TV Shows

You’ll need the unit names in the next step.

Install Plex Media Server

Install as per the Server Installation instructions for Linux / Ubuntu – but don’t go to the URL to complete setup yet! The installation should have created a Plex Media Server service; it needs to be updated before the next reboot – I really recommend doing this part before completing the setup.
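
For reference, this is roughly what the official instructions boil down to on Debian – treat the key URL and repository line as assumptions to verify against Plex’s current documentation:

# needs wget (or curl); the signed-by keyring path is just a convention, adjust to taste
wget -qO- https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plex.gpg
echo "deb [signed-by=/usr/share/keyrings/plex.gpg] https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update && sudo apt install plexmediaserver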

Stop the existing server first, if needed:

sudo systemctl stop plexmediaserver.service

Add a systemd drop-in (override) to the service unit to ensure Plex Media Server is started only after the NFS mounts are available:

sudo systemctl edit plexmediaserver.service

Add the following in the section indicated at the top of the file:

[Unit]
Description=Plex Media Server
After=network.target network-online.target media-nfs-Movies.mount media-nfs-TV\x20Shows.mount

Basically – add the unit names for the mount points, separated by spaces, after network.target and network-online.target. Save, then start the Plex Media Server service:

sudo systemctl start plexmediaserver.service

Enable the service on startup:

sudo systemctl enable plexmediaserver.service
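
To confirm the override was picked up and the service came up cleanly:

sudo systemctl cat plexmediaserver.service      # shows the unit plus the [Unit] drop-in
sudo systemctl status plexmediaserver.service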

Finally, go to the server’s URL to complete setup. Expect this to take a while if you have a lot of media to index and create thumbnails for; the server may also crash and need a reboot a time or two – but it will settle down after the indexing is complete.