Sync SSL certificates from Linux to a Synology NAS

Preamble

I have a wildcard SSL certificate issued by Let’s Encrypt on a Debian Linux-based webserver, which also covers hostnames I reverse proxy with a Synology NAS. Up until recently, I have been manually copying the SSL certificate across after renewal. With the upcoming reduction in permitted validity for SSL certificates, it’s time to automate!

Synology DiskStation Manager (DSM) doesn’t make this easy.

The DSM API to add certificates isn’t documented in any useful way. This has resulted in some (IMHO) pretty messy scripts that mostly seem to overwrite files in various directories, restart services and hope for the best. If you also need to routinely copy an SSL certificate to a Synology NAS, here’s an approach that uses the DSM API instead:

How it works

There are two scripts:

  1. A script on the Linux server that publishes the issued certificate into a specific directory
  2. A script on the Synology NAS that uses SCP to retrieve the certificates and install them

The user on the NAS needs to be able to SSH into the server using an SSH key without a passphrase, with group access to the directory containing the certificates.

Checklist

There are quite a few things to check before the scripts will work:

Linux server

  • Internet-facing, or at least accessible to the NAS via SSH
  • SSH configuration that permits key authentication
  • Group on the server called certsync – this helps keep the certificate files secure
  • Directory on the server called /var/lib/certsync – root is the owner, certsync is the group, 750 for the directory permissions (i.e., no global permissions)
  • Non-root user account – needs to be a member of the certsync group, and must be able to SSH in (a setup sketch follows this list)
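
As a minimal sketch of that server-side setup (linuxuser is a placeholder – use whichever non-root account the NAS will SSH in as):

# Run on the Linux server as root
groupadd certsync
install -d -o root -g certsync -m 0750 /var/lib/certsync
usermod -aG certsync linuxuser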

This also assumes you’re using certbot to issue Let’s Encrypt certificates. If you are using a different tool or a different CA, you will need to make a bunch of changes to how the script works.

Synology NAS

  • Running some flavour of DSM 7.x (I’ve tested this with DSM 7.1.1, but it should work with others)
  • SSH service enabled (Control Panel > Terminal & SNMP > Terminal)
  • User account that is a member of the administrators group
  • User home service enabled (Control Panel > User & Group > Advanced)
  • Optional: email notifications enabled (Control Panel > Notification > Email)

After confirming that the NAS user account can SSH in successfully, you need to create an RSA-based SSH key with no passphrase, and transfer the public key to the user account on the Linux server. The easiest way to do this is using the ssh-keygen and ssh-copy-id commands.

This guide doesn’t cover how to do this, but you ultimately need to check that you can log into the Linux server via SSH from the Synology NAS without being prompted for a password.
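
For reference, the key setup looks roughly like this when run on the NAS as the admin user (the hostname and username are placeholders, and ssh-copy-id may not be present on every DSM build – if it isn’t, append the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the server by hand):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub linuxuser@linux.server.host.name
# This should now log in without a password prompt
ssh -i ~/.ssh/id_rsa linuxuser@linux.server.host.name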

Scripts

Linux server

This script runs as a deploy hook after any certificate has been renewed. If the renewed certificate is in the list of certificates to copy, it copies all of the relevant files atomically to a specified directory.

There are three configuration parameters – you probably only need to adjust LINEAGE_SUFFIXES (an array of certificate suffixes – check the directory structure in /etc/letsencrypt/live for what to use here).
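
If you’re not sure what lineage names exist on your server, a quick listing shows them:

sudo ls -1 /etc/letsencrypt/live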

#!/usr/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/synology-sync.sh
# Certbot deploy hook to sync PEM files for Synology DSM pull

set -euo pipefail

# === Configuration parameters  - edit these ===
STAGE_GROUP="certsync"
STAGE_DIR="/var/lib/certsync"
LINEAGE_SUFFIXES=("example.com-wildcard" "example.com.au-wildcard")

# === Stop editing ===

# Determine whether renewed lineage needs to be copied to the staging dir
lineage_allowed() {
  # Allow all if no filters specified
  if [[ ${#LINEAGE_SUFFIXES[@]} -eq 0 ]]; then
    return 0
  fi
  local lineage_path="$1"
  local base; base="$(basename -- "$lineage_path")"
  local sfx
  for sfx in "${LINEAGE_SUFFIXES[@]}"; do
    # Match either by full last component, or by path suffix
    if [[ "$base" == "$sfx" ]] || [[ "$lineage_path" == */"$sfx" ]]; then
      return 0
    fi
  done
  return 1
}

err() { echo "ERROR: $*" >&2; }

LINEAGE="${RENEWED_LINEAGE:-}"
if [[ -z "$LINEAGE" ]]; then
  err "RENEWED_LINEAGE is not set (this script is meant to run as a certbot deploy hook)."
  exit 1
fi

if ! lineage_allowed "$LINEAGE"; then
  echo "Skipping lineage not in LINEAGE_SUFFIXES: $LINEAGE"
  exit 0
fi

SRC_PRIVKEY="${LINEAGE}/privkey.pem"
SRC_CERT="${LINEAGE}/cert.pem"
SRC_CHAIN="${LINEAGE}/chain.pem"

[[ -f "$SRC_PRIVKEY" && -f "$SRC_CERT" && -f "$SRC_CHAIN" ]] || {
  err "Missing certificate files in $LINEAGE"
  exit 1
}

# Create staging dir if needed
install -d -o root -g "${STAGE_GROUP}" -m 0750 "${STAGE_DIR}"

# Create subdirectory for this specific lineage
LINEAGE_NAME="$(basename -- "$LINEAGE")"
LINEAGE_STAGE_DIR="${STAGE_DIR}/${LINEAGE_NAME}"
install -d -o root -g "${STAGE_GROUP}" -m 0750 "${LINEAGE_STAGE_DIR}"

# Stage atomically: copy into a temp dir, then move into place
TMP="$(mktemp -d "${STAGE_DIR}.tmp.XXXXXX")"
trap 'rm -rf "$TMP"' EXIT

# Copy with correct permissions (group-read for sync group; no world access to private key)
install -m 0640 "$SRC_PRIVKEY" "${TMP}/privkey.pem"
install -m 0644 "$SRC_CERT" "${TMP}/cert.pem"
install -m 0644 "$SRC_CHAIN" "${TMP}/chain.pem"

# Version marker: Subject + NotAfter + SHA256 fingerprint
CERTLOG="$(openssl x509 -in "$SRC_CERT" -noout -subject -enddate -fingerprint -sha256)"
printf '%s\n\n' "$CERTLOG" > "${TMP}/version.txt"
chmod 0644 "${TMP}/version.txt"

# Finalize atomically (keep stable file names expected by the NAS)
mv -f "${TMP}/privkey.pem"  "${LINEAGE_STAGE_DIR}/privkey.pem"
mv -f "${TMP}/cert.pem"     "${LINEAGE_STAGE_DIR}/cert.pem"
mv -f "${TMP}/chain.pem"    "${LINEAGE_STAGE_DIR}/chain.pem"
mv -f "${TMP}/version.txt"  "${LINEAGE_STAGE_DIR}/version.txt"

# Ensure ownership/perms
chown root:"${STAGE_GROUP}" "${LINEAGE_STAGE_DIR}/privkey.pem"
chmod 0640                  "${LINEAGE_STAGE_DIR}/privkey.pem"
chown root:"${STAGE_GROUP}" "${LINEAGE_STAGE_DIR}/cert.pem" "${LINEAGE_STAGE_DIR}/chain.pem" "${LINEAGE_STAGE_DIR}/version.txt"
chmod 0644                  "${LINEAGE_STAGE_DIR}/cert.pem" "${LINEAGE_STAGE_DIR}/chain.pem" "${LINEAGE_STAGE_DIR}/version.txt"

echo "Staged cert for $(basename -- "$LINEAGE") at ${LINEAGE_STAGE_DIR}"

Once edited, the script needs to be placed in /etc/letsencrypt/renewal-hooks/deploy – call it synology-sync.sh, and give it execute permissions (chmod a+x synology-sync.sh).
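
Assuming the edited script is in your current directory, installing it and simulating a deploy for one lineage (the lineage path uses the example name from above) looks like this:

sudo install -m 0755 synology-sync.sh /etc/letsencrypt/renewal-hooks/deploy/synology-sync.sh
# Simulate a deploy run to confirm the files land in the staging directory
sudo RENEWED_LINEAGE=/etc/letsencrypt/live/example.com-wildcard /etc/letsencrypt/renewal-hooks/deploy/synology-sync.sh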

Synology NAS

This script is designed to run via Task Scheduler in the Control Panel, and handles one certificate per Task Scheduler entry. If you have multiple certificates, you need to customise and create multiple Task Scheduler entries!

The script logs into the Linux server on a schedule (I recommend daily to start, then move to weekly if everything is working), pulls the certificate files down, checks to see if they’re different to the installed version and if so – installs them.

There are several configuration parameters that you’ll need to customise to taste. You definitely need to adjust the following, but review all options down to where it says stop editing:

  • CERT_NAME – the name of the certificate in the DSM UI
  • LINUX_HOST – the FQDN of the Linux server
  • LINUX_USER – the user on the Linux server you’re SSH’ing in as
  • LINUX_LINEAGE_SUFFIX – the suffix used in the Let’s Encrypt directory structure
  • LOCAL_USER – the NAS user account that you’re SSH’ing from

#!/bin/sh
# synology-import-cert.sh — DSM 7.1.1: pull PEMs from a Linux server, validate, import, and reload nginx

set -eu
umask 077
export PATH="/usr/syno/bin:/usr/syno/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:$PATH"

# === Configuration parameters - edit these (one cert per Task Scheduler entry) ===

# Name / description of the certificate entry in the DSM to import
CERT_NAME="*.example.com"

# Set this certificate as the default certificate for DSM (true/false)
SET_AS_DEFAULT="true"

# Linux server details and user account
LINUX_HOST="linux.server.host.name"
LINUX_USER="linuxuser"
LINUX_BASE_PATH="/var/lib/certsync"
LINUX_LINEAGE_SUFFIX="example.com-wildcard" # lineage directory name (e.g., from /etc/letsencrypt/live/...)
LINUX_PATH="${LINUX_BASE_PATH}/${LINUX_LINEAGE_SUFFIX}" # contains privkey.pem, cert.pem, chain.pem, version.txt

# (Optional but recommended) Strict host key checking — populate /root/.ssh/known_hosts first.
SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=yes"

# Local DSM user that owns the SSH key used for scp
LOCAL_USER="synouser"
LOCAL_HOME="/var/services/homes/${LOCAL_USER}"
LOCAL_SSH_KEY="${LOCAL_HOME}/.ssh/id_rsa"

# === Stop editing ===

# Working/state locations on DSM
WORKDIR="/tmp/certsync/${LINUX_LINEAGE_SUFFIX}" #transient
STATEDIR="/var/packages/CertSync/var/${LINUX_LINEAGE_SUFFIX}" # persistent
LOGFILE="${STATEDIR}/run.log"

# Find command paths
JQ="/bin/jq"
SCP="/bin/scp"
OPENSSL="/bin/openssl"
SYNOWEBAPI="/usr/syno/bin/synowebapi"

# Set up directories
install -d -m 0755 "$STATEDIR"
install -d -o "${LOCAL_USER}" -g users -m 0700 "$WORKDIR"

# Logging
touch "$LOGFILE"
log() { printf '%s %s\n' "$(date -u +'%Y-%m-%dT%H:%M:%SZ')" "$*" | tee -a "$LOGFILE" ; }

# Pull into a temp dir then install atomically
TMPDIR="$(mktemp -d "${WORKDIR}/.pull.XXXXXX")"
cleanup() { rm -rf "$TMPDIR" ; }
trap cleanup EXIT

# Let LOCAL_USER write into TMPDIR (scp runs as that user)
chown "${LOCAL_USER}:users" "$TMPDIR"
chmod 0700 "$TMPDIR"

log "Pulling certs from ${LINUX_USER}@${LINUX_HOST}:${LINUX_PATH}"

pull_file() {
  _remote="$1"
  _dest="$2"
  _error="$(/bin/su -s /bin/sh "${LOCAL_USER}" -c \
    "$SCP -i '${LOCAL_SSH_KEY}' ${SSH_OPTS} '${LINUX_USER}@${LINUX_HOST}:${LINUX_PATH}/${_remote}' '${_dest}'" 2>&1)" || {
    log "ERROR: scp failed for ${_remote}"
    log "SCP error output: ${_error}"
    return 1
  }
  log "Successfully pulled: ${_remote}"
}

pull_file "privkey.pem"  "${TMPDIR}/privkey.pem"
pull_file "cert.pem"     "${TMPDIR}/cert.pem"
pull_file "chain.pem"    "${TMPDIR}/chain.pem"
pull_file "version.txt"  "${TMPDIR}/version.txt"

for f in privkey.pem cert.pem chain.pem version.txt; do
  [ -f "${TMPDIR}/${f}" ] || { log "ERROR: failed to pull ${f}"; exit 1; }
done

# Sanity checks
$OPENSSL pkey -in "${TMPDIR}/privkey.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid privkey.pem'; exit 1; }
$OPENSSL x509 -in "${TMPDIR}/cert.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid cert.pem'; exit 1; }
$OPENSSL x509 -in "${TMPDIR}/chain.pem" -noout >/dev/null 2>&1 || { log 'ERROR: invalid chain.pem'; exit 1; }

# cert ↔ key match (key-type agnostic: SPKI hash)
CERT_SPKI="$(
  $OPENSSL x509 -in "${TMPDIR}/cert.pem" -noout -pubkey |
  $OPENSSL pkey -pubin -outform der 2>/dev/null |
  $OPENSSL dgst -sha256 | awk '{print $2}'
)"
KEY_SPKI="$(
  $OPENSSL pkey -in "${TMPDIR}/privkey.pem" -pubout -outform der 2>/dev/null |
  $OPENSSL dgst -sha256 | awk '{print $2}'
)"
[ "$CERT_SPKI" = "$KEY_SPKI" ] || { log "ERROR: public key mismatch (SPKI)"; exit 1; }

# Skip if unchanged
if [ -f "${STATEDIR}/version.txt" ] && cmp -s "${STATEDIR}/version.txt" "${TMPDIR}/version.txt"; then
  log "No new certificate (version unchanged)."
  exit 0
fi

# Install with correct permissions
install -m 0600 "${TMPDIR}/privkey.pem"  "${WORKDIR}/privkey.pem"
install -m 0644 "${TMPDIR}/cert.pem"     "${WORKDIR}/cert.pem"
install -m 0644 "${TMPDIR}/chain.pem"    "${WORKDIR}/chain.pem"
install -m 0644 "${TMPDIR}/version.txt"  "${WORKDIR}/version.txt"

# Find certificate ID using CERT_NAME
CERT_LIST="$("$SYNOWEBAPI" --exec api=SYNO.Core.Certificate.CRT method=list version=1)"
CERT_ID="$(echo "$CERT_LIST" | $JQ -r --arg desc "$CERT_NAME" '.data.certificates[] | select(.desc == $desc) | .id')"

if [ -z "$CERT_ID" ]; then
  log "ERROR: Certificate '${CERT_NAME}' not found in DSM"
  exit 1
fi

log "Certificate ID: ${CERT_ID}"

# Set whether certificate is configured as default
if [ "$SET_AS_DEFAULT" = "true" ]; then
  AS_DEFAULT_PARAM='as_default="true"'
  IMPORT_ACTION="Certificate updated and set as default."
else
  AS_DEFAULT_PARAM='as_default="false"'
  IMPORT_ACTION="Certificate updated."
fi

# Install certificate and restart nginx
"$SYNOWEBAPI" --exec api=SYNO.Core.Certificate method=import version=1 \
  key_tmp="\"${WORKDIR}/privkey.pem\"" \
  cert_tmp="\"${WORKDIR}/cert.pem\"" \
  inter_cert_tmp="\"${WORKDIR}/chain.pem\"" \
  id="\"${CERT_ID}\"" \
  desc="\"${CERT_NAME}\"" \
  $AS_DEFAULT_PARAM || { log "ERROR: import for certificate name: ${CERT_NAME} failed"; exit 1; }

# Persist version marker
cp -f "${WORKDIR}/version.txt" "${STATEDIR}/version.txt"
log "${IMPORT_ACTION}"
exit 0
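
Because SSH_OPTS enables strict host key checking, the Linux server’s host key needs to be in /root/.ssh/known_hosts on the NAS before the first run (the scheduled task runs as root). Assuming ssh-keyscan is available on your DSM build, one way to seed it from an SSH session on the NAS:

sudo mkdir -p /root/.ssh
ssh-keyscan -H linux.server.host.name | sudo tee -a /root/.ssh/known_hosts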

Once edited, you need to create a Task Scheduler entry in the DSM (Control Panel > Task Scheduler > Create > Scheduled Task > User-defined script).

On the General tab, give the task a descriptive name and select the root user.

On the Schedule tab, set it to run daily. Once working, I recommend backing this off to weekly.

On the Task Settings tab, configure Notifications to your taste (if you have email notifications running), then paste the script in the User-defined script box.

Click OK, then click OK on the warning window.
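
To confirm a run worked, you can check the log file the script writes (the path below assumes the example LINUX_LINEAGE_SUFFIX from earlier):

sudo tail -n 20 /var/packages/CertSync/var/example.com-wildcard/run.log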

Let’s Encrypt wildcard certificates with Hurricane Electric DNS

Renewing Let’s Encrypt wildcard certificates is generally a massive pain. You need to be able to automatically update DNS records for the domain – which is fine if you use a DNS provider that has an official Certbot DNS plugin, but less so if you use a DNS provider that doesn’t – such as Hurricane Electric.

Side note – I’m not particularly interested in arguments for and against wildcard certs. If you’re reading this, you’ve obviously come to the conclusion that they’re probably fine and you just want them automated like the rest of your Let’s Encrypt certs!

This post assumes the following:

  • You are obtaining wildcard certificates from Let’s Encrypt
  • Your DNS is hosted with Hurricane Electric
  • Your ACME client for Let’s Encrypt is Certbot
  • You have shell access to your server

Obtain a new Let’s Encrypt wildcard certificate

Note: if you’ve already got a wildcard certificate, you can mostly skip this bit – but skim this section to make sure you’ve done everything you need to do!

1. Request wildcard certificate

Here’s the command-line incantation to request a new wildcard certificate:

sudo certbot certonly --cert-name example.com-wildcard -d '*.example.com' --manual --preferred-challenges dns 

Couple things to note here:

  • I’m not including the base domain in the certificate here – I wouldn’t be able to automate it if I did, as I’d end up needing two TXT records for the same hostname. My solution was to separate the base domain out into its own certificate, which works perfectly for me.
  • I’m specifying a certificate name. This is important, as otherwise it ends up trying to name it the same as the base domain – which is no good if you have a cert for the base domain as well.

Run the Certbot wizard – it will soon ask you to create a TXT record!

2. Create TXT record

The Certbot wizard will ask you to create a record, something like the following:

Please deploy a DNS TXT record under the name
_acme-challenge.example.com with the following value:

qwertyuiop-1234567890

Log into the Hurricane Electric DNS console, select your domain and create a new TXT record with the following settings:

Name: _acme-challenge.example.com
Text data: qwertyuiop-1234567890
TTL (Time to live): 5 minutes (300)

Don’t check the Enable entry for dynamic dns box yet! Save the record and complete the Certbot wizard. Your certificate should now be issued, and you can configure your Apache / Nginx / etc server as appropriate.
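
If the challenge fails, it’s worth confirming the TXT record is actually visible before retrying (run this from anywhere with dig installed – querying one of Hurricane Electric’s nameservers directly avoids caching):

dig +short TXT _acme-challenge.example.com @ns1.he.net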

Set up renewal scripts

1. Create DDNS TXT record key

As of this writing, the various Hurricane Electric DNS plugins for Certbot that I’ve seen all log into the actual account – which is horrendous from a security perspective. You don’t want a script to have full control over all of your domain records!

Thankfully, Hurricane Electric now allow TXT records to be updated with a key that has control over just that one record.

Go back to the _acme-challenge TXT record for your domain, check the Enable entry for dynamic dns box and click Update. This enables the DDNS feature – you should now see an “arrow circle” symbol for that record:

Hurricane Electric DDNS record

Click the arrow circle symbol to generate a new DDNS key (save this key somewhere – this is the last time you’ll see it in the Hurricane Electric interface!)

For the purposes of this example, I’ll assume that the generated key looks something like ‘qwertyuiop123456’.
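
To check that the key works before wiring it into Certbot, you can push a test value to the record by hand (this uses the placeholder key from above – the endpoint and parameters are the same ones the script below uses, and HE replies with a short plain-text status):

curl -s -X POST "https://dyn.dns.he.net/nic/update" -d "hostname=_acme-challenge.example.com" -d "password=qwertyuiop123456" -d "txt=test-value"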

2. Add manual authentication script

Copy and paste the following into /etc/letsencrypt/he-dns-update.sh:

#!/bin/bash

# Do we have everything we need?
if [[ -z "$CERTBOT_DOMAIN" ]] || [[ -z "$CERTBOT_VALIDATION" ]]; then
    echo '$CERTBOT_DOMAIN and $CERTBOT_VALIDATION environment variables required.'
    exit 1
fi

# Add all HE TXT record DDNS keys to the txt_key object
# Remember to protect this script file - chmod 700!
declare -A txt_key
txt_key['_acme-challenge.example.com']='qwertyuiop123456'

# Create a FQDN based on $CERTBOT_DOMAIN
HE_DOMAIN="_acme-challenge.$CERTBOT_DOMAIN"

# Make sure we have a DDNS key for this record
if [[ -z "${txt_key[$HE_DOMAIN]}" ]]; then
    echo "No DDNS key configured for $HE_DOMAIN"
    exit 1
fi

# Update HE DNS record
curl -s -X POST "https://dyn.dns.he.net/nic/update" -d "hostname=$HE_DOMAIN" -d "password=${txt_key[$HE_DOMAIN]}" -d "txt=$CERTBOT_VALIDATION"

# Sleep to make sure the change has time to propagate over to DNS
sleep 30

As the comment suggests – protect this file from prying eyes by using chmod 700. If you have multiple wildcard certificates, you can add extra entries to the txt_key array.
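
For example, with a second wildcard certificate the array in the script would look like this (the second key is a placeholder):

declare -A txt_key
txt_key['_acme-challenge.example.com']='qwertyuiop123456'
txt_key['_acme-challenge.example.com.au']='asdfghjkl654321'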

3. Add post deployment script

While you can manually restart services after the new certificate has been issued, you can automate that as well. Copy and paste the following into /etc/letsencrypt/renewal-hooks/deploy/restart-services.sh:

#!/bin/sh

set -e

for domain in $RENEWED_DOMAINS; do
  case $domain in
    *.example.com)
      systemctl restart apache2
      ;;
    *.example.com.au)
      systemctl restart nginx
      systemctl restart postfix
      ;;
    *)
  esac
done

Update the script to restart specific services for particular domains as required. I recommend chmod’ing this file to 755 – unlike the previous script, there’s nothing sensitive here!

Request Let’s Encrypt wildcard renewal

We’re going to force a renewal of the certificate here – this will test the two scripts above, plus update the renewal config so that the certificate will automatically renew in the future. If your certificate is due for renewal already, you don’t need to include the --force-renewal flag. Run the following command:

sudo certbot renew --cert-name example.com-wildcard --manual --manual-auth-hook /etc/letsencrypt/he-dns-update.sh --preferred-challenges dns --force-renewal

Check that (some commands to help with this follow the list):

  • There are no errors in the Certbot logs,
  • The certificate renewed successfully, and
  • All services restarted appropriately
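
For example (the service names are just the ones used in the deploy hook above – substitute your own):

sudo tail -n 100 /var/log/letsencrypt/letsencrypt.log
sudo certbot certificates --cert-name example.com-wildcard
systemctl status apache2 nginx postfix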

If everything worked – your Let’s Encrypt wildcard certificates should now renew automagically!

UniFi Controller 5.11, Let’s Encrypt SSL and Docker

A slight change of plans from earlier posts on the topic of UniFi Controllers! Here’s how to get a UniFi Controller running inside a Docker container, along with a trusted Let’s Encrypt SSL certificate.

Note: this guide assumes you’re configuring things on a server or VM with public Internet access. You’ll also need a fixed public IP and functional DNS to get an SSL certificate.

Here we go:

Firewall

UniFi needs a bunch of inbound ports open. Here’s the official list – it differs slightly from what I use:

Port       Description
UDP/3478   STUN – required for device communication with the controller
TCP/8080   Inform – required to adopt devices
TCP/8443   GUI – required even if you use the Cloud Controller access
TCP/8880   Captive Portal – HTTP – only needed if you use the captive portal feature
TCP/8843   Captive Portal – HTTPS – only needed if you use the captive portal feature
TCP/6789   Speed Test – only needed if you use the speed test feature

Let’s Encrypt also needs a port open:

Port     Description
TCP/80   HTTP – required for the HTTP-01 challenge type

I use ufw to configure iptables – first, set up an application definition for the UniFi Controller – in /etc/ufw/applications.d/unifi:

[unifi]
title=unifi
description=UniFi Controller
ports=6789,8080,8880,8443,8843/tcp|3478/udp

Run the following four commands to configure and enable the firewall. I’ve made some assumptions about what’s needed – you may need to customise things a little more:

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow unifi
sudo ufw enable
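
To confirm the application definition loaded and the rules are active:

sudo ufw app info unifi
sudo ufw status verbose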

User Account

UniFi probably shouldn’t be run as root – avoiding root is generally a good idea, and it may also become a requirement of the Docker image I’m using in the future. This also affects which ports you can configure the controller to use – the default ports work fine for any user, but changing any of them to below 1024 requires root.

Create the unifi user and group accounts:

sudo adduser unifi --system --group --no-create-home

Pay attention to the UID and GID that get created; you need them in the Docker Compose file below.
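
The quickest way to see them (the numbers below are just an example – yours will almost certainly differ):

id unifi
# uid=112(unifi) gid=118(unifi) groups=118(unifi)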

Docker

Here’s the tl;dr version of the installation instructions, but if you want to read the full version with all the details – check the Docker website.

Configure the Docker repository – it contains a more up-to-date version:

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Install Docker and related tools:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
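
Note that docker-compose isn’t part of those packages – assuming you’re on Debian, you can install it from the distro repository (or grab the binary from the Docker website):

sudo apt-get install docker-compose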

UniFi Controller

There are a number of UniFi Docker images out there, but I like the one by jacobalberty as it’s kept up to date – plus it exposes a volume for adding trusted certificates. His Docker Compose file isn’t quite to my taste, so I’ve adjusted things. Create the file /opt/unifi/docker-compose.yml:

version: '2.2'
services:
  mongo:
    image: 'mongo:3.4'
    restart: always
    volumes:
      - db:/data/db
  controller:
    image: 'jacobalberty/unifi:${TAG:-latest}'
    depends_on:
      - mongo
    init: true
    restart: always
    volumes:
      - data:/unifi/data
      - log:/unifi/log
      - cert:/unifi/cert
      - init:/unifi/init.d
    environment:
      RUNAS_UID0: 'false'
      # Set these to the UID/GID of the unifi user created earlier (run: id unifi)
      UNIFI_UID: 100
      UNIFI_GID: 100
      JVM_MAX_THREAD_STACK_SIZE: 1280k
      DB_URI: mongodb://mongo/unifi
      STATDB_URI: mongodb://mongo/unifi_stat
      DB_NAME: unifi
    ports:
      - '3478:3478/udp'
      - '6789:6789/tcp'
      - '8080:8080/tcp'
      - '8443:8443/tcp'
      - '8880:8880/tcp'
      - '8843:8843/tcp'
  logs:
    image: bash
    depends_on:
      - controller
    command: bash -c 'tail -F /unifi/log/*.log'
    restart: always
    volumes:
      - log:/unifi/log

volumes:
  db:
  data:
  log:
  cert:
  init:

Note: if you’re going to change the location of this file, keep it in a directory called ‘unifi’ – Docker Compose uses the directory name as the project name, which determines the volume names (such as unifi_cert) that the SSL deploy script below relies on. Bring the stack up like so (it will take a fair while the first time around):

sudo docker-compose up -d
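
You can check that the containers came up (run from /opt/unifi, or wherever the Compose file lives):

sudo docker-compose ps
sudo docker-compose logs -f controller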

Install SSL

This part requires a few sections that need to be completed in order – first you need a script to load the SSL certificate into the UniFi Docker cert volume, then you need to run a certbot command to obtain the certificate.

If you use a provider other than Let’s Encrypt for SSL certificates, these instructions will need to be adjusted.

UniFi SSL Deploy Script

It may seem backwards, but the deploy script needs to exist before obtaining the certificate. Read through this script carefully and adjust any domains and directories as needed. Create the file /opt/unifi/unifi-ssl-deploy.sh:

#!/bin/sh

set -e

for domain in $RENEWED_DOMAINS; do
  case $domain in
  unifi.example.com)
    # Where does the Docker cert data volume live?
    cert_root=/var/lib/docker/volumes/unifi_cert/_data
    # Where is the Docker Compose file?
    compose_file=/opt/unifi/docker-compose.yml

    # Make sure the certificate and private key files are
    # never world readable, even just for an instant while
    # we're copying them into cert_root.
    umask 077

    cp "$RENEWED_LINEAGE/cert.pem" "$cert_root/cert.pem"
    cp "$RENEWED_LINEAGE/privkey.pem" "$cert_root/privkey.pem"
    cp "$RENEWED_LINEAGE/chain.pem" "$cert_root/chain.pem"

    # Apply the proper file permissions
    # Files can be owned by root
    chmod 400 "$cert_root/cert.pem" \
      "$cert_root/privkey.pem" \
      "$cert_root/chain.pem"

    # Restart the Docker containers so the new certificate is picked up
    docker-compose -p unifi -f "$compose_file" stop
    docker-compose -p unifi -f "$compose_file" start
    ;;
  esac
done

Now make the file executable:

sudo chmod a+x /opt/unifi/unifi-ssl-deploy.sh

Obtain SSL with Certbot

Conveniently, Certbot has its own mechanism for obtaining an SSL certificate without using a webserver. If you have a webserver configured, you will want to adjust these instructions accordingly.

As above, adjust the following to suit your domain:

sudo apt-get install certbot
sudo certbot certonly --standalone --domain unifi.example.com --deploy-hook /opt/unifi/unifi-ssl-deploy.sh

The command to obtain the certificate will ask a few questions – you may also see an error from the deploy script, but it’s not actually an error per se.

Note: After the deploy script has run, you need to wait up to 5 minutes for the UniFi Controller to fully start back up again. If you don’t, you’re likely to get an SSL error (PR_END_OF_FILE_ERROR) in the browser!

We’re all done – your UniFi Controller should now be available via: https://unifi.example.com:8443
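
To double-check that the controller is serving the Let’s Encrypt certificate rather than the self-signed default:

openssl s_client -connect unifi.example.com:8443 -servername unifi.example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -enddate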

Reverse Proxy

I’ve opted to not configure a reverse proxy, as I don’t believe one is needed. If port 8443 is blocked on your network, you can configure cloud access via https://unifi.ui.com.

If you want to configure a reverse proxy, note you’ll need something that handles websockets gracefully – Nginx and Traefik are probably your best options.