Tux the penguin watching movies on a Linux server

Plex Media Server on Debian Bookworm, Synology NAS

After mistakes were made with a previous installation, I had to completely reinstall the Linux server that I use to run Plex Media Server. For the sake of familiarity, I am using the latest version of Debian (at time of writing: Debian 12.9 “bookworm”). All media is stored on a Synology NAS, shared to the Linux server via a few NFS mounts – this adds a few extra complications that are worth being aware of.

Here are the notes I took during setup – sharing them here in case they're useful to anyone else:

Networking

Make sure that both the Linux server and the Synology NAS have fixed IP addresses and are able to communicate with each other. I’ve got mine set up on the same subnet using fixed DHCP leases, but whatever works for you.

Synology NFS sharing

Make sure the Synology NAS has the folders with your media shared via NFS. There are other options available, but NFS is probably the easiest. First, enable NFS:

Control Panel > File Services > NFS

  • Check Enable NFS service
  • Maximum NFS protocol: NFSv4.1

Then, for each of the folders you want Plex Media Server to have access to:

Control Panel > Shared Folder > select folder > Edit > NFS Permissions > Create

  • Enter IP of the Linux server
  • Check Enable asynchronous
  • Check Allow connections from non-privileged ports
  • Take note of the Mount path at the bottom of the window
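Before moving on, it's worth checking from the Linux side that the exports are actually visible. A quick sketch – 192.168.x.x is a placeholder for the NAS's IP, and this assumes the nfs-common package from the next section is already installed:

```shell
# Ask the NAS which paths it is exporting over NFS
showmount -e 192.168.x.x
```

The output should list each shared folder along with the clients allowed to mount it.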

Debian Install

Assuming you’re working from the base install – you’ll need a few things set up:

  • Install the nfs-common package
  • Install the gpg package (needed for automatically updating Plex)
  • Enable the systemd-networkd-wait-online service (see notes below)
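Those three items boil down to the following (package names as they appear in Debian 12):

```shell
# NFS client utilities, and gpg for the Plex repository key
sudo apt update
sudo apt install nfs-common gpg

# Make boot wait until the network is actually online
sudo systemctl enable systemd-networkd-wait-online.service
```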

I ran into a few problems with the systemd-networkd-wait-online service trying to wait for unused network interfaces; easiest fix for this was to edit the service file and specify the interface as part of the ExecStart line:

ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=eno1
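Note that editing the packaged unit file under /lib/systemd can be undone by a package upgrade; the same change can also be made as a drop-in override, which survives upgrades. Substitute your own interface name for eno1 – `ip link` will list them:

```shell
# Create a drop-in override rather than editing the packaged unit file
sudo mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
sudo tee /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf <<'EOF'
[Service]
# Clear the packaged ExecStart, then replace it with one that only
# waits for the interface we care about (eno1 here -- adjust to suit)
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=eno1
EOF

# Pick up the new drop-in
sudo systemctl daemon-reload
```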

Linux NFS mounts

Next, the NFS mounts need to come up automatically on boot. Keep in mind this is Debian 12 with no GUI – if you have another distribution, or if you have a GUI – there may be a better way to do this.

Create the relevant directories in /media – I used /media/nfs/TV Shows and /media/nfs/Movies
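For example (quoting the path that contains a space):

```shell
# Create the mount point directories for the NFS shares
sudo mkdir -p "/media/nfs/TV Shows" /media/nfs/Movies
```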

Edit the /etc/fstab file and add the following (adjusting to suit your environment):

# NFS mounts for Plex Media Server
192.168.x.x:/volume1/TV\040Shows    /media/nfs/TV\040Shows    nfs    _netdev    0    0
192.168.x.x:/volume1/Movies         /media/nfs/Movies         nfs    _netdev    0    0

A few notes:

  • The IP address before the : is the Synology NAS
  • The folder after the : is the mount point you noted earlier
  • If you have spaces in the directory name, use \040 in place of the space character

Reboot, make sure that you have all of the media mounted and accessible in the relevant folders before continuing.
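You can also exercise the new fstab entries without a reboot – an error here usually means a typo in the file:

```shell
# Mount everything listed in /etc/fstab that isn't already mounted
sudo mount -a

# Show the active NFS mounts
findmnt -t nfs,nfs4
```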

Find out the systemd unit names for the mounts – run the following command:

sudo systemctl list-units -t mount

That should show you something like the following:

  UNIT                            LOAD   ACTIVE SUB     DESCRIPTION                                        
  media-nfs-Movies.mount          loaded active mounted /media/nfs/Movies
  media-nfs-TV\x20Shows.mount     loaded active mounted /media/nfs/TV Shows

You’ll need the unit names in the next step.
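These unit names are derived mechanically from the mount paths, so you can also compute them with systemd-escape instead of listing units:

```shell
# Convert a mount path into its systemd mount unit name
systemd-escape -p --suffix=mount "/media/nfs/TV Shows"   # media-nfs-TV\x20Shows.mount
systemd-escape -p --suffix=mount /media/nfs/Movies       # media-nfs-Movies.mount
```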

Install Plex Media Server

Install as per the Server Installation instructions for Linux / Ubuntu – but don’t go to the URL to complete setup yet! The installation should have created a Plex Media Server service; it needs to be updated to wait for the NFS mounts before the next reboot – I strongly recommend doing this part before completing the setup.

Stop the existing server first, if needed:

sudo systemctl stop plexmediaserver.service

Add a custom Unit section to the startup script to ensure Plex Media Server is started after the NFS mounts are available:

sudo systemctl edit plexmediaserver.service

Add the following in the section indicated at the top of the file:

[Unit]
Description=Plex Media Server
After=network.target network-online.target media-nfs-Movies.mount media-nfs-TV\x20Shows.mount

In short – append the mount unit names, separated by spaces, after network.target and network-online.target. Save, then start the Plex Media Server service:

sudo systemctl start plexmediaserver.service

Enable the service on startup:

sudo systemctl enable plexmediaserver.service
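To confirm the drop-in took effect, inspect the merged After= list for the service – the mount units should appear in it:

```shell
# Show the effective After= ordering for the service
systemctl show plexmediaserver.service -p After
```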

Finally, go to the server’s URL to complete setup. Expect this to take a while if you have a lot of media to index and create thumbnails for; the server may also crash and need a reboot a time or two – but it will settle down after the indexing is complete.

Create a symbolic link in Linux using the ‘ln -s’ command

Writing this mostly because it’s just about impossible to find a simple answer to the question of “how to create a symbolic link” in Google without wading through page after page of ads. Yes, I know I’m being a hypocrite as I’ve got ads as well – but the placement and number are hopefully less offensive!

Command Syntax

If you need one file to be available in multiple locations in Linux, you can create a symbolic link to make this happen. The command syntax is:

ln -s {actual file location} {symbolic link location}

Example

After managing to hose my Node.js install for Homebridge again, I decided the quickest fix would be to just install Node.js directly from the system package manager. Once installed, I figured I could convince Homebridge to use the system version of Node.js, rather than having its own.

First, I deleted the broken built-in Node executable: sudo rm /opt/homebridge/bin/node

Then I installed the system version of Node.js, which places the Node executable at: /usr/bin/node

Because Homebridge expects the Node executable to be available at the original location, I needed to create a link from the system version to the location it wanted:

sudo ln -s /usr/bin/node /opt/homebridge/bin/node

(To get it working I had to change ownership of the symbolic link to the Homebridge user, but that’s a topic for another post!)
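The pattern generalises; here’s a throwaway sketch using hypothetical /tmp paths that you can run and delete safely:

```shell
# Set up a scratch directory and a file to link to
rm -rf /tmp/symlink-demo && mkdir -p /tmp/symlink-demo
echo "hello" > /tmp/symlink-demo/original.txt

# {actual file location} first, {symbolic link location} second
ln -s /tmp/symlink-demo/original.txt /tmp/symlink-demo/link.txt

readlink /tmp/symlink-demo/link.txt   # prints the target path
cat /tmp/symlink-demo/link.txt        # reads "hello" through the link
```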

UniFi Controller 5.11, Let’s Encrypt SSL and Docker

A slight change of plans from earlier posts on the topic of UniFi Controllers! Here’s how to get a UniFi Controller running inside a Docker container, along with a trusted Let’s Encrypt SSL certificate.

Note: this guide assumes you’re configuring things on a server or VM with public Internet access. You’ll also need a fixed public IP and functional DNS to get an SSL certificate.

Here we go:

Firewall

UniFi needs a bunch of inbound ports open. Here’s the official list – it differs slightly from what I use:

  Port       Description
  UDP/3478   STUN – required for device communication with the controller
  TCP/8080   Inform – required to adopt devices
  TCP/8443   GUI – required even if you use Cloud Controller access
  TCP/8880   Captive Portal (HTTP) – only needed if you use the captive portal feature
  TCP/8843   Captive Portal (HTTPS) – only needed if you use the captive portal feature
  TCP/6789   Speed Test – only needed if you use the speed test feature

Let’s Encrypt also needs a port open:

  Port       Description
  TCP/80     HTTP – required for the HTTP-01 challenge type

I use ufw to configure iptables – first, set up an application definition for the UniFi Controller – in /etc/ufw/applications.d/unifi:

[unifi]
title=unifi
description=UniFi Controller
ports=6789,8080,8880,8443,8843/tcp|3478/udp

Run the following four commands to configure and enable the firewall. I’ve made some assumptions about what’s needed – you may need to customise things a little more:

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow unifi
sudo ufw enable

User Account

UniFi probably shouldn’t be run as root – running services as a non-root user is good practice in general, and it may also become a requirement of the Docker image I’m using at some point. This also affects which ports you can give the controller – the default ports work fine for any user, but ports below 1024 require root.

Create the unifi user and group accounts:

sudo adduser unifi --system --group --no-create-home

Pay attention to the UID and GID that get created; you need them in the Docker Compose file below.
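If the output scrolled past, you can look the numbers up again at any time:

```shell
# Print the UID, GID and group memberships for the unifi account
id unifi
```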

Docker

Here’s the tl;dr version of the installation instructions, but if you want to read the full version with all the details – check the Docker website.

Configure the Docker repository – it contains a more up-to-date version of Docker than the Debian repositories:

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Install Docker and related tools:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
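The standard smoke test confirms the engine works. Note the stack below also needs the standalone docker-compose command, which the packages above don’t include – Debian ships it as its own package:

```shell
# hello-world is the usual smoke test for a fresh Docker install
sudo docker run --rm hello-world

# docker-compose is packaged separately on Debian
sudo apt-get install docker-compose
```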

UniFi Controller

There are a number of UniFi Docker images out there, but I like the one by jacobalberty as it’s kept up to date – plus it exposes a volume for adding trusted certificates. His Docker Compose file isn’t quite to my taste, so I’ve adjusted things. Create the file /opt/unifi/docker-compose.yml:

version: '2.2'
services:
  mongo:
    image: 'mongo:3.4'
    restart: always
    volumes:
      - db:/data/db
  controller:
    image: 'jacobalberty/unifi:${TAG:-latest}'
    depends_on:
      - mongo
    init: true
    restart: always
    volumes:
      - data:/unifi/data
      - log:/unifi/log
      - cert:/unifi/cert
      - init:/unifi/init.d
    environment:
      RUNAS_UID0: 'false'
      UNIFI_UID: 100  # substitute the UID created for the unifi user
      UNIFI_GID: 100  # substitute the GID created for the unifi group
      JVM_MAX_THREAD_STACK_SIZE: 1280k
      DB_URI: mongodb://mongo/unifi
      STATDB_URI: mongodb://mongo/unifi_stat
      DB_NAME: unifi
    ports:
      - '3478:3478/udp'
      - '6789:6789/tcp'
      - '8080:8080/tcp'
      - '8443:8443/tcp'
      - '8880:8880/tcp'
      - '8843:8843/tcp'
  logs:
    image: bash
    depends_on:
      - controller
    command: bash -c 'tail -F /unifi/log/*.log'
    restart: always
    volumes:
      - log:/unifi/log

volumes:
  db:
  data:
  log:
  cert:
  init:

Note: if you’re going to change the location of this file, keep it in a directory called ‘unifi’ – Docker Compose uses the directory name as the project name, and that name is baked into the volume names (e.g. unifi_cert) that the deploy script relies on. Bring the stack up like so (it will take a fair while the first time around):

sudo docker-compose up -d
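Once the stack is up, it’s worth confirming all three services are running and watching the controller come up:

```shell
# Run from the directory containing docker-compose.yml
cd /opt/unifi
sudo docker-compose ps              # each service should show State "Up"
sudo docker-compose logs -f controller
```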

Install SSL

This part has a few steps that need to be completed in order – first you need a script that copies the SSL certificate into the UniFi Docker cert volume, then you run a certbot command to obtain the certificate.

If you use a provider other than Let’s Encrypt for SSL certificates, these instructions will need to be adjusted.

UniFi SSL Deploy Script

It may seem backwards, but the deploy script needs to exist before obtaining the certificate. Read through this script carefully and adjust any domains and directories as needed. Create the file /opt/unifi/unifi-ssl-deploy.sh:

#!/bin/sh

set -e

for domain in $RENEWED_DOMAINS; do
  case $domain in
  unifi.example.com)
    # Where does the Docker cert data volume live?
    cert_root=/var/lib/docker/volumes/unifi_cert/_data
    # Where is the Docker Compose file?
    compose_file=/opt/unifi/docker-compose.yml

    # Make sure the certificate and private key files are
    # never world readable, even just for an instant while
    # we're copying them into cert_root.
    umask 077

    cp "$RENEWED_LINEAGE/cert.pem" "$cert_root/cert.pem"
    cp "$RENEWED_LINEAGE/privkey.pem" "$cert_root/privkey.pem"
    cp "$RENEWED_LINEAGE/chain.pem" "$cert_root/chain.pem"

    # Apply the proper file permissions
    # Files can be owned by root
    chmod 400 "$cert_root/cert.pem" \
      "$cert_root/privkey.pem" \
      "$cert_root/chain.pem"

    # Restart the Docker container
    docker-compose -p unifi -f $compose_file stop
    docker-compose -p unifi -f $compose_file start
    ;;
  esac
done

Now make the file executable:

sudo chmod a+x /opt/unifi/unifi-ssl-deploy.sh

Obtain SSL with Certbot

Conveniently, Certbot has its own mechanism for obtaining an SSL certificate without using a webserver. If you have a webserver configured, you will want to adjust these instructions accordingly.

As above, adjust the following to suit your domain:

sudo apt-get install certbot
sudo certbot certonly --standalone --domain unifi.example.com --deploy-hook /opt/unifi/unifi-ssl-deploy.sh

The command to obtain the certificate will ask a few questions – you may also see an error from the deploy script, but it’s not actually an error per se.

Note: After the deploy script has run, you need to wait up to 5 minutes for the UniFi Controller to fully start back up again. If you don’t, you’re likely to get an SSL error (PR_END_OF_FILE_ERROR) in the browser!
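Certbot sets up automatic renewal on its own (via a cron job or systemd timer); a dry run against the Let’s Encrypt staging environment confirms renewal will succeed. Note that deploy hooks are skipped during a dry run, so the container won’t restart:

```shell
# Simulate renewal against the staging server; no real certificates
# are touched and the deploy hook is not executed
sudo certbot renew --dry-run
```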

We’re all done – your UniFi Controller should now be available via: https://unifi.example.com:8443

Reverse Proxy

I’ve opted to not configure a reverse proxy, as I don’t believe one is needed. If port 8443 is blocked on your network, you can configure cloud access via https://unifi.ui.com.

If you want to configure a reverse proxy, note you’ll need something that handles websockets gracefully – Nginx and Traefik are probably your best options.