🐳 Docker issues with NFS mount

Samsung 970 EVO Plus NVMe

I recently upgraded the Samsung 860 EVO 500GB SSD in my home server to a Samsung 970 EVO Plus NVMe drive for faster read/write speeds, but after the reinstall I ran into some issues with the NFS mount and Docker.

The issue was that, with the added speed of the NVMe drive, Docker was attempting to start all of the containers before the NFS share had time to mount. As a result, some containers failed to start and others had problems once they were running, since many of them depend on resources on the NFS mount.

After doing a bit of research I came across the /etc/systemd/system/multi-user.target.wants/docker.service file. In its [Unit] section you can specify which units must be active before Docker starts and attempts to bring up the containers.

Below is the original section of the file as installed by default.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

I updated that section of the file to include mnt-nas.mount in a few places to make sure that Docker waits until the mount has completed before starting.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service mnt-nas.mount
Wants=network-online.target mnt-nas.mount
Requires=docker.socket mnt-nas.mount
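
One thing to keep in mind is that this file belongs to the Docker package, so a future upgrade could overwrite the edit. An alternative I did not use here is a drop-in override created with systemctl edit docker.service, which lives under /etc/systemd/system/docker.service.d/ and survives package upgrades. Because these directives are additive in a drop-in, a minimal override along these lines should produce the same ordering:

[Unit]
After=mnt-nas.mount
Wants=mnt-nas.mount
Requires=mnt-nas.mount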

If you are not certain of the mount unit's name, you can list the mount units on your system with the following command.

systemctl list-units | grep /mnt/
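
Systemd names mount units after the mount path with the slashes turned into dashes, so a share mounted at /mnt/nas shows up as mnt-nas.mount. The output should include a line roughly like this (your unit name and mount point may differ):

mnt-nas.mount    loaded active mounted    /mnt/nas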

After you have made the changes, restart the system and Docker should only start once the mount has become fully active.
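
If you want systemd to pick up the edited unit file without a full reboot, reloading the unit definitions and restarting Docker should have the same effect, since the Requires= line will pull in the mount if it is not already active:

sudo systemctl daemon-reload
sudo systemctl restart docker.service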

On top of the upgraded drive, I also ended up adding a dedicated NIC for the NFS connection between the server and my Synology NAS on its own VLAN, and things seem to be running very smoothly now.