Switching on NFS 4.1 in the Homelab
I, like a number of homelabbers, use Synology for storage. In my case I have two: a 2-bay DS216+ and a 4-bay DS918+, both of which I have filled with SSDs.
NFS has been the preferred storage protocol for most Synology users for two main reasons: the biggest is simplicity, but by all accounts it has also tended to offer better performance than iSCSI.
For me, the performance (especially on the DS918+) is great, with one clear exception: Storage vMotion. It's not often that I move VMs around, but when I do it's a tad painful. This is because I only have gigabit networking and NFS 3 is limited to a single connection. However, it's now possible to fix this.
I have tried to find out when Synology officially added NFS 4.1 support but couldn't find a reliable answer. It has been a CLI option for a while, and it certainly exists in DSM 6.2.1.
The first thing to do is to make sure it’s enabled.
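If you prefer to double-check from the command line, you can do so over SSH (assuming SSH is enabled on the NAS). DSM runs a standard Linux kernel NFS server, so the usual kernel interfaces apply:

```shell
# On the Synology, list the NFS protocol versions the kernel server advertises.
# An entry of "4.1" (rather than "-4.1") means NFS 4.1 is enabled.
cat /proc/fs/nfsd/versions

# The shared folders exported over NFS live in the standard location:
cat /etc/exports
```

This is only a sanity check; the supported way to switch NFS 4.1 on is through the DSM file services settings.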
Then from vSphere create a new datastore
Make sure to select NFS 4.1
Then add the name and configuration; this is where the subtle differences kick in.
Note the plus on the server line, where multiple inputs can be added. In my setup I have two IP addresses (one for each interface on my DS918+).
Although NFS 4.1 supports Kerberos, I don't use it.
Finally, mount to the required hosts.
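The same mount can also be made from the ESXi shell with esxcli. The `nfs41` namespace accepts a comma-separated list of server addresses, which is what gives you the multiple connections (the addresses and share below are from my setup; substitute your own):

```shell
# Mount an NFS 4.1 datastore with two server addresses (session trunking)
esxcli storage nfs41 add --hosts=10.0.0.226,10.0.0.227 --share=/volume3/NFS6 --volume-name=NFS6

# Confirm the mount
esxcli storage nfs41 list
```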
Of course, if you want to do this with PowerShell, that's also an option:
Get-VMHost | New-Datastore -Name NFS6 -Nfs -FileSystemVersion '4.1' -NfsHost @('10.0.0.226','10.0.0.227') -Path 'volume3/NFS6'
The other really nice thing is that VAAI is still supported. If you want to see the difference, here is a network graph from the Synology during a Storage vMotion: clearly better than the single-connection performance. This makes me much happier.
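You can check the acceleration state of each 4.1 mount from the ESXi shell as well; the listing output includes a hardware acceleration column per datastore (this assumes the vendor NAS VAAI plugin is installed on the host):

```shell
# List NFS 4.1 mounts; the output includes a Hardware Acceleration
# column showing Supported / Not Supported per datastore
esxcli storage nfs41 list
```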
A note of caution for anyone wanting to do this: DON'T have the same NFS datastore presented to VMware over both NFS 3 and NFS 4.1 at the same time. The two protocols use different locking mechanisms, so bad things are likely to happen. I chose to evacuate each datastore, unmount it, and re-present it as 4.1 for all of mine.
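The evacuate/unmount/re-present sequence can be sketched from the ESXi shell as follows (datastore and share names are from my setup; move the VMs off with Storage vMotion before step 1):

```shell
# 1. With the datastore empty, unmount the NFS 3 version
esxcli storage nfs remove --volume-name=NFS6

# 2. Re-present the same share as NFS 4.1
esxcli storage nfs41 add --hosts=10.0.0.226,10.0.0.227 --share=/volume3/NFS6 --volume-name=NFS6
```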