
Intro

I have run a number of systems utilising ZFS since the earliest days of my homelab, using Nexenta all the way back in 2010. The image below shows my lab at the time: an IBM head unit that I think had 18GB of RAM and 6x450GB SAS drives, connected to the Dell PowerVault SCSI array above it with 14x146GB 10K SAS drives.

Original Nexenta Setup

The number one rule is to ALWAYS give ZFS access to the underlying raw storage. You don’t want a RAID controller or anything else interfering with the IO path. This is similar to how vSAN works with VMware.

But rules are meant to be broken, right? I have virtualised a few copies of TrueNAS SCALE (which uses ZFS) on top of VMware, and in these particular instances I specifically DON’T want to pass through the storage in the form of an HBA or drives. Why would I do this? Mainly for two reasons: it allows me to test upgrades of my physical TrueNAS setup with an easy rollback if needed, and by not passing the drives or controllers in I can clone and snapshot the VM just as I would any other and move it around my lab infrastructure.
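As a rough sketch of that rollback workflow, assuming the govc CLI is configured against the vCenter and the test VM is called truenas-test (both names are my placeholders, not from my actual setup), a pre-upgrade snapshot and revert looks something like this:

# assumes govc is configured (GOVC_URL and credentials) for the vCenter
# take a snapshot before attempting the TrueNAS upgrade
govc snapshot.create -vm truenas-test pre-upgrade

# if the upgrade goes wrong, roll the whole VM back
govc snapshot.revert -vm truenas-test pre-upgrade

# once happy with the new version, remove the snapshot
govc snapshot.remove -vm truenas-test pre-upgrade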

Copy on Write

ZFS is a “Copy on Write” filesystem, which means it never overwrites existing blocks of storage; it always places writes into new blocks. This is unfriendly to “thin provisioning”, something I am a huge fan of. It means that over time even a tiny database writing a one-megabyte file over and over again will slowly crawl across the entire file system.
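To picture the effect, here is a minimal sketch you could run from the TrueNAS shell, rewriting the same small file in place (the path /mnt/Pool-1/test.bin is just an example of mine):

# rewrite the same 1MB file 100 times; ZFS allocates fresh blocks each pass,
# so the thin VMDK underneath keeps growing until TRIM/UNMAP hands space back
for i in $(seq 1 100); do
  dd if=/dev/urandom of=/mnt/Pool-1/test.bin bs=1M count=1 conv=notrunc status=none
done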

So if you’re going to break the rules, the way I see it, you might as well do it properly.

The first requirement is that the VM is provisioned with thin disks in vSphere. If the disks aren’t thin, UNMAP won’t work, and that matters if you are also thin provisioned at the underlying storage level.
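One way to double-check, again assuming govc and my placeholder VM name truenas-test, is to pull the disk backing info as JSON and look for the thin-provisioned flag (field casing varies between govc versions, hence the case-insensitive grep):

# list the VM's virtual devices and check the disk backing flag
govc device.info -json -vm truenas-test | grep -io '"thinprovisioned": *[a-z]*'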

Disk IDs

You also need to ensure that TrueNAS can see unique disk IDs. To do this, shut down the VM and add the following parameter to the VM’s configuration:

 disk.EnableUUID=TRUE
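You can add this in the vSphere Client under VM Options > Advanced > Configuration Parameters, or as a sketch from the command line with govc (truenas-test is my placeholder VM name):

# VM must be powered off before changing the advanced setting
govc vm.power -off truenas-test
# expose a unique UUID/serial for each virtual disk to the guest
govc vm.change -vm truenas-test -e disk.EnableUUID=TRUE
govc vm.power -on truenas-test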

Once you power the VM back on, you should be able to see unique serials for each disk, similar to this screenshot. Prior to this change, the serial field is blank.
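You can also confirm this from the TrueNAS SCALE shell, for example with lsblk:

# each virtual disk should now show a non-empty SERIAL column
sudo lsblk -o NAME,SERIAL,SIZE,MODEL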

Trim

Once the disks are seen as unique, it is possible to enable TRIM. (I have no idea why this is a blocker, but it is.)

You can enable auto TRIM on the Storage Dashboard under “ZFS Health”.
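The same setting can also be applied from the shell as a pool property if you prefer:

# turn on automatic TRIM for the pool and confirm the property took
sudo zpool set autotrim=on Pool-1
sudo zpool get autotrim Pool-1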

I decided to trigger a TRIM manually by executing the command below in the shell. (My pool is called Pool-1.)

sudo zpool trim Pool-1

To confirm that TRIM is working, execute the command below:

sudo zpool status Pool-1

If everything is working, you will see the disks reported as trimming, as per the screenshot below.
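If you want the per-disk TRIM state rather than the pool summary, newer OpenZFS releases (which is what TrueNAS SCALE ships) can show and wait on it directly:

# show TRIM status against each device in the pool
sudo zpool status -t Pool-1
# optionally block until the current TRIM run finishes
sudo zpool wait -t trim Pool-1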

A further validation that this is working is to review the VM’s storage usage; see the before and after for this VM below.
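If you’d rather check from the command line than the UI, a rough way to pull the committed storage figure for the VM is again via govc (same placeholder VM name as before):

# "committed" is the space actually consumed on the datastore by the VM
govc vm.info -json truenas-test | grep -io '"committed": *[0-9]*'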

Additional confirmation can be seen by reviewing the underlying storage consumption (in this case vSAN). Before and after are shown below.
