My lab is in a constant state of flux. I am going to try and keep this page up to date with the tech I’m running. My setup is spread across the local stuff running in the garage and “the cloud”, a mix of AWS and Cloudflare.

Obviously with homelabs everyone has different requirements and opinions. My lab has grown organically over the last 10 years: some kit has been purchased new, a lot has been bought off eBay or secondhand, and some has been donated. If I were starting from scratch it would look quite different, I think. I suspect the replacement cost of what I’m currently running would be in the £15-20k range.

I have several requirements; some of them are hard requirements and others I try to meet if possible.

  • Must be able to run vSphere (HCL preferred)
  • IPMI/remote management preferred
  • Rack mount preferred
  • Ability to achieve a low-power configuration for 24×7 operation
  • Noise isn’t really a factor due to location
  • Heat output isn’t a huge factor
  • GPU is nice to have

Physical Hardware

Most hardware lives in a StarTech 25U open-frame server rack.

Compute

Nutanix NX – 3x NX-1365-G4

3x nodes in a Supermicro Big Twin chassis, each with the identical configuration below.

This is currently running Nutanix Community Edition with vSphere 7u3 as the hypervisor; I had some issues with the vSphere 8 deployment.

| Component | Description |
| --- | --- |
| Model | NXS2U4NL12G400 |
| CPU | 2x Xeon E5-2640 v3 |
| RAM | 255GB |
| NIC | AOC-STGN-i2S 2-port SFP+ 10GbE |
| Boot Disk | 64GB SATADOM |
| SSD 1 | Samsung PM863 960GB |
| SSD 2 | Samsung 860 EVO 2TB |
| SSD 3 | Samsung 860 EVO 2TB |

As the cluster has 3x 960GB and 6x 2TB drives, this gives a raw capacity of just under 11.5TB as reported by the cluster. With the RF=2 replication factor I have applied, this gives a usable capacity of around 5.7TB before any compression or deduplication.
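The RF=2 arithmetic above can be sketched as a few lines of Python. This is a minimal illustration, not anything Nutanix-specific: the function name is mine, and the raw figure is the cluster-reported value from the text (which is lower than a naive sum of the drive labels because of base-2 units and CVM overhead).

```python
# Sketch: usable capacity under a Nutanix replication factor (RF).
# RF=2 keeps two copies of every write, so usable ~= raw / 2 before
# any compression or deduplication savings.

def usable_capacity_tb(raw_tb: float, rf: int) -> float:
    """Usable capacity in TB before data-reduction savings."""
    return raw_tb / rf

raw = 11.5  # TB, as reported by the cluster
print(usable_capacity_tb(raw, rf=2))  # → 5.75, matching the ~5.7TB above
```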

Supermicro 2027TR with 4x X9DRT-IBQF nodes

Nodes 1 & 2

| Component | Description |
| --- | --- |
| Model | Supermicro X9DRT |
| CPU | 2x Intel Xeon E5-2670 |
| RAM | 192GB |
| NIC 3 | Intel Ethernet Controller XXV710, 25GbE SFP28 |
| NIC 4 | Intel Ethernet Controller XXV710, 25GbE SFP28 |
| Boot Volume | 16GB USB |

Nodes 1 and 2 are part of a vSphere cluster running VMware vSphere 8.0.2.

Node 3 has an Nvidia GPU and runs vSphere 7 for compatibility with the Mellanox ConnectX-3.

Node 3

| Component | Description |
| --- | --- |
| Model | Supermicro X9DRT |
| CPU | 2x Intel Xeon E5-2670 |
| RAM | 192GB |
| NIC 0 | Mellanox ConnectX-3 40Gb/s |
| GPU | Nvidia P4 |
| Boot Volume | 16GB USB |

Node 4 is out of service.

External Storage

At present I have four different storage locations.

HP Z840 – TrueNAS Scale

My HP Z840 running TrueNAS Scale is the primary external storage. See Homelab Storage Refresh (Part 1) for more details.

HP Microserver Gen8 – TrueNAS Scale

This runs TrueNAS Scale with a simple pair of 3TB disks in a mirror. It’s often used as swing space or temporary storage; it holds backups and some ISOs but isn’t on 24×7.

Synology DS216+

Supermicro Node

The Supermicro nodes were designed for a hyperconverged setup, but due to the age of the hardware the storage controllers stopped being supported by vSphere after the 6.x branch, and the Mellanox NIC isn’t supported past 7.x. To make effective use of this hardware I have decided to keep it at vSphere 7 and pass the RAID storage controller directly through to a Windows VM to use as a Veeam repository. With 6x 1TB disks in a RAID6 configuration this gives me just over 3.6TB of Veeam repo, which is more than enough.
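The RAID6 figure works out as follows; here is a small sketch (function name and unit conversion are my own, assuming decimal-TB drive labels and a binary-TiB reported size):

```python
# Sketch: RAID6 usable capacity. RAID6 dedicates two disks' worth of
# space to parity, so usable = (n - 2) * disk_size. Six 1TB (decimal)
# drives give 4TB decimal, which is roughly 3.64TiB in binary units --
# consistent with the ~3.6TB the repository reports.

def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    """Usable decimal TB for a RAID6 array of identically sized disks."""
    return (disks - 2) * disk_tb

usable_decimal = raid6_usable_tb(6, 1.0)      # 4.0 TB (decimal)
usable_tib = usable_decimal * 1e12 / 2**40    # ~3.64 TiB (binary)
print(round(usable_tib, 2))
```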

| Hostname/Model | NVMe Capacity (TB) | SSD Capacity (TB) | HDD Capacity (TB) |
| --- | --- | --- | --- |
| Synology DS216+ | 0 | 0 | 3 |
| HP Z840 (TrueNAS Scale) | 0.8 | 7 | 21 |
| HP Microserver Gen8 | 0 | 0 | 3 |
| Supermicro Node 3 | 0 | 0 | 3.2 |
| Total TB | 0.8 | 7 | 30.2 |

Network

Physical Network

| Model | Description |
| --- | --- |
| Watchguard M200 | Core firewall |
| Mikrotik CRS504-4XQ-IN | Core fibre switch |
| Mikrotik CSS610-8P-2S+IN | PoE switch |
| Mikrotik CRS305-1G-4S+ | Fibre-to-Ethernet switch |

Wireless Network

| AP Model | Description |
| --- | --- |
| Ubiquiti UniFi AC-Pro | Access point (office) |
| Ubiquiti UniFi 6 Pro | Access point (house) |

WAN

| WAN Provider | Description |
| --- | --- |
| Zen FTTP (via CityFibre) | 900/900 with a /29 IPv4 |
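For anyone wondering what a /29 IPv4 allocation actually provides, the subnet maths can be checked with Python’s standard `ipaddress` module. The prefix below is a documentation address (RFC 5737) used purely as a placeholder, not my real allocation:

```python
# Sketch: a /29 spans 2**(32-29) = 8 addresses. After subtracting the
# network and broadcast addresses, 6 are assignable (one of which is
# typically taken by the provider's gateway).
import ipaddress

block = ipaddress.ip_network("203.0.113.0/29")
print(block.num_addresses)       # 8 total addresses
print(len(list(block.hosts())))  # 6 assignable hosts
```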