My lab is in a constant state of flux, so I am going to try to keep this page up to date with the tech I'm running. My setup is spread across the local kit running in the garage and "the cloud": a mix of AWS and Cloudflare.
Obviously with homelabs, everyone has different requirements and opinions. My lab has grown organically over the last 10 years: some kit has been purchased new, a lot has been bought off eBay or secondhand, and some has been donated. If I were starting from scratch, I think it would look quite different. I suspect the replacement cost of what I'm currently running would be in the £15-20k range.
I have several requirements; some are hard requirements, and others I try to meet where possible:
- Must be able to run vSphere (HCL preferred)
- IPMI/remote management preferred
- Rack mount preferred
- Ability to achieve a low-power configuration for 24×7 operation
- Noise isn't really a factor due to location
- Heat output isn't a huge factor
- GPU is nice to have
Physical Hardware
Most of the serious hardware lives in a StarTech 25U open-frame server rack contained in a "server room".
Compute
Nutanix NX – 3x NX-1365-G4
3x nodes in a Supermicro Big-Twin chassis, each with the identical configuration below.
This is currently running Nutanix Community Edition with vSphere 7u3 as the hypervisor; I had some issues with the vSphere 8 deployment.
| Component | Description |
| --- | --- |
| Model | NXS2U4NL12G400 |
| CPU | 2x Xeon E5-2640 v3 |
| RAM | 255GB |
| NIC | AOC-STGN-i2S 2-port SFP+ 10GbE |
| Boot Disk | 64GB SATADOM |
| SSD 1 | Samsung PM863 960GB |
| SSD 2 | Samsung 860 EVO 2TB |
| SSD 3 | Samsung 860 EVO 2TB |
As the cluster has 3x 960GB and 6x 2TB drives, this gives a raw capacity of just under 11.5TB. With the RF2 replication factor I have applied to the cluster, that works out to a usable capacity of around 5.7TB before any compression or other data-reduction savings.
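RF2 simply keeps two copies of every write, so usable capacity is roughly raw divided by two. A minimal sketch of that arithmetic in Python, using the figures above (I assume the gap between the nameplate drive sizes and the ~11.5TB the cluster reports as raw is down to formatting and CVM overhead):

```python
# Back-of-envelope Nutanix capacity maths (figures from this post).
drives_per_node_tb = [0.96, 2.0, 2.0]   # PM863 960GB + 2x 860 EVO 2TB
nodes = 3

nameplate_raw = nodes * sum(drives_per_node_tb)
print(f"Nameplate raw: {nameplate_raw:.2f} TB")   # 14.88 TB

# The cluster reports just under 11.5TB raw; the difference is
# (assumption) base-2 formatting plus CVM/overhead reservations.
reported_raw_tb = 11.5

# RF2 keeps two copies of all data, so usable is roughly raw / 2.
usable_tb = reported_raw_tb / 2
print(f"Usable at RF2: {usable_tb:.2f} TB")       # 5.75 TB, the ~5.7TB above
```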
Supermicro 2027TR with 4x X9DRT-IBQF nodes
Node 1&2
| Component | Description |
| --- | --- |
| Model | Supermicro X9DRT |
| CPU | 2x Intel Xeon E5-2670 |
| RAM | 192GB |
| NIC 3 | Intel Ethernet Controller XXV710 for 25GbE SFP28 |
| NIC 4 | Intel Ethernet Controller XXV710 for 25GbE SFP28 |
| Boot Volume | 16GB USB |
Nodes 1 and 2 are part of a vSphere cluster running VMware vSphere 8.0.2.
Node 3 has an Nvidia GPU and is running vSphere 7 for compatibility reasons with the Mellanox ConnectX-3.
Node 3
| Component | Description |
| --- | --- |
| Model | Supermicro X9DRT |
| CPU | 2x Intel Xeon E5-2670 |
| RAM | 192GB |
| NIC 0 | Mellanox ConnectX-3 40Gb/s |
| GPU | Nvidia P4 |
| Boot Volume | 16GB USB |
Node 4 is out of service.
Storage
Quanta D51PH-1ULH
Quanta server running TrueNAS SCALE with 43.5TB usable.
Synology DS216+
This is in the house and is used for "offsite backups" as well as running phpIPAM.
Supermicro Node 3
The Supermicro nodes are a hyperconverged setup; due to the age of the hardware, the storage controllers stopped working with vSphere at the 6.x branch, and the Mellanox NIC stopped working with anything past 7.x. To make effective use of this hardware, I have decided to keep it at vSphere 7 and pass the RAID storage controller directly through to a Windows VM to use as a Veeam repository. With 6x 1TB disks in a RAID6 configuration, this gives me just over 3.6TB of Veeam repository, which is more than enough.
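RAID6 spends two disks' worth of space on dual parity, so usable capacity is (N − 2) × disk size. A quick sketch of that, assuming the "just over 3.6TB" figure is the binary (TiB) value the OS reports:

```python
# RAID6 usable capacity: two disks' worth of space goes to dual parity.
disks = 6
disk_tb = 1.0   # nameplate TB per disk

usable_tb = (disks - 2) * disk_tb        # 4.0 TB nameplate
usable_tib = usable_tb * 10**12 / 2**40  # ~3.64 TiB as Windows reports it
print(f"RAID6 usable: {usable_tb:.1f} TB ({usable_tib:.2f} TiB)")
```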
| Model | NVMe Capacity (TB) | SSD Capacity (TB) | HDD Capacity (TB) |
| --- | --- | --- | --- |
| Quanta D51PH-1ULH | 0 | 0 | 43.5 |
| Synology DS216+ | 0 | 0 | 2 |
| Supermicro Node3 | 0 | 0 | 3.2 |
| Total TB (Usable) | 0 | 7 | 48.7 |
Network
Physical Network
| Model | Description |
| --- | --- |
| Watchguard M200 | Core firewall |
| Mikrotik CRS504-4XQ-IN | Core fibre switch |
| Mikrotik CSS610-8P-2S+IN | PoE switch |
| Mikrotik CRS305-1G-4S+ | Fibre-to-Ethernet switch |
Wireless Network
| AP Model | Description |
| --- | --- |
| Ubiquiti UniFi AC Pro | Access point (office) |
| Ubiquiti UniFi 6 Pro | Access point (house) |
WAN
| WAN Provider | Description |
| --- | --- |
| Zen FTTP (via CityFibre) | 900/900 with a /29 IPv4 block |
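For reference, a /29 gives eight IPv4 addresses, of which six are usable once the network and broadcast addresses are taken out (and one of those typically goes to the provider gateway). Python's ipaddress module shows the arithmetic; the prefix below is the RFC 5737 documentation range, not my actual allocation:

```python
import ipaddress

# Example /29 using the documentation range -- not my real block.
block = ipaddress.ip_network("198.51.100.0/29")
print(block.num_addresses)        # 8 addresses in total
print(len(list(block.hosts())))   # 6 usable host addresses
```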