Nutanix Personal

Nutanix NCP

I saw a tweet a couple of weeks ago mentioning that Nutanix were offering a free attempt at the Nutanix Certified Professional exam. They are also offering free on-demand training to go with it. In my current role I haven’t used Nutanix; however, I have good experience using it as the storage platform with vSphere, and I have also run the Nutanix CE software both physically and nested to get a better overview of the AHV platform.

The online training is delivered on a pretty modern platform and is structured in a way that keeps you engaged. The labs aren’t super complicated, but they are embedded within the training rather than set as separate tasks. I think I prefer this approach.

This was my first online proctored exam, so the experience was a bit different. I initially had some issues getting the software to recognise my web camera; after a bit of troubleshooting this was resolved and the exam began. The questions were fair and followed the blueprint, and the time allocated was ample for me.

I wasn’t confident when I pressed the submit button, but the results came back instantly and luckily I passed. They also give you a breakdown of how you scored in the various sections, which is a really useful piece of feedback that a number of tests deprive you of.

All in all, top work Nutanix.

Homelab Nutanix

Nutanix CE 5.6

I have been running Nutanix CE at home for quite a while now, and the new version has just dropped, so I had to try it out. I decided to destroy the current (single node) cluster that I was running and start again. This was for a few reasons, but primarily I wanted to introduce disk redundancy and add some extra drives. I was previously running this on a Dell T20 with 1x SanDisk 240GB SSD and 1x WD Red 3TB drive, so I added another WD Red and SSD. I have also moved over to using an old Intel 80GB SSD as the boot volume (rather than a USB stick).

I decided to stick with the original installer method and copied the new build onto the SSD.

[codeblocks name='ntnxprep']
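As a rough sketch of what copying the build onto the boot SSD can look like (the image name and device path below are placeholders for illustration, not my exact values):

```shell
# Sketch only - placeholder image name and device; check lsblk before writing.
CE_IMAGE="ce-2018.05.01-stable.img.gz"   # placeholder CE 5.6 installer image
BOOT_SSD="/dev/sdX"                      # placeholder for the 80GB Intel boot SSD

# Decompress and write the installer image to the boot SSD (destructive),
# kept as a comment so this sketch is safe to run as-is:
#   zcat "$CE_IMAGE" | dd of="$BOOT_SSD" bs=1M conv=fsync
echo "would write $CE_IMAGE to $BOOT_SSD"
```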

Next I ran the installer and chose not to build a cluster. I then needed to revoke the previous SSH keys for the host and the CVM (as I had used the same addresses).
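Revoking the old keys is just a case of clearing the cached entries from known_hosts with `ssh-keygen -R`; a minimal sketch, assuming hypothetical lab addresses:

```shell
# Hypothetical addresses reused from the old cluster:
HOST_IP="192.168.1.50"   # AHV host (placeholder)
CVM_IP="192.168.1.51"    # CVM (placeholder)
KNOWN_HOSTS="$HOME/.ssh/known_hosts"

# ssh-keygen -R removes any cached host key for the given address:
for ip in "$HOST_IP" "$CVM_IP"; do
    if [ -f "$KNOWN_HOSTS" ]; then
        ssh-keygen -R "$ip" -f "$KNOWN_HOSTS" >/dev/null 2>&1
    fi
    echo "cleared cached key (if any) for $ip"
done
```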

I then SSH’d into the CVM and created the single node cluster, but with disk redundancy.

[codeblocks name='clustercreate']
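The shortcode holds the exact command I ran; the general form with the CE `cluster` CLI looks something like the below (the CVM IP is a placeholder, and it is echoed rather than executed so the sketch is safe outside a CVM):

```shell
# Placeholder CVM address for my lab:
CVM_IP="192.168.1.51"

# Single-node cluster, but with redundancy factor 2 at the disk level
# (two SSDs + two HDDs let data and metadata be mirrored across drives):
echo "cluster -s $CVM_IP --redundancy_factor=2 create"
```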

On logging in to Prism Element, the cluster shows that I’m now on the later version and I do indeed have storage redundancy. The big green OK shows this at a glance. The middle image shows a detailed view confirming that all the required components can tolerate a failure, but obviously I can’t tolerate a host loss as I only have one host (last image) 🙁

The cluster is pretty bare at the moment but I will start adding back the various systems.

Data is all ok

I am fully protected at the storage component level

I can’t lose a host 🙁



Nutanix Command Reference Guide


Nutanix Command List

This is a list of Nutanix commands I have found useful. It’s here as a reference, and if I need a command more than a few times I’ll generally add it here.


  • ncli cluster get-domain-fault-tolerance-status type=node ( Checks that all of the storage components meet the desired replication factor )
  • cvm_shutdown -P now ( Correct way to shut down a CVM )
  • ncc health_checks run_all --parallel=4 ( 4 is the max number of parallel checks )
  • curator_cli get_under_replication_info summary=true ( Checks if any objects are below the desired replication factor )
  • curl localhost:2019/prism/leader ( Finds the Prism leader )
  • http://{curator-master-cvm-ip}:2010/master/control ( If you want to invoke a Curator scan manually )
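When I run through several of these before maintenance, a tiny wrapper saves retyping; a sketch that just echoes the commands from the list (they only exist on a Nutanix CVM, so nothing is executed here):

```shell
#!/bin/sh
# Sketch: pre-maintenance checks from the list above, echoed rather than
# executed, since ncli/ncc/curator_cli only exist on a Nutanix CVM.
for cmd in \
    "ncli cluster get-domain-fault-tolerance-status type=node" \
    "curator_cli get_under_replication_info summary=true" \
    "ncc health_checks run_all --parallel=4"
do
    echo "would run: $cmd"
done
```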

Nutanix Life Cycle Manager

Nutanix Life Cycle Manager (LCM)

With the introduction of AOS 5, Nutanix introduced Life Cycle Manager (LCM), which is one of the best but least known Nutanix features. Put simply, it’s part of the Nutanix update mechanism, but for dealing with hardware rather than the software components.

To me, what makes LCM stand out is its pure simplicity. I have seen other solutions where it can be confusing to find out what hardware is on the HCL, then what firmware version is required, and then the appropriate driver for that combination. This becomes unmanageable at a large enough scale. Where I currently work we have a mix of five different Nutanix node types across different hardware generations (all based on Supermicro hardware). The below screenshots walk through an upgrade of one of these clusters.

The above picture demonstrates the simplicity: it shows that at present the only update available is to the “Cluster Software Component”. Once this has been updated, the next step is to perform an inventory of your cluster.
LCM will then show you all of the components in your cluster and the relevant upgrades available. If you work in a “dark site”, offline downloads are also available.
The below cluster has not had any updates run against it.

Once the inventory has been done, it’s time to decide if you want to run all of the updates or just a selection, and off you go.

You can see above all of the available updates in this 3 node cluster. Note that only two of the SSDs needed updates, as we had previously had one replaced and it was shipped with later firmware.

Because LCM is aware of the end-to-end stack, it knows about any relevant dependencies. The upgrade for the HBA listed below doesn’t have any.

Once you have started the upgrade progress LCM handles the orchestration piece, stopping just the required services and functions to allow the upgrade to complete.

For the HBA upgrade, LCM stopped the storage-related services on the CVM, but it left the CVM powered on and did not need to evacuate VMs from the ESXi host. This meant that the upgrade was done very quickly, and the storage services started again before moving to the next node.

As you can see, the Host Boot Device (SATADOM) and drives do require maintenance mode, but again all of this is handled by LCM.

And that’s it…