Categories
Homelab Storage VMware

NFS 4.1

Switching on NFS 4.1 in the Homelab

Like a number of homelabbers, I use Synology for storage. In my case, I have two: a 2-bay DS216+ and a 4-bay DS918+ that I have filled with SSDs.

NFS has been the preferred storage protocol for most people running Synology for two main reasons: the biggest is simplicity, but by all accounts it has also tended to offer better performance than iSCSI.

For me, the performance (especially on the DS918+) is great, with one clear exception: Storage vMotion. It’s not often that I move VMs around, but when I do it’s a tad painful. This is because I only have gigabit networking and NFS was limited to a single connection. However, it’s now possible to fix this…

I have tried to find out when Synology officially started supporting NFS 4.1 but couldn’t find a reliable answer. It has been a CLI option for a while, but it certainly exists in DSM 6.2.1.

The first thing to do is to make sure it’s enabled.

Then from vSphere create a new datastore

Make sure to select NFS 4.1

Then add the name and configuration; this is where the subtle differences kick in.

Note the plus on the server line, where multiple inputs can be added. In my setup, I have two IP addresses (one for each interface on my DS918+).

Although NFS 4.1 supports Kerberos, I don’t use it.

Finally, mount to the required hosts.

Of course, if you want to do this with PowerShell, that’s also an option:

[codeblocks name=’NFSMount’]

The other really nice thing is that VAAI is still supported. If you want to see the difference, here is a network graph from the Synology during a Storage vMotion, clearly better than the single-connection performance. This makes me much happier.

A note of caution for anyone wanting to do this: DON’T have the same NFS datastore presented to VMware over both the NFS 3 and NFS 4.1 protocols. The locking mechanisms are different, so bad things are likely to happen. For all of mine, I chose to evacuate the datastore, unmount it and re-present it as 4.1.

Categories
VMware

vRealize Suite Lifecycle Manager – Environment

Intro

As part of my new role, I have worked extensively on a project deploying VMware’s vRealize Suite Lifecycle Manager. It’s a fairly new product in the VMware ecosystem and not a lot of people have come across it. If you run any of the following products then it’s worth checking it out.

vRealize Automation

vRealize Operations

vRealize Log Insight

vRealize Network Insight

This is the first post of a few that I’m going to do on vRSLCM, showing a bit of the environment management and a product deployment.

 

Split Personality

vRSLCM performs two fairly distinct roles: environment management and content management. The content management part of the product is a replacement for Code Stream (Houdini).

Environment management is used for deployment, patching, certificates and environment config management.

Environment Management

To deploy a component you must first set up an environment within vRSLCM.

This involves creating a datacenter, where you also add the target vCenter.

 

When that is done you can create an environment. An environment is like a wrapper for the products, controlling them as a set. The idea is that you would have production, test, development etc., and you can have multiple of each if required.

When you have your environment created, it’s time to deploy a product. vRSLCM will do this step for you, but it needs to have the relevant files within the appliance. These are added in the settings section under product binaries.

There are three ways to add the relevant files to the appliance: connect it to an NFS share containing the files, upload them manually via SSH, or (by far the easiest) add your “My VMware” details and let vRSLCM download them automatically. The advantage of the My VMware method is that it can also track the available patches for the products.

I used a combination of NFS and My VMware to add the product binaries

Product Patches

 

Once that was done, I added two separate environments (one for testing and one to simulate production) and then deployed some workloads.

Here you can see that I have vROps deployed in test, and both Log Insight and vROps in the production environment.

I am now going to use vRSLCM to deploy Log Insight into the test environment by adding it as a product.

 

Here you can see I have selected Log Insight to be added (it is possible to add multiple products at the same time). I have gone for a “small” config and chosen version 4.6.0.

You will then be asked to confirm the user agreement, and vRSLCM will take you into the deployment step, where you provide the specific info. A really nice feature is that, if you provided your My VMware details earlier, the wizard will list the keys for you.

 

Most of the infrastructure details are taken from the environment set up earlier, including vCenter, cluster, network details, datastore, NTP etc. It will also deploy certificates for you at this step (a really nice feature).

 

The only questions the wizard can’t answer itself are the node size to be deployed, the name for the VM, the hostname and the IP address. Once you have added these to the wizard, check that both forward and reverse DNS are in place before going any further, because vRSLCM runs a prerequisite check at the next step.
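The forward and reverse lookups can be sanity-checked before submitting with a few lines of Python. This is a minimal sketch using only the standard library; `dns_roundtrip_ok` and the example hostname/IP are made up for illustration, not part of vRSLCM:

```python
import socket

def dns_roundtrip_ok(hostname, expected_ip):
    """Return True only if forward and reverse DNS agree for a host."""
    try:
        # Forward lookup: does the hostname resolve to the IP we plan to use?
        if socket.gethostbyname(hostname) != expected_ip:
            return False
        # Reverse lookup: does the IP map back to the same hostname?
        name, aliases, _ = socket.gethostbyaddr(expected_ip)
        return hostname in [name] + aliases
    except socket.error:
        # Either lookup failing means the records aren't in place yet
        return False

# Example (hypothetical node): check before running the vRSLCM precheck
# dns_roundtrip_ok("loginsight-test.lab.local", "192.168.1.50")
```

If this returns False for a planned node, fix the DNS records before letting vRSLCM run its precheck.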

 

Here you can see that the precheck failed, as I had a clash of virtual machine names between my test and production environments. This is an issue as they are in the same vCenter/cluster.

With the precheck passing, you submit this and vRSLCM will go off and deploy. Obviously, this can take quite a while depending on the config you have asked it to deploy. Progress can be monitored in the requests section.

Here you can see all the steps, and if required you can troubleshoot any failures. When complete, it should look something like the below.

 

Going back to the environment view, we now see that Test matches Production.

Categories
Homelab

Lab Storage

Lab Storage Update.

 

Since starting my new role with Xtravirt, my homelab has undergone a number of fairly significant changes. At the moment it’s very much focused around the VMware stack, and one of the things I needed was more storage, and especially more storage performance. With that in mind, I purchased a new Synology: a DS918+.

It’s a very compact unit with a quad-core Intel Celeron, and I have left the RAM at 4GB for now.

I have added some of the existing SSDs that I had, giving me about 3TB of usable flash, and I am presenting this back to my VMware hosts using NFS 4.1. I must have missed the announcement, as this is now built into the Synology GUI (it used to be a command-line-only option). I have verified that VAAI works as expected in this configuration. At present I am using a single network connection; however, I will be testing NFS multipathing shortly.

The performance improvement has been noticeable, and I have now removed all non-Synology systems from primary storage. This has left me with the DS918+ detailed here and a DS216+ with 2TB of RAID 1 WD Reds, which I am using for ISOs and some general file storage.

 

 

Categories
Homelab Hosting

Sophos UTM – Let’s Encrypt

Let’s Encrypt

 

I have written previously about my use of Sophos UTM within my homelab. Now, I know it’s not a perfect device, and some diehard network engineers will say it doesn’t have a CLI. But for my lab, my requirements and my level of skill, it’s a damn good device with SO many features. It may not have a CLI, but it does have an API, which has been on my backlog to look into for a long time.

Version 9.6 has just been released, and one of the features added is integration with Let’s Encrypt certificates. Here is a quick intro to get up and running with them.

Create a certificate

To get started, first of all we need to enable Let’s Encrypt. This is done in the advanced section of the Certificate Management console with a simple tickbox.

Once that’s been enabled, it’s time to request some certificates.

Navigate to Webserver Protection > Certificate Management > Certificates.

Click on +New Certificate…  

Hosting.jameskilby.net Certificate Creation

 

When you select save, the UTM appliance creates a self-signed certificate that can be used immediately. In the background it requests a certificate from Let’s Encrypt and, providing it passes the validation checks, the signed certificate is received back from Let’s Encrypt.

 

Let’s Encrypt Certificate

 

Then it’s simply a case of applying it. In this example I have added it to the Web Application Firewall section protecting the webserver.

This can then be validated by visiting the site, and as can be seen, it’s displaying properly.
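If you'd rather not eyeball the browser padlock, the certificate a site presents can be pulled and inspected with Python's standard library. This is just a sketch; the hostname in the comment is an example and `fetch_cert`/`issuer_org` are helper names I've made up:

```python
import socket
import ssl

def fetch_cert(hostname, port=443):
    """Open a TLS connection and return the peer certificate as a dict."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

def issuer_org(cert):
    """Pull the issuer organisation out of the dict getpeercert() returns.

    getpeercert() encodes the issuer as a tuple of RDNs, each itself a
    tuple of (field, value) pairs, so it needs flattening first.
    """
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "")

# Example: once the signed cert is live, issuer_org(fetch_cert("hosting.jameskilby.net"))
# should report the Let's Encrypt organisation rather than the UTM's self-signed one.
```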

I have created Let’s Encrypt certificates for all of the services that I run on the UTM; they auto-renew and generally make life a lot easier.

Categories
Consulting

What’s in my backpack?

I have seen a few posts online recently about the tools and technology people use on a day-to-day basis. A few items that I carry don’t seem to have been mentioned by anyone yet, so I thought it was probably a good idea to share my list. I will probably do a separate list of the software I use at some point as well.

Below is a list of the things that I tend to carry in my bag. I don’t always have all of them with me; I typically swap out what I need depending on whether I’m heading to a datacentre or just a meeting/on-site client work.

  • MacBook Air 2018 (my go-to machine)
  • iPad Pro 9.7″ & Apple Pencil – allows for quick sketching but also doubles as a presentation remote
  • Anker PowerCore 20100 (huge battery for keeping iPhone/iPad charged)
  • Beats Solo3 wireless headphones – equally useful in a noisy datacentre or a noisy office
  • 2.5″ USB drive caddy
  • Bunch of USB sticks, typically with a few different ESXi builds (6.0, 6.5 and 6.7) and an empty one just in case
  • EasyAcc Wireless Storage PowerBank – battery-powered AP (insanely useful device)
  • Huawei 3G MiFi
  • StarTech USB KVM (another lifesaver)
  • Apple Lightning to HDMI adaptor + HDMI cable
  • Leatherman Wingman
  • LED head torch
  • Dymo portable label printer
  • Trusty vendor-supplied notepad and pens

Based on the recommendation of Chris Lewis, I have also added some whiteboard & flipchart pens.

 

Oh and a Tie……

Categories
Security

2FA

In this day and age, two-factor authentication (2FA) is basically a must, and I try to use it on any system that supports it. I have a Yubikey and I am a massive fan of it, but not everything supports U2F and sometimes it’s not convenient. I recently saw an announcement that Yubico is developing a Lightning-based version that also includes USB-C, which is awesome, as at the moment I have a suboptimal experience with my new Mac.

 

 

Suboptimal

 

For the systems that don’t support my Yubikey but do support the Google Authenticator protocol, I have moved to using Authy as my 2FA application. The primary reason is that I use multiple devices (two iPhones, an iPad, a Mac & a work laptop), and having to add secrets to each one and then keep them all in sync was too much hassle. Authy has a sync feature that totally solves this: add a token once and it is passed to all your other devices. One feature that I only found out about post-install is that Authy works on the Apple Watch. For me, this is a killer feature that I didn’t even know I needed. I have had occasions in the past where I have been away from home and my iPhone has had a flat battery etc.

Some people may be unhappy with the secret-synchronising feature of Authy. For me, this is a very acceptable trade-off. It can be turned off if required, and in the event of a device loss I can revoke access from any of the other devices.

Revoke

I recommend having a look at twofactorauth and adding 2FA for any company/device that supports it. A few companies were listed that I use but wasn’t aware supported 2FA.

I have a few more of my lab systems to add, but at the moment I have 16 services in Authy, with a subset shown below.

Categories
Apple

New Laptop

I decided it was about time I replaced my trusty MacBook Air that I purchased back in 2011. After waiting and watching the Apple announcements over the last couple of years, I decided the MacBook Pros weren’t worth it. So I have replaced my Air with, yes, you guessed it, another MacBook Air.

This time I went for the retina version with 16GB of RAM and a 512GB PCIe-based SSD. The spec is certainly good enough for 99% of my needs while still being lighter and having better battery life than a MacBook Pro. It also has the security benefits of the T2 security chip.

I also thought it was time to tidy up and publish my Mac build scripts for anyone else to use.  They are available below and hopefully pretty easy for anyone to follow.

https://github.com/jameskilbynet/MacSetup

 

Categories
Personal

And now for something completely different

I have worked for my current employer, Zen Internet, for 3.5 years. Over that time I have moved from what was originally a customer-focused role into a role with one of the core platform teams, which has meant looking after the majority of the internal and customer virtual platforms. During this time Zen has undergone a number of large migration programs and countless smaller ones. Some of the big ones include:

I have been lucky enough to be involved with all of these (some much more than others). Although the work is never complete, Zen is in a good place.

At the same time, my personal circumstances have changed somewhat. My upcoming wedding, and my fiancée not knowing exactly where she will be based, meant that having the freedom to be located anywhere (within reason) was a huge win for me.

I was also looking for a role that would challenge me, using some of the toolsets that I love, while also being on the cutting edge of the various stacks.

And with that, I am proud to say I am joining one of the big hitters, awarded both the EMEA and Global VMware Partner Innovation Awards for Professional Services in 2018.

Yes, I’m Joining Xtravirt as a Senior Consultant.

This should allow me to utilize and improve my core VMware knowledge, but also leverage some of my other skills, including Veeam, Nutanix and AWS. Something I’m really looking forward to. I also get to work with Chris Lewis, Simon Eady and one or two others…

 

Categories
AWS Homelab Money

AWS IoT Button


Back Story:

My AWS Solutions Architect certification is due to expire in the next six months, and given I have not done a huge amount with AWS since getting certified, I thought it was worth kicking the tyres again and running a few bits and pieces within AWS. One of the first things I did was move my blog over to AWS Lightsail.

In addition to the above, I thought I would purchase an AWS IoT button and have a play. The setup for these is now MUCH simpler with the introduction of the iOS and Android setup apps.

Part 1: Button setup to email

To start with, I just wanted to do something easy, so I set the button up so that a press would send me an email via SES. This was to get to grips with the button and check I had the comms set up correctly, etc. I chose to use one of the prebuilt Python functions for this. It delivers a basic email like the one below.

[codeblocks name='Pythonmail']

Part 2: IFTTT integration

Once I had this working, I decided to hook it into my Philips Hue setup to turn the lights on or off. This was done mainly with the help of this post from Joseph Guerra. It wasn’t quite straightforward, as IFTTT have renamed some parts of the site. Joseph did a great post explaining this; however, what he describes as Maker is now called Webhooks within IFTTT. This is the full code that I’m using (just with my key masked).

[codeblocks name=’IFTTTLambda’]

Part 3: Monzo

Once the AWS-to-IFTTT integration was set up, the next steps were quite easy. Monzo is becoming my go-to bank for most things, and they recently announced IFTTT integration, so I wondered if I could hook my IoT button into Monzo. I decided to create an action so that when my button was pressed, it would move £1 into a savings pot.

First you need to log in to your IFTTT account and add the Monzo channel. This is pretty straightforward if you do it from your phone, where IFTTT and Monzo are installed.

I then went back to IFTTT on my laptop to create the new applet using the create link https://ifttt.com/create

Then click on the + icon and drill down to find the webhook section

Then you need to check that the event name on the webhook matches the AWS Lambda event; in my case I am using “buttonpress”.
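A handy way to confirm the names line up is to fire the same event from your desk without touching the button, since the Webhooks trigger URL has a well-known shape. A quick sketch (the key below is a placeholder, and the event name must match whatever your Lambda sends):

```python
import urllib.request

IFTTT_KEY = "YOUR_WEBHOOKS_KEY"  # placeholder: find yours in the Webhooks service settings
EVENT = "buttonpress"            # must match the event name the Lambda function fires

def trigger_url(event, key):
    # Standard IFTTT Webhooks trigger endpoint
    return "https://maker.ifttt.com/trigger/{}/with/key/{}".format(event, key)

def fire(event=EVENT, key=IFTTT_KEY):
    # A simple POST is enough; IFTTT replies with a plain-text confirmation
    with urllib.request.urlopen(trigger_url(event, key), data=b"") as resp:
        return resp.status
```

If `fire()` moves the money but the button doesn’t, the mismatch is on the Lambda side rather than in IFTTT.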

This completes the “this” section; now you need to sort the “that”.

Click on the plus and select the Monzo service with an action to move money into a pot (within the Monzo app I have already created a savings pot called IFTTT).

At the end of the process you should have something that looks like the below

If everything is set up OK, a button press will move the money over in a few seconds.

Categories
Homelab Nutanix

Nutanix CE 5.6


 

I have been running Nutanix CE at home for quite a while now, and the new version has just dropped, so I had to try it out. I decided to destroy the current (single-node) cluster that I was running and start again. This was for a few reasons, but primarily I wanted to introduce disk redundancy and add some extra drives. I was previously running this on a Dell T20 with 1x SanDisk 240GB SSD and 1x WD Red 3TB drive, so I added another WD Red and another SSD. I have also moved over to using an old Intel 80GB SSD as the boot volume (rather than a USB stick).

I decided to stick with the original installer method and copied the new build onto the SSD.

[codeblocks name='ntnxprep']

Then I ran the installer and chose not to build a cluster. I then needed to revoke the previous SSH keys for the host and the CVM (as I had used the same addresses).
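Revoking the old keys is normally just `ssh-keygen -R <address>` once per host. As a Python equivalent, here is a rough sketch of the same idea (assuming the default plain-text known_hosts layout, with no hashed entries; `forget_hosts` is a name I've made up):

```python
from pathlib import Path

def forget_hosts(known_hosts, *addresses):
    """Drop known_hosts entries for the given hostnames/IPs.

    Roughly what `ssh-keygen -R <address>` does, minus hashed-entry
    support. Returns how many lines were removed.
    """
    path = Path(known_hosts)
    lines = path.read_text().splitlines()
    kept = [line for line in lines
            # the first field may be a comma-separated list of names for one key
            if not any(addr in line.split(" ", 1)[0].split(",")
                       for addr in addresses)]
    path.write_text("\n".join(kept) + ("\n" if kept else ""))
    return len(lines) - len(kept)

# Example (the IPs are my reused lab addresses, so yours will differ):
# forget_hosts(Path.home() / ".ssh" / "known_hosts", "192.168.1.40", "192.168.1.41")
```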

I then SSH’d into the CVM and created the single-node cluster, but with disk redundancy.

[codeblocks name='clustercreate']

On logging in to Prism Element, the cluster shows that I’m now on the later version and that I do indeed have storage redundancy. The big green OK shows this at a glance. The middle image shows a detailed view confirming that all the required components can tolerate a failure, but obviously I can’t tolerate a host loss as I only have one host (last image) 🙁

The cluster is pretty bare at the moment but I will start adding back the various systems.

Data is all OK
I am fully protected at the storage component level
I can’t lose a host 🙁