Categories
VMware VMware Cloud on AWS

VMC Host Errors

When you run a large enough infrastructure, failure is inevitable. How you handle it can be a big differentiator. With VMware Cloud on AWS, the hosts are monitored 24×7 by VMware/AWS Support, all as part of the service. If you pay for X hosts you should have X, including during maintenance and failure operations.

I’m not sure lucky is the right word, but I did witness a host issue with a customer I was working with. True to the marketing, it was picked up and automatically remediated.

Looking at the log extract above, a new host was being provisioned the same minute the issue was identified. This new host needed to boot and join the VMware/vSAN cluster before a full data evacuation could take place on the faulty host; finally, the faulty host was removed.

All of this was seamless to the customer. I only noticed it because a few HA alarms tripped in vCenter (these were cosmetic only).

Just another reason why you should look at the VMware Cloud on AWS service.

Categories
VMware

VMware Certified Master Specialist HCI 2020

I recently sat (and passed) the VMware HCI Master Specialist exam (5V0-21.20). I won’t go into any details of the contents, but I will comment that I felt the questions were fair and that there wasn’t anything in it to trip you up. The required knowledge was certainly wider than the vSAN Specialist exam.

This was my third remote proctored exam and I must say the experience has improved. Partially that is down to Pearson Vue improving the process and partially down to knowing what to expect a bit better.

Together with the vSAN exam that I passed earlier in the month, this entitles me to a VMware Master Services Competency. This is now the second that I hold, and I must say I like the way VMware is delivering a thought-out learning path.

Categories
Cloudflare Hosting

Cloudflare Workers

I have been reading more and more about Cloudflare Workers, and they look really cool, but I couldn’t think of how I might make use of them. Then I came across a post showing how to use the WP2Static plugin to take an existing WordPress site and migrate it to Cloudflare Workers. That was the kick I needed to migrate my site over. What I like about this solution is that I keep the ease of content creation that goes with WordPress but gain the speed and security associated with a serverless site.

Therefore what I will do is maintain a WordPress instance locally. It doesn’t need to be publicly accessible, so there is no security risk, and it only needs to be online when I want to edit content.

The WP2Static plugin supports automated deployments to a number of hosting platforms. At the moment Cloudflare isn’t one of them, so it does require a bit more manual work; hopefully they will support this in the future. I think the trade-off is worth it: Cloudflare has an extensive global network in more than 200 cities, and I’m going to leverage it to run this site.

The process itself is pretty straightforward:

Add the WP2Static plugin to your WordPress site like any other WordPress plugin.

Make sure to set the export as a zip file and select offline usage. Click to start the static site export, then download the generated file; you will need this later.

On your machine, you need to have Wrangler installed. I installed it with npm:

npm i @cloudflare/wrangler -g

Then navigate to your working folder and generate the config. I have called my site wp-static:

wrangler generate --site wp-static

This will then generate three things:

  • A public directory
  • A workers-site directory
  • A wrangler.toml file

Then take the zip file generated earlier and drop the contents of it into the public folder.

The next step is to add the relevant config to your wrangler.toml file. You need to get the Zone ID and Account ID from within the Cloudflare portal. Below is an example of what my site looks like.

name = "wp-static"
type = "webpack"
account_id = "c9xxxxxxxxxxxxxxxxxxd"
workers_dev = true
route = ""
zone_id = ""

[site]
bucket = "./public"


[env.production]
# The ID of your domain you're deploying to
zone_id = "9881xxxxxxxxxxx4"
# The route pattern your Workers application will be served at
route = "jameskilby.co.uk/*"

Executing wrangler preview will build and deploy the site and launch a preview of it in your browser so you can check it.

If it all looks good

wrangler publish 

will deploy it to a workers.dev-based domain. In my case, I have registered kilby.workers.dev with Cloudflare, so the site becomes:

https://wp-static.kilby.workers.dev

The final step is to publish with the --env production parameter, which uses the [env.production] section of the wrangler.toml file.

wrangler publish --env production

This automatically deploys the site to Cloudflare under the jameskilby.co.uk domain.

Issues:

I have had to change a few things with the move to Workers. I am now injecting Google Analytics statically into the website. Previously I was injecting this dynamically at runtime; however, that didn’t appear to work with the new setup.

The way I have deployed the site means none of the dynamic elements of WordPress will work. However, I wasn’t using comments etc., so this isn’t a big thing.

Performance:

The Cloudflare Workers site is much snappier than the native WordPress site, even when that was using the Cloudflare CDN. It will also perform consistently globally, whereas the old site would perform worse the further away from the UK the visitor was, due to only having a single origin server (running from my lab in the UK).

Testing with gtmetrix.com before and after, the fully loaded page time has reduced from 5.5s to 1.6s.

Cloudflare reports on the CPU consumed as part of running the site.

Costs

Because of the way this uses Cloudflare Workers, it relies on an element called Workers KV, Cloudflare’s global low-latency key-value store. This is unfortunately not available in the free tier, so I have upgraded to the Workers Unlimited plan for $5 a month, something that I think is good value for money.

Categories
VMware VMware Cloud on AWS

New Host Family

VMware Cloud on AWS has introduced a new host to its lineup: the “i3en”. This is based on the i3en.metal AWS instance.

The specifications are certainly impressive packing in 96 logical cores, 768GiB RAM, and approximately 45.84 TiB of NVMe raw storage capacity per host.

It’s certainly a monster, with a 266% uplift in CPU, a 50% increase in RAM and a whopping 440% increase in raw storage per host compared to the i3. Most of the engagements I have worked on so far have found that they are storage-limited, requiring extra hosts to handle the required storage footprint. With such a big uplift in storage capacity, hopefully workloads will trend towards filling up CPU, RAM and storage at the same time. This is the panacea of hyperconvergence.

One of the other noticeable changes is that the processor is from a much later Intel family: 3.1 GHz all-core turbo Intel® Xeon® Scalable (Skylake) processors, much more modern than the Broadwells in the original i3. This brings a number of processor extension improvements, including Intel AVX, AVX2 and AVX-512.

The other noticeable change is the networking uplift, with 100 Gbps available to each host.

| Model | pCPU | Memory (GiB) | Networking (Gbps) | Storage (TB) | AWS Host Pricing (On-Demand in US-East-2, Ohio) |
|---|---|---|---|---|---|
| i3.metal | 36* | 512 | 25 | 8×1.9 | $5.491 |
| i3en.metal | 96 | 768 | 100 | 8×7.5 | $11.933 |

*The i3.metal instance, when used with VMware Cloud on AWS, has hyperthreading disabled.

At present this host is only available in the newer SDDC versions (1.10v4 or later) and in limited locations.

It also looks like the i3 still has to be the node type used in the first cluster within the SDDC (where the management components reside), and i3en hosts aren’t supported in two-node clusters.

At the time of writing, pricing from VMware was initially unavailable, with pricing only published for the hosts if bought directly from AWS. VMware have since released pricing; the below is for On-Demand in the AWS US-East region.

i3.metal is £6.213287 per hour and i3en.metal £13.6221 per hour, giving:

  • A cost per GB of SSD instance storage that is up to 50% lower
  • Storage density (GB per vCPU) that is roughly 2.6x greater
  • Ratio of network bandwidth to vCPUs that is up to 2.7x greater

This new host type adds a further complication when choosing host types within VMware Cloud on AWS, but makes it a very compelling solution.

Categories
Nutanix Personal

Nutanix NCP

I saw a tweet a couple of weeks ago mentioning that Nutanix were offering a free go at the Nutanix Certified Professional exam, along with free on-demand training. I haven’t used Nutanix in my current role; however, I have good experience using it as the storage platform with vSphere and have also run the Nutanix CE software both physically and nested to give me a better overview of the AHV platform.

The online training is delivered on a pretty modern platform and is done in a way that keeps you engaged. The labs aren’t super complicated but are embedded in the training as opposed to being a separate task. I think I prefer this approach.

This was my first online proctored exam, so the experience was a bit different. I initially had some issues getting the software to recognise my web camera. After a bit of troubleshooting, this was resolved and the exam began. The questions were fair and followed the blueprint, and the time allocated was ample for me.

I wasn’t confident when I pressed the submit button, but the results came back instantly and luckily I passed. They also give you a breakdown of how you scored in the various sections; this is a really useful piece of feedback that a number of tests deprive you of.

All in all, top work Nutanix.

Categories
AWS Veeam VMware VMware Cloud on AWS

Monitoring VMC – Part 1

As previously mentioned I have been working a lot with VMware Cloud on AWS and one of the questions that often crops up is around an approach to monitoring.

This is an interesting topic, as VMC is technically “as a service”, so the monitoring approach is a bit different. AWS and VMware’s SRE teams will be monitoring all of the infrastructure components; however, you still need to monitor your own virtual machines. If it was me, I would still want some monitoring on the infrastructure, and I see two different reasons why you would want to do this:

Firstly, I want to check that the VMware Cloud on AWS service is doing what I am paying for. Secondly, I still need to monitor my VMs to ensure they are all behaving properly. The added factor is that with a good real-time view of my workload, I can potentially optimise the number of VMC hosts in my fleet, reducing costs.

With that in mind, I decided to look at a few options for connecting some monitoring tools to a VMC environment to see what worked and what didn’t. I am expecting some things could behave differently, as you don’t have the true root/admin access you usually would. All of the tests will be done with the cloudadmin@vmc.local account. This is the highest-level account that a service user has within VMC.

The first product that I decided to test was Veeam One. This made sense for a few reasons: firstly, I’m a Veeam Vanguard and am very familiar with the product, and I have access to the beta versions of the v10 products as part of the Vanguard program.

Secondly, it’s pretty easy to spin up a test server to kick the tyres and, finally, the config is incredibly quick to implement.

I could have easily added the VMC vCenter to my existing Veeam servers; however, I chose to deploy a new server just for this testing. Assuming you have network access between your Veeam One server and the VMC vCenter, adding it to Veeam One is straightforward. If not, you will need to open up the relevant firewalls.

Once done, Veeam performs an inventory operation and returns all of the objects you would expect. This test was shortly after the VMC environment was created, so it doesn’t yet have any workloads migrated to it. However, as you can see below, it’s correctly reporting on the hosts and VM workloads, including reporting that the hosts are running ESXi 6.9.1.

I also ran a couple of test reports to check they functioned; everything worked as I would expect.

In part two I am going to look at using Grafana, InfluxDB and Telegraf to see if this common open-source monitoring stack works with VMC.
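
As a teaser, a minimal sketch of a Telegraf config for that stack might look like the below. This is a hypothetical example I haven’t yet tested against VMC; the vCenter URL and password are placeholders.

```toml
# Sketch: Telegraf's vSphere input pointed at a VMC vCenter with the
# cloudadmin account, writing metrics to a local InfluxDB.
[[inputs.vsphere]]
  vcenters = ["https://vcenter.sddc-placeholder.vmwarevmc.com/sdk"]
  username = "cloudadmin@vmc.local"
  password = "changeme"
  # Skip certificate verification for testing only
  insecure_skip_verify = true

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "vmc"
```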

Categories
Cloudflare Hosting

Cloudflare Setup for WordPress Users

I have been a huge fan of Cloudflare since they first came to my attention; I did a post on them a few years ago. They do an excellent job of improving web performance and increasing security. I also find Cloudflare’s blog a fascinating read.

I saw a tweet by Chris Wahl recently where he talked about a Cloudflare firewall rule he is using to protect his WordPress instance.

I am using something similar in the Firewall section and also leveraging a couple of other cool features.

Chris has done an excellent write-up on the firewall part, including how to achieve this with Terraform, so for the detailed look check out his blog post here; for a slightly simpler version, see below. This post will also cover some of the other features I am using to improve the speed, security and functionality of my site.

Firewall Rules

The most important thing when hosting a WordPress site is to protect the admin section. This should be done with a strong password and preferably two-factor authentication. However, if you can stop people even reaching this part of the site, even better. If you are using Cloudflare then this is easy to achieve.

From the Cloudflare portal, navigate to My Account > Firewall > Firewall Rules, create a rule, give it an appropriate name, then configure the settings as per below. The IP(s) in the value section are the only ones that will be able to access the site once this configuration is live.

When the rule is live it will look like the below. A really nice touch is the graph showing how many requests have matched the rule, and you can also dig in to see the individual events if required. An example drop log is shown below.
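
As a sketch, a rule of this type can be expressed in Cloudflare’s firewall expression language roughly as follows. The IP address is an example; substitute your own allow-list:

```
(http.request.uri.path contains "/wp-login.php"
 or http.request.uri.path contains "/wp-admin")
and not ip.src in {203.0.113.10}

Action: Block
```

Anything hitting the admin paths from an address outside the list is blocked before it ever reaches the origin server.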

Page Rules

I also use another feature within Cloudflare called Page Rules (My Account > Page Rules).

Within the free tier of Cloudflare you are allowed to create up to three rules. At the moment I am using two of these.

The first of these is an automatic rule to rewrite to HTTPS. I am using this with wildcards to ensure that all pages are taken care of but still land on the intended page. Details of what Cloudflare does are here.

The other rule I use is for a status page. This is more for demonstrating some AWS features as a status page, but I am sure multiple other use cases exist. As Cloudflare intercepts the request before it reaches any webserver, the redirect is quicker; better still, in this case they can do the redirect even if my webserver is not online.
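
For reference, the two rules look roughly like this. The domain and status-page target are placeholders, not my actual values:

```
# Rule 1: force HTTPS on every page, wildcarded so any URL lands correctly
URL match: http://*example.com/*    Setting: Always Use HTTPS

# Rule 2: redirect the status page, served even if the origin is offline
URL match: *example.com/status      Setting: Forwarding URL (301) -> https://<status page target>
```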

Cloudflare Applications

Another really nice use case is Cloudflare’s Apps. As the HTML, CSS, etc. pass through the Cloudflare network, Cloudflare can manipulate them. They do this to improve performance using compression, and they also have the capability to inject code; I use this to add Google Analytics to every web page. They have a large number of Apps available to easily make functional changes to your site.

WordPress Plugins

WordPress has a plugin for interacting with Cloudflare via the API. This has a couple of uses and is highly recommended. Firstly, the plugin can optimise your WordPress install to work best with Cloudflare. It also gives access to some of the basic settings, allowing anyone with admin access to WordPress to tweak Cloudflare settings if required.

The second function it performs is automatic cache management, invalidating cached content as it is changed.

Categories
Veeam

VeeamON2020

As everyone knows by now, the world has changed, possibly forever. Due to Covid-19, working from home has become the new normal. We are lucky in the IT world that this has been fairly straightforward for most of us; we are privileged in that it’s possible for us to continue indefinitely. Organisations still need to move forward, to progress and adapt to the new normal.

In the words of Charles Darwin “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

With that, most (if not all) IT conferences have been postponed or gone online. Veeam’s annual conference VeeamON is no exception, and now it’s here!

As a Veeam Vanguard, I was privileged to be given an early briefing on some of the announcements. These are summarised below, but for all the details make sure you sign up and view some of the great sessions.

If you haven’t managed to sign up you still can here.

Headline Announcements

  • Veeam Backup for Office 365 v5
    – Microsoft Teams backup
    – Modern Authentication
  • Veeam Backup for AWS v2
    – Snapshot replication
    – Hybrid cloud
  • Veeam Availability Orchestrator v3
    – Fast recovery using NetApp Snapshots
    – DR Pack purchase options
  • Veeam Availability Suite v11
    – Continuous Data Protection
    – Object storage enhancements: Capacity Tier now supporting Google Cloud Object Storage
    – New Archive Tier: supporting AWS S3 Glacier
    – Instant Recovery improvements: Instant NAS & instant database recovery

Last but not least, a feature that I have been asking about for over three years:

Yes, a Veeam Backup Agent for Mac!

Categories
AWS Personal

AWS Solution Architect – Associate

Today was a good day: I renewed my AWS Solution Architect certification. Although my work is primarily in and around the VMware ecosystem, I have been working a lot with VMware Cloud on AWS recently with a number of our customers. Having a good foundation in the core AWS services has made it much easier to articulate the solution to customers.

Due to the non-disclosure agreement I can’t talk about any specific questions; however, I will say that the questions and the focus have changed quite significantly since I sat the exam a few years ago. Like AWS itself, the exam is at the cutting edge of the industry, and the sheer number of services they now offer adds to the challenge. I am now good for the next three years; however, I am planning to sit the Professional-level exam in the new year.