I’m a Solution Architect in Broadcom’s EMEA Hyperscalers team, a twelve-year VMware vExpert, and co-founder of the UK Veeam User Group. Alongside my day-to-day work I share practical insights on VMware, AWS cloud infrastructure, and enterprise AI through podcasts, user group sessions, and conference talks. Below you’ll find recordings of some of those appearances.
Xtravirt CloudInsiders Podcast – VMware Cloud on AWS
In this episode of the Xtravirt CloudInsiders podcast, I discuss the practical realities of running VMware Cloud on AWS (VMC on AWS) — covering architecture decisions, common migration patterns, and what organisations need to consider before moving workloads to the cloud. We explore the commercial model, when VMC on AWS makes sense versus a native AWS approach, and the operational differences teams should plan for.
Xtravirt CloudInsiders Podcast – Data Availability
In this episode I join the Xtravirt CloudInsiders panel to explore data availability in hybrid and multi-cloud environments. The conversation covers backup strategy, recovery point and recovery time objectives, business continuity planning, and how organisations can protect critical workloads whether they sit on-premises, in VMware Cloud on AWS, or across multiple clouds.
Data availability is often an afterthought in cloud migrations. This episode takes a pragmatic look at what it takes to keep production data protected and recoverable at scale, drawing on real-world experience with Veeam and the broader backup ecosystem. I draw on four years as a Veeam Vanguard and my work co-founding the UK Veeam User Group to ground the discussion in practitioner reality.
UK VMUG UserCon 2025
Presented at the UK VMware User Group (VMUG) UserCon 2025 in London, this session — Garage to Boardroom: Scaling AI Innovation from Homelab to Enterprise Success — explores how homelab experimentation can directly inform and accelerate AI adoption at enterprise scale. Gareth and I walk through our own experience running large language models, GPU inference, and AI pipelines on home hardware, and translate those lessons into practical architecture guidance for enterprise teams evaluating their own AI infrastructure strategy.