Thursday, May 13, 2021

Enabling Supervisor Cluster with AVI

A few weeks ago, I blogged about how to enable Workload Management Cluster (WMC) on vSphere with Tanzu using NSX and vSphere Distributed Switch (vDS) with HAProxy.


You might also have heard that, starting with vSphere 7 Update 2, there's a new load balancer option for vDS use cases: NSX Advanced Load Balancer.


NSX Advanced Load Balancer uses a software-defined architecture that separates the central control plane from the distributed data plane where the services run. It provides full-featured, enterprise-grade load balancing, web application firewall (WAF), and analytics, delivering traffic management and application security while collecting real-time analytics from the traffic flows.


The best thing about it is that NSX Advanced Load Balancer is included in every Tanzu Edition.


Before enabling it, make sure all the requirements are met. Here's the video:



Thursday, May 6, 2021

Tanzu Kubernetes Cluster and Virtual Machine Class


Recently I upgraded my environment to vSphere 7 Update 2 and, while doing some new tests with vSphere with Tanzu, like the Virtual Machine Service (more about it in a later post), to my surprise the creation of a Tanzu Kubernetes Cluster with a YAML file I've been using for several months for my demos failed.


When I described the deployment of the TKG cluster, I found an error saying VirtualMachineClassBindingNotFound.


Running the command "kubectl get virtualmachineclasses" showed that the Virtual Machine Class I was using in my declarative file was available inside the Namespace.
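For context, the YAML file in question looks roughly like the sketch below (cluster name, namespace, storage class, and Kubernetes version are hypothetical placeholders, not my actual values); the `class` fields are what reference the Virtual Machine Class:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: demo-cluster          # hypothetical cluster name
  namespace: demo             # hypothetical vSphere Namespace
spec:
  distribution:
    version: v1.19            # hypothetical TKR version
  topology:
    controlPlane:
      count: 1
      class: best-effort-small            # the Virtual Machine Class
      storageClass: vsan-default-policy   # hypothetical storage class
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-policy
```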


That's when I remembered the new Virtual Machine Service and decided to take a closer look at my Namespace's configuration.


To my surprise (it shouldn't have been), there's an entirely new section for the VM Service, and one of its options is to configure VM Classes.


Once I enabled the VM Class I was using for my TKG cluster, everything worked as expected and the cluster was created successfully.
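A quick way to check both sides from the CLI is sketched below: the first command lists the classes themselves, while the second (available starting with vSphere 7 U2) lists which classes are actually bound to the Namespace, which is what the error was complaining about. The namespace name is a placeholder:

```shell
# List the Virtual Machine Classes visible from the Supervisor Cluster
kubectl get virtualmachineclasses

# List the classes bound to a specific Namespace ("demo" is hypothetical);
# a class missing here triggers VirtualMachineClassBindingNotFound
kubectl get virtualmachineclassbindings -n demo
```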


This improvement greatly enhances the governance of the platform, allowing operators to control the resources developers can consume in a very elegant way.


See you next time!


Friday, April 16, 2021

Enabling Workload Management Cluster


Undoubtedly, vSphere with Tanzu is the greatest innovation brought to vSphere in years, deeply integrating Kubernetes within the platform and enabling all of you to consume K8s, Pods, and VMs side by side in an agile and integrated way without compromising governance.


I've been demoing the Workload Management Cluster (WMC) feature and all its beauty to my customers, but one thing that's always missing, mostly because of time constraints, is showing how to enable WMC on the cluster.


So, I took some videos for future reference.


If you remember, when vSphere 7 was first released, NSX-T was a requirement to enable WMC, because it's the technology that provides pod-to-pod communication and services, like Load Balancer, to the cluster.


So the first video shows how to enable WMC with NSX-T.

Be mindful that, beforehand, I had to take care of the NSX-T implementation and requirements, as listed here.



Starting with vSphere 7 Update 1, you can connect your cluster directly to your vSphere Distributed Switch (vDS) and use an independent load balancer, allowing a broader reach of vSphere users without the need for NSX-T.

This way, pod-to-pod communication is handled by the Antrea CNI and services flow through HAProxy, the first independent load balancer supported.
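In practice this means a plain Kubernetes Service of type LoadBalancer gets its external IP from HAProxy. A minimal sketch (the service name, selector, and ports are hypothetical):

```yaml
# Minimal Service of type LoadBalancer; in the vDS + HAProxy topology,
# HAProxy allocates the external IP and forwards the traffic
apiVersion: v1
kind: Service
metadata:
  name: demo-svc        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: demo           # hypothetical pod label
  ports:
  - port: 80            # external port on the load balancer
    targetPort: 8080    # container port behind it
```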


This model is a good fit for entry-level users, PoCs, and labs, mostly because HAProxy lacks some enterprise features needed for a production implementation. Here's a basic comparison.


Here’s the video enabling WMC with HAProxy.

As you might imagine, I had to take care of the HAProxy implementation and requirements as well.


Most recently, VMware released vSphere 7 Update 2, with an alternative option to HAProxy.

NSX Advanced Load Balancer became the second independent load balancer supported for vSphere with Tanzu, a real enterprise-grade solution.


Unfortunately I don't have the video yet, so keep watching for updates over here.

******          Updated information - May 13, 2021          ******
******       Enabling WMC on NSX Adv. Load Balancer         ******
******************************************************************



Wednesday, March 31, 2021

Demystifying vSphere Replication 8.4


One of my blog's traditions is demystifying vSphere Replication's operational limits.

I started it in 2013 with vSphere Replication (vR) 5.0 and have kept updating it every time a major enhancement was made, like in vR 5.5 and vR 6.0.


If you are new over here, vSphere Replication is a replication engine provided by VMware that replicates data from one storage to another. Since it does not depend on array-based replication technology, you can use it to replicate data between storage arrays from different vendors. It's also the main component behind VMware Site Recovery, where customers protect their on-prem workloads to the VMware Cloud on AWS solution.




Now, back to the operational limits.


Starting with version 8.4, the maximum number of protected VMs a single appliance can handle has been increased from 200 to 300.

That means that, using the solution at its maximum of 10 appliances per vCenter Server, you can reach a total of 3,000 protected VMs.


As stated in KB2102463, to protect more than 500 VMs you need to adjust the /opt/vmware/hms/conf/hms-configuration.xml file and set the parameter as below:
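As a rough sketch of the workflow (the exact parameter name and value are the ones documented in KB2102463, which I won't guess here; the hms service name is my assumption for the vR appliance, so double-check the KB for the exact restart procedure):

```shell
# On the vSphere Replication appliance, edit the HMS configuration
# and set the parameter documented in KB2102463 to the desired value
vi /opt/vmware/hms/conf/hms-configuration.xml

# Restart the HMS service so the new limit takes effect
# (service name assumed; follow the KB's restart instructions)
systemctl restart hms
```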




There are also a few requirements for an environment protecting 3,000 VMs, like isolating the replication traffic; check KB2107869 for a comprehensive list.


It's worth mentioning some other enhancements since my last post about vSphere Replication:


- 5-minute RPO is now supported on VMFS, NFS, and vVols datastores, along with vSAN; check KB2102453 for the supported versions;


- Minimize security risks by enabling network encryption: You can enable encryption of replication data transfer in VMware vSphere Replication;


- Seamless disk resizing. You can increase the size of the virtual disks of virtual machines that are configured for replication, without interrupting ongoing replication;


- Reprotect optimization when vSphere Replication is used with Site Recovery Manager. Checksums are no longer used for a reprotect run soon after a planned migration. Instead, changes are tracked at the site of the recovered VM and only those changes are replicated when reprotect runs, greatly speeding up the reprotect process.


Good replication !!! 

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and making them successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions. Reach me at @dumeirell
