Friday, April 16, 2021

Enabling Workload Management Cluster


Undoubtedly, vSphere with Tanzu is the greatest innovation brought to vSphere in years. It deeply integrates Kubernetes within the platform, letting you consume Kubernetes, pods and VMs side by side in an agile and integrated way without compromising governance.


I’ve been demoing the Workload Management Cluster (WMC) feature and all its beauty to my customers, but one thing was always missing, mostly because of time constraints: showing how to enable WMC on the cluster.


So I recorded some videos for future reference.


If you remember, when vSphere 7 was first released, NSX-T was a requirement to enable WMC, because it is the technology providing pod-to-pod communication and services, like Load Balancer, to the cluster.


So the first video shows how to enable WMC with NSX-T.

Be mindful that, beforehand, I had to take care of the NSX-T implementation and its requirements, as listed here.



Starting with vSphere 7 Update 1, you can connect your cluster directly to your vSphere Distributed Switch (vDS) and use an independent load balancer, allowing a broader reach of vSphere users without the need for NSX-T.

This way, pod-to-pod communication is handled by the Antrea CNI and services flow through HAProxy, which is the first independent load balancer supported.


This model is a good fit for entry-level users, PoCs and labs, mostly because HAProxy lacks some enterprise features for a production implementation; here's a basic comparison.


Here’s the video enabling WMC with HAProxy.

As you might imagine, I had to take care of the HAProxy implementation and its requirements as well.


More recently, VMware released vSphere 7 Update 2, with an alternative option to HAProxy.

NSX Advanced Load Balancer became the second independent load balancer supported with vSphere with Tanzu, a true enterprise-grade solution.


Unfortunately I don’t have that video yet, so keep watching for updates over here.


Wednesday, March 31, 2021

Demystifying vSphere Replication 8.4


One of this blog's traditions is demystifying vSphere Replication's operational limits.

I started it in 2013 with vSphere Replication (vR) 5.0 and have kept updating it every time a major enhancement was made, as with vR 5.5 and vR 6.0.


If you are new over here, vSphere Replication is a replication engine provided by VMware that replicates data from one storage to another. Since it does not depend on array-based replication technology, you can use it to replicate data between storage arrays from different vendors. It's also the main component behind VMware Site Recovery, where customers protect their on-prem workloads to the VMware Cloud on AWS solution.




Now back to the operational limits;


Starting with version 8.4, the maximum number of protected VMs a single appliance can handle has been increased from 200 to 300.

That means, using the solution at its maximum, you can reach a total of 3,000 protected VMs.
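The arithmetic behind that total is simple, assuming the documented maximum of 10 vSphere Replication appliances per vCenter Server; here's a quick sanity check:

```python
# Quick sanity check on the vR 8.4 limits quoted above.
per_appliance_max = 300       # protected VMs per appliance, up from 200
appliances_per_vcenter = 10   # maximum vR appliances per vCenter Server

total = per_appliance_max * appliances_per_vcenter
print(total)  # 3000 protected VMs at maximum scale
```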


As stated in KB2102463, to protect more than 500 VMs you need to edit the /opt/vmware/hms/conf/hms-configuration.xml file and set the parameter as below:




There are also a few requirements for an environment protecting 3,000 VMs, like isolating replication traffic; check KB2107869 for a comprehensive list.


It's worth mentioning some other enhancements since my last post about vSphere Replication:


- 5-minute RPO is now supported on VMFS, NFS and vVols datastores, along with vSAN; check KB2102453 for the supported versions;


- Minimize security risks by enabling network encryption: you can enable encryption of replication data transfer in VMware vSphere Replication;


- Seamless disk resizing: you can increase the virtual disks of virtual machines that are configured for replication without interrupting ongoing replication;


- Reprotect optimization when vSphere Replication is used with Site Recovery Manager: a checksum is no longer used for a reprotect run soon after a planned migration. Instead, changes are tracked at the site of the recovered VM and only those changes are replicated when reprotect runs, significantly speeding up the reprotect process.


Good replication !!! 

Friday, March 26, 2021

Scale-out VMware Identity Manager


Recently I worked with one of my customers to scale out their vRealize Automation (vRA) environment, enabling high availability for their VMware Identity Manager (vIDM) appliances.

Initially they deployed the environment with a single node, and as the solution became successful and a key piece of their automation strategy, increasing its availability looked like a good idea.

They are a smart customer: they deployed the solution through vRealize Suite Lifecycle Manager (vRSLCM), which comes at no extra cost for all vRealize Suite users.

Besides deploying the solutions, it also takes care of all day-2 activities, like patching, upgrading and scaling out as well.


Although the platform automatically takes care of the scale-out activity, provisioning new nodes, configuring them as a cluster, replacing certificates for VIPs, etc., there are still a few tasks you need to take care of first:

- Replacing the certificate to include SANs for the VIP and the extra vIDM nodes;

- Registering the vIDM VIP FQDN on the DNS system;

- Configuring the external load balancer to handle the requests.

That’s when we realized vRSLCM’s documentation does not include much information about it, like health checks, ports, HTTP methods, etc.

So I had to dig this information out of several other documents, and it's here for easier consumption.
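To give you an idea of what the external load balancer needs, here's a minimal sketch of the kind of health probe it would run against each vIDM node. The heartbeat path below is an assumption based on the monitoring endpoints VMware publishes; verify the exact URL and expected response for your vIDM version.

```python
# Hypothetical health probe mirroring what an external load balancer
# would run against each vIDM node. The heartbeat URL is an assumption
# taken from VMware's published monitoring endpoints; confirm it for
# your version before wiring it into a real monitor.
import urllib.request

HEARTBEAT_PATH = "/SAAS/API/1.0/REST/system/health/heartbeat"

def vidm_node_healthy(base_url: str, timeout: float = 5.0) -> bool:
    """Return True when the node answers HTTP 200 with body 'ok'."""
    try:
        with urllib.request.urlopen(base_url + HEARTBEAT_PATH,
                                    timeout=timeout) as resp:
            return resp.status == 200 and resp.read().strip() == b"ok"
    except OSError:
        return False
```

On the load balancer itself this translates to an HTTPS monitor doing a GET on that path, expecting a 200 response.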

Be aware of an issue when scaling vIDM 3.3.1 with vRSLCM 8.x:
if your environment matches this specific matrix, check KB79040 for the fix.


I'm adding the source information here in case you want to check for yourselves ; )

- Configure Load Balancer on vRealize Automation's documentation


- Create Service Monitors for the Cross-Region on VMware Validated Design's documentation

- Using a Load Balancer or Reverse Proxy to Enable External Access to VMware Identity Manager on Workspace One’s documentation


- VMware Identity Manager URL Endpoints for Monitoring on Workspace ONE's documentation

Tuesday, September 8, 2020

Workload Management Cluster incompatible

The most wanted vSphere 7 feature is undoubtedly the Workload Management Cluster, integrating Kubernetes clusters into vSphere to allow true visibility of pods, containers and VMs, side by side.

You all know that; what I really want to cover now is a very simple tip that took me some time to find the right answer for, so here it is.

When you try to enable Workload Management for a cluster, you might end up getting some incompatibility errors:

Sometimes the error is clear, sometimes not so much....

In my case I was sure all the requirements had been taken care of properly; I double-checked just in case.

You might get lucky and find more information through the use of DCLI in vCenter.

Run: dcli com vmware vcenter cluster list

You can also get some extra details:

Run: dcli com vmware vcenter namespacemanagement distributedswitchcompatibility list --cluster

Or, as always, the truth is in the logs.

Run: tail -f /var/log/vmware/wcp/wcpsvc.log 

All of this process is documented here.
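When tailing wcpsvc.log gets noisy, a throwaway filter helps surface only the error lines. This is just a convenience sketch of my own, not a VMware tool; it assumes the severity appears as a plain-text "error" token in the line, which you should confirm on your build.

```python
# Hypothetical helper: surface only error-level lines from a WCP log dump.
# Assumes the severity appears as the word "error" somewhere in the line.
import re

ERROR_RE = re.compile(r"\berror\b", re.IGNORECASE)

def error_lines(log_text: str) -> list:
    """Return the lines of a log dump that look like errors."""
    return [line for line in log_text.splitlines() if ERROR_RE.search(line)]
```

For example, error_lines(open("/var/log/vmware/wcp/wcpsvc.log").read()) gives you just the suspicious lines to read through.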

Back to my case: it seemed there were some communication issues between NSX and vCenter... that's when I remembered that, when registering vCenter as a compute manager in NSX, there's a trust option I had missed.

Seems not all the requirements were ready after all!!!

Who am I

I’m an IT specialist with over 15 years of experience, ranging from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and making them successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions. Reach me at @dumeirell
