Wednesday, March 31, 2021

Demystifying vSphere Replication 8.4


One of this blog's traditions is demystifying vSphere Replication's operational limits;

I started it in 2013 with vSphere Replication (vR) 5.0 and have kept updating it every time a major enhancement was made, as with vR 5.5 and vR 6.0.


If you are new here, vSphere Replication is a replication engine provided by VMware that replicates data from one storage to another. Since it does not depend on array-based replication technology, you can use it to replicate data between storage arrays from different vendors. It's also the main component behind VMware Site Recovery, where customers protect their on-prem workloads to the VMware Cloud on AWS solution.




Now back to the operational limits:


Starting with version 8.4, the maximum number of protected VMs a single appliance can handle has been increased from 200 to 300.

That means, using the solution at its maximum of 10 vSphere Replication appliances per vCenter Server, you can reach a total of 3,000 protected VMs (10 x 300).


As stated in KB2102463, to protect more than 500 VMs you need to adjust the /opt/vmware/hms/conf/hms-configuration.xml file and set the parameter as below:
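If you want to script that change rather than edit the file by hand, a minimal sketch with Python's standard XML library looks like this. Note that `hms-example-parameter` is a placeholder I made up for illustration; use the exact parameter name and value given in KB2102463 for your vR version, and restart the HMS service afterwards.

```python
import xml.etree.ElementTree as ET

# Sample stand-in for hms-configuration.xml content.
# "hms-example-parameter" is a HYPOTHETICAL name -- substitute the
# actual parameter documented in KB2102463.
SAMPLE = """<config>
  <hms-example-parameter>500</hms-example-parameter>
</config>"""

def set_parameter(xml_text: str, name: str, value: str) -> str:
    """Return the XML with the named element's value replaced."""
    root = ET.fromstring(xml_text)
    node = root.find(name)
    if node is None:
        raise KeyError(f"{name} not found in hms-configuration.xml")
    node.text = value
    return ET.tostring(root, encoding="unicode")

updated = set_parameter(SAMPLE, "hms-example-parameter", "3000")
print(updated)
```

On the real appliance you would parse `/opt/vmware/hms/conf/hms-configuration.xml` in place instead of a sample string.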




There are also a few requirements for an environment protecting 3,000 VMs, like isolating the replication traffic; check KB2107869 for a comprehensive list.


It's worth mentioning some other enhancements since my last post about vSphere Replication:


- 5-minute RPO is now supported on VMFS, NFS, and vVols datastores, along with vSAN; check KB2102453 for the supported versions;


- Minimize security risks by enabling network encryption: you can enable encryption of replication data transfer in VMware vSphere Replication;


- Seamless disk resizing: you can increase the virtual disks of virtual machines that are configured for replication without interrupting ongoing replication;


- Reprotect optimization when vSphere Replication is used with Site Recovery Manager: a full checksum comparison is no longer used for a reprotect run soon after a planned migration. Instead, changes are tracked at the site of the recovered VM and only those changes are replicated when reprotect runs, speeding up the reprotect process considerably.
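To see why that last one matters, here is a toy sketch of the two approaches; this is purely conceptual, not vSphere Replication's actual implementation. With checksumming, every block pair must be compared; with change tracking, the recovered site already knows which blocks are dirty and ships only those.

```python
# Conceptual sketch only -- not vSphere Replication code.

def blocks_to_send_checksum(source: list, target: list) -> list:
    # Old behavior: compare every block on both sides to find differences.
    return [i for i, (s, t) in enumerate(zip(source, target)) if s != t]

def blocks_to_send_tracked(dirty: set) -> list:
    # New behavior: the recovered site tracked which blocks changed,
    # so no full comparison pass is needed.
    return sorted(dirty)

source = ["a", "b", "c", "d"]
target = ["a", "x", "c", "y"]   # blocks 1 and 3 changed after recovery
print(blocks_to_send_checksum(source, target))  # [1, 3]
print(blocks_to_send_tracked({1, 3}))           # [1, 3]
```

Both return the same block list, but the tracked version skips reading and hashing every block on both sites, which is where the reprotect speedup comes from.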


Good replication !!! 

Friday, March 26, 2021

Scale-out VMware Identity Manager


Recently I worked with one of my customers to scale out their vRealize Automation (vRA) environment, enabling high availability for their VMware Identity Manager (vIDM) appliances.

Initially they deployed the environment with a single node; as the solution became successful and the cornerstone of their automation strategy, increasing its availability looked like a good idea.

It's a smart customer: they deployed the solution through vRealize Suite Lifecycle Manager (vRSLCM), which comes at no extra cost for all vRealize Suite users.

Besides deploying the solutions, it also takes care of all day 2 activities, like patching, upgrading, and scaling out as well.


Although the platform automatically takes care of the scale-out activity (provisioning new nodes, configuring them as a cluster, replacing certificates for VIPs, etc.), there are still a few tasks you need to take care of first:

- Replacing the certificate to include SANs for the VIP and the extra vIDM nodes;

- Registering the vIDM VIP FQDN in the DNS system;

- Configuring the external load balancer to handle the requests.
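For the load balancer health monitor, the Workspace ONE documentation describes a vIDM health endpoint (`/SAAS/API/1.0/REST/system/health`) that returns a JSON body whose `AllOk` field is `"true"` when the node is fully healthy; verify the exact path and fields for your version before relying on it. A minimal sketch of the decision logic a monitor would apply to that response body, assuming that shape:

```python
import json

def node_is_healthy(body: str) -> bool:
    """Decide LB pool membership from the vIDM health endpoint's JSON body.

    Assumes the response carries an "AllOk" field set to "true" when the
    node is healthy, per the Workspace ONE documentation.
    """
    try:
        health = json.loads(body)
    except ValueError:
        return False  # not JSON at all -> treat node as down
    return health.get("AllOk") == "true"

print(node_is_healthy('{"AllOk": "true"}'))
print(node_is_healthy('{"AllOk": "false"}'))
```

In practice your load balancer does the HTTP GET itself; the point here is that it should match on the response body, not just on an HTTP 200.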

That's when we realized vRSLCM's documentation does not include much information about this, like health checks, ports, HTTP methods, etc.

So I had to dig this information out of several other documents, and it's here for easier consumption.

Be aware of an issue when scaling vIDM 3.3.1 with vRSLCM 8.x:
if your environment matches this specific combination, check KB79040 for the fix.


I'm adding the source information here if you need to check for yourselves ; )

- Configure Load Balancer on vRealize Automation's documentation


- Create Service Monitors for the Cross-Region on VMware Validated Design's documentation

- Using a Load Balancer or Reverse Proxy to Enable External Access to VMware Identity Manager on Workspace One’s documentation


- VMware Identity Manager URL Endpoints for Monitoring on Workspace One's documentation

Tuesday, September 8, 2020

Workload Management Cluster incompatible

The most wanted vSphere 7 feature is undoubtedly the Workload Management Cluster, integrating Kubernetes clusters into vSphere to allow true visibility of pods, containers, and VMs, side by side.

You all know that; what I really want to cover now is a very simple tip, but it took me some time to find the right answer, so here it is.

When you try to enable Workload Management for a cluster you might end up getting some incompatibility errors:

Sometimes the error is clear, sometimes not so much...

In my case I was sure all the requirements had been taken care of properly; I double-checked just in case.

You might get lucky and get more information through the use of DCLI in vCenter.

Run: dcli com vmware vcenter cluster list

You can also get some extra details:

Run: dcli com vmware vcenter namespacemanagement distributedswitchcompatibility list --cluster

Or, as always, the truth is in the logs:

Run: tail -f /var/log/vmware/wcp/wcpsvc.log

All of this process is documented here.

Back to my case: it seemed there were some communication issues between NSX and vCenter... that's when I remembered that when registering vCenter as a compute manager in NSX there's a trust option, which I had missed.

Seems not all the requirements were ready after all !!!

Friday, February 7, 2020

Enterprise PKS Management Console installed, what now?

I've been working on a VMware Enterprise PKS proof of concept for a customer, and instead of installing all the components (Ops Manager, Ops Director, tiles) individually, I decided to use the Enterprise PKS Management Console.
If you have not heard about it yet, it's a single OVA that provides a unified installation in an automated way, which greatly simplifies and expedites the process of making PKS available.

But this post is more related to day 2: once everything is installed, what now?!?

In the past I wrote some posts about how to manage the solution and the need to install some tools like the BOSH CLI, UAAC CLI, and PKS CLI.

While those tools still exist and are needed, I found the use of the Enterprise PKS Management Console a lot simpler: the BOSH CLI and PKS CLI are already installed on the appliance, so just SSH into it to create and manage the clusters immediately.

If you remember, in order to create a cluster you need a user with that permission, and the UAAC client is not installed on the appliance; that's because identity management has been integrated into the PKS Management Console.

To create and manage users, just select Identity Manager on the left pane.

Not only local users are allowed, but also users based on AD/LDAP and SAML providers; it will depend on the option you selected during setup.

It's always good to remember the roles and scopes available within the solution:

- pks.clusters.admin: allows the user to create and manage all clusters within the system;
- a read-only role for all clusters created in the solution;
- pks.clusters.manage: allows the user to create and manage only the clusters they own.

How about you, are you already using the PKS Management Console? Let me know what you think about it.

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and making them successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions. Reach me at @dumeirell
