Wednesday, February 16, 2022

Tanzu Kubernetes Cluster creation gets stuck

 

I've been playing with Tanzu Kubernetes Clusters (TKC) on vSphere with Tanzu since vSphere 7.0 GA, but recently, to be honest, for a few months now, I have not been able to create any Guest Clusters anymore. It does not matter if I use the v1alpha1 or the new v1alpha2 API, and it does not matter if my environment is based on NSX or vDS.

When I try to create my Guest Cluster, the control plane gets provisioned and customized successfully, but nothing else happens: my worker nodes are never provisioned and the cluster status remains in the creating phase.
 


The only message I see is in vCenter: error creating client and cache for remote cluster. Error creating dynamic rest mapper for remote cluster. Get "https://10.40.14.67:6443/api?timeout=10s": dial tcp 10.40.14.67:6443: connect: connection refused.
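In case it helps anyone hitting the same wall, the stuck state is easy to see from the Supervisor cluster context with checks like the ones below; the cluster name and namespace are just placeholders from my lab.

# Switch to the vSphere Namespace where the Guest Cluster lives
kubectl config use-context my-namespace

# The cluster stays in the creating phase
kubectl get tanzukubernetescluster my-tkc -n my-namespace

# Only the control plane machine/VM shows up, the workers never do
kubectl get machines -n my-namespace
kubectl get virtualmachines -n my-namespace

# Events and conditions show where provisioning stalled
kubectl describe tanzukubernetescluster my-tkc -n my-namespace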
 


I did countless tests until I finally found the issue.
In my descriptor file, I was using a custom VM Class; you might remember, I wrote about it too.
It turns out there's a bug when using custom VM Classes within Guest Clusters; when I went back to using the built-in ones, my cluster was created successfully.
 

 
Until this bug is fixed, make sure you are using the built-in VM Classes instead of custom ones.
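For reference, a minimal v1alpha2 descriptor using a built-in VM Class looks something like the sketch below; the cluster name, namespace, Tanzu Kubernetes release, and storage class are just examples from my lab, so adjust them for your environment.

kubectl apply -f - <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: my-tkc
  namespace: my-namespace
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small          # built-in VM Class, not a custom one
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1  # check 'kubectl get tkr' for the releases in your environment
    nodePools:
    - name: workers
      replicas: 2
      vmClass: best-effort-small          # built-in VM Class here as well
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1
EOF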
I hope this post helps someone; it took me literally months to figure this out.

See you next time


Friday, February 4, 2022

VMware Identity Manager and Delegate IP

While working with one of my customers to deploy a new automation platform (vRealize Automation), which will provision and manage multi-cloud resources on AWS, Google Cloud, and vSphere for hundreds of end users through a real self-service portal that gives them freedom and agility, we decided it was a good idea to make this solution highly available.

You might recall when I talked about scaling out VMware Identity Manager (vIDM) to provide high availability. At that time I covered mostly the load balancer health checks for the services, but there's an extra requirement: the delegate IP.

First things first, what is the delegate IP?

When you run vIDM in cluster mode, it also clusters its internal Postgres database; the delegate IP is the active IP receiving the requests, and it will float between the nodes when needed.

So far so good, but what's the problem?

What was not clear is whether this delegate IP needs an external load balancer or not. In fact, the documentation points to the Identity Manager load balancing documentation... and, to your surprise, there's no mention of any requirements to set up this service.

Even the more detailed documentation about vIDM load balancing requirements shows no evidence that one is needed.

So, to clear up any doubt:

There's NO need for an external load balancer for the delegate IP; the nodes themselves will manage it.

You still need an extra free IP on the same segment where your vIDM nodes are provisioned.
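If you want to double-check which node is currently holding the delegate IP, you can simply look at the network interfaces of each vIDM appliance; the address below is just an example, and you need SSH access to the nodes.

# Run on each vIDM node; the active one will list the delegate IP
# as an additional address on its interface (192.168.10.50 is an example)
ip addr show | grep 192.168.10.50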

be safe people !!!


Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and be successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions. Reach me at @dumeirell
