I've been playing with Tanzu Kubernetes Clusters (TKC) on vSphere with Tanzu since vSphere 7.0 GA, but recently, for a few months to be honest, I could not create any Guest Clusters anymore. It did not matter whether I used the v1alpha1 or the new v1alpha2 API, and it did not matter whether my environment was based on NSX or vDS.
Whenever I tried to create a Guest Cluster, the control plane got provisioned and customized successfully, but nothing else happened: the worker nodes were never provisioned and the cluster status remained stuck in the creating phase.
The only message I could see was in vCenter: error creating client and cache for remote cluster: Error creating dynamic rest mapper for remote cluster: Get "https://10.40.14.67:6443/api?timeout=10s": dial tcp 10.40.14.67:6443: connect: connection refused.
I did countless tests until I finally found the issue.
In my descriptor file, I was using a custom VM Class; you might remember, I wrote about it too.
It turns out there's a bug when using a custom VM Class with Guest Clusters: when I went back to the built-in ones, my cluster got created successfully.
Until this bug is fixed, make sure you are using the built-in VM Classes instead of custom ones.
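To make the workaround concrete, here is a minimal sketch of a v1alpha2 descriptor using a built-in VM Class. The cluster name, namespace, storage class, and TKR name are just placeholders from my lab, and `best-effort-small` is one of the default classes that ships with vSphere with Tanzu.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01                  # placeholder cluster name
  namespace: demo-ns            # placeholder Supervisor namespace
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small                 # built-in class instead of my custom one
      storageClass: vsan-default-storage-policy  # placeholder storage class
      tkr:
        reference:
          name: v1.21.2---vmware.1-tkg.1.ee25d55 # placeholder TKR, use one from your content library
    nodePools:
    - name: workers
      replicas: 3
      vmClass: best-effort-small                 # built-in class for the workers as well
      storageClass: vsan-default-storage-policy
```

If you want to double-check which classes are bound to your namespace before applying it, listing the VirtualMachineClassBinding objects on the Supervisor cluster (`kubectl get virtualmachineclassbindings -n demo-ns`) should show both the built-in and the custom ones.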
I hope this post helps someone; it took me literally months to figure this out.
See you next time!