Tuesday, June 29, 2021

Updating Tanzu Kubernetes Clusters

  vSphere with Tanzu delivers true Kubernetes self-service to developers, allowing them to create upstream Kubernetes clusters as declarative objects, the same way they are used to creating Deployments, Pods, and so on.

 

As important as cluster creation is the ability to keep clusters up to date with the latest releases, in order to benefit from new features and bug fixes.

Well, VMware made it very simple, as I showed in my post about updating the Supervisor Cluster; now it's time to show how to update your upstream K8s clusters.

 

Since this post is about updating a Tanzu Kubernetes Cluster (TKC), I won't cover its creation, but here's the documentation on how to do it.

 

Before starting, I checked my nodes: they are Ready and running version 1.18.

 

Logged in to the Supervisor Cluster, you can get more details about your TKC:

-       kubectl get tkc

 

You can see details like the cluster name, the number of master and worker nodes, the current version, and the versions available to update to.

 

To check what versions of Kubernetes are available on your platform, run:

-       kubectl get tanzukubernetesreleases

 

Take note of which version you should update to based on your current version.
Don't try to skip versions!!!
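As a quick sanity check before editing anything, a minimal sketch like the one below can confirm the jump is a single minor version (the version strings are examples only; substitute the values reported by your own cluster and release list):

```shell
# Minimal sketch: confirm the target release is exactly one minor version
# ahead of the current one. Version strings below are examples only.
current="v1.18.15+vmware.1-tkg.1"
target="v1.19.7+vmware.1-tkg.1"

# The minor version is the second dot-separated field (v1.<minor>.<patch>...).
cur_minor=$(echo "$current" | cut -d. -f2)
tgt_minor=$(echo "$target" | cut -d. -f2)

if [ $((tgt_minor - cur_minor)) -eq 1 ]; then
  echo "OK: single minor-version jump"
else
  echo "Refusing: update one minor version at a time" >&2
fi
```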

 

To update your TKC, use the kubectl edit command:

     -       kubectl edit tanzukubernetescluster/"TKC_NAME"

 

Look for the spec.distribution section and change the fullVersion and version fields to the new supported Kubernetes version you are updating to.
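For illustration, the relevant part of the spec looks roughly like this (the version strings below are placeholders; use a release exactly as reported by kubectl get tanzukubernetesreleases):

```yaml
# Illustrative TanzuKubernetesCluster fragment -- version strings are examples.
spec:
  distribution:
    fullVersion: v1.19.7+vmware.1-tkg.1.abc1234   # full release string from 'kubectl get tanzukubernetesreleases'
    version: v1.19.7                              # must not conflict with fullVersion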

 

Immediately after you save and close the spec, your cluster will start updating itself.

 

The first thing that happens is the creation of a new master node VM in vCenter.

 


Once the VM is created, the new master will join the K8s cluster and scheduling will be disabled on the old master;

 

notice the new master already runs the specified Kubernetes version.

 

As soon as the new master takes control of the cluster, the old one is removed from the K8s cluster and the respective VM is deleted from vCenter.

 

 

The update process will now repeat for the worker nodes, starting with the creation of a new worker node VM in vCenter.

 



Once the new worker node is ready, it joins the cluster; an old worker node then has scheduling disabled, is removed from the K8s cluster, and its VM is deleted from vCenter.

 

 

The process repeats for every worker node until all of them have been replaced by new ones, finishing the update.

 

 

As you can see, cluster lifecycle management could not be easier.

 

You can use the same method of editing your cluster's spec to scale out the cluster: simply set the desired number of worker nodes in the count field, and the platform will take care of creating the new worker VMs and adding them to the K8s cluster on your behalf.
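As a sketch, the scale-out edit touches the topology section of the spec (the counts below are examples):

```yaml
# Illustrative TanzuKubernetesCluster fragment -- counts are examples.
spec:
  topology:
    controlPlane:
      count: 3        # master nodes
    workers:
      count: 5        # raise this number to scale out; the platform creates the extra VMs
```

If you prefer not to open an editor, kubectl patch with a merge patch against the same field works just as well.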



Life is good !!!

Who am I

I'm an IT specialist with over 15 years of experience, spanning IT infrastructure, management products, troubleshooting, and project management in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the cloud era and make their journey successful. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions. Reach me at @dumeirell
