vSphere with Tanzu delivers real Kubernetes self-service to developers, allowing them to create upstream Kubernetes clusters as declarative objects, the same way they are used to creating deployments, pods, and so on.
As important as cluster creation is the ability to keep those clusters up to date with the latest releases in order to benefit from the latest features and bug fixes.
Well, VMware made it very simple, as I showed in my post about updating the Supervisor Cluster, and now it's time to show how to update your upstream K8s cluster.
Since this post is about updating a Tanzu Kubernetes Cluster (TKC), I'll not cover its creation, but here's the documentation on how to do it.
Before starting, I just checked my nodes; they are Ready and running version 1.18.
- kubectl get tkc
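In my lab the output looks roughly like this (the cluster name and versions are just examples from my environment, and the exact columns can vary between vSphere releases):

NAME     CONTROL PLANE   WORKER   DISTRIBUTION                      AGE   PHASE     UPDATES AVAILABLE
tkc-01   1               3        v1.18.15+vmware.1-tkg.1.600e412   30d   running   [1.19.7+vmware.1-tkg.1.fbb49db]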
You can see some details like the cluster name, the number of master and worker nodes, the current version, and the version available to update to.
To check what versions of Kubernetes are available on your platform, run:
- kubectl get tanzukubernetesreleases
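Again, roughly what you should expect to see; the releases listed will depend on what your environment's content library is subscribed to:

NAME                                VERSION                          COMPATIBLE
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412   True
v1.19.7---vmware.1-tkg.1.fbb49db    1.19.7+vmware.1-tkg.1.fbb49db    True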
Take note of which version you are supposed to update to based on your current version.
Don’t try to skip versions !!!
To update your TKC, use the kubectl edit command:
- kubectl edit tanzukubernetescluster/TKC_NAME
Look for the spec.distribution section and change the fullVersion and version fields to the new supported Kubernetes version you are updating to.
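As an example, here's roughly what that part of the spec would look like when moving my cluster from 1.18 to 1.19; the version strings below are the ones from my environment, so use whatever kubectl get tanzukubernetesreleases showed you:

spec:
  distribution:
    fullVersion: v1.19.7+vmware.1-tkg.1.fbb49db
    version: v1.19.7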
Immediately after you save and close the spec, your cluster will start to update itself.
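If you want to follow along while it happens, you can also watch the nodes roll over from kubectl:

- kubectl get nodes --watch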
The first thing that happens is the creation of a new master node VM in vCenter.
Once the VM is created, the new Master will join the K8s cluster and scheduling will be disabled on the old Master; notice the new Master already has the specified Kubernetes version.
As soon as the new Master takes control of the cluster, the old one is removed from the K8s cluster and the respective VM is deleted from vCenter.
The update process will now repeat for the worker nodes, starting with the creation of a new worker node VM in vCenter.
Once the new Worker node is ready, it will join the cluster; an old Worker node will then have scheduling disabled, be removed from the K8s cluster, and have its VM deleted from vCenter.
The process will repeat for every worker node until all of them have been replaced by new ones, finishing the update.
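A quick look at the nodes confirms everything is Ready and running the new version:

- kubectl get nodes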
As you can see, the lifecycle of clusters could not be easier to manage.
You can use the same method of editing your cluster's spec to scale out your cluster: just set the desired number of worker nodes in the count field, and the platform will take care of creating the new worker VMs and adding them to the K8s cluster on your behalf.
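As a sketch, assuming the same TKC API shown above, growing a cluster from 3 to 5 workers is just a matter of changing this part of the spec (the counts here are, of course, examples):

spec:
  topology:
    controlPlane:
      count: 1
    workers:
      count: 5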
Life is good !!!