Friday, July 16, 2021

Creating Virtual Machines with Tanzu 1/2

We have seen container adoption increase at companies of all sizes, driving innovation and conquering new markets by releasing new apps and features faster and faster. That would not be possible without modern applications, mostly running on top of Kubernetes. But it’s unlikely that those applications will be 100% based on microservices; in fact, these new apps are hybrid: part microservices, part running on virtual machines (like databases or applications that demand a more traditional runtime), and even functions. So what’s better than having a single platform that can run them all, integrated, self-service, and transparent to the developer?

 

That’s what VM Service is all about: allowing developers to create VMs using K8s manifests on top of vSphere with Tanzu, the same way they are used to deploying all other K8s constructs, eliminating manual or ticketing requests, improving their autonomy, and delivering value to the business faster.
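To give a taste of what that looks like, here is a minimal VirtualMachine manifest sketch. The names used here (namespace, class, image, storage class) are placeholders, not values from any real environment; adjust them to whatever your operator has exposed:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: my-first-vm            # hypothetical VM name
  namespace: dev-namespace     # a Supervisor Namespace the developer has access to
spec:
  className: best-effort-small     # a VM Class exposed to the Namespace
  imageName: ubuntu-20.04-server   # an image from the assigned Content Library
  powerState: poweredOn
  storageClass: tanzu-storage-policy  # a storage policy assigned to the Namespace
```

Applied with kubectl like any other Kubernetes object, this is what makes the experience transparent to the developer.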

 

I’ll cover this subject from two different angles:

- The Operator, who is responsible for the infrastructure, concerned with its availability, security and compliance.

- The Developer, who is concerned with delivering value through the deployment of applications and features as fast as possible, without needing to worry so much about the infrastructure.

 

Let’s start with the Operator.

First of all, VM Service was released with vSphere 7 Update 2, so make sure you update your vCenter and Supervisor Cluster to at least this version.

 

Once available you will notice a new tab on Workload Management called Services.

 


VM Service has two main components: VM Class and Content Library.

 


You can think of a VM Class as a profile for VMs, like T-shirt sizes on public clouds, where you define the VM resources in terms of the amount of CPU and memory that will be allocated; you can also specify how much of those resources is guaranteed (reservation).
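Under the hood, each class is a VirtualMachineClass object. As a sketch, a custom class might look like this as a manifest (the name and sizes are illustrative, not from any real environment):

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: custom-medium        # hypothetical class name
spec:
  hardware:
    cpus: 4                  # vCPUs allocated to VMs of this class
    memory: 8Gi              # memory allocated to VMs of this class
  policies:
    resources:
      requests:              # the guaranteed (reserved) portion
        cpu: 2000m
        memory: 4Gi
```

The requests section is what maps to a vSphere reservation; leaving it out gives you a best-effort class.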

 

 By default vSphere with Tanzu offers a few classes, but you can also create your own; it’s very intuitive, just give it a name and set the values you desire. Please avoid changing the default ones; if you need different parameters, create your own instead.

 

 

Content Library is where the VM images or templates are stored, so developers can pick one desired OS flavor during provisioning.

 


 The creation of a Content Library is straightforward, and you have probably been doing this for years, so I won’t bother you with the steps here.

 

Once the library is created you just need to add the images you want.

VMware is gradually releasing supported and curated images on the Marketplace; just search for VM Service, download the template, and add it to the Library.

 

 

I created a Library called Tanzu-VMs and added two templates, CentOS and Ubuntu. I used a prefix to make them easier to find and to distinguish them from the images for Tanzu Kubernetes Clusters.


 

Now that the requirements are ready, it’s time to allow developers to consume those resources.

 

That’s where governance comes into play, allowing the operator to adjust the guardrails on a per-Namespace basis, for example assigning specific VM Classes to each Namespace to avoid the creation of VMs too big for the environment, or allowing only approved OS images.

 

Select the desired Namespace and you will see a new widget, VM Service;

 

Click on Manage VM Classes to select what classes you want developers to have access to.

 

Now click on Add Content Library, and select the Library with the Tanzu images you want developers to have access to.

 

 

At this point developers are ready to create Virtual Machines as part of their deployments. Stay tuned; in the next post I’ll show you how developers can consume this new service.

 

See you soon.

Tuesday, June 29, 2021

Updating Tanzu Kubernetes Clusters

  vSphere with Tanzu delivers true Kubernetes self-service to developers, allowing them to create upstream Kubernetes clusters as declarative objects, the same way they are used to creating deployments, pods, and so on.

 

As important as cluster creation is the ability to keep clusters up to date with the latest releases, in order to benefit from the latest features and bug fixes.

Well, VMware made it very simple, as I showed in my post about updating the Supervisor Cluster, and now it's time to show how to update your upstream K8s cluster.

 

Since this post is about updating a Tanzu Kubernetes Cluster (TKC), I’ll not cover its creation, but here’s the documentation on how to do it.

 

Before starting, I checked my nodes; they are Ready and running version 1.18.

 

Logged in to the Supervisor Cluster, you can get more details about your TKC:

-       kubectl get tkc

 

You can see details like the cluster name, the number of master and worker nodes, the current version, and the version available to update to.

 

To check what versions of Kubernetes are available on your platform, run:

-       kubectl get tanzukubernetesreleases

 

 Take note of which version you are supposed to update to, based on your current version.
Don’t try to skip versions !!!

 

To update your TKC, use the kubectl edit command:

     -       kubectl edit tanzukubernetescluster/<TKC_NAME>

 

Look for the spec.distribution section and change the fullVersion and version fields to the new supported Kubernetes version you are updating to.
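As a sketch, the edited section might look like the snippet below. The version strings here are illustrative; use a version actually listed by kubectl get tanzukubernetesreleases in your environment, and the full release string that goes with it:

```yaml
spec:
  distribution:
    # full release string, as reported by tanzukubernetesreleases
    fullVersion: v1.19.7+vmware.1-tkg.1.xxxxxxxx   # placeholder suffix
    version: v1.19.7
```

Both fields must agree with each other; editing only one of them will be rejected by the platform's validation.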

 

Immediately after you save and close the spec, your cluster will start to update itself.

 

The first thing that happens is the creation of a new master node VM on vCenter.

 


Once the VM is created, the new Master will join the K8s cluster and scheduling will be disabled on the old Master;

 

Notice that the new Master already has the specified Kubernetes version.

 

As soon as the new Master takes control of the cluster, the old one is removed from the K8s cluster and the respective VM is deleted from vCenter.

 

 

The update process will now repeat for worker nodes, starting with the creation of a new worker node VM on vCenter.

 



Once the new Worker node is ready, it will join the cluster; an old Worker node will then have scheduling disabled, be removed from the K8s cluster, and have its respective VM deleted from vCenter.

 

 

The process repeats for every worker node until all of them have been replaced by new ones, finishing the update process.

 

 

As you can see, the lifecycle of clusters could not be easier.

 

You can use the same method of editing your cluster’s spec to scale out your cluster: just change the number of desired worker nodes in the count field, and the platform will take care of creating the new worker VMs and adding them to the K8s cluster on your behalf.
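As a sketch, scaling out is just bumping the worker count in the same spec (the counts here are illustrative):

```yaml
spec:
  topology:
    controlPlane:
      count: 3       # number of master nodes
    workers:
      count: 5       # increase this value to scale out worker nodes
```

Save and close the editor, and the reconciliation loop handles the rest, the same way it handled the version update.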



Life is good !!!

Wednesday, June 2, 2021

NSX Advanced Load Balancer for Tanzu step-by-step

NSX Advanced Load Balancer is indisputably the best load balancing solution for the Supervisor Cluster when enabling it on a vSphere Distributed Switch configuration, providing a lot of enterprise-grade features like high availability, auto scaling, metrics, and more. Especially as it’s included in every Tanzu edition, there’s no question about which load balancing solution to use with your Kubernetes platform.

 

Let's go through a step by step journey on how to install and configure it for vSphere with Tanzu.

Hold on to your hat; this will be a bit longer than the posts I’m used to producing.

 

My lab has the following scenario: 3 fully routable network segments with no DHCP enabled, as follows:

  • Management Network: where all my management components are placed: vCenter, NSX, ESXi, and now the AVI Controller and Service Engines;
  • Service Network: where the Kubernetes Services (Load Balancer VIPs) will be allocated;
  • Workload Network: where the Tanzu Kubernetes Cluster nodes will be placed;

 

Every environment is unique, so it's imperative you take some time to go through the topologies and requirements before standing up your own solution.

 

First things first: the NSX Advanced Load Balancer OVA deployment, also known as the AVI Controller. It's the central control plane of the solution, responsible for the creation, configuration, and management of Service Engines and the services that are created on demand by developers.

 

Deploying an OVA is a pretty straightforward operation that you have probably done a thousand times over the years, so I'll skip it.

 

Once it's done, just power it on and wait a few minutes for the startup process to finish the configuration (it might take around 10 minutes, depending on the environment).

 

Just open up a browser and hit the IP address you just specified during OVA deployment.

- Create an admin account and set the password;

 


- In the System Settings section, create a passphrase; it's used when backing up and restoring the controller. Also set up your DNS and DNS domain, and the SMTP information, which I skipped because I don’t have it in my environment.



- On Multi-Tenant section keep the default, and click SAVE;

 

Now that the system is ready to go, let’s start configuring secure access to the controller by replacing the self-signed certificates generated during deployment.

 

- On the main menu select Administration, Settings tab and then Access Settings;

Click on the Edit button on your far right;

 

- Remove all self-signed certificates by clicking the X button under SSL/TLS Certificate;

 


- Create a new one: click on the arrow and select Create Certificate;

 


I’ll create a self-signed certificate, but I could generate a CSR and import certificates as well. 

- Give it a meaningful name and fill in the fields as you would to create any certificate.

For the common name I used my controller's FQDN and its IP address as a SAN, click SAVE;

 

The new certificate should appear on the SSL/TLS Certificate field, click SAVE;

 

After changing the certificates you will need to log in again.

 

- Navigate to DNS/NTP, click on the Edit button on your far right;

 

- There you can adjust your DNS settings, but equally important is configuring your NTP settings, removing or adding entries; just click SAVE when you are done.

 

 

It's time to configure your endpoint, the resource that holds your workloads.

 

- On the main menu select Infrastructure, Clouds tab;

 

There’s a default cloud already created, but as you can see it has None as its type, indicating there’s no configuration so far; click the Edit button;

 

- Select VMware vCenter and click Next;

 

- Type the username and password for the user with the required privileges on vCenter, select Write if you wish the Controller to provision the Service Engines… and trust me, YOU DO… and lastly the vCenter name; click NEXT;

 

- Select the Data Center where the Service Engines will be provisioned

Enable Prefer Static Routes vs Directly Connected Network and click Next;

 

- For the Management Network, select the Port Group designated for management traffic, fill out the information about the subnet, and add a range of IP addresses; click SAVE;

 

When Service Engines are created on demand, those IPs will be the ones assigned to them.

 

If everything is fine, you should now have VMware vCenter as the type and a green light next to it.

 

Service Engines are grouped together, providing concise configuration and easier, faster management. The Service Engine Group governs how the Service Engines are placed, their configuration, and their quantity.

 

- On the Service Engine Group tab there’s an already-created default group; just click the Edit button next to it;

 

- There you can configure several aspects of the Service Engines, which is not part of this tutorial, so just stick with the default options.

 

- Select your High Availability Mode depending on your license type;

The Essential license only allows Legacy HA mode.

The Enterprise license allows you to select Elastic HA in addition to Legacy HA.

 

- Click on the Advanced tab;

Configure a name prefix and a folder to organize the Service Engine VMs; the last piece is to configure the Cluster where the SE VMs will be created, click SAVE;

 

When Kubernetes Services are created, a free IP will be pulled from the IP Pool and allocated to the service being provisioned.

- Click on Network tab;

You can see all the Port Groups discovered on the vCenter you just assigned in the Cloud section.

 

- Click the Edit button for the Port Group that will provide the Kubernetes Services, or VIPs if you will.

 

- On the discovered subnet click on the Edit button;

 

- Click on Add Static IP Address Pool

 

- Select Use Static IP Address for VIPs and SE and add a range of free IPs, click SAVE;

 


Make sure the IP Range is now shown and click SAVE;

 

Since I'm using a fully routable network, I need to specify how the Service network reaches my Workload network, where the K8s nodes are placed. This is done by creating a static route.

 

- On the Routing tab, click CREATE;

 

- Fill the fields with the Workload subnet information and the gateway on the Service Network, click SAVE;

 


Also, in order to configure the Virtual Services properly we need to provide some profile information.

 

- On the main menu select Templates, Profiles tab and then IPAM/DNS Profiles,

click on Create;

 

 - Select IPAM Profile

 


- Give it a name, select Allocate IP in VRF and click +Add Usable Network;

 

- Select the Cloud endpoint where your vCenter is configured and the Port Group designated for the Service, click SAVE;

 

- Back on Profiles page click on Create again and select DNS Profile this time;

 

 - Give it a name and click on +Add DNS Service Domain;

 

- Just add your domain name and click SAVE;

 

- Back on Infrastructure menu and then Clouds tab, click on the Edit button for your vCenter Cloud endpoint.

 

- Make sure to configure the IPAM and DNS Profiles we just created, click SAVE;

 

 

If you got this far, THANKS A LOT; your system is now ready to enable vSphere with Tanzu with NSX Advanced Load Balancer.

  

If you are not sure how to enable it, just check my post on how to do it.

 

 


Who am I

I’m an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills from medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and become successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions. Reach me at @dumeirell
