Wednesday, June 2, 2021

NSX Advanced Load Balancer for Tanzu step-by-step

NSX Advanced Load Balancer is indisputably the best load balancing solution for the Supervisor Cluster when enabling it on a vSphere Distributed Switch configuration, providing a lot of enterprise-grade features like high availability, auto scaling, metrics and more. Best of all, it's included in every Tanzu edition, so there's no question about which load balancing solution to use with your Kubernetes platform.

 

Let's go through a step by step journey on how to install and configure it for vSphere with Tanzu.

Hold on to your hat, this will be a bit longer post than what I'm used to producing.

 

My lab has the following scenario: 3 fully routable network segments with no DHCP enabled, as follows:

  • Management Network: where all my management components are placed (vCenter, NSX, ESXi and now the AVI Controller and Service Engines);
  • Service Network: where the Kubernetes Services (Load Balancer) will be allocated to;
  • Workload Network: where the Tanzu Kubernetes Cluster Nodes will be placed; 

 

Every environment is unique, so it's imperative you take some time to go through the topologies and requirements before standing up your own solution. 

 

First things first: the NSX Advanced Load Balancer OVA deployment, also known as the AVI Controller. It's the central control plane of the solution, responsible for the creation, configuration and management of the Service Engines and of the services that are created on demand by developers.

 

Deploying an OVA is a pretty straightforward operation that you have probably done a thousand times over the years, so I'll skip it.

 

Once it's done, just power it on and wait a few minutes for the startup process to finish the configuration (it might take around 10 minutes, depending on the environment).

 

Just open up a browser and hit the IP address you just specified during OVA deployment.

- Create an admin account and set the password;

 


- On the System Settings section create a Passphrase; it's used when backing up and restoring the controller. Also set up your DNS resolver and DNS domain, plus the SMTP information, which I skipped because I don't have it in my environment.



- On Multi-Tenant section keep the default, and click SAVE;

 

Now that the system is ready to go, let's start by configuring secure access to the controller, replacing the self-signed certificate generated during deployment.

 

- On the main menu select Administration, Settings tab and then Access Settings;

Click on the Edit button on your far right;

 

- Remove all the self-signed certificates by clicking on the X button under SSL/TLS Certificate;

 


- Create a new one: click on the arrow and select Create Certificate;

 


I’ll create a self-signed certificate, but I could generate a CSR and import certificates as well. 

- Give it a meaningful name and fill the fields as you would to create any certificate.

For the Common Name I used my controller's FQDN and its IP address as a SAN, then click SAVE;

 

The new certificate should appear on the SSL/TLS Certificate field, click SAVE;
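If you'd rather go the CSR route I mentioned above, here's a rough sketch of how that could look with OpenSSL; the FQDN and IP address below are just placeholders for my lab, so swap in your own (the -addext flag needs OpenSSL 1.1.1 or newer):

# Generate a key and a CSR with the controller FQDN as CN and its IP as a SAN
openssl req -new -newkey rsa:2048 -nodes \
  -keyout avi-controller.key -out avi-controller.csr \
  -subj "/CN=avi.lab.local/O=Lab" \
  -addext "subjectAltName=DNS:avi.lab.local,IP:192.168.100.50"
# Submit avi-controller.csr to your CA, then import the signed certificate
# and its key under Templates > Security > SSL/TLS Certificates

If you go that way, you'd then pick the imported certificate in the SSL/TLS Certificate field instead of the self-signed one.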

 

After changing the certificate you will need to log in again.

 

- Navigate to DNS/NTP, click on the Edit button on your far right;

 

- There you can adjust your DNS settings, but it's equally important to configure your NTP settings, removing or adding entries as needed; just click SAVE when you are done.

 

 

It's time to configure your cloud endpoint, the infrastructure resource holding your workloads.

 

- On the main menu select Infrastructure, Clouds tab;

 

There's a default cloud already created, but as you can see it has None as its type, indicating there's no configuration so far; click the Edit button;

 

- Select VMware vCenter and click Next;

 

- Type the username and password of a user with the required privileges on vCenter, select Write access if you wish the Controller to provision the Service Engines… and trust me, YOU DO… and lastly the vCenter address, click NEXT;

 

- Select the Data Center where the Service Engines will be provisioned;

Enable Prefer Static Routes vs Directly Connected Network and click Next;

 

- For the Management Network, select the Port Group designated for the management traffic, fill out the information about the subnet and add a range of IP addresses, click SAVE;

 

When creating Service Engines on-demand, those IPs will be the ones assigned to them.

 

If everything is fine, now you should have VMware vCenter as type and a green light next to it.
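If you like double-checking things outside the UI, the controller also exposes a REST API. A minimal sketch, assuming you allowed basic authentication under Access Settings and using placeholder credentials and FQDN (object and field names can vary slightly between versions):

# List the configured clouds and their types (expect a vCenter type for ours)
curl -sk -u admin:'MyPassword' \
  https://avi.lab.local/api/cloud | jq '.results[] | {name, vtype}'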

 

Service Engines are grouped together, providing concise configuration and easier/faster management. The Service Engine Group governs how the Service Engines are placed, their configuration and their quantity.

 

- On the Service Engine Group tab there's a default group already created; just click on the Edit button next to it;

 

- There you can configure several aspects of the Service Engines, which is not part of this tutorial, so just stick with the default options.

 

- Select your High Availability Mode depending on your license type;

The Essentials license only allows you to use Legacy HA mode.

Enterprise license allows you to select Elastic HA in addition to Legacy HA.

 

- Click on Advanced tab;

Configure a name prefix and a folder to organize the Service Engine VMs; the last piece is to configure the Cluster where the SE VMs will be created, click SAVE;

 

When creating Kubernetes Services, a free IP will be pulled from the IP Pool and allocated to the service being provisioned, as the quick example further below illustrates. 

- Click on Network tab;

You can see all Port Groups discovered on the vCenter you just assigned on the Cloud section.

 

- Click on the Edit button of the Port Group that will provide the Kubernetes Services, or VIPs if you will.

 

- On the discovered subnet click on the Edit button;

 

- Click on Add Static IP Address Pool

 

- Select Use Static IP Address for VIPs and SE and add a range of free IPs, click SAVE;

 


Make sure the IP Range is now shown and click SAVE;
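Just to illustrate what that earlier note about the IP Pool means in practice: once vSphere with Tanzu is enabled, something as simple as the sketch below, run from inside a Tanzu Kubernetes Cluster (names are placeholders), should come back with an EXTERNAL-IP taken from the pool we just configured:

# Expose a test deployment as a LoadBalancer Service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
# The EXTERNAL-IP shown here should be an address from the static pool
kubectl get service nginx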

 

Since I'm using fully routable networks, I need to specify how the Service network reaches my Workload network, where the K8s nodes are placed. This is done by creating a static route.

 

- On the Routing tab, click CREATE;

 

- Fill the fields with the Workload subnet information and the gateway on the Service Network, click SAVE;
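If you want to confirm the route landed where you expected, it lives in the global VRF context; a rough sketch using the same REST API approach as before (placeholder credentials and FQDN again, and the exact field names may differ by version):

# Show the static routes configured in the VRF contexts
curl -sk -u admin:'MyPassword' \
  https://avi.lab.local/api/vrfcontext | jq '.results[] | {name, static_routes}'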

 


Also, in order to configure the Virtual Services properly we need to provide some profile information.

 

- On the main menu select Templates, Profiles tab and then IPAM/DNS Profiles,

click on Create;

 

 - Select IPAM Profile

 


- Give it a name, select Allocate IP in VRF and click +Add Usable Network;

 

- Select the Cloud endpoint where your vCenter is configured and the Port Group designated for the Service, click SAVE;

 

- Back on Profiles page click on Create again and select DNS Profile this time;

 

 - Give it a name and click on +Add DNS Service Domain;

 

- Just add your domain name and click SAVE;

 

- Back on the Infrastructure menu, on the Clouds tab, click on the Edit button for your vCenter Cloud endpoint.

 

- Make sure to configure the IPAM and DNS Profiles we just created, click SAVE;

 

 

If you got this far, THANKS A LOT, and now your system is ready to enable vSphere with Tanzu with NSX Advanced Load Balancer.

  

If you are not sure how to enable it, just check my post on how to do it.

 

 


Wednesday, May 19, 2021

Updating Supervisor Cluster

 

It's not news that Kubernetes has a massive rate of development, with new releases bringing fixes and new features to the market, which imposes a challenge on every enterprise trying to keep up.

 

Luckily VMware has streamlined the update process on vSphere with Tanzu, as I intend to show here.

 

It's worth mentioning that VMware keeps an n-1 release cycle model, meaning we provide the version which already has the major bugs fixed; for instance, at the time of this post the upstream Kubernetes version is 1.20, so we are working with version 1.19.

A new release updates the Kubernetes software versions as well as the infrastructure and services supporting the Kubernetes clusters, such as virtual machine configurations and resources, vSphere services and namespaces, and custom resources.

 

Once a new release is available, usually following a vCenter update, it will show up inside Workload Management.

 

Just go to the Updates tab and you can see your current version and the version available for an update.
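If you prefer checking versions from kubectl instead of the UI, here's a rough sketch, assuming the vSphere plugin for kubectl is installed and using a placeholder Supervisor address and user:

# Log in to the Supervisor Cluster (placeholder address and user)
kubectl vsphere login --server=192.168.100.10 \
  --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
kubectl config use-context 192.168.100.10
# The VERSION column shows the Kubernetes release of the control plane nodes
kubectl get nodes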

 


 

To start the update process, just select the cluster, desired release and click on Apply Update.

 

 


You must update incrementally. Do not skip versions, such as going straight from 1.17 to 1.19; the path should be 1.17, then 1.18, and then 1.19.

 

vSphere with Tanzu supports rolling updates, ensuring there's minimal downtime for the workloads; of course, if something goes wrong, a rollback is also in place to bring your services back.

 

The new control plane nodes will be deployed; once a new node is available it is joined to the cluster, an older out-of-date node is removed, and the objects are migrated from the old node to the new one.

This process repeats one-by-one until all the control plane nodes are updated, and then the older nodes are deleted from the vSphere inventory.

Once all the new control plane nodes are deployed, the old ones are removed (you can tell by the new control plane node numbers). After the control plane is updated, the worker nodes (ESXi hosts) are updated in a similar rolling fashion, with the spherelet process on each ESXi host updated one-by-one.

 

 


 

Starting with vSphere with Tanzu Update 2 there's a new auto-update policy model of n-2, which means that if you have a Supervisor Cluster that falls behind that rule it will be automatically updated; this is done to prevent customers from running in an unsupported environment.

So, if your vCenter has been updated to include version 1.19, already provisioned clusters on versions 1.18 and 1.17 will be fine, but if any of your clusters is on 1.16 it will be automatically updated.

 

There’s a lot of information about the update process that I really encourage all of you to read here.

 

 

 

Thursday, May 13, 2021

Enabling Supervisor Cluster with AVI

A few weeks ago, I blogged about how to enable Workload Management Cluster (WMC) on vSphere with Tanzu using NSX and vSphere Distributed Switch (vDS) with HAProxy.

 

You might also have heard that starting with vSphere 7 Update 2 there's a new Load Balancer option for vDS use cases: NSX Advanced Load Balancer.

 

NSX Advanced Load Balancer uses a software-defined architecture that separates the central control plane from the distributed data plane where the services run. It provides full-featured, enterprise-grade load balancers, web application firewall (WAF) and analytics that deliver traffic management and application security while collecting real-time analytics from the traffic flows.

 

The best thing about it is that NSX Advanced Load Balancer is included in every Tanzu Edition.

 

Before enabling it, make sure all the requirements are met; here's the video:

 


 

Thursday, May 6, 2021

Tanzu Kubernetes Cluster and Virtual Machine Class


Recently I upgraded my environment to vSphere 7 Update 2 and, while doing some new tests with vSphere with Tanzu, like the Virtual Machine Service (more about it in a later post), to my surprise the creation of a Tanzu Kubernetes Cluster with a YAML file I've been using for several months for my demos failed.
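For reference, the relevant part of that YAML looks roughly like the sketch below; the cluster name, namespace, VM class and storage class are placeholders from my lab:

# Roughly the part of a TKC spec that references a Virtual Machine Class
cat <<'EOF' > tkg-cluster.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-demo
  namespace: demo-ns
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 1
      class: best-effort-small          # the VM class the error complains about
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
EOF
kubectl apply -f tkg-cluster.yaml

The class field under controlPlane and workers is what needs to map to a Virtual Machine Class that the Namespace can actually use.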

 

When I described the deployment of the TKG cluster I found an error saying VirtualMachineClassBindingNotFound.

 



Running the command “kubectl get virtualmachineclasses” proved that the Virtual Machine Class I was using in my descriptive file was available inside the Namespace.
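What I hadn't checked is whether that class was actually bound to the Namespace, which is what the error is really about. On builds that include the VM Service you can list the bindings themselves (namespace below is a placeholder):

# List which Virtual Machine Classes are bound to the Namespace
kubectl get virtualmachineclassbindings -n demo-ns

If the class you reference doesn't show up here, that matches the VirtualMachineClassBindingNotFound error.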

 



That's when I remembered the new Virtual Machine Service and decided to take an extra look at my Namespace's configuration.

 

To my surprise (it shouldn't have been) there's an entirely new section for the VM Service, and one of the options is to configure VM Classes.

 



Once I enabled the VM Class I was using for my TKG Cluster, everything worked as expected and the cluster was created successfully.

 


This improvement greatly enhances the governance of the platform, allowing operators to control the resources developers can consume in a very elegant way.

 

See you next time!

 

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and be successful on their journey. Despite the fact I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions. Reach me at @dumeirell
