Thursday, September 6, 2018

vRealize Automation – Highly Available Directories Management

One of the most appealing features of any application or solution is its ability to provide higher levels of availability and resiliency. That becomes even more important when the solution in question plays a critical role in your business, like a self-service portal where your clients request any kind of service and are served almost instantly, providing agility and faster time to market for your business.

vRealize Automation, when configured in a highly available deployment, provides this level of availability by enabling clustered services for all of its components, but there is one piece of the solution that is commonly overlooked: Directories Management.

As a tenant administrator, it's pretty common to configure a directory over LDAP to provide user authentication; this way your users can benefit from using their already familiar user IDs and passwords to authenticate to the portal.



User authentication in vRA is supported through the use of connectors; each vRA appliance is a connector itself, but typically only one connector is configured to perform directory synchronization.



In order to make Directories Management highly available you must configure a second connector; with this configuration, if one appliance fails the second one takes over the handling of user authentication.

To configure a second connector, go to Administration / Directories Management / Identity Providers and click the specific provider.



Click Add Connector and select the additional connector; make sure both connectors are enabled.
The last piece is to change the IdP hostname to point to your vRA VIP address.
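If you want a quick sanity check that the VIP is answering before handing it to users, a simple request against the tenant URL from any workstation will do (the VIP address and tenant name below are just examples, replace them with your own):
Run: curl -k -I https://vra-vip.example.com/vcac/org/mytenant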



Please be aware that this configuration must be done for each tenant.
Have you been configuring your Directories Management for high availability?


Thursday, August 23, 2018

Additional Charges Missing

vRealize Business for Cloud is a great tool for cloud cost analysis and price visibility. As companies mature their operational model, the ability to control costs and be transparent with end users becomes imperative.

There are situations where additional charges might need to be applied in order to form a more complete pricing policy; I covered that some time ago using vRealize Automation tags.
Last week I was taking the same approach but using vSphere tags instead, since my client does not have vRA yet, but for some reason the charge was not being applied to the VMs.

Let's dig deeper into how I configured this;
On vSphere, I created a tag category called "Aplicação" (the Portuguese word for application) and some tags for specific software licenses, all tied to the "Aplicação" category.
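For reference, the same setup could be scripted with govc instead of the Web Client; this is just a rough sketch, assuming govc is installed and pointed at your vCenter, and the tag and VM names are made-up examples:
Run: govc tags.category.create -d "Application" Aplicação
Run: govc tags.create -c Aplicação "SQL Server License"
Run: govc tags.attach "SQL Server License" /Datacenter/vm/MyVM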


The VM was tagged with the appropriate license tag;



Also, my pricing policy was set up to include an additional charge based on the vSphere tag





To make sure vRB was receiving the inventory information correctly I ran the VM Configuration report;


and indeed vRB was identifying the tag on the VM properly.



Everything was configured correctly!!!

Time for deeper troubleshooting;
vRB collects data from vCenter and caches the results in its own internal database for faster operations. While checking the collection logs we found that vRB was mangling the fancy Portuguese letters ("ç" and "ã") and caching a garbled word instead, so when the values were compared against the pricing policy they did not match, and the additional charge was not applied.



The workaround was to create tags without those (or any other special) characters. This behavior was found on vRB 7.4, so if your language uses fancy characters as well, be aware of this.
I have an issue open with VMware to get this bug fixed.
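If you want to quickly spot which tag names carry non-ASCII characters before renaming them, a rough check with govc could look like this (again assuming govc is configured for your vCenter):
Run: govc tags.ls | LC_ALL=C grep '[^ -~]'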

Let me know if you have faced it too and what language you were using.


Tuesday, July 24, 2018

ESXi host upgrade failed - 0x8b in position 513


I spend most of my time as a Consulting Architect at VMware Professional Services with clients, helping them create innovative solutions, overcome challenges, and so on.
Since every environment is unique, sometimes I stumble into some weird situations, and this past week brought one of them.

The client was upgrading their ESXi hosts from version 6.0 to 6.5; while the majority of the hosts went smoothly, a couple of them presented some undesired behavior.

Update Manager was used to remediate the hosts. Everything was going fine, the patches had been staged and the first reboot occurred as expected, but during the installation it crashed with a blue screen and an error message:

*******************
An unexpected error occurred

See logs for details

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 513: invalid start byte
*******************

(I'm sorry about the image quality, I was in a hurry trying to figure it out)

And then the installation rolled back automatically to ESXi 6.0.
Surprisingly, all the hosts were the same model, installed in the same period, in the same way, with the same ISO, so there was nothing special about those hosts that we could think of.

After some basic troubleshooting nothing popped up, and an internet search for this error did not return anything relevant.
Time to search internally, and VOILA… that's when I found a couple of past cases with the same behavior.

Long story short, the altbootbank was corrupted for some reason; we never found out why.
The solution was to recreate the altbootbank from the bootbank partition.
First we got rid of the content in /altbootbank, and then we copied the content from /bootbank to it.
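For reference, the commands were essentially the ones below, run from the ESXi shell; this is a sketch of what we did, so double-check what you are deleting and back up /altbootbank elsewhere first:
Run: rm /altbootbank/*
Run: cp /bootbank/* /altbootbank/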

Wait a minute, what are /altbootbank and /bootbank all about?

ESXi keeps two independent copies of its boot partition, bootbank and altbootbank. One of them holds the active image (bootbank), which is used to boot up the system, and the other holds an alternate image (altbootbank). You can think of the alternate image as the last known good state, so in case your boot partition becomes corrupted you can boot your host from that last known good state (altbootbank).

It really took me a while to figure out how to solve it. I'm publishing it hoping it can save you some time too; just let me know if you have faced this issue.

Tuesday, July 17, 2018

vSphere Integrated Containers – Affinity rules


Managing a vSphere environment is not about using the technology for its own sake; in fact, what really matters is leveraging this technology to fulfill business needs. Often vSphere administrators use DRS affinity rules to control virtual machine placement, specifying a group of hosts that can fulfill those needs; reasons vary from license constraints to specific hardware needs, increased availability and so on.

With the advent of vSphere Integrated Containers (VIC), developers can instantiate their own containers, container-VMs to be more precise, without the intervention of a vSphere admin. While this increases the agility of the business, it also poses a new challenge: as container-VMs come and go as needed, how can admins keep their affinity rules updated in order to fulfill the business need? For sure, manual intervention is not up for debate.

Luckily, VIC 1.4 brought new functionality: host affinity. When it is enabled, there will be a DRS VM Group for each Virtual Container Host (VCH), and as containers are created or deleted this group will be updated accordingly, helping administrators and developers adhere to those business needs automatically.

During the creation of a VCH, you enable host affinity just by specifying the "--affinity-vm-group" option on the vic-machine command line (not yet available in the VCH creation wizard).
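As a rough sketch, a create command with the flag enabled could look like the line below; the target, credentials, compute resource and network names are placeholders for your own environment, and the remaining options follow whatever you already use today:
Run: vic-machine-linux create --target vcsa.example.com --user administrator@vsphere.local --name vch01 --compute-resource Cluster01 --bridge-network vch01-bridge --public-network VM-Network --no-tlsverify --affinity-vm-group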

A new DRS VM Group will be created with the same name as the VCH. You will also notice that the VCH VM itself is part of this group; it is done that way because it's impossible to create an empty VM Group, although an empty group can exist as a result of removing all VMs from it.

But what about already existing VCHs?
Starting with VIC 1.4.1 you can reconfigure them to enable host affinity as well.
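The reconfiguration should be something along these lines, run with the 1.4.1 engine bundle; again, the target, ID and thumbprint are placeholders:
Run: vic-machine-linux configure --target vcsa.example.com --user administrator@vsphere.local --thumbprint <vCenter thumbprint> --id vm-123 --affinity-vm-group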
 
After creating the VCH, vSphere administrators just need to create a VM-Host affinity rule that ties this newly created VM Group to a Host Group, before handing the VCH over to the developers.

So every time a developer creates or deletes containers on the VCH, the VM Group membership will be updated accordingly and DRS will automatically take care of scheduling the container-VMs based on the rule created earlier.
That enables higher agility and improves operational efficiency while taking the business needs into account.

If you are still not sure why you would use this feature, I'd like to lay out a few use cases.
In a hypothetical scenario of a single vSphere cluster made of 10 hosts distributed across 2 physical racks, you may have:

*** Licensing needs ***
Let's imagine you have an application that is licensed per physical host or processor. To decrease your license cost you might create a host group containing just the hosts you have licensed for this application and match this group, through an affinity rule, with the VCH VM Group; this way you don't need to license your entire cluster.



*** Specific Hardware needs ***
Now, if your containers benefit from a graphics processing unit (GPU), you can create a host group containing the GPU-equipped hosts and match this group with the VCH VM Group; those intensive containers will then always be scheduled on the right hosts.



*** Fault Domains ***
Increasing your fault domain is always a plus when it comes to availability.
You can use a Host Group to create a kind of virtual cluster inside your vSphere cluster where the member hosts are spread among the racks. While you cannot guarantee your application will always be spread evenly between racks, HA will restart your container-VMs on the remaining hosts in case of a rack failure.



But if you want to make sure your application will always be spread evenly between racks, you can create two VCHs, each with an affinity rule to a Host Group based on the hosts of a single rack.
For example, VCH01 will use hosts from rack-A and VCH02 will use hosts from rack-B; now you can control the placement of your containers, assuring that your application will remain available in case of a rack failure.


As you can see, there are many use cases for this feature, but even more important is that it supports the agility you need while staying aligned with your business needs.

Do you have a different use case for this feature? Let us know...

Tuesday, June 5, 2018

vSphere Integrated Containers 1.4 - Upgrade 3/3


Finally, we get to the last step of this journey of upgrading my environment to the latest and greatest vSphere Integrated Containers release so far.

Today I'll upgrade my Virtual Container Host (VCH) to version 1.4.

VCH management and lifecycle actions, like upgrades, are performed through the VIC Engine Bundle.
If you don't have it available yet or are using an older version, grab it now.
The Engine Bundle can be found on the VIC Getting Started page (https://VIC:9443).

Unpack the binaries from Engine bundle tar file;
Run: tar -zxf vic_1.4.0.tar.gz

Check the VCH version on your environment;
Run: vic-machine ls
As you can see it’s running version 1.3 and has one container running.

The amazing thing about VIC is that interruptions to the VCH, like upgrades, do not cause any outage to the containers, basically because they run independently as container-VMs;
if you are using NAT-based port forwarding, communication will be briefly interrupted, but if you are using the exclusive VIC feature, Container Networks, you are in good shape.

Now that we know the ID of our VCH we can upgrade it.
Execute the vic-machine upgrade command and specify the VCH ID we just got in the previous step.
Run: vic-machine upgrade --id "VCH_ID"
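In a real environment the command also needs the vCenter target, credentials and thumbprint, so it ends up looking something like this (all the values below are placeholders):
Run: vic-machine-linux upgrade --target vcsa.example.com --user administrator@vsphere.local --thumbprint <vCenter thumbprint> --id vm-123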

In less than 3 minutes my VCH has been upgraded to version 1.4 and as you can see my container kept running for the entire process.


WHAT ABOUT MY CONTAINERS ???

There’s no process to upgrade your running containers !!!
Containers are ephemeral by nature, so if you want a newer version, delete it and create a new one. Welcome to the container world !!!!

VIC container-VMs are based on Photon OS, and VIC 1.4 comes with a new bootstrap.iso version; don't get too excited, it was not upgraded to Photon OS 2.0 this time, but it did get some nice minor package updates.
So any previously created container will still be running the old Photon OS version, while new ones will get the new bootstrap.iso version.

For comparison, I just ran a new container based on the same image and compared the OS versions.
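If you want to reproduce the comparison, running something like the line below against the VCH endpoint shows the kernel each container-VM boots from, which reflects its bootstrap.iso version; the VCH address is a placeholder and the TLS options depend on how your VCH was deployed:
Run: docker -H vch01.example.com:2376 --tls run --rm busybox uname -r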


That's all, folks. There are no more excuses to be running an older VIC version; if you are still unsure about why to upgrade, take a look at the complete list of what's new.

See you
 

Wednesday, May 30, 2018

vSphere Integrated Containers 1.4 - Upgrade 2/3


Hello there! Continuing my series on upgrading vSphere Integrated Containers to version 1.4, today I'll cover how to upgrade the vSphere Web Client plug-in.

Let’s start with some housekeeping:
I'm assuming you are running the vCenter Server Appliance, right? Who is running the Windows version anyway?!
  • You already upgraded your VIC appliance to version 1.4;
  • VIC plugin 1.2.x or 1.3.x is already installed on vCenter;
  • The bash shell is enabled on VCSA; (just check it on vCenter's VAMI page)
 
Copy the VIC Engine Bundle file from the new appliance to the VCSA;
the Engine Bundle can be found on the VIC Getting Started page (https://VIC:9443).
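For example, after downloading the bundle from that page, something like this from a workstation that can reach the VCSA will do (the hostname is a placeholder):
Run: scp vic_1.4.0.tar.gz root@vcsa.example.com:/tmp/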

Unpack the binaries from the Engine Bundle;
Run: tar -zxf vic_1.4.0.tar.gz

Once in the VCSA Bash shell, set up a couple of environment variables that the upgrade script relies on;
Run: export VIC_ADDRESS="VIC_v1.4_IP"
Run: export VIC_BUNDLE="vic_engine_bundle_version"

Then execute the upgrade script;
Run: ../vic/ui/VCSA/upgrade.sh




Provide the vCenter name and a user with privileges to register plug-ins; if the plug-in version to be upgraded is correct, just hit yes to proceed.


The log is provided on the screen during the process;

If everything ran as expected, just restart the Web Client services so the new version takes effect.

Run: service-control --stop vsphere-ui
Run: service-control --stop vsphere-client
Run: service-control --start vsphere-ui
Run: service-control --start vsphere-client

When logging back in to vCenter we can see the plug-in has been upgraded.


Unfortunately, this new plug-in brings no new features, but the VCH Creation wizard has gone through some design improvements, collapsing and sorting information to make deployments more intuitive and easier.


Now we are missing just the last piece: upgrading the Virtual Container Hosts... keep watching.


Friday, May 25, 2018

vSphere Integrated Containers 1.4 - Upgrade 1/3



On May 15th VMware released a new version of its own Docker implementation, vSphere Integrated Containers 1.4. As always, it comes with enhancements that include, but are not limited to, support for vSphere 6.7 and ROBO deployments, affinity rules (more on that in a future post) and some management portal improvements.

But today I want to cover the upgrade process; I will break it into 3 phases for easier consumption:

- Upgrade vSphere Integrated Containers Appliance (This post)

To be honest, the upgrade itself is not really an in-place upgrade; in fact, the process involves deploying a new VIC appliance and copying the relevant information from the previous appliance to the new one, including the management portal and registry configuration and data.

The good thing about it is that it leaves you with an easy rollback option: since the previous appliance is kept intact, in case of any problem you can just get rid of the new appliance and power the previous one back on, and everything will still be there just the way it was before the upgrade.

My current environment comprises a VIC 1.3 appliance and one VCH connected to it;
note: you can upgrade from any version from 1.2.x onwards.
I also have a project (cnameetupsp) with a few images, which have been scanned for vulnerabilities and signed with Content Trust (another post I owe you guys).


Let's start by downloading and deploying VIC 1.4; since it's a new appliance, give it its own IP address and hostname.
The OVA deployment process is pretty standard among VMware's solutions, so I'm not going through it, but if you still have doubts the product's documentation is your friend.

Important: make sure to use the Flex-based vSphere Web Client to deploy it, even if you are using vSphere 6.7, because the HTML5 Web Client is not ready for VIC yet; although the deployment may succeed, the configuration required for VIC to work might not be applied properly.

 
Once the appliance is deployed, access it through SSH (make sure to enable SSH during the OVA deployment).

Important 2: do NOT go to the Getting Started page of the new appliance, because it will initialize the services for a fresh setup and cause the upgrade to fail; if you have already done that, just deploy a new appliance ; )


Once in the new appliance console, just run the upgrade script;
Run: /etc/vmware/upgrade/upgrade.sh

The script will prompt you for information about the vCenter where the previous appliance is provisioned; if you have an external PSC, provide its information as well, otherwise just leave it blank.

Now you need to provide the information about your previous VIC appliance; make sure the appliance is powered on and has SSH enabled. If not, power off the appliance and enable it through Permit Root Login within the vApp Options.

There you go, just sit back, relax and watch the upgrade process running;
During the process, the relevant files are copied over and the old appliance is shut down.
If you need more information or have to troubleshoot, the log is saved at /var/log/vmware/upgrade.log.
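To follow the upgrade live from another SSH session on the new appliance, something as simple as this will do:
Run: tail -f /var/log/vmware/upgrade.log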

Once it's done, just power on the new appliance and log in.
As we can see, the upgrade was successful and my VCH is connected to the new VIC appliance.

My projects and images are there as well.


The only downside is that my images came up unsigned; that's because the new appliance comes with a different certificate than my previous one.
So, if you are using Content Trust you will have to plan accordingly and re-sign your images after the upgrade so users will be able to pull and run them again.

That's all for today. Stay tuned for the remaining upgrade phases: the vSphere plug-in and the Virtual Container Hosts.



Friday, May 4, 2018

VMware Cloud on AWS – Test Drive


There has been a big buzz since VMworld last year, when the partnership between VMware and Amazon was announced, creating what's being called VMware Cloud on AWS.
If you are like me, you are dying of curiosity to get your hands on this beauty; well, let me tell you, there's a Hands-on Lab about it.

HOLs are the fastest and easiest way to test drive the full technical capabilities of VMware products, including VMC on AWS.

As with any VMware HOL, there's a manual with instructions to guide you through the lab, but if you are more of a freestyle type, just open up Chrome and it will redirect you to the Cloud Services page.


From there you can consume the VMware Cloud on AWS service with a single click.


The creation of your first SDDC could not be easier.


In less than 3 minutes, literally, my SDDC was ready and I could throw workloads on it.


It's a fully functional service; you can play with all its capabilities.


After playing with it, let's say you are convinced that the solution fits your needs, but you are not sure how to start and size your environment.

The first approach would be to "start small": get the smallest SDDC possible, which nowadays starts with 4 hosts, and increase the number of hosts as you need. Scale-out capability is part of VMC on AWS, and it takes just minutes to spin up a new host in your cluster.

Just fill out a simple form with a few pieces of information like VM count, storage size, vCPU/core ratio, IP profile…

 
…. and BANG, the recommendation for your environment is presented to you.


I personally love the detailed storage consumption chart.

Along with the cluster breakdown information.



What else do we need ?!?!?


Friday, April 13, 2018

VMware's release storm




You probably woke up today thinking it was just another regular Friday, right?
During my morning e-mail check, I was surprised by the number of new product versions VMware released; some are just bug fixes while others contain amazing new features.

I will highlight the ones I think are most relevant, but you can check all the details in each product's release notes.
Don't be scared by the amount of detail to read; focus on the products you have in your environment today instead. I'm sure you will find some fix or new feature that will make your life easier!

Shall we begin?

In vRealize Automation 7.4, the most amazing feature is the new out-of-the-box custom request form, which removes the need to wrap infrastructure and PaaS blueprints into XaaS blueprints.
Definitely a game changer. You might also enjoy the capability of deploying vSphere blueprints from OVF or OVA files.

vRealize Business for Cloud keeps closing the gap with Chargeback when it comes to vCloud Director;
speaking of which, how about an overage policy for allocation pool models that applies differential rates for vCPU and RAM usage? A killer case for Service Providers!
It also adds storage tier pricing based on storage policies, plus charging for network bandwidth usage and data ingress and egress.
VMware Cloud on AWS support got some new features that are worth checking out as well.

vRealize Suite Lifecycle Manager is definitely a product you want in your tool belt; debuting in this version are the install pre-checker, which validates your environment before starting a deployment, and content lifecycle management, which lets you manage the release of content such as blueprints and workflows across multiple environments.

vROps keeps focusing on continuous performance optimization and capacity management, as you can see from these enhancements:
- 30% footprint reduction when monitoring the same number of objects;
- predictive real-time analytics;
- software-license-aware workload placement;
- native cost visibility;
- new quickstart page, customized homepage, enhanced vSAN dashboards and much more.

If you are looking for a central place to check the health of your entire SDDC, the SDDC Health management pack is made for you. It now comes with improved vSAN and vCenter alerts, agentless vRA health monitoring and a vROps sizing dashboard.

Another great management pack provides out-of-the-box vCenter self-healing workflows that act upon alerts… how great is that?!

Besides some bug fixes, this release added support for vCloud Director 9.1.

Although it's in maintenance mode, it is still receiving fixes and is now compatible with vROps 6.7.
If you are looking for a longer-term solution, take a look at vRealize Network Insight.
 
You don't need to deploy dedicated instances for tenants anymore; multi-tenancy is now available in vRO, along with an updated web-based Clarity UI that brings centralized log views and workflow details directly into the monitoring client.

This minor release added support for Kubernetes 1.9.5 and Golang 1.9.4 along with a few fixes, from which I highlight:
- nodes are drained before they stop, minimizing downtime;
- fixes for unmounting Docker volumes;
- a fix for BOSH DNS issues.

vRealize Log Insight 4.6.0 | April 12th 2018
Enhanced support for up to 15 vCenter Servers per node, notifications if a component suddenly stops sending logs, and additional create/delete APIs.

It's worth mentioning that vRCS will no longer work with vRA 7.4 or later and its functionality is moving to vRealize Suite Lifecycle Manager, but if you are still on vRA 7.3 you should get its new version, which brings a lot of improvements and defect fixes, like the names of approvers now being recorded in pipeline executions, SLAs for manual approvals and an out-of-the-box destroy action for deployments.

Happy Friday 13th

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills across medium to large environments. Nowadays I work for VMware as a Consulting Architect, helping customers embrace the Cloud Era and be successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions. Reach me at @dumeirell
