Tuesday, July 4, 2017

vRealize Automation – Installation fails with FQDN for RabbitMQ enabled


This week my adventures led me to the implementation of vRealize Automation 7.3 for one of my clients.
It had been some time since I last installed vRA, so taking a look at the Installation Guide sounded like the right thing to do before starting anything. I soon noticed a new step, "Activate FQDN for RabbitMQ Before Installation" (pg 36); as a good boy I enabled it as stated and started the installation through the wizard.

OK, I will stop here and save you from having to read it all.
DO NOT, I repeat, DO NOT perform any activity on the appliances before running the installation, as it could cause installation issues which would be hard to troubleshoot later.

Now, if you still want to read what issues it caused me, here they are:

The first issue appeared when creating a self-signed certificate for the vRA appliance (yes, I was using a self-signed one). When I hit the "Save Generated Certificate" button the process started but never finished, leaving a loading message on the screen which never went away.


After 40 minutes I decided to close the browser and open it again; when it reopened, the wizard jumped me to the subsequent step, showing a green mark for the certificate step and allowing me to proceed.

I went through all the steps without a problem, even the Validation step, but when I started the installation it immediately failed with the error:
The following parameters: [ VraWebCertificateThumbprint, ] do not exists for command [ install-web ]

 
It was clear to me that the previous certificate step did not conclude successfully, so I went back and tried to create it again; like the first time, the process got stuck with the loading message.
This time I decided to reboot the appliance; when it came back up I could re-create the certificate (within seconds) and proceed with the installation step.
But now it failed on the first installation step, "Configure Single Sign-On".

 
This time the log showed the Application Server would not start, with Identity Manager receiving a "404" when attempting to connect.

After an entire day of attempts, troubleshooting and log reading, I gave up and started fresh the next day.

This time without enabling FQDN for RabbitMQ before installation. Surprisingly, or not, this time the installation was smooth from start to end.

Don't worry, you can enable FQDN for RabbitMQ after installation (pg 119).

Obs: forgive the screenshots' quality, I did not realize I might need them until I did.

See you next

Wednesday, June 21, 2017

vSphere Integrated Containers – Performance over the Limits - 1/2


After I talked about resource management within vSphere Integrated Containers, some questions arose immediately:

What happens if I try to create containers above the limits?

That's what I'm about to demonstrate today; let's take this journey together, shall we?

By default, the VCH endpoint VM has 1 vCPU and 2 GB of memory, and that's what we'll work with.
With that said, the first VCH I'll create has a low CPU limit (only 100 MHz);
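
For reference, a creation command along these lines is what I mean; the target, datastore and network names are just placeholders from my lab, the --cpu 100 flag is the point here:

    vic-machine-linux create \
        --target 'administrator@vsphere.local':'password'@vcenter.lab.local \
        --compute-resource Cluster01 \
        --image-store datastore1 \
        --bridge-network vch-bridge \
        --name vch-low-cpu \
        --cpu 100 \
        --no-tlsverify
    # --cpu sets the VCH resource pool CPU limit, in MHz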

Apparently the VCH creation went normally, but during the endpoint VM communication validation step some errors were reported, which looks like a timeout issue;

If we take a look at vSphere, the endpoint VM takes a long time to boot up, just showing a banner on its console...
... eventually it boots up, but with some errors on the console; you might try, but even a simple "ping" does not work.
Looking further, we can see on the performance chart that the VM is entitled to only 100 MHz, the same amount specified during VCH creation.
So during the boot process the endpoint VM demands more than it is entitled to; that's when we see a high READY time, meaning the VM is ready to run but cannot get scheduled on the physical ESXi CPUs.
Now the timeout makes sense ; )


Let's try differently this time and create a VCH with a low memory limit (only 100 MB);
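
The command is essentially the same sketch as before, just swapping the CPU flag for a memory one:

    vic-machine-linux create \
        --target 'administrator@vsphere.local':'password'@vcenter.lab.local \
        --compute-resource Cluster01 \
        --image-store datastore1 \
        --bridge-network vch-bridge \
        --name vch-low-mem \
        --memory 100 \
        --no-tlsverify
    # --memory sets the VCH resource pool memory limit, in MB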

This time the VCH creation fails with a more descriptive message: "Failed to power on appliance. The available Memory resources in the parent resource pool are insufficient for the operation".
As we can see, there's not enough memory to power on the endpoint VM, so it cannot proceed.

Well, it's clear that, at a minimum, we need to set the limits higher than the VCH endpoint VM configuration.

I think it's enough for one post; next I will cover what happens when the containers consume more than the limits.

Stay tuned !!

Tuesday, June 13, 2017

vSphere Integrated Containers – User Defined Network


A few weeks ago I talked about vSphere Integrated Containers networking: what the networks are used for, their syntax, and how traffic flows to and from the containers, but it was all from the point of view of vSphere administrators provisioning a virtual container host, VCH.

Developers, on the other hand, are used to creating their own networks for several reasons, like isolating containers from each other, creating a backend network for some application, or just getting service discovery outside of the default bridge network; these are called user-defined networks.

Let’s see how it works:
The standard deployment of a VCH comes with a default bridge network;

When we create a container without any network specification, it's connected to the port group backing the bridge network, which was specified during VCH provisioning (in this case, "backend"), and gets an IP address from the 172.16.0.0/24 address space.
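
Something like the pair of commands below shows it; the VCH endpoint address and the image name are just examples from my lab:

    docker -H vch01.lab.local:2376 --tls run -d --name web1 nginx
    docker -H vch01.lab.local:2376 --tls inspect --format '{{json .NetworkSettings.Networks}}' web1
    # with no --net option the container lands on the default "bridge" network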
 

Now, let's create a user-defined network.
Obs: I'm using the --subnet option because I don't have a DHCP server listening on that segment.
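
The command is the standard docker network create; the network name is just the one I picked for the demo:

    docker -H vch01.lab.local:2376 --tls network create --subnet 10.10.10.0/24 my-backend
    docker -H vch01.lab.local:2376 --tls network ls
    # the new user-defined network shows up alongside the default "bridge" network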



This time I will create another container, connected to the new user-defined network I just created.
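
Attaching it is just a matter of the --net option, along these lines (same placeholder endpoint and image as before):

    docker -H vch01.lab.local:2376 --tls run -d --net my-backend --name web2 nginx
    docker -H vch01.lab.local:2376 --tls inspect --format '{{json .NetworkSettings.Networks}}' web2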


As expected, the container is connected to the same port group backing the bridge network, but received an IP address from the range specified during the user-defined network creation (10.10.10.0/24).



My point here is that, although they are connected to the same segment (port group), the different address spaces provide enough segregation between containers.

That's one of the reasons we recommend a dedicated segment for each VCH bridge network; otherwise, different users could create additional user-defined networks with the same address space as each other, which might inadvertently allow access to each other's containers or cause an IP conflict.

See you next

Wednesday, June 7, 2017

vSphere Integrated Containers – Resource Manager

One of the many benefits of vSphere Integrated Containers, VIC, is its ability to control and manage resource allocation.
Let's make a comparison between VIC and a traditional container deployment to clarify what I mean by that.

With a traditional container deployment, you have to size your container host upfront; we all know how easy it is to foresee adoption and future growth, right?!
Inevitably you will end up in one of two situations: either you sized your container host too small and in a few weeks or months it will be full and your developers will ask for another one, or you sized it too big and the container host is out there just wasting resources which could be utilized somewhere else. Not efficient.
Let's be honest, neither of them is a good scenario.

VIC, on the other hand, approaches resource allocation in a different way: 
first, when you create your virtual container host, VCH, you are not allocating resources, you are just defining its boundaries; think of it as the vSphere resource pool definition we have all known for years.

When you create your VCH it will show up in vCenter as a vApp (nothing really new).


By default, a VCH is created without any limitation; just edit the VCH and you will see it.

At this point, you are probably worried that your developers would consume ALL your resources.
Luckily, VIC has all the tools to solve the problem: during VCH creation you can specify limits for memory (in MB) and CPU (in MHz), just by adding the options --memory "size" and/or --cpu "amount".
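
As an illustration, a VCH capped at 8 GB of memory and 10 GHz of CPU could be created roughly like this (target, datastore and network names are placeholders from my lab):

    vic-machine-linux create \
        --target 'administrator@vsphere.local':'password'@vcenter.lab.local \
        --compute-resource Cluster01 \
        --image-store datastore1 \
        --bridge-network vch-bridge \
        --name vch-prod \
        --memory 8192 \
        --cpu 10000 \
        --no-tlsverify
    # --memory is expressed in MB and --cpu in MHz, both applied to the VCH vApp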


Now the limitation is applied to the vApp

It's also reported back to the developers.
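
The quickest way to see it from the developer side is a plain docker info against the VCH endpoint (placeholder address again):

    docker -H vch01.lab.local:2376 --tls info
    # the CPU and memory totals reported here reflect the VCH limits, not the whole cluster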



Well, it does not protect us from unexpected growth, does it?
But since a VCH is just like a resource pool, you can manually edit it, expanding or shrinking its limits without any impact or downtime to the actual containers.
It’s what I call an elastic solution !!!

What about the containers themselves?

By default, they are created with 2 vCPUs and 2 GB of RAM.

If you want, you can give them more or fewer resources; just add the options --memory "size" and/or --cpuset-cpus "amount" when creating your container.
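
For example, something along these lines (placeholder endpoint and image) gives the container VM 4 vCPUs and 4 GB of RAM; note that with VIC the --cpuset-cpus option is used as a count of virtual CPUs:

    docker -H vch01.lab.local:2376 --tls run -d --cpuset-cpus 4 --memory 4g --name big-app nginx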


Remember, since every container is a unique VM in vCenter, you can see its allocation is properly set up.



Now you can size your container host like a boss !!!

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Senior Consultant, helping customers embrace the Cloud Era and be successful on this journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions.
