Friday, August 18, 2017

vSphere Integrated Containers – Performance over the Limits 2/2

I'm back to finish the post series about resource management within vSphere Integrated Containers. Last time I discussed what happens to the Virtual Container Host (VCH) when its limits are set too low; now let's dive into what happens to your containers in similar situations, shall we ?!?

** If you are in a hurry, you can skip to the Conclusion section at the end of this post ; )

To run these tests I set up an environment with two VCHs:
- Endpoint 1: no limits

- Endpoint 2: limits of 500 MHz for CPU and 500 MB for memory
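For reference, a VCH with such limits can be created with vic-machine. This is only a sketch: the target address, credentials, cluster, datastore, and network names below are placeholders for my lab environment.

```shell
# Sketch only: target URL, credentials, and resource names are placeholders.
# --cpu and --memory set the VCH resource pool limits (in MHz and MB).
vic-machine-linux create \
  --target 'administrator@vsphere.local:Passw0rd!'@vcenter.example.com \
  --compute-resource Cluster01 \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --name VCH-limited \
  --cpu 500 \
  --memory 500 \
  --no-tlsverify
```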

The first question is: what happens when I create a container with more resources than I'm entitled to (over the limits of the VCH)?

On the VCH with limits, I'll create a busybox container with 2 vCPUs (each vCPU gets 1.8 GHz) and 4 GB of RAM.
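The container is created with the regular Docker CLI pointed at the VCH endpoint. A sketch, with a placeholder endpoint address; note that on VIC the `--cpuset-cpus` flag is interpreted as the number of vCPUs to give the container VM, and `-m` as its memory.

```shell
# Endpoint address is a placeholder for the limited VCH.
# On VIC: --cpuset-cpus = number of vCPUs; -m = container VM memory.
docker -H VCH-limited.example.com:2375 run -d \
  --cpuset-cpus 2 \
  -m 4g \
  busybox /bin/top
```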

As you can see above, the container was created successfully, without any message or warning.

So the inevitable question: how will containers perform in such a scenario?

To answer that question, I used a container built specifically for stress tests, called progrium/stress.

*** CPU Utilization Test ***

First, I want to test the impact on CPU utilization by running a CPU stress on 1 vCPU for 5 minutes.
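The CPU test boils down to a single stress worker with a 5-minute timeout; the endpoint address is again a placeholder for my lab.

```shell
# One CPU-bound worker for 300 seconds against the limited VCH (placeholder address).
docker -H VCH-limited.example.com:2375 run --rm progrium/stress \
  --cpu 1 --timeout 300s
```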

Since containers in VIC are represented by VMs, vSphere-specific monitoring tools, like vROps, are perfect for monitoring their performance. In this case, I used the built-in vCenter performance chart.
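If you prefer the command line over the charts, the same counters can also be pulled with govc. A sketch only, assuming govc is already configured (GOVC_URL and credentials) and using a placeholder inventory name for the container VM:

```shell
# Pull recent CPU samples for the container VM (VM name is a placeholder).
# cpu.demand.average and cpu.entitlement.latest are in MHz;
# cpu.ready.summation is in milliseconds per sample interval.
govc metric.sample -n 6 "vm/containerVM-*" \
  cpu.demand.average cpu.entitlement.latest cpu.ready.summation
```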

We can notice that its Entitlement is less than the VCH CPU limit.
The stress test's Demand is higher than the Entitlement, and that's why there's an insanely high Ready time, meaning the process is ready to run but is waiting to be scheduled by the ESXi host.

Whatever your container is running will take a long time to finish.

I ran the same test on a VCH without limits;

As we can see, the Demand was lower than what the container is entitled to; the Entitlement here is also 5x higher than in the previous test, and Ready time is very low.
This container has no constraints at all.

*** Memory Utilization Test ***

Now let's see the impact on memory utilization.
I ran a memory stress to consume 512 MB for 5 minutes.
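The memory test used one worker allocating 512 MB, again with a 5-minute timeout (placeholder endpoint address):

```shell
# One worker allocating 512 MB for 300 seconds (placeholder address).
docker -H VCH-limited.example.com:2375 run --rm progrium/stress \
  --vm 1 --vm-bytes 512M --timeout 300s
```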

As we can see on the graph, the container VM spent an average of 40% of its time waiting for memory to become available (the Latency metric). Because the VCH limits prevent the container VM from accessing the host's physical memory, we see higher Ballooning activity, which is slow compared to physical memory access.

On the other hand, running the same test on the VCH with no limits:

We can see that the container VM could use all the memory available to it, and because there are no constraints, we see no Balloon activity or Latency at all.

******* Conclusion *******

- Container VMs behave exactly like traditional VMs and are affected by the same ESXi host contention mechanisms;
- VIC does not prevent you from creating containers above the VCH limits;
- There's no warning or message if you create containers above the VCH limits;
- If container CPU consumption is higher than the VCH limits, CPU world cycles will be queued by the ESXi host until they can be scheduled, and a high Ready time will be observed;
- If container memory consumption is higher than the VCH limits, the ESXi host will prevent the container VM from accessing physical memory and memory management techniques, like ballooning and swapping, will be used.
Either way, CPU or memory over-utilization will decrease the performance of your container and the application within.
Monitoring and a Capacity/Performance methodology are your friends, keep them close to you !!

Wednesday, July 26, 2017

Additional vSphere Replication Servers

If you have been following my Demystifying vSphere Replication posts, you will remember that to reach the maximum number of VMs replicated through vSphere Replication you need to provision additional vSphere Replication Servers (VRS) to spread the load.

So, let's see how to add more VRS to your environment.

- Open vSphere Web Client and select your vCenter;
- On the Configuration tab, select Replication Servers;

- Click on, "Deploy new vSphere Replication Server from OVF";

- Click Browse to select your OVF and then click Next;

BTW: the vSphere Replication Server OVF is the one ending with "AddOn";

- Give it a name and a location and click Next;

- Select the cluster where the VRS will be deployed and click Next;

- Review the details and click Next;

- Select a datastore to store the VRS and click Next;

- Provide the network information related to your environment and click Next;

- Review the details and click Finish to start the deployment;
Wait until the deployment finishes.
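If you need to deploy several VRS, the wizard steps above can also be scripted with ovftool. This is a rough sketch only: paths, names, and the vi:// target are placeholders, and the AddOn OVF also takes networking and password properties that I'm omitting here.

```shell
# Sketch: paths, names, and the vi:// target are placeholders for your environment.
ovftool \
  --acceptAllEulas \
  --name=VRS-02 \
  --datastore=datastore1 \
  --net:"Management Network"="VM Network" \
  /tmp/vSphere_Replication_AddOn_OVF10.ovf \
  'vi://administrator@vsphere.local@vcenter.example.com/DC01/host/Cluster01'
```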

- Click on "Register a virtual machine as a vSphere Replication Server";

- Just browse until you find your recently created VRS and click OK;

Bada bing, bada boom.... you have a new vSphere Replication Server in your environment !!!

obs: obviously you must already have a vSphere Replication Management Server implemented; I'm not covering it here. There are plenty of blog posts about it, but basically it's an OVF deployment: just follow the wizard and it's done.

Tuesday, July 4, 2017

vRealize Automation – Installation fails with FQDN for RabbitMQ enabled

This week my adventures led me to the implementation of vRealize Automation 7.3 for one of my clients.
It had been some time since I last installed vRA, so taking a look at the Installation Guide seemed like the right thing to do before starting anything. I soon noticed a new step, "Activate FQDN for RabbitMQ Before Installation" (pg 36); as a good boy, I enabled it as stated and started the installation through the wizard.

OK, I will stop here and save you from having to read it all.
DO NOT, I repeat, DO NOT perform any activity on the appliances before running the installation, as it could cause installation issues that are hard to troubleshoot later.

Now, if you still want to read what issues it caused me, here they are:

The first issue appeared when creating a self-signed certificate for the vRA appliance (yes, I was using a self-signed one). When I hit the "Save Generated Certificate" button the process started but never finished, leaving a loading message on the screen that never went away.

After 40 minutes I decided to close the browser and open it again; the wizard then jumped me to the subsequent step, showing a green mark for the certificate step and allowing me to proceed.

I went through all the steps without a problem, even the Validation step, but when I started the installation it immediately failed with the error:
The following parameters: [ VraWebCertificateThumbprint, ] do not exists for command [ install-web ]

It was clear to me that the earlier certificate step had not completed successfully, so I went back and tried to create the certificate again; like the first time, the process got stuck with the loading message.
This time I decided to reboot the appliance; when it came back up I could re-create the certificate (within seconds) and proceed with the installation.
But then it failed on the first installation step, "Configure Single Sign-On".

This time the logs showed that the Application Server would not start, with Identity Manager receiving a "404" when attempting to connect.

After an entire day of attempts, troubleshooting, and log reading, I gave up and started fresh the next day.

This time, without enabling FQDN for RabbitMQ before installation, surprisingly (or not) the installation was smooth from start to end.

Don't worry, you can enable FQDN for RabbitMQ after installation (pg 119).

obs: forgive the screenshot quality; I did not realize I might need them until I did.

See you next time!

Wednesday, June 21, 2017

vSphere Integrated Containers – Performance over the Limits - 1/2

After I talked about resource management within vSphere Integrated Containers, some questions arose immediately:

What happens if I try to create containers above the limits?

That’s what I’m about to demonstrate today, let’s take this journey together, shall we?

By default, the VCH endpoint VM has 1 vCPU and 2 GB of memory, and that's what we'll work with.
With that said, the first VCH I'll create has a low CPU limit (only 100 MHz);
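The limit is set at creation time with vic-machine's --cpu flag. A sketch only, with placeholder target details for my lab:

```shell
# Placeholder target/credentials; --cpu 100 caps the VCH resource pool at 100 MHz.
vic-machine-linux create \
  --target 'administrator@vsphere.local:Passw0rd!'@vcenter.example.com \
  --compute-resource Cluster01 \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --name VCH-lowcpu \
  --cpu 100 \
  --no-tlsverify
```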

Apparently the VCH creation went normally, but during the endpoint VM communication validation step some errors were reported, which look like a timeout issue;

If we take a look at vSphere, the endpoint VM is taking a long time to boot up, just showing a banner on its console...
... eventually it boots up, but with some errors on the console; you might try, but even a simple "ping" does not work.
Looking further, we can see on the performance chart that the VM is Entitled to only 100 MHz, the same amount that was specified at VCH creation.
So during the boot process the endpoint VM Demands more than it's entitled to, and that's when we see a high Ready time, meaning the VM is ready to run but cannot get scheduled on the physical ESXi CPU.
Now the timeout makes sense ; )

Let's try it differently this time and create a VCH with a low memory limit (only 100 MB);
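For reference, the equivalent vic-machine call, again with placeholder target details:

```shell
# Placeholder target/credentials; --memory 100 caps the VCH resource pool at 100 MB.
vic-machine-linux create \
  --target 'administrator@vsphere.local:Passw0rd!'@vcenter.example.com \
  --compute-resource Cluster01 \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --name VCH-lowmem \
  --memory 100 \
  --no-tlsverify
```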

This time the VCH creation fails with a more descriptive message: "Failed to power on appliance. The available Memory resources in the parent resource pool are insufficient for the operation".
As we can see, there's not enough memory to power on the endpoint VM, so the creation cannot proceed.

Well, it's clear that the limits need to be set at least as high as the VCH endpoint VM's configuration.

I think that's enough for one post; in the next one I will cover what happens when containers consume more than the limits.

Stay tuned !!

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I work for VMware as a Consulting Architect, helping customers embrace the Cloud Era and make their journey successful. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions.
