Friday, August 18, 2017

vSphere Integrated Containers – Performance over the Limits 2/2


I'm back to finish the post series about resource management within vSphere Integrated Containers. In the last one I discussed what happens to the Virtual Container Host when the limits are set too low; now let's dive into what happens to your containers in similar situations, shall we?!

** If you are in a hurry, you can skip to the Conclusion section at the end of this post ; )

To run these tests I set up an environment with two VCHs.
- Endpoint: 192.168.100.124 (NO LIMITS)


- Endpoint: 192.168.100.126 (limits of 500MHz for CPU and 500MB for memory)



The first question is: what happens when I create a container with more resources than it is entitled to (over the limits of your VCH)?

On the VCH with limits, I'll create a busybox container assigned 2 vCPUs (each vCPU gets 1.8GHz) and 4GB of RAM.
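The command is plain Docker pointed at the VCH endpoint, something along these lines (a sketch: I'm assuming the VCH was deployed without TLS on port 2375, so adjust -H and the TLS flags to match your deployment; also note that VIC interprets --cpuset-cpus as the number of vCPUs to give the container VM):

docker -H 192.168.100.126:2375 run -it --cpuset-cpus 2 -m 4g busybox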


As you can see above, the container was created successfully, without any message or warning.

So the inevitable question is: how will the containers perform in such a scenario?

To answer that question, I used a container built specifically for stress tests, progrium/stress.

*** CPU Utilization Test ***

First I want to test the impact on CPU utilization, running a CPU stress on 1 vCPU for 5 minutes.
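A command along these lines does the job (a sketch; progrium/stress simply wraps the classic stress tool, so these are the standard stress flags):

docker -H 192.168.100.126:2375 run --rm progrium/stress --cpu 1 --timeout 300s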


Since containers in VIC are represented by VMs, vSphere-specific monitoring tools, like vROps, are perfect for monitoring their performance. In this case, I used the built-in vCenter performance chart.

We can notice that its Entitlement is lower than the VCH CPU limit.
The stress test's Demand is higher than the Entitlement, which is why there's an insane Ready time, meaning the container VM is ready to run but is waiting to be scheduled by the ESXi host.

Whatever your container is running will take a long time to finish.

I ran the same test on a VCH without limits;


As we can see, the Demand was lower than what the container is entitled to; the Entitlement here is also 5x higher than in the previous test, and the Ready time is very low.
Meaning this container has no constraints at all.



 
*** Memory Utilization Test ***

Now let's see the impact on memory utilization.
I ran a memory stress to consume 512MB for 5 minutes.
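Again, a command along these lines (a sketch, using the standard stress flags to spawn one worker allocating 512MB for the duration of the test):

docker -H 192.168.100.126:2375 run --rm progrium/stress --vm 1 --vm-bytes 512M --timeout 300s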


As we can see on the graph, the container VM spent an average of 40% of its time waiting for memory to become available (the Latency metric). Because the VCH limit prevents the container VM from accessing the host's physical memory, we see higher Ballooning activity, which is slow compared to physical memory access.

On the other hand, running the same test on the VCH with no limits:


We can see that the container VM could use all the memory available to it, and because there are no constraints, we see no Balloon activity or Latency.

  
******* Conclusion ******

- Container VMs will behave exactly like traditional VMs and will be affected by the same ESXi host contention mechanisms;
- VIC does not prevent you from creating containers above the VCH limits;
- There's no warning or message if you create containers above the VCH limits;
- If container CPU consumption is higher than the VCH limits, the container VM's CPU world will be queued by the ESXi host until it can be scheduled to run, and a high Ready time will be observed;
- If container memory consumption is higher than the VCH limits, the ESXi host will prevent the container VM from accessing physical memory and memory management techniques, like ballooning and swapping, will be used.
Either way, excess CPU or memory utilization will decrease the performance of your container and the application within it.
Monitoring and a Capacity/Performance methodology are your friends, keep them close to you !!
 

Wednesday, July 26, 2017

Additional vSphere Replication Servers

If you have been following my Demystifying vSphere Replication posts, you will remember that to reach the maximum number of VMs replicated through vSphere Replication you need to provision additional vSphere Replication Servers (VRS) to spread the load.

So, let's see how to add more VRS to your environment.

- Open vSphere Web Client and select your vCenter;
- On the Configuration tab, select Replication Servers;

- Click on "Deploy new vSphere Replication Server from OVF";




- Click Browse to select your OVF and then click Next;

 BTW: the vSphere Replication Server OVF is the one ending with "AddOn";

- Give it a name and a location and click Next;

- Select the cluster where the VRS will be deployed and click Next;

- Review the details and click Next;

- Select a datastore to store the VRS and click Next;

- Provide the network information related to your environment and click Next;


- Review the details and click Finish to start the deployment;
Wait until the deployment finishes.

- Click on "Register a virtual machine as a vSphere Replication Server";


- Just browse until you find your recently created VRS and click OK;



Bada bing, bada boom.... you have a new vSphere Replication Server in your environment !!!


 
Obs: obviously you must have a vSphere Replication Management Server already deployed; I'm not covering that here. There are plenty of blog posts about it, but basically it's an OVF deployment: just follow the wizard and it's done.
 

Tuesday, July 4, 2017

vRealize Automation – Installation fails with FQDN for RabbitMQ enabled


This week my adventures led me to an implementation of vRealize Automation 7.3 for one of my clients.
It had been some time since I last installed vRA, so taking a look at the Installation Guide sounded like the right thing to do before starting anything. I soon noticed a new step, "Activate FQDN for RabbitMQ Before Installation" (pg 36), and as a good boy I enabled it as stated and started the installation through the wizard.

OK, I will stop here and save you from having to read it all.
DO NOT, I repeat, DO NOT perform any activity on the appliances before running the installation, as it could cause installation issues that are hard to troubleshoot later.

Now, if you still want to read what issues it caused me, here they are:

The first issue appeared when creating a self-signed certificate for the vRA appliance (yes, I was using a self-signed one): when I hit the "Save Generated Certificate" button the process started but never finished, leaving a loading message on the screen that never went away.


After 40 minutes I decided to close the browser and open it again; when I did, the wizard jumped me to the subsequent step, showing a green mark for the certificate step and allowing me to proceed.

I went through all the steps without a problem, even the Validation step, but when I started the installation it immediately failed with the error:
The following parameters: [ VraWebCertificateThumbprint, ] do not exists for command [ install-web ]

 
It was clear to me that the previous certificate step did not finish successfully, so I went back and tried to create the certificate again; like the first time, the process got stuck with the loading message.
This time I decided to reboot the appliance; when it came back up I could re-create the certificate (within seconds) and proceed with the installation.
But now it failed on the first installation step, "Configure Single Sign-On".

 
This time the logs showed that the application server would not start, with Identity Manager receiving a "404" when attempting to connect.

After an entire day of attempts, troubleshooting and log reading, I gave up and started fresh the next day.

This time I did not enable FQDN for RabbitMQ before the installation. Surprisingly, or not, the installation went smoothly from start to end.

Don't worry, you can enable FQDN for RabbitMQ after the installation (pg 119).

Obs: forgive the screenshot quality, I did not realize I might need them until I did.

See you next

Wednesday, June 21, 2017

vSphere Integrated Containers – Performance over the Limits - 1/2


After I talked about resource management within vSphere Integrated Containers, some questions arose immediately:

What happens if I try to create containers above the limits?

That’s what I’m about to demonstrate today, let’s take this journey together, shall we?

By default, the VCH endpoint VM has 1 vCPU and 2GB of memory, and that's what we'll work with.
With that said, the first VCH I'll create has a low CPU limit (only 100MHz);
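The creation itself is a regular vic-machine create with the --cpu flag added, roughly like this (a sketch: the target, credentials, thumbprint and port group names below are placeholders, so adapt them to your environment and use the vic-machine binary for your OS):

vic-machine-linux create --target vcenter.lab.local --user administrator@vsphere.local --name VCH-LowCPU --bridge-network vic-bridge --public-network vm-network --cpu 100 --thumbprint <vc-cert-thumbprint> --no-tlsverify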

Apparently the VCH creation went normally, but during the endpoint VM communication validation step some errors were reported, which look like a timeout issue;

If we take a look at vSphere, the endpoint VM takes a long time to boot up, just showing a banner on its console...
... eventually it boots up, but with some errors on the console; you might try, but even a simple "ping" does not work.
Looking further, we can see on the performance chart that the VM is entitled to only 100MHz, the same amount specified at VCH creation.
So during the boot process the endpoint VM demands more than it's entitled to, and that's when we see a high Ready time, meaning the VM is ready to run but cannot get scheduled on a physical ESXi CPU.
Now the timeout makes sense ; )


Let's try something different this time and create a VCH with a low memory limit (only 100MB);
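Same sketch as before, just swapping the CPU flag for the memory one (placeholder names again):

vic-machine-linux create --target vcenter.lab.local --user administrator@vsphere.local --name VCH-LowMem --bridge-network vic-bridge --public-network vm-network --memory 100 --thumbprint <vc-cert-thumbprint> --no-tlsverify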

This time the VCH creation fails with a more descriptive message: "Failed to power on appliance. The available Memory resources in the parent resource pool are insufficient for the operation".
As we can see, there's not enough memory to power on the endpoint VM, so the creation cannot proceed.

Well, it's clear that, at a minimum, we need to set the limits higher than the VCH endpoint VM configuration.

I think that's enough for one post; next I will cover what happens when the containers consume more than the limits.

Stay tuned !!

Tuesday, June 13, 2017

vSphere Integrated Containers – User Defined Network


A few weeks ago I talked about vSphere Integrated Containers networking: what the networks are used for, their syntax and how the traffic flows to and from the containers, but it was all from the point of view of vSphere administrators provisioning a virtual container host, VCH.

Developers, on the other hand, are used to creating their own networks, for several reasons: isolating containers from each other, creating a backend network for some application, or just getting service discovery outside of the default bridge network. These are called user-defined networks.

Let’s see how it works:
The standard deployment of VCH comes with a default bridge network;

When we create a container without any network specification, it's connected to the port group backing the bridge network, which was specified during VCH provisioning (in this case, "backend"), and gets an IP address from the 172.16.0.0/24 address space.
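An easy way to see that is to start a throwaway container with no network options and check its address (a sketch; the endpoint address is a placeholder for your VCH, and ip addr is just one way of checking from inside busybox):

docker -H <vch-endpoint>:2375 run --rm busybox ip addr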
 

Now, let’s create a user-defined network;
Obs: I'm using the --subnet option because I don't have a DHCP server listening on that segment.
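It's the usual docker network create (a sketch; the network name "mynet" is just an example, the subnet is the one used in this post):

docker -H <vch-endpoint>:2375 network create --subnet 10.10.10.0/24 mynet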



This time I will create another container, connected to the user-defined network I just created.
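Roughly like this (a sketch, reusing the example network name from the previous step):

docker -H <vch-endpoint>:2375 run --rm --net mynet busybox ip addr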


As expected, the container is connected to the same port group backing the bridge network but received an IP address from the range specified during the user-defined network creation (10.10.10.0/24).



My point here is that although they are connected to the same segment (port group), the different address spaces provide enough segregation between containers.

That's one of the reasons we recommend a dedicated segment for each VCH bridge network; otherwise different users could create user-defined networks with the same address space as each other, which might inadvertently allow access to each other's containers or cause an IP conflict.

See you next

Wednesday, June 7, 2017

vSphere Integrated Containers – Resource Manager

One of the many benefits of vSphere Integrated Containers (VIC) is its ability to control and manage resource allocation.
Let's compare VIC with a traditional container deployment to clarify what I mean by that.

With a traditional container deployment, you have to size your container host upfront, and we all know how easy it is to foresee adoption and future growth, right ?!
Inevitably you will end up in one of two situations: either you size your container host too small and in a few weeks or months it will be full and your developers will ask for another one, or you size it too big and the container host just sits there wasting resources that could be used somewhere else. Not efficient.
Let's be honest, neither of them is a good scenario.

VIC, on the other hand, approaches resource allocation in a different way:
first, when you create your virtual container host, VCH, you are not allocating resources, you are just defining its boundaries; think of it as a vSphere resource pool definition, which we have all known for years.

When you create your VCH it will show up in vCenter as a vApp (nothing really new).


By default, the VCH is created without any limitation; just edit the VCH and you will see it.

At this point, you are probably worried that your developers could consume ALL your resources.
Luckily VIC has the tools to solve that problem: during VCH creation you can specify limits for memory (in MB) and CPU (in MHz), just by adding the options --memory "size" and/or --cpu "amount".
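For example, something along these lines (a sketch: the limit values, names and networks are placeholders, and the usual target, credentials and thumbprint flags still apply):

vic-machine-linux create --target vcenter.lab.local --user administrator@vsphere.local --name VCH01 --bridge-network vic-bridge --public-network vm-network --cpu 5000 --memory 8192 --thumbprint <vc-cert-thumbprint> --no-tlsverify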


Now the limitation is applied to the vApp

It's also reported back to the developers:
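A quick way to check it from the developer side is a plain docker info against the VCH endpoint, which is where those limits surface for the Docker client (a sketch; the endpoint is a placeholder):

docker -H <vch-endpoint>:2375 info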



Well, that does not protect us from unexpected growth, does it?
But since a VCH is just like a resource pool, you can manually edit it to expand or shrink its limits without any impact or downtime to the running containers.
That's what I call an elastic solution !!!

What about the containers themselves ?

By default, they are created with 2 vCPUs and 2GB of RAM.

If you want, you can give them more or fewer resources: just add the options --memory "size" and/or --cpuset-cpus "number of vCPUs" when creating your container.
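For example (a sketch with placeholder values; remember that in VIC --cpuset-cpus is read as a vCPU count for the container VM, not as CPU pinning like in plain Docker):

docker -H <vch-endpoint>:2375 run -it --cpuset-cpus 4 -m 8g busybox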


Remember, since every container is a unique VM in vCenter, you can see that its allocation is properly set up:



Now you can size your container host like a boss !!!

Thursday, June 1, 2017

Top vBlog 2017


http://topvblog2017.questionpro.com/

Once again, Eric Siebert launched his popular VMware-related blog contest for 2017. We should give this man credit; despite all his personal challenges he made the effort to put it out there again.
Thanks, Eric !!!

When voting, try to keep in mind the following criteria:

  • Longevity – Anyone can start a blog but it requires dedication, time & effort to keep it going. Some bloggers start a blog only to have it fall by the wayside several months later. Things always come up in life but the good bloggers keep going regardless of what is happening in their lives.
  • Length – It’s easy to make a quick blog post without much content, nothing wrong with this as long as you have good content in the post that people will enjoy. But some bloggers post pretty long detailed posts which take a lot of time and effort to produce. The tip of the hat goes to these guys that burn the midnight oil trying to get you some great detailed information.
  • Frequency – Some bloggers post several times a week which provides readers with lots of content. This requires a lot of effort as bloggers have to come up with more content ideas to write about. Frequency ties into length, some do high frequency/low length, some do low frequency/high length, some do both. They’re all good and require a lot of time and effort on the bloggers part.
  • Quality – It all comes down to what's in the blog post, regardless of how often or how long the blog posts are. If after reading a blog post you come away having learned something you did not previously know and it benefits you in some way, then you know you are reading a quality post. Good quality is usually the result of original content; it's easy to re-hash something previously published elsewhere, the good bloggers come up with unique content or put their own unique spin on popular topics.
 
What are you waiting for? Show your appreciation for the bloggers that help you the most.

GO VOTE !!!

Ohhh yes, I'm there as well...


Let's see if I can do better this year ; )


 

Friday, May 19, 2017

vSphere Integrated Containers - Networking


At this point, you have probably already heard about vSphere Integrated Containers and how it solves many of the challenges of managing traditional containers at scale.

One of its key pillars is the implementation of Container-VMs, meaning that for each container you have a corresponding VM. This approach allows us to use vSphere Distributed Port Groups or NSX Logical Switches to secure and segregate network communication like never before.

Let’s examine what network options are available for VIC.

Public Network
Mandatory: YES
Syntax: --public-network “port_group”
Its main use is to pull container images from registries, like Docker Hub, and to enable access to container services through port mapping.
If no other optional network is specified during VCH creation, it will also handle communication from the VCH to vCenter and the ESXi hosts, as well as Docker API calls from clients creating and managing containers.

Bridged Network
Mandatory: YES
Syntax: --bridge-network “port_group”
It's the segment where the containers communicate with each other.
If the Management network is not specified, it will also handle communication between the VCH and the containers.
Best practice is to use a dedicated segment for each VCH.

Management Network
Mandatory: NO
Syntax: --management-network “port_group”
When specified, it will be used by the VCH to communicate with vCenter and the ESXi hosts, and for communication between the VCH and the containers.

Client Network
Mandatory: No
Syntax: --client-network “port_group”
When specified, it will be used to expose the Docker API service to clients so they can create and manage containers.

Containers Network
Mandatory: No
Syntax: --container-network “port_group:friendly_name”
When specified, it will expose container services directly on that network, bypassing the VCH altogether.
That's a great option that only VIC can provide, avoiding the single point of failure the VCH could become and also providing dedicated network bandwidth per container.
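Putting the options together, a VCH creation using all of these networks would look roughly like this (a sketch: the port group names are placeholders, and the remaining vic-machine flags, like target, credentials and thumbprint, are the usual ones for your environment):

vic-machine-linux create --target vcenter.lab.local --user administrator@vsphere.local --name VCH01 --public-network vm-network --bridge-network vic-bridge --management-network mgmt-pg --client-network client-pg --container-network prod-pg:routable --thumbprint <vc-cert-thumbprint> --no-tlsverify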

Let’s see all the communication possible within this solution.



1 – Developers create containers through the Client network leveraging Docker API calls
2 – The VCH pulls images from Docker Hub through the Public network
3 – The VCH, through the Management network, calls vCenter and the ESXi hosts to create Container-VMs
4 – Through the Public network, users access container services via port mapping
5 – The frontend container app accesses data from the backend database through the Bridge network
6 – Users connect directly to the containers' services through the Container network

If you want to learn more about the networking and advanced examples, please, check VIC documentation here.

Wednesday, April 26, 2017

vSAN HCL Warning

A few weeks ago I talked about identifying your controllers and installing the correct firmware and drivers for a vSAN environment.
Even if you followed my recommendations, vSAN Health Check may still show a warning for the vSAN cache tier controllers.


Don't be afraid: there's a bug that prevents Health Check from correctly identifying PCIe/NVMe controllers, so the alarm is triggered even with the certified driver and firmware installed.

It took me some time to find this information, so I thought I could save you some time too.
Here's KB2146676, which outlines this bug.

Obs: This bug has been fixed in vSAN 6.5

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and make their journey successful. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions.
