Thursday, August 31, 2017

vSphere Integrated Containers – Protecting VCH 1/2

Client/server certificates have long been used to secure access to the Docker API on any traditional Docker implementation.
When it comes to protecting Virtual Container Hosts (VCHs), it's no different; vSphere Integrated Containers (VIC) provides three categories, as follows:

  • Certificate-based authentication and traffic encryption
  • No authentication, with traffic encryption
  • No authentication and no traffic encryption

All of them can be specified during VCH creation.

Note: the examples you will see below are simplified deployments, just to facilitate understanding. VCH creation has many more deployment options.

Let's start with the simplest one:

*** No Authentication and No Traffic Encryption ***
With this method, the user does not have to provide any certificate to authenticate to the VCH endpoint, and the traffic between them is not encrypted.
This method is NOT recommended for production or untrusted environments, but I understand its appeal when it comes to quick demos and PoCs.
One last thing: in this case, the Docker API service listens on port 2375.

Just provide the --no-tls option during VCH creation.
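As a sketch, the creation command could look like this; the target address, credentials, network, and datastore names below are placeholders, not values from this post:

```shell
# Deploy a VCH with no authentication and no traffic encryption
# (demo/PoC only). All names and credentials are placeholders.
vic-machine-linux create \
  --target 'vcenter.example.com/Datacenter' \
  --user 'administrator@vsphere.local' \
  --name vch-demo \
  --bridge-network vch-bridge \
  --image-store datastore1 \
  --no-tls
```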

After creation, you can access it just by pointing to its API endpoint. In fact, anyone can do that, as long as they know its IP address; you see now why it's not secure or recommended ?!?
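For example, any Docker client that can reach the endpoint can issue commands over plain HTTP; the IP below is a placeholder:

```shell
# No certificates, no TLS flags: plain HTTP on port 2375.
docker -H 192.168.1.100:2375 info
docker -H 192.168.1.100:2375 run -d busybox sleep 3600
```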

Let's try a slightly better method now:

*** No Authentication and Traffic Encryption ***
Like the previous one, the user does not have to provide any certificate to authenticate to the VCH endpoint, but now the traffic between the client and the VCH is encrypted.
Again, since it does not provide any authentication mechanism, it's not recommended for production.
With the traffic encrypted, the Docker API service now listens on port 2376.

You just need to provide the --no-tlsverify option during VCH creation.
Even though no authentication is required, VIC will still create certificates, which are used to encrypt the traffic. But you don't need to worry about them.
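A sketch of the same deployment with server-side TLS only; as before, target, credentials, and names are placeholders:

```shell
# Server certificate is auto-generated; clients are not verified.
vic-machine-linux create \
  --target 'vcenter.example.com/Datacenter' \
  --user 'administrator@vsphere.local' \
  --name vch-demo \
  --bridge-network vch-bridge \
  --image-store datastore1 \
  --no-tlsverify
```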

As I said before, the endpoint is no longer listening on port 2375; you will need to use port 2376.
Again, anyone can just point to the endpoint's IP and start issuing Docker commands; no authentication is required.
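For example (placeholder IP): the client enables TLS for encryption but presents no certificate and does not validate the server's:

```shell
# --tls encrypts the connection without client authentication.
docker --tls -H 192.168.1.100:2376 info
```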

I think that's enough for one post.
The next one is when things get really interesting: let's protect our VCH with two-way authentication.

Stay tuned.

Friday, August 18, 2017

vSphere Integrated Containers – Performance over the Limits 2/2

I'm back to finish the post series about resource management within vSphere Integrated Containers. Last time I discussed what happens to the Virtual Container Host when its limits are set too low; now let's dive into what happens to your containers in similar situations, shall we ?!?

** If you are in a hurry, you can skip to the Conclusion section at the end of this post ; )

To run these tests I set up an environment with two VCHs.
- Endpoint 1 (no limits)

- Endpoint 2 (limits of 500 MHz for CPU and 500 MB for memory)
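As a sketch, the limited VCH could be deployed using vic-machine's --cpu and --memory limit flags (values in MHz and MB); all names and credentials below are placeholders:

```shell
# VCH resource pool capped at 500 MHz CPU and 500 MB memory.
vic-machine-linux create \
  --target 'vcenter.example.com/Datacenter' \
  --user 'administrator@vsphere.local' \
  --name vch-limited \
  --bridge-network vch-bridge \
  --image-store datastore1 \
  --cpu 500 \
  --memory 500 \
  --no-tlsverify
```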

The first question is: what happens when I create a container with more resources than it is entitled to (over the limits of your VCH)?

On the VCH with limits, I'll create a busybox container assigned 2 vCPUs (each vCPU gets 1.8 GHz) and 4 GB of RAM.

As you can see above, the container was created successfully, without any message or warning.
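A sketch of that container creation, assuming VIC's mapping of the standard Docker flags to the container VM's virtual hardware (--cpuset-cpus as the vCPU count, -m as VM memory); the endpoint IP is a placeholder:

```shell
# Request 2 vCPUs and 4 GB, well above the VCH's 500 MHz / 500 MB limits.
docker --tls -H 192.168.1.100:2376 run -d \
  --cpuset-cpus 2 -m 4g busybox sleep 3600
```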

So, the inevitable question: how will containers perform in such a scenario?

To answer that question, I used a container built specifically for stress tests, called progrium/stress.

*** CPU Utilization Test ***

First I want to test the impact on CPU utilization, running a CPU stress on 1 vCPU for 5 minutes.
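The CPU run could look like this; the endpoint IP is a placeholder, and progrium/stress accepts the standard stress tool flags:

```shell
# One CPU worker spinning for 5 minutes.
docker --tls -H 192.168.1.100:2376 run --rm \
  progrium/stress --cpu 1 --timeout 300s
```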

Since containers in VIC are represented by VMs, vSphere-specific monitoring tools, like vROps, are perfect for monitoring their performance. In this case, I used the built-in vCenter performance chart.

We can notice that its Entitlement is less than the VCH CPU limit.
The stress test's Demand is higher than the Entitlement, which is why there's an insane Ready time, meaning the process is ready to run but is waiting to be scheduled by the ESXi host.

Whatever your container is running will take a long time to finish.

I ran the same test on a VCH without limits:

As we can see, the Demand was lower than what the container is entitled to; the Entitlement here is also 5x higher than in the previous test, and the Ready time is very low.
This means the container has no constraints at all.

*** Memory Utilization Test ***

Now let's see the impact on memory utilization.
I ran a memory stress to consume 512 MB for 5 minutes.
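The memory run could look like this (placeholder endpoint IP):

```shell
# One worker allocating and touching 512 MB for 5 minutes.
docker --tls -H 192.168.1.100:2376 run --rm \
  progrium/stress --vm 1 --vm-bytes 512M --timeout 300s
```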

As we can see on the graph, the container VM spent an average of 40% of its time waiting for memory to become available (the Latency metric). Because the VCH limits prevent the container VM from accessing the host's physical memory, we see higher Ballooning activity, which is slow compared to physical memory access.

On the other hand, running the same test on the VCH with no limits:

We can see that the container VM could use all the memory available to it, and because there are no constraints, we see no Balloon activity or memory Latency.

*** Conclusion ***

- Container VMs behave exactly like traditional VMs and are affected by the same ESXi host contention mechanisms;
- VIC does not prevent you from creating containers above the VCH limits;
- There's no warning or message if you create containers above the VCH limits;
- If container CPU consumption is higher than the VCH limits, CPU cycles will be queued by the ESXi host until they can be scheduled, and high Ready time will be observed;
- If container memory consumption is higher than the VCH limits, the ESXi host will prevent the container VM from accessing physical memory, and memory management techniques such as ballooning and swapping will be used.
Either way, excess CPU or memory utilization will decrease the performance of your container and the application within it.
Monitoring and a capacity/performance methodology are your friends; keep them close to you !!

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and make their journey successful. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies, or opinions.
