Monday, September 18, 2017

vSphere Integrated Containers – container network firewall

One of the unique and amazing features of vSphere Integrated Containers (VIC) is its ability to expose container services directly on a network. Traffic does not need to pass through the container host (no port mapping), each container gets full network throughput, and an outage at the container host does NOT cause an outage of the container service itself.
This capability is enabled through the Container Network option.

On a traditional Docker implementation, you can just pass the -P option and all of a container's exposed ports will be published. While that is convenient, it also raises security concerns: you might publish ports and services you are unaware of, potentially increasing your attack surface.

With that in mind, VMware enhanced the security and control of container services with a new security feature, the container network firewall, available starting with VIC 1.2.

This new feature comes with 5 levels of security trust, as follows:

  • Closed: no traffic comes in or out of the container interface;
  • Open: all traffic is permitted; it allows the use of the -P option during container creation;
  • Outbound: only outbound connections are permitted; good for containers that consume services but do not provide any;
  • Published: only connections to published ports are permitted; you need to explicitly specify which ports will be permitted during container creation; Ex: docker run -d -p 80 nginx
  • Peers: only containers on the same "peer" interface are permitted to communicate with each other. To establish peers, you need to provide a range of IPs to the container network during VCH creation (--container-network-ip-range).

By default, the container network firewall behavior is Published, which is why the -P option might suddenly stop working after you upgrade to VIC 1.2.

To control the container firewall behavior, you need to specify the trust level during VCH creation:
--container-network "PortGroup":Internet --container-network-firewall "PortGroup":open
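Putting it together, a deployment that opens the firewall on a container network might look like the sketch below. The target, credentials, port group name, and IP range are placeholders for illustration, not values from this environment:

```shell
# Sketch: VCH deployment with a container network whose firewall is set
# to "open". "vic-pg" is a hypothetical vSphere port group name.
# The IP range also enables the Peers trust level if you choose it.
vic-machine-linux create \
    --target 'vcenter.example.com/Datacenter' \
    --user 'administrator@vsphere.local' \
    --name vch01 \
    --container-network 'vic-pg':Internet \
    --container-network-ip-range 'vic-pg':192.168.100.10-192.168.100.50 \
    --container-network-firewall 'vic-pg':open
```

Use vic-machine-windows.exe or vic-machine-darwin if you run the installer from another OS.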

Now you have all the control you need over your container services.

Tuesday, September 5, 2017

vSphere Integrated Containers – Protecting VCH 2/2

This is post two of protecting your Virtual Container Host (VCH); if you have not read post one yet, I really encourage you to check it out before proceeding.

As promised, I will now show how we can secure our VCH leveraging two-way authentication with TLS certificates.

vSphere Integrated Containers (VIC) provides a self-signed certificate capability: during VCH creation, it creates its own CA in order to create and sign the server and client certificates.
Bear in mind that self-signed certificates provide all the required security and encryption, but they lack aspects such as expiration policies, intermediate certificate authorities, and so on.

*** Certificate Base Authentication and Traffic Encryption ***
Unlike the previous methods, users MUST provide a client certificate in order to authenticate to the VCH endpoint any time they want to issue Docker commands; if you are using a self-signed or untrusted certificate, they also need the CA certificate that signed it.
Besides authentication, the traffic between the client station and the VCH is encrypted as well (the Docker API service listens on port 2376).
This is the method recommended for production environments.

You just need to provide the --tls-cname "name" option during VCH creation.
This name is the common name that will be added to the certificate and the name your users will use to connect to the endpoint.
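A minimal creation command could look like the sketch below; the target, credentials, and common name are hypothetical placeholders:

```shell
# Sketch: VCH creation with full TLS. VIC generates its own CA, then
# server and client certificates whose common name is set by --tls-cname.
vic-machine-linux create \
    --target 'vcenter.example.com/Datacenter' \
    --user 'administrator@vsphere.local' \
    --name vch01 \
    --tls-cname vch01.example.com
```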

VIC will create a folder with the VCH name in the current directory and store all the certificates within it.
These are the self-signed certificates generated:
ca.pem    -> send to users
cert.pem -> send to users
key.pem  -> send to users

If you want to specify a different location to store the certificates once created, use the --tls-cert-path "path" option.

Now, if I try to connect to my recently created VCH just by pointing to its endpoint... think again...

You, as a Cloud Admin, must deliver the required certificates, cert.pem and key.pem, generated during VCH creation, to the users who will connect to your endpoint; remember to send ca.pem as well (in case it's a self-signed certificate).

The users will then copy the certificates to their client stations. Personally, I like setting up some environment variables to tell the Docker client to enable TLS verification and where my certificates can be found.

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="path_to_certificates"
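With the variables set, the Docker client picks up the certificates automatically; a connection check might look like this (vch01.example.com stands in for whatever name you passed to --tls-cname):

```shell
# Sketch: connect to the TLS-secured endpoint on port 2376.
# The client reads the certificates from DOCKER_CERT_PATH.
docker -H vch01.example.com:2376 --tlsverify info
```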

As you can see, now I can securely connect to my VCH endpoint.

You might be asking: OK, but what about the use of custom/trusted certificates?

YES !!! VIC allows the use of them as well.

Make sure your certificate:
  • is an X.509 certificate
  • includes the KeyEncipherment key usage
  • includes the DigitalSignature key usage
  • includes the KeyAgreement key usage
  • includes the ServerAuth extended key usage

*** Leveraging Custom certificates ***
First, make sure you have your valid certificate, signed by a trusted CA, in a folder you have access to.

Besides the --tls-cname "name" option, you now need to provide a few other options during VCH creation:
--tls-ca "file": the location of the CA certificate.
--tls-server-cert "file": the location of the custom server certificate.
--tls-server-key "file": the location of the private key that corresponds to the server certificate.
--tls-cert-path "path": the location where the generated client certificates will be saved.
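Combined, the creation command could look like the sketch below; all file names, paths, and the target are illustrative placeholders:

```shell
# Sketch: VCH creation using a custom, CA-signed server certificate.
# VIC still generates the client certificates, saved to --tls-cert-path.
vic-machine-linux create \
    --target 'vcenter.example.com/Datacenter' \
    --user 'administrator@vsphere.local' \
    --name vch01 \
    --tls-cname vch01.example.com \
    --tls-ca ca.pem \
    --tls-server-cert vch01.example.com.crt \
    --tls-server-key vch01.example.com.key \
    --tls-cert-path /home/admin/vch01-certs
```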

As we can see, the VCH has loaded the server certificate in order to generate the client certificate, which users will need in order to connect to it.
Again, deliver the client certificates to your users, don't forget to adjust the environment to point to the new certificates, and you are ready to go.

As a last tip, do not delete your VCH's certificate folders; they might be useful if you need to redeploy a VCH, since reusing the certificates means you don't need to send new ones to your users.

I hope by now you are empowered with all the knowledge to protect your environment.

See you next time.

Thursday, August 31, 2017

vSphere Integrated Containers – Protecting VCH 1/2

Client/server certificates have long been leveraged to secure access to Docker API hosts in traditional Docker implementations.
When it comes to protecting Virtual Container Hosts (VCHs), it's no different: vSphere Integrated Containers (VIC) provides 3 categories, as follows:

  • Certificate Base Authentication and Traffic Encryption
  • No Authentication and Traffic Encryption 
  • No Authentication and No Traffic Encryption
All of them can be specified during VCH creation.

Note: the examples you will see below are simplified deployments just to facilitate understanding; VCH creation has many more deployment options.

Let’s start with the simplest one;

*** No Authentication and No Traffic Encryption ***
With this method, the user does not have to provide any certificate to authenticate to the VCH endpoint, and the traffic between them is not encrypted.
This method is NOT recommended for production or untrusted environments, but I understand its appeal when it comes to quick demos and POCs.
One last thing: in this case, the Docker API service listens on port 2375.

Just provide the --no-tls option during VCH creation.
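For example, a demo deployment might look like the sketch below; the target, credentials, and IP address are placeholders:

```shell
# Sketch: VCH with no authentication and no encryption (demos/POCs only).
vic-machine-linux create \
    --target 'vcenter.example.com/Datacenter' \
    --user 'administrator@vsphere.local' \
    --name vch-demo \
    --no-tls

# Anyone who knows the endpoint's IP can then issue Docker commands
# against port 2375, with no certificates at all:
docker -H 192.168.1.10:2375 info
```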

After creation, you can access it just by pointing to its API endpoint; in fact, anyone can do that, as long as they know its IP address. You see now why it's not secure or recommended?!?

Let's try a slightly better method now:

*** No Authentication and Traffic Encryption ***
Like the previous method, the user does not have to provide any certificate to authenticate to the VCH endpoint, but now the traffic between the client and the VCH is encrypted.
Again, since it does not provide any authentication mechanism, it's not recommended for production.
With the traffic being encrypted, the Docker API service now listens on port 2376.

You just need to provide the --no-tlsverify option during VCH creation.
Even though no authentication is required, VIC will still create certificates, which are used to encrypt the traffic; you don't need to worry about them.
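A sketch of such a deployment, again with placeholder names:

```shell
# Sketch: VCH with traffic encryption but no client authentication.
vic-machine-linux create \
    --target 'vcenter.example.com/Datacenter' \
    --user 'administrator@vsphere.local' \
    --name vch-encrypted \
    --no-tlsverify

# Clients connect over TLS without presenting a certificate:
docker -H vch-encrypted.example.com:2376 --tls info
```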

As I said before, the endpoint is no longer listening on port 2375; you will need to use port 2376.
Again, anyone can just point to the endpoint's IP and start issuing Docker commands; no authentication is required.

I think that's enough for one post.
The next one is when things get really interesting: let's protect our VCH with two-way authentication.

Stay tuned.

Friday, August 18, 2017

vSphere Integrated Containers – Performance over the Limits 2/2

I'm back to finish the post series about resource management within vSphere Integrated Containers. In the last one I discussed what happens to a Virtual Container Host when its limits are set too low; now let's dive into what happens to your containers in similar situations, shall we?!?

** If you are in a hurry, you can skip to the Conclusion section by the end of this post ; )

To run these tests I set up an environment with two VCHs:

- Endpoint 1: no limits

- Endpoint 2: limits of 500MHz for CPU and 500MB for memory

The first question is: what happens when I create a container with more resources than it is entitled to (over the limits of your VCH)?

On the VCH with limits, I'll create a busybox container assigned 2 vCPUs (each vCPU gets 1.8GHz) and 4GB of RAM.
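The creation command could look roughly like the sketch below, assuming the standard Docker resource flags as exposed by the VIC endpoint; the endpoint address is a placeholder:

```shell
# Sketch: busybox container VM sized well above the VCH limits
# of 500MHz CPU / 500MB memory.
docker -H vch-limited.example.com:2376 --tls \
    run -d --cpus 2 -m 4g busybox top
```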

As you can see above, the container was created successfully, without any message or warning.

So, the inevitable question: how will containers perform in such a scenario?

To answer that question, I used a container image built specifically for stress tests, called progrium/stress.

*** CPU Utilization Test ***

First, I want to test the impact on CPU utilization by running a CPU stress test on 1 vCPU for 5 minutes.
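The test run is along these lines (progrium/stress wraps the classic stress tool, so its workers and duration are set via stress's own flags):

```shell
# Sketch: spin up 1 CPU-bound worker for 300 seconds (5 minutes).
docker run progrium/stress --cpu 1 --timeout 300s
```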

Since containers in VIC are represented by VMs, vSphere-specific monitoring tools, like vROps, are perfect for monitoring their performance. In this case, I used the built-in vCenter performance chart.

We can notice that its Entitlement is less than the VCH CPU limit.
The stress test Demand is higher than the Entitlement; that's why there's an insane Ready time, meaning the process is ready to run but is waiting to be scheduled by the ESXi host.

Whatever your container is running will take a long time to complete.

I ran the same test on the VCH without limits.

As we can see, the Demand was lower than what the container is entitled to, the Entitlement here is 5x higher than in the previous test, and the Ready time is very low.
This container has no constraints at all.

*** Memory Utilization Test ***

Now let's see the impact on memory utilization.
I ran a memory stress test consuming 512MB for 5 minutes.
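With the same progrium/stress image, the memory test looks roughly like this:

```shell
# Sketch: 1 memory-bound worker allocating 512MB for 300 seconds.
docker run progrium/stress --vm 1 --vm-bytes 512M --timeout 300s
```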

As we can see on the graph, the container VM spent an average of 40% of its time waiting for memory to become available (the Latency metric). Because the container VM cannot access the host's physical memory beyond the VCH limits, we see higher ballooning activity, which is slow compared to physical memory access.

On the other hand, running the same test on the VCH with no limits:

We can see that the container VM could use all the memory available to it, and because there are no constraints, we see no balloon activity or memory latency.

*** Conclusion ***

- Container VMs behave exactly like traditional VMs and are affected by the same ESXi host contention mechanisms;
- VIC does not prevent you from creating containers above VCH limits;
- There's no warning or message if you create containers above VCH limits;
- If container CPU consumption is higher than the VCH limits, CPU cycles will be queued by the ESXi host until they can be scheduled, and high Ready time will be observed;
- If container memory consumption is higher than the VCH limits, the ESXi host will restrict the container VM's access to physical memory and memory management techniques, like ballooning and swapping, will be utilized.
Either way, CPU or memory utilization over the limits will decrease the performance of your container and the application within.
Monitoring and capacity/performance methodologies are your friends; keep them close to you!!

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills from medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and be successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions.
