Monday, October 9, 2017

vSphere Integrated Containers – Name Convention


During the past several years, infrastructure administrators have come up with different methods to organize their vCenters, some with folder structures, some with fancy VM naming conventions and prefixes.

When we think about the vSphere Integrated Containers (VIC) consumption model, containers as virtual machines (containerVMs), we realize it somewhat disrupts this organizational model, mainly because VM creation is no longer under the administrator's control, but also because developers are now creating and deleting their own containerVMs inside vCenter without even noticing its existence; this new dynamic nature breaks the standards and controls applied so far.

When a container is created by a developer within VIC, its respective containerVM, by default, is created in vCenter using the nomenclature CONTAINER NAME + CONTAINER ID.



For an infrastructure administrator, these VM names are not very helpful, especially if you have controls and policies in place which depend on them.
Tools like vRealize Operations, VMware NSX or chargeback tools might be ineffective when trying to manage and control those containerVMs.

To address these challenges, VIC 1.2.1 introduces the --container-name-convention option, which allows you to specify a prefix that will be applied to every containerVM during its creation, giving administrators back the control they require.

This convention is determined during Virtual Container Host (VCH) creation. When creating your VCH there are many options available depending on your environment and your needs; I'm just focusing on --container-name-convention today, but if you are interested, check this link for the full list of options.

First, let’s check the prefix option based on the container name.
--container-name-convention "prefix"-{name}


This option enforces that each containerVM name will be made up of the prefix you specify (in this example, DEV) + the CONTAINER NAME.
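To make this concrete, here is a minimal sketch of a VCH creation command using this convention (the target, credentials and VCH name below are placeholders for your environment, and I'm skipping the other options for brevity):

vic-machine-linux create \
  --target vcenter.corp.local \
  --user administrator@vsphere.local \
  --name VCH-DEV \
  --container-name-convention DEV-{name} \
  --no-tlsverify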


The second option is the prefix based on the container ID. 
--container-name-convention "prefix"-{id}


This option enforces that each containerVM name will be made up of the prefix you specify (in this example, PROD) + the CONTAINER ID.
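For example (hypothetical values), with DEV-{name} a container named web would show up in the vCenter inventory as DEV-web, while with PROD-{id} a container whose ID starts with 3a5fc1d9 would show up as PROD-3a5fc1d9.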


As you can see, it's a win-win situation: from a developer's point of view, nothing changes, they can still see their container names and IDs just as before; from a vSphere infrastructure point of view, it creates a standard that will be honored during containerVM creation.




Monday, October 2, 2017

vSphere Integrated Containers – Developer Workflow



There’s no doubt vSphere Integrated Containers (VIC) is getting more and more powerful with each version; just check the several enhancements and capabilities that were released with version 1.2 and you will see what I'm talking about.

Today, I want to walk you through a developer's workflow where we create an image, store it in a registry and then run it in production, in just 6 easy steps, all within VIC 1.2.

Note: I have already deployed a Virtual Container Host (VCH), and I'm leveraging Harbor, which is also part of VIC, as my registry. If you think you need some help with those steps, let me know and I'll write another post explaining how I did it.

Let's do it:

Step 1 – Run my Docker Container Host
Since the VIC engine does not support docker build and docker push (yet), I will make use of a Docker Container Host (DCH), a native Docker host provisioned directly from the VCH.


With the use of port mapping, DCH's services will be available through my VCH on port 12375.

docker run -d -p 12375:2375 vmware/dch-photon:1.13

Wait until it pulls the image from Docker Hub and starts it.
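If you want to double-check it before moving on, a quick docker ps against the VCH should show the DCH container with its port mapping:

docker ps
# expect a vmware/dch-photon:1.13 container with 12375->2375/tcp in the PORTS column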
 
Step 2 – Set my Docker client to the new DCH
From this point on, I want all my commands to run against the recently created DCH, so I point my Docker client to it. Not really a required step, but it makes things easier.

export DOCKER_HOST=192.168.100.160:12375

Step 3 – Build my image
It’s time to build my new image.
I’m using a simple Dockerfile, which takes the nginx:latest image and updates it.
But you can be as creative as Docker permits; remember, DCH is 100% Docker API compatible.
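My Dockerfile itself isn't the point of this post, but a minimal sketch along these lines matches what I described above (the nginx image is Debian-based, so apt-get handles the updates):

FROM nginx:latest
RUN apt-get update && apt-get upgrade -y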

For easier identification, I’m building it and tagging it with a meaningful name.
You can retag it with something else later as well.

docker build -t registry.corp.local/justait/nginx:patched . 

It will take some time to pull the image and apply the updates, don't you worry.

Step 4 – Push to the registry
My image is ready!!
Now I can push it to my registry, where everyone will be able to consume it.

docker push registry.corp.local/justait/nginx:patched 


I also pushed the unpatched version of nginx, just in case someone wants to use it too.
As you can see, both images are now stored in my registry under the justait project.
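If you are wondering how the unpatched one got there, it's just a matter of retagging the upstream image and pushing it; something like this (the :latest tag is my own choice, not a requirement):

docker pull nginx:latest
docker tag nginx:latest registry.corp.local/justait/nginx:latest
docker push registry.corp.local/justait/nginx:latest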


Step 5 – Set my Docker client to my VCH
I want all my production containers to run on VIC, where they can benefit from vSphere features like DRS and HA.
So, let's set my Docker client back to the VCH:

export DOCKER_HOST=192.168.100.160:2375

Step 6 – Run your container
Now it’s just a matter of running the container based on the new image.
But I won't simply run the container; I want to use the unique VIC feature of exposing the container's service directly on the network (Container Network).

docker run -d --net routable registry.corp.local/justait/nginx:patched
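Since the container now has its own address on the container network, you can grab that IP straight from the Docker client and point your browser (or curl) at it directly, no port mapping involved (replace the container ID with yours):

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>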

Here it is, from building an image to running a container in production, and all we needed was a VCH.

Writing a blog in your own free time sometimes leaves you without imagination; this topic was a suggestion from one of my readers. If you want to see something here, please leave a comment or contact me on Twitter, I really appreciate those ideas ;)
 

Monday, September 18, 2017

vSphere Integrated Containers – container network firewall




One of the unique and amazing features of vSphere Integrated Containers (VIC) is its ability to expose container services directly on a network, which means the traffic does not need to pass through the container host (port mapping): you get full network throughput per container, and outages at the container host DO NOT cause any outage to the container service itself.
This capability is made possible through the use of the Container Network option.

On a traditional Docker implementation, you can just pass the -P option and all of the container’s exposed ports will be published. While that's great, it also raises security concerns, since publishing ports and services that you are unaware of might, potentially, increase your attack surface.

With that in mind, VMware enhanced the security and control of container services with a new security feature, the container network firewall, available starting with VIC 1.2.

This new feature comes with 5 levels of trust, as follows:

  • Closed: no traffic comes in or out of the container interface;
  • Open: all traffic is permitted; it allows the use of the -P option during container creation;
  • Outbound: only outbound connections are permitted; good for containers consuming services but not providing any;
  • Published: only connections to published ports are permitted; you need to explicitly state which port will be permitted during container creation; e.g.: docker run -d -p 80 nginx
  • Peers: only containers on the same “peer” interface are permitted to communicate with each other; to establish peers, you need to provide a range of IPs to the container network during VCH creation (--container-network-ip-range), as shown in the sketch after this list.
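For the peers level specifically, a minimal sketch of the relevant VCH creation options might look like this (the target, VCH name, port group name and IP range are placeholders for your environment):

vic-machine-linux create \
  --target vcenter.corp.local \
  --name VCH-PEERS \
  --container-network "PortGroup":routable \
  --container-network-ip-range "PortGroup":192.168.100.0/24 \
  --container-network-firewall "PortGroup":peers \
  --no-tlsverify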

By default, the behavior of the container network firewall is Published, which is why the -P option might suddenly stop working after you upgrade to VIC 1.2.

To control the container firewall behavior, you need to specify the trust level during VCH creation:
--container-network "PortGroup":Internet --container-network-firewall "PortGroup":open

Now you have all the control you need over your containers' services.



Tuesday, September 5, 2017

vSphere Integrated Containers – Protecting VCH 2/2


This is post two on protecting your Virtual Container Host (VCH); if you have not checked post one, I really encourage you to read it before proceeding.

As promised, now I will show how we can secure our VCH leveraging two-way authentication with TLS certificates.

vSphere Integrated Containers (VIC) provides a self-signed certificate capability where, during VCH creation, it creates its own CA in order to create and sign the server and client certificates.
Bear in mind that self-signed certificates provide all the security and encryption required, but they don’t provide aspects such as expiration, intermediate certificate authorities and so on.

*** Certificate-Based Authentication and Traffic Encryption ***
Unlike the previous methods, users MUST now provide a client certificate in order to authenticate to the VCH endpoint any time they want to issue Docker commands; if you are using a self-signed or untrusted certificate, you also need to provide the CA certificate which signed it.
Besides authentication, the traffic between the client station and the VCH is encrypted as well (the Docker API service listens on port 2376).
This is the method recommended for production environments.

You just need to provide the --tls-cname "name" option during VCH creation.
This name is the common name that will be added to the certificate, and it is the name your users will use to connect to the endpoint.
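A minimal creation sketch would then be (the target, VCH name and common name below are placeholders for your environment):

vic-machine-linux create \
  --target vcenter.corp.local \
  --name VCH01 \
  --tls-cname vch01.corp.local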


VIC will create a folder with the VCH name in the current directory, and all certificates will be stored within it.
These are the self-signed certificates generated:
ca.pem    -> send to users
ca-key.pem
cert.pem -> send to users
key.pem  -> send to users
server-cert.pem
server-key.pem

If you want to specify a different location to store the certificates once created, use the option --tls-cert-path "path".

Now, if I try to connect to my recently created VCH by just pointing at its endpoint... think again...

 
You, as a Cloud Admin, must deliver the required certificates, cert.pem and key.pem, that were generated during VCH creation to the users who will connect to your endpoint; remember to send the ca.pem as well (in case it’s a self-signed certificate).

The users will then copy the certificates to their client station. Personally, I like setting up some variables to tell the Docker client to enable TLS verification and where my certificates can be found.

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="path_to_certificates"

As you can see now, I can securely connect to my VCH endpoint.
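If you prefer not to touch the environment, the same certificates can also be passed explicitly on each call using the standard Docker TLS flags (the endpoint below is the one from my example):

docker -H 192.168.100.160:2376 --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem info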


You might be asking: OK, but what about the use of custom/trusted certificates?

YES!!! VIC allows the use of them as well.

Make sure your certificate is an X.509 certificate with the following usage attributes:
  • KeyEncipherment
  • DigitalSignature
  • KeyAgreement
  • ServerAuth

*** Leveraging Custom Certificates ***
First, make sure you have your valid certificate, signed by a trusted CA, in a folder you have access to.

Besides the --tls-cname "name" option, you now need to provide a few other options during VCH creation, as sketched after this list:
--tls-ca "file": the location of the CA certificate.
--tls-server-cert "file": the location of the custom server certificate.
--tls-server-key "file": the location of the private key that generated the server certificate.
--tls-cert-path "path": the location to save your client certificates.
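Putting it all together, a creation sketch with custom certificates might look like this (all names, files and paths below are placeholders for your environment):

vic-machine-linux create \
  --target vcenter.corp.local \
  --name VCH01 \
  --tls-cname vch01.corp.local \
  --tls-ca ca.pem \
  --tls-server-cert vch01.crt \
  --tls-server-key vch01.key \
  --tls-cert-path /home/admin/vch01-certs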

As we can see, the VCH has loaded the server certificate in order to generate the client certificate, which the users will need in order to connect to it.
Again, deliver the client certificates to your users, don’t forget to adjust the environment to point to the new certificates, and you are ready to go.

As a last tip, do not delete your VCH's folder and certificates; they might be useful if you need to redeploy the VCH, since reusing the certificates means you don’t need to send new ones to your users.

I hope by now you are empowered with all the knowledge to protect your environment.

See you next time!
