Monday, October 9, 2017

vSphere Integrated Containers – Name Convention


Over the past several years, infrastructure administrators have come up with different methods to organize their vCenters: some with folder structures, some with fancy VM nomenclature and prefixes.

When we think about the vSphere Integrated Containers (VIC) consumption model, containers as virtual machines (containerVMs), we realize it somewhat disrupts this organizational model. VM creation is no longer under the administrators' control, and developers are now creating and deleting their own containerVMs inside vCenter without even noticing its existence, breaking the standards and controls applied so far.

When a container is created by a developer within VIC, its respective containerVM is, by default, created in vCenter using the nomenclature CONTAINER NAME + CONTAINER ID.



For an infrastructure administrator, these VM names are not very helpful, especially if you have controls and policies in place which depend on them.
Tools like vRealize Operations, VMware NSX or chargeback tools might be ineffective when trying to manage and control those containerVMs.

To address these challenges, VIC 1.2.1 introduces the --container-name-convention option, which allows you to specify a prefix that will be applied to every containerVM during its creation, giving administrators back the control they require.

This convention is determined during Virtual Container Host (VCH) creation. When creating your VCH there are many options available, depending on your environment and your needs; I'm focusing only on --container-name-convention today, but if you are interested, check this link for the full list of options.

First, let’s check the prefix option based on the container name.
--container-name-convention “prefix”-{name}


This option enforces that each containerVM name will be made up of the prefix you specify (in this example, DEV) plus the CONTAINER NAME.
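As a hedged sketch, a VCH created along these lines would apply that DEV prefix (target, credentials and compute resource values are placeholders, and other required options are omitted):

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --compute-resource Cluster01 \
  --name VCH-DEV \
  --container-name-convention DEV-{name}

# a container started as "docker run --name web nginx" should then appear in vCenter as DEV-web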


The second option is the prefix based on the container ID. 
--container-name-convention “prefix”-{id}


This option enforces that each containerVM name will be made up of the prefix you specify (in this example, PROD) plus the CONTAINER ID.


As you can see, it's a win-win situation: from a developer's point of view, nothing changes, they can still see their container names and IDs just as before; from a vSphere infrastructure point of view, it creates a standard that will be honored during containerVM creation.




Monday, October 2, 2017

vSphere Integrated Containers – Developer Workflow



There's no doubt vSphere Integrated Containers (VIC) is getting more and more powerful with each version; just check the several enhancements and capabilities released with version 1.2 and you will see what I'm talking about.

Today, I want to walk you through a developer's workflow where we create an image, store it in a registry and then run it in production, in just 6 easy steps, all within VIC 1.2.

Note: I have already deployed a Virtual Container Host (VCH), and I'm leveraging Harbor, which is also part of VIC, as my registry. If you need some help with those steps, let me know and I'll write another post explaining how I did it.

Let's do it;

Step 1 – Run my Docker Container Host
Since the VIC engine does not (yet) support docker build and docker push, I will make use of a Docker Container Host (DCH), a native Docker host provisioned directly from the VCH.


With the use of port mapping, the DCH's services will be available through my VCH on port 12375.

docker run -d -p 12375:2375 vmware/dch-photon:1.13

Wait until it pulls the image from Docker Hub and starts it.
 
Step 2 – Point my docker client to the newly created DCH
From this point on, I want all my commands to run against the recently created DCH, so I point my docker client to it. Not really a required step, but it makes things easier.

export DOCKER_HOST=192.168.100.160:12375
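If you want to double-check that the client is now talking to the DCH rather than the VCH (a quick sanity check, not a required step):

docker info    # the Server section should now describe the Docker engine running inside the DCH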

Step 3 – Build my image
It’s time to build my new image.
I'm using a simple Dockerfile, which takes the nginx:latest image and updates it.
But you can be as creative as Docker permits; remember, DCH is 100% Docker API compatible.
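For reference, a minimal Dockerfile along these lines would do the trick; the exact update commands are my assumption of what "patched" means here:

# start from the official nginx image and apply the latest OS package updates
FROM nginx:latest
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*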

For easier identification, I'm building it and tagging it with a meaningful name.
You can re-tag it as something else later as well.

docker build -t registry.corp.local/justait/nginx:patched . 

It will take some time to pull the image and apply the updates, don't worry.

Step 4 – Push to registry
My image is ready!! 
Now I can push it to my registry, where everyone will be able to consume it.

docker push registry.corp.local/justait/nginx:patched 


I also pushed the unpatched version of nginx, just in case someone wants to use it too.
As you can see, both images are now stored in my registry under the justait project.


Step 5 – Set my docker client to my VCH
I want all my production containers to run on VIC, where they can benefit from vSphere features like DRS and HA.
So, let's set my docker client back to the VCH:

export DOCKER_HOST=192.168.100.160:2375

Step 6 – Run your container
It’s just a matter of running the container based on the new image.
But I won't simply run the container; I want to use the unique VIC feature of exposing the container's service directly on the network (Container Network).

docker run -d --net routable registry.corp.local/justait/nginx:patched
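If you want to confirm where the service is exposed, the container's IP on the Container Network can be pulled with docker ps / docker inspect (the network name routable matches the one used above; <container-name> is whatever name Docker assigned):

docker ps
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>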

Here it is, from building an image to running a container in production, and all we needed was a VCH.

Writing a blog in your own free time sometimes leaves you without imagination. This topic was a suggestion from one of my readers; if you want to see something here, please leave a comment or contact me on Twitter, I really appreciate those ideas ;)
 

Monday, September 18, 2017

vSphere Integrated Containers – container network firewall




One of the unique and amazing features of vSphere Integrated Containers (VIC) is its ability to expose container services directly on a network, which means the traffic does not need to pass through the container host (port mapping): you get full network throughput per container, and outages at the container host DO NOT cause any outage to the container service itself.
This capability is possible through the use of the Container Network option.

On a traditional Docker implementation, you could just pass the -P option and all of a container's exposed ports would be published. While that's great, it also raises security concerns: you might be publishing ports and services that you are unaware of, potentially increasing your attack surface.

With that in mind, VMware enhanced the security and control of container services with a new security feature, the container network firewall, available starting with VIC 1.2.

This new feature comes with 5 levels of security trust, as follows:

  • Closed: no traffic comes in or out of the container interface;
  • Open: all traffic is permitted; it allows the use of the -P option during container creation;
  • Outbound: only outbound connections are permitted; good for containers consuming services but not providing any;
  • Published: only connections to published ports are permitted; you need to explicitly tell which port will be permitted during container creation. Ex: docker run -d -p 80 nginx
  • Peers: only containers on the same “peer” interface are permitted to communicate with each other. To establish peers you need to provide a range of IPs to the container network during VCH creation (--container-network-ip-range).

By default the container network firewall behavior is Published; that's why the -P option might suddenly stop working after you upgrade to VIC 1.2.

To control the container firewall behavior you need to specify the trust level during VCH creation:
--container-network “PortGroup”:Internet --container-network-firewall "PortGroup":open
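As a hedged sketch, a VCH created along these lines would make the -P behavior available again on that container network (target, credentials and port group names are placeholders, and other required options are omitted):

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --name VCH-OPEN \
  --container-network "PortGroup":Internet \
  --container-network-firewall "PortGroup":open

docker run -d -P --net Internet nginx    # with the open trust level, -P is permitted again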

Now you have all the control you need on your container’s services.



Tuesday, September 5, 2017

vSphere Integrated Containers – Protecting VCH 2/2


This is post two on protecting your Virtual Container Host (VCH); if you have not checked post one, I really encourage you to do so before proceeding.

As promised, now I will show how we can secure our VCH leveraging two-way authentication with TLS certificates.

vSphere Integrated Containers (VIC) provides a self-signed certificate capability, where, during VCH creation, it creates its own CA in order to create and sign the server and client certificates.
Bear in mind that self-signed certificates provide all the security and encryption required, but they don't cover aspects such as expiration, intermediate certificate authorities and so on.

*** Certificate Base Authentication and Traffic Encryption ***
Unlike the previous methods, users MUST provide a client certificate in order to authenticate to the VCH endpoint any time they want to issue Docker commands; if you are using a self-signed or untrusted certificate, they also need the CA certificate which signed them.
Besides authentication, the traffic between the client station and the VCH is encrypted as well (the Docker API service listens on port 2376).
This is the method recommended for production environments.

You just need to provide the --tls-cname “name” option during VCH creation.
This name is the common name that will be added to the certificate and how your users will connect to the endpoint.
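A minimal sketch of that creation step, assuming placeholder values for target, credentials and compute resource (other required options are omitted):

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --compute-resource Cluster01 \
  --name VCH01 \
  --tls-cname vch01.corp.local    # common name users will use to reach the endpoint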


VIC will create a folder with the VCH name in the current directory and all certificates will be stored within it. 
These are the self-signed certificates generated:
ca.pem    -> send to users
ca-key.pem
cert.pem -> send to users
key.pem  -> send to users
server-cert.pem
server-key.pem

If you want to specify a different location to store the certificates once created, use the --tls-cert-path “path” option.

Now, if I try to connect to my recently created VCH by just pointing to its endpoint... think again...

 
You, as a Cloud Admin, must deliver to the users who will connect to your endpoint the required certificates, cert.pem and key.pem, that were generated during VCH creation; remember to send the ca.pem as well (in case it's a self-signed certificate).

The users will then copy the certificates to their client stations. Personally, I like setting up some variables to tell the Docker client that I need to enable TLS verification and where my certificates can be found.

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=”path_to_certificates”
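If you prefer not to export variables, the same can be achieved with explicit flags on each call; the endpoint name below is a placeholder matching the --tls-cname used at creation:

docker -H vch01.corp.local:2376 --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem info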

As you can see, I can now securely connect to my VCH endpoint.


You might be asking: OK, but what about the use of custom/trusted certificates?

YES !!! VIC allows the use of them as well.

Make sure your certificate is an X.509 certificate with the following usage attributes:
  • KeyEncipherment
  • DigitalSignature
  • KeyAgreement
  • ServerAuth

*** Leveraging Custom certificates ***
First, make sure you have your valid certificate, signed by a trusted CA, in a folder you have access to.

Besides the --tls-cname “name” option, you now need to provide a few other options during VCH creation:
--tls-ca “file”: the location of the CA certificate.
--tls-server-cert “file”: the location of the custom server certificate.
--tls-server-key “file”: the location of the private key that corresponds to the server certificate.
--tls-cert-path “path”: the location where your client certificates will be saved.
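Putting it all together, a hedged example of a VCH creation with custom certificates (all host names, file names and paths below are placeholders, and other required options are omitted):

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --name VCH01 \
  --tls-cname vch01.corp.local \
  --tls-ca ca.pem \
  --tls-server-cert vch01.crt \
  --tls-server-key vch01.key \
  --tls-cert-path /home/admin/vch01-certs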

As we can see, the VCH has loaded the server certificate in order to generate the client certificate, which the users will need in order to connect to it.
Again, deliver the client certificates to your users, don't forget to adjust the environment to point to the new certificates, and you are ready to go.

As a last tip, do not delete your VCH's folders and certificates; they might be useful if you need to redeploy a VCH, and reusing the certificates means you don't need to send new certificates to your users.

I hope by now you are empowered with all the knowledge to protect your environment.

See you next

Thursday, August 31, 2017

vSphere Integrated Containers – Protecting VCH 1/2


Client/server certificates have long been leveraged to secure access to Docker API hosts in any traditional Docker implementation.
When it comes to protecting Virtual Container Hosts (VCH) it's no different; vSphere Integrated Containers (VIC) provides 3 categories, as follows:

  • Certificate Base Authentication and Traffic Encryption
  • No Authentication and Traffic Encryption 
  • No Authentication and No Traffic Encryption
All of them can be specified during VCH creation.

Obs: the examples you will see below are simplified deployments, just to facilitate understanding. VCH creation has many deployment options.

Let’s start with the simplest one;

*** No Authentication and No Traffic Encryption ***
With this method, the user does not have to provide any certificate to authenticate to the VCH endpoint, and the traffic between them is not encrypted.
This method is NOT recommended for production or non-trusted environments, but I understand its simplicity when it comes to quick demos and POCs.
One last thing: in this case, the Docker API service listens on port 2375.

Just provide the --no-tls option during VCH creation.
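A minimal sketch of such a deployment, with placeholder values and other required options omitted:

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --name VCH-DEMO \
  --no-tls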

After creation, you can access it by just pointing to its API endpoint; in fact, anyone can do that, as long as they know its IP address. You see now why it's not secure or recommended?!
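For example, anyone on the network could simply run (the endpoint IP is a placeholder):

docker -H 192.168.100.201:2375 info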




Let's try a slightly better method now:

*** No Authentication and Traffic Encryption ***
Like the previous one, the user does not have to provide any certificate to authenticate to the VCH endpoint, but now the traffic between the client and the VCH is encrypted.
Again, since it does not provide any authentication mechanism, it's not recommended for production.
With the traffic being encrypted, the Docker API service now listens on port 2376.

You just need to provide the --no-tlsverify option during VCH creation.
Even though no authentication is required, VIC will create certificates, which will be used to encrypt the traffic. But you don't need to worry about it.
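A minimal sketch, again with placeholder values and other required options omitted:

vic-machine create \
  --target vcenter.corp.local --user administrator@vsphere.local \
  --name VCH-DEMO \
  --no-tlsverify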


As I said before, the endpoint is no longer listening on port 2375; you will need to use port 2376.
Again, anyone can just point to the endpoint's IP and start issuing Docker commands; no authentication is required.
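For example (the endpoint IP is a placeholder); the --tls flag tells the Docker client to use TLS without verifying the remote certificate:

docker -H 192.168.100.201:2376 --tls info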



I think that's enough for one post.
The next one is when things get really interesting: let's protect our VCH with two-way authentication.

Stay tuned.

Friday, August 18, 2017

vSphere Integrated Containers – Performance over the Limits 2/2


I'm back to finish the post series about resource management within vSphere Integrated Containers. In the last one I discussed what happens to the Virtual Container Host when the limits are set too low; now let's dive into what happens to your containers in similar situations, shall we?!

** If you are in a hurry, you can skip to the Conclusion section at the end of this post ; )

To run these tests I set up an environment with two VCHs.
- Endpoint: 192.168.100.124 (NO LIMITS)


- Endpoint: 192.168.100.126 (limits of 500MHz for CPU and 500MB for memory)



The first question is: what happens when I create a container with more resources than what I'm entitled to (over the limits of your VCH)?

On the VCH with limits, I'll create a busybox container assigned 2 vCPUs (each vCPU gets 1.8GHz) and 4GB of RAM.
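For reference, a hedged reconstruction of how such a container could be requested; to my understanding, in VIC -m sets the containerVM memory and the --cpuset-cpus value is interpreted as the number of vCPUs, but treat the exact flags as an assumption:

docker run -itd -m 4g --cpuset-cpus 2 busybox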


As you can see above, the container was created successfully, without any message or warning.

So, the inevitable question: how will containers perform in such a scenario?

To answer that question, I used a specific container made for running stress tests, called progrium/stress.

*** CPU Utilization Test ***

First, I want to test the impact on CPU utilization, running a CPU stress test on 1 vCPU for 5 minutes.
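A hedged reconstruction of the command behind the screenshot, using standard stress flags sized for the 1 vCPU / 5 minutes described above:

docker run --rm progrium/stress --cpu 1 --timeout 300s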


Since containers in VIC are represented by VMs, vSphere-specific monitoring tools, like vROps, are perfect for monitoring their performance. In this case, I used the built-in vCenter performance chart.

We can see that its Entitlement is less than the VCH CPU limit.
The stress test Demand is higher than the Entitlement; that's why there's an insane Ready time, meaning the process is ready to run but is waiting to be scheduled by the ESXi host.

Whatever your container is running will take a long time to complete.

I ran the same test on a VCH without limits;


As we can see, the Demand was lower than what the container is Entitled to; also, the Entitlement here is 5x higher than in the previous test, and the Ready time is very low too.
This means the container has no constraints at all.



 
*** Memory Utilization Test ***

Now let's see the impact on memory utilization.
I ran this memory stress test to consume 512MB for 5 minutes.
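Again, a hedged reconstruction of the command behind the chart, using standard stress flags sized for the 512MB / 5 minutes described above:

docker run --rm progrium/stress --vm 1 --vm-bytes 512M --timeout 300s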


As we can see on the graph, the containerVM spent an average of 40% of its time waiting for memory to become available (the Latency metric). Because the containerVM cannot access the host's physical memory beyond the VCH limits, we see higher ballooning utilization, which is slow compared to physical memory access.

On the other hand, running the same test on the VCH with no limits;


We can see that the containerVM could use all the memory available to it, and because there are no constraints, we see no balloon activity or Latency.

  
******* Conclusion ******

- containerVMs will behave exactly like traditional VMs and will be affected by the same ESXi host contention mechanisms;
- VIC does not prevent you from creating containers above VCH limits;
- There's no warning or message if you create containers above VCH limits;
- If a container's CPU consumption is higher than the VCH limits, its CPU demand will be queued by the ESXi host until it can be scheduled, and high Ready time will be observed;
- If a container's memory consumption is higher than the VCH limits, the ESXi host will restrict the containerVM's access to physical memory, and memory management techniques like ballooning and swapping will be used.
Either way, exceeding CPU or memory limits will decrease the performance of your container and the application within.
Monitoring and a capacity/performance methodology are your friends, keep them close to you !!
 

Wednesday, July 26, 2017

Additional vSphere Replication Servers

If you have been following my Demystifying vSphere Replication posts, you will remember that to achieve the maximum number of VMs replicated through vSphere Replication you need to provision additional vSphere Replication Servers (VRS) to spread the load.

So, let's see how to add more VRS to your environment.

- Open vSphere Web Client and select your vCenter;
- On the Configuration tab, select Replication Servers;

- Click on, "Deploy new vSphere Replication Server from OVF";




- Click Browse to select your OVF and then click Next;

 BTW: the vSphere Replication Server OVF is the one ending with "AddOn";

- Give it a name and a location and click Next;

- Select the cluster where the VRS will be deployed and click Next;

- Review the details and click Next;

- Select a datastore to store the VRS and click Next;

- Provide the network information related to your environment and click Next;


- Review the details and click Finish to start the deployment;
Wait until the deployment finishes.

- Click on "Register a virtual machine as a vSphere Replication Server";


- Just  browse until you find your recently created VRS and click OK;



Bada bing, bada boom... you have a new vSphere Replication Server in your environment !!!


 
Obs: obviously you must already have a vSphere Replication Management Server implemented; I'm not covering that here. There are a bunch of blog posts about it, but basically it's an OVF deployment, just follow the wizard and it's done.
 

Tuesday, July 4, 2017

vRealize Automation – Installation fails with FQDN for RabbitMQ enabled


This week my adventures led me to the implementation of vRealize Automation 7.3 for one of my clients.
It had been some time since I last installed vRA, so taking a look at the Installation Guide seemed like the right thing to do before starting anything. Soon I noticed a new step, “Activate FQDN for RabbitMQ Before Installation” (pg 36); like a good boy, I enabled it as stated and started the installation through the wizard.

OK, I will stop here and save you from having to read it all.
DO NOT, I repeat, DO NOT perform any activity on the appliances before running the installation, as it could cause installation issues which will be hard to troubleshoot later.

Now, if you still want to read about the issues it caused me, here they are:

The first issue appeared when creating a self-signed certificate for the vRA appliance (yes, I was using a self-signed one). When I hit the “Save Generated Certificate” button, the process started but never finished, with a loading message on the screen that never went away.


After 40 minutes I decided to close the browser and open it again; when it reopened, the wizard jumped me to the subsequent step, showing a green mark on the certificate step and allowing me to proceed.

I went through all the steps without a problem, even the Validation step, but when I started the installation it immediately failed with the error:
The following parameters: [ VraWebCertificateThumbprint, ] do not exists for command [ install-web ]

 
It was clear to me that the previous certificate step had not concluded successfully, so I went back and tried to create it again; like the first time, the process got stuck with the loading message.
This time I decided to reboot the appliance; when it came back up I could re-create the certificate (within seconds) and proceed with the installation step.
But then it failed on the first installation step, “Configure Single Sign-On”.

 
This time the logs showed the Application Server wouldn't start, with Identity Manager receiving a "404" when attempting to connect.

After an entire day of attempts, troubleshooting and log reading, I gave up and started fresh the next day.

This time, without enabling FQDN for RabbitMQ before the installation, surprisingly (or not), the installation was smooth from start to end.

Don't worry, you can enable FQDN for RabbitMQ after the installation (pg 119).

Obs: forgive the screenshot quality, I did not realize I might need them until I did.

See you next

Who am I

I'm an IT specialist with over 15 years of experience, working from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and making them successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions.
