Friday, May 4, 2018

VMware Cloud on AWS – Test Drive


There's been a big buzz since last year's VMworld, when the partnership between VMware and Amazon was announced, creating what's being called VMware Cloud on AWS.
If you are like me, you are dying to get your hands on this beauty. Well, let me tell you: there's a Hands-on Lab about it.

HOLs are the fastest and easiest way to test drive the full technical capabilities of VMware products, including VMC on AWS.

As with any VMware HOL, there's a manual with instructions to guide you through the lab, but if you are more of a freestyle type of guy, just open up Chrome and it will redirect you to the Cloud Services page.


From there you can consume the VMware Cloud on AWS service with a single click.


The creation of your first SDDC could not be easier.


In less than 3 minutes, literally, my SDDC was ready and I could throw workloads on it.


It's a fully functional service; you can play with all its capabilities.


After playing with it, let's say you are convinced the solution fits your needs, but you are not sure how to start and size your environment.

The first approach would be "Start Small": get the smallest SDDC possible, which nowadays starts at 4 hosts, and increase the number of hosts as you need. Scale-out capability is part of VMC on AWS, and it takes just minutes to spin up a new host in your cluster.

The second approach is to get a sizing recommendation: just fill out a simple form with a few pieces of information, like VM count, storage size, vCPU/core ratio, IP profile…

 
…and BANG, the recommendation for your environment is presented to you.


I personally love the detailed storage consumption chart…

…along with the cluster breakdown information.



What else do we need?!


Friday, April 13, 2018

VMware's release storm




You probably woke up today thinking it was just another regular Friday, right?
During my morning e-mail check, I was surprised by the number of new product versions VMware has released; some are just bug fixes, while others contain amazing brand-new features.

I will highlight the ones I think are most relevant, but you can check all the details in each product's release notes.
Don't be scared by the amount of detail to read; instead, focus on the products you have in your environment today. I'm sure you will find some fix or new feature that will make your life easier!!!

Shall we begin?

The most amazing feature is the new out-of-the-box custom request form, which removes the need to wrap infrastructure and PaaS blueprints into XaaS blueprints.
Definitely a game changer. You might also enjoy the capability of deploying vSphere blueprints from OVF or OVA.

vRB is still closing the gap against Chargeback when it comes to vCD.
Speaking of which: what about an overage policy for allocation pool models, applying differential rates for vCPU and RAM usage? A killer use case for Service Providers!!
It also adds storage tier pricing based on storage policies, plus charging for network bandwidth usage and for data ingress and egress.
VMware Cloud on AWS got some new features that are worth checking out as well.

That's definitely a product you want on your tool belt. Debuting in this version are the install pre-checker, which validates your environment before starting the deployment, and content lifecycle management, which lets you manage the release of content such as blueprints and workflows across multiple environments.

vROps still focuses on continuous performance optimization and capacity management, as you can see from these enhancements:
- 30% footprint reduction when monitoring the same number of objects;
- predictive real-time analytics;
- software-license-aware workload placement;
- native cost visibility;
- a new quickstart page, customizable home page, enhanced vSAN dashboards and much more.

If you are looking for a central place to check the health of your entire SDDC, this management pack is made for you. Now with improved vSAN and vCenter alerts, agentless vRA health monitoring and a vROps sizing dashboard.

This management pack is great: it provides out-of-the-box vCenter self-healing workflows that act upon alerts… How great is that?!

Besides some bug fixes, it added support for vCloud Director 9.1.

Although it's in maintenance mode, it's still receiving fixes, and it's now compatible with vROps 6.7.
If you are looking for a longer-term solution, take a look at vRealize Network Insight.
 
You don't need to deploy dedicated instances for tenants anymore: multi-tenancy is now available in vRO, along with an updated web-based Clarity UI that brings centralized log views and workflow details directly into the monitoring client.

This minor release added support for Kubernetes 1.9.5 and Golang 1.9.4, along with a few fixes, of which I highlight:
- nodes are drained before they stop, minimizing downtime;
- fixes for unmounting Docker volumes;
- a fix for BOSH DNS issues.

vRealize Log Insight 4.6.0 | April 12th, 2018
Enhanced support for up to 15 vCenters per node; notifications if some component suddenly stops sending logs; and additional APIs for creating/deleting.

It's worth mentioning that vRCS will no longer work with vRA 7.4 or later, and its functionality is moving to vRealize Suite Lifecycle Manager. But if you are still on vRA 7.3, you should get its new version, which brings a lot of improvements and defect fixes, like the names of approvers now being recorded in pipeline executions, SLAs for manual approvals, and an out-of-the-box destroy action for deployments.

Happy Friday 13th

Friday, April 6, 2018

VMware Pivotal Container Service - PKS CLI

Finally, the VMware Pivotal Container Service command line series has come to an end.
Featured today: the PKS command line tool, which allows cloud admins to create, manage and delete Kubernetes clusters within PKS.

Let the fun begin!

*** Installing PKS CLI ***
- Download PKS CLI tool from Pivotal Network;

Obs: I'm focusing on the Linux version, but there are versions for other platforms as well.


- Once transferred to your system, make it executable;
Run: chmod +x pks-linux-amd64

- Move the pks cli to the bin directory;
Run: cp pks-linux-amd64 /usr/local/bin/pks

- You can test it and check its version by running the command below;
Run: pks --version

*** Connecting to PKS *** 
Now that we have everything set up it's time to have some fun.

Let's begin by logging in to our PKS system:
Run: pks login -a "UAA_URL" -u "user_id" -p "password" -k

Obs: If you don't know what the UAA_URL is or which user to connect with, go back and check the previous post.
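As a minimal worked example (every value here is hypothetical), a login against a UAA endpoint at api.pks.example.com would look like:
Run: pks login -a api.pks.example.com -u k8s-admin -p 'VMware1!' -k

The -k flag skips SSL certificate validation, which is handy in labs running self-signed certificates.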

*** Managing Kubernetes Cluster ***
- Creating a Kubernetes cluster;
Run: pks create-cluster "cluster_name" --external-hostname "address" --plan "plan_name"

Obs: external-hostname is the address from which the cluster's Kubernetes API is accessed; plans are defined as part of the Pivotal Container Service Tile.
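To make it concrete, assuming the tile defines a plan named "small" and DNS for k8s-api.example.com points at the master's address (both hypothetical values):
Run: pks create-cluster k8s-cluster-01 --external-hostname k8s-api.example.com --plan small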


- It's also easy to check all clusters in the system;
Run: pks clusters

- If you want to get a cluster's details;
Run: pks cluster "cluster_name"



- What about scaling out your cluster? With a single command, the platform takes care of everything on your behalf;
Run: pks resize "cluster_name" --num-nodes "X"

Obs: "X" is the desired number of worker nodes.
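For example, growing the hypothetical cluster created above to 5 worker nodes:
Run: pks resize k8s-cluster-01 --num-nodes 5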

- Finally, you can delete your cluster when you don't need it anymore;
Run: pks delete-cluster "cluster_name"
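One extra command worth knowing, so you can actually use the cluster with kubectl: pks get-credentials writes the cluster's credentials into your local kubeconfig. Treat the exact name and flags as an assumption for your PKS CLI version and confirm with pks --help:
Run: pks get-credentials k8s-cluster-01
Run: kubectl get nodes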

That's all for the PKS command line series. You are now empowered with all the tools required to deliver and manage Kubernetes clusters at the speed and agility the business demands.

Wednesday, March 28, 2018

VMware Pivotal Container Service – User Account and Authentication CLI

Continuing my series on command line options for managing VMware Pivotal Container Service (PKS), today I present the User Account and Authentication command line (UAAC).

The primary role of UAAC is to create, delete and manage users within the context of PKS, which means giving cloud admins the authority to create and manage Kubernetes clusters themselves, with the freedom and agility the business demands.

Let's see how easy it is:

*** Installing UAAC CLI ***
UAAC is installed with the use of gem, which lets you interact with RubyGems, so in order to use it we first need to install ruby and ruby-dev.
Obs: I'm using Ubuntu; if you are using another distribution, use the corresponding package manager.

- Install Ruby
Run: apt install ruby

- Install ruby-dev
Run: apt install ruby-dev

Now that the prerequisites are done, let's install UAAC
- Install UAAC
Run: gem install cf-uaac

To make sure UAAC has been installed successfully
- Testing UAAC installation
Run: uaac version


*** Connecting to PKS ***
With UAAC installed, the first thing we have to do is point it to our PKS target.

- Targeting PKS
Run: uaac target https://"UAA_URL":8443 --skip-ssl-validation

During Pivotal Container Service Tile configuration, we set up the UAA URL.

Once the target is configured, log in with the credentials needed to perform the actions you want.
Since I want to create users, I'm using admin.

- Login to UAA
Run: uaac token client get admin -s "password"

You can find the password as part of Pivotal Container Service Tile
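To double-check that the token was issued, uaac can display the current target and authentication context:
Run: uaac contexts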

*** Creating Users ***
Now it’s just a matter of adding the users

- create user
Run: uaac user add "user_id" --emails "e-mail" -p "password"

The final thing is to assign some privileges to the user;
- adjusting group membership
Run: uaac member add "group" "user_id"

Thinking about PKS cluster management, we have two main groups;
- pks.clusters.admin: allows the user to create and manage all clusters within the system;
- pks.clusters.manage: allows the user to create and manage only the clusters they own.
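Putting both commands together, here is a minimal sketch with hypothetical values, creating a user k8s-dev who can manage only the clusters they create:
Run: uaac user add k8s-dev --emails k8s-dev@example.com -p 'VMware1!'
Run: uaac member add pks.clusters.manage k8s-dev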

That's all I have for today's post; in the next one I will show you how to create Kubernetes clusters with the users we just created.

Stay tuned


Monday, March 5, 2018

VMware Pivotal Container Service - Bosh CLI

While the VMware Pivotal Container Service UI is great, there are a few things you still need to do on the command line (or through API requests), like troubleshooting.

Of course, you could just SSH into Ops Manager and run every command from there, but giving others access to such a crucial resource is far from ideal.
That's why I decided to create some basic tutorials on how to access and perform PKS tasks remotely through the CLI.

The tutorials will cover:
- Bosh CLI (this post);
- User Account and Authentication CLI (UAAC);
- PKS CLI.

The Bosh CLI is intended to manage Bosh resources, tasks and objects. In order for PKS to be able to instantiate VMs (K8s masters and nodes) on vCenter, it needs a broker with the vSphere-specific CPI; that's the Ops Director you first deploy in your PKS environment.



After you deploy it, you can connect to the Bosh Director service running within it and see the VMs and tasks it's managing.

*** Installing Bosh CLI ***

- Download bosh cli;   
  Run: wget https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-2.0.48-linux-amd64
 

- Make it executable;
  Run: chmod +x bosh-cli-2.0.48-linux-amd64

- Move the CLI to the bin directory;
  Run: cp bosh-cli-2.0.48-linux-amd64 /usr/local/bin/bosh


- You can test if it's installed properly;
  Run: bosh -v

*** Connecting to Bosh Director ***
Once you get the bosh CLI installed you can point it to your Bosh Director and start issuing commands.

If you are using a self-signed certificate, don’t forget to first download the root CA certificate
- Go to PKS Settings;

- On the Advanced option, download the Root CA Certificate;



- Create an environment alias for future reference;
  Run: bosh alias-env "alias" -e "Ops-Director" --ca-cert "CA_CERT_Path"

- Login with the desired credentials (for this purpose I’m using director);
  Run: bosh -e "alias" log-in

You can get the credentials from the Ops Director’s Credentials tab
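As a hedged end-to-end sketch, with every value hypothetical (an Ops Director at 10.0.0.3 and the root CA certificate saved to /root/root_ca_certificate):
Run: bosh alias-env pks -e 10.0.0.3 --ca-cert /root/root_ca_certificate
Run: bosh -e pks log-in
bosh will then prompt for the username (director) and the password taken from the Credentials tab.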



You are ready to go!! It's possible to create as many environments as you need; you just need to specify which environment the command will run against, like:
bosh -e "alias" tasks

But if you have a single environment, it's easier to set a system environment variable, and then you can omit the parameter:
export BOSH_ENVIRONMENT="alias"

*** Bosh CLI examples ***

Here are a few commands to get you started

Checking all tasks performed on the system;
Run: bosh tasks -ar

If you need details about a specific task;
Run: bosh task "ID"

To list all VMs provisioned by the system;
Run:  bosh vms

SSH into a specific VM without providing any credentials;
Run: bosh ssh -e "alias" -d "deployment_ID" "vm_Instance"

You can get the deployment ID and VM instance from the bosh vms command.
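For instance, if bosh vms listed a deployment named service-instance_1234 with a worker instance worker/0 (hypothetical names; yours will differ):
Run: bosh ssh -e pks -d service-instance_1234 worker/0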

That's it; stay tuned for the next basic PKS tutorials.


Thursday, March 1, 2018

Pivotal Container Service and NSX-T integration

I have to admit that since the launch of VMware Pivotal Container Service (PKS), I have been very anxious to start creating and managing production-grade Kubernetes clusters with it.
As soon as the product was released I just grabbed my copy and started playing with it without reading anything about it (yeah, I know); a few minutes later I realized there were some concepts I needed to grasp if I wanted to succeed.

The automation PKS provides is amazing and makes the platform very easy to consume, but if you don't know exactly what the input parameters are, there's a huge chance you will get yourself into trouble.

With that in mind, I decided to share what I learned about the network aspects of PKS, especially the NSX-T integration and how to set it up.

I'm not showing how to configure the NSX-T components themselves, like the T0 router, Logical Switches, etc., but how you consume them from PKS.

Once the Ops Manager OVA has been deployed, the first thing you need to configure is the Ops Manager Director Tile.


Right in the vCenter Config section, you will see an option to configure the integration with NSX-T.


LEAVE it at the Standard vCenter Networking option. I know you're anxious to start using PKS with NSX-T, but it's not there yet; in fact, this option exists to allow other Pivotal solutions, like PAS, to communicate with NSX-T.

Jumping to the Network section, you need to create the networks your components will be hooked to.
I created two networks:



- one for management components, like the Ops Director, PKS broker and Harbor;
- another for service components, like the Kubernetes masters, ETCD and node VMs.
The only difference between them is that on the service network you select the Service Network checkbox.
Don't forget to configure which vSphere network (port group or logical switch) the VMs will be connected to, the CIDR, and the other network parameters accordingly.

Once you are done with the Ops Director, it's time to configure the Pivotal Container Service Tile.


The Networking section is where you configure the PKS integration with NSX-T; just provide your NSX-T Manager hostname and credentials.


Scrolling down a little bit you will see the fields for the NSX-T integration details


1 – T0 Router ID: this one is easy. If you remember how PKS works when integrated with NSX-T: every time you create a new Kubernetes namespace, a new T1 router is created to segregate and secure that namespace's workloads; to allow communication, this T1 is connected to a T0, which is why PKS needs to know the T0 ID.

2 – IP Block ID: that's the range of IPs to be assigned to your pods. Through the NSX-T Container Plugin, those addresses are configured via the Container Network Interface (CNI).

On NSX, go to the DDI/IPAM section and create a new IP Block; the recommended CIDR is 172.16.0.0/16.


3 – Floating IP Pool ID: that's the range of IPs assigned to NSX-T load balancers when Kubernetes Services and Ingresses are created.

On NSX, go to Inventory/Groups and create a new IP Pool with the desired range of IPs.
No doubt there's still a lot to learn and understand, but I hope this post eases the burden of getting your PKS up and running a little.

See you


Friday, February 16, 2018

vSphere Integrated Containers and VMware NSX better together


The dynamic and agile nature of the container world constantly challenges those of us who need to secure our environments; it's clear manual operations are just not enough to keep up with the pace of innovation this new world brings.

But how can I protect my production container workloads in a dynamic and agile way?
VMware NSX has the answer.

Leveraging Security Groups and dynamic membership, you can create rules that match specific vSphere objects, which is a perfect match for vSphere Integrated Containers and its container-vm construct: traffic is allowed or blocked on demand whenever a new workload is created or deleted, providing the agility developers love without giving up the security infrastructure guys need.

Let me give you a couple of use cases:

- Protect containers based on name
VIC allows you to expose a container service directly on the network with the use of container networks (I just covered them in another post), so you can protect them by allowing only certain services based on the container-vm name, like allowing HTTP access only to container-vms whose names start with web (check the video below for a quick demo).

- Container-to-container communication
You could also use container-vm names to create rules between containers: for example, only container-vms starting with app could communicate with database container-vms (starting with db).

- Protect tenant containers
I might be pushing it a little with this one, it's not a real tenant construct, OK, but it serves my point.
Imagine two distinct developers or projects: you can provision a VCH for each of them leveraging distinct name prefixes/suffixes (check my post about it if you are not sure). That way you can create a security boundary based on the prefixes, where container-vms sharing a prefix can communicate freely with each other but not with the others, providing isolation and security between projects.

You guys are cleverer than me, and I'm sure you can come up with new and innovative ideas for using NSX to protect containers, so tell us about it: leave your comment below.



Monday, January 22, 2018

vSphere Integrated Containers – VCH Wizard


With the increase in user adoption of vSphere Integrated Containers, it became clear that a new consumption model for Virtual Container Hosts (VCHs) was needed. There was nothing wrong with the VIC Engine Bundle command line, but because of VIC's many powerful features, the VCH creation command line string can get just too long.

In order to provide an easier, faster and more agile initial setup, VIC 1.3 brought a new VCH creation/deletion wizard!!!


This wizard is part of the new VIC plugin for the vCenter HTML5 UI, so don't forget to install/upgrade the plugin to version 1.3 before you start using it.

A new “+ New Virtual Container Host” action is available on the Virtual Container Hosts tab, where you can initiate the creation of your VCHs at any time.

 
Once started, the wizard will guide you through all the sections related to the VCH, with the specifics of each section shown on the right.

Some sections also provide advanced parameters; just expand the section to configure them, otherwise default values will be used.

 
In the Storage section, a good practice is to always have one volume store named “default”, eliminating the need to explicitly specify datastore names when using container volumes.
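A quick sketch of the effect, assuming a hypothetical VCH at vch01.example.com and a second volume store labeled fastssd: with a “default” volume store in place, the first command just works, while otherwise you would have to name a volume store explicitly every time:
Run: docker -H vch01.example.com:2376 --tls volume create mydata
Run: docker -H vch01.example.com:2376 --tls volume create --opt VolumeStore=fastssd mydata2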


The Network section is where the most interesting things are; don't forget to hit Advanced to see them all.

 
There you can find the options to configure Container networks, firewall behavior and optional VCH networks, like management and client networks.

There's also an entire section dedicated to protecting your VCH with TLS.


You can also grant the required privileges to the operations user on demand.


Double-check the details on the Summary page; if everything is fine, just hit Finish.



One nice touch: during the creation of the VCH you can watch the log live; just expand the VCH details.



If you are a fan of the VIC Engine Bundle, don't worry: the script is still available and you can still use it to create VCHs.

