Friday, May 25, 2018

vSphere Integrated Containers 1.4 - Upgrade 1/3



On May 15th, VMware released a new version of its Docker-compatible container product, vSphere Integrated Containers 1.4. As always, it comes with enhancements that include, but are not limited to, support for vSphere 6.7 and ROBO deployments, affinity rules (more on that in a future post), and some management portal improvements.

But today I want to cover the upgrade process. I will break it into 3 phases for easier consumption;

- Upgrade vSphere Integrated Containers Appliance (This post)
- Upgrade vSphere Client Plug-in
- Upgrade Virtual Container Hosts (VCH)

To be honest, it's not really an in-place upgrade; the process involves deploying a new VIC appliance and copying the relevant information from the previous appliance to the new one, including the management portal and registry configuration and data.

The good thing about it is that it leaves you with an easy rollback option, since the previous appliance is kept intact. In case of any problem, you can just get rid of the new appliance and power the previous one back on, and everything will still be there just the way it was before the upgrade.

My environment consists of a VIC 1.3 appliance and one VCH connected to it;
obs: you can upgrade from any 1.2.x or later release
 
I also have a project (cnameetupsp) with a few images, which have been scanned for vulnerabilities and signed with Content Trust (another post I owe you guys).


Let's start by downloading and deploying VIC 1.4; since it's a new appliance, give it its own IP address and hostname.
The OVA deployment process is pretty standard across VMware's solutions, so I'm not going through it, but if you still have doubts the product's documentation is your friend.

Important: Make sure to use the Flex-based vSphere Web Client to deploy it, even if you are using vSphere 6.7. The HTML5 Web Client is not ready for VIC yet; although the deployment may succeed, the configuration required for VIC to work might not be implemented properly.
 
Once the appliance is deployed, access it through SSH (make sure to enable SSH during the OVA deployment).
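Something like this will do; the hostname below is just a placeholder from my lab, so use the FQDN or IP you assigned to the new appliance;
Run: ssh root@vic14.mylab.local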

Important 2: do NOT go to the Getting Started page of the new appliance, because it will initialize the services for a fresh setup and cause the upgrade to fail. If you have already done it, just deploy a new appliance ; )

Once on the new appliance console, navigate to the upgrade folder and run the upgrade script;
Run: cd /etc/vmware/upgrade
Run: ./upgrade.sh

The script will prompt you for information about the vCenter Server where the previous appliance is provisioned. If you have an external PSC, provide its information as well; otherwise just leave it blank.

Now you need to provide the information about your previous VIC appliance. Make sure the appliance is powered on and has SSH enabled; if it doesn't, power off the appliance, enable Permit Root Login within its vApp Options, and power it back on.

There you go, just sit back, relax and watch the upgrade process running;
During the process, the relevant files are copied over and the old appliance is shut down.
If you need more information or for troubleshooting purposes, the log is saved at /var/log/vmware/upgrade.log
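If you want to follow the upgrade in real time from a second SSH session, something like this does the trick;
Run: tail -f /var/log/vmware/upgrade.log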

Once it's done, just access the new appliance and log in.
As we can see, the upgrade was successful and my VCH is connected to the new VIC appliance.

My projects and images are there as well.


The only downside is that my images came back unsigned; that's because the new appliance comes with a different certificate than my previous one.
So, if you are using Content Trust, you will have to plan accordingly and re-sign your images after the upgrade so users will be able to pull and run them again.
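Just as a rough sketch of what re-signing with Docker Content Trust looks like (the registry address and image name below are placeholders from my lab, and the notary URL assumes the registry's default port of 4443);
Run: export DOCKER_CONTENT_TRUST=1
Run: export DOCKER_CONTENT_TRUST_SERVER=https://vic14.mylab.local:4443
Run: docker push vic14.mylab.local/cnameetupsp/myapp:1.0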

That’s all for today, stay tuned for the remaining upgrade phases; vSphere Plugin and Virtual Container Hosts.

Friday, May 4, 2018

VMware Cloud on AWS – Test Drive


There's been a big buzz since VMworld last year, when the partnership between VMware and Amazon was announced, creating what's being called VMware Cloud on AWS.
If you are like me, you are dying to get your hands on this beauty. Well, let me tell you this: there's a Hands-on Lab about it.

HOLs are the fastest and easiest way to test drive the full technical capabilities of VMware products, including VMC on AWS.

As with any VMware HOL, there's a manual with instructions that will guide you through the lab, but if you are more the freestyle type, just open up Chrome and it will redirect you to the Cloud Services page.


From there you can consume the VMware Cloud on AWS service with a single click.


The creation of your first SDDC could not be easier.


In less than 3 minutes, literally, my SDDC was ready and I could throw workloads on it.


It’s a fully functional service, you can play with all capabilities;


After playing with it, let's say you are convinced that the solution fits your needs, but you are not sure how to start and size your environment.

The first approach would be to "Start Small": get the smallest SDDC possible, which nowadays starts at 4 hosts, and increase the number of hosts as you need. Scale-out capability is part of VMC on AWS, and it takes just minutes to spin up a new host into your cluster.

Just fill out a simple form with a few pieces of information, like VM count, storage size, vCPU/core ratio, IP profile….

 
…. and BANG, the recommendation for your environment is presented to you.


I personally love the detailed storage consumption chart,

along with the cluster breakdown information.



What else do we need ?!?!?


Friday, April 13, 2018

VMware's release storm




You probably woke up today thinking it was just another regular Friday, right?
During my morning e-mail check, I was surprised by the number of new product versions VMware has released; some are just bug fixes, while others bring amazing brand-new features.

I will highlight the ones I think are more relevant, but you can check all the details in each product's release notes.
Don't be put off by the amount of detail to read; focus on the products you have in your environment today instead. I'm sure you will find some fix or new feature that will certainly make your life easier!!!

Shall we begin?

The most amazing feature is the new out-of-the-box custom request form, which removes the need to wrap infrastructure and PaaS blueprints into XaaS blueprints.
Definitely a game changer. You might also enjoy the capability of deploying vSphere blueprints from OVF or OVA.

vRB is still closing the gap with Chargeback when it comes to vCD.
Speaking of which, what about the overage policy for allocation pool models, applying differential rates for vCPU and RAM usage? A killer use case for Service Providers!!
It also adds storage tier pricing based on storage policy, plus charging for network bandwidth usage and data ingress and egress.
VMware Cloud on AWS got some new features that are worth checking out as well.

That's definitely a product you want on your tool belt. Debuting in this version is the install pre-checker, which validates your environment before starting the deployment, and content lifecycle management, which lets you manage the release of content such as blueprints and workflows across multiple environments.

vROps is still focused on continuous performance optimization and capacity management, as you can see from these enhancements;
- 30% footprint reduction when monitoring the same number of objects;
- predictive real-time analytics;
- software-license-aware workload placements;
- native cost visibility;
- new quickstart page, customized homepage, enhanced vSAN dashboards and much more.

If you are looking for a central place to check the health of your entire SDDC, this management pack is made for you. Now with improved vSAN and vCenter alerts, agentless vRA health monitoring and a vROps sizing dashboard.

This management pack is great: it provides out-of-the-box vCenter self-healing workflows that act upon alerts…. How great is that?!?!

Besides some bug fixes, it adds support for vCloud Director 9.1.

Although it's in maintenance mode, it's still receiving fixes, and it's now compatible with vROps 6.7.
If you are looking for a longer-term solution, take a look at vRealize Network Insight.
 
You don't need to deploy dedicated instances for tenants anymore; multi-tenancy is now available in vRO, along with an updated web-based Clarity UI that brings centralized log views and workflow details directly into the monitoring client.

This minor release adds support for Kubernetes 1.9.5 and Golang 1.9.4, along with a few fixes, from which I highlight;
- nodes are drained before they stop, minimizing downtime;
- fixes for unmounting Docker volumes;
- a fix for BOSH DNS issues.

vRealize Log Insight 4.6.0 | April 12th 2018
Enhanced support for up to 15 vCenters per node; if some component suddenly stops sending logs, you can now be notified about it; plus additional APIs for creating/deleting.

It's worth mentioning that vRCS will no longer work with vRA 7.4 or later, and its functionality is moving to vRealize Lifecycle Manager. But if you are still on vRA 7.3, you should get its new version, which comes with a lot of improvements and defect fixes, like approver names now being recorded in pipeline executions, SLAs for manual approvals, and an out-of-the-box destroy action for deployments.

Happy Friday 13th

Friday, April 6, 2018

VMware Pivotal Container Service - PKS CLI

Finally, the VMware Pivotal Container Service command line series has come to an end.
Featured today is the PKS command line tool, which allows cloud admins to create, manage and delete Kubernetes clusters within PKS.

Let the fun begin!

*** Installing PKS CLI ***
- Download PKS CLI tool from Pivotal Network;

obs: I'm focusing on the Linux version, but it's available for other platforms as well.


- Once it's transferred to your system, make it executable;
Run: chmod +x pks-linux-amd64

- Copy the PKS CLI binary to the bin directory;
Run: cp pks-linux-amd64 /usr/local/bin/pks

- You can test it and check its version just by running the command below;
Run: pks --version






*** Connecting to PKS *** 
Now that we have everything set up it's time to have some fun.

Let’s begin logging in to our PKS system:
Run: pks login -a "UAA_URL" -u "user_id" -p "password" -k

Obs: If you don’t know what the UAA_URL is or the user to connect to, go back and check the previous post.

*** Managing Kubernetes Cluster ***
- Creating a Kubernetes cluster;
Run: pks create-cluster "cluster_name" --external-hostname "address" --plan "plan_name"

Obs: external-hostname is the address from which the cluster's Kubernetes API will be accessed, and plans are defined as part of the Pivotal Container Service tile.


- It's also easy to check all clusters in the system;
Run: pks clusters

- If you want to get a cluster's details;
Run: pks cluster "cluster_name"
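
- Once the cluster is up, you would typically fetch its credentials so kubectl can talk to it (just a quick aside, assuming you already have kubectl installed);
Run: pks get-credentials "cluster_name"
Run: kubectl get nodes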



- What about scaling out your cluster with a single command, letting the platform take care of everything on your behalf?
Run: pks resize "cluster_name" --num-nodes "X"

"X" is the desired number of worker nodes.

- Finally, you can delete your cluster when you don't need it anymore;
Run: pks delete-cluster "cluster_name"

That's all for the PKS command line series. Now you are empowered with all the tools required to deliver and manage Kubernetes clusters at the speed and agility the business demands.

Who am I

I'm an IT specialist with over 15 years of experience, ranging from IT infrastructure to management products, with troubleshooting and project management skills in medium to large environments. Nowadays I'm working for VMware as a Consulting Architect, helping customers embrace the Cloud Era and making them successful on their journey. Despite the fact that I'm a VMware employee, these postings reflect my own opinion and do not represent VMware's positions, strategies or opinions. Reach me at @dumeirell
