Friday, February 7, 2020

Enterprise PKS Management Console installed, what now?

I’ve been working on a VMware Enterprise PKS proof of concept for a customer, and instead of installing all the components individually, Ops Manager, the BOSH Director, and the tiles, I decided to use the Enterprise PKS Management Console.
If you have not heard about it yet, it’s a single OVA that provides a unified, automated installation, which greatly simplifies and expedites the process of making PKS available.

But this post is more about day 2: once everything is installed, what now?!

In the past I wrote some posts about how to manage the solution and the need to install some tools like the BOSH CLI, UAAC CLI and PKS CLI.

While those tools still exist and are still needed, I found the Enterprise PKS Management Console a lot simpler: the BOSH CLI and PKS CLI are already installed on the appliance, so you can just SSH into it and start creating and managing clusters immediately.
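As a rough sketch of what that looks like (the appliance address, API endpoint, credentials, and cluster name below are all placeholders for your own environment):

```shell
# SSH into the PKS Management Console appliance (root password set during OVA deployment)
ssh root@pks-console.example.com

# Log in to the PKS API with a user that has cluster permissions
pks login -a pks-api.example.com -u pks-admin -p 'secret' -k

# Create a cluster and watch its provisioning status
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small
pks cluster my-cluster

# List all clusters you have access to
pks clusters
```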

If you remember, in order to create a cluster you need a user with that permission, and the UAAC client is not installed on the appliance; that’s because identity management has been integrated into the PKS Management Console.

To create and manage users, just select Identity Manager in the left pane.

Not only local users are supported, but also users from AD/LDAP and SAML providers; it will depend on the option you selected during setup.

It’s always good to remember the roles and scopes available within the solution.

- pks.clusters.admin: allows the user to create and manage all clusters within the system;
- pks.clusters.admin.read: a read-only role for all clusters created on the solution;
- pks.clusters.manage: allows the user to create and manage only the clusters they own.

How about you, are you already using the PKS Management Console? Let me know what you think about it!

Friday, January 31, 2020

Year in Review 2019

Thanks to the 200 days of January, there’s still time to post my classic year-in-review during the first month of the year… and I assume there’ll be more.

Let’s see how it went, even for a low-productivity year… yeah, I know!!!

No surprise here: my page views dropped by 13%, totaling 37,874 pageviews in 2019.

Visitors still come from all around the globe, from 164 different countries to be precise, including Zambia, Lesotho, and Vanuatu, but the US alone represents more than 40% of my visitors.

Now for the most interesting part… the top 10 articles of 2019.

You guys must be kidding me… Converter Tips is still #1?!

#5   VMware script to delete/remove VMs, guest (not ranked last year)
#6   vSAN stretched cluster topology explained (not ranked last year)
#9   Cloud Assembly – The Basics (not ranked last year)

At least one post from 2019 made it into the Top 10... and I'm amazed one post from 2010 is still being found out there... Can you guys guess which one?

Thursday, July 18, 2019

Cloud Assembly - Placement Engine

Last time we met I was talking about the basics of Cloud Assembly and how you can create your cloud-agnostic offer (blueprint) leveraging nothing more than a declarative YAML definition.

But there was still a missing piece: how do you handle the placement of your workloads?

Now that your multi-cloud strategy involves multiple clouds, private, hybrid, public, whatever comes next… you need a way to make decisions about workload placement.

Cloud Assembly handles this through the use of “constraints”.
Constraints are nothing more than identifiable capabilities of your resources.

Think about a business decision that all development must occur in the public cloud, but when the time comes to run it in production, it must run on-premises.
That’s exactly what you can see in my example: I’m using AWS as my Dev environment and my on-premises vSphere for Production.
AWS environment tagged as env:dev

vCenter environment tagged as env:prod

But that’s not all: you can use constraints in several places, like on datastores, to identify which one is SSD-backed or has replication enabled, or on networks, to tell which one is internet-facing or a backend network.

So, when you provision a new deployment, Cloud Assembly will try to match the constraints in your blueprint against the resources you have available; that’s how it decides where to place your workload.
If no endpoint can fulfill your constraints, the deployment will fail.

With all this information flying around, it might seem that the placement decision is left to chance; in fact, it’s not: Cloud Assembly makes it easy to test the business logic behind the decisions.

Go to Cloud Zones and hit Test Configuration;

Fill in the machine details and the constraints and hit Simulate;

Soon you will see the decision tree; the graph walks you through the checks until the workload finds its home.
Pretty Nice !!!!

Now it comes to the final question, how do I add constraints to my blueprints?

That’s the easiest part: just add a constraints section to your blueprint and tag it with the desired capability.
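As a minimal sketch, reusing the env:dev tag from earlier (the resource name and properties are illustrative):

```yaml
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      constraints:
        - tag: 'env:dev'
```

At provisioning time, only endpoints carrying the env:dev capability tag will be considered for this machine.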

My example is intentionally a bad one, because it pins the offer to my dev environment.
Here’s a challenge for you: go back to my The Basics post and try to add an input for the destination, so users can select where to deploy at provisioning time ; )

Thursday, July 11, 2019

Cloud Assembly – The Basics

Cloud Assembly is the cornerstone of the Cloud Automation Services (CAS). Based on a declarative infrastructure-as-code model, you specify the applications and services you want to offer for end-user consumption, called a Blueprint.
If you are used to vRealize Automation (vRA), you will see a remarkable resemblance between them.
Based on the same canvas concept as vRA, you can drag objects like machines, networks, volumes, load balancers, etc., to form the desired state of your service.

But along with the drag-and-drop functionality, Cloud Assembly has evolved to be more developer-friendly, allowing you to declare your desired state in YAML format, just like any good infrastructure-as-code tool.
As you are typing the visual view will reflect the changes automatically and vice versa.
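As a rough illustration of the YAML side (the blueprint name, resource name, and property values here are just placeholders), a minimal blueprint might look like this:

```yaml
formatVersion: 1
name: basic-machine
version: 1
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
```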

The news doesn’t stop here: version control is also integrated into the platform, enabling you to check and compare what has changed from version to version, allowing quick troubleshooting in case something goes wrong, or even rolling back to a previous version.

You might be wondering: not all services are static; sometimes users need to provide information to fulfill the provisioning, like OS images, t-shirt size (small, medium, large), etc.
Cloud Assembly provides this functionality through the use of “inputs”:
This way you can prompt end users for the information required to provide a service.

The way inputs work is: declare the input in the inputs section and then reference it later in the resources section using the ${input.name} syntax.
There are dozens of patterns you can apply to inputs; check the documentation for a comprehensive list.
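As a hedged sketch of the t-shirt-size case mentioned above (the input and resource names are illustrative), an input is declared once and then referenced in a resource property:

```yaml
inputs:
  size:
    type: string
    enum:
      - small
      - medium
      - large
    default: small
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      flavor: '${input.size}'
```

At request time the user is prompted to pick one of the enum values, and the selection is substituted into the machine’s flavor.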

That’s not all: if you need specific cloud services, AWS S3 buckets, Route 53, RDS clusters, Lambda functions, Azure machines or SQL databases, and much more, it’s all available within the canvas; just drag the component and configure it.

One last thing: many companies have implemented a DevOps culture, where developers are using CI/CD tools and committing code changes to a repository. You probably want to apply the same methodology to your blueprints. NOT A PROBLEM: Cloud Assembly can also integrate with your Git repository and pull the latest committed changes.
How cool is that ?!?

In the next post I’ll cover how to control the placement of your services, stay tuned!

Who am I

I’m an IT specialist with over 15 years of experience, working across IT infrastructure and management products, with troubleshooting and project-management skills from medium to large environments. Nowadays I work for VMware as a Consulting Architect, helping customers embrace the Cloud Era and make their journey successful. Despite the fact that I’m a VMware employee, these postings reflect my own opinion and do not represent VMware’s positions, strategies, or opinions. Reach me at @dumeirell
