SovLabs has been adding some great new features to their Property Toolkit module:
- Dynamically set and assign vRA Network Profile Names to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6. Read more here
- Dynamically add additional vRA Disks to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6.
Dynamically adding disks is a widely discussed topic on blogs and in VMware's community forum, so today we are looking at the Property Toolkit module's new feature to do exactly that.
There are of course multiple ways to achieve this, for instance adding disks to the vSphere machine on the request form; however, that method is very basic and does not provide a lot of flexibility.
Other customers resort to creating custom forms with data grids backed by vRO actions to implement this.
The SovLabs Property Toolkit module instead uses custom properties to add up to 15 disks, and makes use of the approval lifecycle of VM provisioning to assign the disks prior to the MachineRequested EBS event.
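To give a feel for the property-driven approach, the sketch below shows what a set of disk-related custom properties on a blueprint or request could look like. The property names and values here are hypothetical placeholders, not the actual SovLabs keys; consult the SovLabs Property Toolkit documentation for the exact naming scheme.

```text
# Hypothetical custom properties -- the real SovLabs key names will differ
AddDisk_1_SizeGB  = 50
AddDisk_1_Label   = data01
AddDisk_2_SizeGB  = 200
AddDisk_2_Label   = logs01
```

Because these are just custom properties, they can be set statically on the blueprint, filled in on the request form, or injected by other extensibility logic before provisioning.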
The prerequisites are:
- vRealize Automation 7.5 or newer
- SovLabs Plug-in Version 2019.16.0 or newer
- Approval Policy Type: Service Catalog – Catalog Item Request – Virtual Machine
- vRA blueprint with Cloned Machine Build type
- Make sure to set the total capacity/maximum storage value high enough to support the disks to be added
- A vRA login as Tenant Administrator or Approval Administrator, entitled to the SovLabs modules.
“We have ServiceNow and want to use its service management portal instead of vRA, is this possible?” This question comes up a lot from our customers, often with a follow-up question: “Can we wrap ServiceNow approval policies around it?” The answer is YES!
There are two ways to achieve this. The first is using VMware's vRealize Automation plugin for ITSM, which is available here. The main premise of this plugin is to expose the exact vRA services and catalog items directly within ServiceNow. This works, but it does not provide a lot of flexibility, and the application installation and configuration is complex. Check out these blogs for additional information on v7.6.1 and v5.0.
The second solution, and the one I will be using, is SovLabs' ServiceNow Connector module, which is very easy to implement and provides a lot of flexibility by allowing ServiceNow administrators to customize the catalog and the request process directly within the ServiceNow platform. It has the following highlighted features:
- Multi-tenant & vRA instance support
- Platform-native control for ServiceNow, which means management and customization is done directly within ServiceNow using ServiceNow constructs (catalog, workflow, etc.)
- Day2 vRA operations support
- Requests made as a ServiceNow user automatically map to the corresponding vRA user, so there is also no requirement for SAML or ADFS!
- SovLabs Template Engine support for metadata injection and custom logic, which is a huge plus
- Can be coupled with the SovLabs CMDB Module, which is very useful and something everyone needs.
So let's start with the implementation prerequisites:
As a prerequisite you need a ServiceNow instance with a MID Server installed and configured. I assume this is already done, so I will not provide those steps here.
Some other SovLabs-related prerequisites you need to take care of:
- ServiceNow connector plugin software
- For the ServiceNow tables “question_choice”, “sc_cat_item”, and “item_option_new”, you have to enable All Application Access for Can read, Can create, Can update, and Can delete:
- Go to System Definition > Tables > question_choice
- Go to Application Access
- For All application scopes, make sure Can read, Can create, Can update, and Can delete are checked
- Repeat the previous two steps for the other tables (“sc_cat_item” and “item_option_new”)
- The ServiceNow usernames need to match their vRA usernames
- Unless SovLabs ‘User Mapping’ is used, which you can read about here
- I just set up the usernames in ServiceNow to match my domain username login for vRA. “email@example.com”
- If you want to perform Day2 actions you have to install and configure the SovLabs ServiceNow CMDB module as well. Check out my blog on this.
- Administrator credentials to vRO that also have entitlements to the Business Group/Catalog Items being imported to ServiceNow
I recently returned from a very successful HashiConf 2019, where lots of new features were announced for the HashiCorp products. Here are some of the major announcements.
- Terraform Cloud (TFC)
- Rebranding of Terraform Enterprise SaaS to Terraform Cloud.
- TFC is all about collaboration. As soon as more than one person works on a Terraform project, you need backend management of the state file and should start orchestrating Terraform runs through a deployment pipeline. All of this is now provided by Terraform Cloud!
- Free tier (up to 5 users)
- User interface
- Remote state management for storing, viewing, and locking of state files.
- VCS connection management
- Collaboration on runs
- Remote runs and applies
- Private module registry
- Paid tiers (more than 5 users)
- Both the paid tiers are available for free until 01.01.2020!
- TFC: Teams
- Create multiple teams
- Control permissions of users on those teams
- TFC: Teams & governance
- This tier is also available for free until 01.01.2020
- Use Sentinel and Cost Estimation
- More information and pricing on offerings is available here
- More information here.
- Terraform clustering
- This is only available with Terraform Enterprise (TFE) and is currently in beta
- More information here.
- Terraform Cost Estimation
- This is available for both TFE and TFC
- Executed between the plan and apply phases of a Terraform run.
- Can also use Sentinel to control costs with defined policies
- More information here.
- It definitely felt like Consul was the new shiny toy at this year's conference, and the related sessions were packed.
- HashiCorp Consul Service (HCS) on Azure
- Native provisioning of a Consul cluster into any region through the Azure marketplace.
- Although the Pong game live demo did not go as planned I do see the value and potential for this product!
- Currently only available in private beta
- More information here
- Consul Enterprise now supports VMware NSX Service Mesh Federation
- Support for the Service Mesh Federation Specification.
- More information here.
Now back to what we are here for…Terraform Cloud!
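To make the remote state feature concrete, here is a minimal sketch of pointing a Terraform 0.12 configuration at a Terraform Cloud workspace using the remote backend. The organization and workspace names below are placeholders:

```hcl
# Remote backend sketch -- "example-org" and "my-workspace" are placeholders
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "my-workspace"
    }
  }
}
```

With a Terraform Cloud API token configured in the CLI configuration file, “terraform init” will store, version, and lock the state in the workspace instead of on local disk.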
I am not a developer, but I have been looking for a reason to use WSL (Windows Subsystem for Linux) for a while, and I found a good use case: running Terraform with VS Code on Linux.
In my opinion, HashiCorp's Terraform is the de facto choice in the infrastructure-as-code space, just like Kubernetes is for container orchestration. It provides the ability to version your infrastructure and automate the provisioning of your resources across different cloud vendors as well as on-premises.
Getting this working requires a couple of steps, which I will provide here. At the time of writing, I am running Windows 10 Pro, Version 10.0.18362, Build 18362.
Enable WSL and install a Linux distro:
- Open PowerShell as Administrator and run the following command to enable the WSL feature
- “Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux”
- Open the Microsoft Store and download your favorite distribution; I selected Ubuntu.
- Don’t close the store just yet and wait for the installation to complete.
- You can also launch it from a command prompt by running “ubuntu.exe” from the distro installation folder, or by selecting Ubuntu from the app menu.
- Create a UNIX username
- Create a UNIX password
- Now let's update the distro to the latest packages
- Run “sudo apt-get update”
- Run “sudo apt-get upgrade”
Install Terraform on the Linux distro:
- Run the following command to install unzip
- “sudo apt-get install unzip”
- Copy the link address to latest Linux 64-bit download from this page here
- Run the following commands to download and install Terraform
- “wget https://releases.hashicorp.com/terraform/0.12.7/terraform_0.12.7_linux_amd64.zip”
- “unzip terraform_0.12.7_linux_amd64.zip”
- “sudo mv terraform /usr/local/bin”
- Run the following command to verify it has been installed successfully
- “terraform version”
- Should show “Terraform v0.12.7” (based on the version I downloaded)
Install the Azure and AWS CLI on the Linux distro:
This is not necessary, but super useful if you are deploying to these cloud vendors.
- Azure CLI installation steps
- Run the following command to verify it's working
- “az --version”
- AWS CLI installation steps
- Run the following command to install
- “sudo apt-get install awscli”
- Run the following command to verify it's working
- “aws --version”
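With the toolchain in place, a throwaway configuration is a quick way to confirm everything works end to end. The file below is a generic sketch that needs no providers or cloud credentials:

```hcl
# main.tf -- minimal smoke test; no providers or credentials required
variable "platform" {
  default = "WSL"
}

output "greeting" {
  value = "Hello from Terraform on ${var.platform}"
}
```

Run “terraform init” followed by “terraform apply” in the directory containing this file; the greeting output confirms the binary and working directory are set up correctly.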
Very happy to be acknowledged as a vExpert for another year! We have such a great community, and congrats to all the other 2019 vExperts. Keep up the good work, it's worth it.
Being a VMware Partner has its perks, and EMPOWER is definitely one of them! In previous years we had PEX, which was scheduled alongside VMworld, but attending the technical sessions the weekend before was tough and made for a very long week in Vegas. Last year the event returned to its original standalone schedule with a great location and provided a lot more content to technical partners.
2019 is no different, and the event has definitely improved upon that with the addition of Lisbon (Europe) and Singapore (Asia) conference locations. VMware has also done a great job of listening to partner feedback to make this a standout event, with recently added sessions like:
- VMware’s Storage and Availability Vision and Strategy
- VMware’s Hybrid Cloud Vision and Strategy
- Partner VCDX Session: Customer Win with Hyper-Converged Infrastructure
- Partner VCDX Session: Customer Win – Architectural Considerations for SDDC Customer Wins
- Partner VCDX Session: Customer Win – Hybrid Cloud Use Cases
Here are some of the highlights that I think are worth mentioning and what you can provide to your manager as reasons for attending, other than this awesome letter of course.
- Exclusive access to the same content that is enabling the VMware field teams. All marketing fluff has been removed 😊
- Hands-on labs are available with first time access to brand new labs for VMC on AWS and PKS.
- Expert led Livefire workshops with technical content and training.
- Free VMware certification tests for all technical tracks, which can help your company achieve their competencies. Register for exam here.
- Separation between technical and sales roles, with a dedicated sales conference on the last two days of the event
- Interaction with experts showcasing the latest VMware products in the Demo Zone
- Networking with other like-minded partners
- Team building! In many cases your teams are spread out across the country, and this is a great way to get together for some team building activities.
If these are not enough reasons already, I have $100-off registration codes available for technical passes to the conference. Please DM me via Twitter or LinkedIn, or email me via my website.
Schedule Builder will be opened publicly to all registrants on Thursday, March 7th at 11AM PST.
It has been a while since my last post (I just had too much going on), but I have been putting it off for way too long and I finally upgraded my vRA lab to 7.5. Here are my notes.
My distributed enterprise vRA 7.4 environment consists of the following components:
- vRA VIP
- vRA IaaS Manager VIP
- 2 x Windows vRA IaaS Manager Service servers
- vRA IaaS Web VIP
- 2 x Windows vRA IaaS Web servers
- 2 x Windows vRA DEM + Agent servers
- vRO VIP
- 2 x external vRO appliances
- External SQL database for vRA and vRO
- Running SovLabs extensibility software
There are two options available to get to the desired state: either an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA and migrating your data over (VMware calls this a side-by-side upgrade).
If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints, you have to migrate!
Before upgrading, always make sure you have successful backups of all your nodes, and while you're at it, also take snapshots of all the servers and back up your vRA and vRO databases! You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here. For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.
- Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5, and consult the VMware Product Interoperability Matrix for compatibility with other VMware products.
- I also have SovLabs plugins installed, so make sure to upgrade SovLabs to a vRA 7.5-compatible version. At the time of this post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.