SovLabs has been adding some great new features to their Property Toolkit module:
- Dynamically set and assign vRA Network Profile Names to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6. Read more here
- Dynamically add additional vRA Disks to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6.
Today we are looking at their new feature to dynamically add additional vRA disks, using the Property Toolkit module, which is a widely discussed topic on blogs and VMware’s community forum.
There are of course multiple ways to achieve this, for instance adding disks to the vSphere machine on the request form; however, this method is very basic and does not provide much flexibility.
Other customers resort to creating custom forms with data grids backed by vRO actions to implement this.
The SovLabs Property Toolkit module uses custom properties, can add up to 15 disks, and makes use of the approval lifecycle of VM provisioning to assign disks prior to the MachineRequested EBS event.
- vRealize Automation 7.5 or newer
- SovLabs Plug-in Version 2019.16.0 or newer
- Approval Policy Type: Service Catalog – Catalog Item Request – Virtual Machine
- vRA blueprint with Cloned Machine Build type
- Make sure to correctly set the total capacity/maximum storage value that can support the disks to be added
- A vRA login as Tenant Administrator or Approval Administrator that is entitled to the SovLabs Modules.
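The capacity prerequisite above can be sanity-checked with simple arithmetic. This is a generic sketch with example numbers, not a SovLabs tool:

```shell
# Rough sanity check (example values): confirm the requested extra disks
# fit within the blueprint's maximum storage value.
max_storage_gb=200
disks="20 40 60"   # sizes of the additional disks, in GB
total=0
for d in $disks; do total=$((total + d)); done
if [ "$total" -le "$max_storage_gb" ]; then
  echo "fits ($total GB of $max_storage_gb GB)"
else
  echo "exceeds maximum ($total GB > $max_storage_gb GB)"
fi
```

Adjust the numbers to match your blueprint before relying on the result.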
“We have ServiceNow and want to use its service management portal instead of vRA, is this possible?” This question comes up a lot from our customers, often with a follow-up question: “Can we wrap ServiceNow approval policies around it?” The answer is YES!
There are 2 ways to achieve this. The first is using VMware’s vRealize Automation plugin for ITSM, which is available here. The main premise of this plugin is to expose the exact vRA services and catalog items directly within ServiceNow. This works, but it does not provide a lot of flexibility, and the application installation and configuration is complex. Check out these blogs for additional information on v7.6.1 and v5.0.
The second solution, and what I will be using, is SovLabs’ ServiceNow connector module, which is very easy to implement and provides a lot of flexibility by allowing ServiceNow administrators to customize the catalog and the request process directly within the ServiceNow platform. It has the following highlighted features:
- Multi-tenant & vRA instance support
- Platform-native control for ServiceNow, which means management and customization is done directly within ServiceNow, using ServiceNow constructs (catalog, workflow, etc.)
- Day2 vRA operations support
- Requests made as a ServiceNow user automatically map to the corresponding vRA user, so there is no requirement for SAML or ADFS!
- SovLabs Template Engine support for metadata injection and custom logic, which is a huge plus
- Can be coupled with the SovLabs CMDB Module, which is very useful and something everyone needs.
So let’s start with the implementation prerequisites:
As a prerequisite you need a ServiceNow instance and a MID Server installed and configured. I assume this is already done so I will not provide steps here for this.
Some other SovLabs related prerequisites you need to take care of:
- ServiceNow connector plugin software
- For the ServiceNow tables: “question_choice”, “sc_cat_item” and “item_option_new” you have to set All Application Access for Can read, Can create, Can update, and Can delete
- Go to System Definition > Tables > question_choice
- Go to Application Access
- For All application scopes, make sure Can read, Can create, Can update, and Can delete are checked
- Repeat Step 2 and Step 3 for the other tables
- The ServiceNow usernames need to match their vRA usernames
- Unless SovLabs ‘User Mapping’ is used, which you can read about here
- I just setup the usernames in ServiceNow to match my domain username login for vRA. “firstname.lastname@example.org”
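One way to verify the username-matching prerequisite is to diff exported login lists. A minimal sketch, assuming hypothetical export files with one login per line:

```shell
# Hypothetical exports: one login per line from ServiceNow and from vRA.
cat > snow_users.txt <<'EOF'
jane.doe@example.org
john.smith@example.org
EOF
cat > vra_users.txt <<'EOF'
jane.doe@example.org
EOF
# comm requires sorted input; -23 prints lines only in the first file,
# i.e. ServiceNow users with no matching vRA username.
sort -o snow_users.txt snow_users.txt
sort -o vra_users.txt vra_users.txt
comm -23 snow_users.txt vra_users.txt
```

Any login printed by the last command would need fixing (or the SovLabs ‘User Mapping’ feature) before requests will map correctly.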
- If you want to perform Day2 actions you have to install and configure the SovLabs ServiceNow CMDB module as well. Check out my blog on this.
- Administrator credentials to vRO that also have entitlements to the Business Group/Catalog Items being imported to ServiceNow
It has been a while since my last post, (just had too much going on) but I have been putting it off for way too long and I finally upgraded my vRA lab to 7.5. Here are my notes.
My distributed enterprise vRA 7.4 environment consists of the following components:
- vRA VIP
- vRA IaaS Manager VIP
- 2 x Windows vRA IaaS Manager Service servers
- vRA IaaS Web VIP
- 2 x Windows vRA IaaS Web servers
- 2 x Windows vRA DEM + Agent servers
- vRO VIP
- 2 x external vRO appliances
- External SQL database for vRA and vRO
- Running SovLabs extensibility software
There are 2 options available to get to the desired state: either an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA and migrating your data over (VMware calls this a side-by-side upgrade).
If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints you have to migrate!
Before upgrading, always make sure you have successful backups of all your nodes, and while you’re at it, also take snapshots of all the servers and back up your vRA and vRO databases! You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here. For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.
- Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5 and also consult the VMware Product Interoperability Matrix about compatibility with other VMware products.
- I also have SovLabs plugins installed so make sure to upgrade SovLabs to a vRA 7.5 compatible version. At the time of the post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.
vRealize Suite Lifecycle Manager (vRSLCM) has now been around for a while, and if you are a vRealize or vCloud Suite license holder, this is definitely a product that should be part of your VMware portfolio. I am doing things a bit backward because in my last post I showed how to upgrade your vRA environment using vRSLCM, and only now will I show how to actually install vRA, which comes out of necessity because one of my colleagues accidentally deleted all my lab servers 🙂
For this post, I am using the latest vRSLCM 1.3 and will be deploying a distributed vRA 7.4.
- jvra01 – vRA appliance with embedded vRO (recommended design to use embedded instead of external vRO since 7.3)
- jvra02 – vRA appliance with embedded vRO
- jvraweb01 – vRA IaaS Web
- jvraweb02 – vRA IaaS Web
- jvramgr01 – vRA IaaS Manager
- jvramgr02- vRA IaaS Manager
Since vRSLCM automates and simplifies the deployment of your VMware SDDC stack, most of your time will be spent on prerequisites, so let’s start with that.
- Manually deploy 4 x vRA IaaS Windows Servers in your vCenter Server environment.
- Make sure they are added to the domain and DNS and NTP is working.
- Disable UAC on all Windows servers. Reboot after disabling it.
- Make sure that IPv6 is disabled on all Windows servers
- Add the windows service account as part of User Rights Assignment under Local Security Policies for Log on as a Service and Log on as a batch job on all windows machines.
- Verify the minimum resource requirements are met on all Windows servers. Set memory to at least 8 GB.
- SQL Database
- Make sure the SQL server has been added to the domain
- Make sure the domain user is added as part of the SQL DB user Logins list with the sysadmin privilege
- Load Balancer
- Make sure that the second member of each pool in the vRealize Automation load balancer is disabled.
There are also some scripts available to download that verify the prerequisites when you run the precheck for the creation of the vRA environment, so this can be done later as well.
- Ensure that the vRSLCM appliance has correct FQDN configured
- Command for correcting the hostname is “/opt/vmware/share/vami/vami_set_hostname <hostname>”
- After setting the correct hostname, verify it using the command “hostname -f”, or, from version 1.3 of vRSLCM, verify it from the settings page.
- Under vRSLCM settings:
- Register with My VMware to access licenses, download Product Binaries, and consume Marketplace content.
- Download the vRealize Automation 7.4.0 product
- If you already have the OVA downloaded then you can import it under the Product binaries tab.
- Verify that you have vRealize Automation binaries status as completed.
- If you are using a self-signed certificate in your environment (not recommended), then create a self-signed wildcard certificate for vRealize Suite product deployments.
- It is best to generate a single SAN certificate with all the product or management virtual host names, or a wildcard certificate, and provide this certificate when you create the environment for the first time. This ensures support for post-provisioning actions such as Add Products and Scale Out.
- Configure NTP Servers for deploying products in environments
- Under Data Centers
- Create a Data Center with an associated location.
- Add the vCenter Server where the vRA environment will be deployed to.
- Make sure the data collection is successful.
As with most of my other blog posts, I am just providing a step-by-step guide for quick reference. Please refer to the documentation here for detailed information, and please read the known issues section of the vRealize Automation 7.4 Release Notes, which is updated regularly and helps you to be better prepared for the upgrade.
My environment consists of a distributed vRealize Automation running version 7.2 with an external clustered vRealize Orchestrator, which I am upgrading (not migrating) to 7.4 Build 8182598. The process will be similar if you have vRA 7.1 or greater. If you have an older version, refer to VMware’s documentation here.
The in-place upgrade process for the distributed vRA environment happens in 3 stages in the following order:
- vRealize Automation appliances
- IaaS Web server
- vRealize Orchestrator
Pre-requisites before we start:
- Make sure all VMware products are compatible with vRA’s current and new release by consulting the Product Interoperability Matrix.
- Verify enough storage space on servers
- At least 5 GB on IaaS, SQL and Model Manager servers
- At least 5 GB on the root partition of the vRA appliance
- 5 GB on the /storage/db partition of the master vRA appliance
- 5 GB on the root partition of each replica virtual appliance
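The free-space checks can be scripted. A minimal sketch for the root partition (the same pattern applies to /storage/db on the master appliance):

```shell
# Check that the root partition has at least 5 GB free.
# df -Pk prints sizes in 1K blocks; field 4 of line 2 is the available space.
avail_kb=$(df -Pk / | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $((5 * 1024 * 1024)) ]; then
  echo "root partition OK ($avail_kb KB free)"
else
  echo "root partition below 5 GB ($avail_kb KB free)"
fi
```

Run it on each appliance; swap `/` for `/storage/db` to check the database partition on the master.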
- Verify that MSDTC is enabled on all vRA and associated SQL servers.
- Check that the service “Distributed Transaction Coordinator” is running.
- The primary IaaS Website node (where the Model Manager data is installed) must have Java SE Runtime Environment 8 (64-bit), update 161 or later installed; also verify that the JAVA_HOME environment variable is set correctly after the upgrade.
- If using embedded Postgres DB in a distributed vRA environment
- On master vRA node, navigate to /var/vmware/vpostgres/current/pgdata/
- Close any opened files in the pgdata directory and remove any files with a .swp suffix
- Verify the correct ownership of all files in this directory: postgres:users
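The .swp cleanup can be scripted. The sketch below runs against a throwaway demo directory so you can try it safely before pointing it at the real pgdata path:

```shell
# Demo of the .swp cleanup on a throwaway directory; on the master appliance
# the real path is /var/vmware/vpostgres/current/pgdata.
mkdir -p pgdata_demo
touch pgdata_demo/postgresql.conf pgdata_demo/.postgresql.conf.swp
find pgdata_demo -name '*.swp' -delete   # remove leftover editor swap files
find pgdata_demo -name '*.swp' | wc -l   # 0 means nothing is left behind
```

Make sure any editors with files open in pgdata are closed first, since deleting a live .swp file does not close the editing session.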
- In a distributed vRA environment, change Postgres synchronous replication to async.
- Click Async Mode and wait until the action completes.
- Verify that all nodes in the Sync State column display Async status
- I have only a master and replica so I am already async but just FYI
- In vRA tenants verify the following
- Make sure that no custom properties have spaces in the names.
- All saved and in-progress requests have finished successfully
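The space-in-property-names check is easy to script if you can export the property names to a text file. props.txt below is a hypothetical example export:

```shell
# props.txt stands in for an export of your tenants' custom property names,
# one name per line (the names shown are made up for illustration).
cat > props.txt <<'EOF'
Example.Custom.Hostname
Bad Property Name
EOF
# Print any line containing a space, with its line number;
# if none are found, say so instead.
grep -n ' ' props.txt || echo "no property names contain spaces"
```

Any line the grep prints needs to be renamed before the upgrade.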
VMware’s vRealize Suite of products is great, each provides a lot of features and capabilities, and VMware has been working hard on integration between the products. However, these products are very much standalone, with no cohesion between them from a lifecycle management perspective. This creates a lot of management overhead to install, upgrade, configure and manage all these products, as well as the additional solution extensions.
In comes vRealize Suite Lifecycle Manager (vRSLCM), which is a relatively new product available to all customers with a vRealize Suite license. It automates the installation, configuration, and upgrading of the following products:
- vRealize Automation
- vRealize Operations Manager
- vRealize Log Insight
- vRealize Business for Cloud
In this blog, I am going to provide the steps on how to import an existing distributed Enterprise vRA 7.2 environment and perform the upgrade to 7.4 using vRSLCM 1.2.
Let’s start off with the initial creation of the environment, which does require a lot of information up front; however, when you create or import products into the environment later, it will reuse this stored environment information.
- Log in to your vRSLCM
- Select Create Environments
- Enter Environment Data
- Data Center (this you should have created during the initial configuration of your vRSLCM environment)
- Environment Type
- Environment Name
- Administrator email
- Default root password
- Click Next
- Create Environment
- Check the box for vRealize Automation
- Since we already have an environment that we need to import, select the Import radio button.
- Click Next
- Scroll down to the bottom.
- Check the box to accept the terms and conditions.
- Either pick a vRealize Suite license which will populate from your my.vmware.com account, or enter one manually.
- Click Next
- Infrastructure Details (This information is used if you deploy new products)
- Select the vCenter Server where your vRealize Suite products reside.
- Select Cluster
- Select Network
- Select Datastore
- Select preferred Disk format for product deployments.
- Click Next
- Network (This information is used if you deploy new products)
- Enter the default gateway of the network where your vRealize Suite products are deployed or will be deployed to.
- Enter Domain Name
- Enter search path
- Enter DNS
- Enter Netmask
- Click Next
- Certificates (I import a wildcard certificate; a multi-domain certificate would also be a good choice to simplify the process)
- Click Next to use the self-signed generated certificate or click the import certificate button to add existing wildcard or SAN certificate.
- Click Next
- Import (Since we selected import, we are now asked questions about our existing environment)
- Enter vRA root password
- Enter vRA Default Administrator password
- Enter Tenant User name.
- Selecting the “administrator” user works just fine here.
- Enter vRA Primary Node FQDN
- Enter IaaS Username.
- I used the domain service account assigned to all IaaS servers
- For the default vRA Tenant name, select “vsphere.local”
- Enter vRA Tenant password
- Enter IaaS Password for the domain account.
- Select the vCenter Server where the vRA server is running from the drop-down.
- Click Next
- Review summary
- Click Download configuration to save the JSON file for later use.
- Click Submit
- This will run for a while to configure the environment and import vRA
- If it fails, you have a couple of options
- Review the requests
- Under actions select retry and verify the information that you have entered.
- Delete the environment and start over (1.2 provides the ability to specify if you also want to delete the VMs when you delete a fully configured environment, definitely not recommended to do so in most cases!)
- If you want to pause the import, you can always come back later and resume
- Verify the vRA product environment
- Select Environment tab on the left side
- Select View details of the newly created environment
- Verify that all the information of your distributed vRA environment is accurate. vRSLCM collects all your VIP names, vRA-, IaaS- and Database Servers as well as where each component resides.
SovLabs isn’t just a vRA plugin, it’s enterprise software that extends the capabilities of your vRealize Automation environment, providing you with that end-to-end solution you have been craving. As with any other enterprise software, they periodically provide new patches and releases, and with SovLabs that is no different.
The new 2017.3.x was released in August and provides some awesome new modules:
- Men & Mice DNS and IPAM
- SolarWinds DNS
- Backup as a Service
- Automate policy-driven backups and provide self-service VM and file-level recovery for –
- SovLabs VM tagging
- Drive rich metadata using VM tags and categories
- SovLabs Property Toolkit
- Manage your existing custom properties on VMs with the SovLabs Template Engine
- ServiceNow Support for Jakarta
- Puppet support for 2017.1
- VMware Tools connection
- Connecting to Windows/Linux servers can now be done through VMware Tools, which removes the requirement for WinRM, CygwinSSH or WinSSHD to be installed. This is huge!
- As a customer you can sign up under the self-service portal and view the detailed release notes here:
So how do we go about upgrading SovLabs to the latest version?
Here is a step-by-step guide to upgrading from 2017.2.x to 2017.3.x (there are some additional steps if you are upgrading from <= 2017.1.x, so please contact SovLabs support).
- First off we want to create a backup of the vRO package
- Login to vRO Client
- Click Design
- Click on the package tab
- Click on the package icon on right hand side menu bar
- Enter name “com.sovlabs.backup.resources”
- Edit the newly created package: click on the pencil icon on the right hand side menu bar
- Click the Resources tab
- Click the Folder + icon
- Expand the Library folder, select the SovLabs folder
- Click on the Select button
- Once loaded, click save and close
- Right click the saved package and click export package
- Create a folder called sovlabs under downloads
- Leave the rest of the settings as default
- Save to your local system
- Now, let’s save the old SovLabs Plugin:
- Use WinSCP and login as root to vRO appliance
- Go to directory /var/lib/vco/app-server/plugins
- Save the o11nplugin-sovlabs.dar to your local file system, in the same sovlabs folder created earlier under downloads.
- We need to update the vRO Heap size
- If you have done this before, then you can skip this step, but it is needed to install the larger SovLabs module file into vRO; otherwise the appliance might run out of memory during the install/upgrade.
- Remember, if you have a vRO cluster, then you have to perform these steps on both servers
- SSH into vRO appliance with user root
- Run # vi /var/lib/vco/configuration/bin/setenv.sh
- Find the #MEM_OPTS section
- Replace the -Xmx512m \ with -Xmx768m \
- Save the file
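The same edit can be done non-interactively with sed. The sketch below demonstrates it on a local copy of the file with assumed minimal contents; on the appliance the real path is /var/lib/vco/configuration/bin/setenv.sh, and you should back it up first:

```shell
# Local stand-in for setenv.sh (assumed minimal contents for the demo).
cat > setenv.sh <<'EOF'
#MEM_OPTS
MEM_OPTS="-Xms512m \
-Xmx512m \
"
EOF
cp setenv.sh setenv.sh.bak                       # keep a backup copy
sed -i 's/-Xmx512m \\/-Xmx768m \\/' setenv.sh    # bump max heap 512m -> 768m
grep -- '-Xmx' setenv.sh                         # confirm the new value
```

On a vRO cluster, remember to apply the same change on both nodes.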
- Delete all SovLabs license keys
- Login to vRA tenant
- Click on Items tab -> SovLabs vRA Extensibility modules -> SovLabs License
- For each SovLabs License item listed
- Select Actions -> Delete License
- Download the SovLabs plugin
- Talk to SovLabs support about getting the software downloaded.
- Install the plugin into vRO appliance
- Log in to Control Center
- Select Plug-Ins -> Manage Plug-ins
- Click Browse
- Select the plugin
- Accept EULA
- Click on Install
- Accept the EULA
- Restart the vRO server
- On the Home page, click on the Startup Options icon
- Click on Restart
- Wait for vRO to restart successfully
- Log back in to the vRO configuration page
- Click on the Manage Plug-Ins icon
- Verify that the installed plugin is listed among the vRO plugins
- Now, if you have a clustered vRO 7.2 and above, the plugin should sync, but I have seen some problems with 7.2, so follow these steps
- Perform a full reboot on the primary so that the pending and active config fingerprint IDs match.
- Then push the config to the other standby node
- It will need to be rebooted, which it often will not do automatically, so make sure you perform this step yourself.
- Verify that Synchronization state shows synchronized and verify the version of the plugin on both active and standby nodes.
- Login to the vRO Client and run the configuration
- Click on Design mode
- Click on the Workflow tab
- Right click vRO workflow, “SovLabs/Configuration/SovLabs Configuration”
- Select Start Workflow
- The SovLabs Configuration workflow only needs to be run on one vRO in a clustered environment
- Select yes to accept the EULA
- Click Next
- Select the appropriate tenant and business group
- Create SovLabs vRA Catalog Service? = No
- Publish License Content? = No
- Click Next
- Upgrade existing SovLabs vRA content? = Yes
- Click Next
- Install or Update SovLabs workflow subscriptions (vRA7.x)? = Yes
- *Enables vRA to call vRO during machine lifecycles
- Click Submit
- Verify that the SovLabs Configuration workflow completed successfully
- Lastly, let’s verify the SovLabs Plugin in vRA
- Select Catalog tab
- Verify that Add license -> SovLabs Modules catalog exists
- Now lets install the new license key for 2017.3.x
- This process has also been drastically simplified with a single license key which will license all modules, where previously this was done one at a time.
- Select Catalog tab -> SovLabs vRA Extensibility Modules -> Add license – SovLabs Modules
- Copy the text from license file and paste into field
- Click Submit
- Under the Catalog tab -> SovLabs vRA Extensibility Modules, verify that all catalog items are available.
- If you ever need to roll back then follow the steps in the document provided by SovLabs: