SovLabs has been adding some great new features to their Property Toolkit module:
- Dynamically set and assign vRA Network Profile Names to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6. Read more here
- Dynamically add additional vRA Disks to VMs in a blueprint with our SovLabs Property Toolkit module for vRA 7.5 and vRA 7.6.
Today we are looking at their new feature to dynamically add additional vRA disks using the Property Toolkit module, a topic widely discussed on blogs and in VMware’s community forums.
There are, of course, multiple ways to achieve this, for instance by adding disks to the vSphere machine on the request form; however, that method is very basic and does not provide much flexibility.
Other customers resort to creating custom forms with data grids backed by vRO actions to implement this.
The SovLabs Property Toolkit module instead uses custom properties, can add up to 15 disks, and leverages the approval lifecycle of VM provisioning to assign the disks prior to the MachineRequested EBS event. The prerequisites are:
- vRealize Automation 7.5 or newer
- SovLabs Plug-in Version 2019.16.0 or newer
- Approval Policy Type: Service Catalog – Catalog Item Request – Virtual Machine
- vRA blueprint with Cloned Machine Build type
- Make sure to correctly set the total capacity/maximum storage value that can support the disks to be added
- A vRA login as Tenant Administrator or Approval Administrator, entitled to the SovLabs modules.
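To give an idea of how the custom-property approach works, here is a purely hypothetical sketch. These property names are illustrative only, not the actual SovLabs property names; consult the SovLabs documentation for the real syntax:

```text
# Hypothetical property names for illustration only –
# see the SovLabs documentation for the actual custom properties
Disk2.Size         = 50      # size in GB of the first additional disk
Disk2.StoragePath  = Gold-Datastore
Disk3.Size         = 100     # up to 15 additional disks are supported
Disk3.StoragePath  = Silver-Datastore
```

The properties are evaluated during the approval lifecycle, so the disks are attached before the MachineRequested EBS event fires.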
“We have ServiceNow and want to use its service management portal instead of vRA, is this possible?” This question comes up a lot from our customers, often with a follow-up question: “Can we wrap ServiceNow approval policies around it?” The answer is YES!
There are 2 ways to achieve this. The first is using VMware’s vRealize Automation plugin for ITSM, which is available here. The main premise of this plugin is to expose vRA services and catalog items directly within ServiceNow. This works, but it does not provide a lot of flexibility, and the application installation and configuration is complex. Check out these blogs for additional information on v7.6.1 and v5.0.
The second solution, and the one I will be using, is SovLabs’ ServiceNow Connector module, which is very easy to implement and provides a lot of flexibility by allowing ServiceNow administrators to customize the catalog and the request process directly within the ServiceNow platform. It has the following highlighted features:
- Multi-tenant & vRA instance support
- Platform-native control for ServiceNow, which means management and customization is done directly within ServiceNow, using native ServiceNow constructs (catalog, workflow, etc.)
- Day 2 vRA operations support
- Requests made as a ServiceNow user automatically map to the corresponding vRA user, so there is also no requirement for SAML or ADFS!
- SovLabs Template Engine support for metadata injection and custom logic, which is a huge plus
- Can be coupled with the SovLabs CMDB Module, which is very useful and something everyone needs.
So let’s start with the implementation prerequisites:
As a prerequisite you need a ServiceNow instance with a MID Server installed and configured. I assume this is already done, so I will not provide steps for it here.
Some other SovLabs-related prerequisites you need to take care of:
- ServiceNow connector plugin software
- For the ServiceNow tables “question_choice”, “sc_cat_item”, and “item_option_new”, you have to enable All Application Access for Can read, Can create, Can update, and Can delete:
- Go to System Definition > Tables > question_choice
- Go to Application Access
- For All application scopes, make sure Can read, Can create, Can update, and Can delete are checked
- Repeat the previous two steps for the “sc_cat_item” and “item_option_new” tables
- The ServiceNow usernames need to match their vRA usernames
- Unless SovLabs ‘User Mapping’ is used, which you can read about here
- I just set up the usernames in ServiceNow to match my domain username login for vRA, e.g. “email@example.com”
- If you want to perform Day2 actions you have to install and configure the SovLabs ServiceNow CMDB module as well. Check out my blog on this.
- Administrator credentials to vRO that also have entitlements to the Business Group/Catalog Items being imported into ServiceNow
Very happy to be acknowledged as a vExpert for another year! We have such a great community, and congrats to all the other 2019 vExperts. Keep up the good work, it’s worth it.
Being a VMware Partner has its perks, and EMPOWER is definitely one of them! In previous years we had PEX, which was scheduled alongside VMworld, but attending the technical sessions the weekend before was tough and made for a very long week in Vegas. Last year the event returned to its original standalone schedule with a great location and provided a lot more content to technical partners.
2019 is no different, and the event has definitely been improved upon with the addition of Lisbon (Europe) and Singapore (Asia) as conference locations. VMware has also done a great job of listening to partner feedback to make this a standout event, with recently added sessions like:
- VMware’s Storage and Availability Vision and Strategy
- VMware’s Hybrid Cloud Vision and Strategy
- Partner VCDX Session: Customer Win with Hyper-Converged Infrastructure
- Partner VCDX Session: Customer Win – Architectural Considerations for SDDC Customer Wins
- Partner VCDX Session: Customer Win – Hybrid Cloud Use Cases
Here are some of the highlights that I think are worth mentioning and what you can provide to your manager as reasons for attending, other than this awesome letter of course.
- Exclusive access to the same content that is enabling the VMware field teams. All marketing fluff has been removed 😊
- Hands-on labs are available with first time access to brand new labs for VMC on AWS and PKS.
- Expert led Livefire workshops with technical content and training.
- Free VMware certification tests for all technical tracks, which can help your company achieve their competencies. Register for exam here.
- Separation between technical and sales roles, with a dedicated sales conference on the last 2 days of the event
- Interaction with experts showcasing the latest VMware products in the Demo Zone
- Networking with other like-minded partners
- Team building! In many cases your teams are spread out across the country and this is a great way to get together for some team building activities.
If these are not enough reasons already, I have $100-off registration codes available for technical passes to the conference. Please DM me via Twitter or LinkedIn, or email me via my web site.
Schedule Builder will be opened publicly to all registrants on Thursday, March 7th at 11AM PST.
It has been a while since my last post (I just had too much going on), but I have been putting it off for way too long and I finally upgraded my vRA lab to 7.5. Here are my notes.
My distributed enterprise vRA 7.4 environment consists of the following components:
- vRA VIP
- vRA IaaS Manager VIP
- 2 x Windows vRA IaaS Manager Service servers
- vRA IaaS Web VIP
- 2 x Windows vRA IaaS Web servers
- 2 x Windows vRA DEM + Agent servers
- vRO VIP
- 2 x external vRO appliances
- External SQL database for vRA and vRO
- Running SovLabs extensibility software
There are 2 options available to get to the desired state: either an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA and migrating your data over (VMware calls this a side-by-side upgrade).
If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints you have to migrate!
Before upgrading, always make sure you have successful backups of all your nodes, and while you’re at it also take snapshots of all the servers and back up your vRA and vRO databases! You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here. For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.
- Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5, and consult the VMware Product Interoperability Matrix for compatibility with other VMware products.
- I also have SovLabs plugins installed, so make sure to upgrade SovLabs to a vRA 7.5-compatible version. At the time of this post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.
vRealize Suite Lifecycle Manager (vRSLCM) has now been around for a while, and if you are a vRealize or vCloud Suite license holder this is definitely a product that should be part of your VMware portfolio. I am doing this a bit backward, because in my last post I showed how to upgrade your vRA environment using vRSLCM, and only now will I show how to actually install vRA, which comes out of necessity because one of my colleagues accidentally deleted all my lab servers 🙂
For this post, I am using the latest vRSLCM 1.3 and will be deploying a distributed vRA 7.4.
- jvra01 – vRA appliance with embedded vRO (since vRA 7.3 the recommended design is to use embedded rather than external vRO)
- jvra02 – vRA appliance with embedded vRO
- jvraweb01 – vRA IaaS Web
- jvraweb02 – vRA IaaS Web
- jvramgr01 – vRA IaaS Manager
- jvramgr02 – vRA IaaS Manager
Since vRSLCM automates and simplifies the deployment of your VMware SDDC stack, most of your time will be spent on prerequisites, so let’s start with those.
- Manually deploy 4 x vRA IaaS Windows servers in your vCenter Server environment.
- Make sure they are added to the domain and DNS and NTP is working.
- Disable UAC on all Windows servers, and make sure to reboot after disabling it.
- Make sure that IPv6 is disabled on all Windows servers
- Add the Windows service account to User Rights Assignment under Local Security Policies for “Log on as a service” and “Log on as a batch job” on all Windows machines.
- Verify that the minimum resource requirements are met on all Windows servers: set memory to at least 8 GB.
- SQL Database
- Make sure the SQL server has been added to the domain
- Make sure the domain user is added as part of the SQL DB user Logins list with the sysadmin privilege
- Load Balancer
- Make sure that the second member of each pool in the vRealize Automation load balancer is disabled.
There are also scripts available for download to verify the prerequisites, and these run as part of the precheck during creation of the vRA environment, so this can be done later as well.
- Ensure that the vRSLCM appliance has the correct FQDN configured
- The command for correcting the hostname is “/opt/vmware/share/vami/vami_set_hostname <hostname>”
- After setting the correct hostname, verify it by using the command “hostname -f”; from version 1.3 of vRSLCM you can also verify it from the settings page.
- Under vRSLCM settings:
- Register with My VMware to access licenses, download Product Binaries, and consume Marketplace content.
- Download the vRealize Automation 7.4.0 product
- If you already have the OVA downloaded then you can import it under the Product binaries tab.
- Verify that you have vRealize Automation binaries status as completed.
- If you are using self-signed certificates in your environment (not recommended), create a self-signed wildcard certificate for the vRealize Suite product deployments.
- It is best to generate a single SAN certificate with all the product/management virtual hostnames, or a wildcard certificate, and provide this certificate when you create the environment for the first time. This ensures support for post-provisioning actions such as Add Products and Scale Out.
- Configure NTP Servers for deploying products in environments
- Under Data Centers
- Create a Data Center with an associated location.
- Add the vCenter Server where the vRA environment will be deployed to.
- Make sure the data collection is successful.
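For a lab, one way to generate the self-signed wildcard certificate mentioned above is with OpenSSL. The hostnames below are placeholders for a lab domain, and the -addext flag requires OpenSSL 1.1.1 or later:

```shell
# Generate a self-signed wildcard certificate for lab use only.
# CN and SANs are placeholders – replace them with your own domain.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout vrealize.key -out vrealize.crt \
  -subj "/CN=*.lab.local" \
  -addext "subjectAltName=DNS:*.lab.local,DNS:lab.local"

# Inspect the certificate to confirm the SAN entries made it in
openssl x509 -in vrealize.crt -noout -text | grep -A1 "Subject Alternative Name"
```

You can then import the resulting key and certificate when vRSLCM prompts for certificate details during environment creation.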
As with most of my other blog posts, I am just providing a step-by-step guide for quick reference. Please refer to the documentation here for detailed information, and please read the known issues section of the vRealize Automation 7.4 Release Notes, which is updated regularly and helps you be better prepared for the upgrade.
My environment consists of a distributed vRealize Automation running version 7.2 with an external clustered vRealize Orchestrator, which I am upgrading (not migrating) to 7.4 Build 8182598. The process is similar if you are on vRA 7.1 or greater; if you have an older version, refer to VMware’s documentation here.
The in-place upgrade process for the distributed vRA environment happens in 3 stages in the following order:
- vRealize Automation appliances
- IaaS Web server
- vRealize Orchestrator
Prerequisites before we start:
- Make sure all VMware products are compatible with vRA’s current and new release by consulting the Product Interoperability Matrix.
- Verify there is enough storage space on the servers:
- At least 5 GB on the IaaS, SQL, and Model Manager servers
- At least 5 GB on the root partition of each vRA appliance
- 5 GB on the /storage/db partition of the master vRA appliance
- 5 GB on the root partition of each replica virtual appliance
- Verify that MSDTC is enabled on all vRA and associated SQL servers.
- Check that the service “Distributed Transaction Coordinator” is running.
- The primary IaaS Website node (where the Model Manager data is installed) must have Java SE Runtime Environment 8 (64-bit), update 161 or later installed; also verify that the JAVA_HOME environment variable is set correctly after the upgrade.
- If using embedded Postgres DB in a distributed vRA environment
- On master vRA node, navigate to /var/vmware/vpostgres/current/pgdata/
- Close any open files in the pgdata directory and remove any files with a .swp suffix
- Verify the correct ownership (postgres:users) of all files in this directory
- In a distributed vRA environment, change Postgres synchronous replication to async.
- Log in to the vRA appliance management console and open the vRA Settings > Database tab.
- Click Async Mode and wait until the action completes.
- Verify that all nodes in the Sync State column display Async status.
- I only have a master and a replica, so I am already async, but just FYI.
- In vRA tenants verify the following
- Make sure that no custom properties have spaces in the names.
- All saved and in-progress requests have finished successfully
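The appliance-side checks above can be scripted. Here is a minimal sketch: the 5 GB thresholds follow the prerequisites listed earlier, and the pgdata path is assumed to be the default vPostgres location on the vRA appliance:

```shell
# Minimal sketch of pre-upgrade checks on a vRA appliance.

# 1. Free-space check (thresholds in GB, per the prerequisites above)
check_free() {
  mount_point=$1; required_gb=$2
  # Fourth column of portable df output is available space in KB
  avail_kb=$(df -Pk "$mount_point" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -ge $((required_gb * 1024 * 1024)) ]; then
    echo "OK: $mount_point has at least ${required_gb} GB free"
  else
    echo "WARN: $mount_point has less than ${required_gb} GB free"
  fi
}
check_free / 5
[ -d /storage/db ] && check_free /storage/db 5

# 2. vPostgres data directory checks (master node only; default path assumed)
PGDATA="${PGDATA:-/var/vmware/vpostgres/current/pgdata}"
if [ -d "$PGDATA" ]; then
  # Remove leftover editor swap files that can block the upgrade
  find "$PGDATA" -name '*.swp' -print -delete
  # List any files not owned by postgres (this should print nothing)
  if id postgres >/dev/null 2>&1; then
    find "$PGDATA" ! -user postgres -print
  fi
fi
```

Run it on the master appliance before starting the upgrade; any WARN lines or listed files need fixing first.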
Additional requirements before we start: