It has been a while since my last post (just had too much going on), but I had been putting this off for way too long and finally upgraded my vRA lab to 7.5. Here are my notes.
My distributed enterprise vRA 7.4 environment consists of the following components:
- vRA VIP
- vRA IaaS Manager VIP
- 2 x Windows vRA IaaS Manager Service servers
- vRA IaaS Web VIP
- 2 x Windows vRA IaaS Web servers
- 2 x Windows vRA DEM + Agent servers
- vRO VIP
- 2 x external vRO appliances
- External SQL database for vRA and vRO
- Running SovLabs extensibility software
There are two options to get to the desired state: an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA and migrating your data over (VMware calls this a side-by-side upgrade).
If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints, you have to migrate!
As always before upgrading, make sure you have successful backups of all your nodes, take snapshots of all the servers, and back up your vRA and vRO databases! You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here. For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.
- Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5 and also consult the VMware Product Interoperability Matrix about compatibility with other VMware products.
- I also have SovLabs plugins installed, so make sure to upgrade SovLabs to a vRA 7.5 compatible version. At the time of this post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.
vRealize Suite Lifecycle Manager (vRSLCM) has now been around for a while, and if you are a vRealize or vCloud Suite license holder, this is definitely a product that should be part of your VMware portfolio. I am doing things a bit backward, because in my last post I showed how to upgrade your vRA environment using vRSLCM, and only now will I show how to actually install vRA. This comes out of necessity, because one of my colleagues accidentally deleted all my lab servers 🙂
For this post, I am using the latest vRSLCM 1.3 and will be deploying a distributed vRA 7.4.
- jvra01 – vRA appliance with embedded vRO (recommended design to use embedded instead of external vRO since 7.3)
- jvra02 – vRA appliance with embedded vRO
- jvraweb01 – vRA IaaS Web
- jvraweb02 – vRA IaaS Web
- jvramgr01 – vRA IaaS Manager
- jvramgr02 – vRA IaaS Manager
Since vRSLCM automates and simplifies the deployment of your VMware SDDC stack, most of your time will be spent on prerequisites, so let’s start with that.
- Manually deploy 4 x vRA IaaS Windows servers in your vCenter Server environment.
- Make sure they are added to the domain and that DNS and NTP are working.
- Disable UAC on all Windows servers and reboot after disabling it.
- Make sure that IPv6 is disabled on all Windows servers.
- Add the Windows service account under User Rights Assignment in Local Security Policies for “Log on as a service” and “Log on as a batch job” on all Windows machines.
- Verify that the minimum resource requirements are met on all Windows servers: at least 8 GB of RAM.
- SQL Database
- Make sure the SQL server is added to the domain
- Make sure the domain user is added to the SQL Server Logins list with the sysadmin privilege
- Load Balancer
- Make sure that the second member of each pool in the vRealize Automation load balancer is disabled.
There are also scripts available for download to verify the prerequisites when you run the precheck during creation of the vRA environment, so this can also be done later.
- Ensure that the vRSLCM appliance has the correct FQDN configured.
- The command for correcting the hostname is “/opt/vmware/share/vami/vami_set_hostname <hostname>”.
- After setting the correct hostname, verify it with the command “hostname -f”; from version 1.3 of LCM, you can also verify it on the settings page.
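The hostname fix above can be sketched as a small script. The vami_set_hostname path and the hostname -f check are quoted from the steps above; the loose FQDN sanity check is my own addition.

```shell
# Minimal sketch of the vRSLCM hostname fix described above.
# Run as root on the vRSLCM appliance.

is_fqdn() {
  # very loose sanity check: require at least one dot (host.domain)
  case "$1" in
    *?.?*) return 0 ;;
    *)     return 1 ;;
  esac
}

set_lcm_hostname() {
  fqdn="$1"
  if ! is_fqdn "$fqdn"; then
    echo "refusing: '$fqdn' does not look like an FQDN" >&2
    return 1
  fi
  /opt/vmware/share/vami/vami_set_hostname "$fqdn"
  hostname -f   # verify the change took effect
}
```

On the appliance you would call `set_lcm_hostname vrslcm.yourdomain.local` (placeholder name) and confirm the output of `hostname -f` matches.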
- Under vRSLCM settings:
- Register with My VMware to access licenses, download Product Binaries, and consume Marketplace content.
- Download the vRealize Automation 7.4.0 product
- If you already have the OVA downloaded then you can import it under the Product binaries tab.
- Verify that you have vRealize Automation binaries status as completed.
- If you are using self-signed certificates in your environment (not recommended), then create a self-signed wildcard certificate for vRealize Suite product deployments.
- It is best to generate a single SAN certificate with all the product or management virtual host names, or a wildcard certificate, and provide this certificate when you create the environment for the first time. This ensures support for post-provisioning actions such as Add Products and Scale Out.
- Configure NTP Servers for deploying products in environments
- Under Data Centers
- Create a Data Center with an associated location.
- Add the vCenter Server where the vRA environment will be deployed to.
- Make sure the data collection is successful.
As with most of my other blog posts, I am just providing a step-by-step guide for quick reference. Please refer to the documentation here for detailed information, and read the known issues section of the vRealize Automation 7.4 Release Notes, which is updated regularly and helps you be better prepared for the upgrade.
My environment consists of a distributed vRealize Automation running version 7.2 with an external clustered vRealize Orchestrator, which I am upgrading (not migrating) to 7.4 build 8182598. The process is similar for vRA 7.1 and greater. If you have an older version, refer to VMware’s documentation here.
The in-place upgrade process for the distributed vRA environment happens in 3 stages in the following order:
- vRealize Automation appliances
- IaaS Web server
- vRealize Orchestrator
Pre-requisites before we start:
- Make sure all VMware products are compatible with vRA’s current and new release by consulting the Product Interoperability Matrix.
- Verify there is enough storage space on the servers:
- At least 5 GB on the IaaS, SQL and Model Manager servers
- At least 5 GB on the root partition of each vRA appliance
- 5 GB on the /storage/db partition for the master vRA appliance
- 5 GB on the root partition for each replica virtual appliance
- Verify that MSDTC is enabled on all vRA and associated SQL servers.
- Check that the service “Distributed Transaction Coordinator” is running.
- The primary IaaS Website node (where the Model Manager data is installed) must have Java SE Runtime Environment 8 (64-bit), update 161 or later installed; also verify that the JAVA_HOME environment variable is set correctly after the upgrade.
- If using embedded Postgres DB in a distributed vRA environment
- On master vRA node, navigate to /var/vmware/vpostgres/current/pgdata/
- Close any open files in the pgdata directory and remove any files with a .swp suffix
- Verify the correct ownership (postgres:users) of all files in this directory
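The pgdata cleanup above can be sketched as two small helpers. The directory path and the postgres:users ownership are quoted from the steps above; this is only a sketch to run as root on the master node.

```shell
# Sketch of the pgdata cleanup steps above.
PGDATA=/var/vmware/vpostgres/current/pgdata

remove_swp_files() {
  # remove leftover editor swap files (e.g. from an open vi session)
  find "$1" -maxdepth 1 -type f -name '*.swp' -print -delete
}

fix_ownership() {
  # reset ownership of everything in the directory to postgres:users
  # (run as root on the master vRA node)
  chown -R postgres:users "$1"
}

# usage, as root on the master node:
#   remove_swp_files "$PGDATA"
#   fix_ownership "$PGDATA"
```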
- In a distributed vRA environment, change Postgres synchronous replication to async.
- Open the Database tab in the vRA appliance management console.
- Click Async Mode and wait until the action completes.
- Verify that all nodes in the Sync State column display Async status
- I have only a master and a replica, so I am already async, but just FYI.
- In your vRA tenants, verify the following:
- Make sure that no custom properties have spaces in their names.
- Make sure all saved and in-progress requests have finished successfully.
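The free-space checks in the prerequisites above are easy to script. This is a minimal sketch assuming the 5 GB minimums quoted above; run it on each vRA appliance (the /storage/db check only applies to the master).

```shell
#!/bin/sh
# Sketch of the pre-upgrade free-space checks listed above,
# assuming the 5 GB minimums from the post.

check_free_gb() {
  path="$1" need_gb="$2"
  # df -P prints sizes in 1K blocks; column 4 is available space
  avail_kb=$(df -P "$path" 2>/dev/null | awk 'NR==2 {print $4}')
  if [ -z "$avail_kb" ]; then
    echo "SKIP: $path is not mounted here"
    return 0
  fi
  if [ $((avail_kb / 1024 / 1024)) -ge "$need_gb" ]; then
    echo "OK:   $path has at least ${need_gb} GB free"
  else
    echo "WARN: $path has less than ${need_gb} GB free"
  fi
}

check_free_gb /           5   # all appliances
check_free_gb /storage/db 5   # master appliance only
```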
VMware’s vRealize Suite of products is great; each product provides a lot of features and capabilities, and VMware has been working hard on integration between them. However, these products are very much standalone, with no cohesion between them from a lifecycle management perspective. This creates a lot of management overhead to install, upgrade, configure and manage all these products, as well as the additional solution extensions.
Enter vRealize Suite Lifecycle Manager (vRSLCM), a relatively new product that is available to all customers with a vRealize Suite license. It automates the installation, configuration, and upgrading of the following products:
- vRealize Automation
- vRealize Operations Manager
- vRealize Log Insight
- vRealize Business for Cloud
In this blog, I am going to provide the steps on how to import an existing distributed Enterprise vRA 7.2 environment and perform the upgrade to 7.4 using vRSLCM 1.2.
Let’s start off with the initial creation of the environment, which requires a lot of information up front; when you create or import products into the environment at a later time, it will make use of this stored environment information.
- Log in to your vRSLCM
- Select Create Environments
- Enter Environment Data
- Data Center (this you should have created during the initial configuration of your vRSLCM environment)
- Environment Type
- Environment Name
- Administrator email
- Default root password
- Click Next
- Create Environment
- Check the box for vRealize Automation
- Since we already have an environment that we need to import, select the Import radio button.
- Click Next
- Scroll down to the bottom.
- Check the box to accept the terms and conditions.
- Either pick a vRealize Suite license, which will be populated from your my.vmware.com account, or enter one manually.
- Click Next
- Infrastructure Details (This information is used if you deploy new products)
- Select the vCenter Server where your vRealize Suite products reside.
- Select Cluster
- Select Network
- Select Datastore
- Select preferred Disk format for product deployments.
- Click Next
- Network (This information is used if you deploy new products)
- Enter the default gateway of the network where your vRealize Suite products are or will be deployed.
- Enter Domain Name
- Enter search path
- Enter DNS
- Enter Netmask
- Click Next
- Certificates (I import a wildcard certificate; a multi-domain certificate would also be a good choice to simplify the process)
- Click Next to use the generated self-signed certificate, or click the Import Certificate button to add an existing wildcard or SAN certificate.
- Click Next
- Import (Since we selected import, we are now asked questions about our existing environment)
- Enter vRA root password
- Enter vRA Default Administrator password
- Enter Tenant User name.
- Selecting the “administrator” user works just fine here.
- Enter vRA Primary Node FQDN
- Enter IaaS Username.
- I used the domain service account assigned to all IaaS servers
- For the default vRA tenant name, select “vsphere.local”.
- Enter vRA Tenant password
- Enter IaaS Password for the domain account.
- Select the vCenter Server that the vRA server is running on from the drop-down.
- Click Next
- Review summary
- Click Download configuration to save the JSON file for later use.
- Click Submit
- This will run for a while to configure the environment and import vRA.
- If it fails, you have a couple of options
- Review the requests
- Under actions select retry and verify the information that you have entered.
- Delete the environment and start over (1.2 lets you specify whether to also delete the VMs when you delete a fully configured environment, which is definitely not recommended in most cases!)
- If you want to pause the import, you can always come back later and resume.
- Verify the vRA product environment
- Select Environment tab on the left side
- Select View details of the newly created environment
- Verify that all the information about your distributed vRA environment is accurate. vRSLCM collects all your VIP names, vRA, IaaS and database servers, as well as where each component resides.
SovLabs isn’t just a vRA plugin; it’s enterprise software that extends the capabilities of your vRealize Automation environment, providing you with that end-to-end solution you have been craving. As with any other enterprise software, they periodically provide new patches and releases, and with SovLabs that is no different.
The new 2017.3.x was released in August and provides some awesome new modules:
- Men & Mice DNS and IPAM
- SolarWinds DNS
- Backup as a Service
- Automate policy-driven backups and provide self-service VM- and file-level recovery
- SovLabs VM tagging
- Drive rich metadata using VM tags and categories
- SovLabs Property Toolkit
- Manage your existing custom properties on VMs with the SovLabs Template Engine
- ServiceNow Support for Jakarta
- Puppet support for 2017.1
- VMware Tools connection
- Connections to Windows/Linux servers can now be made through VMware Tools, which removes the requirement for WinRM, CygwinSSH or WinSSHD to be installed. This is huge!
- As a customer you can sign up under the self-service portal and view the detailed release notes here:
So how do we go about upgrading SovLabs to the latest version?
Here is a step-by-step guide to upgrading from 2017.2.x to 2017.3.x. (There are some additional steps if you are upgrading from <= 2017.1.x, so please contact SovLabs support.)
- First off we want to create a backup of the vRO package
- Login to vRO Client
- Click Design
- Click on the package tab
- Click on the package icon on right hand side menu bar
- Enter name “com.sovlabs.backup.resources”
- Edit the newly created package by clicking the pencil icon on the right-hand side menu bar
- Click the Resources tab
- Click the Folder + icon
- Expand the Library folder, select the SovLabs folder
- Click on the Select button
- Once loaded, click save and close
- Right click the saved package and click export package
- Create a folder called sovlabs under Downloads
- Leave the rest of the settings as default
- Save to your local system
- Now, let’s save the old SovLabs plugin:
- Use WinSCP and login as root to vRO appliance
- Go to directory /var/lib/vco/app-server/plugins
- Save o11nplugin-sovlabs.dar to your local file system in the same sovlabs folder created earlier under Downloads
- We need to update the vRO heap size
- If you have done this before, you can skip this step, but it is needed to install the larger SovLabs module file into vRO; otherwise the appliance might run out of memory during the install/upgrade.
- Remember, if you have a vRO cluster, you have to perform these steps on both servers.
- SSH into vRO appliance with user root
- Run # vi /var/lib/vco/configuration/bin/setenv.sh
- Find the #MEM_OPTS section
- Replace the -Xmx512m \ with -Xmx768m \
- Save the file
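If you prefer to script the edit instead of using vi, the same change can be made with sed. The file path and heap values are quoted from the steps above; this is just a sketch, and it keeps a backup of the original file.

```shell
# Sketch of the heap bump above, using sed instead of vi.
# Path and values are as quoted in the post; run on each vRO node.
SETENV=/var/lib/vco/configuration/bin/setenv.sh

bump_vco_heap() {
  f="$1"
  cp "$f" "$f.bak"                    # keep a backup first
  sed -i 's/-Xmx512m/-Xmx768m/' "$f"  # raise the max heap
  grep -- '-Xmx' "$f"                 # show the new value
}

# usage on each vRO node, then restart the vRO services to apply:
#   bump_vco_heap "$SETENV"
```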
- Delete all SovLabs license keys
- Login to vRA tenant
- Click on Items tab -> SovLabs vRA Extensibility modules -> SovLabs License
- For each SovLabs License item listed
- Select Actions -> Delete License
- Download the SovLabs plugin
- Talk to SovLabs support about getting the software downloaded.
- Install the plugin into vRO appliance
- Login to controlcenter
- Select Plug-Ins -> Manage Plug-ins
- Click Browse
- Select the plugin
- Accept EULA
- Click on Install
- Accept the EULA
- Restart the vRO server
- On the Home page, click on the Startup Options icon
- Click on Restart
- Wait for vRO to restart successfully
- Log back in to the vRO configuration page
- Click on the Manage Plug-Ins icon
- Verify that the installed plugin is listed among the vRO plugins
- Now, if you have a clustered vRO 7.2 or above, the plugin should sync, but I have seen some problems with 7.2, so follow these steps:
- Perform a full reboot on the primary so that the pending and active config fingerprint IDs match.
- Then push the config to the other standby node
- It will need to be rebooted, which it often will not do automatically, so make sure you perform this step yourself.
- Verify that Synchronization state shows synchronized and verify the version of the plugin on both active and standby nodes.
- Login to the vRO Client and run the configuration
- Click on Design mode
- Click on WorkFlow tab
- Right click vRO workflow, “SovLabs/Configuration/SovLabs Configuration”
- Select Start Workflow
- The SovLabs Configuration workflow only needs to be run on one vRO in a clustered environment
- Select yes to accept the EULA
- Click Next
- Select the appropriate tenant and business group
- Create SovLabs vRA Catalog Service? = No
- Publish License Content? = No
- Click Next
- Upgrade existing SovLabs vRA content? = Yes
- Click Next
- Install or Update SovLabs workflow subscriptions (vRA7.x)? = Yes
- *Enables vRA to call vRO during machine lifecycles
- Click Submit
- Verify that the SovLabs Configuration workflow completed successfully
- Lastly, let’s verify the SovLabs Plugin in vRA
- Select Catalog tab
- Verify that Add license -> SovLabs Modules catalog exists
- Now let’s install the new license key for 2017.3.x
- This process has been drastically simplified: a single license key now licenses all modules, where previously this was done one module at a time.
- Select Catalog tab -> SovLabs vRA Extensibility Modules -> Add license – SovLabs Modules
- Copy the text from the license file and paste it into the field
- Click Submit
- Under the Catalog tab -> SovLabs vRA Extensibility Modules, verify that all catalog items are available.
- If you ever need to roll back then follow the steps in the document provided by SovLabs:
I ran into an interesting problem today on my distributed (enterprise) vRA 7.2 environment and wanted to share how I got it resolved.
I have not deployed anything in my environment for a while, but when I tried today my request was not completing and the status showed “In Progress”.
- Infrastructure -> Monitoring -> Audit Logs
- Machine requests show that it was started
- Infrastructure -> Monitoring -> Log
- Found error on my manager services nodes “[EventBrokerService] Failed resuming workflow.. State VMPSMasterWorkflow32.Requested(POST). Event
Event Queue operation failed with MessageQueueErrorCode QueueNotFound for queue ’30da8a16-c532-4e13-bd81-39b09114a887′.”
- Logged into the Manager Service nodes and reviewed the logs in Event Viewer
- Found error “Error occurred while registering the DEM.
System.Data.Services.Client.DataServiceTransportException: The underlying connection was closed: An unexpected error occurred on a send. —> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. —> System.IO.IOException: Authentication failed because the remote party has closed the transport stream”
- Logged into the Web server nodes and reviewed the logs in Event Viewer
- Found errors similar to the above
- Found error messages like “Error occurred writing to the repository tracking log”, “Error occurred while pinging repository”
Review DEM status:
- Infrastructure -> Monitoring -> DEM status
- Both my DEM Worker and Orchestrator show a status of Active (green)
I did some investigating and found two problems that I needed to address:
- If you find errors like “Event Queue operation failed with MessageQueueErrorCode QueueNotFound for queue”, then you probably have the Manager Service running on both instances (nodes).
- If you find errors like “System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it”, then the problem is most likely with certificates. The vRA documentation notes that if you have commas in the OU section of the IaaS certificate, your VM provisioning might fail, and the following workarounds are provided:
- Remove the commas from the OU section of the IaaS certificate, OR
- Change the polling method from WebSocket to HTTP to resolve the issues.
- Open the Manager Service configuration file in a text editor.
- C:\Program Files (x86)\VMware\vCAC\Server\Manager Service.exe.config
- Add the following lines to <appSettings>
- <add key="Extensibility.Client.RetrievalMethod" value="Polling"/>
- <add key="Extensibility.Client.PollingInterval" value="2000"/>
- <add key="Extensibility.Client.PollingMaxEvents" value="128"/>
- Restart the manager services
Some other things to verify:
- On Web server Windows OS nodes
- Verify that the VMware Cloud Automation Center Management Agent service is running
- On Manager service Window OS nodes
- Verify that the VMware Cloud Automation Center Service is running
- This should only be running on one server if you have a load balancer in front.
- Set the Startup type to Manual on the second server so you don’t have to worry about this service starting, but remember that you then have to fail over manually by changing the service back to Automatic and starting it.
- In vRA 7.3 the failover process is automated, which is great!
- Verify that the VMware Cloud Automation Center Management Agent service is running on your instances
- On DEM server Windows OS nodes
- Verify that the VMware vCloud Automation Center Agent and Management Agent services are running
- Most people do not know this, but VMware also has a very cool vRealize production test tool, which I will blog about shortly.
In part 1, I showed how to add Microsoft Azure to vRA. In this part 2, I will show how to add Microsoft Azure with a non-EA account to vRealize Business, which will provide cost information for your MS Azure account.
I have to apologize for taking so long to publish this. I had the blog written and ready to go, but it was created with vRB 7.2, which had a lot of bugs with Azure integration, and the documentation was not very thorough and made use of the old Azure portal interface for configuration. The problem I ran into can be viewed in the community post here, but with a lot of views and no responses, I decided to wait until vRB 7.3 to revisit this.
- You must have a Microsoft Azure Enterprise Agreement (EA) or non-EA account.
- If using an MS Azure non-EA account, you must have one of the following credit offers:
- Monetary commitment
- Monetary credit
To add a non-EA account, you will also need the following information during configuration, so please make sure you have it available. I am also providing the steps on how to configure your non-EA account.
- Client ID
- When you register a client app, such as a console app, you receive a Client ID. The Client ID is used by the application to identify itself to the users it is requesting permissions from.
- Location of Purchase
- Tenant ID
- The value can be retrieved from the Azure default Active Directory when you select Manage -> Properties in the menu.
- Secret Key
- The value is defined during app registration.