Upgrade VMware vRA to 7.5 and migrate external vRO to embedded vRO 7.5

It has been a while since my last post (just had too much going on), but I have been putting this off for way too long and I finally upgraded my vRA lab to 7.5.  Here are my notes.

My distributed enterprise vRA 7.4 environment consists of the following components:

  • vRA VIP
    • 2 x vRA appliances
  • vRA IaaS Manager VIP
    • 2 x Windows vRA IaaS Manager Service servers
  • vRA IaaS Web VIP
    • 2 x Windows vRA IaaS Web servers
  • 2 x Windows vRA DEM + Agent servers
  • vRO VIP
    • 2 x external vRO appliances
  • External SQL database for vRA and vRO
  • Running SovLabs extensibility software

There are 2 options available to get to the desired state: either an in-place upgrade of your existing vRA environment, or building out a new greenfield vRA environment and migrating your data over (VMware calls this a side-by-side upgrade).

If you are currently running 6.2.0 – 6.2.4 or 7.0.x, or have vCloud Director or vCloud Air endpoints, you have to migrate!

Always before upgrading, make sure you have successful backups of all your nodes, and while you’re at it, also take snapshots of all the servers and back up your vRA and vRO databases!  You can never be too careful, ever! The upgrade steps for vRA are the same as what I have blogged about here.  For this exercise, I am performing an in-place upgrade of vRA from 7.4 to 7.5, so please review the documentation if you are upgrading from 6.2.5.

  • Also, verify that all appliances and servers that are part of your deployment meet the system requirements for vRA 7.5, and consult the VMware Product Interoperability Matrix about compatibility with other VMware products.
  • I also have the SovLabs plugins installed, so make sure to upgrade SovLabs to a vRA 7.5 compatible version.  At the time of this post, I upgraded to 2018.3.1. Upgrade steps for SovLabs can be found here.

vRA upgrade problem that I experienced:

  • My upgrade failed a couple of times with the following error messages displayed in /opt/vmware/var/log/vami/updatecli.log
    • Trying to upgrade Management Agent
      The Management Agent upgrade did not finish within the expected time on the nodes listed below. Make sure the Management Agent is up and running on them and that it has connectivity to the VA. Check the Management Agent log for details.
      + res=1
      + echo 'Script /etc/bootstrap/preupdate.d/00-00-03-upgrade-management-agents failed, error status 1'
      + exit 1
      + rm -f /tmp/preupdate-err-log
      + exit 1
      + excode=1
      + test 1 -gt 0
      + vami_update_msg set pre-install 'Pre-install: failed (code p-1)'
      + test -x /usr/sbin/vami-update-msg
      + /usr/sbin/vami-update-msg set pre-install 'Pre-install: failed (code p-1)'
      + sleep 1
      + vami_update_msg set update-status 'Update failed (code p-1). Check logs in /opt/vmware/var/log/vami or retry update later.'
      + test -x /usr/sbin/vami-update-msg
      + /usr/sbin/vami-update-msg set update-status 'Update failed (code p-1). Check logs in /opt/vmware/var/log/vami or retry update later.'
      + exit
      03/12/2018 21:12:20 [ERROR] Failed with exit code 256
  • I viewed the vRA cluster again and saw that some of my IaaS nodes were upgraded to 7.5 and some were not!
    • VMware documentation has some information on the IaaS management agents not upgrading here and here, but I did not want to upgrade them manually.
    • I RDP’d into the IaaS servers and found that the management agent service was now stopped.
    • I found another VMware KB here which mentions the same error messages but does not indicate that it is related to 7.5; I performed the steps to overwrite the config file anyway 🙂
    • I started the service again, verified in the cluster view that it was connected, and started the upgrade again.
    • This seemed to fix it, and I was able to upgrade one or two IaaS nodes before it failed again and I had to repeat the steps for the remaining IaaS host agents that did not upgrade.
    • Just keep going and eventually all the IaaS hosts will update and the upgrade will complete.
      • I could probably just have upgraded the agents manually as mentioned in the documentation, but hey, to each their own.
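While retrying, it helps to watch the update log on the primary vRA appliance so you catch the next Management Agent failure early. A minimal sketch (the log path is the one from the error output above):

```shell
# On the primary vRA appliance: show recent failures from the VAMI update log.
LOG=/opt/vmware/var/log/vami/updatecli.log
if [ -f "$LOG" ]; then
    # Pull the last few error/failure lines instead of scrolling the whole log
    grep -E 'ERROR|failed' "$LOG" | tail -n 20
else
    echo "Log not found: $LOG"
fi
```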

Now for the “fun” part 🙂

With vRO 7.5 there is NO upgrade path and you have to migrate, either to a newly deployed external vRO 7.5 appliance or to the embedded vRO on the vRA appliance.  For this environment, I chose to migrate to the embedded vRO.  Here is a summary of my steps.

Prerequisites before starting the migration process:

  • Stop the source Orchestrator services.
  • Enable SSH access for each node in the source and target environments.
  • Ensure that the source Orchestrator database is accessible from the target Orchestrator environment.
  • Back up the source Orchestrator database, including the database schema.
  • Complete all workflows in the source Orchestrator environment that are in the running or waiting for input states. Workflows in these states are marked as failed after migration to the target environment.
  • Not in the documentation and my personal recommendations for prereqs:
    • Make sure you have successful backups of your vRO appliances and also take snapshots of each VM.
    • If you have a cluster environment, make sure that your target cluster is in async mode.
    • Log in to the vRO database and make sure that there are no entries in the VMO_Lock table.  If there are entries, run the following workflow in the vRO client: Library -> Locking -> Release all locks.
      • This is VERY important, as I will explain shortly.
    • I have SovLabs installed so make sure to prep your vRA server where the embedded vRO is running.  Steps available here.
    • If you are migrating from a source vRO using vpostgres, verify that you have remote connections enabled
      • SSH to vRO appliance
      • If it is missing, add the following line to /var/vmware/vpostgres/current/pgdata/postgresql.conf
        • listen_addresses =’*’
      • Restart the PostgreSQL service
        • service vpostgres restart
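The last two vPostgres prerequisites can be scripted on the source appliance. A rough sketch, assuming the default vPostgres paths plus the database name and port from my environment (vmware on 5433); adjust for yours:

```shell
# On the source vRO appliance (assumes the default vPostgres layout).
PGCONF=/var/vmware/vpostgres/current/pgdata/postgresql.conf

if [ -f "$PGCONF" ]; then
    # Enable remote connections if the setting is not already present
    grep -q '^listen_addresses' "$PGCONF" || echo "listen_addresses = '*'" >> "$PGCONF"
    service vpostgres restart

    # Any rows returned here must be released via the vRO client workflow
    # Library -> Locking -> "Release all locks" before migrating
    su - postgres -c "psql -p 5433 -d vmware -c 'SELECT count(*) FROM vmo_lock;'"
else
    echo "vPostgres config not found at $PGCONF"
fi
```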

Migration steps:

  • Log in to the VAMI interface of the vRA appliance.
  • Select the Migrate tab
  • Select vRO
  • Enter the hostname and root credentials of the source Orchestrator.
    • You can enter the credentials of any of the nodes if you have a clustered environment.
  • Enter the root credentials of the target Orchestrator master node.
  • Click Validate
    • If a migration prerequisite validation fails, review the failure and correct the problem. Click Edit Settings and retry the validation.
    • You can only continue once validation is successful.  I had no problems here.
  • Click Migrate
    • Very important: if a migration step fails, you have to revert the target environment to its pre-migration state before retrying.  Remember those snapshots!!!
      • My migration failed.
      • The logs revealed the following message:
        • MigrateDbOptions{sourceDbUsername='', sourceDbPassword='', sourceJdbcUrl='jdbc:jtds:sqlserver://;domain=', dropTarget=true, targetDbUsername='', targetDbPassword='***', targetJdbcUrl='jdbc:postgresql://localhost:5433/vmware?sslmode=verify-ca&sslrootcert=/var/vmware/vpostgres/current/.postgresql/root.crt', skipTables='[vmo_vroconfiguration, vmo_vroconfigurationhistory, vmo_clustermember]'}
          ch.dunes.vso.db.DatabaseException: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13 - ErrorCode:0 = TargetEx: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13
          com.vmware.o11n.cli.configuration.exception.CommandException: ch.dunes.vso.db.DatabaseException: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13 - ErrorCode:0 = TargetEx: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13
          at com.vmware.o11n.cli.configuration.commands.db.MigrateDbCommand.execute(MigrateDbCommand.java:61)
          at com.vmware.o11n.cli.configuration.commands.ConfigurationCommand.executeCmd(ConfigurationCommand.java:110)
          at com.vmware.o11n.cli.configuration.ConfigurationCli.executeCommand(ConfigurationCli.java:145)
          at com.vmware.o11n.cli.configuration.ConfigurationCli.main(ConfigurationCli.java:121)
          Caused by: ch.dunes.vso.db.DatabaseException: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13 - ErrorCode:0 = TargetEx: org.postgresql.util.PSQLException: ERROR: relation "vmo_lock" does not exist
          Position: 13
          at com.vmware.o11n.configuration.database.migrate.MigrateDatabase.migrate(MigrateDatabase.java:148)
          at com.vmware.o11n.cli.configuration.commands.db.MigrateDbCommand.execute(MigrateDbCommand.java:58)
          ... 3 more
      • I reviewed the VMO_Lock table and found that it still had entries in it.
        • I ran the workflow in the source vRO client to release all locks.
        • Verified that all the entries were deleted.
        • Reverted my vRA appliances to the pre-migration snapshots (scary stuff right here!).
      • Ran the migration again and this time it was successful.

Post Migration steps:

  • Log in to the Control Center
    • Generate a package signing certificate
      • Certificate -> Package Signing Certificate -> Generate
    • Validate the status of the configuration.
    • Test the admin credentials under Configuration -> Authentication Provider.  You should be able to authenticate successfully.
  • Log in to the vRO Java client and verify the state of the migrated scheduled tasks
    • Navigate to Scheduled Tasks
    • I received an error on the scheduled tasks
    • Click Update and enter the password for the user name specified.
    • This should resolve the errors.
  • Migrate Dynamic Type configurations
    • I did not have any dynamic types to migrate, but here are the steps as provided in the documentation.
    • Export dynamic type configurations in the source environment.
      1. Log in to the Java Client as an administrator user.
      2. Select the Workflows tab.
      3. Select Library > Dynamic Types > Configuration.
      4. Select the Export Configuration as Package workflow and run it.
      5. Click Not Set > Insert value.
      6. Select the namespaces you want to export and click Add to add them to the package.
      7. Click Submit to export the package.
    • Import dynamic type configurations in the target environment.
      1. Log in to the Java Client as administrator.
      2. Select the Workflows tab.
      3. Select Library > Dynamic Types > Configuration.
      4. Select the Import Configuration From Package workflow and run it.
      5. Click Configuration package to import.
      6. Browse to the exported package file and click Attach file.
      7. Review the information about the namespaces attached to the package and click Submit.
    • Select Inventory > Dynamic Types to verify that the dynamic type namespaces have been imported.
  • Now that vRO has been migrated, we have to update the configured vRO server in vRA as well as the endpoint.
    • Login to vRA as a system admin
      • Administration > vRO configuration > Server configuration.
        • If the Orchestrator server is the external Orchestrator server, update the configuration to the hostname of the embedded Orchestrator server of the vRealize Automation environment.
        • For embedded the port is 443!
    • Login to vRA as a tenant admin.
      • Administration > vRO configuration > Server configuration.
        • If the configured Orchestrator server is the external Orchestrator server, update the configuration to the hostname of the embedded Orchestrator server of the vRealize Automation environment.
        • For embedded the port is 443!
      • Infrastructure > Endpoint > Endpoint
        • Update the existing endpoint to the embedded target vRO.
        • Remember to remove the port –  https://vrahostfqdnORloadbalancer/vco/
        • Select the vRO Endpoint
          • Click on Actions -> Data Collection
          • Start the collection and verify status “data collection succeeded”
    • Login to vRA VAMI
      • Verify that the vco service is registered
      • VMware also mentions in the documentation to unregister the vco service of the external vRO server, which is odd because there is no way to tell from the UI which of the vco services belongs to the internal or the external vRO.  The Unregister button is also always grayed out.
        • I left this as is.
  • Deploy a couple of Catalog items and verify successful deployment.
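As a final smoke test from the command line, you can also hit the embedded vRO REST API on the vRA appliance; the /vco/api/about endpoint returns version information. A minimal sketch (vra.example.com is a placeholder for your vRA host or load balancer FQDN):

```shell
# Query the embedded vRO "about" endpoint to confirm the service responds.
# vra.example.com is a placeholder; -k skips certificate verification (lab use only).
VRA_HOST=vra.example.com
curl -sk --max-time 5 "https://$VRA_HOST/vco/api/about" || echo "vRO not reachable at $VRA_HOST"
```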

Links:

vRealize Automation 7.5 Release Notes

VMware vRealize Orchestrator 7.5 Release Notes

Migrating the Orchestrator Configuration

Migrating an External Orchestrator Server to vRealize Automation 7.5

2 thoughts on “Upgrade VMware vRA to 7.5 and migrate external vRO to embedded vRO 7.5”

  1. How did you view the entries in ‘VMO_Lock’?

    I’m still a bit fuzzy on how you resolved the ‘relation “vmo_lock” does not exist’ problem. Would you be able to elaborate?


    • Before migrating, open the vRO database on the SQL server. Right-click dbo.VMO_Lock and select top 1000 rows to see if there are any results. If nothing comes back, you should be good; if there are entries, open the vRO client, select Design -> Library -> Locking -> Release all locks, and run the workflow. Once complete, check to make sure the table is clear. That should do the trick.
