vRA 7.1 installation steps – Advanced

This post was originally supposed to cover the new silent installer in vRA 7.1, but since I have not been able to get the IaaS prerequisite fixes to apply successfully without some tweaking and external scripts, I thought I would start from the beginning with the UI installer.  I have a case open with VMware GSS and hope to get a solution soon; when I do, I will create a new post with detailed information on my findings.

There are already many step-by-step guides out there for the vRA installation wizard, so I will keep this short and only provide the essentials from my distributed-installation notes.

Firstly here is a list of all the new features in vRA 7.1:

  • Streamlined installation process using a silent installer.
  • Agent and prerequisite command line interface.
  • Migration tool to move data from a source vRealize Automation 6.2.x environment to a fresh vRealize Automation 7.1 environment while preserving the source environment.
  • IPAM integration framework with ability to deploy machines and applications with automated assignment of IP addresses from leading IP address management systems, with the first integration with Infoblox.
  • Integrated support for Active Directory policies.
  • Custom property dictionary controls to improve property definitions and vRealize Orchestrator actions.
  • Reconfigure life-cycle events by means of event broker workflow subscriptions.
  • Additional vSphere provisioning options and data collection improvements.
  • Ability to manually conduct horizontal scale in and scale out of application environments deployed by vRealize Automation, including the automatic update of dependent components.
  • Customizable message of the day portlet available on the home page.
  • Additional information and filter options on the Items page.
  • Discontinued support for PostgreSQL external database.


I am not going to go into too much detail here, but VMware does a great job of explaining everything in the online documentation.

A couple of important prerequisites which I want to highlight:
  • SQL Server AlwaysOn availability groups are now supported, but only with SQL Server 2016
  • 4 VIPs are required: vRA appliance, vRA Manager, vRA Web, vRO
    • For the initial installation and configuration you don’t want the VIPs load balanced, so use DNS A-records, and once everything is done, flip over to your VIPs.
  • Recommend using a vRA service account
    • Add this service account to your IaaS Windows Local Administrator group
  • Recommend using a vRO and SQL service account as well.
  • Configure NTP on Windows hosts and appliances with at least 2 NTP servers.
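To illustrate the NTP prerequisite, here is a minimal sketch of the appliance-side configuration, assuming a standard ntpd on the Linux appliances; the pool.ntp.org names are placeholders for your own time sources, and the Windows hosts are typically pointed at the same sources with w32tm:

```shell
# Sketch of an ntp.conf fragment for the appliances -- at least two sources.
# 0.pool.ntp.org / 1.pool.ntp.org are placeholders; substitute your NTP servers.
cat > /tmp/ntp.conf.fragment <<'EOF'
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
EOF

# On the Windows IaaS hosts the rough equivalent (elevated prompt) would be:
#   w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
grep -c '^server' /tmp/ntp.conf.fragment   # expect 2 time sources
```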
Lastly, the prerequisites for the Windows IaaS servers can be daunting, but since vRA 7 the prerequisite checker/fixer is built into the installation wizard, making this process super easy.

My enterprise environment and the virtual machines that were deployed:
lljvra01 vRA 7 Appliance #1
lljvra02 vRA 7 Appliance #2
lljvramgr01 vRA 7 manager #1, Windows 2012 R2
lljvramgr02 vRA 7 manager #2, Windows 2012 R2
lljvraweb01 vRA 7 web #1, Windows 2012 R2
lljvraweb02 vRA 7 web #2, Windows 2012 R2
lljvradem01 vRA 7 DEM #1, Windows 2012 R2
lljvradem02 vRA 7 DEM #2, Windows 2012 R2
lljvro01 vRO Appliance #1
lljvro02  vRO Appliance #2


Install the Management Agent on Windows IaaS servers.

Management Agents are stand-alone IaaS components that register IaaS nodes with vRealize Automation appliances, automate the installation and management of IaaS components, and collect support and telemetry information.

The Management Agent must be installed on each Windows server hosting IaaS components.

  • Open a browser to the vRA appliance management console (https://&lt;vra-appliance-fqdn&gt;:5480/installer)
  • Click the Management Agent installer link, which will download the installer
  • Run the installer by double-clicking the .msi file
  • Click Next on the Welcome page.
  • Accept the EULA and click Next
  • Accept the path
  • Click Next
  • Enter the management site service details
    • vRA appliance address
    • Enter username root
    • Enter password
    • Click Load to populate the SHA1 fingerprint
    • Check the box to confirm the fingerprint matches.
  • Click Next
  • Click Finish
  • Verify that the service is running.






vCenter Server 6.5 announcement and detailed new features list

VMware finally pulled the curtain on their new vSphere 6.5 products during VMworld Europe 2016 in Barcelona.  No release dates were announced, but there is a lot of good stuff here.

  • vCenter 6.5
  • SRM 6.5
  • vROps 6.3
  • vRA 7.2
  • vSAN 6.5
  • VVOLS 2.0

I was fortunate enough to be part of the vCenter 6.5 beta and was impressed with the new features and VMware’s renewed focus on their core application stack.  I also had a couple of JFJ (JumpForJoy) moments, which I have listed below.

  • Auto-deploy finally got a UI and is now available for configuration in the vSphere Web client.
    • Creation of image profiles
    • Creation and activation of deploy rules
    • Management of deploy rules with ability to check compliance and remediation.
    • Ability to manually match non-deployed ESXi hosts to rules.
  • Enhancements to host profiles
    • Ability to search for a specific setting name, property name, or value by filtering the host profile tree while editing the host profile.
    • Copying settings from one host profile to another
    • Mark host profile settings as favorite and filter based on favorites.
  • Current Web client UI and usability improvements
    • Performance improvements
    • Keyboard shortcuts
    • Keyboard support in dialogs, wizards, and confirmations
    • Recent objects global pane
    • Related objects tab replaced with object category tabs
    • Object details title bar displays the selected object’s icon and name, action icons, and the Actions menu
  • Live refresh, yes live!  JFJ moment!  This feature is awesome, and I am not sure why it took this long to make available, especially since we now have HTML5.   The real-time updates also appear across users who are logged into the vSphere client at the same time.
    • Live tasks, alarm triggering, and alarm resets
    • Navigation tree updates
    • Custom attributes
  • Oh yes and then there is the HTML5 web client.
    • HTML5 (<vcenter>/ui) and vSphere web client (<vcenter>/vsphere-client) are both available.
    • The HTML5 web client does not yet have feature parity with the vSphere web client (hopefully that will change soon), but I recommend using the HTML5 client as much as possible.
  • New and updated HA features.  JFJ moments all over the place!
    • Enhancements in the way calculations and configuration are done to determine failover capacity within a cluster.
    • Cluster Resource Percentage will be the default admission control policy moving forward.  The default failover capacity percentage will automatically be recalculated based on the number of hosts in the cluster.
    • Admission control – “VM Resource Reduction Event Threshold” setting
      • In past versions, if a cluster did not have enough failover capacity during a hardware failure event, a number of VMs would not be allowed to restart on other healthy hosts. This new setting allows admins to specify the amount of resource reduction they are willing to tolerate in the cluster, potentially allowing additional VMs to restart even though capacity is not present, in exchange for potential performance degradation of VMs.
      • Setting this value to 0% means you will not allow any resource reduction for any VM in your environment in the event of a hardware failure.
    • Configurable orchestrated restarts!
      • Allows admins to specify the order of VM restarts as well as VM dependencies (critical applications, multi-tiered applications, and infrastructure services) at the cluster level.  We finally have orchestrated failover capabilities similar to SRM, except SRM allows for injection of scripts, which is not available with HA.
        • VM restart priority now includes: Lowest, Low, Medium, High, Highest
        • VM dependency restart conditions:
          • Resource allocated – Once resources for a VM are set aside on the host, HA will move to the next VM.
          • Powered On – Occurs when the power-on command is sent to the VM. Does not wait for the VM’s guest OS to be running.
          • Guest Heartbeats detected – Requires VMware Tools. Once vSphere sees that the VMware Tools agent is running, it will proceed.
          • App Heartbeats detected – Requires scripting with the VMware Tools SDK; this setting allows information about a process/application within the VM’s guest OS to be shared, to notify when an application is up and running in the VM.
  • Enhancements in event logging
    • Improvements to over 30 existing events for more detailed auditing.
    • Over 20 new events for different inventory operations.
    • Syslog / RELP streams
  • Storage IO Control (SIOC) with Storage policy-based management
    • SIOC was previously enabled per datastore, and VM thresholds were set within the VM settings by first configuring the disk share value and then setting the IOPS limit value for each disk.   This was cumbersome to manage.
    • SIOC is now managed and configured using SPBM.
    • For storage policies there are now new rules available for readOPS, writeOPS, readLatency, writeLatency.
  • vCenter Server appliance Backup and Restore capability
    • File based backups/restore of vCenter server appliance through the Appliance Management UI.
    • Backs up all vCenter Server core configuration, inventory, and historical data to a single folder.
    • Backup protocols available are FTP, SFTP, FTPS, HTTP, HTTPS
    • Encryption available for backup data before it is transferred.
    • Optional vCenter data available for backup:  Stats, Events, Alarms, Tasks.
    • To restore, you have to use the vCenter installer, which will deploy a new vCenter Server appliance and restore the backup.  You cannot restore to your existing vCenter Server; make sure your existing vCenter Server appliance is powered down before running a restore.
  • Command line deployment of vCenter server appliance
    • Scripted install
    • Installation using a JSON-formatted template and the vcsa-cli-installer
  • vCSA and PSC failover. JFJ moment!
    • I will probably create a separate blog on this topic.
    • Native option to protect a vCenter Server deployment from hardware failures as well as vCenter and PSC service failures.
  • New Appliance management UI
    • Shows basic health with health badges.
    • CPU and Memory graphs showing utilization trends
    • Backup appliance
    • Create support bundle
    • Perform power operations such as rebooting and shutting down the appliance
  • Migration from a Windows vCenter server 5.5 to vCenter Appliance 6.0U
  • Security enhancements
    • VM level disk encryption.
    • Encrypted vMotion capabilities
    • Secure boot model
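The command-line deployment mentioned above drives the installer from a JSON template. A rough sketch of what that can look like is below; the field names are illustrative assumptions modeled on the sample templates that ship with the vCSA ISO (under vcsa-cli-installer/templates), so start from those real samples rather than this snippet:

```shell
# Write a minimal-looking deployment template. Field names are illustrative
# assumptions -- copy a real sample from the vCSA ISO and edit it instead.
cat > /tmp/embedded_vcsa.json <<'EOF'
{
    "__comments": "Sketch of a vCSA 6.5 CLI-install template (not authoritative)",
    "new.vcsa": {
        "esxi": {
            "hostname": "esxi01.lab.local",
            "username": "root",
            "password": "VMware1!",
            "datastore": "datastore1",
            "deployment.network": "VM Network"
        },
        "appliance": {
            "deployment.option": "small",
            "name": "vcsa01",
            "thin.disk.mode": true
        },
        "network": {
            "ip.family": "ipv4",
            "mode": "static",
            "ip": "192.168.1.50",
            "dns.servers": ["192.168.1.1"],
            "prefix": "24",
            "gateway": "192.168.1.254",
            "system.name": "vcsa01.lab.local"
        },
        "os": {
            "password": "VMware1!",
            "ssh.enable": true
        },
        "sso": {
            "password": "VMware1!",
            "domain-name": "vsphere.local",
            "site-name": "Default-Site"
        }
    }
}
EOF

# The installer is then invoked from the mounted ISO, roughly:
#   vcsa-deploy install --accept-eula /tmp/embedded_vcsa.json
python3 -c "import json; json.load(open('/tmp/embedded_vcsa.json'))" && echo "template parses"
```

The appeal of this approach is repeatability: the same template (with credentials parameterized) can rebuild a vCenter appliance identically every time.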


Please share your thoughts if you feel I am missing any other important features.






PowerCLI Core Fling for Mac! How-to guide.

At VMworld 2016 in Las Vegas I attended session INF8260 – Automated Deployment and Configuration of the vCenter Server Appliance, presented by Alan Renouf and William Lam.  If you have not viewed it, I highly recommend you do so.  During the session Alan briefly touched on the ability to run PowerCLI on Mac and Linux, which was very exciting news, and without much delay the Fling was released 2 days ago.


PowerCLI Core uses Microsoft PowerShell Core and .NET Core to enable users of Linux, Mac and Docker to use the same cmdlets which were previously only available on Windows. Crazy, right!? Well, we have Microsoft to thank for making PowerShell open source and available for Linux and Mac.

Below are my instructions on how I set up PowerCLI Core to work on my Mac.

Step 1:   Download and Install .NET Core for Mac OS X from here

  • Install pre-requisites
    • In order to use .NET Core, we first need to install the latest version of OpenSSL. The easiest way to get this is from Homebrew. (Homebrew installs the stuff you need that Apple didn’t.)
  • Run the following command to install Homebrew:
    • # /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    • After installing brew, run the following commands:
      • # brew update
      • # brew install openssl
      • # ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/
      • # ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/
  • Install .NET Core SDK
    • Download and install the official installer.
    • This installer will install the tools and put them on your PATH so you can run dotnet from the console.

Step 2:   Download and Install PowerShell for Mac OS X

  • https://github.com/PowerShell/PowerShell
  • Click “Clone or Download” -> Download Zip
  • Extract the Zip file
  • Installation doc can be found here:  docs/installation/linux.md#macos-1011
  • Download the PKG package powershell-6.0.0-alpha.11.pkg from the releases page onto the macOS machine.
    • Either double-click the file and follow the prompts, or install it from the terminal:  # sudo installer -pkg powershell-6.0.0-alpha.11.pkg -target /
  • You might get an error if your security settings only allow apps from the App Store and identified developers.
  • Click OK
  • Go to Mac System Preferences
  • Security and Privacy
  • Unlock
  • The powershell-6.0.0-alpha.11.pkg will show and you can click “Open Anyway”


Step 3:  Perform the following steps to ensure you are using the latest OpenSSL and Curl

  • # brew install openssl
  • # brew install curl --with-openssl
  • Since we are installing 6.0.0-alpha.11, make sure the version folder in the paths below matches your installed version.
  • # sudo install_name_tool -change /usr/lib/libcurl.4.dylib /usr/local/opt/curl/lib/libcurl.4.dylib /usr/local/microsoft/powershell/6.0.0-alpha.11/System.Net.Http.Native.dylib
  • # sudo install_name_tool -add_rpath /usr/local/opt/openssl/lib /usr/local/microsoft/powershell/6.0.0-alpha.11/System.Security.Cryptography.Native.dylib

Step 4: Verify the directory ~/.local/share/powershell/Modules exists

  • If the directory does not exist, create it by running the following command:
    • # mkdir -p ~/.local/share/powershell/Modules

Step 5:  Extract the PowerCLI modules

  • Download the module zip files from the Fling page.
  • Extract the PowerCLI modules into the directory you created above by running the following commands:
  • # unzip PowerCLI.ViCore.4523941.zip -d ~/.local/share/powershell/Modules
  • # unzip PowerCLI.Vds.4523941.zip -d ~/.local/share/powershell/Modules


Step 6: Launch PowerShell

  • Open terminal
  • Start PowerShell in the terminal by running the following command:
    • # powershell
  • Import the PowerCLI Modules into your PowerShell Session:
    • PS > Get-Module -ListAvailable PowerCLI* | Import-Module
  • Connect to your vCenter Server using Connect-VIServer
    • PS> Connect-VIServer -Server &lt;vcenter-server&gt; -User administrator@vsphere.local -Password Vmware!
  • I received a WARNING when trying to connect:  “Invalid server certificate. Use Set-PowerCLIConfiguration to set the value for the InvalidCertificateAction option to Prompt if you’d like to connect once or to add a permanent exception for this server”
    • Best is to avoid the warning altogether with an official certificate or a trusted self-signed certificate, or run the following command:
      • PS >  Set-PowerCLIConfiguration -InvalidCertificateAction Ignore
  • Now you should connect successfully to your vCenter Server from your Mac!
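To avoid re-importing the modules in every session, you can drop the Import-Module line into your PowerShell Core profile. A sketch, assuming PowerShell Core on macOS/Linux reads its profile from ~/.config/powershell (verify the exact path with `$PROFILE` inside PowerShell, especially on alpha builds):

```shell
# Create a PowerShell Core profile that auto-imports the PowerCLI modules.
# The profile location is an assumption -- confirm it with $PROFILE first.
mkdir -p ~/.config/powershell
cat > ~/.config/powershell/Microsoft.PowerShell_profile.ps1 <<'EOF'
# Auto-load PowerCLI modules on shell start
Get-Module -ListAvailable PowerCLI* | Import-Module
EOF
cat ~/.config/powershell/Microsoft.PowerShell_profile.ps1
```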







Stretched storage using SRM or vMSC?

I recently had a discussion regarding the setup of a secondary datacenter to provide business continuity to an existing infrastructure using stretched storage.  This in itself is an interesting topic with many architectural outcomes which I will not get into in this post, but let’s for the sake of argument say we decide to create active-active data centers with EMC VPLEX Metro.

The other side of the coin is how you manage these active-active data centers while balancing management overhead and resiliency.

vSphere Metro Storage Cluster (vMSC) and SRM 6.1, which supports stretched storage, are the two solutions I am going to review. There are already a whole bunch of articles out there on this, but some of them focus mainly on pre-6.1 releases.  These are just my notes and views, and if you have a different viewpoint or it needs some tweaking please let me know; I cherish all feedback.

Why use stretched clusters:

  • Disaster avoidance or site maintenance without downtime is important.
    • Non-disruptive migration of workloads between the active-active datacenters.
  • When availability is one of your top priorities.  Depending on the failure scenario you have more outcomes where your VMs will not be impacted by outages from network, storage or host chassis failure at a site.
  • When your datacenters have network links which do not exceed 5 milliseconds round-trip response time.
    • Redundant network links are highly recommended.
  • When you require multi-site load balancing of your workloads.


VPLEX requirements:

  • The maximum round trip latency on both the IP network and the inter-cluster network between the two VPLEX clusters must not exceed 5 milliseconds round-trip.
  • For management and vMotion traffic, the ESXi hosts in both data centers must have a private network on the same IP subnet and broadcast domain. Preferably management and vMotion traffic are on separate networks.
  • Stretched layer 2 network, meaning the networks the VMs reside on need to be available/accessible from both sites.
  • The data storage locations, including the boot device used by the virtual machines, must be active and accessible from ESXi hosts in both data centers.
  • vCenter Server must be able to connect to ESXi hosts in both data centers.
  • The VMware datastores for the virtual machines running in the ESXi cluster are provisioned on Distributed Virtual Volumes.
  • The maximum number of hosts in the HA cluster must not exceed 32 hosts for 5.x and 64 hosts for 6.0.
  • The configuration option auto-resume for VPLEX Cross-Connect consistency groups must be set to true.
  • Enabling FT on the virtual machines is supported except for Cluster Witness Servers.
  • This configuration is supported on both VS2 and VS6 hardware for VPLEX 6.0 and later releases.



A vMSC infrastructure is a stretched cluster that enables continuous availability across sites, including support for:

  • vSphere vMotion
  • HA
  • DRS
  • FT over distance
  • Storage failure protection

vMSC requirements:

  • Single vCenter server
  • Cluster with DRS and HA enabled
  • Regular vCenter server requirements apply here

vMSC positives:

  • Continuous Availability
  • Fully Automatic Recovery
    • VMware HA (near zero RTO)
  • Automated Load Balancing
    • DRS and Instant vMotion
  • vMSC using VPLEX Metro
    • Certified Since vSphere 5.0
  • Behaves just like a single vSphere cluster

vMSC negatives:

  • Major architectural and operational considerations for HA and DRS configurations. This is especially true for highly customized environments with rapid changes in configuration.  Some configuration change examples:
    • Admission control
    • Host affinity rules to make sure that VMs talk to local storage
    • Datastore heartbeat
    • Management address heartbeat and 2 additional IPs
    • Change control for when workloads are migrated to different sites, rules would need to be updated.
  • Double the amount of resources required.  When you buy one, well, you need to buy a second!  This is important since you have to keep enough resources available on each site to satisfy the resource requirements for HA failover, since all VMs are restarted within the cluster.
    • Recommended to set admission control to 50%
  • No orchestration of powering on VMs after HA restart.
    • HA will attempt to start virtual machines with the categorization of High, Medium, or Low. The difficulty here is that if critical systems must start before other systems that depend on them, there is no means by which VMware HA can control this start order more effectively or handle alternate workflows or run books for different failure scenarios.
  • Single vCenter server
    • Failure of the site where vCenter resides disrupts management of both sites. Look out for development on this shortcoming in vSphere 6.5


SRM 6.1 with stretched storage:

Site Recovery Manager 6.1 adds support for stretched storage solutions over a metro distance from several major storage partners, and integration with cross-vCenter vMotion when using these solutions.  This allows companies to achieve application mobility without incurring downtime, while taking advantage of all the benefits that Site Recovery Manager delivers, including centralized recovery plans, non-disruptive testing, and automated orchestration.

Adding stretched storage to a Site Recovery Manager deployment fundamentally reduces recovery times.

  • In the case of a disaster, recovery is much faster due to the nature of the stretched storage architecture that enables synchronous data writes and reads on both sites.
  • In the case of a planned migration, such as for disaster avoidance, data center consolidation, or site maintenance, stretched storage enables zero-downtime application mobility. When using stretched storage, Site Recovery Manager can orchestrate cross-vCenter vMotion operations at scale, using recovery plans. This is what enables application mobility without incurring any downtime.

SRM requirements:

  • Storage policy protection groups in enhanced linked mode
  • External PSCs (a requirement for enhanced linked mode)
  • Supported compatible storage arrays and SRAs
  • A vCenter Server at each site
  • A Windows server at each site for the SRM application and SRA

SRM positives:

  • Provides orchestrated and complex reactive recovery solution
    • For instance, a 3-tiered application which has dependencies on specific services/servers to power on first.
  • Provides consistent, repeatable and testable RTOs
  • DR compliance shown through audit trails and repeatable processes.
  • Disaster Avoidance (Planned)
    • Manually initiated with SRM
    • Uses vMotion across vCenters for VMs
  • Disaster Recovery (Unplanned)
    • Manually initiated recovery plan orchestration
    • SRM Management Resiliency
  • VMware SRM 6.1 + VPLEX Metro 5.5
    • Stretched Storage with new VPLEX SRA
    • Separate failure domains, different vSphere Clusters

SRM negatives:

  • No Continuous Availability
  • No HA, DRS or FT across sites
  • No SRM “Test” Recovery plan due to stretched storage
    • Have to make use of planned migration to “test”, but just be aware that the VMs associated with the protection group will migrate live to the second site.


Questions to ask:

In the end, I really think it all comes down to a couple of questions you can ask to make the decision easier.  SRM has narrowed the gap on some of the features that vMSC provides, so these questions are based on the remaining differences between the solutions.

  1. Do you have complex tiered applications with dependencies on other applications, like for instance databases?
  2. Do you have a highly customized environment which incurs rapid changes?
  3. Do you require DR compliance with audit trails and repeatable processes?

Pick SRM!

  1. Do you require a “hands off” fast automated failover?
  2. Do you have non-complex applications without any dependencies, and do you not care how these power on during failover?
  3. Do you want to have your workloads automatically balanced across different sites?

Pick vMSC!






My Google Nexus 5x vs Pixel, should I stay or should I go now?

The Clash!

So I have been on Google Fi from the start, with the release of the Nexus 5x.  The service has been really good and there is not much to complain about, other than if you do not watch your data usage you can expect some hefty bills coming your way.  Google’s Wi-Fi assistant does, however, connect your phone automatically to open Wi-Fi networks, which I have found fast and reliable.

I do, however, feel that Google neglected the Project Fi users a bit by not offering some kind of incentive or discount for existing Project Fi users to purchase Google’s new Pixel phone.  A man can wish, right? That price tag is crazy high and I am not sure if it is worth it; I guess only time will tell once they release some real-world reviews.

The positives:

  • Camera and app
  • Google Assistant
  • mmm, come on you can think of something else.

The negatives:

  • Expensive
  • Not water resistant
  • Design is OK, I guess
  • Google Store financing can be used for a 24-month payment plan, but this will be a separate bill from your Google Fi.  Why not a single bill?
  • Battery life

For now I am going to hold off on purchasing the Pixel and wait for some reviews and hopefully a quick price drop 😉








vRealize Log Insight – vRA 7 content pack configuration

A couple of months ago Kimberly Delgado wrote a detailed blog post on the new vRA 7 content pack for vRealize Log Insight, which covers all the enhancements. No need for me to repeat it; link provided below.


I had these steps written down quite some time ago but never really got around to putting them into a blog post. Since my post a couple of days ago on how to set up the vRO content pack, I thought this would be fitting, as these two content packs really go hand in hand.

vRA consists of multiple components and can therefore be deployed as a simple or enterprise installation. This makes the deployment of the vRealize Log Insight agents interesting, since you have to install and/or configure the agents on both Windows (IaaS) and Linux (virtual appliances).

The vRA content pack makes use of the Log Insight agent for both Windows and Linux, which is great since it simplifies the configuration a lot!  The agent comes preinstalled on the vRA virtual appliances (easy) but still needs to be installed on the other component servers.

My configuration is an enterprise installation with the following servers:

  • 2 x vRA virtual appliances
  • 2 x IaaS management
  • 2 x IaaS web
  • 2 x DEM/Agent servers

Here are the steps:

  • Log in to vRealize Log Insight
  • Select Content Packs
  • Select Marketplace
  • Select VMware – vRA 7 and install.
  • Select Administration
  • Under management select Agents.
  • Verify the agent groups “vRealize Automation 7 – Linux” and “vRealize Automation 7 – Windows” are available from the drop-down box.
  • Scroll to the bottom of the Agents page and select “Download Log Insight Agent Version 3.x.x”
    • Download the Windows MSI.  Remember, no need to download the Linux agent since it comes preinstalled on the vRA appliances.
  • Install the Windows agent on all IaaS Windows servers.
    • Run the MSI on each Windows server.
    • Enter/verify the hostname of the vRealize Log Insight server &lt;loginsightname.domain.com&gt; during the configuration setup.  Since we downloaded the agent from the vRLI server management UI, the hostname gets pre-populated in the installer. Cool!
    • Press Install
  • SSH to the vRA VA and update liagent.ini to point to the LI server.
    • Update the log insight file /var/lib/loginsight-agent/liagent.ini
      • Update hostname=<vrealizeLogInsightserver.domain.com>
      • Some additional parameters are available for configuration like protocol, port, ssl and reconnect.
  • Log in to vRealize Log Insight to verify the agents are communicating with the vRLI server.
    • Select Administration
    • Select Agents
    • Verify the agents are showing up in the list.  If they do not, verify that the service is running on the server, or review the agent log files on the Windows servers under C:\ProgramData\VMware\Log Insight Agent\log
  • Now let’s create the new agent groups
    • Highlight each of the agent groups mentioned earlier from the drop-down box and select Copy Template (double-square icon on the far right)
    • Create filters to limit to the required servers.  Since we have six IaaS Windows servers, I created a simple filter: hostname starts with vraweb, vradem, vramgr.  Click Refresh and you should now see all six agents.
    • Some further configuration is required here: we need to update and/or add file log entries to the agent configuration for the Windows agents.
      • Have a look at vra-dem, vra-dem-metrics, vra-deo, vra-deo2
      • For instance, let’s review vra-deo, where the directory is normally named &lt;hostname&gt;-DEO under the Distributed Execution Manager folder.
      • By default the directory is set to “C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEO\Logs\”. This is incorrect…
      • The directory SHOULD BE C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\FQDN-DEO\Logs\
      • Also, in my case I have multiple DEM and Manager servers, so we have to create additional file log entries for each of the servers, since the DEM folder name changes.
      • The easiest method to duplicate file logs is to select the Edit tab, copy/paste the original filelog section, and then just update the directory.
      • The other method is to hover over File Logs until a green “+ New” button appears, then select it.  Let’s call it “vra-dem2” to add the 2nd server’s file location.
        • I hope they release a feature to clone the file logs, but currently I just go back and forth between the original and the new file log and copy/paste the items, since the only change required is the directory; the rest of the entries stay the same.
        • Add the same tags manually
        • Do not forget to update the parse fields from the drop-down box as well.
      • Select “Save Agent Group”
      • This can be confusing and a bit tricky, but I hope my explanation of updating the file logs in the agent configuration makes sense.
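The appliance-side liagent.ini edit described above boils down to something like the fragment below. The hostname is a placeholder, and the other [server] keys (proto, port, ssl, reconnect) are the optional tuning parameters mentioned earlier; treat the values here as a sketch and check the comments inside your own liagent.ini:

```shell
# Sketch of the [server] section of /var/lib/loginsight-agent/liagent.ini
# on the vRA appliances -- hostname and tuning values here are placeholders.
cat > /tmp/liagent.ini.sample <<'EOF'
[server]
hostname=loginsight.domain.com
; optional tuning -- the defaults are usually fine
proto=cfapi
port=9543
ssl=yes
reconnect=30
EOF
grep '^hostname=' /tmp/liagent.ini.sample
```

After changing the file on a real appliance, restart the agent service so it picks up the new server.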

Complementary content packs which are highly recommended:

  • VMware Orchestrator 7.0
  • VMware vSphere
  • VMware NSX-vSphere (if implemented)

You should start seeing the event count increase for each of the agents, as well as the dashboards populating.  Go geek out on stats!  But wait, the Catalog dashboards also provide valuable pre-built queries for analyzing specific errors and failure scenarios.  The queries differ per dashboard, so I do encourage you to get familiar with each; the General – Problems dashboard, for instance, includes a set of alert queries.