Volkswagen and its dieselgate nightmare sure messed things up for Audi, but I am looking forward to seeing what they can do in the Formula E race series. Hopefully this also brings some much-needed publicity, and oh yes, race tracks, to the competition.
This post was originally supposed to cover the new silent installer in vRA 7.1, but since I have not been able to get the IaaS prerequisite fixes to apply successfully without some tweaking and external scripts, I thought I would start from the beginning with the UI installer. I do have a case open with VMware GSS and hope to get a solution some time soon; when I do, I will create a new post with detailed information on my findings.
There are already many blog posts out there with step-by-step guides for the vRA installation wizard, so I will keep this short and only provide the essentials from my distributed installation notes.
Firstly here is a list of all the new features in vRA 7.1:
Streamlined installation process using a silent installer.
Agent and prerequisite command line interface.
Migration tool to move data from a source vRealize Automation 6.2.x environment to a fresh vRealize Automation 7.1 environment while preserving the source environment.
IPAM integration framework with ability to deploy machines and applications with automated assignment of IP addresses from leading IP address management systems, with the first integration with Infoblox.
Integrated support for Active Directory policies.
Custom property dictionary controls to improve property definitions and vRealize Orchestrator actions.
Reconfigure life-cycle events by means of event broker workflow subscriptions.
Additional vSphere provisioning options and data collection improvements.
Ability to manually conduct horizontal scale in and scale out of application environments deployed by vRealize Automation, including the automatic update of dependent components.
Customizable message of the day portlet available on the home page.
Additional information and filter options on the Items page.
Discontinued support for PostgreSQL external database.
I am not going to go into too much detail here but VMware does a great job of explaining everything in the online documentation:
Couple of important prerequisites which I want to highlight:
SQL Server AlwaysOn Availability Groups are now supported, but only with SQL Server 2016.
Four VIPs are required: vRA appliance, vRA Manager, vRA Web, and vRO.
For the initial installation and configuration you do not want the VIPs load balanced, so use DNS A-records and, once everything is done, flip over to your VIPs.
Recommend using a vRA service account
Add this service account to the local Administrators group on each IaaS Windows server
Recommend using a vRO and SQL service account as well.
Configure NTP on Windows hosts and appliances with at least two NTP servers.
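As a quick sketch of the service-account step above, adding the account to the local Administrators group on each IaaS server can be done from an elevated command prompt (the domain and account names below are placeholders):

```
rem Add the vRA service account to the local Administrators group
net localgroup Administrators MYDOMAIN\svc-vra /add
```

Run this on every IaaS Windows server, or push it out via your configuration management tool of choice.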
Lastly, the prerequisites for the Windows IaaS servers can be daunting, but since vRA 7 the prerequisite checker/fixer is built into the installation wizard, making this process super easy.
My enterprise environment and the virtual machines that were deployed:
vRA 7 Appliance #1
vRA 7 Appliance #2
vRA 7 manager #1, Windows 2012 R2
vRA 7 manager #2, Windows 2012 R2
vRA 7 web #1, Windows 2012 R2
vRA 7 web #2, Windows 2012 R2
vRA 7 DEM #1, Windows 2012 R2
vRA 7 DEM #2, Windows 2012 R2
vRO Appliance #1
vRO Appliance #2
Install the Management Agent on Windows IaaS servers.
Management Agents are stand-alone IaaS components that register IaaS nodes with vRealize Automation appliances, automate the installation and management of IaaS components, and collect support and telemetry information.
Management Agent must be installed on each Windows server hosting IaaS components.
VMware finally pulled the curtain on its new vSphere 6.5 products during VMworld Europe 2016 in Barcelona. No release dates were announced, but there is a lot of good stuff here.
I was fortunate enough to be part of the vCenter 6.5 beta and was impressed with the new features and VMware's renewed focus on their core application stack. I also had a couple of JFJ (JumpForJoy) moments, which I have listed below.
Auto-deploy finally got a UI and is now available for configuration in the vSphere Web client.
Creation of image profiles
Creation and activation of deploy rules
Management of deploy rules with ability to check compliance and remediation.
Ability to manually match non-deployed ESXi hosts to rules.
Enhancements to host profiles
Ability to search for a specific setting name, property name, or value by filtering the host profile tree while editing the host profile.
Copying settings from one host profile to another
Mark host profile settings as favorite and filter based on favorites.
Current Web client UI and usability improvements
Keyboard support in dialogs, Wizards and Confirmations
Recent objects global pane
Related objects tab replaced with object category tabs
Object details title bar displays the selected object’s icon and name, action icons, and the Actions menu
Live refresh, yes live! JFJ moment! This feature is awesome, and I am not sure why it took this long to become available, especially now that we are moving to HTML5. The real-time updates are also visible across users who are logged into the vSphere client at the same time.
Live Tasks, Trigger alarms and reset alarms.
Navigation tree updates
Oh yes and then there is the HTML5 web client.
HTML5 (<vcenter>/ui) and vSphere web client (<vcenter>/vsphere-client) are both available.
HTML5 web client does not yet have feature parity with vSphere web client and hopefully this will happen soon, but I recommend using the HTML5 as much as possible.
New and updated HA features. JFJ moments all over the place!
Enhancements in the way calculations and configuration are done to determine failover capacity within a cluster.
Cluster Resource Percentage will be the default admission control policy moving forward. The default failover capacity percentage will automatically be recalculated based on the number of hosts in the cluster.
Admission control – “VM Resource Reduction Event Threshold” setting
In past versions, if a cluster did not have enough failover capacity during a hardware failure event, a number of VMs would not be allowed to restart on other healthy hosts. This new setting allows admins to specify the amount of resource reduction they are willing to tolerate in the cluster, potentially allowing additional VMs to be restarted even though full capacity is not present, in exchange for potential performance degradation of those VMs.
Setting this value to 0% means that you will not allow any resource reduction of any VM resources in your environment in the event of hardware failure.
Configuring orchestrator restarts!
Allows admins to specify the order of VM restarts as well as VM dependencies (critical applications, multi-tiered applications, and infrastructure services) at the cluster level. We finally have orchestrated failover capabilities similar to SRM, except that SRM also allows for the injection of scripts, which is not available with HA.
VM restart priority now includes: Lowest, Low, Medium, High, Highest
VM dependency restart conditions:
Resource allocated – Once resources for a VM are set aside on the host, HA will move to the next VM.
Powered On – Occurs when the power-on command is sent to the VM. Does not wait for the VM’s guest OS to be running.
Guest Heartbeats detected – Requires VMware Tools. Once vSphere sees that the VMware Tools agent is running, it will proceed.
App Heartbeats detected – Requires scripting with the VMware Tools SDK; however, this setting allows information about a process/application within the VM's guest OS to be shared to notify when an application is up and running in the VM.
Enhancements in event logging
Improvements to over 30 existing events for more detailed auditing.
Over 20 new events for different inventory operations.
Syslog / RELP streams
Storage IO Control (SIOC) with Storage policy-based management
SIOC was previously enabled per datastore, and VM thresholds were set within the VM settings by first configuring the disk share value and then setting the IOPS limit value for each disk. This was cumbersome to manage.
SIOC is now managed and configured by using SPBM.
For storage policies there are now new rules available for readOPS, writeOPS, readLatency, writeLatency.
vCenter Server appliance Backup and Restore capability
File based backups/restore of vCenter server appliance through the Appliance Management UI.
Backup to a single folder all vCenter server core configuration, inventory and historical data.
Backup protocols available are FTP, SFTP, FTPS, HTTP, HTTPS
Encryption available for backup data before it is transferred.
Optional vCenter data available for backup: Stats, Events, Alarms, Tasks.
To restore, you have to use the vCenter installer, which will deploy a new vCenter Server appliance and restore the backup. You cannot restore to your existing vCenter Server; make sure the existing appliance is powered down before running a restore.
Command line deployment of vCenter server appliance
Installation using JSON formatted template and vcsa-cli-installer
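As a rough sketch, the CLI deployment is driven by a JSON file passed to vcsa-deploy. The keys below are illustrative placeholders only; the installer ISO ships example templates with the exact schema for your build, so start from those:

```json
{
  "deployment": {
    "esxi.hostname": "esx01.domain.local",
    "esxi.datastore": "datastore1",
    "esxi.username": "root",
    "esxi.password": "*****",
    "deployment.option": "small",
    "deployment.network": "VM Network",
    "appliance.name": "vcsa01",
    "appliance.thin.disk.mode": true
  },
  "vcsa": {
    "system": { "root.password": "*****", "ntp.servers": "0.pool.ntp.org" },
    "sso": { "password": "*****", "domain-name": "vsphere.local", "site-name": "Default-Site" }
  }
}
```

You would then run something along the lines of vcsa-deploy install --accept-eula mytemplate.json from the OS-specific directory on the ISO.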
vCSA and PSC failover. JFJ moment!
I will probably create a separate blog on this topic.
Native option to protect a vCenter server deployment from failures in hardware, vCenter and PSC service failures.
New Appliance management UI
Shows basic health with health badges.
CPU and Memory graphs showing utilization trends
Create support bundle
Perform power operations such as rebooting and shutting down the appliance
Migration from a Windows vCenter server 5.5 to vCenter Appliance 6.0U
VM level disk encryption.
Encrypted vMotion capabilities
Secure boot model
Please share your thoughts if you feel I am missing any other important features.
At VMworld 2016 in Las Vegas I attended session INF8260 – Automated Deployment and Configuration of the vCenter Server Appliance, presented by Alan Renouf and William Lam. If you have not viewed it, I highly recommend you do so. During the session Alan briefly touched on the ability to run PowerCLI on Mac and Linux, which was very exciting news, and without much delay the Fling was released two days ago:
PowerCLI Core uses Microsoft PowerShell Core and .NET Core to enable users of Linux, Mac and Docker to use the same cmdlets which were previously only available on Windows. Crazy, right!? Well, we have Microsoft to thank for making PowerShell open source and available on Linux and Mac.
Below are my instructions on how I set up PowerCLI Core to work on my Mac.
Step 1: Download and Install .NET Core for Mac OS X from here
In order to use .NET Core, we first need to install the latest version of OpenSSL. The easiest way to get it is from Homebrew. (Homebrew installs the stuff you need that Apple didn't.)
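The Homebrew part looks something like this (the install one-liner is from the Homebrew site at the time of writing; skip it if you already have brew):

```shell
# Install Homebrew itself (one-liner from brew.sh)
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

# Install the OpenSSL libraries that .NET Core links against
brew install openssl
```

Once OpenSSL is in place, continue with the .NET Core package installation.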
I recently had a discussion regarding the setup of a secondary datacenter to provide business continuity to an existing infrastructure using stretched storage. This in itself is an interesting topic with many architectural outcomes which I will not get into in this post, but let's, for the sake of argument, say we decide to create active-active data centers with EMC VPLEX Metro.
The other side of the coin is: how do you manage these active-active data centers while balancing management overhead and resiliency?
vSphere Metro Storage Cluster (vMSC) and SRM 6.1, which now supports stretched storage, are the two solutions I am going to review. There are already a whole bunch of articles out there on this, but some of them focus mainly on pre-6.1 releases. These are just my notes and views; if you have a different viewpoint or it needs some tweaking, please let me know. I cherish all feedback.
Why use stretched clusters:
Disaster avoidance or site maintenance without downtime is important.
Non-disruptive migration of workloads between the active-active datacenters.
When availability is one of your top priorities. Depending on the failure scenario, there are more outcomes in which your VMs will not be impacted by network, storage, or host chassis failures at a site.
When your datacenters have network links which do not exceed 5 milliseconds round-trip response time.
Redundant network links are highly recommended.
When you require multi-site load balancing of your workloads.
The maximum round trip latency on both the IP network and the inter-cluster network between the two VPLEX clusters must not exceed 5 milliseconds round-trip.
For management and vMotion traffic, the ESXi hosts in both data centers must have a private network on the same IP subnet and broadcast domain. Preferably management and vMotion traffic are on separate networks.
Stretched layer 2 network, meaning the networks where the VMs reside need to be available/accessible from both sites.
The data storage locations, including the boot device used by the virtual machines, must be active and accessible from ESXi hosts in both data centers.
vCenter Server must be able to connect to ESXi hosts in both data centers.
The VMware datastores for the virtual machines running in the ESX cluster are provisioned on Distributed Virtual Volumes.
The maximum number of hosts in the HA cluster must not exceed 32 hosts for 5.x and 64 hosts for 6.0.
The configuration option auto-resume for VPLEX Cross-Connect consistency groups must be set to true.
Enabling FT on the virtual machines is supported except for Cluster Witness Servers.
This configuration is supported on both VS2 and VS6 hardware for VPLEX 6.0 and later releases.
A vMSC infrastructure is a stretched cluster that enables continuous availability across sites, including support for:
FT over distance
Storage failure protection
Single vCenter server
Cluster with DRS and HA enabled
Regular vCenter server requirements apply here
Fully Automatic Recovery
VMware HA (near zero RTO)
Automated Load Balancing
DRS and Instant vMotion
vMSC using VPLEX Metro
Certified Since vSphere 5.0
Behaves just like a single vSphere cluster
There are major architectural and operational considerations for HA and DRS configurations. This is especially true for highly customized environments with rapid configuration changes. Some examples:
Host affinity rules to make sure that VMs talk to local storage
Management address heartbeat and 2 additional IPs
Change control for when workloads are migrated to different sites, rules would need to be updated.
Double the amount of resources required. When you buy one, you need to buy a second! This is important since you have to keep enough resources available at each site to satisfy the resource requirements for HA failover, since all VMs are restarted within the cluster.
Recommended to set Admission control to 50%
No orchestration of powering on VMs after HA restart.
HA will attempt to start virtual machines with a categorization of High, Medium, or Low. The difficulty here is that if critical systems must start before other systems that depend on them, there is no means by which VMware HA can control this start order more effectively or handle alternate workflows or run books for different failure scenarios.
Single vCenter server
Failure of the site where vCenter resides disrupts management of both sites. Look out for development on this shortcoming in vSphere 6.5
SRM 6.1 with stretched storage:
Site Recovery Manager 6.1 adds support for stretched storage solutions over a metro distance from several major storage partners, and integration with cross-vCenter vMotion when using these solutions for replication. This allows companies to achieve application mobility without incurring downtime, while taking advantage of all the benefits that Site Recovery Manager delivers, including centralized recovery plans, non-disruptive testing, and automated orchestration.
Adding stretched storage to a Site Recovery Manager deployment fundamentally reduces recovery times.
In the case of a disaster, recovery is much faster due to the nature of the stretched storage architecture that enables synchronous data writes and reads on both sites.
In the case of a planned migration, such as for disaster avoidance, data center consolidation, or site maintenance, stretched storage enables zero-downtime application mobility. When using stretched storage, Site Recovery Manager can orchestrate cross-vCenter vMotion operations at scale, using recovery plans. This is what enables application mobility without incurring any downtime.
Storage policy protection groups in enhanced linked mode
External PSCs for enhanced linked mode requirement
Supported compatible storage arrays and SRAs
A vCenter Server at each site
A Windows server at each site for the SRM application and SRA.
Provides orchestrated and complex reactive recovery solution
For instance, a 3-tiered application which has dependencies on specific services/servers to power on first.
Provides consistent, repeatable and testable RTOs
DR compliance shown through audit trails and repeatable processes.
Disaster Avoidance (Planned)
Manually Initiate with SRM
Uses vMotion across vCenters for VMs
Disaster Recovery (Unplanned)
Manually Initiate Recovery Plan Orchestration
SRM Management Resiliency
VMware SRM 6.1 + VPLEX Metro 5.5
Stretched Storage with new VPLEX SRA
Separate failure domains, different vSphere Clusters
No Continuous Availability
No HA, DRS or FT across sites
No SRM “Test” Recovery plan due to stretched storage
You have to make use of a planned migration to "test", but just be aware that the VMs associated with the protection group will migrate live to the second site.
Questions to ask:
In the end, I really think it all comes down to a couple of questions you can ask to make the decision easier. SRM has narrowed the gap on some of the features that vMSC provides, so these questions are based on the remaining differences between the solutions.
Do you have complex tiered applications with dependencies on other applications, such as databases?
Do you have a highly customized environment which incurs rapid changes?
Do you require DR compliance with audit trails and repeatable processes?
Do you require a “hands off” fast automated failover?
Do you have non-complex applications without any dependencies, where you do not care how they power on during failover?
Do you want to have your workloads automatically balanced across different sites?
So I have been on Google Fi from the start, with the release of the Nexus 5X. The service has been really good and there is not much to complain about, other than that if you do not watch your data usage you can expect some hefty bills coming your way. Google's Wi-Fi assistant does, however, connect your phone automatically to open Wi-Fi networks, which I have found fast and reliable.
I do, however, feel that Google neglected the Project Fi users a bit by not offering some kind of incentive or discount for existing Project Fi users to purchase Google's new Pixel phone. A man can wish, right? That price tag is crazy high and I am not sure it is worth the price; I guess only time will tell when the real-world reviews come out.
Camera and app
mmm, come on you can think of something else.
Not water resistant
Design is OK, I guess
Google Store financing can be used for a 24-month payment plan, but this will be a separate bill from your Google Fi bill. Why not a single bill?
For now I am going to hold off on purchasing the Pixel and wait for some reviews and hopefully a quick price drop 😉
A couple of months ago Kimberly Delgado wrote a detailed blog on the new vRA 7 content pack for vRealize Log Insight, which includes all the enhancements. No need for me to repeat it; link provided below.
I had these steps written down quite some time ago but never really got around to putting them into a blog, but since my post a couple of days ago on how to set up the vRO content pack, I thought this would be fitting, since these two content packs really go hand in hand.
vRA consists of multiple components and can therefore be deployed as a simple or enterprise installation. This makes the deployment of the vRealize Log Insight agents interesting, since you have to install and/or configure the agents on both Windows (IaaS) and Linux (virtual appliance) components.
The vRA content packs make use of the Log Insight agent for both Windows and Linux, which is great since it simplifies the configuration a lot! The agent comes preinstalled on the vRA virtual appliances (easy) but still needs to be installed on the other component servers.
My configuration is an enterprise installation with the following servers.
2 x vRA virtual appliances
2 x IaaS management
2 x IaaS web
2 x DEM/Agent servers
Here are the steps:
Login to vRealize Log insight
Select Content Packs
Select VMware – vRA 7 and install.
Under management select Agents.
Verify the agent groups “vRealize Automation 7 – Linux” and “vRealize Automation 7 – Windows” are available from drop down box.
Scroll to the bottom of Agent page and select “Download Log Insight Agent Version 3.x.x”
Download the Windows MSI. Remember, there is no need to download the Linux agent, since it comes preinstalled on the vRA appliances.
Install the Windows agent on all IaaS servers.
Run the MSI on each Windows server.
Enter/verify the hostname of the vRealize Log Insight server <loginsightname.domain.com> during the configuration setup. Since we downloaded the agent from the vRLI server management UI, the hostname is pre-populated in the installer. Cool!
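If you prefer a scripted rollout across all six servers, the agent MSI can also be installed silently with the server name passed as a property. This is a sketch based on the agent documentation; the filename and hostname below are examples:

```shell
msiexec /i VMware-Log-Insight-Agent-3.6.0.msi /qn SERVERHOST=loginsight.domain.com
```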
SSH to the vRA VA and update liagent.ini to point to the LI server.
Update the log insight file /var/lib/loginsight-agent/liagent.ini
Some additional parameters are available for configuration, such as protocol, port, SSL, and reconnect.
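A minimal [server] section in liagent.ini might look like this (the hostname is a placeholder, and the other keys are the optional parameters just mentioned, shown with example values; check the agent docs for the defaults of your version):

```ini
[server]
hostname=loginsight.domain.com
; optional overrides - example values only
proto=cfapi
port=9543
ssl=yes
reconnect=30
```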
Log in to vRealize Log Insight to verify the agents are communicating with the vRLI server.
Verify the agents are showing up in the list. If they do not, verify that the service is running on the server, or review the agent log files on the Windows servers under C:\ProgramData\VMware\Log Insight Agent\log
Now let's create the new agent groups.
Highlight each of the agent groups mentioned earlier in the drop-down box and select Copy Template (the double-square icon on the far right).
Create filters to limit the group to the required servers. Since we have six IaaS Windows servers, I created a simple filter with hostname starts with vraweb, vradem, vramgr. Click refresh and you should now see all six agents.
Some further configuration is required here: we need to update and/or add file logs in the agent configuration for the Windows agents.
Have a look at vra-dem, vra-dem-metrics, vra-deo and vra-deo2.
For instance, let's review vra-deo, where the directory is normally named <hostname>-DEO under the Distributed Execution Manager folder.
By default the directory is set to "C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEO\Logs\". This is incorrect...
The directory should be C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\FQDN-DEO\Logs\
Also, in my case I have multiple DEM and management servers, so we have to create additional file log entries for each of the servers, since the DEM folder name changes.
The easiest method to duplicate file logs is to select the Edit tab, copy/paste the original filelog section, and then just update the directory.
The other method is to hover over File Logs and you will see a green "+ New" button appear; select it. Let's call it "vra-dem2" to add the second server's file location.
I hope they release a feature to clone file logs, but currently I just go back and forth between the original and new file log and copy/paste the items, since the only change required is the directory; the rest of the entries stay the same.
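For example, a copied filelog section for a second DEM server might look like this in the Edit (ini) view, with only the directory changed. The server names are placeholders, and any tags or include patterns should match whatever the content pack template defines:

```ini
[filelog|vra-dem]
directory=C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\vradem01.domain.com-DEM\Logs
include=*.log

[filelog|vra-dem2]
directory=C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\vradem02.domain.com-DEM\Logs
include=*.log
```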
Add the same tags manually
Do not forget to update the parse fields from the drop-down box as well.
Select “Save Agent Group”
This can be confusing and a bit tricky, but I hope my explanation of updating the file logs in the agent configuration makes sense.
Complementary content packs which are highly recommended:
VMware Orchestrator 7.0
VMware NSX-vSphere (if implemented)
You should start seeing the event count increase for each of the agents, as well as the dashboards populating. Go geek out on stats! But wait, the Catalog dashboard also provides you with valuable pre-built queries to perform analysis on specific errors and failure scenarios. The queries differ per dashboard, so I do encourage you to get familiar with each. An example of the alert queries for the General – Problems dashboard is below: