vRealize version releases today: vRA 7.1, vROps 6.3, vRO 7.1 and vRB 7.1

VMware released new versions of several vRealize products today.  Listed below are the new features I think are most relevant; the full what's-new lists are provided in the links at the bottom of this post.

vRealize Automation 7.1

  • Silent installer
  • Migration tool to migrate data from vRA 6.2.x to a fresh vRA 7.1 while preserving the source environment.
  • IPAM integration framework, although Sovereign Systems' SovLabs modules already do a great job with this.
  • Manual horizontal scale-in and scale-out of vRA deployments

vRealize Operations Manager 6.3

  • Enhanced workload placement and DRS integration
  • Improved Log Insight integration (I hope to write a blog post on this soon)
  • Enhanced vSphere monitoring with new hardening policies.
  • Support for multiple Advanced and Enterprise edition licenses in the same deployment, which means you can mix standalone and suite licenses.  License counting for individual license keys is handled through licensing groups.

vRealize Orchestrator 7.1

  • Extending automation configuration
  • Plugin improvements

vRealize Business for Cloud 7.1

  • Support for the latest vRA release
  • Integration with an external VMware Identity Manager is probably the biggest one here, since this now allows for a standalone installation with its own UI. I tested this earlier, and you now have the option to register with either a vRA or a vIDM instance.
  • If you register with vIDM you get a new UI which is accessible through the FQDN of your vRB appliance.
  • New version of reference database

 

Links:

http://pubs.vmware.com/Release_Notes/en/vrops/63/vrops-63-release-notes.html#intro

http://pubs.vmware.com/Release_Notes/en/vra/vrealize-automation-71-release-notes.html#about

http://pubs.vmware.com/Release_Notes/en/orchestrator/vrealize-orchestrator-71-release-notes.html#new

http://pubs.vmware.com/Release_Notes/en/vRBforCloud/71/vRBforCloud-71-release-notes.html#whatsnew

vCloud Director 8: Configure logging to vRealize Log Insight

With the recent release of the vCloud Director content pack (v8.4) for vRealize Log Insight, I thought I would document the steps for getting this configured.

There are two methods to get the logs forwarded to your vRLI server:

  1. log4j: set an additional logger in the VCD log4j.properties file.
  2. Log Insight agent installed on each vCD cell.

Steps to configure Log4j:

  • Logging is normally handled by log4j with the configuration file $VCLOUD_HOME/etc/log4j.properties.
  • Log in to the cell with SSH.
  • Change to the directory $VCLOUD_HOME/etc/
  • Make backup copy of log4j.properties
    • cp log4j.properties log4j.properties.orig
  • Open the file log4j.properties in a text editor and add the following lines, where syslog-host-fqdn is the FQDN of your syslog host and port is an optional port number. If no port number is specified, it defaults to 514.
    • log4j.appender.vcloud.system.syslog=org.apache.log4j.net.SyslogAppender
    • log4j.appender.vcloud.system.syslog.syslogHost=syslog-host-fqdn:port
    • log4j.appender.vcloud.system.syslog.facility=LOCAL1
  • Modify the rootLogger line so the vCloud Director syslog appender is included:
    • log4j.rootLogger=ERROR, vcloud.system.debug, vcloud.system.info, vcloud.system.syslog
  • Specify the logger pattern and threshold:
    • log4j.appender.vcloud.system.syslog.layout=com.vmware.vcloud.logging.layout.CustomPatternLayout
    • log4j.appender.vcloud.system.syslog.layout.ConversionPattern=%d{ISO8601} | %-8.8p | %-25.50t | %-30.50c{1} | %m | %x%n
    • log4j.appender.vcloud.system.syslog.threshold=INFO
  • Save file
  • Restart the cell (yes, not ideal, which is why I recommend the Log Insight agent method instead)
    • service vmware-vcd restart
  • Repeat on each cell
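
Putting the steps above together, the additions to log4j.properties end up looking like this (a sketch, assuming a Log Insight server at loginsight.example.com listening on the default port 514; substitute your own FQDN, and remember the rootLogger line is modified in place rather than added):

    log4j.appender.vcloud.system.syslog=org.apache.log4j.net.SyslogAppender
    log4j.appender.vcloud.system.syslog.syslogHost=loginsight.example.com:514
    log4j.appender.vcloud.system.syslog.facility=LOCAL1
    log4j.appender.vcloud.system.syslog.layout=com.vmware.vcloud.logging.layout.CustomPatternLayout
    log4j.appender.vcloud.system.syslog.layout.ConversionPattern=%d{ISO8601} | %-8.8p | %-25.50t | %-30.50c{1} | %m | %x%n
    log4j.appender.vcloud.system.syslog.threshold=INFO
    log4j.rootLogger=ERROR, vcloud.system.debug, vcloud.system.info, vcloud.system.syslog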

Steps to configure the Log Insight agent (recommended):

  • Install the content pack, which is pretty straightforward through the Marketplace.
  • Verify the agent group for vCloud Director is available after installation.
  • Select the agent group and select Copy Template.
  • Provide a new name.
  • Create a filter that limits the group to your specific vCD cells, filtering by either hostname or IP address.
  • Save.
  • Install the LI agent on each vCD cell
    • Download the agent
      • Administration -> Management -> Agents
      • At the bottom of the page you can download the agent.
    • Copy the Linux RPM file to the /tmp folder on the vCD cell. A good tool to use is WinSCP.
    • Install agent
      • rpm -U VMware-Log-Insight-Agent-3.3.1-3636434.noarch_10.10.30.74.rpm
    • Since we downloaded the agent directly from our Log Insight server, liagent.ini should already be populated with your server's IP address.  You can verify this by reviewing the ini file and looking for the hostname entry (see the sample after this list): cat /etc/liagent.ini
  • You will now see the agent on the Log Insight server. Verify that the agent is matched correctly by your vCD agent group.
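
For reference, the relevant section of /etc/liagent.ini looks something like this (the hostname below is a placeholder; yours should already point at your own Log Insight server):

    [server]
    hostname=loginsight.example.com
    ; Protocol and port are commented out by default (cfapi on port 9000)
    ;proto=cfapi
    ;port=9000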

 

Links:

https://kb.vmware.com/kb/2004564

Attending VMworld 2016, Las Vegas

VMworld is around the corner, and I hope to meet a bunch of people out there. I am looking forward to the new announcements and sessions.

I will already be there this Friday to take part in the Partner Exchange over the weekend, and I will be joining Anthony Spiteri's vGolf on Sunday morning as well as some other gatherings like Sips and Stogies.

Here is a link to all the other gatherings at VMworld:

http://www.vmworld.com/en/gatherings.html

If you want to attend VMworld's customer appreciation party on Wednesday, you have to access your VMworld account and sign the waiver; if you don't, you will not be allowed to go.  Signing the waiver can be a bit convoluted, so just follow the link below, click Submit, and on the next page scroll down and accept the waiver agreement:

https://vmworld2016.lanyonevents.com/portal/myAccount.ww


Some sessions and announcements to look out for:

I think VMware's container solution, Photon, is getting a lot of attention.

I am looking forward to seeing how the Arkin technology will be integrated into NSX.

Hopefully I will get to hear more on vCloud Director for Service Providers with the release of the Advanced Networking Services.

vSphere 6.5 (I am taking part in the beta).

There seem to be quite a few sessions on vCloud Air, which is interesting.

Some of the Extreme Performance series sessions.

VSAN: Migrate VSAN cluster from vSS to vDS

How to migrate a VSAN cluster from vSS to vDS
I am sure some of you are currently running a VSAN cluster in some shape or form, whether in a POC, development, or production environment.  It provides a cost-effective solution that is great for remote offices or even management clusters, and it can be implemented and managed very easily.  But as the saying goes, nothing ever comes easy, and you have to work for it.  The same goes here: there are a lot of prerequisites for a VSAN environment that are crucial for implementing a healthy system that performs to its full potential.  I will not go into much detail here; feel free to contact us if any services are required.
One of the recommendations for VSAN is to use a vDS, and your VSAN license actually includes the ability to use a vDS, which lets you take advantage of simplified network management regardless of the underlying vSphere edition.
If you migrate from a vSS to a vDS, the steps are a bit different from your normal migration.  I recommend you put the host into maintenance mode with Ensure accessibility.  Verify the uplink used by the VSAN VMkernel, then use Manage physical network adapters to remove the vmnic from the vSS and add it to the vDS.  Now migrate the VMkernel to the vDS.  If you review the VSAN health at this point, the network check will show Failed.
To verify multicast network traffic is flowing from your host, use the following command on the ESXi host in a bash shell:
#tcpdump-uw -i vmk2 -n -s0 -t -c 20 udp port 23451 or udp port 12345
To review your multicast network settings:
#esxcli vsan network list
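
The output will look something like this (the interface name and addresses below reflect the VSAN defaults; yours may differ):

    Interface
       VmkNic Name: vmk2
       IP Protocol: IP
       Agent Group Multicast Address: 224.2.3.4
       Agent Group Multicast Port: 23451
       Master Group Multicast Address: 224.1.2.3
       Master Group Multicast Port: 12345
       Multicast TTL: 5
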
Ruby vSphere Console (RVC) is also a great tool to have in your arsenal for managing VSAN, and the following command can be used to review the VSAN state:
vsan.check_state
To re-establish the network connection you can use the following command:
vsan.reapply_vsan_vmknic_config
Rerun the VSAN health test and verify the Network check shows Passed.
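
As a quick sketch, an RVC session for these checks looks something like this (the vCenter address, datacenter, cluster, and host names are placeholders for my lab):

    rvc administrator@vsphere.local@vcenter.example.com
    > cd /vcenter.example.com/Datacenter/computers
    > vsan.check_state VSAN-Cluster
    > vsan.reapply_vsan_vmknic_config VSAN-Cluster/hosts/esx01.example.com
    > vsan.check_state VSAN-Cluster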

Now that the VSAN network is up and running, you can migrate the rest of the VMkernel adapters.

VSAN – on-disk upgrade error "Failed to realign following Virtual SAN objects"

I upgraded the ESXi hosts from 6.0 GA to 6.0 U2 and selected the upgrade for the VSAN on-disk format version; however, this failed with the following error message:

“Failed to realign following Virtual SAN objects:  XXXXX, due to object locked or lack of vmdk descriptor file, which requires manual fix”

I reviewed the VSAN health log file at the following location:
/storage/log/vmware/vsan-health/vmware-vsan-health-service.log

I grepped the log for the relevant entries:

grep realigned vmware-vsan-health-service.log
grep Failed vmware-vsan-health-service.log

I was aware of this issue from previous blog posts on the same problem and knew of KB 2144881, which made the task of cleaning up objects with missing descriptor files much easier.

I ran the script: python VsanRealign.py -l /tmp/vsanrealign.log precheck

However, I received another alert, and the Python script did not behave as it should, indicating that a swap file either had multiple references or was not found.

I then used RVC to review the object info for the UUID in question.
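
In RVC this can be done with vsan.object_info; a quick sketch, with the cluster path and object UUID as placeholders:

    > vsan.object_info VSAN-Cluster <object-uuid>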

I used RVC again to try and purge any inaccessible swap files:
vsan.purge_inaccessible_vswp_objects ~cluster

No objects were found.

I then proceeded to review the vmx file for the problem VM in question and found a reference only to the original *.vswp file, and none to the file with the additional extension *.vswp.41796.

Every VM on VSAN has 3 swap files:
vmx-servername*.vswp
servername*.vswp
servername*.vswp.lck

I figured this servername*.vswp.41796 was just a leftover file that bears no reference to the VM, and that this was what was causing the on-disk upgrade to fail.

I proceeded to move the file to my /tmp directory.  (Please be very careful with deleting or moving any files within a VM folder; this is done at your own risk, and if you are not sure, I highly recommend you contact VMware support for assistance.)
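
For illustration, the cleanup looked roughly like this (the datastore path and file name below are hypothetical placeholders; again, at your own risk):

    # From the ESXi shell, in the VM's folder on the VSAN datastore
    cd /vmfs/volumes/vsanDatastore/servername
    ls *.vswp*
    # Move the leftover swap file out of the VM folder
    mv servername.vswp.41796 /tmp/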

I ran the Python realign script again. This time I received a prompt to perform the autofix actions to remove this same object, for which I selected yes.


I ran the on-disk upgrade again and it succeeded.

Even though VMware provides a great Python script that will in most instances help you clean up the VSAN disk groups, there are times when it will not work as planned, and then you just have to do a bit more troubleshooting and perhaps make a phone call to GSS.

Links:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144881

VSAN – Changing Dell Controller from RAID to HBA mode

So I recently had to make some changes for a customer to set the PERC controller to HBA (non-RAID) mode, since it was previously configured in RAID mode with all disks in RAID 0 virtual disks.  Each disk group consists of 5 disks: 1 x SSD and 4 x HDD.

I cannot overstate this: make sure all firmware and drivers are up to date with the versions listed on the HCL.

Here are some prerequisites for moving from RAID to HBA mode (I am not going to get into the details of performing these tasks):

  • All virtual disks must be removed or deleted.
  • Hot spare disks must be removed or re-purposed.
  • All foreign configurations must be cleared or removed.
  • All physical disks in a failed state must be removed.
  • Any local security key associated with SEDs must be deleted.

I followed these steps:

  1. Put the host into maintenance mode with Full data migration. You have to select Full data migration since we will be deleting the disk group.
    1. This process can be monitored in RVC using the command vsan.resync_dashboard ~cluster
  2. Delete the VSAN disk group on the host in maintenance mode.
  3. Use the virtual console in iDRAC and select Boot next time into Lifecycle Controller.
  4. Reboot the host
  5. From the Lifecycle Controller main menu
  6. System Setup
  7. Advanced hardware configuration
  8. Device Settings
  9. Select controller card
  10. Select Controller management
  11. Scroll down and select Advanced controller management
  12. Set Disk Cache for Non-RAID to Disable
  13. Set Non RAID Disk Mode to Enabled