vSAN 7: What’s new

In this post we will focus on three key product enhancements in vSAN 7.


Simpler Lifecycle Management

Increase reliability and reduce the number of tools

In vSphere 6.x, hosts are individually managed with VMware Update Manager (VUM), but vSAN is a cluster-based solution, so this host-by-host approach is not ideal and can create inconsistencies.

vSphere 7 introduces an entirely new cluster-level solution to unify software and firmware management.

  • This new approach is built around a desired-state model for all lifecycle operations
    • Monitors compliance “drift” in real time
    • Then provides the ability to remediate back to the desired state
  • Built to manage the full server stack in the cluster
    • Hypervisor
    • Drivers
    • Firmware
  • A modular framework supports vendor firmware plugins, allowing vendors to integrate their own firmware and respective drivers into the image that is applied to hosts.
    • Dell
    • HPE
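To make the desired-state idea concrete, it boils down to comparing a declared image against what each host is actually running. Here is a minimal conceptual sketch; the field names and version strings are purely illustrative, not the real vLCM data model:

```python
# Conceptual sketch of desired-state "drift" detection (not the real vLCM API).
# Field names and version strings below are illustrative assumptions.
DESIRED_IMAGE = {
    "esxi": "7.0.0-15843807",
    "vendor_addon": "DEL-A00",
    "firmware": {"bios": "2.5.4", "nic": "19.0.12"},
}

def find_drift(host_state, desired=DESIRED_IMAGE):
    """Return the components where a host deviates from the desired image."""
    drift = []
    for key in ("esxi", "vendor_addon"):
        if host_state.get(key) != desired[key]:
            drift.append(key)
    for component, version in desired["firmware"].items():
        if host_state.get("firmware", {}).get(component) != version:
            drift.append("firmware/" + component)
    return drift

# A host whose BIOS lags behind the desired image:
host = {
    "esxi": "7.0.0-15843807",
    "vendor_addon": "DEL-A00",
    "firmware": {"bios": "2.5.2", "nic": "19.0.12"},
}
print(find_drift(host))  # ['firmware/bios']
```

The point is that remediation becomes "make every host report an empty drift list", instead of tracking individual patches per host.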

Native File Services

Now we're talking! vSAN 7 introduces a fully integrated file service that is built right into the hypervisor and managed through vCenter Server.

  • Annotation 2020-03-07 133517.png
  • Provision vSAN cluster capacity for file shares
  • Supports NFS v4.1 & v3
  • Supports quotas for file shares
  • Suited for Cloud Native & traditional workloads on vSAN
    • I don’t think this capability is looking to replace large-scale filers; it is more about solving specific use cases within that particular cluster.
  • Works with common vSAN/vSphere features

There are many use cases for both traditional VMs as well as for cloud native applications. Let’s look at the latter.

Extension and integration: Kubernetes running on vSphere and vSAN

  • Native file services will offer file-based persistent storage
  • vSAN also provides persistent block storage through SPBM for vSAN and vVols, which is associated with a StorageClass in Kubernetes
  • Persistent volume encryption and snapshot support
  • Volume resizing support
  • Support for some different tooling options
    • Wavefront
    • Prometheus
    • vROps
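To show how SPBM carries through to Kubernetes: with the vSphere CSI driver, a StorageClass references a vCenter storage policy by name, so policy-based management applies to persistent volumes too. A minimal sketch (the policy name "vSAN-Gold" is an illustrative placeholder, not a built-in policy):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-block
provisioner: csi.vsphere.vmware.com   # vSphere CSI driver
parameters:
  # Name of an SPBM storage policy defined in vCenter; "vSAN-Gold" is illustrative.
  storagepolicyname: "vSAN-Gold"
```

A PersistentVolumeClaim that names this StorageClass will then be provisioned with that policy applied.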


VMware announces Cloud Infrastructure for Modern Applications – It’s a game changer!

Every year people anxiously wait for that shiny new phone, tablet, laptop or car, and in most cases they end up being disappointed with an expensive new product that is not visually pleasing or provides very few advantages over the one they already have. (Pixel 4, wink wink)

Well, every year some of us get just as excited about software releases, and in most cases we experience the same ups and downs with new releases. But lately VMware, with all their amazing strategic acquisitions, has been hitting it out of the park. This year is no different and, in my opinion, a game changer, with VMware expanding their portfolio to accelerate their strategy for any cloud, any app, any device, as well as making Kubernetes available for enterprise adoption in vSphere, public clouds and edge locations.


The IT industry is changing at a fast pace, and so has the definition of an application. Apps used to be some monolith that consisted of a VM for the app and maybe another VM for the database, and VMware provides a mature platform for these needs. Now, however, we are starting to see distributed systems where parts of your application might be running in a VM, other parts running in containers that leverage Kubernetes as the control plane, and even consuming capabilities from other microservices like databases and serverless functions.

Challenges for modern apps

These modern applications create challenges for developers and line-of-business leaders, and they create complexity for the VI admin around provisioning, logging, monitoring, troubleshooting, backup/restore, networking and security. Not just on-prem, but in the cloud as well. VMware is looking to provide value across three different pillars.

vSphere 7 - Essential services for the modern hybrid cloud
These will be delivered through vSphere 7 capabilities as well as what VMware is referring to as “vSphere with Kubernetes”.


Let’s first do a deep dive into the non-Kubernetes side and of course where everything always starts, our beloved vSphere:


The primary focus in vSphere 7 is simplified lifecycle management, enhanced intrinsic security capabilities, and capabilities that accelerate application delivery. The capabilities and enhancements that we will be discussing will be available with Enterprise Plus.


Since we have so much to cover, I have broken up the detailed review of each product’s new features and enhancements into different blog posts:

vSphere 7: What’s new

vSAN 7: What’s new

vRealize Management 8.1 (vROPS, vRLI, vRA): What’s new

VMware Cloud Foundation 4: What’s new

The announcements above are great, but the star of the show is VMware’s entry into the container world with worthy products that will help customers navigate with ease around the complexities of Kubernetes. You can read about it in more detail on my blog post below.

VMware new product announcements: vSphere with Kubernetes (Project Pacific) & Tanzu App Portfolio

 

(All images on this page courtesy VMware)

VSAN : Migrate VSAN cluster from vSS to vDS

How to migrate a VSAN cluster from vSS to vDS
I am sure some of you are currently running a VSAN cluster in some shape or form, whether in a POC, development or production environment. It provides a cost-effective solution that is great for remote offices or even management clusters, and it can be implemented and managed very easily. But as the saying goes, nothing ever comes easy and you have to work for it. The same goes here: there are a lot of prerequisites for a VSAN environment that are crucial for implementing a healthy system that performs to its full potential. I will not go into much detail here; feel free to contact us if any services are required.
One of the recommendations for VSAN is to use a vDS, and your VSAN license actually includes the ability to use a vDS, which allows you as a customer to take advantage of simplified network management regardless of the underlying vSphere edition.
If you migrate from a vSS to a vDS, the steps are a bit different than a normal migration. I recommend you put the host into maintenance mode with Ensure accessibility. Verify the uplink used for the VSAN VMkernel, then use Manage physical network adapters to remove the vmnic from the vSS and add it to the vDS. Now migrate the VMkernel to the vDS. If you review VSAN health at this point, the network check will show failed.
To verify multicast network traffic is flowing from your host, use the following command in the ESXi shell:
#tcpdump-uw -i vmk2 -n -s0 -t -c 20 udp port 23451 or udp port 12345
To review your multicast network settings:
#esxcli vsan network list
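If you want to check those settings from a script, the output of `esxcli vsan network list` is simple "Field: value" text that is easy to parse. This sketch runs against a captured sample; the sample below reflects the 6.x output format as I recall it, so verify the field names against your own host:

```python
# Parse "Field: value" pairs from a captured sample of `esxcli vsan network list`.
# The sample text is illustrative; capture real output from your own host.
sample = """\
Interface
   VmkNic Name: vmk2
   Agent Group Multicast Address: 224.2.3.4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group Multicast Port: 12345
"""

def parse_vsan_network(text):
    """Collect the colon-separated settings into a dict."""
    settings = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            settings[key.strip()] = value.strip()
    return settings

cfg = parse_vsan_network(sample)
print(cfg["VmkNic Name"])                 # vmk2
print(cfg["Agent Group Multicast Port"])  # 23451
```

Note that the two multicast ports are the same ones the tcpdump command above listens on.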
Ruby vSphere Console (RVC) is also a great tool to have in your arsenal for managing VSAN, and the following command can be used to review the VSAN state:
vsan.check_state
To re-establish the network connection you can use the following command:
vsan.reapply_vsan_vmknic_config
Rerun the VSAN health test and verify that Network shows passed.

Now that the VSAN network is up and running, you can migrate the rest of the VMkernels.

VSAN – on-disk upgrade error "Failed to realign following Virtual SAN objects"

I upgraded the ESXi hosts from 6.0 GA to 6.0U2 and selected the upgrade for the VSAN on-disk format version; however, this failed with the following error message:

“Failed to realign following Virtual SAN objects:  XXXXX, due to object locked or lack of vmdk descriptor file, which requires manual fix”

I reviewed the VSAN health log files at the following location:
/storage/log/vmware/vsan-health/vmware-vsan-health-service.log

I then grepped the log for “realigned” and “Failed”.

I was aware of this issue due to previous blog posts on the same problem and knew of KB 2144881, which made the task of cleaning objects with missing descriptor files much easier.

I ran the script: python VsanRealign.py -l /tmp/vsanrealign.log precheck.

However, I received another alert, and the Python script did not behave as it should, indicating that a swap file either had multiple references or was not found.

I then used RVC to review the object info for the UUID in question.

I used RVC again to try and purge any inaccessible swap files:
vsan.purge_inaccessible_vswp_objects ~cluster

No objects were found.

I then proceeded to review the vmx files for the problem VM in question and found a reference only to the original *.vswp file, and none to the file with the additional extension *.vswp.41796.

Every VM on VSAN has 3 swap files:
vmx-servername*.vswp
servername*.vswp
servername*.vswp.lck

I figured this servername*.vswp.41796 is just a leftover file that bears no reference to the VM, and that this is what was causing the on-disk upgrade to fail.
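Given the naming convention above, such leftovers are easy to spot programmatically. A small illustrative check (this only flags candidates for investigation; it is no substitute for KB 2144881 or VMware support):

```python
import re

# A VM on VSAN normally has vmx-<name>.vswp, <name>.vswp and <name>.vswp.lck.
# Anything like <name>.vswp.<number> is a candidate leftover file.
EXPECTED = re.compile(r".*\.vswp(\.lck)?$")

def leftover_swap_files(filenames):
    """Return swap-related files that do not match the expected naming pattern."""
    return [f for f in filenames if ".vswp" in f and not EXPECTED.match(f)]

files = ["vmx-web01.vswp", "web01.vswp", "web01.vswp.lck", "web01.vswp.41796"]
print(leftover_swap_files(files))  # ['web01.vswp.41796']
```

The file name web01.vswp.41796 here is a made-up example mirroring the real leftover I found.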

I proceeded to move the file to my /tmp directory. (Please be very careful with deleting or moving any files within a VM folder; this is done at your own risk, and if you are not sure, I highly recommend you contact VMware support for assistance.)

I ran the python realign script again. This time I received a prompt to perform the autofix actions to remove the same object in question, for which I selected yes.


I ran the on-disk upgrade again and it succeeded.

Even though VMware provides a great Python script that will in most instances help you clean up the VSAN disk groups, there are times when this will not work as planned, and then you just have to do a bit more troubleshooting and perhaps make a phone call to GSS.

links:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144881

VSAN – cache disk unavailable when creating disk group on Dell

I ran into an issue at a customer where the SSD to be used as the cache disk in the VSAN disk group was showing up as a regular HDD. However, when I reviewed the storage devices, the disk was visible and marked as flash… weird. So what is going on here?

As I found out, this is due to a flash device being used with a controller that does not support JBOD.

To fix this I had to create a RAID 0 virtual disk for the SSD. If you have a Dell controller, this means you have to set the mode to RAID, but make sure that all the regular HDDs to be used in the disk group are set to non-RAID! Once the host is back online, you have to go and mark the SSD drive as flash. This is the little “F” icon in the disk devices view.

This environment was configured with all the necessary VSAN prerequisites for Dell in place; you can review these in the following blog post:
http://virtualrealization.blogspot.com/2016/07/vsan-and-dell-poweredge-servers.html

Steps to setup RAID-0 on SSD through lifecycle controller:

  1. Lifecycle Controller
  2. System Setup
  3. Advanced hardware configuration
  4. Device settings
  5. Select controller (PERC)
  6. Physical disk management
  7. Select SSD
  8. From the drop-down select “Convert to RAID capable”
  9. Go back to home screen
  10. Select hardware configuration
  11. Configuration wizard
  12. Select RAID configuration
  13. Select controller
  14. Select Disk to convert from HBA to RAID (if required)
  15. Select RAID-0
  16. Select Physical disks (SSD in this case)
  17. Select Disk attribute and name Virtual Disk.
  18. Finish
  19. Reboot
After the ESXi host is online again, you have to change the disk type to flash. This is because RAID abstracts away most of the physical device characteristics, including the media type.

  • Select ESXi host 
  • Manage -> Storage -> Storage adapters
  • Select vmhba0 from PERC controller
  • Select the SSD disk
  • Click on the “F” icon above.

VSAN – Changing Dell Controller from RAID to HBA mode

So I recently had to make some changes for a customer to set the PERC controller to HBA (non-RAID) mode, since previously it was configured in RAID mode and all disks were in RAID 0 virtual disks. Each disk group consists of 5 disks: 1 x SSD and 4 x HDD.

I cannot overstate this: make sure you have all the firmware and drivers up to date, as listed in the HCL.

Here are some prerequisites for moving from RAID to HBA mode. I am not going to get into the details of performing these tasks.

  • All virtual disks must be removed or deleted.
  • Hot spare disks must be removed or re-purposed.
  • All foreign configurations must be cleared or removed.
  • All physical disks in a failed state must be removed.
  • Any local security key associated with SEDs must be deleted.

I followed these steps:

  1. Put the host into maintenance mode with full data migration. You have to select full data migration since we will be deleting the disk group.
    1. This process can be monitored in RVC using command vsan.resync_dashboard ~cluster
  2. Delete the VSAN disk group on the host in maintenance.
  3. Use the virtual console in iDRAC and select boot next time into the Lifecycle Controller
  4. Reboot the host
  5. From LifeCycle Controller main menu
  6. System Setup
  7. Advanced hardware configuration
  8. Device Settings
  9. Select controller card
  10. Select Controller management
  11. Scroll down and select Advanced controller management
  12. Set Disk Cache for Non-RAID to Disable
  13. Set Non RAID Disk Mode to Enabled

VSAN – migrate VSAN cluster to new vCenter Server

I recently had to perform a VSAN cluster migration from one vCenter Server to another. This sounds like a daunting task, but it ended up being very simple and straightforward thanks to VSAN’s architecture not relying on vCenter Server for its normal operation (nice one, VMware!). As a bonus, the VMs do not need to be powered off or lose any connectivity.

Steps to perform:
  • Deploy a new vCenter Server and create a vSphere Cluster
  • Enable VSAN on the cluster.
  • Install VSAN license and associate to cluster
  • Disconnect one of the ESXi hosts from your existing VSAN Cluster
  • Add previously disconnected Host to the new VSAN Cluster on your new vCenter Server.
    • You will get a warning within the VSAN configuration page stating there is a “Misconfiguration detected”. This is normal because the ESXi host cannot communicate with the other hosts in the cluster it was configured with.
  • Add the rest of the ESXi hosts.
  • After all the ESXi hosts are added back, the warning should disappear.