vCloud Director – Migration of storage to new storage profile

Scenario:
Migration from local direct-attached storage on a single ESXi host to a more flexible environment with multiple ESXi hosts and a new shared storage profile.
Some considerations:
  • The migration will create full-clone VMs on the new storage profile, so take storage usage into consideration before starting the move. Consider thin provisioning the VMs' hard disks.
  • Decide whether you can afford to shut down the VMs for the migration; this will affect the effort required.
  • Not just vApps need to be moved; also remember your vApp templates and media. I would start with the vApps and media first.

Within vSphere client:
Go to VM Storage Profiles (normally accessible from the main home screen in both the regular and web clients).
Select “Manage storage capabilities”.
Create a new storage capability and provide a name.
Done.
Select “Create VM storage profile”.
Provide a name.
Select the storage capability you just created.
Done.
Create a storage cluster within the datacenter that is assigned to your Provider VDC.
Add the shared storage datastores to the cluster.
Verify that all the datastores are seen by the ESXi hosts.
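If you prefer a script over clicking through every host, here is a minimal pyVmomi sketch that prints which ESXi hosts have each datastore mounted. The vCenter address, credentials and the “shared-” datastore name prefix are placeholders for your environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]            # first datacenter in the inventory
    for ds in dc.datastore:
        if not ds.name.startswith("shared-"):         # hypothetical naming convention
            continue
        hosts = [mount.key.name for mount in ds.host if mount.mountInfo.mounted]
        print(f"{ds.name}: mounted on {len(hosts)} host(s): {', '.join(hosts)}")
finally:
    Disconnect(si)
```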
Very important (I forgot this step, which caused users to be unable to deploy vApps):

  • Within the vCenter client select the datastore, right-click and select “Assign User-Defined Storage Capabilities”.
  • From the drop-down, select the previously created storage capability.
These datastores should now be visible within vCloud Director.
Within vCloud director:
There are a few ways to migrate your vApps over to the new storage profile. It is refreshing to see multiple options, and this certainly became very useful for us, since there is a bug in vCloud Director 5.1.2.1068441 where using the “Move to” feature does not retain your IP/MAC addresses.
(A previous post describes the symptoms and the workaround.)
Taking the above bug into consideration for retaining MAC/IP addresses, and the different amount of work involved with each option, here are the ways to migrate over.
Method 1:
Take into consideration that in my version (5.1.2.1068441), performing this task will not retain the source MAC/IP addresses, and if you have NATs configured for VMs on the Edge gateway these will no longer work; you will have to recreate them with the new Org VDC IP assigned to each VM. Subsequent versions will probably have this resolved, so it may not be an issue for you. Run a test before moving production vApps!
1. Shut down your vApp, right-click it and select “Move to”.
2. Next to each VM, click the drop-down box and change the storage profile to the newly created one.
3. Perform this for each VM and select OK.
4. Done, and super easy! (A scripted check of the result follows after these steps.)
5. The same steps can be used for vApp templates and media.
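If you want to confirm the result afterwards without opening every vApp, the vCloud Director query service can list each VM together with its current storage profile. Below is a rough Python sketch; the host, credentials and the storageProfileName attribute are assumptions based on the 5.1 query API, so verify the field names against your own version:

```python
import xml.etree.ElementTree as ET
import requests

VCD = "https://vcloud.example.local"        # placeholder vCD cell address
session = requests.Session()
session.verify = False                      # lab certificate; adjust for production
session.headers["Accept"] = "application/*+xml;version=5.1"

# Log in; vCD returns the session token in the x-vcloud-authorization header.
resp = session.post(f"{VCD}/api/sessions", auth=("admin@MyOrg", "secret"))
resp.raise_for_status()
session.headers["x-vcloud-authorization"] = resp.headers["x-vcloud-authorization"]

# Query all VMs and print vApp name, VM name and current storage profile.
records = session.get(f"{VCD}/api/query?type=vm&format=records")
records.raise_for_status()
for rec in ET.fromstring(records.content):
    if rec.tag.endswith("VMRecord"):
        print(rec.get("containerName"), rec.get("name"), "->", rec.get("storageProfileName"))
```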
Method 2:
This method requires a lot more effort, but you can perform a hot Storage vMotion, which means you don’t have to shut down the VM. It will also retain the MAC/IP address on the Org VDC network.
1. Open the vApp.
2. Select the VM properties.
3. Under the General tab, scroll down to the bottom and select the new storage profile from the drop-down.
4. This will perform a Storage vMotion on the backend, which can be observed from the vSphere client.
5. Perform this task on each VM.
This will, however, only move the VMs. The vSE (vShield Edge) VM which serves the network for the vApp does not get moved, so you have to do this manually through the vSphere client with a Storage vMotion (a scripted sketch of this follows below). (NOTE: I performed this task under the supervision of VMware support, and they do require that you have the latest versions of ESXi and vCenter Server installed. Perform at your own risk; I take no responsibility, but this worked perfectly for me without any problems.) Versions I currently have installed:
a. vCenter Server 5.1.0 1123961
b. vCloud Director 5.1.2.1068441
c. ESXi 5.1.0 1157734
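For reference, here is roughly what that manual Storage vMotion of the vSE VM looks like as a pyVmomi script. The vCenter address, credentials, Edge VM name and target datastore name are placeholders for your environment; the vSphere client “Migrate > Change datastore” wizard achieves the same thing interactively:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type with this name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    edge_vm = find_by_name(vim.VirtualMachine, "vse-MyOrgNetwork")     # placeholder vSE name
    target_ds = find_by_name(vim.Datastore, "shared-datastore-01")     # placeholder datastore

    # Relocate only the storage; host and resource pool stay where they are.
    spec = vim.vm.RelocateSpec(datastore=target_ds)
    WaitForTask(edge_vm.RelocateVM_Task(spec))
    print("Storage vMotion completed for", edge_vm.name)
finally:
    Disconnect(si)
```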
Interesting result and resolution:
After I moved all my vApps and vApp templates, I reviewed the old datastores with the Lctree tool and it showed no VMs left on them. BUT when I went to the vSphere client and looked at the datastore, it still showed a bunch of VMs associated with the storage. What is going on?
After some debugging I found that it is only media files that are still linked to the VMs as ISOs. Just eject the CD/floppy from the virtual machine by right-clicking the VM within vCloud Director.
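If you have a lot of VMs, a quick script can tell you which ones still have an ISO mounted from the old storage. Here is a small pyVmomi sketch; the connection details and the old datastore name are placeholders, and the actual eject should still be done from vCloud Director as described above so vCD stays in sync:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

OLD_DS = "local-ds-01"    # placeholder: name of one of the old local datastores

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:
            continue
        for dev in vm.config.hardware.device:
            backing = getattr(dev, "backing", None)
            if (isinstance(dev, vim.vm.device.VirtualCdrom)
                    and isinstance(backing, vim.vm.device.VirtualCdrom.IsoBackingInfo)
                    and backing.fileName.startswith(f"[{OLD_DS}]")):
                print(f"{vm.name} still mounts {backing.fileName}")
    view.DestroyView()
finally:
    Disconnect(si)
```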
Resources:

Lctree – a great tool for viewing the linked clones and any VMs still on the old storage profile datastores.

vCloud Director 5.1.2 – Bug – “Retain IP/MAC resources” does not apply when you use the “Move to” task to move to a new storage profile.

In a previous blog post I mentioned the usefulness of this setting, but during our storage migration to a new profile for vCloud Director we ran into a bug where it is not applied.
I opened a case with VMware; they verified it as a bug and it now has an SR. Hopefully this gets fixed in the next build.
My current vCloud Director version where this applies:
vCloud Director 5.1.2.1068441
Debugging the problem:

When you have to move vApps to a new storage profile, the easiest way is to shut down the vApp and select “Move to”.

However, when you perform this task the vApp will actually release the Org VDC NAT’d address for the VM.
If you have any NATs configured on the Edge gateway, these will now be out of sync.

Workaround:
I will discuss this in the next blog post.

VMware Labs Flings: Lctree – Visualization of linked clone VM trees


Flings: Lctree

I was just pointed to Flings by the VMware support team.
These apps and tools, built by VMware engineers, are great, and I have already found my favorite for vCloud Director.



This tool is designed for the visualization of linked clone VM trees created by VMware vCloud Director when using fast provisioning.  
I managed Lab Manager before and always found the built-in context view feature useful for showing the relationships and dependencies between virtual machines.
This helped me a lot in finding information about shadow copies in our environment, as well as visualizing the chain length and making decisions on when to consolidate.


These applications are not supported, so there are no fixes; use at your own risk.


vCloud Director setting – Retain IP/MAC resources

This is a great and very useful setting, which I hope all users are aware of.
Scenario:
We make use of an internal vApp network on each vApp, which is then connected to the Org VDC network. This means that each VM has a NAT’d address to its own assigned Org VDC IP.
On the Org VDC IP we then again use NATs to our external networks.
The destination IP for these NATs on the Edge gateway is the Org VDC IP address assigned to the VM.
This means that the Org VDC IP address must always stay the same for the associated VM within the vApp.
Why use it:
Whenever you shut down a vApp without this setting enabled, the vApp will release the Org VDC IP address and the VM will get a new address. NOT good!
Now the NAT on the Edge gateway is out of sync with the Org VDC IP address of the NAT’d VM, and you will run into problems.
Check it!

Host Spanning not working for VMs that run on different hosts within a vApp using a vCDNI network.

By default, when a vApp is created, a new port group is created within the associated vDS.
From testing, and learning the hard way, it seems the first uplink listed in the vDS is always assigned as the active uplink in the vApp’s port group, with load balancing set to “Route based on the originating virtual port ID”.
This of course means you cannot set up teaming/EtherChannel on the physical uplink ports, and whichever uplink is assigned needs to have the same VLAN ID as is configured for the vCDNI network.
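You can verify this behaviour by inspecting the teaming policy of the vApp port groups. The sketch below (pyVmomi, with placeholder connection details and vDS name) prints the load-balancing policy and active uplinks of every port group on a vDS:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "vcdni-dvs")    # placeholder vDS name
    view.DestroyView()

    for pg in dvs.portgroup:
        policy = pg.config.defaultPortConfig.uplinkTeamingPolicy
        if policy is None or policy.uplinkPortOrder is None:
            continue
        print(pg.name,
              "| load balancing:", policy.policy.value,
              "| active uplinks:", list(policy.uplinkPortOrder.activeUplinkPort))
finally:
    Disconnect(si)
```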

Debugging the problem:

In my situation the vCloud environment started with a single host and direct-attached storage, so it only had a single vDS, which held the port groups for management, vMotion and the external networks, and to which vCDNI was associated as well.

This caused our management uplink to always be selected as the active uplink for newly created vApp port groups, since it was the first uplink listed in the vDS. We, however, did not want to assign the same VLAN and have traffic flow over the physical management ports; physical separation is always best in my opinion.

Solution:

I created a separate vDS to which I migrated the management and vMotion port groups (virtual adapters), as well as another for my external networks. This can be accomplished without downtime when you have two or more uplinks associated with the VMkernel.

On the vDS associated with the vCDNI network I removed all the uplinks.
(On each uplink, the associated vmnic has to be removed before you can delete the uplink from the vDS. This is accomplished as follows:
Select the host.
Select the Configuration tab.
Select Networking.
Select vSphere Distributed Switch.
Select “Manage physical adapters”.
Click Remove for the vmnic under the uplink name.)
I then set up two uplinks on the vDS associated with vCDNI and assigned the same VLAN ID on both uplinks’ physical ports.

Migrate Management and vMotion virtual Adapters (vmk0,vmk1) to new distributed virtual switch (vDS) without downtime.

  1. In vCenter Server, select Networking.
  2. Create a new vDS in the vCloud datacenter.
  3. Set the number of uplinks needed and name them appropriately.  In my case we have two uplinks each for vMotion and management, so four in total.  Use the same uplink names as on the original vDS.
  4. Create new management and vMotion port groups (different names; they cannot be the same) and remember to set your VLAN and balancing/teaming policies, but most importantly change the active uplinks to the newly created uplinks. (In the upcoming steps we will assign the physical adapters to the active uplinks.)
  5. Now go to Hosts and Clusters.
  6. Select the ESXi host and select Configuration -> Networking.
  7. Select vSphere Distributed Switch.
  8. Now you will see both vDSs.
  9. An updated, simplified procedure for steps 10–13 is included below; the original still works as well.
  10. On the original vDS, select “Manage physical adapters”.
  11. Now remove the physical adapter from the second management and vMotion uplinks, keeping the active primary uplink in place.
  12. After it is removed, select “Manage physical adapters” on the new vDS.
  13. Add the removed physical adapter to the new uplinks.
  14. On the new vDS, select “Manage virtual adapters”.
  15. Click Add.
  16. Select “Migrate existing virtual adapters”.
  17. Select the virtual adapters (vmk0, vmk1) from the old vDS and select the port group on the new vDS to associate them with on the move.
  18. Wait for the migration to complete.
  19. Now run steps 10 to 13 again to remove the remaining physical adapter from the original vDS uplink and add it to the new vDS uplink.
  20. Done. (A scripted sketch of the virtual adapter migration follows at the end of this post.)

UPDATE: I actually found a shortcut for the process in steps 10–13:
  • On the new vDS, select “Manage physical adapters”.
  • Under the uplink name, select “Click to Add NIC”.
  • Select the corresponding physical adapter on the original vDS.
  • You will be prompted to confirm that you want to move the physical adapter from the original vDS to the new one.
  • Voilà!
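For completeness, here is a rough pyVmomi sketch of the virtual adapter move (steps 14–17 above). The host, vDS, port group and vmk names are placeholders; the physical adapters must already have been moved to the new vDS uplinks (steps 10–13), otherwise you will lose connectivity:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type with this name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    host = find_by_name(vim.HostSystem, "esx01.example.local")              # placeholder host
    new_dvs = find_by_name(vim.DistributedVirtualSwitch, "mgmt-dvs-new")    # placeholder vDS
    new_pg = next(pg for pg in new_dvs.portgroup if pg.name == "mgmt-new")  # placeholder port group

    # Point vmk0 at the new distributed port group; its IP settings stay as they are.
    nic_spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=new_dvs.uuid, portgroupKey=new_pg.key))
    host.configManager.networkSystem.UpdateVirtualNic("vmk0", nic_spec)
    print("vmk0 migrated to", new_pg.name)
finally:
    Disconnect(si)
```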