
Deploying VCSA 6.5 on VMware Distributed Switches (DVSwitches)

In one of my labs, all the hosts and VMs are on a VMware distributed switch to mimic one of the production environments. When I planned the upgrade from vSphere 5.5 to 6.5 in that lab and started deploying VCSA 6.5 using the .iso image - I came across this:



So, VMware doesn't support distributed switches with non-ephemeral port groups when deploying the VCSA and PSC on 6.5. I read a few articles, and most of them suggested migrating your vCenter 5.5 to a standard switch as well as building the new VCSA 6.5 on a standard switch for a smooth migration.

But what if the environment is 100% on distributed switches and going back and creating a standard switch is not an option? Well, there is a workaround to get past this screen. You will need to create a port group on the distributed switch with "ephemeral" port binding - you can do it in the web client, or script it as shown below.
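If you prefer to script the workaround instead of clicking through the web client, here is a minimal pyvmomi sketch that creates an ephemeral port group on an existing distributed switch. The vCenter address, credentials, switch name "dvSwitch01", port group name and VLAN ID 100 are all placeholders for illustration - adjust them for your environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details - replace with your own vCenter and credentials.
VCENTER = "vcenter55.lab.local"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"

# Lab vCenters usually run a self-signed certificate, so skip verification here.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
content = si.RetrieveContent()

# Find the existing distributed switch by name ("dvSwitch01" is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch01")
view.Destroy()

# Build a port group spec with ephemeral binding and a VLAN tag.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "VM-Network-100-ephemeral"
spec.type = "ephemeral"  # the key setting that lets the VCSA installer see the network
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=100, inherited=False)
spec.defaultPortConfig = port_config

# Create the port group and wait for the task to finish.
WaitForTask(dvs.AddDVPortgroup_Task([spec]))
print("Created ephemeral port group:", spec.name)

Disconnect(si)
```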




Once you create that port group on the distributed switch, all the VLANs/port groups from that distributed switch show up and you can deploy the VCSA on the distributed switch. As you see in the screenshot below - I have created one port group with the suffix "ephemeral", and now I can see all my VLANs/port groups in the network selection view.



It may not be the way VMware recommends building the VCSA 6.5 appliance, but it is the way to sneak in if you have everything on distributed switches.

Another way - if you want to create just the VCSA 6.5 with an embedded PSC - is to use the OVA file that is in the VCSA installer folder. So, instead of using the installer.exe file (located under vcsa-ui-installer\win32) for the install, use the .ova file (located under the vcsa folder) to deploy the appliance. Once the appliance is deployed, you can browse to https://<VCSA IP address>:5480 to get the option to configure, upgrade or migrate the vCenter.
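Before opening the browser, a quick sanity check can confirm the appliance management interface (VAMI) is actually listening on port 5480. This is just a standard-library Python sketch; the IP address below is a placeholder for your freshly deployed appliance, and certificate verification is skipped because the appliance starts with a self-signed certificate.

```python
import ssl
import http.client

# Placeholder - replace with the IP address of your newly deployed VCSA.
VCSA_IP = "192.168.1.50"

# Skip certificate verification for this reachability check only.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

conn = http.client.HTTPSConnection(VCSA_IP, 5480, context=context, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print(f"VAMI responded with HTTP {resp.status} - "
      f"open https://{VCSA_IP}:5480 in a browser to continue.")
conn.close()
```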
