
Updating VMware ESX

(vSwitch4) Please note that the above design contains two single points of failure: whenever the onboard NIC fails, my whole front end fails (and the same goes for the Mezzanine Card; if that fails, my whole storage connectivity is lost). Customer constraints, however, kept me from doing it the way displayed in the image below (which is obviously the best way technically).

So for iSCSI we selected MZ2:1-A and MZ2:2-A as the two links with 10Gb allocated, leaving 0Gb for MZ2:1-B, MZ2:2-B, and so on.

The final picture from a vSwitch perspective looks like this, where we separated:

- Service Console (1Gb – vSwitch0)
- VMotion (7Gb – vSwitch1)
- Fault Tolerance (1Gb – vSwitch2)
- VM Networks (1Gb – vSwitch3)

And gave the full 10Gb to the iSCSI storage.
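From the ESX service console, a layout like this could be sketched with the classic `esxcfg-vswitch` commands. Note that the vmnic numbers below are assumptions for illustration; map them to your own FlexFabric layout first (`esxcfg-nics -l` shows what the host actually sees), and the bandwidth split itself is done in Virtual Connect, not here.

```shell
# Hypothetical vmnic numbering -- verify against your own host.

esxcfg-vswitch -a vSwitch1                # vSwitch1 for VMotion
esxcfg-vswitch -L vmnic2 vSwitch1         # uplink via Interconnect Bay 1
esxcfg-vswitch -L vmnic3 vSwitch1         # uplink via Interconnect Bay 2
esxcfg-vswitch -A VMotion vSwitch1        # port group for the vmkernel port

esxcfg-vswitch -a vSwitch2                # vSwitch2 for Fault Tolerance
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A FT vSwitch2
```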

So for example, whenever the module in Interconnect Bay 1 fails, vmnic0, vmnic2, vmnic4 and vmnic6 will fail as well, causing a failover to be initiated from within ESX (when configured correctly, obviously 😉). When the module is powered on again, the vSwitch uplinks will be restored.
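Whether the failover actually behaved as expected can be checked from the service console; these are standard ESX commands, though the exact output layout will depend on your build. After pulling the Bay 1 module, the vmnics mapped to that bay should report link "down" while their Bay 2 partners stay "up":

```shell
# Link state per physical NIC -- the Bay 1 vmnics should show "down"
esxcfg-nics -l

# Per-vSwitch view -- shows which uplinks each vSwitch still has attached
esxcfg-vswitch -l
```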

This is the case for Interconnect Bays 1 and 2, but not for Interconnect Bays 5 and 6.

The “FRONTEND” SUS is connected via four 10Gb links to two Cisco 6509s (20Gb active / 20Gb passive). The “STORAGE” SUS is connected via four 10Gb links to two Cisco Nexus 5000s (20Gb active / 20Gb passive).

Word of advice: it is recommended to enable PortFast on the switch ports terminating the Shared Uplink Set connections.
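On the switch side that advice boils down to something like the fragment below; the interface names and descriptions are placeholders, and note that NX-OS spells the edge-port setting differently from IOS.

```
! Cisco 6509 (IOS) -- port facing a Shared Uplink Set link
interface TenGigabitEthernet1/1
 description VC-FRONTEND-SUS
 switchport mode trunk
 spanning-tree portfast trunk

! Cisco Nexus 5000 (NX-OS) -- equivalent edge-port setting
interface Ethernet1/1
  description VC-STORAGE-SUS
  switchport mode trunk
  spanning-tree port type edge trunk
```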

If you are looking for the HP FlexFabric mappings, click here. These are pretty straightforward and should be known to all HP c-Class administrators.