I’ve long been an advocate for using Distributed vSwitches (vDS) in vSphere. The single point of management, coupled with added functionality such as load-based teaming and Network I/O Control (NIOC), is a huge advantage over Standard vSwitches.
However, there’s always been a discussion about how best to manage the uplinks for each vDS.
When I worked in the Professional Services team at my current employer, a number of environments I designed were built on hardware utilising 1GbE NICs. In this scenario life is simple – all uplinks can be added to the vDS, and Teaming and Failover can be used to ensure that the right portgroups use the right uplinks.
After recently rebuilding my homelab, I’ve had the opportunity to get to grips with 10 gigabit networking (our company has been using it for a while, but I’ve since moved on from PS). Each of my homelab servers comes with two 10GbE NICs and two 1GbE NICs, so the design initially included one vDS with four uplinks:

I don’t think “Management – VLAN5” is working right…
In this configuration, one 1GbE NIC would be the management connection, the other would go to a DMZ switch, and the two 10GbE uplinks would carry the various VMkernel and virtual machine traffic. I would use Teaming and Failover to control which traffic went where.
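As an aside, that teaming step can be scripted as well. The snippet below is only a sketch, not something I actually ran in my lab: it assumes pyVmomi, the uplink names are placeholders, and ‘pg’ is assumed to be a distributed portgroup object already retrieved from vCenter. It pins a portgroup to specific uplinks by overriding its Teaming and Failover uplink order.

from pyVmomi import vim

def pin_portgroup_to_uplinks(pg, active, standby=None):
    # Build a reconfigure spec that overrides the Teaming and Failover
    # uplink order on an existing distributed portgroup.
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion

    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
    order.inherited = False
    order.activeUplinkPort = active
    order.standbyUplinkPort = standby or []

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.inherited = False
    teaming.uplinkPortOrder = order

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming
    spec.defaultPortConfig = port_config

    # Returns a vCenter task; wait on it with your usual tooling.
    return pg.ReconfigureDVPortgroup_Task(spec)

# Example with placeholder uplink names: keep a portgroup on the 10GbE pair.
# pin_portgroup_to_uplinks(pg, active=['Uplink 1'], standby=['Uplink 2'])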
After discussing the design at length with a number of fellow vExperts, it came to light that creating a single vDS and adding uplinks of various speeds was a bad idea, as it could give “unpredictable results”.
I wasn’t convinced… but I soon would be.
A few days later I began having issues with the VSAN multicast performance test. I wasn’t getting the results I wanted, so I investigated setting a bandwidth reservation for VSAN traffic.
Please note: setting a reservation for VSAN traffic is generally regarded as a bad idea. For more information, check out the VMware Virtual SAN Network Design Guide at http://www.vmware.com/files/pdf/products/vsan/VMware-Virtual-SAN-Network-Design-Guide.pdf.
However, when I came to set it I immediately spotted an issue – I was unable to set a reservation above 0.75 Mbit/s:
This is despite having 63.3 Gbit/s of bandwidth available to the vDS:
It would appear that the vDS works to the lowest common denominator when calculating the bandwidth available for reservations, and as the DMZ uplink was plugged into a 10/100 switch, the maximum was dragged right down.
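If you want to see exactly what the switch thinks it has to work with, a short pyVmomi script can list the negotiated link speed of every physical NIC backing each distributed switch on each host. Again this is a sketch under assumptions of my own (placeholder vCenter address and credentials, certificate verification disabled for a lab), not a polished tool.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab only: skip certificate verification. Hostname and credentials are placeholders.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in hosts.view:
    # Map each physical NIC key to its device name and negotiated speed.
    pnics = {p.key: p for p in host.config.network.pnic}
    for proxy in host.config.network.proxySwitch:   # one proxy switch per vDS
        print('%s / %s' % (host.name, proxy.dvsName))
        for key in proxy.pnic or []:
            p = pnics[key]
            speed = p.linkSpeed.speedMb if p.linkSpeed else 0   # 0 = link down
            print('  %-8s %6d Mbit/s' % (p.device, speed))

hosts.Destroy()
Disconnect(si)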
So there we have it… real-life “unpredictable results” in action.
The solution was to create two new Distributed vSwitches for my OOB management and DMZ connections, and leave the existing one in place with only two uplinks:
Once I had removed the two 1GbE uplinks, the vDS looked much better:
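The same change can be made programmatically if you prefer. The sketch below is a rough scripted equivalent of the clicking I did in the Web Client: the vmnic names are placeholders for my hardware, and ‘dvs’ and ‘host’ are assumed to be objects already retrieved from vCenter. It simply replaces the set of physical NICs a host contributes to the switch.

from pyVmomi import vim

def set_dvs_uplinks(dvs, host, nics):
    # Describe the physical NICs this host should contribute to the vDS.
    backing = vim.dvs.HostMember.PnicBacking()
    backing.pnicSpec = [vim.dvs.HostMember.PnicSpec(pnicDevice=n) for n in nics]

    member = vim.dvs.HostMember.ConfigSpec()
    member.operation = 'edit'          # the host is already a member of this vDS
    member.host = host
    member.backing = backing

    spec = vim.DistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.host = [member]
    return dvs.ReconfigureDvs_Task(spec)

# Keep only the two 10GbE uplinks on the main vDS; the 1GbE NICs are then free
# to join the new OOB management and DMZ switches (placeholder vmnic names).
# set_dvs_uplinks(dvs, host, ['vmnic0', 'vmnic1'])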
With that I was able to set my reservation, and eventually found my way to resolving my VSAN multicast performance test issue.
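For completeness, the reservation itself can also be set through the API. This is a hedged sketch rather than anything I actually ran: it assumes pyVmomi, assumes Network I/O Control version 3 is enabled on the switch, assumes ‘dvs’ has already been retrieved from vCenter, and the reservation value (in Mbit/s) is purely an example.

from pyVmomi import vim

def set_vsan_reservation(dvs, reservation_mbit):
    # Show the current per-traffic-class allocations, then adjust only vSAN.
    entries = dvs.config.infrastructureTrafficResourceConfig
    for entry in entries:
        print('%-12s reservation=%s Mbit/s  limit=%s' %
              (entry.key, entry.allocationInfo.reservation, entry.allocationInfo.limit))
        if entry.key == 'vsan':
            entry.allocationInfo.reservation = reservation_mbit

    # Push the full list back in a reconfigure spec; vCenter will reject values
    # above the maximum it calculates from the uplinks.
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.infrastructureTrafficResourceConfig = entries
    return dvs.ReconfigureDvs_Task(spec)

# e.g. set_vsan_reservation(dvs, 100)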
More on that another day.