A few days ago I wrote an article about how to speed up vMotion. In that article, I mentioned that one of the ways to improve vMotion performance is to configure Multi-NIC vMotion. As a quick reminder on Multi-NIC vMotion:
- Multi-NIC vMotion was introduced in vSphere 5.0
- This feature load balances vMotion network traffic over multiple network adapters: even a single vMotion session is spread across all available vmknics.
- Multi-NIC vMotion is based on an Active/Standby NIC configuration: each vMotion vmkernel port uses one physical NIC as active and the others as standby.
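On a vSS, the Active/Standby arrangement can be set per port group from the CLI. A minimal sketch, assuming two uplinks (vmnic0, vmnic1) and two vMotion port groups named vMotion1 and vMotion2 that mirror each other's failover order:

```
# Each vMotion port group reverses the active/standby order of the same two uplinks
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion1 --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion2 --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

This way every physical NIC is active for exactly one vMotion vmkernel port, which is what lets vMotion drive all NICs at once.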
The configuration of Multi-NIC vMotion with VMware vSphere Standard Switch (vSS) or Distributed Switch (vDS) is simple. But what about Cisco Nexus 1000v?
The Cisco Nexus 1000v does not provide a standby NIC concept the way vSS or vDS does, so Multi-NIC vMotion has to be configured with the following steps:
- Create a vmkernel port on each host for each physical NIC used for Multi-NIC vMotion
- Create a vethernet port profile on the 1000v for each vmkernel interface created in step 1, incrementing the pinning id for every vethernet port profile (0, 1, 2, ...)
- Connect each vmkernel port to its matching port profile
- Set channel-group mode to mac-pinning relative on the ethernet (uplink) port profile
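Steps 1 and 3 can also be done from the ESXi shell. A sketch, assuming the 1000v DVS is named N1KV and a free dvport id is known (the names, port id, and addressing below are illustrative, not taken from any particular environment):

```
# Create a vmkernel interface attached directly to a port on the 1000v DVS
esxcli network ip interface add --interface-name=vmk1 \
  --dvs-name=N1KV --dvport-id=100

# Give it a static address on the vMotion VLAN
esxcli network ip interface ipv4 set -i vmk1 -t static \
  -I 10.0.99.11 -N 255.255.255.0
```

On ESXi 5.1 and later the interface can also be tagged for vMotion with `esxcli network ip interface tag add -i vmk1 -t VMotion`; on 5.0 this is done in the vSphere Client.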
port-profile type vethernet Multi-NIC_vMotion1
  vmware port-group vMotion1
  switchport mode access
  switchport access vlan 990
  pinning id 0
  no shutdown
  state enabled

port-profile type ethernet Multi-NIC_vMotion_UPLINK
  vmware port-group vMotion_UPLINK
  switchport mode access
  switchport access vlan 990
  channel-group auto mode on mac-pinning relative
  no shutdown
  state enabled
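The vethernet profile above carries pinning id 0. The profile for the second vmkernel interface differs only in its name, port group, and pinning id. A sketch, assuming the second profile is named Multi-NIC_vMotion2:

```
port-profile type vethernet Multi-NIC_vMotion2
  vmware port-group vMotion2
  switchport mode access
  switchport access vlan 990
  pinning id 1
  no shutdown
  state enabled
```

With each vmkernel port pinned to a different subgroup id, the 1000v sends each vMotion interface out a different physical NIC, which gives the same effect as the Active/Standby teaming on vSS or vDS.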
As I mentioned in another article about vPC Host Mode, the Nexus 1000v supports creating a special port channel even when the upstream physical switches do not support PortChannel. This feature is based on MAC pinning and is also used to configure Multi-NIC vMotion with the Cisco Nexus 1000v.