In this third part of our blog series on deploying Storage Spaces Direct, we will talk about what is probably the most important configuration: the network.
Storage Spaces Direct relies on having a solid, high performing, and reliable network configuration in order to operate optimally.
Microsoft has extended the capabilities of a feature called Remote Direct Memory Access or RDMA with Windows Server 2016.
This feature enhancement is one of the key drivers that has enabled Microsoft to step into the hyperconverged arena alongside market leaders such as Nutanix and SimpliVity.
So, to ensure our network is configured properly, let's dive into what it takes to enable RDMA for Storage Spaces Direct and how to configure it inside the new team type called a SET (Switch Embedded Team).
Configuring RDMA for S2D
In Windows Server 2012 R2, it is not possible to bind RDMA services to a Hyper-V virtual switch or virtual adapter, which increases the number of physical network adapters that must be installed in the Hyper-V host.
In Windows Server 2016, you can use fewer network adapters while using RDMA with or without SET.
The image below illustrates the software architecture changes between Windows Server 2012 R2 and Windows Server 2016.
RDMA architecture in Windows Server 2016 – image courtesy of TechNet
The following sections provide instructions on how to use Windows PowerShell commands to enable Datacenter Bridging (DCB), create a Hyper-V Virtual Switch with an RDMA virtual NIC (vNIC), and create a Hyper-V Virtual Switch with SET and RDMA vNICs.
Enable Datacenter Bridging (DCB)
Before using any RDMA over Converged Ethernet (RoCE) version of RDMA, you must enable DCB. DCB is not required for Internet Wide Area RDMA Protocol (iWARP) networks, but testing has determined that all Ethernet-based RDMA technologies work better with DCB. Because of this, you should consider using DCB even for iWARP RDMA deployments.
The following Windows PowerShell script sample provides an example of how to enable and configure DCB for SMB Direct:
#
# Turn on DCB
Install-WindowsFeature Data-Center-Bridging
#
# Set a policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
#
# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3
#
# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
#
# Apply policy to the target adapters
Enable-NetAdapterQos -Name "SLOT 2"
#
# Give SMB Direct 30% of the bandwidth minimum
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS
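Before moving on, it can be worth reading the QoS configuration back to confirm that the policy, flow control, and traffic class landed as expected. The following is only a minimal verification sketch, assuming the same "SLOT 2" adapter name used in the script above:

#
# Optional: read back the DCB/QoS configuration
#
Get-NetQosPolicy
Get-NetQosFlowControl
Get-NetQosTrafficClass
Get-NetAdapterQos -Name "SLOT 2"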
If you have a kernel debugger installed in the system, you must configure the debugger to allow QoS to be set by running the following command.
#
# Override the Debugger - by default the debugger blocks NetQos
#
Set-ItemProperty HKLM:"\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" AllowFlowControlUnderDebugger -type DWORD -Value 1 -Force
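If you want to double-check that the override took effect, you can read the value back with Get-ItemProperty. This is a quick sketch using the same registry path and value name set above:

#
# Confirm the debugger override is in place (should return a value of 1)
#
Get-ItemProperty -Path HKLM:"\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" -Name AllowFlowControlUnderDebugger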
Create a Hyper-V Virtual Switch with SET and RDMA vNICs
To make use of RDMA capabilities on Hyper-V host virtual network adapters (vNICs) attached to a Hyper-V virtual switch that supports RDMA teaming, you can use the sample Windows PowerShell script below.
#
# Create a vmSwitch with SET
#
New-VMSwitch -Name SETswitch -NetAdapterName "SLOT 2","SLOT 3" -EnableEmbeddedTeaming $true
#
# Add host vNICs and make them RDMA capable
#
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_1 -managementOS
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_2 -managementOS
Enable-NetAdapterRDMA "vEthernet (SMB_1)","vEthernet (SMB_2)"
#
# Verify RDMA capabilities; ensure that the capabilities are non-zero
#
Get-NetAdapterRdma | fl *
#
# Many switches won't pass traffic class information on untagged VLAN traffic,
# so make sure host adapters for RDMA are on VLANs. (This example assigns the two SMB_*
# host virtual adapters to VLAN 42.)
#
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" -VlanId 42 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" -VlanId 42 -Access -ManagementOS
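Once the SET switch and host vNICs are created, a quick sanity check can help confirm the team members and that SMB sees the new interfaces as RDMA capable. This is only a rough verification sketch, assuming the SETswitch name from the example above; note that Get-SmbMultichannelConnection will only show entries while SMB traffic is actually flowing:

#
# Optional sanity check: confirm SET team members and RDMA-capable SMB interfaces
#
Get-VMSwitchTeam -Name SETswitch | fl *
Get-SmbClientNetworkInterface | Where-Object RdmaCapable -eq $true
Get-SmbMultichannelConnection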
That wasn’t that tough, now was it? In the next part of this blog series we will perform the final post-configuration tasks required to build our Storage Spaces Direct solution, including IPMI, patching, and other minor configuration items.
Thanks,
Cristal
Cristal Kawula
Cristal Kawula is the co-founder of MVPDays Community Roadshow and #MVPHour live Twitter Chat. She was also a member of the Gridstore Technical Advisory board and is the President of TriCon Elite Consulting. Cristal is also only the 2nd woman in the world to receive the prestigious Veeam Vanguard award.
BLOG: http://www.checkyourlogs.net
Twitter: @supercristal1
Hi Cristal,
Why do you use Set-VMNetworkAdapterIsolation to assign a VLAN to the vNIC and not Set-VMNetworkAdapterVlan?
Thanks,
Dennis
Hey Dennis,
It was a typo. We have fixed it; good catch.
Thanks,
Dave and Cristal