Deploying Storage Spaces Direct (S2D)
The day has arrived for your company to start deploying its newly acquired hyper-converged infrastructure from Supermicro. Fortunately, the out-of-box experience has been greatly simplified, and a near instant-on infrastructure can be achieved. Your vendor has pre-imaged all of the nodes with a copy of Windows Server 2016 Datacenter edition. As you follow this blog series, we will walk you through the post-configuration steps necessary to deploy a hyper-converged Storage Spaces Direct configuration.
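At a high level, the post-configuration work this series walks through follows the standard Storage Spaces Direct flow in PowerShell. The sketch below is only a preview under assumptions: the node names, cluster name, IP address, and volume size are placeholders, and each step will be covered properly in later posts.

```powershell
# Run on each node: install the roles and features S2D requires
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart

# Validate the nodes, including the Storage Spaces Direct tests
Test-Cluster -Node "Node1","Node2","Node3","Node4" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster with no shared storage -- S2D pools each node's local disks
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2","Node3","Node4" `
    -NoStorage -StaticAddress 10.0.0.50

# Enable S2D; the SSDs are claimed for cache, the HDDs for capacity
Enable-ClusterStorageSpacesDirect

# Carve a resilient Cluster Shared Volume out of the pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```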
One box arrived today at your company’s headquarters in Calgary, AB, Canada. Currently all that is configured in the rack at the datacenter is the following:
- 2 x 15 Amp APC PDU (Power Distribution Units)
- 2 x Top of Rack Cisco Nexus 9372x Switches
- 2 x APC Rack Mount UPS
The next step was unboxing the Supermicro SuperServer 2028TP-HC0TR. This is a very common model also used by other hyper-converged vendors such as Nutanix. The nice thing about this unit is that it is readily available, and the post-configuration steps are quite straightforward.
Supermicro Superserver 2028TP-HC0TR front view
Supermicro Superserver 2028TP-HC0TR rear view
System view of Supermicro Superserver
Individual node view
For the purposes of this blog post, we used an online configurator from a Supermicro reseller in the United States. This can be used as a sample configuration. This one includes a few extra components, such as an APC UPS and the Windows Server 2016 licensing.
SuperServer 2028TP-HC0TR Configured Price: $48,161.00

Selection Summary

| Component | Selection |
| --- | --- |
| Barebone | Supermicro SuperServer 2028TP-HC0TR – 2U TwinPro2 – 24x SATA/SAS – Dual 10-Gigabit Ethernet – LSI 3008 12G SAS – 2000W Redundant |
| Processor | 8 x Eight-Core Intel Xeon E5-2620 v4 2.10GHz 20MB Cache (85W) |
| Memory | 32 x 32GB PC4-19200 2400MHz DDR4 ECC Registered DIMM |
| Boot Drive | 128GB SATA 6.0Gb/s Disk on Module (MLC) (Vertical) |
| Hard Drive | 16 x 900GB SAS 12.0Gb/s 10,000RPM 2.5″ Hitachi Ultrastar C10K1800 (512n); 8 x 960GB Samsung PM863a 2.5″ SATA 6.0Gb/s SSD |
| Optical Drive | No Optical Drive Support |
| Network Card | 4 x Mellanox ConnectX-3 EN MCX312A 10-Gigabit Ethernet Adapter (2x SFP+) |
| Power Protection | APC Smart-UPS 5000VA LCD 208V – 5U Rackmount |
| Operating System | 4 x Microsoft Windows Server 2016 Datacenter (16-core) |
| Warranty | 3 Year Advanced Parts Replacement Warranty and NBD Onsite Service |
Tech Specs

Barebone

| Spec | Value |
| --- | --- |
| Memory Technology | DDR4 ECC Registered |
| North Bridge | Intel C612 |
| Form Factor | 2U |
| Color | Black |
| Memory Slots | 16x 288-pin DIMM sockets (per node) |
| Graphics | ASPEED AST2400 BMC |
| Ethernet | Intel X540 dual-port 10GBase-T (per node) |
| Power | 2000W Redundant Platinum high-efficiency power supply |
| External Bays | 6x 2.5″ hot-swap SATA/SAS drive bays (per node) |
| Expansion Slots | 1x PCI Express 3.0 x16 low-profile slot (per node) |
| Front Panel | Power On/Off button |
| Back Panel | 2x USB 3.0 ports |
| Dimensions (WxHxD) | 17.25″ (438mm) x 3.47″ (88mm) x 28.5″ (724mm) |
| SAS 12Gbps Controller | LSI 3008 |
| SAS 12Gbps Ports | 24 (6 per node) |
| SATA 6Gbps AHCI Controller | Intel C612 |
| SATA 6Gbps AHCI Ports | 24 (6 per node) |
| SATA 6Gbps SCU Controller | sSATA |
| SATA 6Gbps SCU Ports | 16 (4 per node) |

Processor

| Spec | Value |
| --- | --- |
| Product Line | Xeon E5-2600 v4 |
| Socket | LGA2011-v3 |
| Clock Speed | 2.10 GHz |
| QuickPath Interconnect | 8.0 GT/s |
| Smart Cache | 20MB |
| Cores/Threads | 8C / 16T |
| Intel Virtualization Technology | Yes |
| Intel Hyper-Threading | Yes |
| Wattage | 85W |

Memory

| Spec | Value |
| --- | --- |
| Technology | DDR4 |
| Type | 288-pin DIMM |
| Capacity | 32 x 32GB |
| Speed | 2400 MHz |
| Error Checking | ECC |
| Signal Processing | Registered |

Boot Drive

| Spec | Value |
| --- | --- |
| Storage Capacity | 128GB |
| Interface | 6.0Gb/s Serial ATA |

Hard Drive

| Spec | HDD (Hitachi Ultrastar C10K1800) | SSD (Samsung PM863a) |
| --- | --- | --- |
| Storage Capacity | 16 x 900GB | 8 x 960GB |
| Interface | 12.0Gb/s SAS | 6.0Gb/s Serial ATA |
| Rotational Speed | 10,000RPM | – |
| Cache | 128MB | – |
| Format | 512n | – |
| Endurance | – | 1.3x DWPD |
| Read IOPS (4KB) | – | 97,000 |
| Write IOPS (4KB) | – | 24,000 |
| Read Speed | – | 520 MB/s |
| Write Speed | – | 480 MB/s |
| NAND | – | 3D V-NAND |

Network Card

| Spec | Value |
| --- | --- |
| Transmission Speed | 10-Gbps Ethernet |
| Port Interface | SFP+ |
| Host Interface | PCI Express 3.0 x8 |
| Cable Medium | Copper |
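As a quick sanity check on the bill of materials above, it is worth working out what this cluster will actually yield. Assuming the SSDs serve purely as the S2D cache tier (so they contribute no usable capacity) and a three-way mirror is chosen for resiliency, which is typical for a four-node S2D cluster, the back-of-the-envelope math looks like this:

```python
# Rough capacity estimate for the 4-node BOM above.
# Assumptions: SSDs are cache-only; three-way mirror resiliency.
nodes = 4
hdd_per_node, hdd_gb = 4, 900   # 16 x 900GB HDDs total -> capacity tier
ssd_per_node, ssd_gb = 2, 960   # 8 x 960GB SSDs total  -> cache tier

raw_capacity_gb = nodes * hdd_per_node * hdd_gb
mirror_copies = 3               # three-way mirror keeps 3 copies of all data
usable_gb = raw_capacity_gb / mirror_copies

print(f"Raw capacity:   {raw_capacity_gb / 1000:.1f} TB")  # 14.4 TB
print(f"Usable (3-way): {usable_gb / 1000:.1f} TB")        # 4.8 TB
```

That is before the pool reserve capacity Microsoft recommends leaving unallocated for in-place rebuilds, so plan on slightly less in practice.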
The SuperMicro server has been racked and wired as shown in the diagram below:
Wiring Configuration Cisco Nexus 9372x
Well folks, that is where we will leave Part 1 of this blog post. In the next post, we will show you how to configure your Cisco Nexus switches for this project.
Thanks,
Cristal
Cristal Kawula
Cristal Kawula is the co-founder of the MVPDays Community Roadshow and the #MVPHour live Twitter chat. She was also a member of the Gridstore Technical Advisory Board and is the President of TriCon Elite Consulting. Cristal is also only the second woman in the world to receive the prestigious Veeam Vanguard award.
BLOG: http://www.checkyourlogs.net
Twitter: @supercristal1
Did you have any issues with drive stability on this setup under heavy loads? I'm using HP servers but the drives are the same – HP-branded Samsung PM863a SSDs. Every time my cluster gets heavily loaded, one or more of the drives will go into a 'lost communication' state and only a reboot will fix it.
There is actually a bug right now with the May CU; check my latest post. Under heavy load this is the case. What CU are you on, Brian?