With the release of our Master Storage Spaces Direct book on Amazon, we have been getting a lot of requests to help people set up test labs.
Storage Spaces Direct does require some specific hardware to get going, and today we hit one such case.
We wanted to use some existing Dell R620 hardware to play with Storage Spaces Direct (S2D) in the lab.
We put in 2 x SSD drives for the cache (journal) and 4 x SATA drives for the storage pool.
So we followed the configuration we had laid out in Chapter 5 of the book and ran into the following issue:
MediaType = Unspecified
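If you want to spot this on your own hardware before enabling anything, a quick check is to list the disks up front (the column list below is just a convenient subset, not anything S2D requires):

Get-PhysicalDisk | Sort-Object FriendlyName | ft FriendlyName, BusType, MediaType, CanPool, Size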
Now the tricky part is that in order to change the media type you actually need to enable S2D first and create a storage pool.
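In other words, running Set-PhysicalDisk against a disk that is still sitting in the primordial pool will simply error out; a named storage pool has to exist first. Something like this fails (the exact error text varies by build):

# Fails while the disks are only in the primordial pool
Get-PhysicalDisk | where Size -eq 524522881024 | Set-PhysicalDisk -MediaType SSD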
When we tried to run Enable-ClusterStorageSpacesDirect we received the following errors:
We decided to skip the auto configuration by running:
Enable-ClusterStorageSpacesDirect -SkipEligibilityChecks -Autoconfig:$false -PoolFriendlyName S2DPool
The problem now was that Storage Spaces Direct couldn't find a suitable drive for the cache, because the SSD drives were still reporting an Unspecified media type.
To change this, we needed to create a storage pool and then set the MediaType values.
# Create the pool from every poolable disk
Get-StorageSubSystem *cluster* | New-StoragePool -FriendlyName S2DPool -WriteCacheSizeDefault 0 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisks (Get-PhysicalDisk | ? CanPool -EQ $true)

# Tag the drives by size: the ~1 TB SATA drives as HDD, the ~524 GB drives as SSD
Get-PhysicalDisk | where Size -eq 999653638144 | Set-PhysicalDisk -MediaType HDD
Get-PhysicalDisk | where Size -eq 524522881024 | Set-PhysicalDisk -MediaType SSD

# Dedicate the SSDs to the cache
Get-StoragePool S2DPool | Get-PhysicalDisk | ? MediaType -EQ SSD | Set-PhysicalDisk -Usage Journal
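At this point it is worth a quick sanity check that the pool now sees the right media types and that the SSDs picked up the Journal usage (just a readout, nothing more):

Get-StoragePool S2DPool | Get-PhysicalDisk | ft FriendlyName, MediaType, Usage, Size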
However, when we ran Get-ClusterStorageSpacesDirect we saw this:
We have tested this configuration in the past, and without the cache, performance on our S2D cluster is very bad.
The only way I have found to work around this issue is as follows:
- Create the Failover Cluster
- Enable Storage Spaces Direct with Cache Disabled and AutoConfig Off
- Create a new Storage Pool
- Fix the Media Types
- Manually Turn back on Caching
Here is the PowerShell script that we used to build this base cluster for the testing lab today:
# Build the cluster with no storage, then enable S2D with the cache disabled and autoconfig off
New-Cluster -Name S2DCluster -Node sh-va-r4,sh-va-r5 -NoStorage -StaticAddress 192.168.110.99
Enable-ClusterStorageSpacesDirect -SkipEligibilityChecks -Autoconfig:$false -Confirm:$false -PoolFriendlyName S2DPool -CacheState Disabled -Verbose

# Create the pool manually and fix the media types (same as above)
Get-StorageSubSystem *cluster* | New-StoragePool -FriendlyName S2DPool -WriteCacheSizeDefault 0 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisks (Get-PhysicalDisk | ? CanPool -EQ $true)
Get-PhysicalDisk | where Size -eq 999653638144 | Set-PhysicalDisk -MediaType HDD
Get-PhysicalDisk | where Size -eq 524522881024 | Set-PhysicalDisk -MediaType SSD
Get-StoragePool S2DPool | Get-PhysicalDisk | ? MediaType -EQ SSD | Set-PhysicalDisk -Usage Journal
Get-PhysicalDisk | Sort-Object FriendlyName | ft FriendlyName,HealthStatus,Size,BusType,MediaType,Usage
Get-ClusterStorageSpacesDirect

# Manually turn the cache back on
(Get-Cluster).S2DCacheDesiredState = 2
Get-PhysicalDisk | Sort-Object FriendlyName | ft FriendlyName,HealthStatus,Size,BusType,MediaType,Usage

# Create the volumes and confirm the cache state
New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName Volume1 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_ReFS -Size 500GB
New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName Volume2 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_ReFS -Size 500GB
Get-ClusterStorageSpacesDirect
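The line that actually turns the cache back on is the S2DCacheDesiredState assignment. We have not seen these values formally documented, but since the script enables S2D with -CacheState Disabled and assigning 2 brings the cache back, 2 appears to map to Enabled (with 0 being Disabled):

(Get-Cluster).S2DCacheDesiredState = 2   # bring the cache back once the media types are fixed
Get-ClusterStorageSpacesDirect           # CacheState should now report Enabled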
As you can see, after we did this both the caching error and the Unspecified media type errors were fixed.
Hope you enjoy
Super Cristal
Thanks for writing this. I'm going to give this a stab in the coming week and see if I can do the same. I'm running an array full of SSDs. I don't need huge storage; I just need it fast.
I've also had the issue with drives being plugged into a RAID controller and not supported by Storage Spaces. I found that Dell has new drivers for the controller that show the drives as ATA, but I'm using IBM hardware and there is no such update. Maybe this is a workaround; I'll report back.
Also, I'm doing this with 4 fully loaded x3650 M5 servers. I'm going to try this with 3 of them running Nano Server as well. Have you tried it with Nano?
Hey Mike,
Thank you for the comment. I am very curious to see your results with the fully loaded x3650 M5 servers. If you can capture some data, we might be able to incorporate it into the next version, Master Storage Spaces Direct Volume 2. Always looking for more content.
We are just getting started with our own testing of Nano. The big issue with Nano is drivers and the physical hardware. We have tested it in a virtual nested lab and it works great. I am going to send you a direct email so you have my contact info. Great to find like minds doing similar work.
Thanks,
Dave
This helped me quite a bit to get rid of the warning about "no disks for cache". Oddly, though, even though I no longer get the warning, I do not see any cache disks in the performance counters, so I don't believe this actually worked, although on first impression it cosmetically looks like it did.
Have you tested your solution? I did the same; the message is gone, but the SSD cache is not working and performance does not increase.
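For anyone trying to verify whether the cache is really bound rather than just warning-free, one possible check (the counter set name here is what we would expect on a Windows Server 2016 S2D node; confirm what Get-Counter -ListSet returns on your build):

# If the cache is active, this counter set should be present on each node
Get-Counter -ListSet "Cluster Storage Hybrid Disks" -ErrorAction SilentlyContinue

# The SSDs should also still show Usage = Journal inside the pool
Get-StoragePool S2DPool | Get-PhysicalDisk | ? Usage -eq Journal | ft FriendlyName, MediaType, Usage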
Typo alert: I think you meant for line 11 in the final script to say:
Get-ClusterStorageSpacesDirect
Hello Dave,
I know this is an old article but I have some R620’s I’d like to test S2D on and was wondering what drive controller your 620 had in it that let it see the drives as non-raid connected.
Thank you so much,
Scott
Hey Scott,
They are PERC H310 Minis … The drives were showing up as RAID for me in this setup.
I’m with David on this one. While there are no errors, if I use Get-CacheDiskStatus, it shows that none of the disks are actually ever assigned.
Hey Craig, thank you for your comment. Sorry for taking so long to respond. I agree that it is a good idea to check the cache disk status.
Dave,
I'm building a 2-node S2D cluster (for a lab) using two HP DL380s, 4 HDDs, and 2 SSDs. The onboard RAID does not support HBA mode, so I've used your workaround for that (it's a lab) and the script you created above to configure the pool and media types and create the virtual disks. For the most part it runs without error, until it gets to creating the virtual disk. Here I get the following error:
= = = = = = = = = = = = =
PS C:\> New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName Volume1 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_ReFS -Size 500GB
New-Volume : Object Not Found
Extended information:
The disk associated with the virtual disk created could not be not found.
Activity ID: {0bd9e8b4-31c5-4d1f-87fa-b7d366699a63}
At line:1 char:1
+ New-Volume -StoragePoolFriendlyName S2DPool -FriendlyName Volume1 -Ph …
= = = = = = = = = = = = =
The script does create the pool and sets the SSD usage to Journal, but it cannot create the virtual disk. I've even tried to create them manually; same error. Any insight you'd like to share?
Also, I'm using a SET team of 4 x 1 Gbit NICs. I've seen your video on setting up 10 Gbit adapters for storage traffic using QoS, but how do I do the same with 1 Gbit NICs? QoS? In VMM I've already set up the SET switch to use port profiles with 40% minimum bandwidth.
Darryl,
I had the same problem with the "The disk associated with the virtual disk created could not be not found." message. For me, as I'm using only RAID-bus drives and forcing S2D to use them, I had to set the cluster S2D bus type to 256 (pure RAID bus) using (Get-Cluster).S2DBusTypes=0x100. This does mean other bus types won't work.
I'm guessing that when the S2DBusTypes flag is set to (Get-Cluster).S2DBusTypes=4294967295, it looks at SATA etc. first, and if there are no available drives of that bus type it decides there's nothing there at all.
Of course, none of this may apply if you’re not using RAID drives!
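To put the two values from this thread side by side (0x100 is decimal 256; 4294967295 is 0xFFFFFFFF, i.e. every bus type bit set):

# Restrict S2D to RAID-attached drives only
(Get-Cluster).S2DBusTypes = 0x100

# Go back to considering all bus types
(Get-Cluster).S2DBusTypes = 4294967295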
Hello!
Which controller does the T620 use? What options can I use?
I have 2 machines with 32 disks each.
My controller does not support non-RAID.