Hey Storage Spaces Direct Fans,
We are back again with the second post of the day. Today I had a great customer of mine double the capacity of their Storage Spaces Direct cluster on HPE DL380 G9s by adding 24 x 1.6 TB SSD drives. Now, when you expand a Storage Pool with Storage Spaces Direct, the pool is supposed to expand itself automatically. What we found today was that the disks were being added to the pool, but the pool wasn’t expanding automatically.
Here is a shot of the configuration before adding the 6 additional drives. Sorry folks, I wasn’t onsite, so I couldn’t take the after picture just yet.
Let me walk you through my basic troubleshooting steps.
#1 – I checked the Storage Subsystem for any issues by running Debug-StorageSubSystem, then checked the disks with Get-PhysicalDisk.
Below is the output. As you can see, Debug-StorageSubSystem returned no error messages and all 48 SSD drives are present.
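For anyone who wants to follow along at home, here is a rough sketch of the step #1 checks in PowerShell. The "*Cluster*" friendly-name filter is an assumption; adjust it to match the clustered storage subsystem name in your environment.

```powershell
# Sketch of the step #1 checks (the "*Cluster*" friendly-name filter is an assumption;
# change it to match your clustered storage subsystem name).

# Look for any reported issues on the clustered storage subsystem
Get-StorageSubSystem -FriendlyName "*Cluster*" | Debug-StorageSubSystem

# Confirm all of the SSDs are visible to the cluster
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, SerialNumber, MediaType, Size, CanPool, HealthStatus
```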
#2 – I ran a Storage Health Report. Interestingly enough, the CapacityPhysicalUnpooled value was sitting at 34.93. Hmmm.
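If it helps, this is roughly how I pull that report; again, the "*Cluster*" friendly-name filter is an assumption for your environment.

```powershell
# Sketch: pull the cluster-wide health report that includes CapacityPhysicalUnpooled
# (the "*Cluster*" friendly-name filter is an assumption)
Get-StorageSubSystem -FriendlyName "*Cluster*" | Get-StorageHealthReport
```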
#3 – I ran Get-PhysicalDisk and looked at the CannotPoolReason property. This time I found something very interesting that I hadn’t seen before.
The new drives were stuck in a “Verification In Progress” state, which was preventing them from being added to the pool’s free space and leaving them unallocated.
They had been stuck like this for over 48 hours by the time I had a chance to take a look at this.
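For those following along, this is roughly the query I used for step #3; the exact columns you display are up to you.

```powershell
# Sketch: show why the new drives are refusing to pool.
# The stuck drives reported CannotPoolReason = "Verification In Progress".
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, SerialNumber, CanPool, CannotPoolReason, OperationalStatus
```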
#4 – I knew there is a bug where some Storage Spaces Direct storage jobs get stuck, and that changing the CSV ownership can kick them off again.
So, I did this and these were the results within a few minutes.
I could immediately see storage jobs kicking off and the drives moving from CannotPoolReason = Verification In Progress to CanPool = True.
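For those who prefer the PowerShell route over the GUI, a minimal sketch of what “changing the CSV ownership” looks like is below. With no -Node specified, the cluster picks another owner node for you.

```powershell
# Sketch: bounce CSV ownership to another node to nudge the stuck storage jobs.
# With no -Node parameter the cluster chooses the destination node itself;
# add -Node <NodeName> if you want to pick it explicitly.
Get-ClusterSharedVolume | Move-ClusterSharedVolume

# Then watch the storage jobs start moving again
Get-StorageJob | Format-Table Name, JobState, PercentComplete, BytesProcessed, BytesTotal
```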
#5 – Finally, I did a quick check in the Failover Cluster Manager GUI and could see that the capacity of the pool was now growing; it eventually finished adding all 24 disks.
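If you would rather do this last check from PowerShell instead of the Failover Cluster Manager GUI, something like the following shows the pool picking up the new capacity. The "S2D*" friendly-name filter is an assumption; use your own pool’s friendly name.

```powershell
# Sketch: confirm the pool is absorbing the new capacity
# (the "S2D*" friendly-name filter is an assumption about the pool name)
Get-StoragePool -FriendlyName "S2D*" |
    Format-Table FriendlyName, Size, AllocatedSize, HealthStatus, OperationalStatus
```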
For everyone’s reference we are on the August 2017 cumulative updates for Windows Server 2016.
Thanks and Happy Learning,
Dave
Hi, I have the same problem….
> #4 – I knew that there is a bug with some Storage Spaces Direct Jobs getting stuck and having to change the CSV Ownership to kick them off.
Please describe the detailed process, or show a small example of changing CSV ownership to fix this issue.
Or do you mean moving the owner role to another node?
Hey Dmitry,
You would open Failover Cluster Manager, select Storage, select Disks, right-click on the CSV, and change the ownership to another node. Then check Get-StorageJob and see if it is moving.
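If you prefer the command line, a rough PowerShell equivalent of those GUI steps is below; “Cluster Virtual Disk (Volume1)” and “Node02” are placeholder names, so substitute your own CSV and destination node.

```powershell
# Rough PowerShell equivalent of the GUI steps above.
# "Cluster Virtual Disk (Volume1)" and "Node02" are placeholders - use your own names.
Move-ClusterSharedVolume -Name "Cluster Virtual Disk (Volume1)" -Node "Node02"

# Then check whether the storage jobs are moving again
Get-StorageJob
```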
There are known bugs out there with these storage jobs failing.
We also have a new Slack channel, slack.storagespacesdirect.com, if you would like to chat directly.
Thanks,
Dave
Is it necessary to upgrade to 1706?
No, it’s not required to go to 1706.
Hello Dave,
Thanks for the article, it seems you have successfully solved this problem. But maybe you could advise me on an issue I have with my servers. I just created a [sort of test] S2D cluster of two nodes, with one SSD and two HDDs per node, so essentially I didn’t create any pools, volumes, or drives other than the default pool that had been created by `Enable-ClusterS2D` itself. But one HDD on one of the nodes does not seem to get added to the pool, with that exact message – “Verification in Progress”. It’s been a couple of days like that. Changing the storage pool owner doesn’t do a thing; maybe you have something to add here?
Thanks