Most Windows Server admins I talk to are aware of Storage Spaces, but haven’t done much with it…it’s a relatively new capability (2012ish?) and buried in Server Manager. Storage Spaces lets you build large virtual disks like a hardware RAID controller, but without the extra hardware. The REALLY cool part of Storage Spaces is that it allows you to tier your storage by mixing SSDs with traditional spinning hard drives, so you can create a big drive that absorbs writes on the SSDs and later migrates stale data to the lower-cost hard drives. Some years ago, I did some silly testing with a mix of USB 3.0-attached disks configured in a Spaces tier (only try that at home!) – my test rig looked something like this (USB sticks replaced with Samsung SSDs for the test):
…and pushed the limits of USB 3.0:
That performance picture tells a great story for storage on my coffee table, but what about Azure? Azure lets me configure managed disks of different types…SSDs and hard drives…each with different costs and IO limits.
The question is… is there a benefit to creating a Storage Spaces tier in Azure to meet space requirements and improve IO performance while lowering overall cost…and the answer is a definite YES!
I’ll post details of my experiences with Storage Spaces and tiering here in later posts… but I wanted to start with some unscientific performance data I gathered with IOMeter on a single Azure VM. I measured just write performance with two separate Spaces configs – one with only hard drives, the other with those same drives AND an SSD:
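To give a feel for what a tiered config looks like, here’s a rough PowerShell sketch of building a two-tier Space from a VM’s data disks. This is NOT the exact setup I tested – pool/tier names, sizes, and resiliency here are illustrative placeholders, and on Azure VMs the disks often report their media type as Unspecified, so you may need to set it by hand before tiering works:

```powershell
# Pool all the available (poolable) data disks on the VM
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TieredPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Azure disks may show MediaType = Unspecified; tag them so tiering knows
# which are SSD vs HDD (match these to your actual Premium/Standard disks)
Get-StoragePool "TieredPool" | Get-PhysicalDisk | Where-Object Size -lt 200GB |
    Set-PhysicalDisk -MediaType SSD
Get-StoragePool "TieredPool" | Get-PhysicalDisk | Where-Object Size -ge 200GB |
    Set-PhysicalDisk -MediaType HDD

# Define the two tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "HDDTier" -MediaType HDD

# Carve a tiered virtual disk: hot/written data lands on the SSD tier,
# cold data settles down to the HDD tier over time
New-VirtualDisk -StoragePoolFriendlyName "TieredPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Simple
```

Treat the sizes and the `Where-Object` size-based tagging as stand-ins – in a real deployment you’d identify the Premium vs Standard disks explicitly.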
Your performance may vary – so test away… but for me, I think this is a great way to optimize cost and performance.
I’ll show you what I set up in my next post, and how I did it.