By Craig Wilson
In my last blog, we looked at how hard drives are no longer the go-to answer for large-scale storage, and how flash storage still vastly exceeds hard drives on price, at a time when everyone is under ever-increasing pressure to maximise the return on investment of any large-scale solution.
There is, of course, a third player in this game: tape. Like hard drives, tape capacity has continued to grow. IBM is due to launch its LTO-9 Ultrium technology in the first half of 2021, with 18TB native capacity (45TB compressed) per cartridge. Unlike hard drives, performance has continued to increase as well: for a typical upgrade path, LTO-9 offers a 33 per cent increase in uncompressed performance over LTO-7. Tape storage also has some unique advantages. The ability to air-gap data to protect against modern ransomware attacks, and the ability to offer huge capacities with minimal power usage, are often overlooked when comparisons are made with traditional hard drive storage.
How do you maximise ROI? A flash tier will always provide the most performance, but few projects need that performance across all of their data. Data is important, and most organisations need to keep it for far longer than it is actively used. Have you considered how much of your data is actually touched on a daily or weekly basis? This is where tiering comes in.
If you can identify the small percentage of your data that needs to be accessed on a regular basis, you can start to build a solution that takes the benefits of each storage technology and truly maximises ROI. A solution with, for example, 20 per cent flash storage would present regularly used hot data to your compute environment at maximum performance, while warm data could be stored on a cheaper hard drive-based array. Data that has not been accessed in the last six months could then be offloaded to a tape tier, using the same physical infrastructure as the backup process and reducing overall power consumption.
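The hot/warm/cold split described above can be sketched in a few lines of Python. This is a minimal illustration only, not any particular filesystem's policy engine; the one-week and six-month thresholds are hypothetical values chosen to match the example, and real deployments would drive this from the filesystem's own policy language.

```python
import os
import time

# Hypothetical age thresholds matching the example split above:
# recently touched data stays on flash, data untouched for six
# months moves to tape, everything in between sits on hard drives.
HOT_MAX_AGE_DAYS = 7
COLD_MIN_AGE_DAYS = 180

def classify(path, now=None):
    """Return the tier ('flash', 'disk' or 'tape') a file belongs on,
    based purely on its last-access timestamp."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days <= HOT_MAX_AGE_DAYS:
        return "flash"
    if age_days >= COLD_MIN_AGE_DAYS:
        return "tape"
    return "disk"
```

A real policy engine evaluates rules like this across millions of files in parallel, but the decision per file is essentially this simple comparison.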
The most popular parallel filesystems, such as IBM Spectrum Scale, BeeGFS and Lustre, support tiering either directly or via integration with the RobinHood policy engine. Additional software such as IBM's Spectrum Protect and Spectrum Archive, Atempo's Miria or Starfish can augment these features.
Caching is also an option. IBM's Spectrum Scale offers particular flexibility in this area, with features such as local read-only cache (LROC) and highly available write cache (HAWC). LROC uses a local SSD on the node as an extension of the buffer pool and works best for small random reads where latency is the primary concern, while HAWC uses a local SSD to reduce the response time for small write operations, greatly reducing the write latency experienced by the client.
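To make the read-cache idea concrete, here is a conceptual Python sketch of an LROC-style read-through cache with least-recently-used eviction. It is an illustration of the general technique only, not Spectrum Scale code; the `ReadCache` class and `fetch_remote` callback are invented for this example.

```python
from collections import OrderedDict

class ReadCache:
    """Conceptual LROC-style read cache: data served once from the
    remote storage server is kept on a fast local device, so later
    reads of the same block avoid the slow network round trip."""

    def __init__(self, capacity_blocks, fetch_remote):
        self.capacity = capacity_blocks
        self.fetch_remote = fetch_remote  # slow path: storage server
        self.blocks = OrderedDict()       # stands in for the local SSD

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # refresh LRU position
            return self.blocks[block_id]
        data = self.fetch_remote(block_id)     # cache miss: go remote
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data
```

The payoff is exactly the small-random-read case described above: repeated reads of the same blocks hit the local copy instead of crossing the network each time.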
Deploying a single storage solution will always be a strong proposition from a management-overhead perspective. However, I don't see hard drive storage being beaten by flash on a pure capacity-to-cost basis any time soon. Deploying tiering, caching or both will improve storage performance and maximise ROI.
If you have any questions, or would like support with your storage needs, please drop us a line on 0114 257 2200.