Overclock.net

#1 · Discussion Starter
I'm wondering how good Storage Spaces and ReFS are in comparison to 1) Linux and software RAID + LVM and 2) Unix and ZFS.
 

#2 · DuckieHo
ReFS write speeds are terrible....

Storage Spaces is also not as expandable as it may appear.... you basically get locked into the performance of the initial configuration.
 

#3 · coachmark2
Quote:
Originally Posted by DuckieHo

ReFS write speeds are terrible....

Storage Spaces is also not as expandable as it may appear.... you basically get locked into the performance of the initial configuration.
Ducky,

ZFS also has abysmal write speeds when used with FreeNAS/NAS4Free. Is there a "next-gen" file system that can detect corruption and work reliably without throwing performance into the toilet?
 

#4 · DuckieHo
Quote:
Originally Posted by coachmark2

Ducky,

ZFS also has abysmal write speeds when used with FreeNAS/NAS4Free. Is there a "next-gen" file system that can detect corruption and work reliably without throwing performance into the toilet?
ReFS can be drastically sped up if you get an SSD RAID1 for journaling.

The corruption we're talking about is bit-rot, right? Another approach would be to use a "landing zone" as a buffer: write directly to the landing zone, and the system moves the files to the main file system over time. I believe Greyhole, Microsoft Drive Extender, and Drive Bender use a similar approach. This assumes the data won't sit on a drive long enough for bit rot to set in and that the buffer is large enough to cope.
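Roughly like this in PowerShell, I think (sketch only.... untested, and the pool/disk names are placeholders):

Code:
# Mark the pool's SSDs as dedicated journal disks....
# parity writes get staged there before landing on the HDDs.
Get-PhysicalDisk | Where-Object MediaType -eq "SSD" |
    Set-PhysicalDisk -Usage Journal

# Then create the parity space on the remaining disks.
# "MediaPool"/"Archive" are made-up names.
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Archive" `
    -ResiliencySettingName Parity -UseMaximumSize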
 

#5 · coachmark2
I see. So you would set up a storage pool like so?

2 x 128GB SSD
6 x 4TB HDD

And put that all in a Storage Spaces parity setup?
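In PowerShell I'd imagine something like this (totally untested.... names made up):

Code:
# Grab every disk that's eligible for pooling (the 2 SSDs + 6 HDDs)
$disks = Get-PhysicalDisk -CanPool $true

# Pool them all together ("MediaPool" is a placeholder name)
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks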
 

#6 · DuckieHo

Depends.... do you need fast writes?

I also forgot that R2 has tiered storage as well.

Important: look at the "columns" concept. It's permanent, based on the initial setup: http://social.technet.microsoft.com/wiki/contents/articles/15200.storage-spaces-designing-for-performance.aspx

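To set them explicitly it's something like this (again just a sketch.... names are placeholders, and the column count has to match what your disks can support):

Code:
# Columns and interleave are locked in at creation time....
# a parity space can only be expanded in multiples of its column count.
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Data" `
    -ResiliencySettingName Parity `
    -NumberOfColumns 6 -Interleave 256KB `
    -UseMaximumSize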
#7 · coachmark2
Quote:
Originally Posted by DuckieHo

Depends.... do you need fast writes?

I also forgot that R2 has tiered storage as well.

Important: look at the "columns" concept. It's permanent, based on the initial setup: http://social.technet.microsoft.com/wiki/contents/articles/15200.storage-spaces-designing-for-performance.aspx
Let's assume that random I/O is the more important metric and that sequential throughput only needs to be decent (100MB/s+). Will Storage Spaces automatically detect the drive types and optimize storage appropriately? Or is that something the storage administrator has to configure?

Good article, by the way.
 

#8 · DuckieHo
Quote:
Originally Posted by coachmark2

Let's assume that random I/O is the more important metric and that sequential throughput only needs to be decent (100MB/s+). Will Storage Spaces automatically detect the drive types and optimize storage appropriately? Or is that something the storage administrator has to configure?

Good article, by the way.
If you need random I/O and sequential over 100MB/s.... then parity alone is definitely not what you want.

I haven't played with tiered storage yet.... it's on my list of things to do. I'm pretty sure it has to be explicitly configured.... you don't want stuff like that to be fully automatic...
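From what I've read it's along these lines (R2 only.... haven't run it myself, so treat it as a sketch with made-up names):

Code:
# Tiers are defined explicitly, one per media type
# ("MediaPool", "SSDTier", "HDDTier" are placeholder names)
$ssd = New-StorageTier -StoragePoolFriendlyName "MediaPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "MediaPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Tiered spaces only support simple and mirror, not parity
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Fast" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,10TB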
 

#9 · coachmark2
I found this in the article you linked:

Best Practices for Storage Spaces

- Set your interleave to be at least as large as the most common I/O size from the applications that will be reading from and writing to the storage space. If you are unsure, use the default interleave size of 256 KB.
- Unless your workload has very specific needs and is unlikely to grow significantly, utilize the default column count chosen by Spaces at creation time.
- When mixing disk types in the same storage pool, utilize manual disk selection (-PhysicalDisksToUse parameter) when creating a virtual disk, or separate different disk types into separate storage pools. Alternatively, utilize Storage Tiering (Windows Server 2012 R2)
- Do not use simple spaces unless resiliency is provided by the application or is unnecessary.
- Do not use parity spaces for workloads that are predominantly random in nature. Parity spaces are optimized for highly sequential / append-style workloads, such as archiving.
- When using dedicated journal disks for parity spaces, deploy SSDs.

Edit: Apparently there IS some automation to it...
Storage Tiering, the practice of moving frequently accessed data to very fast storage, while maintaining infrequently accessed data on moderate or slow storage is supported with Storage Spaces starting in Windows Server 2012 R2. Frequency of access (heat) on files is measured by the file system and fed into a tiering engine that instructs Spaces to move often used files to flash-based storage devices, while retaining cold data on large-capacity, slow storage. The major benefit is a significant increase in cost efficiency, as only the critical workload is accelerated by the flash-based storage, yet the majority of data stored can remain on slow, large-capacity devices (for example, 7,200 RPM 4TB HDDs).
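It also looks like you can pin individual files to a tier and kick off the optimization pass by hand (pulling this from the docs, untested.... the path, drive letter, and tier name are hypothetical):

Code:
# Pin a hot file to the SSD tier, then run the tier optimization
# (path and tier name are made up for illustration)
Set-FileStorageTier -FilePath "D:\VMs\hot.vhdx" `
    -DesiredStorageTierFriendlyName "SSDTier"
Optimize-Volume -DriveLetter D -TierOptimize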
 

#10 · Registered member
Storage Spaces, with or without storage tiering, is not going to be a good solution where you need a lot of I/O and speedy performance (like VM storage). It's useful for simple things like a small web or database server.

It's fine as a NAS or file-server solution as long as you are not moving large files in and out of it constantly.

This is just my opinion from my experience. Try it out and see if it works for you. If not, look into some of the other alternatives out there, like hardware RAID, ZFS, XFS, etc.
 

#11 · DuckieHo
Quote:
Originally Posted by coachmark2

Storage Tiering, the practice of moving frequently accessed data to very fast storage, while maintaining infrequently accessed data on moderate or slow storage is supported with Storage Spaces starting in Windows Server 2012 R2. Frequency of access (heat) on files is measured by the file system and fed into a tiering engine that instructs Spaces to move often used files to flash-based storage devices, while retaining cold data on large-capacity, slow storage. The major benefit is a significant increase in cost efficiency, as only the critical workload is accelerated by the flash-based storage, yet the majority of data stored can remain on slow, large-capacity devices (for example, 7,200 RPM 4TB HDDs).
Of course there's an algorithm that manages the tiering once it's in use. You don't want full automation in the configuration, though. You don't want to plug a drive in and have the OS decide what to do with it.

If you have I/O needs, the parity calcs will hinder performance. It always comes down to the question: what is good enough?
 