Originally Posted by tycoonbob
Yes, you could run UnRAID in a VM, but it would be very limited under Hyper-V (no Integration Components means no time sync, no dynamic RAM, and only 1 CPU core). With Hyper-V you could pass several disks through directly, and UnRAID would then have access to each disk as if it were local, instead of through a VHD. If you have a remote box that you want to use for UnRAID, that might be tricky. The easiest way would be a SAS expander card in the remote storage box and some sort of HBA with an SFF-8088 port in your Hyper-V box that you pass to the UnRAID VM; that way the drives appear local. I guess you could technically make an iSCSI target out of that storage box, create one LUN spanning the size of each drive, and attach those to the UnRAID box (assuming it supports iSCSI connections), but then you are putting iSCSI targets in RAID, which will not yield optimal performance. If you have a remote storage box, you might as well install UnRAID on it if that's what you want to run.
Ah, yeah, doesn't seem like that would be for me then.
I personally run Windows Storage Server 2012 with an LSI MegaRAID 9261-8i controller. That controller is in a Norco RPC-4224 with 24 hot-swap bays and an HP SAS Expander, meaning that one controller is handling up to 24 drives. However, I can use the SFF-8088 port of the HP SAS Expander to connect up to 96 (I think that's the number) more drives using custom JBOD chassis.

If you are spanning more than 12-15 drives, I would also recommend nested RAID, or just use smaller arrays. The URE risk in RAID 5 is still present in RAID 6, but at a much higher storage capacity limit: for RAID 5 that's around 10TB, and for RAID 6 it's around 100TB (consumer 7200RPM drives have a URE rate of 1 in 10^14 bits read, roughly one error per ~12TB). When you rebuild a RAID 5 that lost a drive, every bit of data on that array has to be read to restripe across the new drive. If it encounters a URE (and statistically speaking, a RAID 5 array over 10TB should), the controller treats it as another drive failure, destroying the ENTIRE array and EVERYTHING on it, not just a few files. RAID 5 is still great for smaller SAS drives (73, 146, 300, and 450GB SAS drives, for example), but isn't ideal for large storage arrays.

I have been planning my storage array for a while now, and when I get around to buying drives, I will have a 20-drive RAID 60 with 3TB drives yielding ~48TB of usable storage. I will also have 4 2TB drives in a RAID 10 yielding 4TB usable, but faster, storage. I would never recommend a single array over 20 drives or so. If you want to do 30 drives, I would recommend 3 RAID 6s at 10 drives each. That still yields 72TB of storage if using 3TB drives, and you can use DFS or something similar to make them look like one.
Or do a RAID 60 with three 10-drive RAID 6 spans; that will also yield 72TB of storage but present the array as one, giving increased performance. I also assume that you are planning to add drives as needed, but once you get past 10 drives, you are looking at something like 4-7 days to initialize a new drive into an array that uses parity calculations. Long time, and risky.
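The URE and capacity numbers above can be sanity-checked in a few lines of Python. This is my own back-of-envelope sketch, not something from the thread; the function names and the decimal TB-to-bits conversion are my assumptions:

```python
# Back-of-envelope RAID math, assuming consumer drives with a URE rate
# of 1 in 10^14 bits read (as quoted in the post above).
URE_RATE = 1e-14  # probability of an unrecoverable read error per bit read

def p_ure_during_rebuild(data_read_tb):
    """Chance of hitting at least one URE while reading `data_read_tb` TB."""
    bits_read = data_read_tb * 1e12 * 8  # decimal TB -> bits
    return 1 - (1 - URE_RATE) ** bits_read

def usable_tb(drives, drive_tb, parity_per_span, spans=1):
    """Usable capacity: each RAID span gives up `parity_per_span` drives to parity."""
    return (drives - parity_per_span * spans) * drive_tb

# Rebuilding a degraded RAID 5 holding ~10TB means reading ~10TB of surviving data:
print(f"{p_ure_during_rebuild(10):.0%}")              # ~55% chance of a URE

# 20x 3TB in RAID 60 (two 10-drive RAID 6 spans -> 16 data drives):
print(usable_tb(20, 3, parity_per_span=2, spans=2))   # 48 TB

# 30x 3TB as three separate 10-drive RAID 6 arrays (24 data drives):
print(usable_tb(30, 3, parity_per_span=2, spans=3))   # 72 TB
```

The ~55% figure is why a 10TB RAID 5 rebuild is often described as a coin flip; RAID 6 survives it because the second parity can recover the unreadable sector.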
Thanks for the info.
How do you like the Norco case? I was thinking of that or something similar a while ago. I heard some complaints about the backplanes.
Yeah, I was thinking that when I get that many drives, I'd do 8-12 drive RAID 6 arrays. I don't need RAID 60 for my uses.
For my next build/revision I would really like to have something with this power and price:
Dell C1100 w/72GB RAM
2 drives in RAID 1 for OS
RAID card like yours and an expander
A dumb storage box or two, like the Norco 4U 24-bay, with 2 RAID 6 arrays in each. Maybe a RAID 5 or a RAID 0 or two as well.
One issue though: if I wanted to add in a few single non-RAID drives, I'd need a SATA expander, but I'd be taking up the only PCIe slot with the RAID card.
I may need to figure out a way to quiet it a little, as I have nowhere to put it in my house but my room... maybe my closet. I saw I am able to convert the 2 onboard NIC ports to 10Gb/s with a mezz card too!
Possibly build a small server rack box.
Originally Posted by Mr.N00bLaR
Lots of good info here. Nice setup, I enjoy the cables. Don't your drives get toasty being so close together with little airflow between them? Do you have fans in the front of the case that I am maybe not seeing?
Thanks. It has a single fan on the bottom of the front of the case, one at the back, and one at the top; that is it. The drives at the top that get no airflow are usually at about 38C. I have no idea what the fan-cooled HDDs are at.

Edited by Sean Webster - 7/30/13 at 3:45am