
Ugh! So close to pulling the trigger! 10Tb+ investment - Page 2

post #11 of 19
Quote:
Originally Posted by RandomK View Post

RAID 5 only ends in tears. Period. If you must use RAID 5/6, then for the love of god do not include a hot spare. At the office we like to refer to RAID 5/6+1 as "Automated Array Destruction". You're going to be better off just copying the data off your degraded array to a new array than attempting to rebuild it if your drives are over 1.5TB (SATA) or 2TB (SAS). I would assume it would take you days or weeks to rip/download all that data again, if it could even be replaced. You shouldn't take that lightly. If capacity is a big concern and you want to use 6-8 drives, then RAID 50 can be a decent compromise, but it still isn't ideal and doesn't really sound realistic in a home server.

If it were me I would just wait a bit and buy four 4TB drives to use in RAID 10. This will give you nearly the same amount of space as four 3TB drives in RAID 5, a very fast array (which will make life easier when you need to migrate that data somewhere in the future), and peace of mind (RAID 10 is extremely reliable). It also has the benefit of freeing up a lot of CPU on your server for other tasks (HTPC maybe), and it dramatically shortens write times, so you're less exposed to the risk of data loss from power failure. Keep in mind, though, that with mdadm growing a RAID 10 array is a giant pain, so size your array accordingly.
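
For reference, a four-drive mdadm RAID 10 along those lines would be created roughly like this (a sketch only; the device names are hypothetical, not anything from this thread):

Code:
  # Build a 4-drive RAID 10 (mdadm's default near-2 layout)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  cat /proc/mdstat        # watch the initial sync
  mkfs.ext4 /dev/md0      # then put a filesystem on it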

Wow. You must not work on actual storage systems. There is so much absolutely and completely factually wrong here it's not even worth my time to address it all. I mean seriously, it's plainly obvious you've never worked on anything more complicated than an ICH and have no experience working in large scale systems or with proper controllers. Either that, or you're just inexcusably bad at it and don't learn from mistakes. But I'm going to go with clueless, since 'no hotspare' is straight out of the absolutely dead wrong book. But I mean, what could I know about storage? I mean, I've only deployed a couple thousand petabytes and something like 15,000+ drives over the years.

But by all means: recommend the most vulnerable array type possible for a system whose realized rate is already pushing below 1-9^14, with a worse-than-1:200 chance of two drives dropping in parallel and obliterating any hope of recovery from a transient signal fault. Which is a "when, not if" problem.

Oh, and FYI: that 1-9^14 is the optimistic number. Figuring in the budget, etcetera, I'd say the realistic number is between 1-8^13 and 1-8^14 with transient faults factored in. If you run any 0 variant with an NRBER that low, you deserve the inevitable data loss you'll get. Either you maintain a proper parity safety net or you lose data, period. And mdadm just drops that number even further, to the point where you may as well throw rare-earth magnets at the box.

My advice would be to start saving and plan the arrays in advance. If you don't want to lose data, that costs money, period.
post #12 of 19
Thread Starter 
So RAID 6 is OK (well, I guess, more "appropriate") if I grow 2 disks at a time?



Sent from my rooted HTC Supersonic using Tapatalk 2 Pro
post #13 of 19
Thread Starter 
Quote:
Originally Posted by Sean Webster View Post

Quote:
Originally Posted by rootwyrm View Post

Quote:
Originally Posted by Sean Webster View Post

You can do it, though the drive's specified URE rating needs to be taken into consideration. I'd rather not risk a URE with my own data. And RAID 6 with 4 drives is much better, especially for expanding later on.
What? lol Slapping people with fish is fun. tongue.gif

RAID 6 would be better than 10 for expansion down the road.

Sigh. No, no it is not the NRBER. Even at 1-10^14 the single-drive NRBER is irrelevant, and I think there are maybe a hundred of us on the entire planet who understand NRBER as applied to RAID 3 / RAID 5 (they use the same base function with a varying P function).
1-10^14 = 1 bit in 100,000,000,000,000 = 1 bit in 11.368 terabytes. The problem is that the SATA PHY plus consumer cables and consumer parts bump it to ~1-9^14, excluding physical damage and disconnects. The realized NRBER on a 15k drive in a consumer PC isn't even 1-10^15, thanks to everything else that breaks, like the PSU. Getting a drive to 1-10^15 is not just 'plug it in.' And even at that, the fact is that you're going to experience a mechanical defect before you hit a statistical NRBE 99.99% of the time or better. I have a list of 30+ Seacrate "Enterprise" 7200s that mechanically failed within ±100 POH of 23,000 and all less than 10% of the way to a statistical NRBE.
Between performance limitations and everything else, the chances of you getting hit by a car in your own home are vastly greater than those of hitting an NRBE prior to mechanical failure. You would have to run an HGST 7K4000 4TB drive flat out for >244,140,624.99999755 seconds (more than 68,000 hours) to hit a statistical NRBE. That's at 1-10^14. Don't even ask what it is for a 600GB drive rated at 1-10^16. The simple fact is that you're going to have some other failure resulting in data loss literally decades before you hit enough statistical NRBE to trigger anything other than a single parity recalc.
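
The spec-sheet arithmetic behind that 11.368-terabyte figure, as a quick check (the only assumption is one URE per 10^14 bits read; the result comes out in binary TiB):

Code:
  # 10^14 bits between errors, expressed as data read per expected URE
  awk 'BEGIN { bytes = 1e14 / 8; printf "%.2f TiB per expected URE\n", bytes / 2^40 }'
  # prints ~11.37 TiB: read the spec's 10^14 bits and you expect one bad bit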

And don't frigging do 4 disk RAID6 EVER. Nothing good comes of it.
Two disk parity is a good thing. tongue.gif
Quote:
You get atrocious performance and you can't properly rebalance so you're doubly screwed.
Atrocious performance? That's why I'm getting near 800MB/s / 600MB/s on my RAID 6 array? RAID controllers with a cache exist for that reason, to assist with write speeds.
Quote:
Never expand RAID unless you can do a block-level rebalance, which basically means: "NEVER EXPAND RAID!" You take everything off the disks, create new array, migrate data back. I have to do that song and dance on most gear well into the $1M+ price range. Low end ain't doing it better.
I've expanded and rebuilt RAID 6 arrays many times for consumers and small businesses and never had an issue. Even if one were to occur, that is what backups are for.
Quote:
And no, encouraging me to slap people with fish is bad. Because I'll say "I'm using a shark." And then get a forklift with an IBM ESS on it. tongue.gif
lol
Quote:
Originally Posted by anywhere View Post

Bahh!

I was going to buy 1 drive every 2 weeks and just grow the array
I would not advise rebuilding so often; just save up. Expanding can take 2-4 days depending on the number of drives in the array... If you were to buy more drives twice a year that would be fine, but expansions every two weeks would be pointless.
Quote:
I guess there's a chance my music might not play....
During an expansion the music would still play fine... as long as you use a good RAID controller.


Am I misunderstanding the crossfire? Regardless of my storage use, is mdadm/RAID/multi-disk HA handling frowned upon? Or do others just have more money, or more important/valuable data, so they take the higher-class route?

2 weeks is pointless. I foresee months.


Sent from my rooted HTC Supersonic using Tapatalk 2 Pro
post #14 of 19
Thread Starter 
Well, mdadm won't grow RAID 10.

Doubtful any hardware will.

So when I have 4 drives in RAID 10, how is it going to help me to add a 5th? A 6th?

Expanding seems a non-starter, so what are my options?
post #15 of 19
Thread Starter 
Well, after some VirtualBox testing....

RAID 10 is statically sized... mdadm complains if you try to make the impossible happen, at least in its current stage of code.

Buying 4 disks, filling them, then buying 4 more later, only to not be able to grow the array? Hmmm....

But read or write speed isn't my goal. 30Mb/sec is plenty for clients. I stream 1080p MKV rips, and this old turd is plenty. Network stats showed occasional 8MB/sec bursts when I watched a 1080p rip of Pacific Rim on the 60", a 5Gb file.

I suppose I could make a 2nd same-sized RAID 10 array and just mount them both in one folder that's shared? I'm still pondering; there's more than one way to skin a cat when it comes to data management.
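
If it comes to that, two separate arrays under one shared top-level folder is simple enough (a sketch; the mount points and paths are made up for illustration):

Code:
  # Each array gets its own filesystem and mount point under the shared dir
  mkdir -p /srv/media/pool0 /srv/media/pool1
  mount /dev/md0 /srv/media/pool0
  mount /dev/md1 /srv/media/pool1
  # Share /srv/media itself (Samba/NFS) so clients see both pools as one tree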

Anything truly important will go on DVDs, what little of it there is.

For a SOHO network, I figured a parity array would be an easy answer for redundancy. Rebuild times I'm not worried about. A decent quad core should be able to crunch numbers fast enough on a PCIe x16 card to reach 100Mb/sec? When I get time I'll search/Google some benchmarks pertaining to mdadm, a quad core, a PCIe x16 card, and 8+ drives. I guess ZFS is a comparable candidate as well.
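
One quick data point before hunting benchmarks: the kernel measures its own RAID 6 / XOR parity routines when the md modules load, so the box itself will report what the CPU can sustain (a sketch; exact output varies by kernel and CPU):

Code:
  # Parity throughput the kernel benchmarked for this CPU, from the kernel log
  dmesg | grep -iE 'raid6:|xor:'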



I'm not sure what my next move is. Regardless, I'm buying two 3TB drives via Newegg and mirroring them ASAP; then when I buy 2 more drives, I'll make a decision from there. Fail one of the doubles, make a degraded RAID 5 with the 3rd and 4th, copy over, fail/remove the 1st and 2nd, add them, parity check/resync, then grow ext4 via resize2fs. I'm sure there'll be a day or so in between the last 2 steps.
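
Spelled out in mdadm terms, that plan looks roughly like this (a sketch of the idea; device names and mount points are hypothetical):

Code:
  # Existing mirror: /dev/md0 (sda1+sdb1), mounted at /mnt/old. New drives: sdc1, sdd1.
  # 1. Degraded 3-disk RAID 5 from the two new drives plus a 'missing' slot
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 missing
  mkfs.ext4 /dev/md1 && mount /dev/md1 /mnt/new
  # 2. Copy everything over
  rsync -aHAX /mnt/old/ /mnt/new/
  # 3. Retire the mirror and hand its disks to the RAID 5
  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sda1
  mdadm --zero-superblock /dev/sdb1
  mdadm --add /dev/md1 /dev/sda1            # fills the missing slot; resync
  mdadm --add /dev/md1 /dev/sdb1            # sits as a spare for the grow
  mdadm --grow /dev/md1 --raid-devices=4    # reshape (older mdadm may want --backup-file)
  # 4. Once the reshape finishes, grow ext4
  resize2fs /dev/md1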

I need to get something happening; the kernel buffer on FreeBSD is destroying the console screen on my file server as we speak... hehe... The 3 most common words are WARNING, TIMEOUT, and FAILURE, plus some LBA numbers... oh goody! It sounds awesome too: clicks, snaps, beeewwwwwwweeeEEEEEE, drive spins back up... Can't wait to tear it apart and inspect.
post #16 of 19
I'm no storage expert by any means, but I think starting with a 4-disk RAID 6 and growing occasionally would be your best option. Just keep in mind that rebuilds and growing are the two most dangerous times in the life of an array and should thus be mitigated where possible. You'll likely be bottlenecked by your CPU in this sort of setup, so be aware of that and maybe consider buying a cheap replacement chip if it's socket 775 (a C2D would be a huge improvement).

This isn't an enterprise environment, so it's not like it's ultra-sensitive data. Everything in this context is replaceable, just not exactly convenient to replace, so I think that RAID 6 provides the optimal level of fault protection and value for your application.
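
On the "rebuilds and growing are the most dangerous times" point, a scrub pass right before either one is cheap insurance, since it surfaces latent bad sectors while redundancy still exists (a sketch, assuming the array is md0 and the stock md sysfs interface):

Code:
  echo check > /sys/block/md0/md/sync_action   # read-verify the whole array
  cat /proc/mdstat                             # shows check progress
  cat /sys/block/md0/md/mismatch_cnt           # non-zero afterwards = investigate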
post #17 of 19
Thread Starter 
Next year I plan on moving to PCI Express and a quad core at 3.x GHz+.

No reason that shouldn't be acceptable to the home user.

Software RAID isn't a concern. Flexibility is.

Looks like I'll buy these 2, later buy 2 more, make a degraded RAID 6, copy, disassemble the previous array, then add disks/recover/grow/resize.

Sent from my rooted HTC Supersonic using Tapatalk 2 Pro
post #18 of 19
Thread Starter 
Bought a PCI SATA2 card as a $35 band-aid until I find money for a quad core/PSU/RAM/mobo.

4x4TB in an array.

Whew. Fscks were getting pointless!

Sent from my rooted HTC Supersonic using Tapatalk 2 Pro
post #19 of 19
Thread Starter 
Well, I found a 64-bit AMD 3800 2.8GHz w/512 in the dumpster. Worked. Had to hillbilly the heatsink; the plastic tab for the clip broke.

Bought more 3TB 'Cudas, and a $35 card with a Marvell chipset.

mdadm mirrors data when creating a RAID 5 array with 2 devices, and will reshape to real parity when a 3rd is added. So I had redundancy from step one.
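
For anyone following along, that sequence is roughly (a sketch; device names are hypothetical):

Code:
  # A 2-device RAID 5 is laid out as a mirror until the third disk arrives
  mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
  # Later: add the third disk and reshape to striped parity
  mdadm --add /dev/md0 /dev/sdc1
  mdadm --grow /dev/md0 --raid-devices=3
  watch cat /proc/mdstat      # reshape progress
  resize2fs /dev/md0          # after the reshape completes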

So with 2 drives, 2.7TB usable and mirrored; Debian runs from a 4GB pen stick (silence), swap disabled (but I had issues later on with resize2fs).

Swapped everything over to this other motherboard. Between onboard SATA, the controller, and the PCI card, Debian saw everything. No hiccups. The array assembled and mounted with no driver issues, no configuration issues. I love Linux. It didn't see any difference except speed.

Added the new drives to the array, grew at around 100MB/sec, then resize2fs came to a halt. Complained about memory. Too lazy to shut down the system and add a drive for swap, so I used a swap file on the array as I grew, haha. Worked.
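
The swap-file stopgap, for the record (a sketch; the size and path are made up):

Code:
  # Temporary swap so resize2fs has headroom during the grow
  fallocate -l 4G /mnt/array/.swapfile
  chmod 600 /mnt/array/.swapfile
  mkswap /mnt/array/.swapfile
  swapon /mnt/array/.swapfile
  # ...run the resize... then:
  swapoff /mnt/array/.swapfile && rm /mnt/array/.swapfile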

Now 8.1TB usable (2.7TB per disk, one disk's worth going to parity) out of 12TB raw, in RAID 5 on 4x3TB disks.

ext4. Reads around 300MB/sec, writes around 130MB/sec. Brings that poor 2.8GHz to its knees.

Maybe I'll get lucky and find a quad core in the dumpster. I love ignorant college kids. I work at a 2000-unit apartment complex, dead center between 2 universities. I find all kinds of goodies from college turnover.


Anyway, the next 2 I buy, hopefully no sooner than 6 months from now, I'll grow to RAID 6. I'm collecting about 2TB a month. Should slow down, though.

That's been my experience. Make note.
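
For when those next two drives show up, the in-place RAID 5 to RAID 6 conversion in mdadm looks roughly like this (a sketch; device names and the backup-file path are assumptions):

Code:
  # Add both new disks as spares, then change level and width in one reshape
  mdadm --add /dev/md0 /dev/sde1 /dev/sdf1
  mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
        --backup-file=/root/md0-reshape.bak   # keep the backup file OFF the array
  resize2fs /dev/md0                          # once the reshape completes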


Sent from my rooted HTC Supersonic using Tapatalk 2 Pro