
PERC 5/i RAID Card: Tips and Benchmarks - Page 200

post #1991 of 7150
Quote:
Originally Posted by Manyak View Post
Yup. Let me break it down:

Fujitsu SAS Drives 36GB:
36GB platter
7.5ms Average Seek Time
Average Seek Time per Gigabyte: (7.5 * 2) / 36 = 0.4167ms

Caviar Black 1TB:
333GB platters
9.4ms Average Seek Time
Average Seek Time per Gigabyte: (9.4 * 2) / 333 = 0.0565ms
I am a little confused by what you are calculating here. Access time, as measured by HDTune and the like, is already the average time it takes to go from reading any bit on the disk to reading any other bit on the disk. You can see how HDTune measures it as the little random dots appear on the benchmark graph - the program requests a series of random data points spread across the disk, times how long each takes and divides by the number of requests to give an average. If two of the requested points are close together on the disk, the access time will be lower than average; if the requests are for data at opposite extremes of the platter, the access time will be higher.

Also, you should not confuse the term 'seek time' with 'access time'. Seek time is the time it takes the head to move between 2 tracks on the disk. Access time is the time it takes from the drive receiving the request until it delivers the data. It is the sum of the seek time, any settling time (the time it takes the read head to stabilise on the correct track, usually very short), controller overhead and latency due to the spinning disk. This is the value that benchmarking programs can report.

I might be wrong here, but I can't see what use or meaning the term 'Seek Time per Gigabyte' has, because seek time is, by definition, independent of disk capacity and is an absolute measure of the speed of the HDD controller and read/write head positioning system.

As for which disk will be faster - that is a really tricky one, and I think it depends a lot on what you want to use it for. For movie editing and other large sequential transfers, the Caviar drive will destroy the little SAS drive, as it can transfer at around twice the speed. I imagine 3 Caviar Blacks will give 8 of these old Fujitsus a good run for their money. However, for random reads the situation might be different. The RAIDed SAS drives will, in my opinion, be quicker in real-world use if coupled with a decent controller. As an OS drive they will be very fast, even with the hit in access time that RAID adds.

To be honest though, I think the cost is very high if you only want to use the RAID card in RAID 0 to run the SAS drives for your OS. Two 60GB OCZ Vertex SSDs on motherboard RAID 0 would be cheaper and faster for OS use. I have no experience with the drives, but from what I have read the Vertexes don't suffer from the stuttering that plagues other cheaper drives. As they have built-in cache, you don't really need a cache on the RAID controller.

Sorry this got a little long-winded...
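
To put rough numbers on that seek/latency/overhead breakdown, here is a quick back-of-envelope sketch. The spindle speeds and the 0.5ms controller overhead are my own assumptions for illustration, not spec-sheet figures.

```python
# Rough back-of-envelope: access time ~ average seek + rotational latency + overhead.
# The spindle speeds and the 0.5 ms controller overhead are assumptions, not spec-sheet figures.

def avg_rotational_latency_ms(rpm):
    # On average the platter spins half a revolution before the requested
    # sector passes under the head: (60,000 ms per minute / rpm) / 2.
    return 0.5 * 60_000 / rpm

def est_access_time_ms(avg_seek_ms, rpm, overhead_ms=0.5):
    return avg_seek_ms + avg_rotational_latency_ms(rpm) + overhead_ms

# Fujitsu 36GB SAS (assuming 10,000 rpm) vs Caviar Black 1TB (7,200 rpm)
print(est_access_time_ms(7.5, 10_000))  # ~11.0 ms
print(est_access_time_ms(9.4, 7_200))   # ~14.1 ms
```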
Edited by the_beast - 4/3/09 at 1:38am
post #1992 of 7150
Quote:
Originally Posted by ounderfla69 View Post
I only said RAID 0 and RAID 1; RAID 5 will probably not work. If your Perc 5/i fails you can get another Perc 5/i, plug the drives in and import the foreign configuration in the BIOS without having to create a new RAID array.
I agree

I have never even tried a RAID-0 or RAID-1 transfer myself though... and hope I never have to.

I would only be hopeful about replacing the hardware with identical hardware set up in an identical fashion. When you create a RAID array, the controller writes some metadata (I'm not sure what it is actually called) to the drives - some sort of information at the start of each drive that identifies it as part of the array, and so on. If you start changing hardware you can't expect that information to be understood by a different controller. I am quite sure there is some sort of deal like that going on, and that is why it is not likely that you'd be able to transplant the drives easily.

I am even worried about swapping cards, but if you do everything right, it should theoretically work.
post #1993 of 7150
Quote:
Originally Posted by the_beast View Post
I might be wrong here, but I can't see what use or meaning the term 'Seek Time per Gigabyte' has, because seek time is, by definition, independent of disk capacity and is an absolute measure of the speed of the hdd controller and read/write head positioning system.

I think what he means is that higher density means the data you want is physically closer together.

You are correct that seek time is an absolute measurement of how quickly the head can move between track and sector locations.

However, I believe he has taken into consideration that the distance from track 0 to the track where 36GB ends (for comparison's sake) is factors smaller on the 1TB than on the Fujitsu.

Since the track density is higher, seeking from the innermost track to the outermost on the Fujitsu (36GB) COULD take longer than the 1TB Black seeking across its first 36GB worth of tracks, as the head has to PHYSICALLY move less distance on the 1TB.

I haven't done any calculations to PROVE that it is faster or slower. If Manyak's calculations are indeed correct then the 1TB may well be faster.
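
For what it's worth, here is a very crude sketch of that back-of-envelope calculation. The square-root seek model and the 1.5ms settle time are assumptions I made up purely for illustration, so treat the output as a rough indication only.

```python
# Crude illustration of the density argument: the first 36GB of the 1TB Black
# spans only a small fraction of its tracks, so seeks confined to that region
# cover far less physical distance than full-stroke seeks on the 36GB Fujitsu.
# The square-root seek model and the 1.5 ms settle time are made-up assumptions.
import math

def est_seek_ms(full_avg_seek_ms, stroke_fraction, settle_ms=1.5):
    # Assume seek time grows roughly with the square root of the distance
    # travelled, on top of a fixed head-settle component.
    return settle_ms + (full_avg_seek_ms - settle_ms) * math.sqrt(stroke_fraction)

# Fujitsu 36GB: a 36GB region is the whole drive -> the full average seek applies.
print(est_seek_ms(7.5, 36 / 36))    # 7.5 ms

# Caviar Black 1TB: a 36GB region is only ~3.6% of the drive.
print(est_seek_ms(9.4, 36 / 1000))  # ~3.0 ms
```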
Edited by zha50 - 4/3/09 at 2:28am
post #1994 of 7150
Quote:
Originally Posted by zha50 View Post
I think what he means is that higher density means the data you want is physically closer together.

You are correct that seek time is an absolute measurement of how quickly the head can move between track and sector locations.

However, I believe he has taken into consideration that the distance from track 0 to the track where 36GB ends (for comparison's sake) is factors smaller on the 1TB than on the Fujitsu.

Since the track density is higher, seeking from the innermost track to the outermost on the Fujitsu (36GB) COULD take longer than the 1TB Black seeking across its first 36GB worth of tracks, as the head has to PHYSICALLY move less distance on the 1TB.

I haven't done any calculations to PROVE that it is faster or slower. If Manyak's calculations are indeed correct then the 1TB may well be faster.
This is a good point & I kind of see where he's coming from. Still not sure that the figures he presented are meaningful or accurate, but they do illustrate the problem well. It would be interesting to see how much a Black would need to be short-stroked to get the access times down to the 7.5ms offered by the SAS drives. Progressively short-stroking the Black & benching would give a good indication of how the different densities, rotational latencies and controller speeds really affect access times.
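
Out of curiosity, here is a rough way to estimate that short-stroke point on paper. It reuses the same made-up square-root seek model and assumed spindle speeds from the sketches above, so real benchmarking (as suggested) is the only way to get a trustworthy number.

```python
# Very rough estimate of how far a Caviar Black could be short-stroked before
# its access time matches the SAS drives'. The square-root seek model, settle
# time, overhead and spindle speeds are all assumptions for illustration.
import math

def est_access_ms(stroke_fraction, full_seek_ms=9.4, settle_ms=1.5,
                  rpm=7_200, overhead_ms=0.5):
    seek = settle_ms + (full_seek_ms - settle_ms) * math.sqrt(stroke_fraction)
    return seek + 0.5 * 60_000 / rpm + overhead_ms

# Assumed target: ~10.5 ms access time (7.5 ms seek + ~3 ms latency, if the
# Fujitsus are 10,000 rpm drives).
target_ms = 10.5
fraction = max(f for f in range(1, 101) if est_access_ms(f / 100) <= target_ms) / 100
print(f"short-stroke to roughly the first {fraction:.0%} of the disk")  # ~30%
```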
post #1995 of 7150
Quote:
Originally Posted by zha50
I think what he means is that higher density means the data you want is physically closer together.

You are correct that seek time is an absolute measurement of how quickly the head can move between track and sector locations.

However, I believe he has taken into consideration that the distance from track 0 to the track where 36GB ends (for comparison's sake) is factors smaller on the 1TB than on the Fujitsu.

Since the track density is higher, seeking from the innermost track to the outermost on the Fujitsu (36GB) COULD take longer than the 1TB Black seeking across its first 36GB worth of tracks, as the head has to PHYSICALLY move less distance on the 1TB.

I haven't done any calculations to PROVE that it is faster or slower. If Manyak's calculations are indeed correct then the 1TB may well be faster.
OK ... but I still have hope that 8x SAS in RAID 5 should be a lot faster than even 3x WD ... and 8x SAS will cost me about the same as 2x WD anyway ... all of this is just for fun - a fast workstation with a small load - and it should be nice to hear 8x 10K rpm drives starting up and spinning

and still nobody has said anything about my current performance on ICH10R with 4x 160GB Hitachi drives (HDS721616PLA380) ...

RAID 5 is on the front of the drives - 22 MB from each drive = 89 MB


RAID 0 - after the RAID 5 = 476 GB
post #1996 of 7150
Quote:
Originally Posted by RobiNet View Post
OK ... but I still have hope that 8x SAS in RAID 5 should be a lot faster than even 3x WD ... and 8x SAS will cost me about the same as 2x WD anyway ... all of this is just for fun - a fast workstation with a small load - and it should be nice to hear 8x 10K rpm drives starting up and spinning

and still nobody has said anything about my current performance on ICH10R with 4x 160GB Hitachi drives (HDS721616PLA380) ...

RAID 5 is on the front of the drives - 22 MB from each drive = 89 MB

RAID 0 - after the RAID 5 = 476 GB
I think if you want to get a faster workstation on a budget, I would look at repartitioning your current drives first. Put a small RAID 0 partition at the start of the disks to use for your OS, then use the rest of the space as RAID 5 for storage. This should give you a bit of a performance boost for free (although I can't remember exactly what it was you said you were going to use the drives for, so this might not be ideal...)
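
Just to put rough numbers on that split - the 30GB OS slice below is only an example size I picked, not a recommendation:

```python
# Example layout for 4x 160GB drives split into a small RAID 0 slice at the
# start of each disk (for the OS) plus a RAID 5 array on the remainder.
# The 30GB OS slice size is an arbitrary example, not a recommendation.
drives = 4
drive_gb = 160
os_slice_gb = 30

raid0_os_gb = drives * os_slice_gb                           # striping: capacity adds up
raid5_storage_gb = (drives - 1) * (drive_gb - os_slice_gb)   # one drive's worth lost to parity

print(f"RAID 0 OS volume: {raid0_os_gb} GB")      # 120 GB
print(f"RAID 5 storage:   {raid5_storage_gb} GB")  # 390 GB
```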

Although I have been defending the SAS drives (I actually have 4 Savvio 10K.2 drives that I intend to run in 2 RAID 1 sets on a PERC 6/i for OS & Hyper-V storage on my server), I don't think they are right for your application. If you have ICH10R and are looking for extra speed on a budget, I still think Vertex drives are the way to go. Even a single 120GB Vertex would feel much faster than your 8 SAS drives, although it would lose out a little in sequential transfers. And you could then use your current drives in RAID 10 (if supported by the ICH10R) and have fast but redundant storage.

The PERCs are great cheap RAID cards, but unless you need multi-drive RAID 5/6 for storage or very large RAID 0 arrays for speed, I think the new cache-equipped SSDs are a better bet.
post #1997 of 7150
Quote:
Originally Posted by the_beast
The PERCs are great cheap RAID cards, but unless you need multi-drive RAID 5/6 for storage or very large RAID 0 arrays for speed, I think the new cache-equipped SSDs are a better bet.
But buying a PERC 5/i with 512MB cache isn't such a bad idea, and $100 wouldn't be wasted, would it? I still have hope that my system will work faster on a hardware controller (with dedicated cache), even with the old Hitachi drives ...
Edited by RobiNet - 4/3/09 at 6:40am
post #1998 of 7150
Looking at the first post, which has ICH9R vs PERC benchmarks, I imagine your performance will go up slightly, but maybe not enough to notice. Your 160GB drives are pretty slow anyway, so the boost from the PERC might not give you much extra speed. I think you really need some new drives to make the increase worthwhile.

Honestly, if I wasn't going to be using the PERC for a 4 drive RAID 6 array in addition to my SAS drives, I wouldn't have bought it (or the drives). When I need to expand my storage beyond 4 drives I will remove the SAS drives and go back to SATA OS drives.

As for 512MB cache - if you need to replace the memory, get a 512MB stick. If not, stick with the stock 256MB as you will likely not notice the difference.

Hope this helps,
post #1999 of 7150
Hello, where can I find the Vista RAID drivers for a PERC5/i?
I can't install Vista on my RAID 0 array; no HDDs show up at the format screen.

Thanks.

EDIT: Now it won't install them; it says I need signed 64-bit drivers.
Edited by xToaDx - 4/3/09 at 8:15am
post #2000 of 7150
Quote:
Originally Posted by xToaDx View Post
Hello, where can I find the Vista RAID drivers for a PERC5/i?
I can't install Vista on my RAID 0 array; no HDDs show up at the format screen.

Thanks.

EDIT: Now it won't install them; it says I need signed 64-bit drivers.
Go to the Dell support page for the PowerEdge 1900 server. You'll find Server 2008 x64 drivers for the Perc 5/i that'll work in Vista x64. Note that these drivers will work even if you flashed your Perc with LSI firmware.