Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › Low IOPs using LSI adapter!

Low IOPs using LSI adapter!

post #1 of 9
Thread Starter 
Hello,

I'm getting low random read/write IOPS of between 50-60k (SATA 2-level numbers); as soon as I plug an SSD back into my motherboard's SATA 3 ports, I get the advertised speeds. Sequential read/write speeds are high across the board, though; only the IOPS are affected by this adapter.

Specs:

LSI 9211-8i (IT mode): http://www.ebay.com/itm/New-IT-Mode-LSI-9211-8i-SAS-SATA-8-port-PCI-E-6Gb-s-Controller-Card-/252344098349?hash=item3ac0e15e2d:g:-w4AAOSwGYVXAyCn&rmvSB=true
i7-3930K
Gigabyte GA-X79-UD5
Samsung 840/850 EVO, 256GB and 500GB
Samsung 840 PRO 500GB

I currently have the latest IT firmware installed (2118it.bin / v20.00.00.00) and the newest driver from 2015 (2.0.76.0), as opposed to the original 2009 one. However, I was told that this is a SAS 2.0 device and I will only get SATA 2 speeds. This makes no sense to me, because my sequential reads/writes hover around 550 MB/s; it's only the IOPS that are low. Any idea what's going on? My max IOPS went from 60k to 70k after the firmware update, but I should still be getting 90k+.
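(Editor's note: for anyone wanting to sanity-check IOPS numbers outside of Samsung Magician, here is a rough pure-Python random-read probe. It is only illustrative: the reads go through the OS page cache, unlike real tools such as fio or CrystalDiskMark which use direct I/O, and the scratch-file path and sizes are arbitrary.)

```python
# Rough random-read probe (illustrative only: reads go through the OS cache,
# unlike fio/CrystalDiskMark which use direct I/O).
import os
import random
import tempfile
import time

def rough_random_read_iops(path, block=4096, duration=0.5):
    """Issue random block-sized reads against `path` for ~`duration` seconds."""
    blocks = max(1, os.path.getsize(path) // block)
    done = 0
    with open(path, "rb", buffering=0) as f:
        start = time.perf_counter()
        while time.perf_counter() - start < duration:
            f.seek(random.randrange(blocks) * block)
            f.read(block)
            done += 1
        elapsed = time.perf_counter() - start
    return done / elapsed

# Usage: probe a 1 MiB scratch file (swap in a file on the drive under test).
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(4096 * 256))
    scratch = tf.name
print(f"~{rough_random_read_iops(scratch, duration=0.2):,.0f} reads/s (cached)")
os.remove(scratch)
```

Because nothing here bypasses the cache, the printed number will be far above what the drive itself can do; it is only useful for comparing runs on the same machine.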

I'm running out of time to return this thing; my last idea is that it might need to be in a PCIe 3.0 slot.

Thanks!
Edited by Dracarys - 7/9/16 at 9:19pm
post #2 of 9
Thread Starter 
I'd like to add that I'm only running JBOD, no RAID; simple stuff, just to bypass the SATA 2 ports on my mobo. I also bypassed my ICY DOCK hub to make sure it wasn't bottlenecking me, so it's straight from the LSI to my SSD outside of the bay, and it still gives me the same results.

It's just weird that only the IOPS are below full potential, not the sequentials.
Edited by Dracarys - 7/10/16 at 9:20pm
post #3 of 9
Perhaps it's a cache-related issue. I'm speaking from experience with Adaptec cards (I don't own any LSI). I have to chop and change settings depending on the SSD vendor or drive controller to achieve optimal performance. Some drives run better with drive caching disabled, some with it enabled, and some 50/50. Even Adaptec themselves recommend disabling caching for all SSD arrays (I know you run JBOD), which doesn't always work properly and in fact sometimes slows drives to a crawl.
post #4 of 9
For small arrays, it's going to be almost impossible to beat the performance of the onboard ports connected to the Intel PCH.

If you were getting 90k IOPS with the drives attached to the motherboard, 70k with them attached to a lower-mid range SAS controller doesn't sound too bad really. Sequential performance is typically impacted far less by the increased latency of having another stage in the chain.
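(Editor's sketch of the latency point above, using assumed round-number latencies rather than anything measured on this card: at a fixed queue depth, random IOPS is just concurrency divided by per-I/O latency, so a small extra hop hits it directly while barely touching sequential MB/s.)

```python
# Little's Law: IOPS ~= outstanding I/Os / per-I/O latency.
# Both latencies below are assumed illustrative figures, not measurements.
def iops(outstanding_ios, per_io_latency_s):
    return outstanding_ios / per_io_latency_s

native  = iops(32, 350e-6)  # ~350 us per 4K read at QD32 on the Intel ports
via_hba = iops(32, 450e-6)  # suppose the HBA hop adds ~100 us per I/O
print(f"native ~{native/1e3:.0f}k IOPS, via HBA ~{via_hba/1e3:.0f}k IOPS")
# prints: native ~91k IOPS, via HBA ~71k IOPS
```

With those assumed numbers, ~100 µs of added per-I/O latency is enough on its own to explain a 90k-to-70k drop.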

The controller does officially support 6.0Gbps speeds: http://www.avagotech.com/products/server-storage/io-controllers/sas-2008#specifications

The card itself is PCI-E 2.0, but moving it to a PCI-E 3.0 slot might help because it will be directly connected to the CPU, rather than hanging off the PCH. I still would not expect the same performance as the native Intel ports without a much larger and more complex array.
post #5 of 9
Thread Starter 
I've also read this caching stuff can be a nightmare, but I might give it a try. I'm not even sure how to turn it on/off, and I'm almost certain it won't make a difference.

As for PCIe 3.0, would that also reduce the latency that I keep hearing about? Apparently it's better to have all your SSDs on your motherboard's ports, although for the type of music production I do, I think it's better to have them split up so everything isn't fighting through the master bus. Large samples are streamed from my SSDs to RAM, up to 50GB sometimes depending on the session:

https://www.youtube.com/watch?v=j6Ikb81-K7k
Quote:
Originally Posted by Ypsylon

Perhaps it's a cache-related issue. I'm speaking from experience with Adaptec cards (I don't own any LSI). I have to chop and change settings depending on the SSD vendor or drive controller to achieve optimal performance. Some drives run better with drive caching disabled, some with it enabled, and some 50/50. Even Adaptec themselves recommend disabling caching for all SSD arrays (I know you run JBOD), which doesn't always work properly and in fact sometimes slows drives to a crawl.
Quote:
Originally Posted by Blameless

For small arrays, it's going to be almost impossible to beat the performance of the onboard ports connected to the Intel PCH.

If you were getting 90k IOPS with the drives attached to the motherboard, 70k with them attached to a lower-mid range SAS controller doesn't sound too bad really. Sequential performance is typically impacted far less by the increased latency of having another stage in the chain.

The controller does officially support 6.0Gbps speeds: http://www.avagotech.com/products/server-storage/io-controllers/sas-2008#specifications

The card itself is PCI-E 2.0, but moving it to a PCI-E 3.0 slot might help because it will be directly connected to the CPU, rather than hanging off the PCH. I still would not expect the same performance as the native Intel ports without a much larger and more complex array.
post #6 of 9
Quote:
Originally Posted by Dracarys

As for the PCIe 3.0, would that also reduce latency that I keep hearing about?

PCI-E 3.0 in and of itself won't (the card itself is not PCI-E 3.0), and all the slots on your UD5 that can fit that LSI card are already directly connected to the CPU.
Quote:
Originally Posted by Dracarys

Apparently it's better to have all your SSDs on your motherboard's ports, although for the type of music production I do, I think it's better to have them split up so everything isn't fighting through the master bus. Large samples are streamed from my SSDs to RAM, up to 50GB sometimes depending on the session

Unless you are making very heavy use of your network and USB devices at the same time as you are reading from all three SSDs simultaneously, the DMI 2.0 link between the CPU and the PCH will have enough bandwidth to handle everything.

With the SSDs you describe, the X79 ports are your best bet. Put the 500GB drives in the 6Gbps ports and if you desperately need 6Gbps on the 250GB drive, plug it into your LSI card.
Edited by Blameless - 7/11/16 at 4:55pm
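(Editor's note: a quick back-of-the-envelope check of the DMI headroom point above, using nominal link rates and ignoring all protocol overhead beyond 8b/10b encoding.)

```python
# Nominal figures only: SATA 3 line rate with 8b/10b coding, vs DMI 2.0,
# which is roughly a PCIe 2.0 x4 link (~500 MB/s per lane).
SATA3_PAYLOAD = 6e9 * 8 / 10 / 8   # 6 Gb/s -> ~600e6 bytes/s per drive
DMI2_BANDWIDTH = 4 * 500e6         # ~2e9 bytes/s total

worst_case = 3 * SATA3_PAYLOAD     # three SSDs reading flat out
print(f"need {worst_case/1e9:.1f} GB/s, DMI 2.0 gives ~{DMI2_BANDWIDTH/1e9:.1f} GB/s")
# prints: need 1.8 GB/s, DMI 2.0 gives ~2.0 GB/s
```

So even three saturated SATA 3 SSDs fit under the nominal DMI 2.0 ceiling, which is why heavy simultaneous network/USB traffic is the only realistic way to crowd it.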
post #7 of 9
Thread Starter 
If by network you mean the internet, then no, not at all, and I'm not using USB at all either. However, I AM reading from 9 SSDs simultaneously, and my HDD here and there, but it varies as instruments are distributed across all the drives. For example, a brass and a percussion instrument might be playing from my 840 PRO 500GB while some woodwinds and strings play from the 250GB, or everything at the same time, but that's not often.

If I load a 2GB violin or choir from my SATA 2 port, I do see a big difference in load time; however, it's the IOPS that are more important, from what I've read on audio forums. So I should pretty much return this thing then, since sequential read/write isn't that important aside from throughput, and my IOPS are pretty much the same (70k with the LSI, 50k on SATA 2).

I thought splitting up the drives would be ideal for throughput in general, same as if I had PCIe SSDs alongside my SATA SSDs.
Quote:
Originally Posted by Blameless

PCI-E 3.0 in and of itself won't (the card itself is not PCI-E 3.0), and all the slots on your UD5 that can fit that LSI card are already directly connected to the CPU.
Unless you are making very heavy use of your network and USB devices at the same time as you are reading from all three SSDs simultaneously, the DMI 2.0 link between the CPU and the PCH will have enough bandwidth to handle everything.

With the SSDs you describe, the X79 ports are your best bet. Put the 500GB drives in the 6Gbps ports and if you desperately need 6Gbps on the 250GB drive, plug it into your LSI card.

Edited by Dracarys - 7/11/16 at 6:08pm
post #8 of 9
Thread Starter 
I ran some tests with my music software, and it's all pretty much the same, although loading certain samples into RAM is quicker now; that's about it as far as I can tell. Samsung Magician is so random: my Samsung 850 EVO and 840 256GB get 70k random reads and 60k random writes, versus my Samsung 840 PRO only getting 40k random writes and like 300-ish sequential writes, which makes no sense because the PRO is a better drive! All firmware is up to date. I don't get it. I've tried adjusting some settings in RSTe and in the power options, and it doesn't make a difference. I'm pretty sure this LSI card is just cheap as hell, I mean it was only 100 bucks. I'm disappointed because this was a band-aid upgrade until I'm ready for a DDR4 rig, but right now my 3930K is only 5-10% worse than the newest Broadwell 6-core, so what's the point?

The only thing left I can think of is reverting to an older firmware, which probably won't do anything. I'm also too lazy to return this, so I'll probably just keep it; I guess it keeps my case neat and lets me bypass the ****ty Marvell controllers altogether.
post #9 of 9
Thread Starter 
Well f*ck it, I'm going to keep it, since the throughput is better than the SATA 2 ports and my case is nice and neat.