1. Unless you are using SAS devices, use the Intel 6 Gbit/s SATA ports first (the blue SATA #1-2 are 6 Gbit/s). After those two are populated, it depends on the drivers you'll be using: with Marvell's own drivers I get decent performance for additional SSDs on the Marvell 6 Gbit/s SATA ports (gray), but with the Windows built-in Marvell driver the results can be worse than on Intel's 3 Gbit/s SATA ports.
2. If you are using Windows, I recommend updating to the latest Intel RSTe (SATA) and Marvell SATA drivers. As for the rest, I do not think it is critical, but it probably does not hurt either.
3. You do not need Marvell's crap at all if you are only using Intel's SATA ports. If you have one SSD, just connect it to an Intel 6 Gbit/s SATA port (blue SATA #1). Unfortunately I do not have pictures of the BIOS here and I cannot reboot right now, but the important points are as follows:
- If you are planning to use stock ECC memory, there is no need to change timings/voltage. But if you are using "overclocker" RAM with XMP, or you wish to overclock, you might need to enter the memory timings manually
- Ensure that NUMA is ON (Chipset Configuration -> Memory Configuration)
- You can also configure the CPU settings to match your system's role, in terms of RAPL (power limits). For example, if you plan to run CPU-hungry workloads, configure the CPU for the "High Performance Server (HPC)" role, set CPU power management to "Performance" and, optionally, maximize the turbo power limits
- You can speed up booting by enabling UEFI-only boot and Windows fast boot, but for this to work Windows has to be installed in UEFI mode. The same goes for Linux: if you use UEFI boot, your distro has to be installed in UEFI mode.
- There are many more options, but unless you have very specific requirements I suggest leaving them as-is. You can enable "above 4G decoding" for PCI devices to free up some address space below 4 GB. If you are using ECC memory, you can also enable both demand and patrol scrubbing in the memory configuration to increase robustness against simple memory errors.
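After setting these and rebooting into Linux, you can sanity-check several of the settings above from sysfs. This is just a sketch: all paths assume the standard Linux sysfs layout, and the last two checks only print anything if the intel_rapl and EDAC drivers are loaded.

```shell
# NUMA: count the nodes the kernel sees (2 expected on a dual-socket board;
# a single-socket box or a container may report 1 or even 0)
nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
echo "NUMA nodes visible: $nodes"

# RAPL: long-term package power limit per domain (needs the intel_rapl driver)
for d in /sys/class/powercap/intel-rapl:*; do
  [ -e "$d/constraint_0_power_limit_uw" ] || continue
  echo "$(cat "$d/name") power limit: $(($(cat "$d/constraint_0_power_limit_uw") / 1000000)) W"
done

# ECC: corrected (CE) / uncorrected (UE) error counters (needs the EDAC driver)
for mc in /sys/devices/system/edac/mc/mc*; do
  [ -d "$mc" ] || continue
  echo "$(basename "$mc"): CE=$(cat "$mc/ce_count") UE=$(cat "$mc/ue_count")"
done
```

If patrol/demand scrubbing is doing its job, the CE counter is where corrected single-bit errors would show up over time.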
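On the UEFI point: before flipping the BIOS to UEFI-only boot, check how your current install actually booted. On Windows, msinfo32 shows this as "BIOS Mode"; on Linux, the kernel exposes /sys/firmware/efi only when it came up via UEFI. A minimal check:

```shell
# If the kernel exposes /sys/firmware/efi, this system booted via UEFI;
# otherwise it booted in legacy BIOS/CSM mode
if [ -d /sys/firmware/efi ]; then
  echo "Booted in UEFI mode"
else
  echo "Booted in legacy BIOS mode"
fi
```

If this says legacy mode, enabling UEFI-only boot will leave that install unbootable until it is reinstalled (or converted) to UEFI.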
5. No - but for people who have B0/C0/C1 SNB-EP Xeons, slot #3 is actually more stable in terms of boot detection / PCIe link training.
Thanks a TON for this info! I really appreciate it. I'll do as instructed. One question: can you expound on "UEFI" a little? Is it worth doing? Has anyone run into any issues at all? Would you personally recommend it, and how much of a difference can it make when already on an SSD?