Overclock.net

vSAN --> Shared Storage Swap - Discussion [pics]

418 views · 3 replies · 2 participants · last post by PuffinMyLye
#1 ·
I've had my 4-node vSAN cluster set up for almost a year now, and I haven't been in love with either the performance or (most importantly) the flexibility it gives me in my home network. So I'm giving strong consideration to repurposing one of my vSAN nodes, along with all the vSAN disks, into a single all-SSD SAN to be used as shared storage for either a 2- or 3-node ESXi cluster.

Hardware for this SAN will be as follows:

  • SuperMicro X10SDV-2C-7TP4F (Xeon D-1508)
  • 16GB DDR4 2133 Registered RAM
  • 4 x Hitachi 400GB HUSSL4040ASS600 SAS SSDs (38 PB TBW :D)
  • 4 x Intel 800GB S3500 SATA SSDs

With regard to the server OS, I'm exploring several options, including the following:

FreeNAS/NAS4Free (FreeBSD)
Napp-it/Nexenta (Solaris)
OMV/ZoL (Linux)

I will update this thread as the project progresses, but if anyone has suggestions or experience with any of these OSes as an ESXi shared datastore (NFS/iSCSI performance in particular), I'd love to hear about it.
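For the ZoL route, the rough shape of what I'd be setting up looks something like this (pool layout, dataset names, and the subnet are just placeholders, not a finalized config):

```shell
# Hypothetical ZFS-on-Linux layout for the 4 x 400GB SAS SSDs -- device names are placeholders
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Dataset tuning commonly suggested for VM datastores (sync behavior left at default)
zfs create -o recordsize=64K -o compression=lz4 tank/vmstore

# Export to the ESXi hosts over NFS (the subnet here is an assumption)
zfs set sharenfs="rw=@10.0.0.0/24,no_root_squash" tank/vmstore
```

Striped mirrors rather than RAID-Z is the usual recommendation for VM workloads, since random IOPS scale with the number of vdevs.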

Current cluster pic before reconfig.

 
#3 ·
I've been running vSAN for over a year too and decided to add some storage over Fibre Channel. I used ESOS for the OS and directly connected the FC HBAs from the storage server to the VMware hosts.

I created two RAID 50 arrays with the 450GB HDDs and a RAID 10 array with the SSDs. Inside the OS, I set up the SSD array to act as a cache for the HDD arrays using bcache.

Performance has been great for shared datastores and RDM disks.
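For anyone curious, a bcache stack like that is set up roughly as follows (device names and the UUID are placeholders, not my exact config):

```shell
# Format the backing device (HDD RAID 50 array) and the cache device (SSD RAID 10 array)
# -- /dev/md0 and /dev/md1 are placeholders for the actual array devices
make-bcache -B /dev/md0
make-bcache -C /dev/md1

# Register both devices with the kernel
echo /dev/md0 > /sys/fs/bcache/register
echo /dev/md1 > /sys/fs/bcache/register

# Attach the backing device to the cache set; get the UUID from bcache-super-show /dev/md1
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Writeback caching helps most with VM write workloads (the default is writethrough)
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

The resulting /dev/bcache0 device is what gets exported over FC.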

Specs:
1 x Supermicro X9SRL-F motherboard
1 x Xeon E5-2670 processor
8 x 8GB (64GB) DDR3 RAM
16 x Hitachi 450GB 10K SAS HDD
6 x Toshiba Q300 Pro 256GB SSD
3 x HP P420 RAID Controller
1 x Intel X520-T2 10GbE NIC
3 x QLogic QLE2562 HBAs
 
#4 ·
Quote:
Originally Posted by jibesh

I've been running vSAN for over a year too and decided to add some storage over Fibre Channel. I used ESOS for the OS and directly connected the FC HBAs from the storage server to the VMware hosts.

I created two RAID 50 arrays with the 450GB HDDs and a RAID 10 array with the SSDs. Inside the OS, I set up the SSD array to act as a cache for the HDD arrays using bcache.

Performance has been great for shared datastores and RDM disks.

Specs:
1 x Supermicro X9SRL-F motherboard
1 x Xeon E5-2670 processor
8 x 8GB (64GB) DDR3 RAM
16 x Hitachi 450GB 10K SAS HDD
6 x Toshiba Q300 Pro 256GB SSD
3 x HP P420 RAID Controller
1 x Intel X520-T2 10GbE NIC
3 x QLogic QLE2562 HBAs
Thanks for the suggestion and for chiming in. I don't have any Fibre Channel hardware, but I am using dual 10Gb SFP+ ports (with optics) on my servers. I just briefly looked at ESOS and it looks more like a pre-boot RAID controller menu than an actual OS haha. Have you benchmarked the performance you're getting? My array(s) will be all-flash, so I want to squeeze out every ounce of performance that I can.
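When I get to benchmarking, my plan (just a sketch at this point, paths and sizes are placeholders) is to run fio from a Linux test VM sitting on the datastore, something like:

```shell
# Hypothetical fio run against a file on the shared datastore -- path/size are placeholders
fio --name=randrw --filename=/mnt/test/fio.dat --size=8G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
```

A 70/30 4k random mix with direct I/O should be a reasonable stand-in for a mixed VM workload, and it would make NFS vs. iSCSI comparisons apples-to-apples.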
 