To 10 GbE or not to 10 GbE.
So while waiting for parts, I did some research today. Basically, I really need more network throughput.
All of the motherboards used have 2x 1 GbE ports, but that is not enough. Each vHost will hold 10-15 virtual machines, and each will use quite a bit of network bandwidth. You can bond the two ports into an aggregation group (getting a 2 Gbit/s pipe), but that still will not be enough. It would suffice for the virtual machines alone, but since all the storage is attached over the network, all the disk traffic needs to pass over the network as well. I'm hoping the SAN will be able to put out around 300 MByte/s, and that needs to reach the vHosts over the network without being slowed down by traffic from the virtual machines.
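To make the shortfall concrete, here's a quick back-of-the-envelope check in Python. The 300 MByte/s figure is my hope for the SAN, not a measured number:

```python
# Back-of-the-envelope check: can a 2x 1 GbE bond carry ~300 MByte/s of
# iSCSI traffic on top of the VM traffic? (300 MByte/s is a hoped-for
# SAN figure, not a benchmark.)

def gbit_to_mbyte(gbit_per_s: float) -> float:
    """Convert link speed in Gbit/s to MByte/s (1 byte = 8 bits)."""
    return gbit_per_s * 1000 / 8

bonded = gbit_to_mbyte(2.0)   # 2x 1 GbE bond -> 250 MByte/s theoretical maximum
san_target = 300.0            # hoped-for SAN throughput in MByte/s

print(f"2 Gbit/s bond tops out at {bonded:.0f} MByte/s")
print(f"SAN alone wants {san_target:.0f} MByte/s -> short by {san_target - bonded:.0f} MByte/s")
```

So the bond is already about 50 MByte/s short before a single virtual machine sends a packet.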
So I looked at the cheapest way to enable the individual hosts for 10 GbE networking. And man, it's not cheap. Let's look at the cheapest solution I found:
1x 8-port 10 GbE switch - link
3x 1-port 10 GbE NICs - even going off eBay - 3x $150
So for that price, I could have a whole other 1U vHost. Another option would be to buy InfiniBand cards and switches off eBay, which are not really 10 GbE, but 7.5 Gbit/s. Used InfiniBand equipment is really cheap these days. The problem is that not all InfiniBand cards are supported by VMware ESXi (the hypervisor we will be running), and even then, iSCSI needs to run over an IP network, which InfiniBand natively is not. Of course there is IPoIB (IP over InfiniBand), but that just adds complexity and potential issues.
Not to mention, I would have to buy that gear off eBay, used and without proper long-term warranties. So in the end, quad-port GbE NICs are what I will be going with.
Each server will have 2 onboard GbE ports and 4 additional GbE ports on the NICs. This gives a maximum throughput of 6 Gbit/s per server, which should be enough, keeps cost down, and hopefully works as required.
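Running the same back-of-the-envelope numbers for this setup (again assuming the hoped-for 300 MByte/s storage figure) shows a decent margin left for VM traffic:

```python
# Headroom check for the 6x 1 GbE setup: what is left for VM traffic
# once the hoped-for 300 MByte/s of iSCSI traffic is accounted for?

def gbit_to_mbyte(gbit_per_s: float) -> float:
    """Convert link speed in Gbit/s to MByte/s (1 byte = 8 bits)."""
    return gbit_per_s * 1000 / 8

total = gbit_to_mbyte(6.0)    # 2 onboard + 4 NIC ports -> 750 MByte/s theoretical
san_target = 300.0            # MByte/s hoped for from the SAN (assumption)
vm_headroom = total - san_target

print(f"6 Gbit/s total      = {total:.0f} MByte/s")
print(f"Reserved for iSCSI  = {san_target:.0f} MByte/s")
print(f"Left for VM traffic = {vm_headroom:.0f} MByte/s (~{vm_headroom * 8 / 1000:.1f} Gbit/s)")
```

Roughly 450 MByte/s of theoretical headroom for 10-15 virtual machines per host, which is why I think this will do the job.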