Originally Posted by spyshagg
Here is what you would need to even start benefiting from those cards, depending on the scenario:

Scenario 1
One card on the server and one card on the PC to improve transfer speed between them:
- Only possible by using the SMB Multichannel feature of SMB 3, which at the time of this post is available only in Windows Server 2012, Windows 8 and Windows 10 (Linux's SMB3 implementation is not yet feature-complete).
- Works in switch-independent mode (no LACP required), but may need to be set up via PowerShell depending on the Windows version.
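For the PowerShell setup mentioned above, the relevant cmdlets live in Windows' SMB module. A minimal sketch (run as administrator; Multichannel is usually on by default, so this mostly serves to verify or re-enable it):

```shell
# PowerShell on Windows 8/10 or Server 2012+.
# Check whether SMB Multichannel is enabled on the client side:
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Enable it explicitly if it was turned off:
Set-SmbClientConfiguration -EnableMultiChannel $true -Force

# While a large transfer is running, confirm that multiple
# NICs are actually carrying SMB traffic:
Get-SmbMultichannelConnection
```

If `Get-SmbMultichannelConnection` shows only one interface, check that both NICs are up, on the same subnet, and of the same speed class.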
- Total throughput available between server and client = 500MB/s. (This means both computers should have an HDD/SSD capable of sustaining well over 300MB/s.)

Scenario 2
Both cards in the same server, feeding a multitude of client PCs (with 8 NICs you had better be at a library... in a big city):
- The server needs to be capable of link aggregation (Linux bonding or Windows Server 2012).
- Switch-independent mode available (no LACP required).
- Total server throughput = 1000MB/s. (Linux caches heavily to RAM, but it would still be nice if your HDDs could sustain this speed.)
- Client PCs can be made of potatoes.

Tips:
- If you are planning this only to move large amounts of data within the server itself, don't. SMB 3 supports server-side copy when you copy/paste data within the same share, so even a measly 10Mbps NIC can copy gigabytes of data in a few seconds (depending on the server's disk speeds). On a Linux server with the btrfs filesystem, those copies are instantaneous (literally hundreds of gigabytes in under a second).
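The instant-copy behaviour on btrfs comes from copy-on-write reflinks: the "copy" just shares the original file's extents. A small sketch you can run locally (paths are placeholders; `--reflink=auto` clones instantly on btrfs/XFS and silently falls back to a normal copy elsewhere):

```shell
# Create a 100MB test file in a scratch directory.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big.bin" bs=1M count=100 status=none

# On btrfs this clone completes instantly regardless of file size,
# because no data blocks are actually copied until modified.
cp --reflink=auto "$dir/big.bin" "$dir/big-copy.bin"

# The clone is byte-for-byte identical to the original.
cmp -s "$dir/big.bin" "$dir/big-copy.bin" && echo "copies identical"

rm -r "$dir"
```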
I have personally configured and installed many servers across my company using scenario 2, with Linux (CentOS) link aggregation mode 6 (no LACP required) and up to 5 NICs. If you are deploying it to production in a file server capacity, I recommend lots and lots of RAM: Linux caches the most recently used files there, lifting a huge burden off the HDDs and extending their life expectancy.
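For reference, bonding mode 6 is `balance-alb` (adaptive load balancing), which needs no cooperation from the switch. A minimal iproute2 sketch of such a bond; the interface names `eth0`/`eth1` and the address are placeholders for your own setup, and on CentOS you would normally make this persistent via ifcfg/NetworkManager files instead:

```shell
# Run as root. Mode 6 (balance-alb) balances both transmit and
# receive traffic across slaves without LACP on the switch.
modprobe bonding
ip link add bond0 type bond mode balance-alb miimon 100

# Slaves must be down before being enslaved to the bond.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and give it the server's address.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# Inspect the bond's state and active slaves:
cat /proc/net/bonding/bond0
```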