
[Build Log] NUS Server - Page 4

post #31 of 123
Thread Starter 
Quote:
Originally Posted by Lt.JD View Post

Good stuff, can't wait for the next update.
Thanks, Lt! So very exciting for me.

I have uploaded about 60 photos to the photos section in post 3; be sure to click them to view the full-size images. I have also started the build, and those photos are in post 3 as well, along with a video of my new fan, so check it out.
Edited by tycoonbob - 7/5/12 at 11:21am
post #32 of 123
What made you go with Windows Server 2012 for NAS/SAN? Granted, 2012 is most likely free for you, but it seems like performance would be better, and setup simpler, with a more bare-bones Linux install running iSCSI and CIFS. Also, are you doing any sort of high availability for the SAN?

Awesome build though (and I'm very jealous), and I love the Norco 2440; I've used it in a few NAS/SAN builds.
post #33 of 123
Thread Starter 
Quote:
Originally Posted by swat565 View Post

What made you go with Windows Server 2012 for NAS/SAN? Granted, 2012 is most likely free for you, but it seems like performance would be better, and setup simpler, with a more bare-bones Linux install running iSCSI and CIFS. Also, are you doing any sort of high availability for the SAN?
Awesome build though (and I'm very jealous), and I love the Norco 2440; I've used it in a few NAS/SAN builds.

My home environment is all Windows once again, and that's my personal preference. I plan to run 2012 since I can get a free copy, but I won't be relying on Storage Spaces, exactly. Since I am using a hardware controller, disk I/O performance won't really depend on the OS. Secondly, I will also be using MPIO between this server and my Hyper-V servers.

Nothing is simpler than Windows, to me.
post #34 of 123
Thread Starter 
It's been a week, but more progress has been made. I ordered my CPU and CPU cooler and should have them by the middle of next week. I will finally be able to power this on and test it all out!
post #35 of 123
Sorry if this was already posted.

But what sort of interconnect are you using for the SAN? I assume Gigabit Ethernet over Cat6. If you have multiple NICs, are you going to team them for more bandwidth?

What sort of throughput are you expecting?
    
post #36 of 123
Thread Starter 
Quote:
Originally Posted by dushan24 View Post

Sorry if this was already posted.
But what sort of interconnect are you using for the SAN? I assume Gigabit Ethernet over Cat6. If you have multiple NICs, are you going to team them for more bandwidth?
What sort of throughput are you expecting?

I believe I covered my network plan in the first post, but yes: the NUS server has 4 gigabit NICs, and everything else in my house is (purposely) gigabit. I currently have a 16-port unmanaged gigabit switch, and I will be getting a 24-port smart switch in the future. On the NUS server I am going to set up MPIO across the 4 NICs, and on my Hyper-V hosts I am going to set up MPIO across 2 ports. I will be running some Cat6 STP throughout my house in the coming weeks, maybe Cat6a if I can find quality cable cheaply enough.

As far as throughput goes, I really don't know. Server to server over my existing 16-port gigabit switch is a constant, stable 120-ish MB/s, assuming I'm not streaming anything. I'm not looking to maximize throughput; as long as I can run 15-25 VMs against iSCSI targets on this NUS server, I will be more than happy. I have also thought about getting some 20Gb/s InfiniBand HBAs and linking them back to back (without a switch), which would be an investment of $200-300 (for 3 HBAs plus cables).
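
For a rough sense of what MPIO across those links could deliver, here is a quick back-of-envelope estimate in Python, using the 120-ish MB/s per gigabit link figure above and assuming (optimistically) that MPIO spreads the load evenly across paths:

Code:
# Back-of-envelope only; assumes MPIO spreads I/O evenly across paths and
# that every link sustains the ~120 MB/s observed server to server.
per_link_mb_s = 120      # observed throughput per gigabit link (from the post)
nus_paths = 4            # 4 NICs with MPIO on the NUS server
host_paths = 2           # 2 NICs with MPIO on each Hyper-V host
vm_count = 25            # upper end of the planned 15-25 VMs

nus_ceiling = per_link_mb_s * nus_paths      # ~480 MB/s aggregate into the NUS
host_ceiling = per_link_mb_s * host_paths    # ~240 MB/s per Hyper-V host
per_vm_share = nus_ceiling / vm_count        # ~19 MB/s if all 25 VMs hit storage at once

print(f"NUS ceiling: ~{nus_ceiling} MB/s")
print(f"Per-host ceiling: ~{host_ceiling} MB/s")
print(f"Worst-case per-VM share: ~{per_vm_share:.0f} MB/s")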

Things may change once I get to that point, though. I plan to finish the NUS before building my mDC room, running my cables, having my ISP come out to run a new line, and so on.

Thanks for the questions!
post #37 of 123
Switched fabric (such as InfiniBand) is cool, but not worth it in my opinion. If you're all Cat6, try finding a second-hand 10G switch on the cheap.

Even that is overkill. We use gigabit Ethernet through a managed switch over Cat6 in our data centre; that infrastructure has 4 SANs, 8 ESXi hosts, and a few other things, and we don't saturate the pipe.

All we do is:
- Use Multipath I/O (MPIO) on the iSCSI initiator in all the Windows VMs (a rough sketch of the idea follows below)
- Use link aggregation (NIC teaming) on each of the servers
- Segment the traffic into two VLANs
- Use port trunking on the switches.
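
For anyone picturing how MPIO differs from teaming, here is a minimal Python sketch of the round-robin idea (a toy, not any vendor's MPIO stack; the portal addresses are made up): the initiator keeps one iSCSI session per path and rotates I/O across them.

Code:
from itertools import cycle

# Toy round-robin path selection; the portal IPs are hypothetical and
# nothing here speaks real iSCSI.
paths = ["10.0.20.11", "10.0.20.12"]   # one iSCSI session per NIC/team
rr = cycle(paths)

def send_io(block_id):
    portal = next(rr)                  # pick the next path in rotation
    return f"block {block_id} -> session on {portal}"

for blk in range(4):
    print(send_io(blk))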
Edited by dushan24 - 7/13/12 at 6:51pm
    
post #38 of 123
Thread Starter 
Quote:
Originally Posted by dushan24 View Post

Switched fabric (such as InfiniBand) is cool, but not worth it in my opinion. If you're all Cat6, try finding a second-hand 10G switch on the cheap.
Even that is overkill. We use gigabit Ethernet through a managed switch over Cat6 in our data centre; that infrastructure has 4 SANs, 8 ESXi hosts, and a few other things, and we don't saturate the pipe.
All we do is:
- Use Multipath I/O (MPIO) on the iSCSI initiator in all the Windows VMs
- Use link aggregation (NIC teaming) on each of the servers
- Segment the traffic into two VLANs
- Use port trunking on the switches.

10GigE is still too expensive. It would be $500 minimum for the switch, plus the cost of a 10GigE adapter for each host in my cluster and for the NUS server. You can get IB HBAs for $75 each and skip the switch entirely, but they're much less supported, and iSCSI over IB isn't officially supported. Going 10GigE would also kind of defeat the purpose of the motherboard I chose, and since I will have 25 VMs max (probably), MPIO across 4 NICs on the NUS and 2 NICs on each host should be plenty for my network.
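
Putting rough numbers on that comparison (the switch minimum and HBA price are from above; the per-port 10GigE adapter price is an assumption):

Code:
# Rough cost comparison; the 10GigE NIC price is assumed, the rest comes
# from the figures quoted in the post.
endpoints = 3                 # two Hyper-V hosts plus the NUS server ("3 HBAs")
switch_10gbe = 500            # minimum quoted for a 10GigE switch
nic_10gbe = 300               # assumed price per 10GigE adapter
ib_hba = 75                   # quoted price per InfiniBand HBA
ib_cables = 75                # rough allowance to land in the $200-300 estimate

cost_10gbe = switch_10gbe + endpoints * nic_10gbe   # ~$1400 with these assumptions
cost_ib = endpoints * ib_hba + ib_cables            # ~$300, back to back, no switch

print(f"10GigE: ~${cost_10gbe}")
print(f"InfiniBand: ~${cost_ib}")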

I probably won't deal with VLANs, but I did think about it. Now I am a little confused about what you said you're using at work: MPIO on your iSCSI initiator, but LACP on your other servers? Other servers as in your hosts, or other servers in your environment? You can't combine MPIO and LACP on the same links; they are two different things, though both provide failover.

Anyway, I work for an IT consulting company, so I see different environments all over the place, from doing a P2V on an environment's only two servers to working with companies like Papa Johns, Humana, Nissan, and Toyota (of America). It's amazing to see all the different ways things are done, but I am confident that what I am planning is more than I will really need.
post #39 of 123
Quote:
Originally Posted by tycoonbob View Post

Now I am a little confused about what you said you're using at work: MPIO on your iSCSI initiator, but LACP on your other servers? Other servers as in your hosts, or other servers in your environment? You can't combine MPIO and LACP on the same links; they are two different things, though both provide failover.

Each server has two 4-port Ethernet cards. We aggregate (team) the ports on each card, which yields additional bandwidth; this is done at the host level.

We then use MPIO with the two discrete NIC teams at the hypervisor level to map LUNs on the SAN as storage repositories for the given ESXi host.

Additionally, each VM is configured with multiple vNICs so we can again use MPIO via the iSCSI initiator to map LUNs directly to the VM. We have also implemented two VLANs to segregate VM and SAN traffic.
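
To make the layering concrete, here is a small Python sketch of that layout as a data structure (the names and VLAN IDs are invented; only the relationships are taken from the description above):

Code:
# Illustrative only: two 4-port cards -> two teams -> two MPIO paths, with
# VM and SAN traffic split across two VLANs. Names and IDs are made up.
esxi_host = {
    "teams": {
        "team-A": ["card0-p0", "card0-p1", "card0-p2", "card0-p3"],  # teamed card 0
        "team-B": ["card1-p0", "card1-p1", "card1-p2", "card1-p3"],  # teamed card 1
    },
    "mpio_paths": ["team-A", "team-B"],   # one iSCSI path per discrete team
    "vlans": {"vm": 10, "san": 20},       # traffic segregation
    "vnics_per_vm": 2,                    # guest-level MPIO via the iSCSI initiator
}
print(esxi_host["mpio_paths"], esxi_host["vlans"])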

I see your confusion; my previous description implied we were using MPIO and LACP together on a single NIC.

PS: Edited to correct mistakes (I'm tired).
Edited by dushan24 - 7/14/12 at 8:15am
    
post #40 of 123
Quote:
Originally Posted by tycoonbob View Post

10GigE is still too expensive. It would be $500 minimum for the switch, plus the cost of a 10GigE adapter for each host in my cluster and for the NUS server. You can get IB HBAs for $75 each and skip the switch entirely, but they're much less supported, and iSCSI over IB isn't officially supported. Going 10GigE would also kind of defeat the purpose of the motherboard I chose, and since I will have 25 VMs max (probably), MPIO across 4 NICs on the NUS and 2 NICs on each host should be plenty for my network.

That's a fair point about the cost.

But, as you said, iSCSI over InfiniBand may be troublesome.

I'd just stick with gigabit Ethernet.
    