
[Build Log] FINAL Bachelor Splurge - New Server Rack Build - Goodbye $$$ - Page 8

post #71 of 91
Thread Starter 
Quote:
Originally Posted by seross69 View Post

Great planning, and you're using a lot of things... Is it just to learn, just because you could, or a combination? I use hard drives that I put in a safe deposit box for my backups...

This was done to add redundancy/high availability to my media dockers (Plex, Sonarr, CP, etc.) and to set up a lab environment for testing new OSes and applications for use in my everyday job/career.
post #72 of 91
Quote:
Originally Posted by PuffinMyLye View Post

24 port Cat6 Patch Panel
Switch: Dell X1052 10Gb Switch

Hello,
I was also looking into the Dell X1052 switch.
Does it provide full 10 Gb transfer between the 4 x 10 Gb ports? Dell wasn't able to answer this question.

I want to connect 2 servers and 2 workstations via 10Gb and get full speed.

For all other computers 1 Gb is enough.

Klaus
post #73 of 91
Thread Starter 
Quote:
Originally Posted by klaus79856 View Post

Hello,
I was also looking into the Dell X1052 switch.
Does it provide full 10 Gb transfer between the 4 x 10 Gb ports? Dell wasn't able to answer this question.

I want to connect 2 servers and 2 workstations via 10Gb and get full speed.

For all other computers 1 Gb is enough.

Klaus

Yes. I have 4 servers connected to the 4 10Gb SFP+ ports and they talk to each other at full 10Gb speed.
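
If anyone wants to sanity-check that kind of throughput on their own gear, a quick iperf3 run between two of the 10Gb-connected hosts is the easiest way. A minimal sketch (the hostname is hypothetical; run iperf3 -s on the other end first):

```python
# Minimal sketch: verify 10GbE throughput between two hosts with iperf3.
# Assumes iperf3 is installed on both ends; "server01" is a made-up hostname.
import subprocess

def check_throughput(server: str, seconds: int = 10) -> None:
    """Run an iperf3 client test against `server` and print the summary."""
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds)],
        capture_output=True, text=True, check=True,
    )
    # On a clean 10GbE link you'd expect something in the ~9.4 Gbits/sec range.
    print(result.stdout)

if __name__ == "__main__":
    check_throughput("server01")
```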
post #74 of 91
My setup won't ever compare to yours, but I picked up a Dell PowerConnect 5524 for $25.

Here's the beginning of everything; so far I only have 4 wires run in the house: 2 to the office and 2 behind the living room TV.




I keep referring back to this thread for ideas.
Edited by Dalchi Frusche - 7/15/16 at 5:19am
post #75 of 91
Thread Starter 
Quote:
Originally Posted by Dalchi Frusche View Post

My setup won't ever compare to yours, but I picked up a Dell PowerConnect 5524 for $25.

Here's the beginning of everything; so far I only have 4 wires run in the house: 2 to the office and 2 behind the living room TV.

I keep referring back to this thread for ideas.

Awesome deal! How loud is that thing? Noise was a big concern for me since my rack is in my office.



On a side note, I've pre-ordered the new Ubiquiti EdgeSwitch ES-16-XG switch that is shipping soon. Specs are as follows:

EdgeSwitch 16-port XG (ES-16-XG)
  • 12 SFP+ ports and 4 RJ45 10GBASE-T ports (total of 16 10G ports). No PoE output support on the RJ45 ports.
  • 1U rackmount with removable mounting ears (enclosure is pretty much identical to the ES-12F)
  • Same features as existing EdgeSwitch models (extensive L2 features and basic L3 features)
  • Full line-rate performance for all ports simultaneously (i.e., 160 Gbps "throughput", 320 Gbps "capacity", or 238.10 Mpps "rate"; see the quick arithmetic check after this list)
  • DC power input support: 2.5 mm DC power inline connector for 16V to 25V DC input, minimum 56W (same as ES-48-Lite and ES-12F)
  • RJ45 serial console port
  • Two cooling fans
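
For anyone wondering where those line-rate figures come from, here's a rough back-of-the-envelope check (my own math, assuming minimum-size 64-byte frames plus the usual 20 bytes of preamble and inter-frame gap; not Ubiquiti's numbers):

```python
# Rough sanity check of the ES-16-XG line-rate figures above.
ports = 16
port_speed_gbps = 10

throughput_gbps = ports * port_speed_gbps   # 160 Gbps in one direction
capacity_gbps = throughput_gbps * 2         # 320 Gbps counting both directions (full duplex)

# Packet rate at line rate with minimum-size frames:
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes on the wire.
wire_bytes_per_frame = 64 + 8 + 12
rate_mpps = throughput_gbps * 1e9 / (wire_bytes_per_frame * 8) / 1e6

print(throughput_gbps, capacity_gbps, round(rate_mpps, 2))  # 160 320 238.1
```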


Now I can finally utilize the second SFP+ port on all 4 of my servers as well as connect my desktop PC via 10GbE.
post #76 of 91
Quote:
Originally Posted by PuffinMyLye View Post

Awesome deal! How loud is that thing? Noise was a big concern for me since my rack is in my office.

It's super quiet; I think my PFSense box is actually louder, and I don't even hear it kick on. I wasn't too concerned about noise though, since it'll be in an enclosure in the basement.
Quote:
Originally Posted by PuffinMyLye View Post

On a side note, I've pre-ordered the new Ubiquiti EdgeSwitch ES-16-XG switch that is shipping soon.

Now I can finally utilize the second SFP+ port on all 4 of my servers as well as connect my desktop PC via 10GbE.

Yay for speeeeeeed!
post #77 of 91
First of all, great work you have done.

Can you elaborate more on the VMware configuration? I am interested in how you have done each part of it, especially the vSAN and network parts.
For the media storage space, did you configure it as dedicated storage for the VM, or through VMware itself as a VM datastore (either vSAN or normal)?

Thank you
post #78 of 91
Thread Starter 
Quote:
Originally Posted by alawadhi View Post

First of all, great work you have done.

Can you elaborate more on the VMware configuration? I am interested in how you have done each part of it, especially the vSAN and network parts.
For the media storage space, did you configure it as dedicated storage for the VM, or through VMware itself as a VM datastore (either vSAN or normal)?

Thank you

Each node has a 400GB cache SSD and an 800GB capacity SSD contributing to my vSAN datastore. I'm using an FTT (failures to tolerate) policy of 1, so I can put one server into maintenance mode and still lose an additional node while keeping my VMs up and available.
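
To put rough numbers on that (my own back-of-the-envelope math, assuming FTT=1 means simple RAID-1 mirroring and ignoring witness components and recommended slack space; the 400GB cache drives don't count toward datastore capacity):

```python
# Back-of-the-envelope vSAN usable capacity for this 4-node cluster with FTT=1.
# Ignores witness/metadata overhead and the recommended free-space headroom.
nodes = 4
capacity_ssd_gb = 800                 # capacity-tier SSD per node; cache tier doesn't count

raw_gb = nodes * capacity_ssd_gb      # 3200 GB raw capacity tier
usable_gb = raw_gb / 2                # FTT=1 mirrors every object once -> ~1600 GB usable

print(raw_gb, usable_gb)  # 3200 1600.0
```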

The above datastore is just for my VMs themselves. My media storage drives are connected to HBA controllers (different from the controllers my vSAN drives are connected to) and are passed through to individual UnRAID VMs via hardware passthrough. Those UnRAID VMs are just tiny 1GB vmdks for booting, as once UnRAID is booted it operates completely off of the USB drives (which I've also passed through to those VMs). Therefore all the media on my bulk drives is not part of the vSAN datastore, as it would be all but impossible to migrate 64TB worth of data between hosts during a failure or maintenance. That is the purpose of having two of those arrays: in case I need to take one down or it fails on its own.

As for the networking piece, I'm using a virtual distributed switch (vDS) via vCenter to manage the physical and virtual networking of the cluster. I currently have the single 10Gb NIC shared amongst vSAN, vMotion, and VM Network traffic, with dual 1Gb NICs as failover, and the reverse is true for the Management network. I'm waiting for the new Ubiquiti ES-16-XG switch to be released in a few weeks so that I can utilize the 2nd SFP+ port in all 4 of my servers, at which point I will probably reconfigure my vDS.
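
If it helps to picture the teaming, this is roughly the active/standby layout per port group (just an illustration of what I described above, with made-up uplink names, not an export from the vDS):

```python
# Illustration of the vDS uplink teaming described above (uplink names are made up).
# vSAN / vMotion / VM Network ride the 10Gb uplink with the 1Gb ports as standby;
# Management is the reverse.
teaming = {
    "vSAN":       {"active": ["uplink-10g"], "standby": ["uplink-1g-a", "uplink-1g-b"]},
    "vMotion":    {"active": ["uplink-10g"], "standby": ["uplink-1g-a", "uplink-1g-b"]},
    "VM Network": {"active": ["uplink-10g"], "standby": ["uplink-1g-a", "uplink-1g-b"]},
    "Management": {"active": ["uplink-1g-a", "uplink-1g-b"], "standby": ["uplink-10g"]},
}

for portgroup, uplinks in teaming.items():
    print(f"{portgroup}: active={uplinks['active']}, standby={uplinks['standby']}")
```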

I'm not sure if that answers your questions, but feel free to ask me anything else.

P.S. I just realized my OP was a bit out of date with some of the changes I've made/parts I've ordered since, so I just updated it to reflect my current setup.
Edited by PuffinMyLye - 7/22/16 at 11:10am
post #79 of 91
Quote:
Originally Posted by PuffinMyLye View Post

Each node has a 400GB cache SSD and an 800GB capacity SSD contributing to my vSAN datastore. I'm using an FTT (failures to tolerate) policy of 1, so I can put one server into maintenance mode and still lose an additional node while keeping my VMs up and available.

The above datastore is just for my VMs themselves. My media storage drives are connected to HBA controllers (different from the controllers my vSAN drives are connected to) and are passed through to individual UnRAID VMs via hardware passthrough. Those UnRAID VMs are just tiny 1GB vmdks for booting, as once UnRAID is booted it operates completely off of the USB drives (which I've also passed through to those VMs). Therefore all the media on my bulk drives is not part of the vSAN datastore, as it would be all but impossible to migrate 64TB worth of data between hosts during a failure or maintenance. That is the purpose of having two of those arrays: in case I need to take one down or it fails on its own.

As for the networking piece, I'm using a virtual distributed switch (vDS) via vCenter to manage the physical and virtual networking of the cluster. I currently have the single 10Gb NIC shared amongst vSAN, vMotion, and VM Network traffic, with dual 1Gb NICs as failover, and the reverse is true for the Management network. I'm waiting for the new Ubiquiti ES-16-XG switch to be released in a few weeks so that I can utilize the 2nd SFP+ port in all 4 of my servers, at which point I will probably reconfigure my vDS.

I'm not sure if that answers your questions, but feel free to ask me anything else.

P.S. I just realized my OP was a bit out of date with some of the changes I've made/parts I've ordered since, so I just updated it to reflect my current setup.

Thanks for your prompt reply.

So, I can summarize what you did as:
VM Hosts
  1. One SSD for cache, the other for vSAN capacity storage
  2. vDS has vMotion, normal network, and vSAN traffic using 10G, with failover to 1G
  3. Management network on 1G, with failover to 10G

Media Storage
  1. The storage VM (UnRAID) uses HDDs that are directly attached to it (native, not as a VM datastore)
  2. Replication is done to the other VM host, which has the media store, over the normal network (not through VMware)

What do you use for replication? rsync? something else?

For the Plex server, how did you configure the mount points (as you mentioned you are using Linux)? As 1 active and 1 failover?

I had an idea of using a low-end PC/server for storage with dual 10G (one port for the network and the other for direct attach to another server running Plex/etc., configured as iSCSI or something else), and a high-end PC/server for computation.
post #80 of 91
Thread Starter 
Quote:
Originally Posted by alawadhi View Post

Quote:
Originally Posted by PuffinMyLye View Post

Each node has a 400GB cache SSD and an 800GB capacity SSD contributing to my vSAN datastore. I'm using an FTT (failures to tolerate) policy of 1, so I can put one server into maintenance mode and still lose an additional node while keeping my VMs up and available.

The above datastore is just for my VMs themselves. My media storage drives are connected to HBA controllers (different from the controllers my vSAN drives are connected to) and are passed through to individual UnRAID VMs via hardware passthrough. Those UnRAID VMs are just tiny 1GB vmdks for booting, as once UnRAID is booted it operates completely off of the USB drives (which I've also passed through to those VMs). Therefore all the media on my bulk drives is not part of the vSAN datastore, as it would be all but impossible to migrate 64TB worth of data between hosts during a failure or maintenance. That is the purpose of having two of those arrays: in case I need to take one down or it fails on its own.

As for the networking piece, I'm using a virtual distributed switch (vDS) via vCenter to manage the physical and virtual networking of the cluster. I currently have the single 10Gb NIC shared amongst vSAN, vMotion, and VM Network traffic, with dual 1Gb NICs as failover, and the reverse is true for the Management network. I'm waiting for the new Ubiquiti ES-16-XG switch to be released in a few weeks so that I can utilize the 2nd SFP+ port in all 4 of my servers, at which point I will probably reconfigure my vDS.

I'm not sure if that answers your questions, but feel free to ask me anything else.

P.S. I just realized my OP was a bit out of date with some of the changes I've made/parts I've ordered since, so I just updated it to reflect my current setup.
What do you use for replication? rsync? something else?

For the Plex server, how did you configure the mount points (as you mentioned you are using Linux)? As 1 active and 1 failover?

I had an idea of using a low-end PC/server for storage with dual 10G (one port for the network and the other for direct attach to another server running Plex/etc., configured as iSCSI or something else), and a high-end PC/server for computation.

I looked into rsync for replication but couldn't find an easy way to have it run indefinitely at a specific interval. And since I'm a Windows admin by trade and have a Windows VM I use for VM backups (Veeam), I just threw SyncBack Free on there, which mirrors the shares on both UnRAID VMs every half hour.
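
For what it's worth, staying on the Linux side would just mean looping rsync on a timer (or a cron entry). A minimal sketch of the same half-hourly mirroring, with made-up share paths:

```python
# Minimal sketch of interval-based rsync mirroring (the same job SyncBack Free does
# for me, just from a Linux box). The share paths are made up -- point them at the
# NFS mounts of the two UnRAID VMs. A cron entry running rsync would work equally well.
import subprocess
import time

SOURCE = "/mnt/nfs/unraid01/media/"   # trailing slash: sync the contents, not the folder itself
DEST = "/mnt/nfs/unraid02/media/"
INTERVAL_SECONDS = 30 * 60            # every half hour

while True:
    # -a preserves attributes/timestamps, --delete keeps DEST an exact mirror of SOURCE
    subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=False)
    time.sleep(INTERVAL_SECONDS)
```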

For the mount points on Linux I'm using mergerfs to pool both sets of NFS-mounted shares. Mergerfs is configured with the 'ff' (first found) option, so my dockers will always use the NFS shares of UnRAID01 unless they are unavailable, in which case they would use the UnRAID02 shares.
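
If it helps to picture what the 'ff' policy does, it's essentially this (just an illustration of the policy's logic with made-up paths, not actual mergerfs code or configuration):

```python
# Illustration of mergerfs' "first found" (ff) behavior with made-up branch paths:
# for a given file, use the first branch where it exists, so UnRAID01 is always
# preferred and UnRAID02 is only used when 01's shares aren't available.
import os

BRANCHES = ["/mnt/nfs/unraid01/media", "/mnt/nfs/unraid02/media"]

def first_found(relative_path: str) -> str | None:
    """Return the full path of relative_path on the first branch that has it."""
    for branch in BRANCHES:
        candidate = os.path.join(branch, relative_path)
        if os.path.exists(candidate):
            return candidate
    return None

print(first_found("Movies/example.mkv"))  # hypothetical file
```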

What you have brainstormed sounds like it would work just fine.
Edited by PuffinMyLye - 7/22/16 at 1:03pm