

Premium Member · 10,851 Posts · Discussion Starter #1 (Edited)
I just got done building a new FreeNAS server to replace my older one. The purpose was to have more cores and RAM to help with Plex transcoding, to be able to run more bhyve virtual machines, and to have more storage space available with room to add even more. I am slowly but surely converting all my media (mostly H.264) into H.265/HEVC to optimize the space the files take, which really pegs the CPU. My Xeon was starting to struggle with a high number of simultaneous transcodes on top of my VMs and everything else going on. Side note: I can't wait for the 3900X, or even better the 16c/32t part coming to AM4, which will go in my personal desktop so it just flies through the H.264-to-HEVC conversions. The 2700X is okay, but when you're converting tons of files, any time saved helps.
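For anyone curious, the conversion itself boils down to something like the command below. This isn't necessarily my exact pipeline or settings; it's just a sketch of a typical ffmpeg/libx265 re-encode, and the file names, CRF, and preset are example values only.

Code:
# Rough sketch of an H.264 -> HEVC re-encode with ffmpeg/libx265.
# CRF 22 and preset "medium" are placeholders; tune to taste.
ffmpeg -i input.mkv -map 0 \
       -c:v libx265 -crf 22 -preset medium \
       -c:a copy -c:s copy \
       output-hevc.mkv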

Specs on the server are:

SuperMicro SuperChassis 933T-R760B - a 3U rackmount server case with 15 hot-swappable SATA bays and triple-redundant 760W power supplies. Plenty of airflow.
Asrock X370 Taichi - I had this before I upgraded to my Asus C7H. It's got an amazing VRM, 12K-hour capacitors, ECC RAM support, a POST code readout, and 10 SATA ports. All 10 SATA ports are currently occupied, but if I add more drives I can pick up something like an LSI 9207-8i, which is no problem. I had to remove the big white shroud running across the left side of the motherboard to fit the case fans in; it was very easy to do and was just held in place by screws on the back.
AMD Ryzen 7 1700 - 8c/16t, perfect for my needs
Stock cooler from the 2700X - slightly better than the 1700's cooler, and I had it kicking around since I have a Scythe Mugen 5 on my 2700X
4 x 16GB (64GB) Crucial CT16G4WFD8266 - all 4 sticks run fine at the rated 2666MHz CL19 and are supported in ECC mode by the Asrock board. ECC is essential for my environment
Visiontek Radeon HD 5450 1GB - installed in the bottom PCIe slot since I'll use the x8/x16 slots for 10GbE or an HBA for additional storage down the road. It has no fan, which is nice, and it's basically only needed for the initial installation of FreeNAS and for any future BIOS changes; everything else I do is either over SSH or through the FreeNAS webGUI
10 x Toshiba X300 5TB HDDs - configured in a ZFS RAIDZ3 (roughly the pool layout sketched after this list), so I have redundancy to survive 3 drive failures with usable storage of about 35TB. With lz4 compression enabled, I should be able to store a little more than that. They're cheap and fast (and noisy), so I hope I don't need to replace them too often for failures. I have one extra sitting around in case of failure (I already verified all drives were good with long SMART tests).
2 x 32GB Silicon Power Ultima U02 flash drives - used in a mirror for the operating system. I stocked up on these when they were something like $6 or $7 for a 2-pack, so I'll be ready to replace them if they die. FreeNAS works fine off a flash drive, and the large capacity lets me keep many previous versions of FreeNAS in case I ever upgrade and need to roll back.
UPDATE: Samsung PM961 128GB MLC M.2 NVMe SSD for the boot drive - FreeNAS no longer boots and loads into RAM like previous versions did, so flash drives are no longer recommended. I got a good deal on this on Amazon, so I moved over to it.
OS: FreeNAS 11.2-U4.1 - FreeBSD is rock solid, and OpenZFS is one of the best filesystems. In addition to the various jails (all manually configured, no FreeNAS plugins here) for things like Plex, Tautulli, Transmission, etc., I also run a few bhyve VMs: Ubuntu 18 Server (Pi-hole DNS), Ubuntu 16 Server (UniFi controller), and more to come. FreeNAS serves up SMB/NFS shares for various machines on my network. I use netdata to monitor statistics, and I automate short SMART testing, long SMART testing, and ZFS scrubs, with email reports to monitor server health.
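For reference, the pool layout and the scheduled maintenance boil down to roughly the commands below. This is just a sketch: FreeNAS sets all of this up from the webGUI, and the pool/device names here (tank, da0-da9) are made up.

Code:
# Sketch only: 10-disk RAIDZ3 pool with lz4 compression (hypothetical names).
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
zfs set compression=lz4 tank

# The scheduled tasks amount to commands like these, run periodically per
# disk/pool, with the results emailed out:
smartctl -t short /dev/da0   # short SMART self-test
smartctl -t long /dev/da0    # long SMART self-test
zpool scrub tank             # ZFS scrub
zpool status tank            # check scrub progress and pool health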

Pictures of the inside of the server and the back can be found here: https://imgur.com/a/NYrlmp8

I didn't take pictures of the front, but it looks like this (except 10 of the HDD lights are constantly blinking depending on what's going on lol): https://www.memory4less.com/images/products/img0922a/CSE-933T-R760B-lg.jpg


Any feedback or questions are welcome!
 

Linux Lobbyist · 3,743 Posts
@OP

Nice build. :D What are your CPU temps like when under load? I ask because while that cooler was intended for normal PC towers, rack servers are designed for the airflow to travel front to rear and sit horizontally. :) I'm wondering if a tower cooler might be more effective, but if temps are good there's no need to worry. :)
 

Premium Member · 10,851 Posts · Discussion Starter #3
Quote:
@OP

Nice build. :D What are your CPU temps like when under load? I ask because while that cooler was intended for normal PC towers, rack servers are designed for the airflow to travel front to rear and sit horizontally. :) I'm wondering if a tower cooler might be more effective, but if temps are good there's no need to worry. :)
A tower cooler would indeed be more effective; however, the limited height of the 3U case makes it hard to fit a good one with the lid closed. Attached is a pic of my temps over the last 24 hours. I haven't seen it go higher than the ~45C in the screenshot, and that was probably close to 100% load, but only in a short burst (a lot of Plex transcodes at the same time or whatever).
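If you'd rather spot-check from the shell than look at the netdata graphs, FreeBSD can expose the CPU temperature through sysctl once the amdtemp driver is loaded; a rough sketch (the exact sysctl names can vary by platform):

Code:
# Load the AMD temperature sensor driver if it isn't loaded already.
kldload amdtemp
# Read the temperature reported for CPU 0 (one sysctl per logical CPU).
sysctl dev.cpu.0.temperature
# Or just dump everything temperature-related:
sysctl -a | grep temperature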
 

Attachments: screenshot of CPU temps over the last 24 hours

Linux Lobbyist · 3,743 Posts
@OP

45C - that cooler is certainly performing. :D On that note, as much as Plex transcodes in real-time, does it cache its transcodes so that it doesn't have to process the same stream repeatedly?
 

Premium Member · 10,851 Posts · Discussion Starter #5
Quote:
@OP

45C - that cooler is certainly performing. :D On that note, as much as Plex transcodes in real-time, does it cache its transcodes so that it doesn't have to process the same stream repeatedly?
It does do some caching, but it's very aggressive about cleaning up. In /usr/local/plexdata/Plex Media Server/Cache/Transcode/Sessions there's a folder for each active stream with a bunch of files like chunk-stream0-00001.m4s, chunk-stream0-00002.m4s, etc., but they are automatically deleted shortly after the user ends the stream. So essentially everything is transcoded on the fly. Most of my media is 720p HEVC for optimal file sizes, and surprisingly some clients, like an Xbox One, can direct play it without transcoding. I also get a lot of direct plays on older TV shows or cartoons that I could only find in 480p, so those don't tax the CPU too much. I had my peak last night: 6 streams in total, with 2 direct plays and 4 transcodes. The CPU briefly hit 70%, but other than that it stayed a lot lower. I'm impressed by the 1700; my quad-core Xeon would have been pegged to the max. I noticed no slowdown in my VMs, jails, or SMB/NFS shares during that time either.
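If you want to see it for yourself, you can just poke at that directory from inside the Plex jail while a stream is running; something like this (the session folder names are random IDs):

Code:
# Run inside the Plex jail while a transcode is active.
cd "/usr/local/plexdata/Plex Media Server/Cache/Transcode/Sessions"
ls -lh                        # one folder per active transcode session
du -sh ./*                    # rough size of each session's cached chunks
ls ./*/chunk-stream0-*.m4s    # the segment files Plex serves and then deletes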
 

Expand Always in Always · 4,595 Posts
Looks and sounds like an excellent build. I was thinking along the same lines once the new Ryzen 3 CPU's come out.
Transferring over some of my sig rig components and I'd be good. Seeing your setup convinces me to go this route.
Except I'll probably go the Linux or Win Server 19 route and use Emby for my media needs.

I have also been on the lookout for the Asrock Rack x470D4U2/2L2T line of server boards.
Would love to get the 2L2T version for onboard 10Gb but just may go with an add-on card depending on prices.
Add a gtx1060,1660 or use my RX 470 for transcoding and that's that.
I'm also planning to add some VM's just for pen-testing too. Anyway thanks for your insight.

EDIT: I'd also like to ask what are your idle and load watts usage. If you happen to know/care.
 

Premium Member · 10,851 Posts · Discussion Starter #7
Quote:
Looks and sounds like an excellent build. I was thinking along the same lines once the new Ryzen 3 CPU's come out.
Transferring over some of my sig rig components and I'd be good. Seeing your setup convinces me to go this route.
Except I'll probably go the Linux or Win Server 19 route and use Emby for my media needs.

I have also been on the lookout for the Asrock Rack x470D4U2/2L2T line of server boards.
Would love to get the 2L2T version for onboard 10Gb but just may go with an add-on card depending on prices.
Add a gtx1060,1660 or use my RX 470 for transcoding and that's that.
I'm also planning to add some VM's just for pen-testing too. Anyway thanks for your insight.

EDIT: I'd also like to ask what are your idle and load watts usage. If you happen to know/care.
I would recommend checking out Proxmox. It's based on Debian Linux and is an awesome hypervisor; it would be great for running Emby plus a bunch of VMs, etc. I wouldn't recommend Windows Server for media/VM duty, as with any Linux/Unix there is less overhead, more stability, and better licensing (usually free, and you can view and/or modify the source). Avoid unRAID. I'd encourage you to explore the benefits of the ZFS filesystem (technically OpenZFS, but everybody usually just calls it ZFS) and try to work it into your storage plans. Proxmox supports it, and this guide might give you some good information: https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures/117375 There's also a great video presentation about ZFS that's worth watching, even though it's from 2008.
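Getting ZFS going under Proxmox is only a couple of commands (or a few clicks in the GUI). A rough sketch with made-up disk, pool, and storage names, just to show the idea:

Code:
# Hypothetical example: mirrored pool on two spare disks, lz4 compression,
# then register it with Proxmox as VM/container storage.
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
zfs set compression=lz4 tank
pvesm add zfspool tank-storage --pool tank --content images,rootdir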

The Asrock Rack board looks fantastic! Having dedicated IPMI is a huge plus. I was even thinking of buying a Lantronix Spider so I could have something similar for this build, but ultimately it's a little too expensive. I run the server headless and dread the day I have to lug a monitor upstairs to do any troubleshooting; with IPMI that's 100% mitigated. Also, with the Asrock Rack I wouldn't have to have a 5450 just plugged in doing nothing. My X370 Taichi probably only had about 6 months of power-on time with a very moderate overclock, so fingers crossed, I expect it to last a REALLY long time with good airflow and no overclocking going on.

Unfortunately I don't know the power consumption and don't have a reliable way to test it. Maybe one of these days I'll get a Kill-A-Watt, but honestly it's a very low priority for me. Sorry I can't comment on that.
 

Expand Always in Always · 4,595 Posts
Hmmm, I'll have a look at Proxmox, sounds interesting; I was entertaining XCP-ng also.
I've looked at/run Unraid but it just didn't seem that impressive to me.
Especially the $$ part, when there are many viable/better free alternatives out there.
I agree with you that FreeNas is excellent overall, used it in the past.
At this time it doesn't fit my needs/wants specifically with storage.

Every time I want to increase my storage I don't want to have to buy x4 or x5 hdd's.
Whether going to bigger drives or adding another pool.
I just want to increase as I see fit. If I was running 50TB's to 100TB's then sure.
I'm still under 13TB's at the moment. And that's with a pool size of 20TB's
That's the part of ZFS I don't like, adds unwanted expenses IMO.

I'm thinking in the next couple of years or so I'll be moving to SSD's for bulk storage.
I've been using Win 16/19 with Stablebit drive pooling for a few years now, it's been solid so...

But yeah, I was impressed with the Asrock line-up of boards.
Not only IPMI but dual M.2 and optionally 10Gb plus support for 7nm on top.
So in a few years, I could throw in a 16c/32t CPU for relatively cheap.

No biggie about the power consumption, I have a Kill-A-Watt here.
It's just this server will be running 24/7 and don't want a power hog.
My current setup uses under 65w at load. I'll be hitting you up now & then for info.
Sorry so long winded. :)
 

Premium Member · 10,851 Posts · Discussion Starter #9
Quote:
Every time I want to increase my storage I don't want to have to buy x4 or x5 hdd's.
Whether going to bigger drives or adding another pool.
I just want to increase as I see fit. If I was running 50TB's to 100TB's then sure.
I'm still under 13TB's at the moment. And that's with a pool size of 20TB's
That's the part of ZFS I don't like, adds unwanted expenses IMO.
Yeah, that's the biggest flaw of ZFS, and I totally understand where you're coming from. RAIDZ expansion will be coming out at some point; there is actually an alpha out right now: https://github.com/zfsonlinux/zfs/pull/8853 but it's basically for testing only and I would never use it on actual data. Hopefully it will become stable and get integrated into all the OSes that support ZFS. As much as I'd like to think that will be "soon", realistically it's still probably a couple of years away.
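In the meantime, the only real ways to grow a pool are the ones you described; for the record they look roughly like this (hypothetical pool and disk names):

Code:
# Option 1: add another whole raidz vdev to the pool (needs a new batch of disks).
zpool add tank raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

# Option 2: replace every disk with a bigger one, one at a time, letting each
# resilver finish; the pool grows once the last disk is swapped.
zpool set autoexpand=on tank
zpool replace tank /dev/sdb /dev/sdl
zpool status tank   # wait for the resilver to finish before the next replace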
 

Linux Lobbyist · 3,743 Posts
@Dopamin3

I've just completed a mITX converged server build myself, running Proxmox VE. Proxmox has its (well documented) faults but overall it's really quite nice. :D
 

Premium Member · 10,851 Posts · Discussion Starter #11
Quote:
@Dopamin3

I've just completed a mITX converged server build myself, running Proxmox VE. Proxmox has its (well documented) faults but overall it's really quite nice. :D
Nice! Did you do a build log or any thread about it? Would love to check it out.
 

Linux Lobbyist · 3,743 Posts
Quote:
Nice! Did you do a build log or any thread about it? Would love to check it out.
Nah, it was built in kind of a hurry so I didn't log any of it. Specs:

Gigabyte H370N WiFi
Intel i5 8400
Corsair Vengeance LPX 16GB
(2) Samsung SM961 128GB NVMe in MD RAID 1
(5) Western Digital Red 8TB in MD RAID 6
LSI 9211-8i
Norco ITX-S8
Seasonic SS-350M1U

The host runs Proxmox VE with the LSI controller passed through to the NAS/iSCSI VM. I haven't built out the rest of the VMs or containers yet; part of the reason for choosing the i5 was that at least one of the VMs will be a Jenkins build server; if AMD did an 8C/16T Ryzen APU I'd have picked that instead. :)
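For the curious, the passthrough and the arrays aren't anything exotic; roughly like this (not my literal commands, and the PCI address, VM ID, and device names are placeholders):

Code:
# On the Proxmox host (IOMMU enabled): find the LSI HBA and pass it to the NAS VM.
lspci | grep -i LSI
qm set 100 -hostpci0 01:00.0

# The md arrays are plain mdadm, e.g. a RAID 6 across the five Reds:
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]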
 

Premium Member · 10,851 Posts · Discussion Starter #13
Quote:
Nah, it was built in kind of a hurry so I didn't log any of it. Specs:

Gigabyte H370N WiFi
Intel i5 8400
Corsair Vengeance LPX 16GB
(2) Samsung SM961 128GB NVMe in MD RAID 1
(5) Western Digital Red 8TB in MD RAID 6
LSI 9211-8i
Norco ITX-S8
Seasonic SS-350M1U

The host runs Proxmox VE with the LSI controller passed through to the NAS/iSCSI VM. I haven't built out the rest of the VMs or containers yet; part of the reason for choosing the i5 was that at least one of the VMs will be a Jenkins build server; if AMD did an 8C/16T Ryzen APU I'd have picked that instead. :)
Great build! I really like that motherboard with dual LAN and dual NVME when it's only mini ITX. Also that case with the 8 hot swappable bays! Good stuff.
 

Linux Lobbyist · 3,743 Posts
Quote:
Great build! I really like that motherboard with dual LAN and dual NVME when it's only mini ITX. Also that case with the 8 hot swappable bays! Good stuff.
Thanks! :D The dual-LAN was a hard requirement for me; the dual NVMe was a nice bonus though. :D The case is nicely laid out but the build quality leaves a lot to be desired, unfortunately. Still, 8 3.5" drives in a mITX form factor is a beautiful thing - if the likes of Fractal Design or IcyDock came out with a similar case, I'd grab it in a heartbeat. :)
 