
Help me build HTPC/NAS combo

Hey guys, I've had a Win 2008R2-based NAS for years with a PERC H700 running hardware RAID, an i3-6100T, and an NVMe boot SSD. It worked well, but recently I hooked the thing up to a 4K TV and tried to use it as an HTPC/light gaming machine, and quickly ran into the iGPU's limitations - mainly the maximum supported 4K output refresh rate.

Ideally, I wanted to build a VM farm this time with the following VMs:
1. Dedicated NAS VM running FreeNAS; maybe even ditch the PERC H700 and go with software RAID via ZFS. Pass the storage drives through to this guy.
2. Dedicated Win10 machine for HTPC/light gaming. Pass the GPU through to this guy.
? 3. Ubuntu machine for work and light coding. This could potentially be rolled into #1, as all I need is terminal access via PuTTY.
? 4. Maybe a pfSense VM in the future, though this is definitely not a priority for now.

I bought a Ryzen 3400G (4c/8t, Vega 11 iGPU), a B450 mobo, and 16GB of RAM, and quickly hit a major hurdle: it looks like it's not possible to pass through the iGPU; you need a dGPU for that. I believe the Vega 11 in the 3400G is plenty for me, and I don't want a dedicated GPU because it would draw extra power in a machine that runs 24/7 and idles 98% of the time.

Now I'm thinking of running Win10 Pro as the base OS (#2), then enabling Hyper-V and running #1 and #3 as VMs.
-I would get 100% of the GPU performance in Win10, which is where I need it.
-I'd sacrifice 2-4 threads for the VMs. The remaining 4 threads would be plenty for Win10 (even the current 2c/4t i3-6100T is fine for my CPU load; I'm GPU bound).
-I don't care too much if I have to reboot the whole thing now and then; this is not a mission-critical NAS.

Do you think Win10+Hyper-V NAS VM is possible, or am I missing something? Appreciate any input!

· Registered · 275 Posts
I did the same as you, but I knew the onboard Vega can't be passed through, so I put in a 1070 passed through to the HTPC VM, along with a 3600 CPU. The card doesn't consume anything at idle, and you can even shut down the HTPC VM when you don't need it, like I do.
I'm on ESXi with these VMs:
1. Untangle firewall
2. Pi-hole
3. Server 2019 as the NAS. It has all the disks and an HPE P440 4GB FBWC RAID controller, and shares two Storage Spaces volumes back to ESXi over NFS: a VM volume and a data volume. Deduplication is enabled on the data volume, and the Storage Spaces pool has tiering as well.
4. 3x Win10 VMs for the family, who all use NUCs running XP as terminals just for Remote Desktop.
5. VCSA VM (OK, I don't really need that)
6. Some extra VMs that I play with, like Debian, vRealize, etc.

It's rock solid and damn fast. Power consumption is 100W at idle, mainly from the SAS disks.

The machine is in my signature. I just took out the 1070 and replaced it with a GT 710 PCIe x1 card, since I wanted to try an ASUS Hyper M.2 NVMe adapter with 4x 256GB NVMe drives in RAID 0 as cache for the storage pool. That card won't run at x16 if anything is in PCIe slot 2 (an X570 limitation), so it needs to be alone to see all four NVMe disks. I haven't decided yet whether the hassle is worth it for my needs.
 

· Registered · 768 Posts · Discussion Starter · #3
I'm leaning towards Win10 + Hyper-V with a FreeNAS VM. It looks like I'm able to pass HDDs directly to the VM, and I already have a 2nd Intel PRO NIC installed that I can pass to FreeNAS. I'll run some file transfers before (with the current setup) and after to compare.
Ultimately I'm limited by the 1Gb NIC speed (still passing on 10Gb... maybe I'll jump on the 2.5G bandwagon once it gets more adoption), so as long as I can still saturate that, I'll call it good enough.
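As a back-of-the-envelope for what "saturating" that link means, here's a quick Python sketch (the protocol overhead factor is just my rough estimate):

```python
# Rough throughput ceiling for a 1 Gb/s link; the overhead factor is a ballpark guess.
link_gbps = 1.0
raw_mb_s = link_gbps * 1000 / 8          # 125 MB/s on the wire
overhead = 0.07                          # approximate Ethernet/TCP/SMB overhead
usable_mb_s = raw_mb_s * (1 - overhead)  # ~116 MB/s, roughly what a good file copy shows
print(f"raw: {raw_mb_s:.0f} MB/s, usable: ~{usable_mb_s:.0f} MB/s")
```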

Just about any data I can find on dGPUs says they draw ~10W at idle. I know it's not a lot, but I really try to make every watt count.
To put things in perspective, 10W x 24hr x 365d x $0.25/kWh = $21.90/year. For a system I don't expect to touch for 3-5 years, that adds up to roughly $100. On top of that, I have to manage the heat in my media closet where the server lives, so every BTU of heat spared there is a plus, too.
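For anyone who wants to plug in their own numbers, here's the same math as a small Python snippet (the wattage, electricity rate, and lifespan are just my estimates):

```python
# Idle-power cost estimate; all inputs are rough assumptions, not measurements.
idle_watts = 10          # typical dGPU idle draw cited above
price_per_kwh = 0.25     # $/kWh
years = 5                # how long the box sits untouched

kwh_per_year = idle_watts * 24 * 365 / 1000   # 87.6 kWh/year
cost_per_year = kwh_per_year * price_per_kwh  # $21.90/year
print(f"${cost_per_year:.2f}/year, about ${cost_per_year * years:.0f} over {years} years")
```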
 

· Registered · 3,464 Posts
FreeNAS really wants to be on bare metal, not in a VM. It can be done, but it can have issues. This mainly has to do with ZFS needing direct drive access; going through a hypervisor tends to muck with that.

My setup right now is FreeNAS on a dedicated box. I then set up an XCP-ng server to run my VMs, with iSCSI targets on FreeNAS for some of them, like the Debian VM that runs Plex. I have a couple of other Linux VMs and a Win10 one that hosts some game servers. I just cloned the existing OS over to the VM so I did not have to remake everything in Linux.

I do not know how well GPU passthrough works in XCP-ng. I know some people use it, but I have no need for that function. I would say the main takeaway from all of this is not to run FreeNAS in a VM. If all you need is storage on the NAS, you can get away with a really basic CPU, a Celeron or Pentium for example. You can do the machine for super cheap if you want, besides the drives.

Just a thought. Good luck with whatever you do.
 

· Registered · 768 Posts · Discussion Starter · #5
Thanks for the input. I'll get a few spare drives and test it all out without wiping my existing PERC RAID1 drives.

If it really doesn't work, I can always go with:
1. Forget FreeNAS and stick with the PERC H700 RAID1. Though I really wanted to use this HW change as a chance to upgrade to ZFS (I believe that to be an upgrade, though I'm still researching).
2. Forgo the Vega 11 iGPU, get a Radeon RX 460, and go the typical hypervisor route.
 

· Registered · 3,464 Posts
ZFS is arguably the best RAID solution out there right now, as long as you build the server right and design the array correctly. FreeNAS handles most of the fine-grained stuff automatically and does a good job when building a new array.

Another reason for a dedicated NAS box if you're running anything with ZFS is the large memory requirement. In a nutshell, you need about 1GB of RAM for every TB of storage; that covers ZFS's metadata/hash/parity bookkeeping, and any RAM beyond that is used by ZFS as ARC cache. After some usage you will see the ARC size go up and up until it eventually settles around 97% usage, at least on FreeNAS (I have not played with it on Linux yet). iXsystems, the company behind FreeNAS/TrueNAS, generally discourages running it in a VM because of this and because ZFS wants/needs direct drive access to function correctly. The only exceptions I am aware of are large arrays in a bigger enterprise environment, where you have a whole rack of servers and drive shelves to work with. But that is way outside the scope of your project.
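If you want to play with that rule of thumb, here's a quick sketch (the 8GB floor reflects FreeNAS's usual minimum recommendation; treat all of it as ballpark guidance, not a hard spec):

```python
# Rough RAM sizing per the ~1GB-per-TB rule of thumb mentioned above.
# The 8GB floor is FreeNAS's commonly cited minimum; numbers are ballpark only.
def suggested_ram_gb(raw_storage_tb: float) -> float:
    return max(8.0, raw_storage_tb * 1.0)

for tb in (4, 16, 48):
    print(f"{tb} TB of storage -> ~{suggested_ram_gb(tb):.0f} GB RAM minimum")
```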

If you can find a hypervisor that allows direct drive access, where you can assign a pile of drives that only talk to the FreeNAS VM, that may be your ticket. But I have no clue which ones can do that cleanly. I will be curious to see what you come up with and how well it works.
 

· Registered · 768 Posts · Discussion Starter · #7
I actually only have 4TB of NAS storage, believe it or not - I don't store movies, so it's only files, music, some software, and backups. I could easily increase the capacity, I just have no need to - still 1TB free after 8 years. So 1GB of RAM per TB is not an issue at all. I've got a single 16GB stick of RAM, and the MB can support 4x16GB.

I think I'll go with 6GB for FreeNAS, 2GB automatically gets allocated to the iGPU, and I'm left with 8GB for the Win10 HTPC/light gaming rig.

Hyper-V does have HDD pass-through (here: https://thesolving.com/virtualization/how-to-configure-a-pass-through-disk-with-hyper-v/), and that's what I'm planning to go with. I'll play around with it in the next week or two as time allows and let you know how it works out.
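For my own notes, the linked guide basically boils down to two PowerShell steps: take the disk offline on the host, then attach the physical disk to the VM. Here's a rough Python wrapper around those cmdlets as a sketch - the VM name and disk numbers are placeholders, not my actual setup:

```python
# Sketch of Hyper-V physical-disk pass-through, driven from Python via PowerShell.
# Run on the Hyper-V host as Administrator. "FreeNAS" and the disk numbers below
# are placeholder examples.
import subprocess

VM_NAME = "FreeNAS"
DISK_NUMBERS = [1, 2]  # physical disk numbers as reported by Get-Disk

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List physical disks so you can pick the ones to pass through.
print(ps("Get-Disk | Format-Table Number, FriendlyName, Size, IsOffline"))

for n in DISK_NUMBERS:
    # The disk has to be offline on the host before Hyper-V will accept it.
    ps(f"Set-Disk -Number {n} -IsOffline $true")
    # Attach the raw physical disk to the VM's SCSI controller.
    ps(f"Add-VMHardDiskDrive -VMName {VM_NAME} -ControllerType SCSI -DiskNumber {n}")
```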
 

· Registered · 3,464 Posts
Good luck. As an FYI, if you add a VDEV to an existing pool in ZFS, it will not rebalance itself; it will instead start filling the new VDEV. If you copy all the data off the NAS and then back, it will split the data between the two VDEVs, giving some performance gain. For best performance, it is best to get all the drives you need in advance. So, maybe upgrade now if you think the 1TB you have left will be exhausted relatively soon, or hold off and buy another batch of matching drives later and add them. Just keep the limitations in mind.
 

· Registered · 768 Posts · Discussion Starter · #9
Alright, a bit of an update.

After further research, I'm much more comfortable with ZFS overall. The expansion limitation is not a big deal for me, but data integrity, overall adoption, and maturity are. I made a backup of my PERC array (and a redundant backup of the important data) and started decommissioning the current setup.

In the process, I decided to measure the power consumption for the first time. I was very surprised by the findings.

The current base system is an ASRock Z270M Pro4, Intel i3-6300T, 8GB DDR4, PNY NVMe 480GB system drive, PERC H700 w/ external fan, 2x WD Red 4TB in RAID1, DVD drive, 2x 80mm + 1x 60mm case fans, CX430 PSU, and an extra Intel PRO1000 NIC.
All measurements below are at CPU idle, after waiting a minute or two for everything to settle.
Power OFF: 0.8W
Power On, LCD On: ~45W

Here's the rough breakdown, in W. I pulled each component individually and measured the delta. This is approximate; I'm not going for scientific accuracy:
Base System (CPU, RAM, NVMe, LCD Off) 16.7
H700 w/ Fan 13.7
2xWD RED 4TB 3.2
Intel PRO1000 eNIC 2.1
1x60mm (med speed) 1.2
2x80mm (low speed) 1.5
DVD Drive 3.1
LCD On 0.9
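A quick sanity check that the component deltas roughly add up to the ~45W measured at the wall (the remainder being PSU losses and measurement slop):

```python
# Sum the per-component deltas from the breakdown above and compare to the wall reading.
breakdown_w = {
    "Base System (CPU, RAM, NVMe)": 16.7,
    "H700 w/ Fan": 13.7,
    "2x WD RED 4TB": 3.2,
    "Intel PRO1000 eNIC": 2.1,
    "1x 60mm fan": 1.2,
    "2x 80mm fans": 1.5,
    "DVD Drive": 3.1,
    "LCD On": 0.9,
}
print(f"Sum of deltas: {sum(breakdown_w.values()):.1f} W vs ~45 W measured")  # ~42.4 W
```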

Takeaways:
Definitely ditching the DVD drive. I threw it in there in case I ever needed it, which was 2 times in 8 years.
Was surprised how little the HDDs themselves use; same for the eNIC.
The H700 is for sure coming out - I thought this was a 5W card, not a 14W hog!
LCD off (screensaver) saves about 1W. Who knew!

Coming up: power consumption of the AMD setup, network speed tests, HDD array speed tests.
Added to the plans for the future: 2.5Gbps LAN - surprisingly affordable, and it works over CAT5E. Will add a PCIe card ($30+) in the NAS, and a 2.5G USB-C/3 one (~$35) in my main desktop. Connected directly for now to avoid the $170+ on a new compatible switch.
 

· Registered · 3,464 Posts
Looking good so far. I know my system pulled around 50 watts when it only had three HDDs in it; it has six now, and I have not looked at it since. It has a Haswell quad-core Xeon in it, not sure of the exact model offhand, so it's not far off your old one. I have no DVD or video cards in it, though. The server mobo has a basic video out for console needs, but I usually do everything via the FreeNAS web UI or SSH into it. I mainly only use SSH for working on jails.

I will be interested to see your numbers from the AMD setup.
 

· Registered · 768 Posts · Discussion Starter · #11
Alright, the system has been moved, and the whole thing is a great success if you ask me. Quick recap of the new setup:

Gigabyte B450 Aorus Pro WiFi (w/ Intel iNIC)
AMD Ryzen 5 3400G CPU
1x 16GB DDR4-3200 RAM
PNY 480GB NVMe SSD
2xWD Red 4TB HDD in Mirror
Intel Pro1000 eNIC
ARK IPC-3U380PS Case w/ Corsair CX430 PSU

The OS is Win10 Pro with Hyper-V installed. A FreeNAS VM is set up in Hyper-V, with the following hardware passed to FreeNAS:
2x WD Red drives in a ZFS mirror (passed as physical drives)
Intel PRO1000 eNIC (set up as an external adapter, so FreeNAS has a direct connection to the network switch and its own IP address)
FreeNAS did not complain about anything at all, and so far this week the system has been working beautifully, and noticeably faster.

Power usage wise, the AMD setup is about 4W more power hungry with the same base system. The 16GB stick of 1.35V RAM does not use any appreciable amount of extra power vs the 1.2V 8GB stick, maybe 0.2W. But since I ditched the H700 and the DVD drive, my overall system consumption at idle is down about 10W and is now at the ~40W level (I think I measured the baseline wrong with the Intel setup).

Network transfer speeds via Intel Pro1000 1Gb NIC:
Old PERC setup: [screenshot attached]

ZFS: [screenshot attached]

Clearly, the 1Gb NIC is the bottleneck as far as sequential speeds go. I did a few more tests.
PERC H700 directly on the system (not via LAN): [screenshot attached]

ZFS setup via a 10G virtual NIC (since CrystalDiskMark is not available on Unix): [screenshot attached]
ZFS is obviously caching the bananas out of the drives. Maybe running a larger sample size would bring it down to more real-world write performance, but 1GB is about the middle ground of the file sizes I work with, so I felt it was a good representation.

And, of course, things are working as expected on the Win10 side, too - excellent graphics performance. I'll keep this running for a while and maybe post an update down the road.
 


· Meep · 5,578 Posts
"Connected directly for now to avoid the $170+ on a new compatible switch."
Yeah, I looked into it as well not too long ago; you're essentially forced to buy a 10GBase-T switch, and those are stuck in enterprise lalaland in terms of pricing.

That makes those cheap 2.5GBase-T NICs kinda pointless if you have more than 2 devices that require higher bandwidth.
 

· Registered · 3,464 Posts
Looking good. ZFS is always caching; that is part of how it keeps performance up. Running multiple VDEVs in a single pool helps too: ZFS stripes the data between the VDEVs, and the more you have, the more it stripes. You can get some pretty high performance from spinning rust if the network can keep up and you have enough drives. It is cool stuff.
 

· Registered · 768 Posts · Discussion Starter · #14
10-month follow-up!
The system has been working almost flawlessly. FreeNAS and ZFS are awesome, not a hiccup. The PERC H700 was sold on eBay while it was still worth anything, and it's not missed at all.
Windows is being Windows, forcing updates and reboots, that sort of thing. I know I can turn that off, blah blah blah, but it's a never-ending battle with MSFT: they keep re-enabling their crap in future updates, and I just don't have the time to stay on top of it. So I relented and accepted it as the digital cost of living in today's world.
I never set up the guest shutdown integration, so Windows is not able to shut down FreeNAS cleanly. Hence, Windows is essentially pulling the plug on FreeNAS with every reboot. Still, FreeNAS comes back online every time (I have it set to autostart in Hyper-V) and merely writes a log entry to complain about it. I know it's not perfect, but life has priorities and fixing this is not one of them (setting up backups is a lot more urgent, but that's not done yet either).

I am able to use the system as an HTPC otherwise, as planned, with FreeNAS running in Hyper-V in the background. I also made a few more VMs for my purposes (a Windows box and CentOS boxes for dev work). No problems. I upgraded to 32GB of RAM, though that probably wasn't necessary. With 16GB, I had 8GB for FreeNAS, 2GB for iGPU RAM, and was left with 6GB for the HTPC. It was fine, but now it's extra fine with 32GB of RAM.

I do plan to eventually upgrade the CPU to something like a Ryzen 4700GE/4750GE (8c/16t, faster graphics, 35W TDP), and I'm glad AMD has my back by releasing such a CPU, as I was hoping they would when I built this system. The 3400G is running like a champ for now, though a little hot with the stock cooler - it hits 75C under gaming, so an aftermarket HSF is in order. Part of the problem is the case, as it only has 2x 60mm exhaust fans on the back.
 