Help me build HTPC/NAS combo - Overclock.net - An Overclocking Community
post #1 of 8 (permalink) Old 02-12-2020, 10:51 PM - Thread Starter
New to Overclock.net
 
ymetushe
Join Date: Apr 2009
Location: The States
Posts: 629
Rep: 57 (Unique: 55)
Help me build HTPC/NAS combo

Hey guys, I've had a Win 2008 R2 based NAS for years with a PERC H700 running hardware RAID, an i3-6100T, and an NVMe boot SSD. It worked well, but recently I hooked the thing up to a 4K TV and tried to use it as an HTPC/light gaming machine, and quickly ran into the iGPU's limitations - mainly the maximum supported output resolution/refresh rate.

Ideally, I wanted to build a VM farm this time with the following VMs:
1. Dedicated NAS VM. FreeNAS, maybe even ditch the PERC H700 and go with software RAID via ZFS. Pass the storage drives through to this guy.
2. Dedicated Win10 machine for HTPC/light gaming. Pass the GPU through to this guy.
3. (Maybe) An Ubuntu machine for work and light coding. This could potentially be rolled into #1, as all I need is terminal access via PuTTY.
4. (Maybe) A pfSense VM down the road, though this is definitely not a priority for now.

I bought a Ryzen 3400G (4c/8t, Vega 11 iGPU), a B450 mobo, and 16GB of RAM, and quickly ran into a major hurdle: it looks like it's not possible to pass through the iGPU - you need a dGPU for that. I believe the Vega 11 in the 3400G is plenty for me, and I don't want a dedicated GPU because it would draw extra power in a machine that runs 24/7 and idles 98% of the time.

Now I'm thinking of running Win10 Pro as the base OS (#2), then enabling Hyper-V and running #1 and #3 as VMs.
- I would get 100% of the GPU performance in Win10, which is where I need it.
- I'd sacrifice 2-4 threads for the VMs. The remaining 4 threads would be plenty for Win10 (even the current 2c/4t i3-6100T is fine for my CPU load; I'm GPU bound).
- I don't care too much if I have to reboot the whole thing now and then; this is not a mission-critical NAS.

Do you think a Win10 + Hyper-V NAS VM setup is possible, or am I missing something? Appreciate any input!

post #2 of 8 (permalink) Old 02-12-2020, 11:45 PM
New to Overclock.net
 
bluechris
Join Date: Feb 2015
Location: Athens - Greece
Posts: 229
Rep: 2 (Unique: 2)
I did the same as you, but I knew the onboard Vega can't be passed through, so I put in a 1070 passed through to the HTPC VM, plus a 3600 CPU. The card doesn't consume anything at idle, and you can even shut down the HTPC VM when you don't need it, like I do.
I'm on ESXi with these VMs:
1. Untangle firewall
2. Pi-hole
3. Server 2019 as the NAS. It owns all the disks and an HPE P440 4GB FBWC RAID controller, and it shares two Storage Spaces volumes back to ESXi over NFS: a VM volume and a data volume. On the data volume I've also enabled deduplication, and the Storage Spaces pool has tiering as well.
4. 3x Win10 VMs for the family; everyone has a NUC running XP as a terminal, just for Remote Desktop.
5. VCSA VM (OK, I don't really need that)
6. Some extra VMs I play with, like Debian, vRealize, etc.

It's rock solid and damn fast. Power consumption is 100 W at idle, mainly from the SAS disks.

The machine is in my signature. I just took the 1070 out and replaced it with a GT 710 PCIe x1, since I wanted to try an ASUS Hyper M.2 NVMe card with 4x 256GB NVMe drives in RAID 0 as cache for the storage pool. If anything is in PCIe slot 2 that card doesn't get x16 (an X570 limitation), so it has to be alone for all 4 NVMe disks to show up. I haven't decided yet whether the hassle is worth it for my needs.

CPU: Ryzen 3600
Motherboard: Gigabyte Aorus Pro
GPU: Nvidia GT 710 PCIe x1
RAM: Trident Z DDR4-3200 CL14-14-14-34 1.35V 64GB (4x16GB)
Hard Drive: HGST Ultrastar 7K4000 4TB 7200 RPM 512n SAS 6Gb/s 3.5-inch 64MB enterprise HDD (HUS724040ALS640)
Hard Drive: Samsung 860 Evo 1TB
Hard Drive: ADATA XPG SX8200 Pro 256GB M.2 2280
Power Supply: Corsair HX750i 750W 80 Plus Platinum
Cooling: Noctua NH-D15 chromax.black
Cooling: Noctua NF-F12 PWM
Cooling: Noctua NF-A8 FLX 80mm
Case: Corsair Carbide Air 740
Operating System: VMware ESXi 6.7 Update 3
Other: Asus Hyper M.2 x16 Card V2
Other: HPE Smart Array P440/4GB FBWC 12Gb 1-port internal SAS controller
post #3 of 8 (permalink) Old 02-13-2020, 04:33 PM - Thread Starter
New to Overclock.net
 
ymetushe
Join Date: Apr 2009
Location: The States
Posts: 629
Rep: 57 (Unique: 55)
I'm leaning towards Win10 + Hyper-V with a FreeNAS VM. It looks like I can pass HDDs directly to the VM, and I already have a 2nd Intel PRO NIC installed that I can pass to FreeNAS. I'll run some file transfers before (with the current setup) and after to compare.
Ultimately I'm limited by 1Gb NIC speed (still passing on 10Gb... maybe I'll jump on the 2.5G bandwagon once it gets more adoption), so as long as I can still saturate that, I'll call it good enough.
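
A rough way to do that before/after comparison is just to time a big sequential copy to the share; a minimal sketch in Python (the UNC path and test size are placeholders for whatever the share actually is):

Code:
import os, time

share_file = r"\\NAS\share\throughput_test.bin"   # hypothetical path on the NAS share
size_mb = 4096                                    # write ~4 GiB so caches matter less
chunk = os.urandom(1024 * 1024)                   # 1 MiB of incompressible data

start = time.time()
with open(share_file, "wb") as f:
    for _ in range(size_mb):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.time() - start

print(f"{size_mb / elapsed:.0f} MB/s ({size_mb} MB in {elapsed:.1f} s)")
# a fully saturated 1GbE link tops out around 110-118 MB/s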

Just about any data I can find on dGPUs says they draw ~10 W at idle. I know it's not a lot, but I really try to make every watt count.
To put things in perspective, 10 W x 24 hr x 365 days at $0.25/kWh = $21.90/year. For a system I don't expect to touch for 3-5 years, that's easily $100. On top of that, I have to manage the heat in my media closet where the server lives, so every BTU of heat spared there is a plus, too.
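
Spelled out, same numbers as above:

Code:
idle_watts = 10
rate_per_kwh = 0.25                                    # $/kWh
kwh_per_year = idle_watts * 24 * 365 / 1000            # 87.6 kWh
print(f"${kwh_per_year * rate_per_kwh:.2f} per year")  # $21.90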

post #4 of 8 (permalink) Old 02-13-2020, 05:30 PM
New to Overclock.net
 
Prophet4NO1
Join Date: Feb 2014
Posts: 3,125
Rep: 162 (Unique: 119)
FreeNAS really wants to run on bare metal, not in a VM. It can be done, but it can have issues, mainly because ZFS needs direct drive access, and going through a hypervisor tends to muck with that.

My setup right now is FreeNAS on a dedicated box. I then set up an XCP-ng server to run my VMs, and I set up iSCSI targets on FreeNAS for some of them, like the Debian VM that runs Plex. I have a couple of other Linux VMs and a Win10 one that hosts some game servers; I just cloned the existing OS over to the VM so I didn't have to rebuild everything in Linux.

I don't know how well GPU passthrough works in XCP-ng. I know some people use it, but I have no need for that function. I'd say the main takeaway from all of this is not to run FreeNAS in a VM. If all you need from the NAS is storage, you can get away with a really basic CPU - a Celeron or Pentium, for example - and build the machine for super cheap, drives aside.

Just a thought. Good luck with whatever you do.
post #5 of 8 (permalink) Old 02-13-2020, 09:27 PM - Thread Starter
New to Overclock.net
 
ymetushe
Join Date: Apr 2009
Location: The States
Posts: 629
Rep: 57 (Unique: 55)
Thanks for the input. I'll get a few spare drives and test it all out without wiping my existing PERC RAID1 drives.

If it really doesn't work, I can always go with:
1. Forget FreeNAS and stick with the PERC H700 RAID 1, though I really wanted to use this HW change as a chance to move to ZFS (I believe that to be an upgrade, though I'm still researching).
2. Forgo the Vega 11 iGPU, get a Radeon RX 460, and go the typical hypervisor route.

post #6 of 8 (permalink) Old 02-15-2020, 09:33 AM
New to Overclock.net
 
Prophet4NO1
Join Date: Feb 2014
Posts: 3,125
Rep: 162 (Unique: 119)
ZFS is arguably the best RAID solution out there right now, as long as you build the server right and design the array correctly. FreeNAS does most of the fine-grained stuff automatically and does a good job when building a new array.

Another reason for a dedicated NAS box when running anything with ZFS is the large memory requirement. In a nutshell, you need about 1GB of RAM for every TB of storage; that covers ZFS's hash/parity metadata, and any RAM beyond that is used by ZFS as ARC cache. After some usage you'll see the ARC size climbing until it eventually settles around 97% usage - at least on FreeNAS; I haven't played with it on Linux yet. iXsystems, the company behind FreeNAS/TrueNAS, generally discourages running it in a VM because of this and because ZFS wants/needs direct drive access to function correctly. The only exceptions I'm aware of are in larger enterprise environments where you have a whole rack of servers and drive shelves to work with, but that's way outside the scope of your project.
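
As a quick sanity check of that rule of thumb (the 8 GB floor below is the commonly cited FreeNAS minimum, my assumption rather than something from this thread):

Code:
def zfs_ram_rule_of_thumb(pool_tb):
    # at least ~1 GB of RAM per TB of raw storage, with a commonly cited
    # 8 GB FreeNAS floor; anything beyond this ends up as ARC cache
    return max(8, pool_tb)  # GB

print(zfs_ram_rule_of_thumb(4))   # a 4 TB pool -> 8 GB is already comfortable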

If you can find a hypervisor that allows direct drive access, where you can assign a pile of drives that only talk to the FreeNAS VM, that may be your ticket. But I have no clue which ones can do that cleanly. I'll be curious to see what you come up with and how well it works.
post #7 of 8 (permalink) Old 02-18-2020, 12:33 PM - Thread Starter
New to Overclock.net
 
ymetushe
Join Date: Apr 2009
Location: The States
Posts: 629
Rep: 57 (Unique: 55)
I actually only have a 4TB NAS, believe it or not - I don't do movie storage, so it's only files, music, some software, and backups. I could easily increase the capacity, I just have no need to - there's still 1TB free after 8 years. So 1GB of RAM per TB is not an issue at all. I've got a single 16GB stick of RAM, and the motherboard can take 4x16GB.

I think I'll go with 6GB for FreeNAS; 2GB automatically gets allocated to the iGPU, and I'm left with 8GB for the Win10 HTPC/light gaming rig.

Hyper-V does have HDD pass-through (here: https://thesolving.com/virtualizatio...-with-hyper-v/), and that's what I'm planning to go with. I'll play around with it in the next week or two as time allows and let you know how it works out.
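
For reference, the whole procedure boils down to taking the data disks offline on the host and attaching them to the VM by disk number, which can be scripted. A rough sketch driving the standard Hyper-V cmdlets from Python - the VM name, disk numbers, and the 2-thread/6GB sizing are placeholders based on my plan above, not something I've tested yet:

Code:
# Take the data disks offline on the Win10 host and attach them to the
# Hyper-V FreeNAS VM as pass-through disks. Run from an elevated prompt,
# and double-check the disk numbers against Get-Disk so the boot drive stays online.
import subprocess

VM_NAME = "FreeNAS"     # hypothetical VM name
DATA_DISKS = [2, 3]     # physical disk numbers from Get-Disk (NOT the Win10/boot disk)

def ps(cmd):
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

# Sizing per the plan above: a couple of threads and 6GB of RAM for the NAS VM.
ps(f"Set-VMProcessor -VMName {VM_NAME} -Count 2")
ps(f"Set-VMMemory -VMName {VM_NAME} -StartupBytes 6GB")

for n in DATA_DISKS:
    # a physical disk must be offline in the host before Hyper-V will hand it to a VM
    ps(f"Set-Disk -Number {n} -IsOffline $true")
    ps(f"Add-VMHardDiskDrive -VMName {VM_NAME} -ControllerType SCSI -DiskNumber {n}")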


post #8 of 8 (permalink) Old 02-18-2020, 03:54 PM
New to Overclock.net
 
Prophet4NO1
Join Date: Feb 2014
Posts: 3,125
Rep: 162 (Unique: 119)
Good luck. As an FYI, if you add a vdev to an existing volume of drives in ZFS, it will not rebalance itself; it will instead start filling the new vdev. If you copy all the data off the NAS and then back, it will split the data between the two vdevs, giving some performance gain. For best performance, it's best to get all the drives you need in advance. So maybe upgrade now if you think the 1TB you have left will be exhausted relatively soon, or hold off and buy another batch of matching drives later and add them. Just keep the limitations in mind.