

Registered · 1,106 Posts · Discussion Starter #1
Hey all,

I have a CaseLabs M8 that I am looking to make some upgrades to.

This case was purchased years ago with the intent of making this computer the "backbone" of all my work and entertainment.

The computer is used for work as well as gaming, but I would also like to add some hot-swap bays and start using it as a server.

For the server side (NAS for Plex, Steam cache, and work files), I was wanting to run a Linux VM, or maybe several.

Windows and my games would run on the "bare metal".

All that said, I am having a hard time finding much information online about doing such a build. I was wanting to build this on the 2700X, but I am thinking the extra cores of Threadripper might be needed so that I could dedicate cores to the different VMs.

Any help here would be appreciated.
 

Linux Lobbyist · 3,743 Posts
@OP

Your biggest issue will be memory and I/O, long before computational power. A 2700X will be plenty - pair it with a good NVMe SSD and at least 16GB of RAM and you'll be good. Linux server VMs can easily operate with 512MB of RAM (in many cases less). My own setup is my sig. rig, which uses VirtualBox for the VMs. I currently have the following setup active:

Host (desktop, Linux) 16GiB RAM
General VM (desktop, Linux) 2.5GiB
Security VM (desktop, Linux) 2GiB
Torrent VM (desktop, Linux) 1.5GiB
Usenet VM (desktop, Windows) 2.5GiB

Router VM (headless, pfSense) 512MB
Streaming VM (headless, Linux) 512MB

My Streaming VM uses Serviio but since it's only me using it 512MB is fine. :) The system is currently using ~12GiB RAM out of 16GiB total, so if I were to start my development VM, the system would start to swap. I can probably trim a bit from the other VMs, but we'll see...
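For anyone curious, a headless VM like the streaming one can be stood up entirely from the VirtualBox CLI, roughly like this (a sketch only; the VM name, OS type, NIC name, and sizes are placeholders, not my exact setup):

Code:
# create and register the VM
VBoxManage createvm --name streamvm --ostype Ubuntu_64 --register
# 512MB RAM, 1 vCPU, bridged networking on the host NIC (eth0 here is a placeholder)
VBoxManage modifyvm streamvm --memory 512 --cpus 1 --nic1 bridged --bridgeadapter1 eth0
# 10GB virtual disk attached to a SATA controller
VBoxManage createmedium disk --filename streamvm.vdi --size 10240
VBoxManage storagectl streamvm --name SATA --add sata
VBoxManage storageattach streamvm --storagectl SATA --port 0 --device 0 --type hdd --medium streamvm.vdi
# boot it with no GUI window
VBoxManage startvm streamvm --type headless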
 

Linux Lobbyist · 3,743 Posts
@OP

Basically I/O contention. You'll likely have multiple VMs running on the same storage. If that storage is a single hard drive you'll struggle, no matter how beefy the rest of the machine's resources are (I've been there). A RAID array can smooth things out a little, but you'll never beat an SSD. Don't get me wrong though, it would probably have been better phrased as, "you'll run into memory and I/O problems long before you run out of computational power".
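If you want to actually see the contention happening, watching per-disk utilisation while the VMs are busy makes it obvious (this assumes the sysstat package, which provides iostat, is installed):

Code:
# extended stats every 5 seconds; %util near 100 on one drive means that drive is the bottleneck
iostat -x 5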
 

Registered · 1,106 Posts · Discussion Starter #5
OK, well, my SSD died on me Friday, so I used this as an opportunity to get a couple of 4TB HDDs. My old 955BE supports virtualization, so I'm thinking about setting up an Ubuntu VM to start ripping my movies to. However, I'm not sure how VMs handle hardware upgrades when I move to the 2700X later this year.

Would I be better off with a small SATA SSD holding the OSes for the different VMs, or an NVMe drive?

I do plan to set up RAID of some sort for the server storage.
 

Linux Lobbyist · 3,743 Posts
@OP

SSDs come in SATA (AHCI) and NVMe (PCIe) flavors. NVMe is the newer interface and has the higher throughput and lower latency of the two, so yes, an NVMe SSD would be perfect for storing VMs. However, a SATA SSD is perfectly fine and likely cheaper (I run a 1TB 850 EVO). As for upgrading hardware, all of the popular hypervisors support Ryzen.
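Two quick checks if the machine is already booted into Linux (a sketch; your device names will differ): lsblk shows whether each drive is SATA or NVMe and whether it spins, and /proc/cpuinfo confirms the AMD-V (svm) support the 955BE and 2700X both have.

Code:
# TRAN column: sata vs nvme; ROTA: 1 = spinning disk, 0 = SSD
lsblk -d -o NAME,TRAN,ROTA,SIZE,MODEL
# non-zero output means the CPU exposes AMD-V (svm)
grep -c svm /proc/cpuinfo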
 

god loves ugly · 4,211 Posts
I run KVM in my home lab and recommend it highly. Any Fedora or Arch Linux documentation on KVM/libvirt/QEMU will be helpful for running your own Windows/Linux VMs. You can even do Windows gaming VMs with VFIO and get near-native performance by passing through a GPU (I measure only a ~300-point loss in the VM with 3DMark).

Something a little easier to use would be Proxmox; it provides easy setup and a decent web UI. I'm not sure whether things like VFIO passthrough are possible with it, however. I prefer setting up KVM myself for the extra control.
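As a rough idea of the effort involved, a Linux server guest under KVM/libvirt can be created with a single virt-install command (sketch only; the name, ISO path, sizes, os-variant, and the br0 bridge are placeholders/assumptions):

Code:
sudo virt-install \
  --name nasvm \
  --memory 2048 --vcpus 2 \
  --disk size=40 \
  --cdrom /isos/ubuntu-18.04-server.iso \
  --os-variant ubuntu18.04 \
  --network bridge=br0 \
  --graphics spice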
 

Tech Enthusiast · 12,388 Posts
I have all my VMs on a server rather than my workstation, but if it were me I'd personally opt to run ESXi, have your Windows box as a VM too, and just pass the GPU through to it.

Linux boxes are pretty light hardware-wise. Plex can vary depending on how many transcodes you need at any given time, and NAS requirements depend on the platform you go with; FreeNAS, while it doesn't NEED as much RAM as many claim, will use however much you feed it.
 

Registered · 1,106 Posts · Discussion Starter #9
Quoting the KVM/Proxmox post above:
Are there any additional advantages to running Windows as a VM instead of on the "bare metal"?

I agree that 300 points is not much and I have been doing some reading on Looking Glass as well.

But if I run it as a VM, is it less vulnerable to viruses? I currently run Bitdefender, but it would be nice not to have to deal with it anymore.

My SSD died on me Friday and I am supposed to be getting the new one today, so this is a perfect opportunity to get this system going in the next couple of days.
 

Registered · 1,106 Posts · Discussion Starter #10
Quoting the ESXi/FreeNAS post above:
Same question as above: are there any additional advantages to running Windows as a VM?

Also, I am concerned about hardware upgrades in the future while running VMs. I know what to expect when running Windows alone, but running VMs is a new area for me, and one that I am very interested in.
 

Tech Enthusiast · 12,388 Posts
One advantage of running Windows as a VM is being able to restart the Windows VM without taking the other VMs down.

Hardware upgrades for the most part will make zero difference; VMs are transferable to other machines just fine.

Viruses/malware/etc. are, for all intents and purposes, the same as if you were running the machine on its own on bare metal.
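As an example of how portable they are: with KVM/libvirt (which others here are using), moving a guest to a new box is basically copying its disk image plus the XML definition (a sketch only; the domain name and paths are made up, and it assumes the disk path in the XML is the same on both machines):

Code:
# on the old machine
virsh dumpxml win10 > win10.xml
scp win10.xml /var/lib/libvirt/images/win10.qcow2 newhost:/var/lib/libvirt/images/
# on the new machine
virsh define /var/lib/libvirt/images/win10.xml
virsh start win10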
 

god loves ugly · 4,211 Posts
Quoting the questions above:
It's just as vulnerable as a bare-metal installation, but it does allow you to sandbox more effectively. Use different VMs for different things, or use snapshotting to quickly back up/restore if you find out something doesn't work. Windows updates alone are enough for me to want a VM - if it randomly reboots or whatever, no big deal. I have a Linux VM I can use too :)

I just recently 'abandoned' my native Fedora 28 install (with my custom Rawhide kernel) and VFIO for a native Windows installation, mainly on a temporary comparative basis. Scores aren't much higher, but there is a better general 'feel'. However, I think that may be partly due to my focus on stabilizing my overclocks, which I know best how to do on native Windows (I need things like voltage and temperature sensors, which the Linux kernel doesn't quite provide).
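For the snapshot side, with libvirt it's one command per checkpoint (a sketch; the domain and snapshot names are placeholders, and internal snapshots like this want qcow2-backed disks):

Code:
# take a checkpoint before letting Windows Update loose
virsh snapshot-create-as win10 pre-update --description "before patch Tuesday"
virsh snapshot-list win10
# roll back if the update breaks something
virsh snapshot-revert win10 pre-update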
 

Registered · 1,106 Posts · Discussion Starter #13
Quoting the reply above:
Whenever I played with Linux in the past it was not on an overclock so that would be an interesting experiment.

That said, from what I have read and seen, it looks to me like most people are wasting their time trying to overclock Ryzen; you would be better off relying on XFR and getting your RAM speed and timings in check.

I think I am going to give ESXi a shot since it is so lightweight. I can't spend too much time on this, as I have to get my computer up and running so that I can get my work done. I work for myself at home when I am not in the field. Granted, most of my reports to clients are not due for a few months, but I hate waiting until deadlines are up.
 

Registered · 1,106 Posts · Discussion Starter #14
Anyone have any experience with unRAID?

I've been doing some research into it, and it looks like it might have some restrictions on what you can do with it, but it might also be a good free option.

I have Ubuntu on a bootable USB so I am thinking I am going to install that today and then work on getting the VMs up.
 

Registered · 13 Posts
I am currently running three separate unRAID servers in my home. First thing to note is that the OS is not free - it does cost money - but there is a 30-day free trial period. I am a fan of the OS, and it has done everything I have needed with the exception of a few minor things that aren't critical to have. VMs have worked well for me, and the Docker and storage systems work very well with little hassle. If you have a spare USB drive, I would highly suggest installing unRAID to it and giving it a try; you may just like it. Just my 2 cents.
 

Registered · 1,106 Posts · Discussion Starter #16
Just to provide an update here.

I got my new SSD (the old one failed on me last week) and 2x 4TB HDDs on Monday.

Today is my first chance to get working on this.

After doing more reading about unRAID, I found that it uses KVM for its VMs.

Since KVM development is heavily backed by Red Hat, I decided to also get Fedora 28.

So the goal for today is to get Fedora 28 installed on my computer and get the KDE desktop up and running. If time allows today, I am going to work on getting KVM running as well and install Windows.
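For the KVM part, the plan is roughly the standard Fedora setup (just a sketch of the steps from the docs, nothing I've run yet):

Code:
# install the KVM/libvirt/virt-manager stack as a package group
sudo dnf install @virtualization
# start libvirtd now and on every boot
sudo systemctl enable --now libvirtd
# let my user manage VMs without root (log out/in afterwards)
sudo usermod -aG libvirt $USER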

The only need I have for Windows is the research management software I use for my home business, and games.

My current plan is to run Fedora 28 as my main desktop environment when I don't need the Windows applications, and also have it working as my NAS. Question here though: would I be better off from a performance standpoint having the NAS separate from the desktop?

I also do Steam In-Home Streaming and am wondering if there are any special considerations I need to keep in mind.
 

god loves ugly · 4,211 Posts
Quoting the update above:
Fedora is my go-to distro for desktop use, and I'm Red Hat certified on the server front, so if you have any issues, I'd be happy to help!

If you intend to do VFIO and find you want to pass through things like a PCIe sound card, you may need an ACS-patched kernel. This is because a lot of boards only make a point of putting GPUs in their own IOMMU groups (which is what allows a device to be passed through to a VM). The ACS override patch allows you to break these groups down even further, letting you pass through more devices. I maintain a kernel with this patch for Fedora 27/28 and EL7 (CentOS/RHEL) - https://copr.fedorainfracloud.org/coprs/jlay/kernel-acspatch/
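To check how your board groups things before bothering with the patched kernel, the usual loop over /sys works (a sketch; it requires IOMMU enabled in the BIOS/kernel and pciutils for lspci):

Code:
#!/bin/bash
# list every IOMMU group and the devices in it
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done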

As long as your workloads are spread across disks, you'll probably be fine mixing desktop/NAS. Bandwidth may be a concern if your local network will be consuming content from the NAS a lot. Aside from disk I/O and network bandwidth, NAS is a fairly non-intense workload (CPU/memory wise you'll have a hard time making a dent unless you're using something like ZFS).

For the VMs, be sure you're using bridged networking - your Windows VM for example will need this for the streaming service to be accessible by anything else.
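If you don't have a bridge yet, NetworkManager can create one on Fedora along these lines (a sketch; 'enp3s0' stands in for your actual NIC name, and any existing connection profile on that NIC may need to be adjusted):

Code:
# create the bridge and enslave the physical NIC to it
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname enp3s0 master br0
nmcli con up br0
# then point the VM's NIC at br0 in virt-manager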

Edit: I just realized I haven't updated my kernel in COPR in a couple weeks - feel free to use it as is if you need it (it's still relatively recent, should be the 4.17.2 kernel). I've kicked off a new build and will have updated packages locally in around an hour. Once it completes I'll send the source RPM over to COPR and get their build systems going, that'll take ~8 hours to finish. If you want the latest kernel before that I can send it your way, or it'll show up in COPR available as an update once they finish building it.
 

Premium Member · 9,494 Posts
Quoting the ESXi suggestion above:

I agree with you. On my server (also my HTPC) I have a Windows 10 VM running in ESXi with the GPU passed through. Works like a charm.
 

Registered · 1,106 Posts · Discussion Starter #19
Quoting the reply above:
I am sure that I will take you up on that! I have toyed in the past with Ubuntu but hated the desktop. I have already made the switch to KDE and like it much better.

Here is where I am at:

Fedora 28 Workstation - installed and running
- Vivaldi browser - installed
- Steam - installed
- virt-manager - installed

I have also installed Windows 10 in a VM, but I seem to be having some issues with it. I got a couple of "something went wrong" screens but just clicked "skip"; we will see where that takes me. Right now I have a blue "just a moment" screen.

I am really enjoying how fast Fedora with KDE is, and I'm also enjoying learning the console commands, but I have a lot to learn.

Quick question for the NAS: should I just use Fedora 28 Workstation, or should I run Fedora Server as a VM?
 

Registered · 1,106 Posts · Discussion Starter #20
Hey all,

I got the Windows 10 VM working.

It took me a couple days because there was something wrong and I was getting some error codes with the initial install.

Not really sure why, as I never found a solution. What I ended up doing was deleting the VM, downloading a fresh Windows 10 ISO, and reinstalling. Now it works fine.

Next step will be getting PCI passthrough working but I need to get caught up on some work since I lost some of my reports that are due to clients when my SSD failed.
 