

Registered · 140 Posts · Discussion Starter #1
I have a Dell PowerEdge T620 server (dual E5-2640 CPUs, 48GB of RAM) running 2 x 256GB SSDs in RAID 1, which currently holds the Hyper-V Server 2016 OS and six VMs. One of those VMs is a Plex server. I also have 6 x 2TB HDDs in RAID 10 for file and media storage. What is the best method to present the RAID 10 array to my Plex server?

Should I just create an SMB share via PowerShell on the Hyper-V host and map the drive on my Plex server, or should I create a fixed VHD set (using all 6TB) and connect it to my VMs? Plex is running on Server 2016, by the way.
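
For context, here's roughly what I mean by each option in PowerShell (the drive letter E:, share name, account, and VM name are placeholders). From what I've read, a .vhds VHD Set is really meant for guest clusters, so for a single VM a plain fixed VHDX would probably do:

Code:
# Option 1: share the RAID 10 volume straight from the Hyper-V host
New-SmbShare -Name "Media" -Path "E:\Media" -FullAccess "MYDOMAIN\plex-svc"

# Option 2: carve the array into a fixed VHDX and attach it to the Plex VM
New-VHD -Path "E:\VHDs\Media.vhdx" -SizeBytes 5TB -Fixed
Add-VMHardDiskDrive -VMName "PLEX01" -ControllerType SCSI -Path "E:\VHDs\Media.vhdx"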

Also, what about ReFS vs NTFS for the RAID 10? I understand the significance of ReFS for VM storage compared to NTFS, but what about plain file/media storage in this scenario?
 

Top kek · 3,595 Posts
You can present it as Clustered Storage or just use it as DAS.

Also, NTFS is good enough.
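
If you do go NTFS on the media volume, a 64K allocation unit size is the usual choice for big sequential files. A minimal sketch, assuming the array enumerates as disk 1 (verify with Get-Disk first):

Code:
# Assumes the RAID 10 array shows up as disk 1 - check Get-Disk before running
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "Media"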
 

Registered · 140 Posts · Discussion Starter #3
Thanks - I wasn't sure whether there were any performance benefits to presenting it as an SMB share from the hypervisor OS versus a VHD set.
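
In case it helps anyone later: if I end up on the SMB route, mapping it inside the Plex VM is a one-liner (the UNC path below is a placeholder), though pointing the Plex libraries at the UNC path directly is often more reliable than a mapped letter when Plex runs as a service:

Code:
# Run inside the Plex VM; \\HYPERV01\Media is a placeholder UNC path
New-SmbMapping -LocalPath "M:" -RemotePath "\\HYPERV01\Media" -Persistent $true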
 

Top kek · 3,595 Posts
If everything is running on one host, there won't be a big difference, but if the storage is split across several hosts (i.e., it's not DAS), its speed will depend first on the connectivity and then on the type of implementation.

If you have a 1Gb connection between your hosts, it will be your primary bottleneck. Our E5-2660 v2 servers run Hyper-V (4 physical hosts) clustered together. They don't have any DAS; instead we have an EMC VNX5100, which has lots of storage divided into different pools and presented to different physical or virtual hosts (everything on one LUN). The disks are SAS 10k and 15k, if I remember correctly. Moving VMs between hosts takes about 1 minute per VM from a pool; it's limited by the 1Gb link, which gets saturated.

Now, if we look at the PACS server (a VM), it's bottlenecked by storage speed, because it holds tons of small files (CR and CT images and such). I had to move about 10TB worth of PACS images from one presented storage to another, which took a whole day or even more.
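
For a sense of scale: 1GbE moves roughly 117 MB/s of actual payload, so 10TB takes about a day even at full wire speed, and small files only make it worse. Quick back-of-envelope:

Code:
# Rough floor for moving 10TB over a saturated 1GbE link (~117 MB/s payload)
$seconds = 10TB / 117MB
"{0:N1} hours at wire speed" -f ($seconds / 3600)   # prints ~24.9 hours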
 

Registered · 140 Posts · Discussion Starter #5
You work in Healthcare Imaging Informatics too, then, I see! That's cool. I do as well. This is for my home lab. At work we have a 6-host ESXi cluster strictly for our PACS system, with 18 storage nodes attached to 4 SANs, each running 48 x 4TB drives. Basically we don't purge a ******* thing lol.

As for my home lab, yeah, it's DAS. 2 x 256GB SSDs in RAID 1 for the VMs and 6 x 2TB drives in RAID 10 for storage. It's mostly just pictures, music, movies, and TV shows, plus a **** ton of drivers and installers (OS ISOs) from side work I've done over the years.
 

Top kek · 3,595 Posts
Quote:
Originally Posted by CJston15

You work in Healthcare Imaging Informatics too, then, I see! That's cool. I do as well. This is for my home lab. At work we have a 6-host ESXi cluster strictly for our PACS system, with 18 storage nodes attached to 4 SANs, each running 48 x 4TB drives. Basically we don't purge a ******* thing lol.

As for my home lab, yeah, it's DAS. 2 x 256GB SSDs in RAID 1 for the VMs and 6 x 2TB drives in RAID 10 for storage. It's mostly just pictures, music, movies, and TV shows, plus a **** ton of drivers and installers (OS ISOs) from side work I've done over the years.
Actually, I'm a system administrator at a big hospital. We'll be moving to VMware soon with the new datacenter; it's still not completely decided. Anyway, if you have any questions, ask away and I'll try to answer them to the best of my ability.
 