Post Your Server!!! - Page 428

post #4271 of 4324
Quote:
Originally Posted by Liranan View Post

Quote:
Originally Posted by cdoublejj View Post

 
Quote:
Originally Posted by Liranan View Post

I will virtualise my current Windows 8.1 install with GPU passthrough in vSphere this coming weekend.


AMD make passthrough relatively easy compared with nVidia, so it depends on whether they want to, and considering nVidia want to sell Teslas and Grids they definitely don't want to make it achievable.


Yeah, but passthrough will only power one VM per GPU, and the more GPUs, the fewer slots for other stuff like RAID and 10Gbps fiber.

Also, if you wish to rebuild or do a new RAID you can just migrate the data instead of reinstalling the OS. SATA DOMs are also a faster alternative to SD cards and USB sticks.
I have no use for RAID or 10GbE cards in my main PC, so I have free PCIe slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB of RAM, as 16 just isn't enough anymore.

As long as you know you'll need a 2nd PC to actually access any VM but the one you assign the GPU to, as well as for most of the config work. It does not do any video output locally besides a config GUI, and VMs that don't have a dedicated GPU are remote access only.
Forge (17 items)
CPU: Intel i7-5960X (4.625GHz) | Motherboard: ASUS X99-DELUXE/U3.1 | Graphics: 2x EVGA 1080 Ti SC2 Hybrid
RAM: 64GB Corsair Dominator Platinum (3000MHz, 8x8GB) | Hard Drive: Samsung 950 Pro NVMe 512GB | Cooling: EK Predator 240 | OS: Windows 10 Enterprise x64
Monitor: 2x Acer XR341CK | Keyboard: Corsair Vengeance K70 RGB | Power: Corsair AX1200 | Case: Corsair Graphite 780T
Mouse: Corsair Vengeance M65 RGB | Audio: Sennheiser HD700, Sound Blaster AE-5, Audio Technica AT4040, 30ART Mic Tube Amp

Forge-LT (7 items)
CPU: i7-4720HQ | Motherboard: UX501JW-UB71T | Graphics: GTX 960M | RAM: 16GB 1600 9-9-9-27
Hard Drive: 512GB PCIe SSD | OS: Windows 10 Pro | Monitor: 4K IPS
post #4272 of 4324
Quote:
Originally Posted by twerk View Post

ESXi is not Linux or in fact Unix based; its kernel was built from the ground up.

The performance impact is pretty much non-existent. The only noticeable thing is a slight memory overhead.
It's time for me to test this again... The last time this claim was made, reviews showed a 5-8% hit for compute workloads, but I'm not sure I've seen tests on a Haswell Xeon or newer, which have been busy improving the IOMMU.

When I tested my personal apps (typical 1-10GB run-time memory image, heavy IPC, multi-threaded) I saw as much as a 15-18% hit on the same machine, but that was with ESXi 6.0 and I honestly don't recall if that was Haswell or Sandy Bridge. It's been a while since I did that...

Such a pain to test apples-to-apples, but I'd love to see this overhead promise finally come true for my needs...

I know that as recently as Broadwell Xeons, KVM showed improvement, but still in the 10%+ range for my use case...
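
For anyone who wants to run that apples-to-apples comparison, a rough sketch in Python: time an identical multi-process, memory-heavy workload on bare metal and then inside the VM with the same core and RAM allocation, and compare wall-clock times. The worker count and buffer sizes are placeholder assumptions to tune for your own hardware, not figures from any of the tests above.

Code:
# Minimal probe for hypervisor compute overhead. Run unchanged on bare
# metal and inside the VM (same vCPU count, same RAM) and compare times.
import time
from concurrent.futures import ProcessPoolExecutor

CHUNK_MB = 256   # per-worker working set; placeholder, tune for your box
WORKERS = 8      # match the vCPU count given to the VM
PASSES = 4       # sweep the working set several times

def churn(_):
    buf = bytearray(CHUNK_MB * 1024 * 1024)
    total = 0
    for _ in range(PASSES):
        # stride one cache line at a time so we sweep memory, not just cache
        for i in range(0, len(buf), 64):
            buf[i] = (buf[i] + 1) & 0xFF
            total += buf[i]
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(churn, range(WORKERS)))
    print(f"{WORKERS} workers, {CHUNK_MB} MiB each: "
          f"{time.perf_counter() - start:.2f}s")

This only exercises CPU and memory, so it says nothing about the USB or latency issues that come up later in the thread.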
post #4273 of 4324
Quote:
Originally Posted by KyadCK View Post
 
Quote:
Originally Posted by Liranan View Post
 
Quote:
Originally Posted by cdoublejj View Post

 
Quote:
Originally Posted by Liranan View Post

I will virtualise my current Windows 8.1 install with GPU passthrough in vSphere this coming weekend.


AMD make passthrough relatively easy compared with nVidia, so it depends on whether they want to, and considering nVidia want to sell Teslas and Grids they definitely don't want to make it achievable.


Yeah, but passthrough will only power one VM per GPU, and the more GPUs, the fewer slots for other stuff like RAID and 10Gbps fiber.

Also, if you wish to rebuild or do a new RAID you can just migrate the data instead of reinstalling the OS. SATA DOMs are also a faster alternative to SD cards and USB sticks.
I have no use for RAID or 10GbE cards in my main PC, so I have free PCIe slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB of RAM, as 16 just isn't enough anymore.

As long as you know you'll need a 2nd PC to actually access any VM but the one you assign the GPU to, as well as for most of the config work. It does not do any video output locally besides a config GUI, and VMs that don't have a dedicated GPU are remote access only.

Will each GPU need to be connected to its own dedicated screen, or can all VMs share the same screen by connecting the screen to one of the GPUs?

The girlfriend. (15 items) | The Mistress (13 items) | Media Server (11 items)
CPU: A8-6410 | Motherboard: Lenovo Lancer 4B2 K16.3 | Graphics: R5 128 Shaders/M230 | RAM: Hynix 8GB DDR3 1600
Hard Drives: Samsung 840 120GB SSD, Seagate Momentus 1TB 5400rpm | OS: Win 8.1 | Monitor: CMN1487 TN LED 14" 1366x768
Keyboard: Lenovo AccuType | Power: 2900mAh/41Wh | Mouse: Elan Trackpad/Logitech M90 | Mouse Pad: Super Flower
Audio: AMD Avalon (Conexant)
post #4274 of 4324
Quote:
Originally Posted by Liranan View Post

Quote:
Originally Posted by KyadCK View Post

 
Quote:
Originally Posted by Liranan View Post

 
Quote:
Originally Posted by cdoublejj View Post

 
Quote:
Originally Posted by Liranan View Post

I will virtualise my current Windows 8.1 install with GPU passthrough in vSphere this coming weekend.


AMD make passthrough relatively easy compared with nVidia, so it depends on whether they want to, and considering nVidia want to sell Teslas and Grids they definitely don't want to make it achievable.


Yeah, but passthrough will only power one VM per GPU, and the more GPUs, the fewer slots for other stuff like RAID and 10Gbps fiber.

Also, if you wish to rebuild or do a new RAID you can just migrate the data instead of reinstalling the OS. SATA DOMs are also a faster alternative to SD cards and USB sticks.
I have no use for RAID or 10GbE cards in my main PC, so I have free PCIe slots to test passthrough. If it works well I will start using ESXi and run Windows and Linux on the machine. My next PC will definitely have 32GB of RAM, as 16 just isn't enough anymore.

As long as you know you'll need a 2nd PC to actually access any VM but the one you assign the GPU to, as well as for most of the config work. It does not do any video output locally besides a config GUI, and VMs that don't have a dedicated GPU are remote access only.

Will each GPU need to be connected to its own dedicated screen, or can all VMs share the same screen by connecting the screen to one of the GPUs?

It's hardware passthrough. Nothing can even see the GPU except the VM it is assigned to, full stop, not even ESXi.

You'll need a screen per VM, a KB/mouse per VM, assigned USB cards per VM, and anything else you want them to have, for any VM you plan to connect to physically instead of remotely. If you want to access two VMs this way, you'll need an absolute minimum of 4 PCIe slots in use.

ESXi is server software; it's not designed to be accessed locally. Assigning cards like this is for giving your CAD server a GPU while still accessing it remotely, not for plugging in. The good news is that doing it this way massively cuts latency compared to software redirects and sharing, much closer to native.
post #4275 of 4324
Quote:
Originally Posted by KyadCK View Post

It's hardware passthrough. Nothing can even see the GPU except the VM it is assigned to, full stop, not even ESXi.

You'll need a screen per VM, a KB/mouse per VM, assigned USB cards per VM, and anything else you want them to have, for any VM you plan to connect to physically instead of remotely. If you want to access two VMs this way, you'll need an absolute minimum of 4 PCIe slots in use.

ESXi is server software; it's not designed to be accessed locally. Assigning cards like this is for giving your CAD server a GPU while still accessing it remotely, not for plugging in. The good news is that doing it this way massively cuts latency compared to software redirects and sharing, much closer to native.

So it's just like normal passthrough. I will definitely test this, as I would like to create two VMs: one Windows for gaming and another Linux for everything else. Sounds like great fun; I just need to deal with the two GPUs and peripherals. If this doesn't work well I will try regular Linux passthrough.

post #4276 of 4324
Quote:
Originally Posted by Liranan View Post

So it's just like normal passthrough. I will definitely test this, as I would like to create two VMs: one Windows for gaming and another Linux for everything else. Sounds like great fun; I just need to deal with the two GPUs and peripherals. If this doesn't work well I will try regular Linux passthrough.
We are getting tantalizingly close to this dream, but as of now it's still very hit-or-miss. In particular, I've found that USB and sound connectivity, coupled with random GPU glitches, produce a "work in progress" experience that varies wildly with your hardware selection. Some hardware works better than others, whether it be the motherboard, USB, sound or GPU...

Regarding USB specifically, I have an ESXi VM that requires a USB hardware dongle for software licensing. It works fine _most_ of the time, but occasionally, randomly, it cannot find the dongle...

The very latest versions of their software have eliminated the dongle, but when you are dealing with multi-thousand-dollar software, you upgrade when you must, not just for convenience. My point was that if USB is dropping out there, it is dropping out elsewhere, which makes for interesting gaming problems given latency sensitivity.

I've been dreaming of a world where I did not have to dual boot or have multiple machines for games and work for 25 years now... Still dreaming, but so, so close. The distance now is stability and latency, not basic functionality (though the loss of SLI means it's hard to achieve 144-165Hz @ 1440p with current GPUs).
post #4277 of 4324
Quote:
Originally Posted by cekim View Post

Quote:
Originally Posted by Liranan View Post

So it's just like normal passthrough. I will definitely test this, as I would like to create two VMs: one Windows for gaming and another Linux for everything else. Sounds like great fun; I just need to deal with the two GPUs and peripherals. If this doesn't work well I will try regular Linux passthrough.
We are getting tantalizingly close to this dream, but as of now it's still very hit-or-miss. In particular, I've found that USB and sound connectivity, coupled with random GPU glitches, produce a "work in progress" experience that varies wildly with your hardware selection. Some hardware works better than others, whether it be the motherboard, USB, sound or GPU...

Regarding USB specifically, I have an ESXi VM that requires a USB hardware dongle for software licensing. It works fine _most_ of the time, but occasionally, randomly, it cannot find the dongle...

The very latest versions of their software have eliminated the dongle, but when you are dealing with multi-thousand-dollar software, you upgrade when you must, not just for convenience. My point was that if USB is dropping out there, it is dropping out elsewhere, which makes for interesting gaming problems given latency sensitivity.

I've been dreaming of a world where I did not have to dual boot or have multiple machines for games and work for 25 years now... Still dreaming, but so, so close. The distance now is stability and latency, not basic functionality (though the loss of SLI means it's hard to achieve 144-165Hz @ 1440p with current GPUs).

That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.

Also, who says you can't do SLI this way? You absolutely can, provided you have the money and the lanes.
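
If the ESXi route disappoints and he falls back to regular Linux passthrough, a quick sanity check before buying that USB card is to list the host's IOMMU groups, since a device can only be passed through cleanly along with everything else in its group. A minimal sketch, assuming a Linux host with VT-d/AMD-Vi enabled (ESXi handles the equivalent through its own UI):

Code:
# List IOMMU groups from sysfs so you can see whether the USB controller
# or GPU you want to hand to a VM sits in its own group.
import os

GROUPS = "/sys/kernel/iommu_groups"

if not os.path.isdir(GROUPS):
    raise SystemExit("No IOMMU groups found - is VT-d/AMD-Vi enabled?")

for group in sorted(os.listdir(GROUPS), key=int):
    devdir = os.path.join(GROUPS, group, "devices")
    for dev in sorted(os.listdir(devdir)):
        print(f"group {group}: {dev}")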
post #4278 of 4324
Quote:
Originally Posted by KyadCK View Post

That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.

Also, who says you can't do SLI this way? You absolutely can, provided you have the money and the lanes.
Nvidia requires something like "different SLI", which is iffy itself...
post #4279 of 4324
Quote:
Originally Posted by cekim View Post

Quote:
Originally Posted by KyadCK View Post

That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.

Also, who says you can't do SLI this way? You absolutely can, provided you have the money and the lanes.
Nvidia requires something like "different SLI", which is iffy itself...

An IOMMU is an address redirect table; nothing about it forces any hardware to work in an abnormal way.
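
To make that concrete, here is a toy model of the redirect table, purely illustrative (real IOMMUs are page-granular, multi-level hardware structures): the device keeps issuing ordinary bus addresses, and the table maps them into the one VM's physical memory, faulting on anything outside it.

Code:
PAGE = 4096

class ToyIommu:
    """Per-device translation: device DMA page -> host physical page."""
    def __init__(self):
        self.table = {}

    def map(self, device, dev_addr, host_addr):
        self.table[(device, dev_addr // PAGE)] = host_addr // PAGE

    def translate(self, device, dev_addr):
        page = self.table.get((device, dev_addr // PAGE))
        if page is None:
            raise PermissionError(f"{device}: DMA fault at {hex(dev_addr)}")
        return page * PAGE + dev_addr % PAGE

iommu = ToyIommu()
iommu.map("gpu0", 0x0000, 0x80000000)      # window into its VM's RAM only
print(hex(iommu.translate("gpu0", 0x42)))  # -> 0x80000042
try:
    iommu.translate("gpu0", 0x5000)        # outside the mapping: fault
except PermissionError as err:
    print("blocked:", err)

The card never knows it is being redirected, which is why there is no reason for SLI to behave any differently under passthrough.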
post #4280 of 4324
Quote:
Originally Posted by KyadCK View Post

Quote:
Originally Posted by cekim View Post

Quote:
Originally Posted by KyadCK View Post

That's why you do PCIe passthrough for an entire USB expansion card instead of relying on software passthrough, given his use case.

Also, who says you can't do SLI this way? You absolutely can, provided you have the money and the lanes.
Nvidia requires something like "different SLI", which is iffy itself...

An IOMMU is an address redirect table; nothing about it forces any hardware to work in an abnormal way.

I can attest: SLI does work in passthrough, and so does CrossFire... I did it using KVM, but I see no reason it wouldn't work on another hypervisor.
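
For the KVM route specifically, handing both GPUs to the one guest is just two hostdev attachments. A sketch using the libvirt Python bindings, where the domain name "gaming-vm" and the PCI bus IDs are placeholders to replace with your own values from lspci:

Code:
# Attach two host GPUs to a single KVM guest so SLI can be enabled in it.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x{bus}' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("gaming-vm")   # placeholder guest name
for bus in ("03", "04"):               # placeholder GPU bus IDs from lspci
    dom.attachDeviceFlags(HOSTDEV_XML.format(bus=bus),
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()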