
{Guide} Create a Gaming Virtual Machine - Page 44

post #431 of 813
Quote:
Originally Posted by Chetyre View Post

I'm passing through a secondary graphics card; the primary one is the integrated GPU in the processor (HD Graphics 4000, IIRC). The monitor with no signal is the one that should be connected to Windows. BTW, if I VNC into the machine I just get a black screen that says "no serial" or something like that.
I'm using xl, not xm.

Is that what Windows throws at you when booting? Or some VNC/console message?

Can you post your domU config file (win7.cfg or whatever) found in /etc/xen/?
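
In the meantime, here's roughly how I start a domU and check for errors with the xl toolstack (just a sketch; substitute your own config file and domU name):
Code:
# start the guest from its config file
xl create /etc/xen/win7.cfg
# confirm it is running
xl list
# scan the hypervisor log for passthrough errors
xl dmesg | tail -n 50
# the device model log often shows VNC/display problems
tail /var/log/xen/qemu-dm-win7.log
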
post #432 of 813
Quote:
Originally Posted by powerhouse View Post

Is that what Windows throws at you when booting? Or some VNC/console message?
Can you post your domU config file (win7.cfg or whatever) found in /etc/xen/?

It's a VNC message.

I've disabled the gfx_passthru option, and now I can VNC into Windows and see the Nvidia device. Still no signal, though. I will try installing the card's drivers and see what happens.

My config is as follows:
Code:
builder='hvm'
memory=1024
vcpus=8
pae=1
vif=['bridge=xenbr0,mac=<random mac here>']
disk=['phy:/dev/vg_xen/windows,hda,w']
boot='dc'
pci=['01:00.0']
vnc=1
vnclisten="0.0.0.0"
vncpasswd=""
usbdevice='tablet'
localtime=1

Edited by Chetyre - 1/9/13 at 1:42pm
post #433 of 813
This sort of setup intrigues me, and I'd love to experiment with it in some sort of LAN center setup. It may be very expensive in the start-up phase (or maybe not, since you can buy the hardware in bulk), but I think long term it'll offer good cost reductions and improved efficiency. Maintenance would be easier in my eyes, and you wouldn't have to interrupt your customers, since you can use vMotion-style tools to move VMs between "servers" on the fly. I also see it being a more secure setup, since the customers wouldn't have access to all the beefy hardware.

If I want a LAN center with, say, 64 "terminals" that my customers will use, how powerful would those 64 terminals have to be? Assume that I'd have some monstrous hardware in the back with 256GB of RAM or even more, and enough HD7950s (or whatever GPU, say even two 7950s per gaming terminal) to make anyone on OCN cry with joy, along with whatever processors would be appropriate (racks of IB-Es, anyone?). I'm just being theoretical here for the sake of curiosity. I'd love to start a business some day, and what I have in mind is a LAN center for PC gaming.

Would love to hear back from anyone who's experienced such a setup.
post #434 of 813
Quote:
Originally Posted by Stealth Pyros View Post

This sort of setup intrigues me, and I'd love to experiment with it in some sort of LAN center setup. It may be very expensive in the start-up phase (or maybe not, since you can buy the hardware in bulk), but I think long term it'll offer good cost reductions and improved efficiency. Maintenance would be easier in my eyes, and you wouldn't have to interrupt your customers, since you can use vMotion-style tools to move VMs between "servers" on the fly. I also see it being a more secure setup, since the customers wouldn't have access to all the beefy hardware.

If I want a LAN center with, say, 64 "terminals" that my customers will use, how powerful would those 64 terminals have to be? Assume that I'd have some monstrous hardware in the back with 256GB of RAM or even more, and enough HD7950s (or whatever GPU, say even two 7950s per gaming terminal) to make anyone on OCN cry with joy, along with whatever processors would be appropriate (racks of IB-Es, anyone?). I'm just being theoretical here for the sake of curiosity. I'd love to start a business some day, and what I have in mind is a LAN center for PC gaming.

Would love to hear back from anyone who's experienced such a setup.

To do what you want, you would probably need to research streaming solutions alongside the virtualization one. Just making VMs and assigning them video cards won't magically translate into fast gaming over a network, even on a LAN. You probably need something like what OnLive does.

As for the terminals, any thin client should work just fine. They wouldn't need to be powerful at all, just enough to drive the monitor at the right resolution and to run whatever software ends up receiving the stream from the server.
post #435 of 813
Quote:
Originally Posted by Chetyre View Post

Quote:
Originally Posted by Stealth Pyros View Post

This sort of setup intrigues me, and I'd love to experiment with it in some sort of LAN center setup. It may be very expensive in the start-up phase (or maybe not, since you can buy the hardware in bulk), but I think long term it'll offer good cost reductions and improved efficiency. Maintenance would be easier in my eyes, and you wouldn't have to interrupt your customers, since you can use vMotion-style tools to move VMs between "servers" on the fly. I also see it being a more secure setup, since the customers wouldn't have access to all the beefy hardware.

If I want a LAN center with, say, 64 "terminals" that my customers will use, how powerful would those 64 terminals have to be? Assume that I'd have some monstrous hardware in the back with 256GB of RAM or even more, and enough HD7950s (or whatever GPU, say even two 7950s per gaming terminal) to make anyone on OCN cry with joy, along with whatever processors would be appropriate (racks of IB-Es, anyone?). I'm just being theoretical here for the sake of curiosity. I'd love to start a business some day, and what I have in mind is a LAN center for PC gaming.

Would love to hear back from anyone who's experienced such a setup.

To do what you want, you would probably need to research streaming solutions alongside the virtualization one. Just making VMs and assigning them video cards won't magically translate into fast gaming over a network, even on a LAN. You probably need something like what OnLive does.

As for the terminals, any thin client should work just fine. They wouldn't need to be powerful at all, just enough to drive the monitor at the right resolution and to run whatever software ends up receiving the stream from the server.

Thanks for the pointer on streaming solutions. I figured the terminals wouldn't need to be too powerful; low-budget Atoms with 2-4GB of RAM should handle it just fine.
post #436 of 813
Thread Starter 
Quote:
Originally Posted by Stealth Pyros View Post

This sort of setup intrigues me, and I'd love to experiment with it in some sort of LAN center setup. It may be very expensive in the start-up phase (or maybe not, since you can buy the hardware in bulk), but I think long term it'll offer good cost reductions and improved efficiency. Maintenance would be easier in my eyes, and you wouldn't have to interrupt your customers, since you can use vMotion-style tools to move VMs between "servers" on the fly. I also see it being a more secure setup, since the customers wouldn't have access to all the beefy hardware.

If I want a LAN center with, say, 64 "terminals" that my customers will use, how powerful would those 64 terminals have to be? Assume that I'd have some monstrous hardware in the back with 256GB of RAM or even more, and enough HD7950s (or whatever GPU, say even two 7950s per gaming terminal) to make anyone on OCN cry with joy, along with whatever processors would be appropriate (racks of IB-Es, anyone?). I'm just being theoretical here for the sake of curiosity. I'd love to start a business some day, and what I have in mind is a LAN center for PC gaming.

Would love to hear back from anyone who's experienced such a setup.

I have often thought about this too. One good option would be something like an 8-core AMD (to keep cost down) and boards with plenty of PCIe slots (3 or 4), so you could run 3 gaming computers in one box. Another (and probably better) solution would be to get a really expensive server card and use Xen's XCP (Xen Cloud Platform). Quadros and FirePros both have models suited to serving virtual machines over the network. Then all you would need is a thin client at each station, which could cost as little as $60. You might even be able to use consumer-grade cards to accelerate the VMs over the network, but you would have to look into it.
post #437 of 813
Thread Starter 
Small update on overcommitting CPU resources:

I had a chance to run some wPrime today and here is what I came up with:

The 32M and 1024M results are shown for 6, 8, and 20 vCPUs, from top to bottom.



As you can see, not using all the available cores is not the quickest, but severely overcommitting or splitting CPU resources has little effect. This leads me to believe that the scheduling and load balancing in Xen 4.2.0 are superior to older versions. When I tried similar things with the original setup (even with just 8 vCPUs), everything ran slowly, including dom0 and the domU. Even idle performance was terrible.

I also have a second theory. I have moved on to Windows 8, and looking through the added features, there is a slew of virtualization-related ones, including Hyper-V. I also set viridian=1 in the .sxp for my virtual machine, and Windows 8 acknowledges that it is a virtual machine. Either way, this is good news for virtualization and this guide. I am going to officially recommend that readers switch to Windows 8.
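
If anyone wants to try this, it's a single line in the domU config (a minimal sketch; the rest of your file stays as it is):
Code:
# expose Microsoft's Hyper-V ("viridian") enlightenments to the Windows guest
viridian=1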

Oh, here is a little pic just for fun: the QEMU HVM vCPU limit is 32, I think (I tried to assign 40), and the guest thinks I have 2 sockets.



So I did some wPrime tests with 32 vCPUs, and when I set the benchmark to use only 8 threads, the speeds were back in the 10-second range. I am really liking how Xen is handling all this with Windows 8.
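
For anyone who wants to repeat this, the vCPU setup can also be inspected and changed at runtime with xl (a sketch; "win8" is a placeholder for your domU name):
Code:
# show how the guest's vCPUs are mapped onto physical CPUs
xl vcpu-list win8
# shrink the running guest to 8 vCPUs
xl vcpu-set win8 8
# pin vCPU 0 of the guest to physical CPU 2
xl vcpu-pin win8 0 2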
Edited by dizzy4 - 1/10/13 at 4:30pm
post #438 of 813
Quote:
Originally Posted by Chetyre View Post

It's a VNC message.

I've disabled the gfx_passthru option, and now I can VNC into Windows and see the Nvidia device. Still no signal, though. I will try installing the card's drivers and see what happens.

My config is as follows:
Code:
builder='hvm'
memory=1024
vcpus=8
pae=1
vif=['bridge=xenbr0,mac=<random mac here>']
disk=['phy:/dev/vg_xen/windows,hda,w']
boot='dc'
pci=['01:00.0']
vnc=1
vnclisten="0.0.0.0"
vncpasswd=""
usbdevice='tablet'
localtime=1

I've disabled gfx_passthru too; I believe that's the setting that works with most graphics cards and guest OSes. It's a good sign that Windows detects the Nvidia graphics card.

Here is my win7.cfg for reference:
Code:
kernel = "/usr/lib/xen-default/boot/hvmloader"
builder='hvm'
memory = 24576
name = "win7"
vcpus=10
pae=1
acpi=1
apic=1
on_xend_stop="shutdown"
vif = [ 'vifname=win7,type=ioemu,mac=00:16:3e:68:07:07,bridge=xenbr0' ]
disk = [ 'phy:/dev/mapper/lm13-win7,hda,w' , 'phy:/dev/mapper/photos-photo_stripe,hdb,w' , 'phy:/dev/mapper/original-photo_raw,hdc,w' ]
device_model = '/usr/lib/xen-default/bin/qemu-dm'
boot="c"
sdl=0
opengl=1
vnc=1
vncpasswd=''
stdvga=0
serial='pty'
tsc_mode=0
viridian=1
#soundhw='all'
usb=1
usbdevice='tablet'
gfx_passthru=0
pci=[ '02:00.0', '02:00.1' , '00:1a.0' , '0a:00.0' ]
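
One more thing worth checking before booting the domU: make sure the GPU is actually available for passthrough. A sketch with the xl toolstack (your PCI IDs will differ):
Code:
# list devices currently available for passthrough
xl pci-assignable-list
# if the GPU (e.g. 01:00.0) is missing, hand it to pciback first
xl pci-assignable-add 01:00.0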
post #439 of 813
Quote:
Originally Posted by powerhouse View Post

I've disabled gfx_passthru too; I believe that's the setting that works with most graphics cards and guest OSes. It's a good sign that Windows detects the Nvidia graphics card.

Here is my win7.cfg for reference:
Code:
kernel = "/usr/lib/xen-default/boot/hvmloader"
builder='hvm'
memory = 24576
name = "win7"
vcpus=10
pae=1
acpi=1
apic=1
on_xend_stop="shutdown"
vif = [ 'vifname=win7,type=ioemu,mac=00:16:3e:68:07:07,bridge=xenbr0' ]
disk = [ 'phy:/dev/mapper/lm13-win7,hda,w' , 'phy:/dev/mapper/photos-photo_stripe,hdb,w' , 'phy:/dev/mapper/original-photo_raw,hdc,w' ]
device_model = '/usr/lib/xen-default/bin/qemu-dm'
boot="c"
sdl=0
opengl=1
vnc=1
vncpasswd=''
stdvga=0
serial='pty'
tsc_mode=0
viridian=1
#soundhw='all'
usb=1
usbdevice='tablet'
gfx_passthru=0
pci=[ '02:00.0', '02:00.1' , '00:1a.0' , '0a:00.0' ]

Some questions:

What is viridian useful for? I read up on it a little, and apparently it's Microsoft's interface for making the guest OS virtualization-aware, but does it actually affect anything?

Why do you need opengl=1? Doesn't the graphics card take care of OpenGL in the HVM?

What do serial and tsc_mode do?

Do you really need pae if your system is already x64 (I'm assuming)?
post #440 of 813
Quote:
Originally Posted by Stealth Pyros View Post

This sort of setup intrigues me, and I'd love to experiment with it in some sort of LAN center setup. It may be very expensive in the start-up phase (or maybe not, since you can buy the hardware in bulk), but I think long term it'll offer good cost reductions and improved efficiency. Maintenance would be easier in my eyes, and you wouldn't have to interrupt your customers, since you can use vMotion-style tools to move VMs between "servers" on the fly. I also see it being a more secure setup, since the customers wouldn't have access to all the beefy hardware.

If I want a LAN center with, say, 64 "terminals" that my customers will use, how powerful would those 64 terminals have to be? Assume that I'd have some monstrous hardware in the back with 256GB of RAM or even more, and enough HD7950s (or whatever GPU, say even two 7950s per gaming terminal) to make anyone on OCN cry with joy, along with whatever processors would be appropriate (racks of IB-Es, anyone?). I'm just being theoretical here for the sake of curiosity. I'd love to start a business some day, and what I have in mind is a LAN center for PC gaming.

Would love to hear back from anyone who's experienced such a setup.

In addition to what's been said already, one of the bottlenecks is network performance. Steve Perlman's OnLive has developed some unique hardware and software to get its online gaming service working, as well as to provide remote desktop services (DaaS = Desktop as a Service). Here is some interesting reading, though the author seems preoccupied with licensing issues: http://www.brianmadden.com/blogs/gabeknuth/archive/2012/01/25/Breaking-down-OnLive-Desktop-_2D00_-Why-this-is-not-the-desktop-virtualization-solution-you_2700_re-looking-for.aspx

VMware favors PCoIP, which essentially compresses the data before it's sent over the network (it's much more complicated than that, though). Teradici has developed custom chips that run PCoIP, and they are used by many zero-client manufacturers. VMware can support those zero clients via software or, better, via dedicated PCoIP PCIe boards installed in the servers. Microsoft has yet other technologies: RDP and RemoteFX. Citrix is well known for HDX and has been the leader in VDI and remote desktops for many years; perhaps it still is.

In the Linux world there is NX, and lately also SPICE, a technology developed by Qumranet (sounds familiar? It should) and acquired by Red Hat; see http://en.wikipedia.org/wiki/SPICE_%28protocol%29. Though Red Hat dropped Xen in favor of KVM/QEMU, I sincerely hope that Xen will be able to profit from SPICE.

This is as far as my limited knowledge goes. A year or two ago I played a little with NX and other remote desktop protocols to see how they perform. My objective was to set up a home server with zero clients for everybody, running their applications in VMs on the server. I soon realized that the remote desktop part would be a major headache, in particular since I insist on zero clients (well, I guess a real Linux thin client would be fine too). Alternatively (or in addition to that), the remote desktops should run on iPads.

Good luck with your LAN center.