Overclock.net › Forums › Software, Programming and Coding › Operating Systems › {Guide} Create a Gaming Virtual Machine

{Guide} Create a Gaming Virtual Machine - Page 45

post #441 of 769
Quote:
Originally Posted by Chetyre View Post

Some questions:

What is viridian useful for? I read up on it a little and apparently it is Microsoft's solution to make the OS virtualization-aware or something, but does it actually affect anything?

Why do you need opengl=1? Doesn't the graphics card take care of opengl in the hvm?

What does serial and tsc do?

Do you really need pae if your system is already x64 (I'm supposing)?

Excellent questions! Frankly, I had to look it up again - see here for more info: http://wiki.xen.org/wiki/XenConfigurationFileOptions.

viridian: see http://old-list-archives.xen.org/archives/html/xen-users/2009-07/msg00661.html - not sure if that is relevant to my system, but so far I don't see that this option hurts. I think you are right about this option exposing virtualization to the VM, see also http://digitaldj.net/2010/08/25/a-possible-fix-xen-hvm-windows-2008/. In the end, I didn't make any comparison test viridian=1 vs. viridian=0.

pae: pae=1 is the default if not explicitly set to 0. I inserted this line as a reminder to change if things go bad or don't work well. I haven't tried pae=0, though. Both dom0 and domU are 64bit, by the way.

opengl=1 : should provide graphics acceleration in the (VNC) console window, as long as the graphics card drivers of the host are present (if I got this right, and I'm not sure about it, the VNC console uses the graphics card driver of the host/dom0). See here for some hint: http://www.2virt.com/blog/?p=151. In my system it seems to work fine, though again I didn't compare that with opengl=0. This option has no impact once the VGA is passed thru.

serial='pty' : This option enables a serial console. See https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/sect-Virtualization-Troubleshooting_Xen-Guest_configuration_files.html. It's actually only relevant for Linux HVMs, not for Windows. It's probably best to delete this option, as I can't see any benefit.

tsc_mode=0 : This is the default. This option is actually quite involved. An in-depth explanation of the settings can be found here: http://svn.openfoundry.org/xenids/xen-4.0.0/docs/misc/tscmode.txt. Again, I use this entry as a reminder of what to check/modify when things go wrong.

None of the settings you asked about are critical or likely to influence whether or not your Windows guest will start and run with VGA passthru. What I tried to do is get a list of all potentially relevant options and list them in the config file with their default values (mostly). In some cases, like opengl and viridian, I chose to enable the feature in the hope it would bring benefits. I haven't yet compared those settings enabled versus not enabled, so I can't say if they actually are beneficial.
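Pulling those answers together, the options discussed would sit in an HVM config file roughly like this (a sketch using the values/defaults mentioned above, not a complete config):

```
viridian=1      # expose Hyper-V-style virtualization awareness to the Windows guest
pae=1           # the default; kept here as a reminder to toggle if things misbehave
opengl=1        # graphics acceleration for the VNC console; moot once VGA is passed thru
serial='pty'    # serial console - only really useful for Linux HVMs
tsc_mode=0      # default TSC handling; see the tscmode.txt link above
```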




About gfx_passthru:
gfx_passthru=BOOLEAN

Enable graphics device PCI passthrough. This option makes an assigned PCI graphics card become the primary graphics card in the VM. The QEMU-emulated graphics adapter is disabled and the VNC console for the VM will not have any graphics output. All graphics output, including boot-time QEMU BIOS messages from the VM, will go to the physical outputs of the passed-through graphics card.

The graphics card PCI device to pass through is chosen with the pci option, exactly the same way as normal Xen PCI device passthrough/assignment is done. Note that gfx_passthru does not do any kind of GPU sharing, so you can only assign the GPU to a single VM at a time.

gfx_passthru also enables various legacy VGA memory ranges, BARs, MMIOs, and ioports to be passed thru to the VM, since those are required for correct operation of things like VGA BIOS, text mode, VBE, etc.

Enabling gfx_passthru option also copies the physical graphics card video BIOS to the guest memory, and executes the VBIOS in the guest to initialize the graphics card.

Most graphics adapters require vendor-specific tweaks for properly working graphics passthrough. See the XenVGAPassthroughTestedAdapters wiki page (http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters) for currently supported graphics cards for gfx_passthru.

gfx_passthru is currently only supported with the qemu-xen-traditional device-model. Upstream qemu-xen device-model currently does not have support for gfx_passthru.

Note that some graphics adapters (AMD/ATI cards, for example) do not necessarily require gfx_passthru option, so you can use the normal Xen PCI passthrough to assign the graphics card as a secondary graphics card to the VM. The QEMU-emulated graphics card remains the primary graphics card, and VNC output is available from the QEMU-emulated primary adapter.

More information about the Xen gfx_passthru feature is available on the XenVGAPassthrough wiki page (http://wiki.xen.org/wiki/XenVGAPassthrough).
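To make the secondary-adapter case concrete, the relevant guest config lines would look something like this (the BDF 01:00.0 is a placeholder - find yours with lspci):

```
gfx_passthru=0            # guest boots on the emulated Cirrus adapter (VNC works)
pci=[ '01:00.0' ]         # the passed-thru card shows up as the secondary adapter
vnc=1                     # console output comes from the emulated primary adapter
```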
post #442 of 769
@dizzy4: Thanks for the wPrime results and your tests regarding VCPU over-provisioning. Looks like the 4.2 hypervisor handles scheduling nicely. You say that with your "original setup (even just 8 vCPUs) everything ran slow including dom0 and the domU. Even Idle performance was terrible." I assume you refer to the Xen 4.1 hypervisor. Do you mean that everything was slow while running the benchmark, or during normal operation?

I downloaded and installed the Prime95 benchmark/application - both under Linux and Windows. I reserve 2 VCPUs for dom0 and the remaining 10 for Windows. I never noticed any slowdown or bad performance using the 4.1.2 hypervisor. Here is my xm info output:
xm info
host : woody
release : 3.2.0-35-generic
version : #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012
machine : x86_64
nr_cpus : 12
nr_nodes : 1
cores_per_socket : 6
threads_per_core : 2
cpu_mhz : 3200
hw_caps : bfebfbff:2c100800:00000000:00003f40:13bee3bf:00000000:00000001:00000000
virt_caps : hvm hvm_directio
total_memory : 32740
free_memory : 2391
free_cpus : 0
xen_major : 4
xen_minor : 1
xen_extra : .2
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
xen_commandline : placeholder iommu=1 dom0_mem=6G,max:6G vga=mode-0x031A console=vga
cc_compiler : gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
cc_compile_by : marc.deslaurier
cc_compile_domain : ubuntu.com
cc_compile_date : Tue Dec 11 16:32:07 UTC 2012
xend_config_format : 4

Here are my Prime95 results under Linux dom0 (while the Windows domU was running idle):
Timing FFTs using 12 threads on 6 physical CPUs.
Best time for 61 bit trial factors: 2.101 ms.
Best time for 62 bit trial factors: 2.137 ms.
Best time for 63 bit trial factors: 2.407 ms.
Best time for 64 bit trial factors: 2.472 ms.
Best time for 65 bit trial factors: 2.873 ms.
Best time for 66 bit trial factors: 3.378 ms.
Best time for 67 bit trial factors: 3.354 ms.
Best time for 75 bit trial factors: 3.261 ms.
Best time for 76 bit trial factors: 3.271 ms.
Best time for 77 bit trial factors: 3.268 ms.

Here is the same test under the Windows domU using 10 VCPUs:
Best time for 61 bit trial factors: 2.675 ms.
Best time for 62 bit trial factors: 2.618 ms.
Best time for 63 bit trial factors: 3.204 ms.
Best time for 64 bit trial factors: 3.284 ms.
Best time for 65 bit trial factors: 4.314 ms.
Best time for 66 bit trial factors: 5.367 ms.
Best time for 67 bit trial factors: 5.343 ms.
Best time for 75 bit trial factors: 5.237 ms.
Best time for 76 bit trial factors: 5.228 ms.
Best time for 77 bit trial factors: 5.224 ms.

I did several runs of Prime95 and the results vary; the above is about average. Obviously the 2 missing VCPUs do have some impact. However, I also noticed that the Windows CPU usage meter went down from an initial 100% to some 20% as CPU cores were added. Strange, isn't it?
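For reference, reserving VCPUs for dom0 is commonly done on the Xen command line in grub. A sketch for a Debian/Ubuntu-style setup (the dom0_mem value mirrors my xen_commandline above; adjust to taste and run update-grub afterwards):

```
# /etc/default/grub - options passed to the Xen hypervisor
GRUB_CMDLINE_XEN="dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=6G,max:6G"
```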
post #443 of 769
Quote:
Originally Posted by Ex3c View Post

Hello, i am interested in the power consumption of the graphics card, when used in the virtual machine.
For example does "Zero Core Power" work with an AMD GPU?
And what happens when the virtual machine is shut down: Is also the GPU shutdown, and consumes really zero power?

It took me some time to set up my wattmeter. My PC is now connected to a digital wattmeter that continually measures and displays the power consumption in watts. Here are some results:

Linux dom0 only, without any open windows/apps: 115W
Linux dom0 and Windows domU, Firefox runs on Linux with this page open: 117W
Prime95 benchmark on Windows: ~233W at start, then down to 182W at the end when all 10 threads are used (looks like a problem with the Windows version)
Prime95 (calculating primes) on Windows: 253W (constant), with Windows CPU meter on 100%
Prime95 (calculating primes) on Linux (w/o Windows domU running): 307W !!!
Prime95 on Linux (see above), then starting Windows domU: Up to 320W during and shortly after booting Windows, then 307W
Prime95 on both Linux dom0 and Windows domU: 293W (Windows CPU meter at maximum)

I could do more, for example file transfers via Samba or other checks.

All in all I'm surprised how low the power consumption is.

My software:
Xen hypervisor 4.1.2 with Linux Mint 13 Mate 64bit dom0 running a graphical desktop
Windows Pro 64bit as domU

Hardware powered by the Seasonic 660W X-Series Gold PSU:
Intel 3930K CPU
Asus Sabertooth X79
ATI 6450 GPU (for dom0)
Nvidia Quadro 2000 (for domU)
3x WD20EARS Green 2TB drives
1x WD10?? Green 1TB drive
1x WD5000AAK? 500GB drive
1x Sandisk Extreme 120GB SSD
1x LG DVD R/W
1x Transcend PDC3 USB 3.0 / SATA-III combo card
1x Edimax USB KVM switch
1x Microsoft 600 (wired) keyboard
1x Microsoft basic USB mouse
Corsair 500R with 3x120mm fans, and 1x250mm side fan
Noctua NH-D14 SE2011 CPU Cooler with 2 fans

Amazing how much stuff is connected to a PC power supply.
post #444 of 769
Another performance test, this time networking. I finally got around to installing netperf 2.5.0. Here are some results:

Linux dom0 to Windows 7 domU using a xen bridge:
Code:
netperf -H my_guest_OS_IP
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to my_guest_OS_IP (my_guest_OS_IP) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 65535  16384  16384    10.00    5188.43
Power consumption during test: 200W

Linux dom0 to remote Linux Mint laptop via Cisco / Linksys EA3500 router (Gigabit links):
Code:
netperf -H my_remote_laptop_IP
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to my_remote_laptop_IP (my_remote_laptop_IP) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.01     601.65
Power consumption during test: 118W (like idle)

Remote Linux Mint laptop to Windows domU via Cisco router (Gigabit):
Code:
netperf -H my_guest_OS_IP
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to my_guest_OS_IP (my_guest_OS_IP) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 65535  16384  16384    10.00     523.79
Power consumption during test: 138-153W (several reruns)

Remote Linux Mint laptop to Linux Mint dom0 via Cisco router (Gigabit):
Code:
netperf -H my_host_dom0_IP
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to my_host_dom0_IP (my_host_dom0_IP) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.01     534.31
Power consumption during test: 125W

Network throughput looks good under netperf. So I'm only a little closer to explaining why Samba or NFS file transfers to remote PCs are extremely slow!

By the way, Samba file transfers between dom0 and domU are really fast, as you'd expect from the throughput figures above (it's like a 10-gigabit link). But the power consumption also shows the CPU resources being used.
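To put the bridge figure in perspective, netperf reports throughput in 10^6 bits/sec; converting to megabytes per second is plain arithmetic (nothing Xen-specific):

```python
def mbit_to_mbyte(mbit_per_sec):
    """Convert netperf's 10^6 bits/sec figure to 10^6 bytes/sec."""
    return mbit_per_sec / 8.0

# dom0 -> domU over the xen bridge vs. a real gigabit hop
print(round(mbit_to_mbyte(5188.43), 1))  # 648.6 MB/s - far beyond gigabit wire speed
print(round(mbit_to_mbyte(601.65), 1))   # 75.2 MB/s - typical wired gigabit
```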
post #445 of 769
Quote:
Originally Posted by powerhouse View Post

[full quote of post #441 snipped - see the viridian/pae/opengl/serial/tsc_mode answers and the gfx_passthru documentation above]


TSC is indeed very interesting, but more for server use than for us, so I guess 0 or 1 are best. Thanks for the links, they were very informative. I think I'll write my config entirely with all the defaults, commented, so I remember all these.

I'd like to know more about viridian though. If it makes the hvm aware of virtualization, it might lead to better integration and drivers. It's an interesting subject.

I haven't had time to mess with my VM lately, but what exactly should I expect if I'm not using gfx_passthru? That option makes the graphics card primary in the domU instead of secondary, so can I just make it primary in the domU? I wonder if that is why my monitor has no signal.
post #446 of 769
Quote:
Originally Posted by Chetyre View Post

TSC is indeed very interesting, but more for server use than for us, so I guess 0 or 1 are best. Thanks for the links, they were very informative. I think I'll write my config entirely with all the defaults, commented, so I remember all these.

I'd like to know more about viridian though. If it makes the hvm aware of virtualization, it might lead to better integration and drivers. It's an interesting subject.

I haven't had time to mess with my VM lately, but what exactly should I expect if I'm not using gfx_passthru? That option makes the graphics card primary in the domU instead of secondary, so can I just make it primary in the domU? I wonder if that is why my monitor has no signal.

I think the gfx_passthru option is a bit misleading. A good explanation can be found here: http://wiki.xen.org/wiki/XenVGAPassthrough#The_effect_of_gfx_passthru.3D_option. I quote:
Quote:
When you specify "gfx_passthru=1" the passthru graphics card will be made the primary graphics card in the VM, and the Xen Qemu-dm emulated Cirrus graphics card is disabled.

If you use "gfx_passthru=0", or don't have gfx_passthru= option at all, then the Xen Qemu-dm emulated Cirrus graphics card will be the primary in the VM, and the passthru graphics card will be secondary.

That same link a little further down at "Status of VGA graphics passthru in Xen" says with regard to Xen 4.1.1:
Quote:
Passing thru AMD/ATI Radeon/FirePro/FireGL adapters as secondary to the VM should work out-of-the-box (you need latest ATI gfx drivers in the VM). Secondary means the Xen Qemu-dm virtual Cirrus adapter is the primary where you see the VM BIOS etc when powering on the VM, and the passthru adapter is secondary, so you can enable/use it after the OS in the VM has started and you've installed drivers for the adapter.

Up until now I misunderstood the meaning of "secondary adapter". The way I understand it now (reading the above), it's not your onboard or PCIe #1 slot adapter being the primary adapter versus the add-on VGA card or PCIe #2 slot adapter being the secondary one. Instead, it's from the perspective of the Windows guest:
When booting the (Windows) guest:
1. gfx_passthru=0 means that Windows boots with the qemu-dm emulated Cirrus graphics adapter. Once Windows has started, the user can then install and activate the driver for his/her graphics card. Access to Windows is via VNC, there is no graphics output to the passed thru VGA adapter.
Upon reboot of the guest, Windows again boots the Cirrus adapter, but later loads the driver for the secondary (physical) graphics adapter and switches the video output to that adapter (provided it is set up correctly under Windows).

2. gfx_passthru=1 means that Windows is not presented with the emulated Cirrus adapter, but boots directly using the physical (passed-thru) adapter and hopefully manages to use it. But there is more to this option: "Xen copies the VGA BIOS to the HVM guest memory and re-executes it there to initialize the graphics card". In some cases vendor-specific tools are needed to copy the VGA BIOS to a file to be loaded from there. (See above link for more.)

In my case option 1 works, option 2 doesn't. When booting Windows I can connect to the Cirrus-emulated display via VNC and I see the typical Windows boot animation until the graphics driver loads and switches to my physical display. Here is a screenshot of the VNC window:
post #447 of 769
Hello,
sorry for my English... I am French.
I set up Xen with Fedora following the tutorial.
I gave my graphics card to a VM to make an HTPC.
When I start the VM running Ubuntu, I can see the video card when I type lspci in a terminal.
But I can't activate it as a display, and nothing appears on my TV.

Second question: what is the correct bridge configuration? Because I have:

for em1:
BRIDGE=virbro

for virbr0:
DEVICE=virbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0

but nothing....

Thank you for your help
post #448 of 769
Quote:
Originally Posted by wowoteur View Post

Hello,
sorry for my English... I am French.
I set up Xen with Fedora following the tutorial.
I gave my graphics card to a VM to make an HTPC.
When I start the VM running Ubuntu, I can see the video card when I type lspci in a terminal.
But I can't activate it as a display, and nothing appears on my TV.

Second question: what is the correct bridge configuration? Because I have:

for em1:
BRIDGE=virbro

for virbr0:
DEVICE=virbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0

but nothing....

Thank you for your help

Your configuration for em1 reads BRIDGE=virbro; it should be BRIDGE=virbr0. Are you sure your device is em1, by the way?

Do you want your VM to be an Ubuntu HTPC? Sorry, I couldn't understand very well.
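For completeness, a working RHEL/Fedora-style bridge setup along these lines might look like the following (a sketch - the interface and bridge names must match exactly, and NetworkManager may need to be disabled for these scripts to take effect):

```
# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
ONBOOT=yes
BRIDGE=virbr0

# /etc/sysconfig/network-scripts/ifcfg-virbr0
DEVICE=virbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
```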
post #449 of 769
Yes, my NIC is em1.
And it is virbr0 - I made a mistake when writing my post.
I want to create a VM for an HTPC with Ubuntu and Plex.
post #450 of 769
I finally got my HTPC/gaming/general rig going, with thanks to dizzy4 and powerhouse for their excellent guides.

The rig comprises:

Case: Thermaltake Element Q (added a chassis fan instead of a vertical mount HDD)
Mainboard: Asrock Z77E-ITX (Bios version 1.40)
Memory: gSkill Ripjaws 8GB DDR3 - 1600MHz (2 x 4GB kit)
CPU: Xeon E3 1225V2
CPU Cooler: Xigmatek Praeton LD963
GPU: HD P4000 (Dom0), Powercolor 7750 1GB DDR5 (DomU)
HDD: Samsung 830 128GB SSD and WD 1TB Green
Keyboard and Mouse: Microsoft Entertainment 7000 (Dom0), iPazzport 2.4GHz Mini Keyboard (DomU)
TV Tuner: Leadtek DTV USB Dongle Dual DVB-T (DomU)
DVD/CD: Pioneer DVD RW (Dom0)
OS/Main software: Xen 4.1.3 64 bit, Mint 14 Cinnamon 64 bit (Kernel 3.7) (Dom0), Windows 8 Pro 64 bit + XBMC Frodo (DomU)
Display: 32" Full HD LCD (Dom0, using DVI), 58" Full HD Plasma (Dom0, using HDMI)

CPU Geometry: Dom0 4/2 Cores, DomU 2 Cores
RAM Geometry: Dom0 (and system) 4096MB, DomU 4096MB
HDD Geometry (LVM): Samsung 830 (Dom0: 1GB boot, 15GB root, 45GB home) (DomU: remainder NTFS), WD 1TB (Dom0: 10GB swap, 215GB storage) (DomU: remainder NTFS)

DomU config:
Code:
kernel = '/usr/lib/xen-4.1/boot/hvmloader'
builder='hvm'
memory=4096
name="win8"
vcpus=2
pae=1
acpi=1
apic=1
vif = [ 'type=ioemu,mac=00:16:3e:##:##:##,bridge=xenbr0' ] # Insert your own MAC
disk = [ 'phy:/dev/mapper/lm14-win8,hda,w' , 'phy:/dev/mapper/hdd-spaceU,hdb,w' ]
device_model = '/usr/lib/xen-4.1/bin/qemu-dm'
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
boot='c'
sdl=0
opengl=1
vnc=1
vncpasswd=''
stdvga=0
serial='pty'
tsc_mode=0
viridian=1
gfx_passthru=0
pci=[ '01:00.0', '01:00.1' , '00:1a.0' , '05:00.0' ] # 01:00.0 and 01:00.1 is HDMI A/V (Powercolor 7750), 00:1a.0 is onboard USB controller #2, 05:00.0 is the Asmedia USB 3.0 controller
xen_platform_pci=1
localtime=1
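The BDF addresses in the pci=[ ] list above come straight from lspci. A quick way to pull out just the BDF column (shown here against a captured sample so the filter is reproducible - on a real box, pipe plain lspci instead; the device descriptions below are illustrative):

```shell
# Print only the bus:device.function column from lspci-style output
sample='01:00.0 VGA compatible controller: AMD Radeon HD 7750
01:00.1 Audio device: AMD Cape Verde HDMI Audio
00:1a.0 USB controller: Intel USB Enhanced Host Controller #2'
echo "$sample" | awk '{ print $1 }'
```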

Problems and how they were overcome:

"Kernel panic" and hang when booting Mint 14 Cinnamon (or Mate) with Xen 4.1.3 (or 4.1.4). Overcome by updating the Linux kernel to 3.7 (3.8 rc1 works as well).

Flaky USB keyboard/mouse performance with Win 8 DomU and the TV Tuner not recognised at boot (a very, very frustrating problem). Overcome by disabling "fast startup" and "hibernate" in Power Options/System Settings in Win 8. It is not a Xen/Dom0 problem. There has been no difference to boot or shutdown times with "fast startup" and "hibernate" disabled.

No text displayed in XBMC when the 7750 driver is upgraded in Win 8. Overcome by just using the 7700-series driver Win 8 detects/uses during install. The current, updated ATI drivers (from the AMD website) have a known issue with XBMC (no text).
Do not ever try to use the driver/software installer exe or iso/disc that came with the card or from the AMD website - it will crash your rig and you will have to reinstall nearly everything. Instead, extract the actual driver folder/files from the disc or iso and let Windows find the drivers in Device Manager if you have to update for any reason.

Lost Win 8 activation when GPLPV is installed. Overcome by phoning Microsoft for a new "key".
Flaky USB and ethernet performance with GPLPV. Overcome by disabling USB passthrough (it doesn't appear to be needed when USB controllers are being passed through) and disabling network function when installing GPLPV. I'm now using gplpv_Vista2008x64_0.11.0.369.msi from http://www.meadowcourt.org/private/gplpv_Vista2008x64_0.11.0.369.msi .
It is now working very well, scoring 7.9 (and sometimes 8.1) for HDD performance in Win 8. The overall result is 6.9 (due to the graphics card), which is fine for me.

Lost Win 8 activation after installing Windows Media Centre for Win 8 (yes, I had a legitimate WMC key emailed to me).
Don't use Windows Media Centre - it is truly a pain. Quite apart from refusing to activate, it would always hang when attempting to scan for TV channels.

There were several other minor problems with Mint 14 and the Xen install that I don't really recall (like auto-booting DomU), but they were relatively easily solved by a bit of quick googling.
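On the auto-booting point: the usual mechanism is the xendomains service, which starts any guest whose config file is linked under /etc/xen/auto. A sketch, run against a scratch directory so it is safe to try (on the real system use /etc/xen as root, with your actual domU config file):

```shell
# On a real system: XEN_DIR=/etc/xen (as root). Scratch dir used here for safety.
XEN_DIR=/tmp/xen-demo
mkdir -p "$XEN_DIR/auto"
touch "$XEN_DIR/win8.cfg"                           # stand-in for the real domU config
ln -sf "$XEN_DIR/win8.cfg" "$XEN_DIR/auto/win8.cfg" # xendomains starts configs linked here
ls -l "$XEN_DIR/auto"
```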
Win 8 also had a few other minor problems (that weren't anything to do with Xen or being virtualised) that had to be sorted.

Future upgrades and mods:

Improve DomU/Win 8 boot times (it currently takes several minutes to boot from a cold start)
Xen 4.2
Kernel 3.8
Better PSU (450W SFX)......just in case (the included 220W PSU has been ok so far)
USB Bluray for DomU
Larger capacity and faster mechanical HDD
Icybox 3.5" cardreader and USB hub module
Decent IR remote to use with the iPazzport keyboard for DomU
Game controller and maybe a better single slot GPU (if ever needed)
Better sound system (might build an all valve/tube one when time permits)
Edited by Rezz - 1/12/13 at 6:02pm