
{Guide} Create a Gaming Virtual Machine - Page 2

post #11 of 816
Thread Starter 
Quote:
Originally Posted by lloyd mcclendon View Post

"why someone would want to do this as opposed to running Windows natively"
Well, obviously _if_ it works just as well there's no need to boot into windows. I never cared for a dual boot, huge waste of time. This way everything is kept in linux, and you have some throw away windows VM for gaming. If the performance is as good as the OP is claming, this is 1000 times better than wine, and 100000 times better than a dual boot. So if it works, i don't know why you wouldn't want to do this. Really this is HUGE news - how many linux users crawl back to windows entirely or dual boot just for gaming. And wine .. it hurts to say this, but it's practically impossible for that project to keep up with the crazyness that the MS developers create.
Apparently KVM does offer the VT-D PCI pass through ... but i'm a little fuzzy on whether or not it actually works right yet. I'll be trying to get one of my gentoo vms with xorg on it to use this... and if that actually works I should be able to get my XP VM showing good FPS as well.
It does solve a lot of headaches, but it can also create some more in the process. This far from being perfected (the guide and the technology), but it really is the future of computing. KVM does offer VT-d passthrough, but is also a type 2 hypervisor which translates to poorer performance than Xen. There is a small performance loss because it still is a VM, but compared to what we usually think of gaming on a VM this is amazing. For instance, your GTX 570s in SLI would still have enough power to play pretty much anything and a 5% loss of performance might be worth it to not have to dualboot or use Wine. Some nVidia cards are reported to work with Xen and VT-d enabled, but others are more tricky (non-reference). The current unstable Xen release is supposed to have more support for nVidia cards.
Quote:
Originally Posted by lloyd mcclendon View Post

I guess maybe I don't understand this - if I give a windows guest pass through access to my graphics card, and install the windows nvidia drivers package etc - what happens to the host (and the 12 other VMs) using the same graphics card?
The display will go blank because the host no longer owns the resources of the card. This could be fixed by writing a script to reattach the card to the host once the gaming VM shuts down. Another fix is to get a weak little card to run your non-gaming VMs and keep the host with a dedicated card. This is why having my i7-2600 is so good; the host and other VMs get to use the integrated graphics while the Windows VM uses the HD 5850.
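Such a reattach script might look roughly like this -- just a sketch; the BDF 0000:01:00.0, the domU name 'win7a', and the host driver name are placeholders for your own setup:

#!/bin/bash
# Sketch: hand the GPU back to the host once the gaming domU is gone.
BDF="0000:01:00.0"        # the card's PCI address (placeholder)
HOST_DRIVER="radeon"      # host-side driver to rebind to (placeholder)
# Wait until the gaming domU is no longer listed.
while xm list win7a >/dev/null 2>&1; do sleep 5; done
# Release the card from pci-stub and rebind it to the host driver.
echo "$BDF" > /sys/bus/pci/drivers/pci-stub/unbind
echo "$BDF" > /sys/bus/pci/drivers/$HOST_DRIVER/bind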
post #12 of 816
Quote:
Originally Posted by dizzy4 View Post

The display will go blank because the host no longer owns the resources of the card.

OK, I didn't catch that at all. That kind of explains Plan9's post then. Somehow I was expecting the card to be "shared", but obviously that is nonsense. So is it possible to hot detach / reattach a GPU while either system remains running? Reading about this here, it sounds like you can do it for some PCI devices, but I would think a GPU pretty fundamentally needs to stay put while the machine is running. If not, you'd need at least two cards.

I'll see if I can get something going and throw my welfare ATI card in there for some testing. Then if that works I'll consider removing the SLI bridge and running them as separate cards (eventually getting a 3rd... and 4th card), and also a KVM switch. I picked up a Kill A Watt meter and this thing only pulls between 400-500W from the wall... so 4 cards is more feasible than I thought.
post #13 of 816
Alright, well, I tried this out last night... and no luck. KVM has long had support for PCI passthrough. You simply add a "PCI host device" in the VM definition. I had seen this before but never really knew what it was. Very cool. This all works by using existing kernel code, and theoretically should be faster than Xen, which is more userland code. _If_ it would work...
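For reference, the same "PCI host device" can be attached from the command line with virsh instead of the GUI. A rough sketch (the domain name 'winguest' and the BDF 03:00.0 are placeholders, not my actual values):

# Write a minimal hostdev definition, then attach it to the guest.
cat > /tmp/gpu-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device winguest /tmp/gpu-hostdev.xml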

It all uses the intel_iommu and DMA-remapping parts of the kernel. Either this is just really new and still has some bugs to shake out, or it's a real problem of hardware compatibility. Booting with intel_iommu=on made the Marvell SATA controller on my board trigger a kernel panic at boot. Moving the plugs for two of my drives to the other controller and disabling this controller got past that, and the system boots. However, when I try to startx, the nvidia driver won't load... something about DMA remappings and bits already being set. Either this card, this version of the driver, or my config is incorrect. I will try to investigate that a bit further later...
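For anyone else chasing this: the IOMMU bits hinge on the intel_iommu=on kernel parameter and on whether DMA remapping actually initializes. A quick sketch for checking (the kernel path and version are placeholders for your setup):

# In GRUB, append intel_iommu=on to the kernel line, e.g.:
#   kernel /boot/vmlinuz-3.2.0 root=/dev/sda3 intel_iommu=on
# After rebooting, confirm DMA remapping came up:
dmesg | grep -i -e DMAR -e IOMMU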

I think if I could get X to start using these cards, I could easily pass the ATI card I plugged in directly to a VM and it would be perfect. Once I get X started there are a few steps of mapping the busID / IRQ using the pci_stub module... If that works I'll be dedicating at least one 570 to a Windows VM for gaming and ditching this Wine crap.
post #14 of 816
Hi there,

Firstly, awesome guide; I found it very useful for the setup of Xen. There are a few issues I have, and I am hoping you can help me address them (I'm struggling to find information/guides relevant to this OS/your build):

I am running an i7-870 (no on-chip GPU, so I figure this makes a difference, tbh)

Trying to start a VM with an HDD image I made (I didn't use partitions as you did, due to space issues on my small test drive), I got permission issues (I also got them for the ISO/CD; it didn't matter much where I shoved the ISO, either)

I ran 'setenforce 0' to remove the issue prior to trying passthrough of the GPU etc.

When you perform a passthrough of hardware, you initially get an error stating PCI-BACK doesn't have/own the hardware. I have found out how to fix this ad hoc, and that's fine tbh, but I thought I should post the fix for those who don't know:

'sudo lspci' (lists hardware IDs): the primary GPU should usually list as 01:00.0; my secondary listed as 06:00.0 and 06:00.1 (sound for the secondary card; this was an ATI card). Take note of the ID for the hardware in the above format; you will need it to unbind this hardware for use in a VM.

'lspci -n' - this gets you the vendor IDs (find the hardware ID from the step above in this list; the code listed like xxxx:xxxx is what you seek)

Thirdly, to unbind the hardware I had to run the following:

'echo "xxxx xxxx" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:06:00.0" > /sys/bus/pci/devices/0000:06:00.0/driver/unbind
echo "0000:06:00.0" > /sys/bus/pci/drivers/pci-stub/bind'

Substitute xxxx xxxx with the values you got from step 2 (the xxxx:xxxx format, with the colon replaced by a space, as above).

After doing all that, I could boot and install an OS (Windows XP and Windows 7 were both set up as testers).

I passed through a dedicated gigabit NIC and this works fantastically in either OS (the unbind etc. as above was needed to make this work, however).

Either OS can see the GPU (tried with an ATI 4350 and an Nvidia 560 Ti), and it installs the drivers, but I get error code 43 (seems common when googling). The issue has something to do with patches needing to be installed, and this is where I get extremely lost.

Any ideas re the patching? Also, any tips on how to permanently resolve the other issues (not having to unbind on every dom0 boot would be awesome)?

BTW, I did the unbind etc. with the primary card and this caused major issues with the dom0. It seems that if I run this via SSH/PuTTY, since it's the primary GPU, the machine freezes/crashes; I can reload virt-manager, but nothing seems to work after this either (any tips here would also be cool).

If I can manage to get this working, I can finally wipe and partition my SSD (thus making 2x VMs on it, one for me and one for my wife, with full GPU passthrough from my pair of 560 Tis).


Thanks for all your help in advance and congratulations on an AWESOME guide,
Edited by SoulCleaver - 2/15/12 at 4:35pm
post #15 of 816
Thread Starter 
It sounds like you are having a few hardware quirks, but I am glad to see it working for you! I am working on a configuration and a start-up script that will do all this for the user. Right now I am about 85% done, so stay posted. I am also working on a custom live-USB that will include these scripts and allow trying this without any storage device changes. SELinux can be a little cumbersome and I also usually disable it for use with Xen, so no worries there.

In regard to your PCI-BACK question, I have found a good number of issues with it, and my workaround was to get libvirt to handle it instead, using virt-manager. This can also be done using the virsh console command. The script will take care of this for you, but I will mention how so you can get up and running in the meantime. 'virsh nodedev-list' will give you a list of all your devices. Find your PCI devices and use this command: 'virsh nodedev-dettach pci_0000_01_00_0'. The device should match what you saw in the list. Notice that "dettach" is misspelled... kind of silly, but that is just how the command is spelled. The next step is: 'xm pci-attach win7a 01:00.0'. This will attach the device to domain 'win7a' and can be done before or after starting the VM. I hope that simplifies your unbind woes.
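Putting that together as one sequence (the device pci_0000_01_00_0 and domain 'win7a' are the example values from above; substitute your own):

virsh nodedev-list | grep pci            # find your card, e.g. pci_0000_01_00_0
virsh nodedev-dettach pci_0000_01_00_0   # yes, virsh really spells it "dettach"
xm pci-attach win7a 01:00.0              # hand the device to the domU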

Nvidia support seems to be somewhat limited right now. I hear that some people have had luck using certain reference cards and most of the older 8000-series cards. I am still unsure about the patching, but it wouldn't be easy and would most likely require rebuilding RPMs or recompiling from source. Supposedly Xen 4.2 includes the patches, but it is still marked as unstable. I will see about including that in the next version of the guide.

I also experience problems when removing the primary video card. For me this is my integrated chip. The OS still seems functional, but it will throw the occasional error and sometimes I need to reconnect via SSH. I also have a lot of issues passing through the iGPU for another guest to use. It has to do with the frame buffer of the iGPU being shared with main memory, but 4.1.2 is supposed to be patched to support it. I am spending much of my free time looking into these issues, so stay posted for that too.

P.S. Welcome to OCN!

P.P.S. If anyone wants to help, I would be happy to know how to get my live-USB to boot Xen before booting into Fedora. This would make it easier for a lot of people who want to try this.

Best,
Dizzy4
post #16 of 816
Interesting thread, bookmarked for sure!
post #17 of 816
Thread Starter 
Small update! I removed unnecessary install components and fixed an issue I found with SELinux -- turned it off.

Also, I added some preliminary Windows 8 support! I just tried it with the public preview and it works pretty well. Things will only get better, but this is good news.

I am also working on the first beta release! It will be huge: it cuts about four steps and provides a couple of scripts that make things faster and easier. Most importantly, it will allow users to try this without changing their disk drives!
post #18 of 816
Quote:
Originally Posted by dizzy4 View Post

Video card is your choice. I suggest using a newer Radeon for [...] HD audio on the card, which means you won't need to pass through the motherboard audio to get sound from the VM. Of course, you could pass the motherboard or another discrete audio device through too.

Could you elaborate a little more on this? AFAIK Nvidia cards also have HD audio, otherwise HDMI wouldn't work properly.

Is it the difference in implementation, in that AMD cards have a mobo-style onboard sound card integrated into them whereas Nvidia cards pass a digital audio stream directly to the receiver, that makes the difference here? If that's the case, I guess Xen can't handle digital audio devices?
post #19 of 816
Thread Starter 
Quote:
Originally Posted by Peon View Post

Could you elaborate a little more on this? AFAIK Nvidia cards also have HD audio, otherwise HDMI wouldn't work properly.
Is it the difference in implementation, in that AMD cards have a mobo-style onboard sound card integrated into them whereas Nvidia cards pass a digital audio stream directly to the receiver, that makes the difference here? If that's the case, I guess Xen can't handle digital audio devices?

If what you say is true, the fix might be as simple as passing through the onboard sound as well. If both were assigned to the VM, it should work alright. I was also wondering whether the virtualized sound could be passed through. Maybe that is where the problem is. Through reading I also found that several people are trying to patch Xen to make it work, and version 4.2 will integrate some of those patches.

So what I would try is this: install your domU with virtualized graphics, then change your guest config file to include nographic=1. From there, restart the system with the card passed through AND the audio passed through. That might force it to work properly.
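For reference, a rough sketch of the relevant lines in the domU config file (the BDFs are example values; substitute your card's video and audio functions):

# Pass both the GPU and its audio function to the guest.
pci = [ '06:00.0', '06:00.1' ]
# Disable the emulated VGA once the real card is in use.
nographic = 1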

Another idea I just had: check whether passthrough is working for you at all. Have you gotten any device to work in a domU? If you are having trouble binding the device to pci-back, there is also a workaround using libvirt that I explained to SoulCleaver.

Best,
Dizzy4
post #20 of 816
In my system here, the video and audio parts of the card are separate line items in lspci:



03:00.0 VGA compatible controller: nVidia Corporation Device 1081 (rev a1)
03:00.1 Audio device: nVidia Corporation Device 0e09 (rev a1)
04:00.0 VGA compatible controller: nVidia Corporation Device 1081 (rev a1)
04:00.1 Audio device: nVidia Corporation Device 0e09 (rev a1)


So you would just have two PCI passthrough devices: one for the video @ 03:00.0 and one for the audio @ 03:00.1. You could also use a virtual sound card, which will fall back to the onboard one, although this isn't true passthrough, so it will lag a few ms behind and your audio will be out of sync.
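Following the virsh/xm workflow from earlier in the thread, passing both functions would look roughly like this (a sketch; the BDFs match the lspci output above, and the domain name 'win7a' is a placeholder):

virsh nodedev-dettach pci_0000_03_00_0   # video function
virsh nodedev-dettach pci_0000_03_00_1   # audio function
xm pci-attach win7a 03:00.0
xm pci-attach win7a 03:00.1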

BTW, since it looks like KVM is going to be about a year or so behind on this, I did go ahead and get all the Xen stuff patched in and installed. I've yet to have the time to give it a try... I will soon.
Edited by lloyd mcclendon - 3/4/12 at 10:10am