Originally Posted by powerhouse
1. With a hyper-threaded CPU, a 4-core processor gives you 8 threads. You could assign 7 vcpus to the guest and leave 1 vcpu for the host, but that may be borderline. In a 7:1 setup you could increase dom0's scheduler weight, giving it precedence over CPU time when needed.
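If you go the 7:1 route, one way to give dom0 precedence under Xen's default credit scheduler is to raise its weight. A sketch (the value 512, twice the default of 256, is just an example - weights are relative, so tune to taste):

```shell
# Double dom0's weight under the Xen credit scheduler.
# "Domain-0" is the standard dom0 name; 512 vs. the default 256 is an example.
xm sched-credit -d Domain-0 -w 512

# Show the current weight/cap to verify the change
xm sched-credit -d Domain-0
```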
2. If RAID1 doesn't work, use LVM with mirroring (similar to RAID1). Well, I'm actually a little confused now. It seems you are trying to access a Linux software RAID from Windows? I doubt that works, nor will a natively installed Windows be able to read LVM volumes.
Here is what I would consider:
a. Run your SSD (/dev/sda) with LVM for everything except /boot (and /efi if you have it - though I discourage using UEFI unless really needed). Once everything works, boot your PC with a Linux live USB stick and back up the entire SSD, either to a disk of the same or larger size using dd if=/dev/sda of=/dev/backup_disk, or into an image file with dd if=/dev/sda of=backup.img, which must reside on another disk. That will take quite some time. With the first option you end up with a bootable, fully installed disk that you can simply swap in for your SSD; with the second option you would need to replace the SSD with a new one of the same or larger size and restore the disk image via the dd command (from the Linux live USB stick).
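Booted from the live stick, the backup step might look like this. The device names /dev/sda and /dev/sdb and the mount point are assumptions - always double-check with lsblk first, because dd will happily overwrite the wrong disk:

```shell
# Identify the disks first; /dev/sda (source) and /dev/sdb (target) are examples.
lsblk

# Option 1: clone the SSD to a second disk of the same or larger size
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

# Option 2: dump the SSD into an image file on another (mounted) disk
dd if=/dev/sda of=/mnt/backupdisk/backup.img bs=4M

# Restoring the image later is simply the reverse direction:
# dd if=/mnt/backupdisk/backup.img of=/dev/sda bs=4M
```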
b. To use Windows with Linux software RAID or LVM, Windows must be installed as a domU. In this case it runs on top of the hypervisor / Linux, and they will take care of the storage. Xen will create some storage container (I don't know what it's called) that resides on your LVM volume(s). I once tried to format an LVM striped volume (similar to RAID0) with NTFS and specify it using the phy:/dev... option in the domU configuration file, but this gave me some strange results: Windows would see it as a drive, but with multiple partitions, some formatted, some not. I abandoned this attempt.
To access the Windows file system under Linux (dom0) you need to use the kpartx command.
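For example, assuming the Windows domU lives on an LVM volume /dev/vg0/windows (volume group and volume names here are hypothetical), mapping and mounting its partitions from dom0 might look like:

```shell
# Create device-mapper entries for each partition inside the volume
kpartx -av /dev/vg0/windows

# The partitions now appear under /dev/mapper, e.g. vg0-windows1, vg0-windows2 ...
# Mount the NTFS partition read-only (safer while experimenting)
mount -o ro -t ntfs /dev/mapper/vg0-windows2 /mnt/windows

# When done: unmount and remove the mappings again
umount /mnt/windows
kpartx -dv /dev/vg0/windows
```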
c. As to the data drives: You could run software RAID1 combined with LVM, or just LVM with mirror. I think hardware RAID is generally discouraged. For example, if your computer or specifically the hardware RAID controller fails on you, you would have to get the exact same computer/controller to restore your information, as each manufacturer may implement RAID in a different way. With Linux software RAID or LVM you won't have this issue, as it's either backwards compatible (if you use a newer version) or you can just keep one or two USB sticks with the Linux live system in case you need it.
As mentioned under b., if you use these data drives from within a Windows domU, you can mount/access them under Linux via kpartx.
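A sketch of both variants for two data drives - the device names /dev/sdc and /dev/sdd, the volume group name and the sizes are assumptions:

```shell
# Variant 1: Linux software RAID1 with mdadm, with LVM on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
pvcreate /dev/md0
vgcreate vgdata /dev/md0
lvcreate -L 500G -n data vgdata

# Variant 2: plain LVM with a mirrored logical volume
pvcreate /dev/sdc /dev/sdd
vgcreate vgdata /dev/sdc /dev/sdd
lvcreate -L 500G -m1 -n data vgdata
```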
Other backup possibilities with LVM are snapshots, which can be taken on a live system. You need a snapshot volume at least as big as the amount of data you expect to be written/changed during the live backup. Just for clarity: a snapshot by itself doesn't provide a backup!!! Once the snapshot is created, you need to back it up to another drive/media or a remote location (rsync?, which of course works locally as well). For the actual backup use dd, or better perhaps rsync, or a backup utility under Linux. At the end you must remove the snapshot. It's best to read up on LVM and snapshots to get a better understanding.
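The snapshot cycle described above might look like this. The volume names, mount points and the 10G snapshot size are assumptions - the size must cover the writes you expect during the backup:

```shell
# 1. Take the snapshot (works on a live, mounted system)
lvcreate -s -L 10G -n data_snap /dev/vgdata/data

# 2. Mount it read-only and copy it off to other media
mount -o ro /dev/vgdata/data_snap /mnt/snap
rsync -a /mnt/snap/ /mnt/backupdrive/data/

# 3. Clean up - snapshots must not be left lying around
umount /mnt/snap
lvremove -f /dev/vgdata/data_snap
```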
To sum up the backup strategy:
- RAID1 or LVM mirroring gives you online redundancy of your data. The caveat is that if you accidentally delete some file(s), the deletion is mirrored and they are gone.
- With LVM snapshots plus a backup you can create copies of your data manually or at given times (via a cron job). Some backup utilities like rsync don't need to copy all of the data each time, but will back up only the files that have changed. There are also backup utilities that keep a history of the original files, so if you deleted some files you can recover them even after a later backup has run.
I would go with a combination of both RAID1 or LVM mirror and some snapshot/backup scheme with history, with the backup preferably on external media.
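One common way to get such a snapshot/backup scheme with history is rsync with --link-dest, where unchanged files are hard-linked to the previous run instead of copied again. A sketch - the paths and the cron schedule are placeholders:

```shell
#!/bin/sh
# Daily backup with history via hard links; run from cron, e.g.:
#   0 3 * * * /usr/local/bin/backup.sh
SRC=/mnt/data/
DEST=/mnt/backupdrive/history
TODAY=$(date +%F)

mkdir -p "$DEST/$TODAY"
# Unchanged files are hard-linked against the previous run ("latest"),
# so each dated directory looks like a full copy but costs little space.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"

# Point "latest" at the newest run for the next invocation
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```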
3. Yeah, pciback takes some time to figure out. Let's start with this command:

xm pci-list-assignable-devices

If you don't get a list of devices that can be assigned, then they haven't been detached yet. A short how-to on this and VGA passthrough (for Ubuntu-based systems) is http://gro.solexiv.de/2012/08/pci-passthrough-howto/, and a longer one with more details (again for Ubuntu-based systems) is mine here: http://forums.linuxmint.com/viewtopic.php?f=42&t=112013&p=629268#p629268. In my how-to see part 3, steps 3-6.
Be aware that if you use Fedora, you may need to make some changes or use different commands (to update your init.d). But the idea is the same.
Here in essence:
a) If xen-pciback isn't compiled into the kernel, you need to load it. Under Ubuntu/Debian systems you can use the /etc/modules file and add "xen-pciback passthrough=1" (without quotes) to load the pciback module during boot.
b) You need to detach the PCI devices from their drivers to make them assignable later on. In both how-tos this is done by the pciback script. It unbinds the driver from its PCI device, adds a new slot to the PCI backend list, and binds the device to the new slot. Note the long PCI IDs with domain, e.g. 0000:01:00.0.
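In essence the pciback script does the following via sysfs. The PCI ID 0000:01:00.0 is an example - substitute your own device, which you can find with lspci -D:

```shell
#!/bin/sh
# Detach a PCI device from its current driver and hand it to pciback.
BDF=0000:01:00.0   # example ID; find yours with: lspci -D

# Unbind from the current driver, if one is attached
if [ -e "/sys/bus/pci/devices/$BDF/driver" ]; then
    echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"
fi

# Register a new slot with the PCI backend and bind the device to it
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind
```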
c) Once the PCI device has been bound to pciback, it becomes assignable. You can use the command above (xm pci-list-assignable-devices) to see the assignable devices and check that all is OK.
From there on you should be able to execute xm create /etc/xen/your_domU.cfg, or xm new /etc/xen/your_domU.cfg followed by virt-manager (in both cases pointing to your domU configuration file).
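In the domU configuration file, the passed-through devices are then listed on a pci line, e.g. (the device IDs are examples, and this is only an excerpt, not a complete configuration):

```
# Excerpt from /etc/xen/your_domU.cfg
# Pass through the devices previously bound to pciback:
pci = [ '01:00.0', '01:00.1' ]
```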