1. Software Overview

The Paradice on Xen software consists of three repositories.

  • The kernel modules: This repository contains the main components of Paradice, including the frontend and backend drivers and the device info modules.

  • The Linux kernel: This is a patched Ubuntu Linux kernel that needs to be installed in the VMs for Paradice.

  • The Xen hypervisor: This is a patched Xen hypervisor needed for Paradice.

Our source code is hosted on GitHub. You can use the following commands to download the repositories.

# Download the modules
$ git clone https://github.com/arrdalan/dfv_modules.git

# Download the kernel
$ git clone https://github.com/arrdalan/ubuntu_dfv.git

# Download the hypervisor
$ git clone https://github.com/arrdalan/xen_dfv.git

2. Instructions

In these instructions, we will show you how to set up Paradice to virtualize a Radeon GPU, a mouse, and a keyboard.

Note: Preferably, download the repos to ~/dfv in your file system. This will make it easier to follow the rest of the instructions.
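
For example, the layout assumed throughout the rest of these instructions can be set up as follows:

# Create the suggested directory layout and clone the three repositories into it
$ mkdir -p ~/dfv
$ cd ~/dfv
$ git clone https://github.com/arrdalan/dfv_modules.git
$ git clone https://github.com/arrdalan/ubuntu_dfv.git
$ git clone https://github.com/arrdalan/xen_dfv.git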

2.1. Compilation

Xen:

Follow the instructions in the following links to compile Xen from source:

Kernel:

Follow the instructions in the following link to compile the Ubuntu kernel from source.

Note: compile the kernel for the 32-bit x86 PAE architecture (binary-generic-pae), as that is the only architecture currently supported by Paradice.
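
As a rough sketch of one common way to build the Ubuntu kernel from its packaging tree (this assumes a 32-bit build environment with the usual Ubuntu kernel build dependencies installed; details may differ for your setup):

$ cd ~/dfv/ubuntu_dfv
$ fakeroot debian/rules clean
$ fakeroot debian/rules binary-headers binary-generic-pae
# The resulting linux-image-*-generic-pae*.deb packages land in the parent
# directory; they are installed inside the VMs in section 2.2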

Finally, you can compile the kernel modules as follows:

$ cd ~/dfv/dfv_modules
$ make

Note: the modules Makefile assumes that the compiled kernel is located in ~/dfv/ubuntu_dfv.

2.2. Setting up the hypervisor and the VMs

Now that you have compiled everything, you can set up the system.

First, follow the links above to configure Xen on your system.

Then, create two Xen HVM VMs, one to serve as the driver VM and one as the guest VM, install 32-bit Ubuntu 12.04 in them, and then update their kernels with the image you built earlier.
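
For example, assuming the linux-image .deb packages produced in section 2.1 have been copied into each VM, the kernel update can be done roughly like this:

# Inside each VM: install the PAE kernel built earlier and reboot into it
$ sudo dpkg -i linux-image-*-generic-pae*.deb
$ sudo reboot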

2.3. PCI Passthrough

Now, you need to assign the I/O devices (e.g., the GPU, mouse, and keyboard) to the driver VM. The links mentioned earlier have good instructions for PCI passthrough in Xen.
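
As a hypothetical outline only (the PCI addresses are placeholders, and the exact commands depend on your Xen and toolstack versions; older setups use xm and pciback instead of xl):

# In dom0: make the devices assignable to guests
$ sudo xl pci-assignable-add 0000:01:00.0   # the GPU (placeholder address)
$ sudo xl pci-assignable-add 0000:00:1d.0   # the USB controller for mouse/keyboard

# Then list the same devices in the driver VM's config file, e.g.:
# pci = [ '0000:01:00.0', '0000:00:1d.0' ]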

Note 1: You might need the following update to your /etc/default/grub for the GPU passthrough to work. (Remember to run update-grub and reboot for changes to take effect)

# Original:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

# After update
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nocrs"

Note 2: Instructions on the web ask you to use the gfx_passthru = 1 option in the VM config file for GPU assignment. We found that this breaks the assignment of our test Radeon GPU. Instead, we assigned the GPU as a normal PCI device, i.e., with gfx_passthru = 0 (although this might not work with other GPUs). We then removed the Xen emulated VGA card so that the VM has only one VGA device, namely the GPU we assigned. Assuming the emulated VGA card is on PCI slot 0000:00:02.0, run the following command to remove it.

# Run this as root inside the VM
$ echo 1 > /sys/bus/pci/devices/0000\:00\:02.0/remove

Note 3: For the mouse and keyboard, you need to assign to the driver VM the USB hub that these devices are connected to.
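
To figure out which PCI device your mouse and keyboard hang off, something along these lines can help (a sketch; the output and addresses depend on your machine):

# List USB controllers and their PCI slots, then see which bus/port your devices use
$ lspci | grep -i usb
$ lsusb -t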

2.4. Loading the kernel modules

We have developed some scripts that make it easy to load and configure the kernel modules in both the driver VM and the guest VM. Download them from here. Untar them and put the scripts directory in ~/dfv/dfv_modules/.
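
For example, assuming the tarball is named scripts.tar.gz and was saved to ~/Downloads (both the name and the location are hypothetical; adjust them to your download):

# Unpack the scripts next to the modules
$ tar -xzf ~/Downloads/scripts.tar.gz -C ~/dfv/dfv_modules/
$ ls ~/dfv/dfv_modules/scripts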

2.4.1. Driver VM

First, you need to create the device info files corresponding to the devices you want to virtualize. They are mainly needed by the guest VM to fake the presence of these devices and fool applications such as X. However, the info files for input devices are also needed in the driver VM, because the device files for input devices can get different names from boot to boot depending on how udev configures them. Using these info files allows Paradice to automatically connect the virtual device files in the guest VM to the actual device files in the driver VM, no matter what names udev gives to the input device files.

Identify the input devices that you want to virtualize. In Linux, the info for all input devices can be found in /sys/class/input/. In this directory, you'll find several subdirectories named inputX, one per input device. Identify which ones correspond to the input devices you want to virtualize, e.g., by reading the inputX/name files, which tell you each device's name (a one-liner for listing them all is shown right after the extraction commands below). For example, you might identify input5 and input6 as your keyboard and mouse, since input5/name and input6/name both read "Logitech USB Receiver". Then, you need to extract the info of these two devices into device info files using the following two commands in the driver VM:

$ cd ~/dfv/dfv_modules
$ mkdir info_files
$ source scripts/extract_input_info.sh 5 info_files/keyboard_info.txt
$ source scripts/extract_input_info.sh 6 info_files/mouse_info.txt
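
If you are unsure which inputX entries belong to your devices, the following one-liner prints each entry together with its name:

# Print every input device and its reported name
$ for d in /sys/class/input/input*; do echo "$d: $(cat $d/name)"; done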

For the GPU, find the PCI slot that the GPU is sitting on in the driver VM; this can be different from the GPU's PCI slot as seen in dom0 (a quick way to look it up is shown after the command below). For example, let's say the GPU is on 0000:00:05.0. You can now create the info file as follows:

$ cd ~/dfv/dfv_modules
$ source scripts/extract_gpu_info.sh 0000 00 05 0 info_files/gpu_info.txt
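
To look up the GPU's PCI slot as seen inside the driver VM, you can, for example, run:

# Run inside the driver VM; -D prints the full domain:bus:slot.function form
$ lspci -D | grep -i vga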

Now you have all three info files in ~/dfv/dfv_modules/info_files. You can compare them with the three example info files provided in the info_files directory of the scripts tarball you downloaded earlier. Your info files might look different, but the examples should give you an idea of what an info file should look like. Fortunately, generating the info files is a one-time effort: you don't have to regenerate them every time you use Paradice, just once per device.
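
One hypothetical way to get the modules and info files into the driver VM, assuming the VM is reachable over SSH (the user name and host are placeholders):

# Copy the whole module tree, including scripts/ and info_files/, into the driver VM
$ scp -r ~/dfv/dfv_modules user@driver-vm:~/dfv/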

Copy the compiled modules to the driver VM (e.g., with scp as sketched above). Assuming that the modules end up in ~/dfv/dfv_modules/ in the VM, you can load them by running:

$ cd ~/dfv/dfv_modules
$ source scripts/load_server.sh

If successful, this should print out something like:

device file successfully added
device file successfully added
device file successfully added
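
As a quick sanity check (this assumes the module names contain dfv, as the repository and module names suggest), you can verify that the modules are loaded:

$ lsmod | grep dfv
$ dmesg | tail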

2.4.2. Guest VM

Move the compiled modules to ~/dfv/dfv_modules in the guest VM. Also, move the info files that you generated in the driver VM to ~/dfv/dfv_modules/info_files in the guest VM, and then run:

$ cd ~/dfv/dfv_modules
$ source scripts/load_client.sh

If successful, this should print something like this:

device file successfully added
Input device successfully registered
device file successfully added
Input device successfully registered
device file successfully added

Note: The scripts will try to fake the presence of the GPU on a virtual PCI slot with the exact same slot number as the one the GPU is sitting on in the driver VM, e.g., 0000:00:05.0 in the example above. It is a requirement that these two PCI slots match. If this slot is already taken in the guest VM, the scripts will fail. In any case, after the scripts are done, run lspci in the guest VM and make sure the virtual GPU's PCI slot is the same as the GPU's PCI slot in the driver VM.
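
For example, run the following in both the driver VM and the guest VM and compare the VGA lines; the PCI slots should be identical:

$ lspci -D | grep -i vga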

2.4.3. Connecting the VMs

The two VMs communicate using a Xen ring buffer and an event channel. To set them up, go to dom0 and run:

$ cd scripts   # the same scripts you downloaded earlier
$ source dfv_device_attach.sh <driver VM name> <guest VM name>

If the connection is successful, you should be able to see a line in the driver VM and guest VM’s kernel log saying that they are connected.
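
For example, check the tail of the kernel log in each VM right after running the attach script (the exact message text comes from the Paradice modules):

$ dmesg | tail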

2.4.4. Testing Paradice

Now, we’re all set and we can test everything. Go to the guest VM. Make sure that you have already removed the emulated VGA card and that X is not running in the guest VM.

Now test Paradice by running:

$ xinit

This should bring up a simple white X terminal. Check that the mouse and keyboard are working. If you have an OpenGL app somewhere, you can now navigate to it and run it.
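
For a quick OpenGL test, the standard Mesa demo tools work well if they are installed in the guest VM (on Ubuntu they come in the mesa-utils package):

# Confirm that rendering goes through the Radeon driver, then run a simple demo
$ glxinfo | grep -i "opengl renderer"
$ glxgears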

You can also run the full Ubuntu desktop session through LightDM (the display manager). First kill off xinit by pressing Ctrl+C. Then run:

$ service lightdm start

Now, you're in the Ubuntu desktop session.

You can kill off the desktop session by running:

$ service lightdm stop

You can also run compute jobs on the GPU through Paradice. Since Paradice currently supports the open source Radeon driver, you need to use compute frameworks compatible with that driver, such as GalliumCompute.
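
For instance, if an OpenCL stack built on the open source Radeon driver (such as GalliumCompute/Clover) is installed in the guest VM, a generic tool like clinfo should list the GPU as a compute device (clinfo is not part of Paradice; it is only a convenient check):

# List the OpenCL platforms and devices visible inside the guest VM
$ clinfo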

One main reason to use Paradice for GPU virtualization is to access the GPU from multiple VMs (rather than assigning the GPU to a single VM). You can indeed support multiple guest VMs with a single driver VM using the released source code: simply configure another guest VM and connect it to the driver VM. While this should be enough for compute jobs, the currently released code does not include support for seamless switching between the graphical sessions of the VMs; for now, you need to stop the graphical application in one VM before launching an application in another. To address this, we will test and release an update to the modules (specifically to the dfv_drm module) that will allow you to switch between graphical sessions using simple key combinations, similar to how you can switch between multiple virtual terminals in Linux.