
The VirtualBox architecture

Virtualization is, by nature, extraordinarily complex, especially so on x86 hardware. Understanding the VirtualBox source code therefore requires, at least for some components, a thorough understanding of the details of the x86 architecture as well as of the implementations of the host and guest platforms involved.

There are several ways to approach how VirtualBox works. This document describes them in order of increasing complexity.

The VirtualBox processes: a bird's eye view

When you start the VirtualBox graphical user interface (GUI), at least one extra process gets started along the way -- the VirtualBox "service" process VBoxSVC.

Once you start a virtual machine (VM) from the GUI, you have two windows (the main window and the VM), but three processes running. Looking at your system from Task Manager (on Windows) or some system monitor (on Linux), you will see these:

  1. VirtualBox, the GUI for the main window;
  2. another VirtualBox process that was started with the -startvm parameter, which means that this GUI process acts as a shell for a VM;
  3. VBoxSVC, the service mentioned above, which is running in the background to keep track of all the processes involved. This was automatically started by the first GUI process.

(On Linux, there's another daemon process called VBoxXPCOMIPCD which is necessary for our XPCOM implementation to work. We will ignore this for now; see COM-XPCOM interoperability for details.)
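
To illustrate how the second process comes about: a frontend simply starts the same binary again with the -startvm parameter, so each VM gets a process of its own. The sketch below is a minimal, hypothetical illustration for a POSIX host, not the actual GUI code; the machine name is made up.

    // Hypothetical sketch (not VirtualBox code): a frontend spawning a
    // separate VM process by re-running the same binary with -startvm.
    #include <cstdio>
    #include <cstdlib>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        // VBoxSVC is assumed to be started on demand by the COM/XPCOM
        // runtime; it is not spawned explicitly here.
        pid_t pid = fork();
        if (pid == 0)
        {
            // Child: becomes the VM process. The same binary acts as a
            // shell for a VM because of the -startvm parameter.
            execlp("VirtualBox", "VirtualBox", "-startvm", "My VM", (char *)NULL);
            std::perror("execlp");       // only reached if exec failed
            _exit(EXIT_FAILURE);
        }
        if (pid > 0)
        {
            int status;
            // Parent: the main-window GUI keeps running and eventually
            // reaps the VM process when the machine powers off.
            waitpid(pid, &status, 0);
        }
        return 0;
    }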

To the host operating system (OS), the VM that runs "inside" the second window looks like an ordinary program. VirtualBox is very well behaved in that respect: it pretty much takes control of a large part of your computer, executing a complete OS with its own set of guest processes, drivers, and devices inside this VM process, but the host OS does not notice much of this. Whatever the VM does, it's just another process in your host OS.

We therefore have two sorts of encapsulation in place with the various VirtualBox files and processes:

  1. Client/server architecture. All aspects of VirtualBox and the VMs that are running can be controlled with a simple, yet powerful, COM/XPCOM API. For example, there is a command-line utility called VBoxManage that allows you to control VMs just like the GUI does (in fact, many of the more sophisticated operations are not yet supported by the GUI). You can, for example, start a VM from the GUI (by clicking on the "Start" button) and stop it again from VBoxManage.

This is why the service process (VBoxSVC) is needed: it keeps track of which VMs are running and what state they're in.
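
As a toy model of this client/server split -- invented code, not the real COM/XPCOM interfaces -- the sketch below has a single service object that owns the authoritative machine states, with two "frontends" acting on it in turn, just as the GUI and VBoxManage act on the state kept by VBoxSVC:

    // Toy model only: one service owns the machine list and its states;
    // any number of clients (GUI, command line, ...) read and change
    // that state through the same interface.
    #include <cstdio>
    #include <map>
    #include <string>

    enum class MachineState { PoweredOff, Running, Paused };

    class Service                              // stands in for VBoxSVC
    {
        std::map<std::string, MachineState> m_machines;
    public:
        void registerMachine(const std::string &name)
        { m_machines[name] = MachineState::PoweredOff; }

        void setState(const std::string &name, MachineState s)
        { m_machines[name] = s; }

        MachineState state(const std::string &name) const
        { return m_machines.at(name); }
    };

    int main()
    {
        Service svc;                           // shared by all clients
        svc.registerMachine("My VM");

        // The "GUI" frontend starts the machine ...
        svc.setState("My VM", MachineState::Running);

        // ... and a "VBoxManage"-like frontend, running independently,
        // sees the same state and powers the machine off again.
        if (svc.state("My VM") == MachineState::Running)
            svc.setState("My VM", MachineState::PoweredOff);

        std::printf("final state: %d\n", (int)svc.state("My VM"));
        return 0;
    }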

  2. Frontend/backend architecture. The guts of VirtualBox -- everything that makes x86 virtualization complicated and messy -- are hidden in a shared library, VBoxVMM.dll (VBoxVMM.so on Linux). This can be considered a "backend", or black box, that stays the same no matter which frontend is used, and it is relatively easy to write another frontend without having to mess with the gory details of x86 virtualization. So, as an example, if you don't like the fact that the GUI is a Qt application, you can easily write a different frontend (say, using GTK); a small sketch of this split follows the list of frontends below.

In fact, VirtualBox already comes with several frontends:

  • The Qt GUI (VirtualBox) that you may already be familiar with.
  • VBoxManage, a command-line utility that allows you to control all of VirtualBox's powerful features.
  • A "plain" GUI based on SDL, with fewer fancy features than the Qt GUI. This is useful for business use as well as testing during development. To control the VMs, you will then use VBoxManage.
  • A Remote Desktop Protocol (RDP) server, which is console-only and produces no graphical output on the host, but allows remote computers to connect to it. This is especially useful for enterprises who want to consolidate their client PCs onto just a few servers. The client PCs are then merely displaying RDP data produced by the various RDP server processes on a few big servers, which virtualize the "real" client PCs. (The RDP server is not part of VirtualBox OSE, but is available with the full version; see the Editions page for details.)
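
To illustrate the frontend/backend split mentioned above, the sketch below shows the general shape of a frontend loading the backend as a shared library. It is illustration only -- the CreateVM entry point and its signature are invented, and a real frontend would normally reach the backend through the COM/XPCOM API described earlier rather than loading VBoxVMM by hand.

    // Illustrative sketch; the entry-point name and signature are made up.
    #include <cstdio>
    #include <dlfcn.h>

    int main()
    {
        // The backend -- all the messy x86 virtualization code -- lives
        // in one shared library; a frontend only needs its interface.
        void *hBackend = dlopen("VBoxVMM.so", RTLD_NOW);
        if (!hBackend)
        {
            std::fprintf(stderr, "cannot load backend: %s\n", dlerror());
            return 1;
        }

        // Hypothetical entry point standing in for the real interface.
        typedef int (*PFNCREATEVM)(const char *pszMachineName);
        PFNCREATEVM pfnCreateVM = (PFNCREATEVM)dlsym(hBackend, "CreateVM");
        if (pfnCreateVM)
            pfnCreateVM("My VM");   // frontend-specific UI code would follow

        dlclose(hBackend);
        return 0;
    }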

Inside a virtual machine

As noted above, from the perspective of the host OS, a virtual machine is just another process, and the host OS does not need much tweaking to support virtualization. Even though a ring-0 driver must be loaded in the host OS for VirtualBox to work, this driver does less than you might think. It is only needed for a few specific tasks, such as:

  • allocating physical memory for the VM;
  • saving and restoring CPU registers and descriptor tables when a host interrupt occurs while a guest's ring-3 code is executing (e.g. when the host OS wants to reschedule);
  • switching from the host's ring 3 into guest context;
  • enabling or disabling VT-x and similar hardware support.

Most importantly, the host's ring-0 driver does not mess with your OS's scheduling or process management. The entire guest OS, including its own hundreds of processes, is only scheduled when the host OS gives the VM process a timeslice.
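
To give a feel for what "switching from the host's ring 3 into guest context" involves, here is a deliberately naive sketch. Every name in it is invented, and the real code runs in ring 0 and handles far more state (segment registers, FPU state, MSRs and so on):

    // Invented names throughout; only illustrates the kind of state the
    // ring-0 driver saves and restores around a switch into the guest.
    #include <cstdint>

    struct CpuState
    {
        uint64_t regs[16];      // general-purpose registers
        uint64_t rip, rflags;   // instruction pointer and flags
        uint64_t cr3;           // page-table root (virtual memory layout)
        uint64_t gdtrBase;      // descriptor-table register
        uint16_t gdtrLimit;
    };

    // Placeholders: in reality this is low-level, machine-specific code.
    void saveCpuState(CpuState *pState)        { (void)pState; }
    void loadCpuState(const CpuState *pState)  { (void)pState; }

    // Conceptual "switch to guest context": save the host's CPU state,
    // install the guest's, and resume at the guest's instruction pointer.
    // A host interrupt (e.g. the scheduler) triggers the reverse path.
    void worldSwitch(CpuState *pHost, const CpuState *pGuest)
    {
        saveCpuState(pHost);    // so we can get back to the host later
        loadCpuState(pGuest);   // descriptor tables, CR3, registers, ...
    }

    int main()
    {
        CpuState host{}, guest{};
        worldSwitch(&host, &guest);   // in reality this runs in ring 0
        return 0;
    }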

After a VM has been started, from your processor's point of view, your computer can be in one of several states (the following requires a good understanding of the x86 ring architecture; a small sketch of how VirtualBox chooses between these modes follows the list):

  1. Your CPU can be executing host ring-3 code (e.g. from other host processes), or host ring-0 code, just as it would be if VirtualBox wasn't running.
  2. Your CPU can be emulating guest code (within the ring-3 host VM process). Basically, VirtualBox tries to run as much guest code natively as possible, but it can (slowly) emulate guest code as a fallback when it is not sure what the guest system is doing, or when the performance penalty of emulation is not too high. Our emulator (in src/emulator/) is based on QEMU and typically steps in when
    • guest code disables interrupts and VirtualBox cannot figure out when they will be switched back on (in these situations, VirtualBox analyzes the guest code using its own disassembler in src/VBox/Disassembler/);
    • certain single instructions need to be emulated; this typically happens when a nasty guest instruction such as LIDT has caused a trap;
    • any real-mode code is executed (e.g. BIOS code, a DOS guest, or any operating system startup).
  3. Your CPU can be running guest ring-3 code natively (within the ring-3 host VM process). With VirtualBox, we call this "raw ring 3". This is, of course, the most efficient way to run the guest, and hopefully we don't leave this mode too often. The more often we do, the slower the VM is compared to a native OS, because all context switches are very expensive.
  4. Your CPU can be running guest ring-0 code natively. Here is where things get hairy: the guest only thinks it's running ring-0 code, but VirtualBox has fooled the guest OS into entering ring 1 instead (which is normally unused with x86 operating systems).
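
The sketch below condenses the list above into the decision the VMM must make before letting the next chunk of guest code run. All names are invented, and the real logic is considerably more involved:

    // Invented names; a sketch of choosing between raw execution and
    // the recompiler, following the four states described above.
    #include <cstdio>

    enum class GuestMode { RealMode, Ring0, Ring3 };

    struct GuestCpu
    {
        GuestMode mode;
        bool interruptsDisabledOpaquely;   // IF cleared and we cannot tell
                                           // when it will be set again
    };

    enum class ExecPath { RawRing3, RawRing1, Recompiler };

    // Pick the cheapest safe way to execute the next block of guest code.
    ExecPath chooseExecutionPath(const GuestCpu &cpu)
    {
        if (cpu.mode == GuestMode::RealMode)    // BIOS, DOS, OS startup
            return ExecPath::Recompiler;
        if (cpu.interruptsDisabledOpaquely)     // native execution unsafe
            return ExecPath::Recompiler;
        if (cpu.mode == GuestMode::Ring0)       // guest "ring 0" runs in ring 1
            return ExecPath::RawRing1;
        return ExecPath::RawRing3;              // the common, fast case
    }

    int main()
    {
        GuestCpu cpu{GuestMode::Ring3, false};
        std::printf("path = %d\n", (int)chooseExecutionPath(cpu));
        return 0;
    }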

Also, in the VirtualBox source code, you will find lots of references to "host context" or "guest context". Essentially, these mean:

  • Host context (HC) means that the host OS is in control of everything including virtual memory. In the VirtualBox sources, the term "HC" will normally refer to the host's ring-3 context only. We only use host ring-0 (R0) context with our new Intel VT-x (Vanderpool) support, which we'll leave out of the picture for now (but see below).
  • Guest context (GC) means that the guest OS is basically in control, with VirtualBox keeping a watch on things in the background. Here, VirtualBox has set up CPU and memory exactly the way the guest expects, but it has inserted itself at the "bottom" of the picture. It can then assume control when nasty things happen: if a privileged instruction is executed, the guest traps, or an external interrupt occurs. VirtualBox may then delegate handling such events to the host OS. So, in the guest context, we have
    • ring 3 (hopefully executed in "raw mode" all the time);
    • ring 1 (which the guest thinks is ring 0, see above), and
    • ring 0 (which is VirtualBox code). This guest-context ring-0 code is also often called a "hypervisor".
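
In code terms -- with invented names, not the identifiers used in the sources -- the contexts and rings above boil down to something like this:

    // Invented names; a compact summary of the contexts discussed above.
    enum class Context
    {
        HostRing3,    // "HC": host OS in control; the VM process itself
        HostRing0,    // only used with the hardware (VT-x) support
        GuestRing3,   // raw mode: guest applications running natively
        GuestRing1,   // the guest's "ring 0" code, demoted by VirtualBox
        GuestRing0    // VirtualBox's own code: the hypervisor
    };

    int main()
    {
        Context c = Context::GuestRing0;   // where the hypervisor lives
        (void)c;
        return 0;
    }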

Intel VT-x ("Vanderpool") and AMD-V (SVM) support

With its latest processors, Intel has introduced hardware virtualization support, which they call "Vanderpool", "IVT", "VT-x", or "VMX" (for "virtual machine extensions"). As we started out rather early on this, we internally use the term "VMX". A thorough explanation of this architecture can be found on Intel's pages, but in summary, with these extensions, a processor always operates in one of the following two modes:

  • In root mode, its behavior is very similar to the standard mode of operation (without VMX), and this is the context that a virtual machine monitor (VMM) runs in.
  • The non-root mode (or guest context, if you want) is designed for running a virtual machine.

One notable novelty is that all four privilege levels (rings) are supported in either mode, so guest software can theoretically run at any of them. VT-x then defines transitions from root to non-root mode (and vice versa) and calls these "VM entry" and "VM exit".

In non-root mode, the processor will automatically cause VM exits for certain privileged instructions and events. For some of these instructions, it is even configurable whether VM exits should occur.
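
A conceptual sketch of this cycle might look as follows. The names are invented; real code configures a VMCS and enters non-root mode with the VMLAUNCH/VMRESUME instructions from ring 0 of root mode:

    // Invented names; a conceptual view of the VM entry / VM exit cycle.
    #include <cstdio>

    enum class ExitReason { PrivilegedInstruction, IoAccess, ExternalInterrupt };

    // Placeholder for "switch to non-root mode and run the guest until
    // something forces a VM exit".
    ExitReason runGuestUntilExit()
    {
        return ExitReason::ExternalInterrupt;   // pretend the host timer fired
    }

    int main()
    {
        bool keepRunning = true;
        while (keepRunning)
        {
            // VM entry: root mode -> non-root mode
            ExitReason reason = runGuestUntilExit();

            // VM exit: back in root mode, the monitor decides what to do
            switch (reason)
            {
                case ExitReason::PrivilegedInstruction:
                case ExitReason::IoAccess:
                    // emulate the instruction, then re-enter the guest
                    break;
                case ExitReason::ExternalInterrupt:
                    // let the host OS handle it; stop the toy loop here
                    keepRunning = false;
                    break;
            }
        }
        std::printf("guest stopped\n");
        return 0;
    }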

Since, however, nearly all operating systems in use today only make use of rings 0 and 3, and since a lot of operations in non-root mode are very expensive, VirtualBox does not use VT-x exactly as intended by Intel. Instead, we make partial use of it -- only where it makes sense and where it helps us to improve performance. So, as noted above, on non-VT-x machines our hypervisor lives in ring 0 of the guest context, below the guest ring-0 code that is actually run in ring 1. When VT-x is enabled, the hypervisor can safely live in ring 0 of the host context and gets activated automatically through the new VM exits.

We also have experimental support for AMD's equivalent to VT-x (called AMD-V or SVM). As you have read above, VT-x support is not of high practical importance for us, and since we have noticed that AMD's implementation comes with an even larger performance penalty plus a number of implementation errors, improving our support for AMD-V is currently not the most important item on our agenda.

Advanced techniques: code scanning, analysis and patching

As described above, we normally try to execute all guest code natively and use the recompiler as a fallback only in very rare situations. For raw ring 3, the performance penalty caused by the recompiler is not a major problem, as the number of faults is generally low (unless the guest allows port I/O from ring 3, something we cannot allow, as we don't want the guest to be able to access real ports).

However, as was also described previously, we manipulate the guest operating system to actually execute its ring-0 code in ring 1. This causes a lot of additional instruction faults, as ring 1 is not allowed to execute any privileged instructions (of which there are plenty in the guest's ring-0 code, of course). With each of these faults, our VMM must step in and emulate the code to achieve the desired behavior. While this normally works perfectly well, the resulting performance would be very poor since CPU faults tend to be very expensive and there will be thousands and thousands of them per second.

To make things worse, running ring-0 code in ring 1 causes some nasty occasional compatibility problems. Because of design flaws in the IA32/AMD64 architecture that were never addressed, some system instructions that should cause faults when called in ring 1 unfortunately do not. Instead, they just behave differently. It is therefore imperative that these instructions be found and replaced.
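
A classic example of this class of instruction is SGDT: it is not privileged (at least on CPUs without the much later UMIP feature), so instead of trapping, it simply stores the current GDT base and limit wherever it runs, giving the virtualization layer no chance to intervene. The snippet below -- GCC-style inline assembly on an x86 host -- runs it from ordinary user mode to show that it does not fault outside ring 0:

    // Demonstrates that SGDT executes without a fault outside ring 0
    // (x86, GCC/Clang inline assembly).
    #include <cstdint>
    #include <cstdio>

    #pragma pack(push, 1)
    struct DescriptorTableRegister
    {
        uint16_t  limit;
        uintptr_t base;
    };
    #pragma pack(pop)

    int main()
    {
        DescriptorTableRegister gdtr{};
        asm volatile("sgdt %0" : "=m"(gdtr));   // no trap, even in user mode
        std::printf("GDT base=%p limit=%u\n",
                    (void *)gdtr.base, (unsigned)gdtr.limit);
        return 0;
    }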

To address these two issues, we have come up with a set of unique techniques that we call "Patch Manager" (PATM) and "Code Scanning and Analysis Manager" (CSAM). Before executing ring 0 code, we scan it recursively to discover problematic instructions. We then perform in-situ patching, i.e. we replace the instruction with a jump to hypervisor memory where an integrated code generator has placed a more suitable implementation. In reality, this is a very complex task as there are lots of odd situations to be discovered and handled correctly. So, with its current complexity, one could argue that PATM is an advanced in-situ recompiler.

In addition, every time a fault occurs, we analyze the fault's cause to determine whether it is possible to patch the offending code to prevent it from causing more expensive faults in the future. This turns out to work very well, and we can reduce the fault rate caused by our virtualization to a level that performs much better than a typical recompiler, or even VT-x technology, for that matter.
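
To make the mechanism more concrete, here is a greatly simplified, invented sketch of the scan-and-patch idea. The real CSAM/PATM code uses a full disassembler, follows control flow recursively, handles instructions shorter than the jump it wants to place, and generates real replacement code in hypervisor memory:

    // Invented, heavily simplified sketch of scanning a block of guest
    // "ring 0" code and patching problematic instructions in place with
    // a jump into patch memory.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr uint8_t OP_CLI       = 0xFA;   // examples of instructions that
    constexpr uint8_t OP_STI       = 0xFB;   // must not run unmodified in ring 1
    constexpr uint8_t OP_JMP_REL32 = 0xE9;

    struct PatchManager
    {
        std::vector<uint8_t> patchMem;       // hypervisor-side generated code

        // Record a replacement for the instruction and return its offset in
        // patch memory (the actual code generation is omitted here).
        size_t emitReplacement(uint8_t opcode)
        {
            size_t off = patchMem.size();
            patchMem.push_back(opcode);      // stand-in for generated code
            return off;
        }
    };

    // Scan a code block; at every problematic instruction, emit a patch and
    // overwrite the site with a 5-byte JMP rel32 into patch memory.
    void scanAndPatch(uint8_t *pbCode, size_t cbCode, PatchManager &patm)
    {
        for (size_t off = 0; off + 5 <= cbCode; ++off)
        {
            if (pbCode[off] == OP_CLI || pbCode[off] == OP_STI)
            {
                size_t patchOff = patm.emitReplacement(pbCode[off]);
                int32_t rel32 = (int32_t)patchOff;   // placeholder displacement
                pbCode[off] = OP_JMP_REL32;
                std::memcpy(pbCode + off + 1, &rel32, sizeof(rel32));
                off += 4;                            // skip the bytes we wrote
            }
        }
    }

    int main()
    {
        uint8_t code[] = { 0x90, OP_CLI, 0x90, 0x90, 0x90, 0x90,
                           OP_STI, 0x90, 0x90, 0x90, 0x90, 0x90 };
        PatchManager patm;
        scanAndPatch(code, sizeof(code), patm);
        return 0;
    }

In the real sources, the same machinery is also driven from the fault handling just described, so that an expensive fault can be turned into a patch that prevents it from recurring.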
