[vbox-dev] detailed paths from vm i/o to host?
huisinro at yahoo.com
Thu Dec 9 07:08:05 PST 2010
Achim and Knut,
Thanks for the info.
As I understand it, ESXi does all device emulation inside the hypervisor layer; I was wondering if that might be one of the reasons it is faster, and thus my question.
ESXi must have lower-level device drivers within the hypervisor, I guess, since it performs I/O inside the hypervisor.
In other words, if someone used the VBox code to build a type I hypervisor, would the performance be close to ESXi's? (I know that would be a big undertaking.)
--- On Thu, 12/9/10, Achim Hasenmüller <achim.hasenmueller at oracle.com> wrote:
From: Achim Hasenmüller <achim.hasenmueller at oracle.com>
Subject: Re: [vbox-dev] detailed paths from vm i/o to host?
To: "Knut St. Osmundsen" <knut.osmundsen at oracle.com>
Cc: vbox-dev at virtualbox.org
Date: Thursday, December 9, 2010, 12:44 AM
> I wonder if anyone can provide some detailed info on how guest I/O reaches the host emulation layer (user space). For example, how does an "outb port_num, val" instruction travel from the guest kernel to host user space on AMD64 with VT-x/AMD-V?
> The reason I asked is that I was wondering whether performance could be further improved by moving some of the emulation layer from host user space into the host kernel; that should save at least two context switches.
> KVM provides some callbacks; does any similar API exist for VBox?
We do a lot more in kernel context than KVM, and we have been doing so for many years. Our virtual device architecture lets you specify which handlers run in which context, so you can handle the simpler and more frequent cases in kernel mode, and even decide in kernel mode that you can't handle a call and forward it to user mode.