[vbox-dev] Using HGCM

Nev vbox at zavalon.com
Wed Mar 10 23:46:01 GMT 2010


Hi Michael,

Thanks for your response. The timing was actually good, as I have been on
a very steep learning curve. I now have a simple test environment up and
running, and I am able to pass requests from the Windows guest application
to the host service.

Some additional background info: my Host is always Linux (Ubuntu, RedHat
or CentOS), and the Guest is always Windows XP; both OS and applications
are currently 32-bit. We will need to move to a 64-bit Host some time
soon, and 64-bit Windows 7 support will also be needed soon. The
applications are likely to remain 32-bit for some time yet.

My application can use many blocks of shared memory; the number and size
are determined by the end user of the application. I would prefer not to
set arbitrary limits by pre-defining a fixed block of PCI memory.

Such a block would need to be excessively large, and therefore wasteful,
to ensure that users did not run out of shared memory. Of course the
block could be configurable, but I do not think it would be easy to grow
the block while running. This is not a subject I am familiar with, so I
would be happy to be corrected.
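
From the little reading I have done, growing such a region on a Linux
host does at least look possible. The following is only a rough sketch
using standard calls (shm_open, ftruncate and the Linux-specific mremap);
the region name and the sizes are made up purely for illustration:

#define _GNU_SOURCE            /* for mremap() (Linux-specific) */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    size_t old_size = 1 << 20;          /* 1 MiB to start with */
    size_t new_size = 4 << 20;          /* grow to 4 MiB later */

    /* Create (or open) a named shared memory object and size it. */
    int fd = shm_open("/my_shared_block", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)old_size) != 0)
        return 1;

    void *p = mmap(NULL, old_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Later: enlarge the backing object and remap it
     * (mremap may move the mapping, hence MREMAP_MAYMOVE). */
    if (ftruncate(fd, (off_t)new_size) != 0)
        return 1;
    p = mremap(p, old_size, new_size, MREMAP_MAYMOVE);
    if (p == MAP_FAILED)
        return 1;

    printf("region now %zu bytes at %p\n", new_size, p);
    close(fd);
    shm_unlink("/my_shared_block");
    return 0;
}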

A possible advantage of using a Host PCI device, and a matching Guest PCI
device, would be if VBox could automatically make the Host PCI device
accessible to the Guest applications. This would be a very strong
advantage if no changes to VBox were required, as I would like to be able
to use the commercial binary.

The most general solution that I can think of would be for the Guest
application to pass requests to the host, and for the host to use mmap to
allocate a block of shared memory.
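
On the host side I imagine the allocation step looking something like the
sketch below; the helper name host_alloc_shared_block, and the idea that
the name and size arrive in a guest request, are my own invention, not
anything VBox defines:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: called by the host service when the guest
 * application asks for a new shared memory block of 'size' bytes.
 * Returns the host virtual address of the mapping, or NULL on error. */
static void *host_alloc_shared_block(const char *name, size_t size)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;

    if (ftruncate(fd, (off_t)size) != 0) {
        close(fd);
        return NULL;
    }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                 /* the mapping remains valid after close */
    return (p == MAP_FAILED) ? NULL : p;
}

That gives me a host virtual address, which leads straight to the
critical part below.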

I can see how to do, and have test code running for, all but one critical part.

How do I map from a Host virtual address, as returned from mmap(...) on
the host, to a Guest virtual address usable by a Guest application?

Any help or advice will be greatly appreciated, in particular on where
to look in the OSE source.

If you think it is more appropriate to use a Host PCI device, I will start
looking at that as an alternative. I have yet to do any searching of the
OSE for PCI devices, but I would appreciate any suggestions on where to
start.

Another idea: can the Host PCI device use mmap to allocate memory on the
fly, instead of using "Device" memory?

Another question, on terminology: the HGCM interface uses the terms
LinAddr (linear address) and PhysAddr (physical address).
I am very familiar with normal virtual and physical addresses as used in
real systems.
My question is: in HGCM, does PhysAddr refer to the Host's physical
address, the Guest's physical address, or both, with a conversion during
the HGCM call?
Does LinAddr refer to the Host's or the Guest's virtual address? I have
checked, and the address is not the same value on the Host and the Guest.

Sorry for the length of this email.

Thanks in advance,
Nev


-----Original Message-----
From: Michael Thayer <Michael.Thayer at Sun.COM>
To: Nev <vbox at zavalon.com>
Cc: VirtualBox developer's list <vbox-dev at virtualbox.org>
Subject: Re: [vbox-dev] Using HGCM
Date: Wed, 10 Mar 2010 11:09:40 +0100

Hello Nev,

On Thursday, 4 March 2010 at 19:57 +1100, Nev wrote:
> My apologies for my late response, but I have had an email system
> failure.
Apologies for mine.  I was trying to think of something brilliant to
answer, failed, and then got distracted by other things.


> But your responses have caused me to reconsider my approach, as the use
> of HGCM was going to be an attempt to get a shared memory block between
> the Guest OS and the Host OS. If I need to use the OSE then I think a
> more direct interface may be more appropriate.
As I can't think of some other brilliant solution, I would agree with
this.  I'm not fully clear though about what you need.  Do you need to
map a random block of host memory to the guest?  Or is it enough to be
able to share any block of host memory?  I wonder whether creating a
simple virtual PCI device with a block of PCI memory and making that
into a shared memory area on the host side would do what you need?

> I suspect that the current implementation of HGCM maps Guest PCI device
> memory into the Host's memory space. This is not ideal for my
> application.
HGCM reads directly from guest memory, since of course all guest memory
is also host memory.  However this is done by the HGCM infrastructure
code, not by the services themselves - they provide buffers into which
data is copied.
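
Very roughly, and with made-up type and function names rather than the
real service interface, a service's call handler ends up looking
something like this:

#include <stdint.h>

/* Illustrative stand-ins only, not the actual VirtualBox definitions. */
typedef struct EXAMPLE_PARM
{
    void    *pvBuf;   /* host-side copy of the guest buffer */
    uint32_t cbBuf;   /* size of that copy                  */
} EXAMPLE_PARM;

/* By the time the service's call handler runs, the HGCM infrastructure
 * has already copied the guest data into pvBuf.  The service only works
 * on that copy - it never dereferences guest linear or physical
 * addresses itself - and any results are copied back to the guest when
 * the call completes. */
static int exampleSvcCall(uint32_t u32Function, uint32_t cParms,
                          EXAMPLE_PARM *paParms)
{
    if (u32Function == 1 /* say, "upper-case the buffer" */ && cParms == 1)
    {
        char *pch = (char *)paParms[0].pvBuf;
        for (uint32_t i = 0; i < paParms[0].cbBuf; ++i)
            if (pch[i] >= 'a' && pch[i] <= 'z')
                pch[i] = (char)(pch[i] - 'a' + 'A');
        return 0;
    }
    return -1;
}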

> Ideally I would like to be able to map the Host's memory into the Guest's
> address space. This would mean that only the Guest's PR_XXXXSharedMemory
> functions should need to change, maybe with a flag used to request Host
> sharing or not.
> 
> Each Guest function would somehow need to pass each call to the Host's
> matching function, maybe via HGCM.
> 
> The performance of these functions should not be overly important, as
> long as the resulting shared memory can be read and written from both
> the host and guest application code without generating exceptions,
> except for the normal exceptions all memory is subjected to.
> 
> So now some questions:
> 1. Does this approach seems reasonable and doable?
> 2. Is anyone else working on a shared memory interface?
> 3. Is there any possibility of having this interface (provided it works)
> included into the OSE, and also included into the closed-source
> product?
To 2) and 3) - not that I am aware of, and we are always interested in
useful contributions.

> I can see a number of reasons NOT to include a shared memory interface:
> 1. Decreased separation between Guest and Host, which of course is the
> purpose of shared memory.
If the feature can be enabled or disabled for a given VM, then from our
point of view this is the VM owner's business.

> 2. Decreased security. Provided the shared memory is NOT executable, I
> don't think this is a major issue, but not all hardware/OS combinations
> can protect memory regions from execution.
> 3. Unable to live-migrate the Guest to another host while a shared
> memory block is open. Also snapshots would be difficult or impossible
> with shared memory blocks open.
Since as far as I remember a VM is basically saved and the VM
application on the host shut down when it is migrated, this just means
that your host code has to behave correctly during saving and shutdown,
which is a basic requirement for all code in VirtualBox.  If you create
a virtual PCI device, this would be handled in the device and host-side
"driver" code.

Regards,

Michael




