Opened 4 weeks ago
Last modified 11 days ago
#22384 new defect
Very slow performance of VirtualBox 7 on Intel Mac (with ArcaOS guest)
Reported by: | davidrmac | Owned by: | |
---|---|---|---|
Component: | other | Version: | VirtualBox-7.1.8 |
Keywords: | OS/2 | Cc: | |
Guest type: | other | Host type: | Mac OS X |
Description
Hi
I am seeing very slow performance in VirtualBox 7 when running ArcaOS (a modernised version of OS/2) as a guest on an Intel Mac. I am using the latest 7.1.8 version of VirtualBox and the latest Guest Additions, but the problem has existed since version 7 was introduced.
I appreciate this might be a niche use case for VirtualBox, but have you any tips for me to get better performance? Or if you think this might be a bug, can I do any other tests, or generate any other logs for you?
The main issue for me is that VirtualBox 6 on macOS Monterey was able to run ArcaOS very well, but VirtualBox 7 on macOS Sonoma and Sequoia runs the same VM, or even a fresh install, very slowly in comparison. I believe this may have to do with an Apple-mandated change in the later macOS versions that forces VirtualBox to use Apple's inbuilt hypervisor framework rather than its own, but I'm guessing.
If this is the case, do you know if VMWare has been forced to do the same? I ask because it seems that ArcaOS is running much more quickly when using VMWare 13 on Sonoma and Sequoia than when using VirtualBox 7.
I have done many experiments to speed things up, and it seems that using a 1920x1080 resolution, 65536 colours, 100% scaling, and non-retina app mode makes things more bearable, but the speed increase is only marginal.
I have previously raised the issue on the VirtualBox Forums, and they suggested I raise it here. Here's the link: https://forums.virtualbox.org/viewtopic.php?t=113195
I also raised it on the OS2 World Community Forum here: https://www.os2world.com/forum/index.php/topic,3828.0.html
A second issue seems to be that the reported guest CPU speed fluctuates wildly when using VirtualBox, but on VMWare it is a rock-solid, consistent number that is very similar to the host CPU speed. I don't know if this is related. There are some screenshots illustrating this in the posts above.
Any help you can provide would be much appreciated.
Many thanks
David
Attachments (1)
Change History (3)
by , 4 weeks ago
Attachment: ArcaOS-Exp-2025-04-21-12-25-22.log added
comment:1 by , 3 weeks ago
I think I know what the problem is.
Any time a device--any device--is accessed inside a VirtualBox VM, it causes a VM exit. When the VMM lived in kernelspace, this wasn't as much of a problem. Now, however, each VM exit requires a round trip back to userspace. This is where the slowness and the apparent instability of the CPU speed come from.
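For anyone wondering what that round trip looks like, here is a minimal, hypothetical sketch of a userspace run loop on Apple's x86 Hypervisor framework. This is not VirtualBox's actual NEM backend; the structure and handling are simplified for illustration. Every trapped device access hands control from the kernel back to a loop like this on the host side:

```c
/*
 * Hypothetical sketch of a userspace VM-exit loop on the macOS (x86)
 * Hypervisor framework -- not VirtualBox source. Build with
 * "-framework Hypervisor"; error handling and vCPU setup are omitted.
 */
#include <Hypervisor/hv.h>
#include <Hypervisor/hv_vmx.h>
#include <stdint.h>

static void run_vcpu(hv_vcpuid_t vcpu)   /* vcpu created on this thread */
{
    for (;;) {
        /* Enter the guest; control comes back here on every VM exit. */
        if (hv_vcpu_run(vcpu) != HV_SUCCESS)
            break;

        uint64_t reason = 0;
        hv_vmx_vcpu_read_vmcs(vcpu, VMCS_RO_EXIT_REASON, &reason);

        switch (reason & 0xFFFF) {   /* low 16 bits = basic exit reason */
        case 30:                     /* I/O instruction (IN/OUT/INS/OUTS) */
            /* ...decode and emulate the port access in userspace... */
            break;
        case 48:                     /* EPT violation, e.g. a guest write to a
                                        write-protected framebuffer page */
            /* ...mark the page dirty, adjust the mapping, resume... */
            break;
        default:                     /* HLT, MSR access, interrupt window, ... */
            break;
        }
        /* Each iteration is a full kernel->userspace->kernel round trip. */
    }
}
```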
tl;dr: The biggest contributor by far is the framebuffer. Most of the slowness would probably vanish if VBox stopped tracking the dirty areas.
There are two broad categories of device accesses here:
- Memory-mapped I/O is mostly the video framebuffer, which VirtualBox tracks so that only the dirty portions of the screen are updated. But, the way it's currently implemented, VBox has to exit the VM in order to update the dirty bits. This is why reducing the resolution and color depth makes it faster. Ironically, this tracking was probably intended to speed up graphics. What VBox really needs here is the accessed/dirty bits from the EPT; then it wouldn't need to exit the VM every time display memory is written. The only problem is that those aren't yet available through the Darwin Virtualization framework. In this case, VBox may simply need to stop tracking the dirty portions of the screen and just redraw the window every frame (see the framebuffer sketch after this list). I suspect this is what VMWare Fusion does.
Other MMIO areas need to cause VM exits to have their special effects. There's little that can be done about those, at least that I can think of off the top of my head. Fortunately, writing to those is considerably less frequent than writing to the framebuffer.
- Port I/O necessarily causes a VM exit. This is less frequent than MMIO, but can still cause slowness in places. The Darwin Virtualization framework offers an "I/O notifier" interface, which avoids the round trip by sending Mach messages to a Mach port the client specifies. But, AFAICT, it only supports port output; I suppose IN instructions will continue to cause VM exits. It has no provision for string I/O, either--I would presume that a REP OUTS instruction, commonly used for legacy ATA, causes multiple messages, one for each iteration (see the port I/O sketch after this list). There is unfortunately no other solution to this.
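To make the framebuffer point concrete, here is a rough, hypothetical sketch (the names and types are made up, not VirtualBox code) contrasting the two display-refresh strategies: dirty-page tracking, which costs a VM exit whenever the guest touches a clean VRAM page, versus redrawing the whole surface every refresh with no exits at all.

```c
/*
 * Illustrative sketch only -- hypothetical names, not VirtualBox code.
 * Contrasts the two display-refresh strategies discussed above for a
 * 1920x1080, 32-bit framebuffer.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FB_BYTES   (1920u * 1080u * 4u)            /* 32-bit pixels */
#define PAGE_SIZE  4096u
#define FB_PAGES   ((FB_BYTES + PAGE_SIZE - 1) / PAGE_SIZE)

typedef struct {
    uint8_t *guest_vram;       /* guest framebuffer mapped into the host */
    uint8_t *host_surface;     /* what the window actually shows */
    bool     page_dirty[FB_PAGES];
} display_t;

/*
 * Strategy A: dirty tracking. VRAM pages are kept write-protected, so the
 * guest's first write to a clean page faults, exits the VM, and lands here.
 * Blits stay small, but every newly dirtied page costs a userspace round trip.
 */
void on_vram_write_fault(display_t *d, size_t page)
{
    d->page_dirty[page] = true;
    /* ...unprotect the page and resume the guest; the page is re-protected
       after the next blit, so the cycle repeats... */
}

void refresh_dirty(display_t *d)
{
    for (size_t p = 0; p < FB_PAGES; p++) {
        if (!d->page_dirty[p])
            continue;
        size_t off = p * PAGE_SIZE;
        size_t len = (off + PAGE_SIZE > FB_BYTES) ? FB_BYTES - off : PAGE_SIZE;
        memcpy(d->host_surface + off, d->guest_vram + off, len);
        d->page_dirty[p] = false;
    }
}

/*
 * Strategy B: no tracking. The guest writes VRAM at memory speed with no
 * exits at all; the host simply copies the whole framebuffer every refresh.
 * More memory bandwidth, far fewer VM exits.
 */
void refresh_full(display_t *d)
{
    memcpy(d->host_surface, d->guest_vram, FB_BYTES);
}
```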
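And to illustrate the string-I/O point, here is a hypothetical host-side port-output handler (again, not the real VirtualBox device model or the framework's notifier API): a 512-byte sector pushed with REP OUTSW to the legacy ATA data port arrives as 256 separate 16-bit transfers, and each transfer can be its own VM exit or Mach message.

```c
/*
 * Hypothetical port-output handler -- not the real VirtualBox device model
 * or the framework's notifier API. A 512-byte ATA sector written with
 * "REP OUTSW" means 256 calls into this function. Little-endian host assumed.
 */
#include <stdint.h>
#include <string.h>

#define ATA_DATA_PORT 0x1F0
#define SECTOR_SIZE   512

typedef struct {
    uint8_t  sector[SECTOR_SIZE];
    unsigned fill;                 /* bytes collected for the current sector */
} ata_state_t;

/* Called once per emulated OUT; for REP OUTSW that means once per iteration. */
void handle_port_out(ata_state_t *ata, uint16_t port, uint32_t value, unsigned size)
{
    if (port != ATA_DATA_PORT || size != 2)
        return;                    /* other ports go to other device models */

    memcpy(&ata->sector[ata->fill], &value, 2);
    ata->fill += 2;

    if (ata->fill == SECTOR_SIZE) {
        /* ...hand the completed sector to the virtual disk backend... */
        ata->fill = 0;
    }
}
```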
comment:2 by , 11 days ago
@cdavis5x thanks for the explanation. If I were a bit more familiar with the code, I would love to have a look at changing it to stop tracking the dirty portions and redraw the entire screen, to see if it made a difference. However, I suspect it would be beyond my abilities ...
Log file