VirtualBox

Opened 5 years ago

Last modified 4 years ago

#18089 assigned defect

Host CPU load 100% for idle guest

Reported by: bcandler
Owned by: pentagonik
Component: other
Version: VirtualBox 5.2.20
Keywords:
Cc:
Guest type: Linux
Host type: Mac OS X

Description

Host: macOS 10.12.6 Guest: Linux (Ubuntu 16.04)

After upgrading VirtualBox to 5.2.20, I ran a single guest which I'd used many times before. The guest decided to do an auto-update, which is fine.

However, even after the update had finished, on the *host* the CPU used by the VirtualBox process remained stuck at 100%, as shown by Activity Monitor and (here) "top -o cpu":

Processes: 326 total, 3 running, 323 sleeping, 1550 threads                                                         08:15:02
Load Avg: 2.34, 2.50, 2.10  CPU usage: 31.47% user, 7.26% sys, 61.25% idle
SharedLibs: 229M resident, 51M data, 62M linkedit. MemRegions: 65898 total, 7787M resident, 147M private, 1418M shared.
PhysMem: 14G used (4448M wired), 2268M unused. VM: 879G vsize, 627M framework vsize, 0(0) swapins, 0(0) swapouts.
Networks: packets: 197313/179M in, 123489/30M out. Disks: 385658/8849M read, 161264/4111M written.

PID   COMMAND      %CPU  TIME     #TH   #WQ  #PORT MEM    PURG   CMPRS  PGRP PPID STATE    BOOSTS         %CPU_ME %CPU_OTHRS
1810  VirtualBoxVM 105.5 17:19.54 44/1  5    392   2193M  8048K  0B     1810 1805 running  *1[5]          0.00000 0.00000
335   iTerm2       14.6  01:56.18 11    5    281-  164M-  34M-   0B     335  1    sleeping *0[305]        0.04101 1.50065

But inside the *guest*, CPU was ~99% idle, again shown here with "top":

top - 08:16:00 up 44 min,  1 user,  load average: 0.00, 0.07, 0.11
Tasks: 583 total,   1 running, 582 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  2048156 total,   238796 free,  1136680 used,   672680 buff/cache
KiB Swap:   786428 total,   747596 free,    38832 used.   554244 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 4441 101001    20   0 2552704  62904  14900 S  0.3  3.1   0:05.32 java
 7579 brian     20   0   42240   4268   3172 R  0.3  0.2   0:00.02 top
    1 root      20   0   38320   6280   4016 S  0.0  0.3   0:02.16 systemd

After shutting down and restarting the guest, the problem went away. This again on the host:

Processes: 325 total, 3 running, 322 sleeping, 1501 threads                                                         08:18:39
Load Avg: 1.90, 2.28, 2.10  CPU usage: 2.68% user, 6.34% sys, 90.97% idle
SharedLibs: 229M resident, 51M data, 62M linkedit. MemRegions: 66300 total, 7793M resident, 150M private, 1428M shared.
PhysMem: 14G used (4458M wired), 2178M unused. VM: 876G vsize, 627M framework vsize, 0(0) swapins, 0(0) swapouts.
Networks: packets: 201757/182M in, 127536/32M out. Disks: 419879/9460M read, 166954/4317M written.

PID   COMMAND      %CPU TIME     #TH   #WQ  #PORT MEM    PURG   CMPRS  PGRP PPID STATE    BOOSTS          %CPU_ME %CPU_OTHRS
6140  VirtualBoxVM 7.8  00:49.13 41    4    396   2144M  18M    0B     6140 1805 sleeping *1[4]           0.00000 0.00000
0     kernel_task  7.1  07:13.29 134/4 0    2     1199M  0B     0B     0    0    running   0[0]           0.00000 0.00000

Change History (45)

comment:1 by carlosefr, 5 years ago

I too am seeing this behavior since upgrading to VirtualBox 5.2.20 (with a CentOS 7 x64 guest). The host CPU is pegged at 100% while the guest is doing nothing (I've tried to stop almost everything inside the guest, but the CPU usage on the host remains at 100%).

I've also tried to boot the VM with "legacy" and "default" paravirtualization. The result is the same: after a while this behavior appears. It seems to be triggered by the host coming back from sleep.
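
(For reference, the paravirtualization provider can also be switched from the command line while the VM is powered off; this is just a sketch, and the VM name "centos7" is a placeholder:)

VBoxManage modifyvm "centos7" --paravirtprovider legacy
VBoxManage modifyvm "centos7" --paravirtprovider default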

The host is also macOS 10.12.6.

VirtualBox 5.2.21 r126213 also shows the problematic behavior, while 5.2.18 does not.


comment:2 by Socratis, 5 years ago

There are two more reports of similar behavior in the forums. See https://forums.virtualbox.org/viewtopic.php?f=8&t=89896

comment:3 by Samuel H., 5 years ago

I have the same behavior on OSX 10.13.6 with Ubuntu as guest. Inside the guest I see low CPU usage, while on the host the 2 CPU cores used by the guest are running at 100%.

I am using VirtualBox 5.2.20, but I had the same behavior on VirtualBox 5.2.18.

The paravirtualization setting is on default.

comment:4 by ssh22, 5 years ago

I have exactly the same problem after updating to 5.2.20. Host: macOS 10.14 Mojave. Guest: CentOS 7.2.

In my case I'm using Vagrant (2.2.0). When I start Vagrant, everything runs normally. After an hour or so, I start hearing my MacBook fan working like crazy. It's the VBoxHeadless process, started by Vagrant, using 100-110% of the CPU. From there it doesn't go down until I restart Vagrant, VirtualBox, or macOS.

comment:5 by planck_length, 5 years ago

Same happening here on macOS 10.13.6. I also noticed that logd was at > 100% CPU, and when I ran Console I could see that VBoxHeadless (AudioToolbox) had generated the following half a million times:

396: buffer 0 ptr 0x7f9e9f0f4300 size 0

comment:6 by liamk15, 5 years ago

I'm having a similar issue since upgrading; it's fine after a restart and starts happening after waking from sleep.

In Console.app I can see approximately 10k log entries a second being added by VBoxHeadless.

The error is something to do with AudioToolbox; see screenshot: https://i.imgur.com/fE3zXmz.png
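
(The same log flood can also be watched from Terminal using the unified logging tool; just a sketch, and the process name is an assumption based on the headless VM process shown in Console:)

log stream --predicate 'process == "VBoxHeadless"'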

VirtualBox 5.2.20

Host: macOS Mojave 10.14.1

Guest: Ubuntu

comment:7 by bcandler, 5 years ago

I have upgraded this machine (MacBook Pro 13" 2015) from macOS 10.12.6 to 10.14.1.

Now I've seen the problem appear again. The VM was OK; after closing the laptop and re-opening it in a different location, it went to 100% CPU while still idle in the guest. This suggests maybe a change in network settings triggered the problem.

I don't see anything obviously wrong in the VBox logs though - they just show that the suspend/resume took place - and I don't see any VBox-related logs in Console.

03:24:10.765134 Pausing VM execution, reason 'host suspend'
03:24:10.765825 Changing the VM state from 'RUNNING' to 'SUSPENDING'
03:24:10.799262 AIOMgr: Endpoint for file '/Users/brian/VirtualBox VMs/rabbitmq/rabbitmq.vdi' (flags 000c0781) created successfully
03:24:10.815384 AIOMgr: Endpoint for file '/Users/brian/VirtualBox VMs/rabbitmq/rabbitmq-zfs.vdi' (flags 000c0781) created successfully
03:24:10.901783 PDMR3Suspend: 135 928 808 ns run time
03:24:10.901817 Changing the VM state from 'SUSPENDING' to 'SUSPENDED'
03:24:10.901831 Console: Machine state changed to 'Paused'
03:24:30.767925 NAT: DNS servers changed, triggering reconnect
03:24:31.475081 Resuming VM execution, reason 'host resume'
03:24:31.476581 Changing the VM state from 'SUSPENDED' to 'RESUMING'
03:24:31.480505 AIOMgr: Endpoint for file '/Users/brian/VirtualBox VMs/rabbitmq/rabbitmq.vdi' (flags 000c0723) created successfully
03:24:31.493955 AIOMgr: Endpoint for file '/Users/brian/VirtualBox VMs/rabbitmq/rabbitmq-zfs.vdi' (flags 000c0723) created successfully
03:24:31.745771 NAT: Link down
03:24:31.745842 Changing the VM state from 'RESUMING' to 'RUNNING'
03:24:31.745875 Console: Machine state changed to 'Running'
03:24:36.760571 NAT: Link up
03:24:41.611545 NAT: DNS servers changed, triggering reconnect
03:24:41.612443 NAT: Link down
03:24:46.611772 NAT: Link up
03:25:23.930354 NAT: DNS servers changed, triggering reconnect
03:25:23.933459 NAT: Link down
03:25:28.930853 NAT: Link up
03:25:28.931070 NAT: resolv.conf: nameserver 10.101.2.1
03:25:28.933015 NAT: DNS#0: 10.101.2.1

comment:8 by Socratis, 5 years ago

Another couple of reports in https://forums.virtualbox.org/viewtopic.php?f=8&t=90027

It seems to be widespread; we need to find the common denominator. Somehow I have the feeling that 10.13 or higher, 5.2.20, and maybe sleep might be involved.

comment:9 by pentagonik, 5 years ago

Owner: set to pentagonik
Status: new → assigned

comment:10 by Socratis, 5 years ago

I forgot that I got a 'ping' to post an update on this, oops... :o

From a developer's comment on the IRC, it seems that this might be audio related, hence the "assigned" tag. ;)

Could you disable your audio in the VM and try again?
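
(For headless or Vagrant-managed VMs this can also be done from the command line with the VM powered off; just a sketch, the VM name is a placeholder:)

VBoxManage modifyvm "<vm name>" --audio none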


PS. Apologies a priori if you receive a duplicate notification in the forum threads...


in reply to:  10 comment:11 by regnauld, 5 years ago

Replying to socratis:

I forgot that I got a 'ping' to post an update on this, oops... :o

From a developer's comment on the IRC, it seems that this might be audio related, hence the "assigned" tag. ;)

Could you disable your audio in the VM and try again?

Confirmed! Turning off all audio (I didn't play with the individual sub-options) fixed the issue. 5.2.20 / Mojave. Before that, pegged CPU with or without pausing the VM.

Side note: I rejected a request to allow access to the microphone during Ubuntu installation - I did not notice if the CPU spiked at that point, but it could be related.

comment:12 by Socratis, 5 years ago

You should try 5.2.22, it should contain the real fix:

  • Audio: fixed a regression in the Core Audio backend causing a hang when returning from host sleep when processing input buffers

Can you confirm that please?

PS. Man that was fast... ;)

comment:13 by carlosefr, 5 years ago

I've tested with 5.2.22 and the problem is still reproducible.

Also, I can confirm that disabling audio in the VM makes the problem go away.


comment:14 by pentagonik, 5 years ago

To further diagnose the problem, could you please set the audio backend to "NULL Audio" and try reproducing the problem again? So far I am unable to reproduce the issue here.
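
(A sketch of the command-line equivalent, keeping the emulated audio device but switching only the host backend; the VM name is a placeholder:)

VBoxManage modifyvm "<vm name>" --audio null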

comment:15 by pentagonik, 5 years ago

I'm currently trying to investigate and reproduce the issue, but have failed so far.

So I have a couple of questions for those for whom the issue actually is reproducible:

  • Does this only happen on guests which use the AC'97 device emulation?
  • What happens if you select the NULL driver (backend) instead of CoreAudio - is this still reproducible then?
  • Does the guest play (output) or record (input) anything while the issue appears?

Thank you!

comment:16 by bcandler, 5 years ago

I left two VMs running for the past couple of days - one with audio disabled, one with the null audio driver. I have suspended the host quite a few times, as I normally do through the day. Neither VM has caused the problem of 100% CPU load on the host with 0% in the guest.

Unfortunately that doesn't prove anything, but may be a general indication.

I did however just discover something that may be relevant. If I go into Settings, Security & Privacy, and select Microphone, I can see that VirtualBox's access to the microphone is disabled. (I guess it must have asked me at some point, and I said No.)
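
(If it helps testing, the permission prompt can presumably be re-triggered by resetting the TCC microphone entry on Mojave; note this sketch resets the microphone permission for all apps, not just VirtualBox:)

tccutil reset Microphone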


comment:17 by pentagonik, 5 years ago

I just uploaded a new 5.2 test build, 126798, to the Testbuilds page, which hopefully fixes the issue. Please download the build here: https://www.virtualbox.org/wiki/Testbuilds

comment:18 by symsym, 5 years ago

@pentagonik: Tested with build 126798 on Mojave. Same problem. 3 VMs running in headless mode and autostarted by launchd (org.virtualbox.startup.plist). When re-starting the VMs manually, same problem.

Disabling Audio on all VMs cures the problem.


comment:19 by pentagonik, 5 years ago

@symsym Thanks for the feedback -- as I still can't reproduce this here (also on Mojave), could you please have a look at the questions from comment 15 and provide more input so that I can continue investigating the issue? Thanks!

comment:20 by JL1, 5 years ago

I have this problem, too, running a Win10 VM in High Sierra. I discovered in the VM that the VBoxTray process suddenly hogs the CPU. Killing that process solves the problem, but then I lose functionality such as the shared clipboard between the two OSes. Of course, rebooting the VM works, too.

comment:21 by pentagonik, 5 years ago

@JL1 This sounds like a completely different and unrelated problem to me, not audio-related.

comment:22 by pentagonik, 5 years ago

@symsym Would you be able to collect a sample from the VirtualBox VM process which is consuming the high CPU load and upload it somewhere so that I can take a look at it?

comment:23 by bcandler, 5 years ago

I started another VM on which I hadn't disabled audio - Ubuntu 14.04. After suspending and resuming my laptop (macOS), it went up to 100% CPU utilisation. This is stock VBox 5.2.22, not the test build.

What do you mean by "a sample from the VirtualBox VM process"?

comment:24 by bcandler, 5 years ago

I changed to 5.2.23 r126798, but this *doesn't* fix the problem. The same Ubuntu 14.04 guest VM with audio enabled went to 100% CPU after suspending and resuming the macOS laptop host.

comment:25 by BrendanSimon, 5 years ago

I changed the audio driver settings to:

  • Host Driver = Null Audio Driver
  • Controller = Intel HD Audio

Now my Mac (10.13.6 High Sierra) and guest (Debian 8 Jessie) have decent performance again.

I'm not sure which of the settings above is the cause of the improvement, but it makes a big difference.

comment:26 by vincent<3, 5 years ago

Thanks for the tip. I disabled audio on my Debian stretch VM (with no GUI) and things are back to normal! I had this painful battery-drain trouble for a few weeks on my Mac host (Mojave 10.14.2). The CPU usage of VBoxHeadless on the Mac host was 100% while the CPU usage of the guest was close to 0. I use this Debian guest (launched through Vagrant) only to compile code, so when idle it does nothing except a bit of SSH and NFS. VirtualBox Version 5.2.22.


comment:27 by Alan Crosswell, 5 years ago

In case it helps diagnose this: the first time I launched a new VirtualBox VM (via the Vagrant CLI in Terminal), Mojave popped up a permission dialog to allow Terminal to access my microphone, which I declined. Mojave seems to have added quite a few more security features like this.

comment:28 by bcandler, 5 years ago

I just updated to macOS 10.14.3 and to VirtualBox 5.2.24, and the problem recurred on a VM where it had been fine before. I had previously set this VM to use the null audio driver. The settings were:

  • Audio Enabled
  • Null Audio Driver
  • ICH AC97
  • Audio Output Enabled
  • Audio Input Enabled

I shut it down, unchecked the first box to disable audio entirely, restarted the VM, and CPU load is normal again.

comment:29 by scosol, 5 years ago

Still a problem: OSX host, 5.2.26 r128414 (Qt5.6.3), Debian guest. Disabling audio fixes it, but that is an unacceptable non-solution.

comment:30 by riyer, 5 years ago

Problem persists on OSX High Sierra (10.13.6) with VirtualBox 5.2.26 r128414 (Qt5.6.3) and a CentOS guest. Disabling audio fixes it. Some additional input: the problem appeared when we upgraded from OSX Sierra to High Sierra. I do not recall the VirtualBox version at that time. We have upgraded VirtualBox multiple times since then with no effect on this issue. The VirtualBox process and the logd process hit 100% CPU on the macOS host anytime the host machine wakes up from sleep with the guest "alive". (We have been advised not to upgrade to Mojave pending various validations internally.)

comment:31 by Richlv, 5 years ago

Also reported a bit earlier in #18049, but that somehow was missed by everybody.

Seeing this on macOS Mojave 10.14.2, with an openSUSE Tumbleweed guest. VirtualBox is at 5.2.22 right now, but I have observed this with several earlier versions.

in reply to:  31 comment:32 by Socratis, 5 years ago

Replying to Richlv:

VirtualBox is at 5.2.22 right now, but I have observed this with several earlier versions.

How about some later versions, like 5.2.24 or 5.2.26? Or the Testbuilds?

comment:33 by Richlv, 5 years ago

There's one comment before mine about this still being a problem on 5.2.24, and two about 5.2.26 still being affected - and I have been checking the changelogs eagerly, too.

Is there a fix in some build that should resolve this and would benefit from testing?

comment:34 by Socratis, 5 years ago

I don't know if this is resolved or not; I'm trying to get people to test the latest and greatest, as there's no point in staying behind.

For example, I've never had this issue (I'm on OSX 10.11.6), so I'm trying to see if there's a common denominator here.

I have an ext. HD that can boot anything from 10.11 to 10.14, so I could test it easily if it's specific to an OSX version...

comment:35 by Richlv, 5 years ago

Got it, that's a good point.

Looking at the reports here, the problem might have started with macOS 10.12.

comment:36 by Socratis, 5 years ago

@Richlv
I can't reproduce it with an OSX 10.12.6 host and a Win10x64 guest. Any chance that you can send me your VM that has the problem, and tell me the conditions under which that happens?

BTW, what's the I/O activity on the guest? Hard disk activity will not register as CPU in the guest, but it will register as CPU on the host...


comment:37 by Richlv, 5 years ago

The VM is large and contains a lot of sensitive info, so I cannot share it. It is a Linux guest. On Linux, disk activity (I/O) will register in CPU load, and with all applications closed there is 0 CPU load on the guest during the time when the CPU load of logd and VBox on the host is high.

comment:38 by Socratis, 5 years ago

@Richlv
Can you at least give me the recipe? The .vbox file? And the ISO that you installed this VM from? Then at least I could try to build a fresh VM out of these two components.

May I suggest that you do the same, for a new test VM? That way we can figure out if it's the customization that you've done to the guest.

comment:39 by fendale, 5 years ago

I'm seeing this issue on macOS 10.14.4 and VirtualBox 5.2.26.

I boot my guest with Vagrant; the Vagrantfile is here:

https://github.com/sodonnel/puppet-modules/blob/master/boxes/basic/Vagrantfile

The base box, a fairly minimal CentOS 6.9 build, is here:

https://drive.google.com/open?id=1988zwf-6aM2AeyiJuXVz6y_9m_iXn0no

Steps to reproduce:

Start the machine.

Put Mac OS to sleep.

Wake Mac OS, and VBoxHeadless immediately pegs a CPU until you shut down the guest:

574 VBoxHeadless 97.3 07:56.93 36/1 4 250 295M 0B 0B 1574 1336 running *0[2] 0.00000 0.00000 503 81930 389 24332055+ 22555717+ 974686+ 53531137+

Hopefully that will let someone dig a bit deeper into this if it's reproducible.
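
(For a scripted reproduction, the sleep step can be triggered from Terminal as well; a sketch, assuming the Vagrantfile above:)

vagrant up
sudo pmset sleepnow
# after waking the Mac, check the host CPU:
top -o cpu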

comment:40 by fendale, 5 years ago

Interestingly, I created a new Vagrant base box based on CentOS 7 (my problem box above was CentOS 6) and explicitly disabled audio on the new box.

Booting this new box with the same Vagrantfile as above no longer gives the wake-from-sleep issue.

comment:41 by 6pac, 5 years ago

Don't know if this is any help, but I had only a single core assigned to the VM, and when I assigned 4 of my 8 cores to it, I started getting this problem. I'm trying disabling audio and will see if it works.

comment:42 by naniid, 4 years ago

I have the same problem with VBoxHeadless.

macOS Catalina 10.15.12, VirtualBox 6.1.2 r135662.

When I put the virtual machine to sleep and wake it up, the host's CPU goes to 100%. Disabling audio on the guest indeed removed the problem.


comment:43 by aeichner, 4 years ago

Can you try to collect a sample of the VBoxHeadless process using Apple's Activity Monitor? We need the threads' stack traces to check where it is burning CPU cycles.
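
(A sketch of the command-line equivalent, where the PID is a placeholder; this writes the threads' stack traces sampled over 10 seconds to a file:)

sudo sample <VBoxHeadless pid> 10 -file /tmp/VBoxHeadless.sample.txt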

comment:44 by qRoC, 4 years ago

@aeichner

Only this message:

sample[7143]: sample cannot examine process 95253 (VirtualBoxVM) for unknown reasons, even though it appears to exist; try running with sudo.

comment:45 by brablc, 4 years ago

I have had this problem for many years with all versions of VirtualBox. Currently at Version 6.1.6 r137129 (Qt5.6.3) on 10.15.5 (19F101).

I have only one VM in VirtualBox and it is in a paused state. Despite this, one full core is 100% utilized. This is output from sudo dtruss -f -c -p 24327 running for several seconds.

24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x1)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7801b:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700005C840F0, 0x1)		 = 0 0
24327/0x7809a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x1)		 = 0 Err#-2
24327/0x7809a:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700006B510F0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x0)		 = 0 Err#-2
24327/0x7801b:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7801b:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700005C840F0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x77ef3:  psynch_cvwait(0x7FE29A110100, 0x219FF01021A0000, 0x0)		 = -1 Err#316
24327/0x77ef3:  gettimeofday(0x70000619FE18, 0x0, 0x0)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x1)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x0)		 = 0 Err#-2
24327/0x7809a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7801b:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700005C840F0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x77ef8:  psynch_cvwait(0x7FE298E2E570, 0x8C9501008C9600, 0x0)		 = -1 Err#316
24327/0x77ef8:  gettimeofday(0x7000062A5E58, 0x0, 0x0)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x40, 0x700005F96B80, 0x0)		 = 0 Err#-2
24327/0x7801b:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7809a:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700006B510F0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x40, 0x700005F96B80, 0x1)		 = 0 Err#-2
24327/0x77ef3:  psynch_cvwait(0x7FE29A110100, 0x21A0001021A0100, 0x0)		 = -1 Err#316
24327/0x77ef3:  gettimeofday(0x70000619FE18, 0x0, 0x0)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x0)		 = 0 Err#-2
24327/0x7801b:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7809a:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700006B510F0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x1)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xabdee:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x40, 0x700005F96B80, 0x0)		 = 0 Err#-2
24327/0x7801b:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7809a:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700006B510F0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xae88a:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x1)		 = 0 Err#-2
24327/0x77ef3:  psynch_cvwait(0x7FE29A110100, 0x21A0101021A0200, 0x0)		 = -1 Err#316
24327/0x77ef3:  gettimeofday(0x70000619FE18, 0x0, 0x0)		 = 0 0
24327/0x77eee:  ioctl(0x8, 0xC0305687, 0x700006099E58)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x40, 0x700006BD4B80, 0x0)		 = 0 Err#-2
24327/0xad7e1:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xabdee:  workq_kernreturn(0x40, 0x700006CDAB80, 0x1)		 = 0 Err#-2
24327/0x7809a:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700006B510F0, 0x1)		 = 0 0
24327/0xae88a:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7801b:  psynch_mutexwait(0x7FE298E54EC0, 0x1A29002, 0x1A28F00)		 = 27430915 0
24327/0xad7e1:  psynch_mutexdrop(0x7FE298E54EC0, 0x1A29000, 0x1A29000)		 = 0 0
24327/0x7801b:  workq_kernreturn(0x20, 0x0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2
24327/0x7801b:  kevent_qos(0xFFFFFFFFFFFFFFFF, 0x700005C840F0, 0x1)		 = 0 0
24327/0xad7e1:  workq_kernreturn(0x4, 0x0, 0x0)		 = 0 Err#-2

CALL                                        COUNT
sendto                                          1
psynch_cvbroad                                  2
read                                            2
write                                           2
recvfrom                                        4
select                                          5
ioctl                                          14
gettimeofday                                   88
psynch_cvwait                                  88
psynch_mutexdrop                              102
psynch_mutexwait                              102
kevent_qos                                    188
workq_kernreturn                             1245

Output from top:

PID     COMMAND           %CPU  TIME      #TH    #WQ   #PORTS MEM     PURG    CMPRS  PGRP   PPID   STATE     BOOSTS           %CPU_ME  %CPU_OTHRS  UID   FAULTS    COW     MSGSENT     MSGRECV     SYSBSD       SYSMACH      CSW          PAGEINS IDLEW     POWER INSTRS      CYCLES      USER                #MREG RPRVT VPRVT VSIZE KPRVT KSHRD
24327   VBoxHeadless      92.3  06:59:37  37/1   4     233    1529M   0B      0B     24327  24269  running   *0[2]            0.00000  0.00000     501   19938     415     715721098+  527228874+  9433137+     1978947777+  189996269+   54      88983     92.3  2148120383  949318110   user              N/A   N/A   N/A   N/A   N/A   N/A
