Opened 14 years ago
Closed 5 years ago
#6449 closed defect (obsolete)
Fedora kernel 2.6.33.1-22 - VirtualBox 3.1.6 - possible recursive locking detected
| Reported by: | didierg | Owned by: | |
|---|---|---|---|
| Component: | other | Version: | VirtualBox 3.1.6 |
| Keywords: | | Cc: | |
| Guest type: | other | Host type: | other |
Description
Environment
Linux lx-azerty 2.6.33.1-22.fc13.x86_64 #1 SMP Mon Mar 29 02:05:28 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
VirtualBox-3.1-3.1.6_59338_fedora12-1.x86_64
Error messages in /var/log/messages
Mar 29 12:38:06 lx-azerty kernel: warning: `VirtualBox' uses 32-bit capabilities (legacy support in use)
Mar 29 12:38:13 lx-azerty kernel:
Mar 29 12:38:13 lx-azerty kernel: =============================================
Mar 29 12:38:13 lx-azerty kernel: [ INFO: possible recursive locking detected ]
Mar 29 12:38:13 lx-azerty kernel: 2.6.33.1-22.fc13.x86_64 #1
Mar 29 12:38:13 lx-azerty kernel: ---------------------------------------------
Mar 29 12:38:13 lx-azerty kernel: VirtualBox/2745 is trying to acquire lock:
Mar 29 12:38:13 lx-azerty kernel: (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa030dfde>] RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel:
Mar 29 12:38:13 lx-azerty kernel: but task is already holding lock:
Mar 29 12:38:13 lx-azerty kernel: (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa030dfde>] RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel:
Mar 29 12:38:13 lx-azerty kernel: other info that might help us debug this:
Mar 29 12:38:13 lx-azerty kernel: 1 lock held by VirtualBox/2745:
Mar 29 12:38:13 lx-azerty kernel: #0: (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa030dfde>] RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel:
Mar 29 12:38:13 lx-azerty kernel: stack backtrace:
Mar 29 12:38:13 lx-azerty kernel: Pid: 2745, comm: VirtualBox Not tainted 2.6.33.1-22.fc13.x86_64 #1
Mar 29 12:38:13 lx-azerty kernel: Call Trace:
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8107e96b>] __lock_acquire+0xcb5/0xd2c
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff81071226>] ? sched_clock_cpu+0xc3/0xce
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8107eabe>] lock_acquire+0xdc/0x102
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa030dfde>] ? RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff81478a73>] _raw_spin_lock+0x36/0x69
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa030dfde>] ? RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa030dfde>] RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa0305742>] SUPR0ObjAddRefEx+0xc0/0x1f9 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa035a299>] g_abExecMemory+0x43259/0x180000 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa0364d5c>] g_abExecMemory+0x4dd1c/0x180000 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa035bf02>] g_abExecMemory+0x44ec2/0x180000 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa0328f14>] g_abExecMemory+0x11ed4/0x180000 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8107f2b4>] ? lock_release_non_nested+0xd5/0x23b
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa03296a0>] g_abExecMemory+0x12660/0x180000 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa0307d7e>] supdrvIOCtl+0x1098/0x1e52 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff810f1dd5>] ? might_fault+0xa5/0xac
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff810f1d8c>] ? might_fault+0x5c/0xac
Mar 29 12:38:13 lx-azerty kernel: [<ffffffffa0304262>] VBoxDrvLinuxIOCtl+0x124/0x1a7 [vboxdrv]
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8112c054>] vfs_ioctl+0x32/0xa6
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8112c5d4>] do_vfs_ioctl+0x490/0x4d6
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8111fe91>] ? fget_light+0x57/0x105
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff8112c670>] sys_ioctl+0x56/0x79
Mar 29 12:38:13 lx-azerty kernel: [<ffffffff81009c72>] system_call_fastpath+0x16/0x1b
In addition, the Fedora host hangs when rebooting Windows XP guests.
Attachments (2)
Change History (7)
by , 14 years ago
comment:1 by , 14 years ago
comment:2 by , 14 years ago
The fact that there was only one report until just now, and that we never saw the problem in our own tests, doesn't mean we didn't look into it. I also doubt that the problem with 3.2.12 is identical to the originally reported one.
follow-up: 4 comment:3 by , 13 years ago
I just saw this with VirtualBox OSE 4.0.2-2.fc14 from rpmfusion while firing up a Windows XP guest:
Jun 30 09:02:03 tinker kernel: [  980.515736]
Jun 30 09:02:03 tinker kernel: [  980.515738] =============================================
Jun 30 09:02:03 tinker kernel: [  980.515741] [ INFO: possible recursive locking detected ]
Jun 30 09:02:03 tinker kernel: [  980.515744] 2.6.35.13-92.fc14.x86_64 #1
Jun 30 09:02:03 tinker kernel: [  980.515745] ---------------------------------------------
Jun 30 09:02:03 tinker kernel: [  980.515748] VirtualBox/3195 is trying to acquire lock:
Jun 30 09:02:03 tinker kernel: [  980.515750] (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa0238322>] RTSpinloc
Jun 30 09:02:03 tinker kernel: [  980.515767]
Jun 30 09:02:03 tinker kernel: [  980.515767] but task is already holding lock:
Jun 30 09:02:03 tinker kernel: [  980.515769] (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa0238322>] RTSpinloc
Jun 30 09:02:03 tinker kernel: [  980.515781]
Jun 30 09:02:03 tinker kernel: [  980.515781] other info that might help us debug this:
Jun 30 09:02:03 tinker kernel: [  980.515783] 1 lock held by VirtualBox/3195:
Jun 30 09:02:03 tinker kernel: [  980.515785] #0: (&(&pThis->Spinlock)->rlock){+.+...}, at: [<ffffffffa0238322>] RTSp
Jun 30 09:02:03 tinker kernel: [  980.515797]
Jun 30 09:02:03 tinker kernel: [  980.515797] stack backtrace:
Jun 30 09:02:03 tinker kernel: [  980.515800] Pid: 3195, comm: VirtualBox Not tainted 2.6.35.13-92.fc14.x86_64 #1
Jun 30 09:02:03 tinker kernel: [  980.515802] Call Trace:
Jun 30 09:02:03 tinker kernel: [  980.515808] [<ffffffff8107acca>] lock_acquire+0x947/0xd63
Jun 30 09:02:03 tinker kernel: [  980.515811] [<ffffffff8107a8ef>] ? lock_acquire+0x56c/0xd63
Jun 30 09:02:03 tinker kernel: [  980.515821] [<ffffffffa0238322>] ? RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515824] [<ffffffff8107b599>] lock_acquire+0xd2/0xfd
Jun 30 09:02:03 tinker kernel: [  980.515833] [<ffffffffa0238322>] ? RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515843] [<ffffffffa0237ea8>] ? RTSemMutexRelease+0x70/0xc8 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515848] [<ffffffff8148aad7>] _raw_spin_lock+0x31/0x40
Jun 30 09:02:03 tinker kernel: [  980.515857] [<ffffffffa0238322>] ? RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515867] [<ffffffffa0238322>] RTSpinlockAcquire+0x12/0x14 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515874] [<ffffffffa022e3a0>] SUPR0ObjAddRefEx+0xc3/0x1ec [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515882] [<ffffffffa028d4c6>] g_abExecMemory+0x4a226/0x180000 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515893] [<ffffffffa023ddf0>] RTHandleTableLookupWithCtx+0xac/0xd7 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515896] [<ffffffff8107a8ef>] ? lock_acquire+0x56c/0xd63
Jun 30 09:02:03 tinker kernel: [  980.515903] [<ffffffffa02925ee>] g_abExecMemory+0x4f34e/0x180000 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515911] [<ffffffffa0257f5a>] g_abExecMemory+0x14cba/0x180000 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515915] [<ffffffff8107b186>] ? lock_release_non_nested+0xa0/0x24f
Jun 30 09:02:03 tinker kernel: [  980.515919] [<ffffffff811175be>] ? kmalloc+0x116/0x161
Jun 30 09:02:03 tinker kernel: [  980.515927] [<ffffffffa0258e30>] g_abExecMemory+0x15b90/0x180000 [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515930] [<ffffffff810f606e>] ? might_fault+0x5c/0xac
Jun 30 09:02:03 tinker kernel: [  980.515938] [<ffffffffa0231b74>] supdrvIOCtl+0x1321/0x22bd [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515941] [<ffffffff8107b4bb>] ? lock_release+0x186/0x192
Jun 30 09:02:03 tinker kernel: [  980.515944] [<ffffffff810f60b7>] ? might_fault+0xa5/0xac
Jun 30 09:02:03 tinker kernel: [  980.515947] [<ffffffff810f606e>] ? might_fault+0x5c/0xac
Jun 30 09:02:03 tinker kernel: [  980.515954] [<ffffffffa022d3fb>] VBoxDrvLinuxIOCtl+0x12d/0x1ae [vboxdrv]
Jun 30 09:02:03 tinker kernel: [  980.515958] [<ffffffff811327d0>] vfs_ioctl+0x36/0xa7
Jun 30 09:02:03 tinker kernel: [  980.515961] [<ffffffff81133149>] do_vfs_ioctl+0x47c/0x4af
Jun 30 09:02:03 tinker kernel: [  980.515964] [<ffffffff8107b4bb>] ? lock_release+0x186/0x192
Jun 30 09:02:03 tinker kernel: [  980.515967] [<ffffffff81125e2e>] ? rcu_read_unlock+0x21/0x23
Jun 30 09:02:03 tinker kernel: [  980.515970] [<ffffffff811331d2>] sys_ioctl+0x56/0x7c
Jun 30 09:02:03 tinker kernel: [  980.515974] [<ffffffff81009c72>] system_call_fastpath+0x16/0x1b
Looks very similar to me. Unfortunately, the lock debug code is not usually enabled for release kernels, so very few people will ever see this.
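For anyone trying to reproduce this: the report only appears on kernels built with lock debugging enabled. A sketch of the relevant kernel configuration options (assumption: option names as found in 2.6.3x-era kconfig; Fedora ships these in its debug kernels, not in standard release kernels):

```
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCKDEP=y
```

Booting such a kernel on the host and starting a guest should surface the same lockdep splat in the kernel log if the locking pattern is still present.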
comment:5 by , 5 years ago
| Description: | modified (diff) |
|---|---|
| Resolution: | → obsolete |
| Status: | new → closed |
This problem is still present in 3.2.12 running under openSUSE 11.3 with a 2.6.37-rc6 kernel. Are you NEVER going to fix it?