VirtualBox

Opened 12 years ago

Closed 8 years ago

#10880 closed defect (obsolete)

Memory leak in 4.1.18

Reported by: kihjin Owned by:
Component: other Version: VirtualBox 4.1.18
Keywords: memory leak usage Cc:
Guest type: Linux Host type: Linux

Description

As posted here: https://forums.virtualbox.org/viewtopic.php?f=7&t=51277

I have a 32GB machine with Ubuntu Server 12.04 and VirtualBox 4.1.18 compiled from source. I have this machine configured with 15GB swap.

I have six headless VirtualBox instances running on this machine. Four of them are configured with 1024MB, one with 2000MB, and the last one with 512MB.
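
For reference, the configured memory of each instance can be checked with VBoxManage showvminfo; "vm1" below is a placeholder for the actual VM name:

# show the memory-related settings of one VM (the grep just trims the output)
VBoxManage showvminfo "vm1" | grep -i memory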

One of the instances was killed today by the kernel's OOM killer. It was the one configured with 2000MB.

[3693849.717521] VBoxHeadless invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
[3693849.717528] VBoxHeadless cpuset=/ mems_allowed=0
[3693849.717531] Pid: 6510, comm: VBoxHeadless Tainted: G           O 3.2.0-26-generic #41-Ubuntu
[3693849.717534] Call Trace:
[3693849.717544]  [<ffffffff810bffdd>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
[3693849.717551]  [<ffffffff8111adc1>] dump_header+0x91/0xe0
[3693849.717553]  [<ffffffff8111b145>] oom_kill_process+0x85/0xb0
[3693849.717556]  [<ffffffff8111b4ea>] out_of_memory+0xfa/0x220
[3693849.717559]  [<ffffffff81120f6f>] __alloc_pages_nodemask+0x80f/0x820
[3693849.717565]  [<ffffffff81157cf3>] alloc_pages_current+0xa3/0x110
[3693849.717568]  [<ffffffff8111798f>] __page_cache_alloc+0x8f/0xa0
[3693849.717571]  [<ffffffff81117dfe>] ? find_get_page+0x1e/0x90
[3693849.717574]  [<ffffffff81119ca4>] filemap_fault+0x234/0x3e0
[3693849.717577]  [<ffffffff8113a222>] __do_fault+0x72/0x550
[3693849.717580]  [<ffffffff8113d86a>] handle_pte_fault+0xfa/0x200
[3693849.717588]  [<ffffffff8113dd28>] handle_mm_fault+0x1f8/0x350
[3693849.717593]  [<ffffffff8165d4e0>] do_page_fault+0x150/0x520
[3693849.717609]  [<ffffffffa012f359>] ? VBoxDrvLinuxIOCtl_4_1_18+0x49/0x200 [vboxdrv]
[3693849.717614]  [<ffffffff81012728>] ? __switch_to+0x138/0x360
[3693849.717618]  [<ffffffff8105613d>] ? set_next_entity+0xad/0xd0
[3693849.717622]  [<ffffffff8118a06a>] ? do_vfs_ioctl+0x8a/0x340
[3693849.717627]  [<ffffffff8165746c>] ? __schedule+0x3cc/0x6f0
[3693849.717630]  [<ffffffff811798a5>] ? fget_light+0x65/0xe0
[3693849.717634]  [<ffffffff8165a135>] page_fault+0x25/0x30
[3693849.717636] Mem-Info:
[3693849.717638] Node 0 DMA per-cpu:
[3693849.717640] CPU    0: hi:    0, btch:   1 usd:   0
[3693849.717642] CPU    1: hi:    0, btch:   1 usd:   0
[3693849.717644] CPU    2: hi:    0, btch:   1 usd:   0
[3693849.717646] CPU    3: hi:    0, btch:   1 usd:   0
[3693849.717647] CPU    4: hi:    0, btch:   1 usd:   0
[3693849.717649] CPU    5: hi:    0, btch:   1 usd:   0
[3693849.717651] CPU    6: hi:    0, btch:   1 usd:   0
[3693849.717653] CPU    7: hi:    0, btch:   1 usd:   0
[3693849.717654] Node 0 DMA32 per-cpu:
[3693849.717656] CPU    0: hi:  186, btch:  31 usd:   7
[3693849.717658] CPU    1: hi:  186, btch:  31 usd:   0
[3693849.717660] CPU    2: hi:  186, btch:  31 usd:  30
[3693849.717662] CPU    3: hi:  186, btch:  31 usd:  72
[3693849.717663] CPU    4: hi:  186, btch:  31 usd:  13
[3693849.717665] CPU    5: hi:  186, btch:  31 usd:   0
[3693849.717667] CPU    6: hi:  186, btch:  31 usd:  57
[3693849.717669] CPU    7: hi:  186, btch:  31 usd: 169
[3693849.717670] Node 0 Normal per-cpu:
[3693849.717672] CPU    0: hi:  186, btch:  31 usd: 163
[3693849.717674] CPU    1: hi:  186, btch:  31 usd:  57
[3693849.717676] CPU    2: hi:  186, btch:  31 usd: 171
[3693849.717678] CPU    3: hi:  186, btch:  31 usd: 164
[3693849.717679] CPU    4: hi:  186, btch:  31 usd: 170
[3693849.717681] CPU    5: hi:  186, btch:  31 usd: 158
[3693849.717683] CPU    6: hi:  186, btch:  31 usd: 167
[3693849.717684] CPU    7: hi:  186, btch:  31 usd: 155
[3693849.717689] active_anon:6100595 inactive_anon:432763 isolated_anon:0
[3693849.717690]  active_file:0 inactive_file:0 isolated_file:0
[3693849.717691]  unevictable:0 dirty:0 writeback:0 unstable:0
[3693849.717691]  free:50637 slab_reclaimable:4857 slab_unreclaimable:19981
[3693849.717692]  mapped:1511649 shmem:3 pagetables:24977 bounce:0
[3693849.717695] Node 0 DMA free:15912kB min:32kB low:40kB high:48kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15656kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[3693849.717703] lowmem_reserve[]: 0 2474 32210 32210
[3693849.717707] Node 0 DMA32 free:123996kB min:5188kB low:6484kB high:7780kB active_anon:1542704kB inactive_anon:386228kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2533956kB mlocked:0kB dirty:0kB writeback:0kB mapped:443184kB shmem:8kB slab_reclaimable:1252kB slab_unreclaimable:9432kB kernel_stack:200kB pagetables:10648kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:589 all_unreclaimable? yes
[3693849.717716] lowmem_reserve[]: 0 0 29736 29736
[3693849.717719] Node 0 Normal free:62640kB min:62360kB low:77948kB high:93540kB active_anon:22859676kB inactive_anon:1344824kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:30449664kB mlocked:0kB dirty:0kB writeback:0kB mapped:5603412kB shmem:4kB slab_reclaimable:18176kB slab_unreclaimable:70492kB kernel_stack:1864kB pagetables:89260kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[3693849.717728] lowmem_reserve[]: 0 0 0 0
[3693849.717731] Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15912kB
[3693849.717740] Node 0 DMA32: 298*4kB 304*8kB 536*16kB 350*32kB 201*64kB 104*128kB 78*256kB 66*512kB 8*1024kB 4*2048kB 1*4096kB = 123816kB
[3693849.717748] Node 0 Normal: 14605*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 62516kB
[3693849.717756] 1382 total pagecache pages
[3693849.717757] 1113 pages in swap cache
[3693849.717759] Swap cache stats: add 4671648, delete 4670535, find 337718/343648
[3693849.717761] Free swap  = 0kB
[3693849.717762] Total swap = 15624188kB
[3693849.886079] 8388592 pages RAM
[3693849.886082] 1663935 pages reserved
[3693849.886083] 525 pages shared
[3693849.886084] 6671941 pages non-shared
[3693849.886086] [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
[3693849.886106] [  321]     0   321     4339        0   3       0             0 upstart-udev-br
[3693849.886110] [  330]     0   330     5457        1   5     -17         -1000 udevd
[3693849.886115] [  578]     0   578     3795        0   5       0             0 upstart-socket-
[3693849.886119] [  701]     0   701    12487       32   0     -17         -1000 sshd
[3693849.886123] [  712]   101   712    62463      831   6       0             0 rsyslogd
[3693849.886127] [  721]   102   721     5952       38   1       0             0 dbus-daemon
[3693849.886131] [  752]  1000   752   213448      525   4     -17         -1000 VBoxSVC
[3693849.886135] [  788]  1000   788    28385       26   0     -17         -1000 VBoxXPCOMIPCD
[3693849.886138] [  801]     0   801     3624        2   2       0             0 getty
[3693849.886142] [  807]     0   807     3624        2   6       0             0 getty
[3693849.886146] [  813]     0   813     3624        2   2       0             0 getty
[3693849.886149] [  814]     0   814     3624        2   1       0             0 getty
[3693849.886153] [  816]     0   816     3624        2   6       0             0 getty
[3693849.886156] [  819]     0   819     1080        1   1       0             0 acpid
[3693849.886160] [  828]     0   828     4225        4   7       0             0 atd
[3693849.886163] [  829]     0   829     4776       22   3       0             0 cron
[3693849.886167] [  856]     0   856     3993       24   0       0             0 irqbalance
[3693849.886171] [  857]   103   857    46892       56   1       0             0 whoopsie
[3693849.886174] [  875]     0   875     3624        2   3       0             0 getty
[3693849.886179] [ 1117]  1000  1117  2295090  1266533   2       0             0 VBoxHeadless
[3693849.886182] [ 1152]  1000  1152  2201201  1285881   4       0             0 VBoxHeadless
[3693849.886186] [ 1184]  1000  1184  2104800  1169590   7       0             0 VBoxHeadless
[3693849.886190] [ 3497]  1000  3497  2410578  1496365   7       0             0 VBoxHeadless
[3693849.886194] [ 3878]     0  3878     5456        2   2     -17         -1000 udevd
[3693849.886197] [ 3879]     0  3879     5459        0   2     -17         -1000 udevd
[3693849.886201] [ 6501]  1000  6501  2189415  1274907   6       0             0 VBoxHeadless
[3693849.886205] [15028]  1000 15028     6832      502   3       0             0 screen
[3693849.886209] [15029]  1000 15029     5548      371   1       0             0 bash
[3693849.886213] [ 8141]  1000  8141  1756145  1269336   2       0             0 VBoxHeadless
[3693849.886217] [19782]   106 19782   204047     9163   0       0             0 mysqld
[3693849.886221] [ 8789]  1000  8789   582772   493368   2       0             0 python
[3693849.886226] Out of memory: Kill process 3497 (VBoxHeadless) score 181 or sacrifice child
[3693849.886385] Killed process 3497 (VBoxHeadless) total-vm:9642312kB, anon-rss:3994060kB, file-rss:1991400kB

I am not using memory ballooning. All of the instances have memory ballooning disabled.
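
For completeness, this is roughly how the ballooning setting is kept disabled and verified on each instance ("vm1" is again a placeholder; a balloon size of 0 means ballooning is off, and modifyvm generally requires the VM to be powered off):

# force the balloon size to 0 (ballooning disabled), then verify the setting
VBoxManage modifyvm "vm1" --guestmemoryballoon 0
VBoxManage showvminfo "vm1" | grep -i balloon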

I upgraded from 4.0.8 about 42 days ago. I did not experience this kind of memory exhaustion before the upgrade.

It's worth noting that a quick controlvm savestate followed by a start appears to be sufficient to clear out the accumulated memory usage.
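
Concretely, the workaround looks something like this ("vm1" is a placeholder, and the VM is started headless again as before):

# save the VM state to disk, which ends the VBoxHeadless process
VBoxManage controlvm "vm1" savestate
# resume the VM from the saved state in headless mode
VBoxManage startvm "vm1" --type headless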

Please let me know what additional information is needed to help solve this problem.
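
In the meantime, a small loop like the following (just a sketch; the interval and log file name are arbitrary) could be used to record how the resident set size of the VBoxHeadless processes grows over time:

# append PID, RSS, VSZ and command name of every VBoxHeadless process every 5 minutes
while true; do
    date >> vboxheadless-rss.log
    ps -C VBoxHeadless -o pid,rss,vsz,comm >> vboxheadless-rss.log
    sleep 300
done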

Attachments (4)

VBox.log.1 (40.4 KB) - added by kihjin 12 years ago.
showvminfo.txt (2.3 KB) - added by kihjin 12 years ago.
VBox.log (53.1 KB) - added by Oleg 12 years ago.
First log file (guest FreeBSD 6.4)
VBox.2.log (49.6 KB) - added by Oleg 12 years ago.
Second VM's log file (FreeBSD 8.2)

Change History (10)

by kihjin, 12 years ago

Attachment: VBox.log.1 added

by kihjin, 12 years ago

Attachment: showvminfo.txt added

comment:1 by vasily Levchenko, 12 years ago

Does the same happen with 4.1.20?

comment:2 by Oleg, 12 years ago

I observe the same massive memory leak in recent versions of VirtualBox with a Win64 host and a FreeBSD guest. It worked fine in the 4.0.x versions. Now, after a few hours it eats all of the system memory. I have 10GB of system memory and it is consumed in about 3 hours.

comment:3 by Frank Mehnert, 12 years ago

holger67, can you provide more information? For a start, a VBox.log file from your VM session would be helpful so we can see your VM configuration.

by Oleg, 12 years ago

Attachment: VBox.log added

First log file (guest FreeBSD 6.4)

by Oleg, 12 years ago

Attachment: VBox.2.log added

Second VM's log file (FreeBSD 8.2)

comment:4 by Oleg, 12 years ago

I am attaching two log files for my VMs.

comment:5 by kihjin, 12 years ago

I have not tried 4.1.20. Upgrading at this time is non-trivial, so I am not sure how soon I would be able to test it out.

comment:6 by Frank Mehnert, 8 years ago

Resolution: obsolete
Status: new → closed