VirtualBox

Opened 16 years ago

Closed 14 years ago

#2048 closed defect (fixed)

Linux - Disk I/O performance problems

Reported by: Christian Holler
Owned by:
Component: virtual disk
Version: VirtualBox 1.6.4
Keywords: I/O performance disk
Cc:
Guest type: Linux
Host type: Linux

Description

Hello,

I am running VirtualBox 1.6.4 on a Linux system with a Linux guest. I did I/O performance tests with the following setup:

  1. I created a 4 GB file on the root filesystem of the guest (called /test.file) with dd from /dev/zero (command sketched below)
  2. I ran "dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc"
  3. I ran "dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc oflag=direct"

So both commands overwrite the existing file without truncating it; the second command uses direct I/O for this task.
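
(For completeness, the 4 GB file from step 1 was created beforehand with something along these lines; the exact invocation is not important, only that the file already exists before the overwrite runs:)

dd if=/dev/zero of=/test.file bs=4M count=1000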

The results are as follows:

dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc &

1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 416.406 s, 10.1 MB/s

dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc oflag=direct &

1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 108.356 s, 38.7 MB/s

As one can see, the first command (normal I/O) is very slow, whereas direct I/O is reasonably fast. Is there any explanation for this behavior? As far as I know, direct I/O circumvents buffering in the Linux kernel, so there must be a performance bottleneck somewhere that makes normal I/O really slow.
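
(In case someone wants to reproduce this: to rule out the guest's page cache skewing the buffered run, I believe the caches can be flushed between runs roughly like this, assuming a 2.6 guest kernel and root access:)

sync
echo 3 > /proc/sys/vm/drop_caches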

The underlying filesystems are all ext3, and the selected disk controller is SATA if that is important :)

Best regards,

Chris

Change History (8)

comment:1 by Christian Holler, 16 years ago

Edit: I also tested version 1.6.6 now, with approximately the same results :(

comment:2 by Christian Holler, 16 years ago

Found the problem: The VDI image I was using was converted from a raw dd image (and then shrunk with VBoxManage). I created a second, new image, used it in the same machine, and it works.

Any idea why this process made the image so slow? (I get 56 MB/s on the new image)
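
For reference, the conversion and shrinking were done roughly like this (from memory; the exact VBoxManage subcommand names differ between releases, and the file names here are just placeholders):

VBoxManage convertdd rawdisk.img disk.vdi
VBoxManage modifyvdi disk.vdi compact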

comment:3 by Frank Mehnert, 16 years ago

Interesting findings. What partition type do the two partitions have, the one you converted with convertdd and the one you are working with now? And what are the partition sizes?

comment:4 by Christian Holler, 16 years ago

Hi, thanks for your response :)

Here are the test conditions and stats about the old and new image.

Test:

In addition to the test I described earlier, I did (re)write tests with 800 MB files created from /dev/urandom, to make sure no compression or similar effects interfere. I first created an 800 MB file, wrote it to the disk, and then used dd to read it from this disk and write it to the same disk again (to a different file). All of these new write tests were done from a live CD (hence also no Guest Additions loaded), to make sure the same kernel etc. is used for the test.
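
Roughly, the rewrite test looked like this (the paths are just illustrative):

dd if=/dev/urandom of=/mnt/test/random.800m bs=1M count=800     # create the 800 MB source file on the disk
dd if=/mnt/test/random.800m of=/mnt/test/random.copy bs=1M      # read it back and write it to the same disk again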

The old VDI (created from a dd image, then shrunk; contains a whole OS):

Disk /dev/sda: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x3ce81e89

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14          76      506047+  82  Linux swap / Solaris
/dev/sda3              77        2509    19543072+  83  Linux
/dev/sda4            2510       10011    60259815    5  Extended
/dev/sda5            2510        4334    14659281   83  Linux
/dev/sda6            4335       10011    45600471   83  Linux



Virtual Image Size: 76.69 GB
Actual Image Size: 34.83 GB

Write Partition: /dev/sda3
Write speed: repeated multiple times, varied from 10 MB/s to 15 MB/s

The new VDI (created for testing purposes):

Disk /dev/sda: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xb1c94df2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         261     2096451   83  Linux


Virtual Image Size: 2.0 GB
Actual Image Size: 1.63 GB

Write Partition: /dev/sda1
Write speed: repeated multiple times, varied from 25 MB/s to 40 MB/s

All partitions used for writing have an ext3 filesystem.

I'm actually not sure why there is such a difference, and there clearly shouldn't be :(

If you think any of these test conditions is non-ideal or if you need further tests/information, please reply and tell me what else is required :)

comment:5 by Christian Holler, 16 years ago

Edit: I repeated the tests on the smaller image and also reformatted the partition, and it seems that the write speed decreases, down to what I get on the first image. Maybe you can try to reproduce this yourself; I'm not sure if my test conditions are ideal, but it seems there is a problem somewhere...

comment:6 by Bob Bednar, 16 years ago

If your Linux system is using a 2.4.x kernel build, there seems to be a serious issue with heavy-throughput I/O operations. Which kernel build is your Linux guest running?
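
You can check the guest kernel version with the standard uname command:

uname -r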

comment:7 by Bill McGonigle, 15 years ago

I came across this bug trying to figure out why my I/O is so slow on VirtualBox, and this bug had some good ideas, but left me more confused in the end. Starting out with just raw disk access (a partition set up as a raw disk):

$ sudo time dd if=/dev/zero of=/dev/sdb bs=4M count=1000 conv=notrunc oflag=direct
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 117.208 s, 35.8 MB/s
0.01user 2.17system 1:57.28elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+8192000outputs (0major+1245minor)pagefaults 0swaps

this is with direct, no filesystem (using my regular swap partition for lack of other space), and then:

$ sudo time dd if=/dev/zero of=/dev/sdb bs=4M count=1000 conv=notrunc
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 111.973 s, 37.5 MB/s
0.02user 10.00system 1:52.05elapsed 8%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+8192000outputs (0major+1237minor)pagefaults 0swaps

same test, without the direct flag. Not a huge difference, probably within the bounds of measurement error, and performance is pretty reasonable (this is a 7200 RPM Seagate 2.5" hard drive). However, when bringing a filesystem (ext3 on another partition set up as a raw disk) into play:

$ sudo time dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 9162.21 s, 458 kB/s
0.04user 9.72system 2:33:03elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
424inputs+488outputs (2major+1235minor)pagefaults 0swaps

ouch, wow, that hurts. And then when trying to replicate the direct test on a filesystem:

$ sudo time dd if=/dev/zero of=/test.file bs=4M count=1000 conv=notrunc oflag=direct
dd: opening `/test.file': Invalid argument

No matter where I try to put the file, if it's not a block device dd won't work in direct mode. So, I'm somewhat at a loss for how to replicate the test.

This is on kernel: 2.6.27.12-170.2.5.fc10.i686

running on VirtualBox 2.1.2, under Mac OS X 10.4.11, on a 2.16 GHz Core 2 Duo with VT-x enabled, and disks configured as SATA devices.

Please let me know if I can provide any additional useful data.

comment:8 by aeichner, 14 years ago

Resolution: fixed
Status: new → closed

Please check with a newer release of VirtualBox (3.2 contains a completely reworked I/O subsystem) whether this is still a problem, and reopen if necessary.

