VirtualBox

Opened 3 years ago

Last modified 3 years ago

#20432 new defect

Cannot write to raw disk under Windows 10 21H1 host in VBox 6.1.22

Reported by: cheater
Owned by:
Component: virtual disk
Version: VirtualBox 6.1.22
Keywords: block cache BLKCACHE_IOERR raw disk data loss dataloss data_loss
Cc:
Guest type: Linux
Host type: Windows

Description

Severe bug: results in complete loss of all data written.

Quick description: I have several mechanical Western Digital Purple SATA drives hooked up via a Perc H310 SAS HBA (SAS controllers accept SATA drives). Windows 10 host, Ubuntu guest. Some of the drives can be written to, but one, which is larger and formatted with NTFS, can't. The ones that can be written to carry ext4 and btrfs partitions. This probably isn't relevant, but who knows.

I had been using VBox 6.1.15 on a Windows 10 host (updated to the latest non-insider version, 21H1, build 19043.1081) with an Ubuntu 20.04 guest and guest additions installed. Recently, I tried using it with a 14 TB WDC WD140PURZ-850A82 (Western Digital Purple 14 TB mechanical SATA drive) connected via a Perc H310 SAS HBA. The drive is completely new; I created a single partition spanning the whole disk, formatted it to NTFS under Windows 10, took the disk offline, and then created a VMDK using

PS C:\WINDOWS\system32> & 'C:\Program Files\Oracle\VirtualBox\VBoxManage.exe' internalcommands createrawvmdk -filename C:\VirtualBox\SAS-PhysicalDrive5-14tb.vmdk -rawdisk \\.\PhysicalDrive5

and then set it to write-through in the VBox GUI. Finally, I attached it to a 16-port SATA AHCI controller. The "Use Host I/O Cache" box was originally empty (unchecked), and I would get this error:

The I/O cache encountered an error while updating data in medium "ahci-0-4" (rc=VERR_WRITE_PROTECT). Make sure there is enough free space on the disk and that the disk is working properly. Operation can be resumed afterwards.
Error ID: BLKCACHE_IOERR
Severity: Non-Fatal Error
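One thing worth ruling out at this point is a bad descriptor. The sketch below (check_raw_vmdk is my own hypothetical helper, not a VBoxManage command) checks that a raw VMDK descriptor produced by createrawvmdk targets the whole intended physical drive:

```shell
# Hedged sketch: sanity-check that a raw VMDK descriptor produced by
# "VBoxManage internalcommands createrawvmdk" targets the whole physical
# drive we expect. check_raw_vmdk is a hypothetical helper, not a VBox tool.
check_raw_vmdk() {
    vmdk="$1"    # path to the .vmdk descriptor (it is plain text)
    drive="$2"   # e.g. PhysicalDrive5
    # A whole-disk raw descriptor uses createType="fullDevice" ...
    grep -q 'createType="fullDevice"' "$vmdk" || return 1
    # ... and a FLAT extent line naming \\.\PhysicalDriveN
    grep -qF "\\\\.\\$drive" "$vmdk" || return 1
}
```

Running it against the attached SAS-PhysicalDrive5-14tb.vmdk should match the drive number created on the host.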

If I DO check the "Use Host I/O Cache" box, then the data seems to get written and no error is produced - but when I unmount and remount the partition, the data is gone.
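A repeatable way to pin down the vanishing writes is to write a file, checksum it, flush, cycle the mount, and checksum again. A sketch (the mount point and device are assumptions; it defaults to /tmp so it can run anywhere):

```shell
# Hedged sketch: verify whether a write survives a flush (and, on the real
# disk, an unmount/remount). MOUNTPOINT defaults to /tmp; point it at the
# actual mount point of /dev/sde2 when testing the affected drive.
MOUNTPOINT="${MOUNTPOINT:-/tmp}"
TESTFILE="$MOUNTPOINT/vbox-write-test.bin"

dd if=/dev/urandom of="$TESTFILE" bs=1M count=8 2>/dev/null
BEFORE=$(md5sum "$TESTFILE" | cut -d' ' -f1)
sync    # force the guest page cache to flush
# Against the real disk, cycle the mount here:
#   umount "$MOUNTPOINT" && mount /dev/sde2 "$MOUNTPOINT"
AFTER=$(md5sum "$TESTFILE" | cut -d' ' -f1)

[ "$BEFORE" = "$AFTER" ] && echo "OK: data survived" || echo "FAIL: data changed or vanished"
```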

What's weird is that other drives from the same family (WD Purple), on the same SAS controller, don't have this issue - but they're Linux-formatted (ext4 or btrfs) and smaller (under 10 TB).

If I turn on "Use Host I/O Cache", I am able to write to the disk. However, three issues remain:

  1. Performance is very poor: on the order of 20 MB/s, while these drives manage 150 MB/s under the Windows 10 host on the same controller.
  2. Running sync in the guest takes more than 5 minutes and hangs all disk I/O in the meantime.
  3. The data isn't ACTUALLY written to the disk.
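The throughput figures above can be reproduced with a plain dd run, done the same way in guest and host for an apples-to-apples comparison. A sketch (TARGET is my assumption; point it at a file on the affected partition):

```shell
# Hedged sketch: measure sequential write throughput with dd. conv=fsync
# makes dd flush data before reporting, so the printed rate includes the
# actual disk write, not just filling the page cache.
TARGET="${TARGET:-/tmp/dd-throughput-test.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```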

Things I tried include:

  • Formatting the disk again; it didn't help.
  • Checking whether writing to the disk works when it's mounted on the host (not offlined). It does: files written to the disk remain after offlining and re-onlining it (which essentially unmounts and remounts the disk on Windows 10).
  • Updating to VBox 6.1.22 with the latest guest additions; no improvement.
  • Creating a new LsiLogic SAS controller in the VM and putting the drives under it; no improvement.

Finally, I always run the VBox GUI as admin, which is required for raw disk access to work in the first place - for all drives, including the ones where writing does work.

Also, I've noticed that when I format the disk to a single NTFS partition under Windows 10, it actually creates two partitions: a tiny ~15 MB "Microsoft reserved partition" (/dev/sde1 under Linux for me) and one spanning the rest of the disk (/dev/sde2), which is where the data lives. The Windows 10 Disk Management panel shows only one partition. When I mount /dev/sde2, I can see the System Volume Information directory and some other files I put on the drive while it was mounted directly on the host, so I know it's the right partition.

P.S. I previously re-opened #14461 thinking it was the same bug, but it's not: in #14461 there was a workaround to get data written to the disk, whereas in my situation the same workaround results in data being written to a black hole. Could someone please close that ticket?

Attachments (3)

SAS-PhysicalDrive5-14tb.vmdk (549 bytes) - added by cheater 3 years ago.
vmdk of the 14 TB drive that doesn't work under VBox 6.1.22
SAS-PhysicalDrive0.vmdk (549 bytes) - added by cheater 3 years ago.
a drive where writing does work under 6.1.22
linux_only.vmdk (820 bytes) - added by cheater 3 years ago.
Another vmdk where writing under 6.1.22 does work without Host I/O Cache enabled. Bear in mind this creates raw access to one partition only.


Change History (5)

by cheater, 3 years ago

Attachment: SAS-PhysicalDrive5-14tb.vmdk added

vmdk of the 14 TB drive that doesn't work under VBox 6.1.22

by cheater, 3 years ago

Attachment: SAS-PhysicalDrive0.vmdk added

a drive where writing does work under 6.1.22

by cheater, 3 years ago

Attachment: linux_only.vmdk added

Another vmdk where writing under 6.1.22 does work without Host I/O Cache enabled. Bear in mind this creates raw access to one partition only.

comment:1 by cheater, 3 years ago

Added VMDKs:

SAS-PhysicalDrive5-14tb.vmdk - affected drive, 14 TB, NTFS, WD Purple mechanical drive, Perc H310 SAS controller; in VBox it's attached to a SATA AHCI controller.

SAS-PhysicalDrive0.vmdk - drive not affected, writing works fine, also WD Purple, same SAS controller, size < 10TB, ext4, same SATA AHCI controller in VBox.

linux_only.vmdk - drive not affected, writing works fine, single partition out of a larger drive, partition on an M.2 PCIE SSD, same SATA AHCI controller in VBox.

Last edited 3 years ago by cheater

comment:2 by cheater, 3 years ago

Note that mounting the drive on the Windows 10 host by onlining it, setting up a shared folder on it, and then writing data to that folder from within the Linux guest works perfectly well: data is written at 110-120 MB/s and remains after offlining and onlining the disk in the Windows Disk Management panel.

