[vbox-dev] [PATCH] Performance improvement of vboxsf by using page cache

Tatsuya Karino kokuban.kumasan at gmail.com
Fri Aug 7 10:43:29 UTC 2015


Hi Klaus,

Thank you for your reply.

I agree that handling concurrent changes between the host and the guest is
very important.

Here is my approach to data consistency:

* The function that revalidates the inode cache (e.g. sf_inode_revalidate in
vboxsf, HgfsRevalidate in vmhgfs) also clears the page cache.
  Since sf_inode_revalidate refreshes the inode cache at the right time,
  this implementation can clear the page cache using the correct stat
information.

* If either the file size or the last modification time differs, the page
cache is cleared.
  I think this is enough to verify that the host file has not changed.
  This is the same approach as vmhgfs.

* Before the guest side uses the page cache, sf_inode_revalidate is
executed.
  The page cache is used when performing read, write, and seek.
  So I call sf_inode_revalidate from the file_operations read_iter,
write_iter, splice_read, and llseek handlers before calling the
generic_* functions.


As a test of concurrent changes to a file,
I confirmed that the page cache on the guest side is cleared when the
file is updated on the host side:

1. On the guest side, put the file data in the page cache. (cat file)
2. On the host side, edit the file. (echo "host edit" >> file)
3. On the guest side, after editing the file, make sure that the change from
the host side was preserved correctly. (echo "guest edit" >> file && cat file)

Admittedly, I have not tested under more demanding conditions.
If there is a better way, I will run the tests again.

Thanks,


2015-08-06 23:51 GMT+09:00 Klaus Espenlaub <klaus.espenlaub at oracle.com>:

> Hi Tatsuya,
>
> On 06.08.2015 16:20, Tatsuya Karino wrote:
> > Here is another patch for the improvement of vboxsf.
> >
> > The key point is using page cache in the system call read/write.
> > I created this patch by reference to the other file system(e.g. ext4,
> > nfs, vmhgfs).
>
> Now that gets a lot more interesting... I hope you didn't look too much
> at the normal 'block based' filesystems, as vboxsf is logically much
> closer to a network filesystem like nfs/cifs/vmhgfs. It has to deal with
> others making concurrent changes, in the vboxsf case the host can change
> directory and file contents (from what I understood accesses are less
> critical but there can be still delays due to writes being handled later
> by the page cache)..
>
> I'm not too worried about truly concurrent accesses (as handling them
> 100% correct is very expensive, and the current cache-less
> implementation doesn't meet this very difficult requirement either).
>
> This optimization has a big potential for presenting stale data if any
> of the assumptions it makes aren't true.
>
> > In this patch...
> > * To use page cache, generic_file_read_iter/generic_file_write_iter will
> > be called from read/write.
> > * the page cache will be cleared in sf_inode_revalidate if necessary.
> > * sf_inode_revalidate will be called in a cache sensitive function like
> > read_iter, write_iter.
> > * sf_inode_revalidate will not call unnecessary stat.
>
> Can you say a bit more how you approached data consistency? Benchmark
> results are one thing, but correct operation is far more important.
>
> Klaus
>
> > I tested this code on a kernel 3.19 Linux guest.
> > This patch is provided under the MIT license.
> >
> >
> > # before apply this patch
> > vagrant at debian-jessie:/vagrant$ for i in {0..2}; do time cat
> > large-file>/dev/null ; done
> >
> > real  0m0.045s
> > user  0m0.000s
> > sys 0m0.020s
> >
> > real  0m0.031s
> > user  0m0.000s
> > sys 0m0.016s
> >
> > real  0m0.045s
> > user  0m0.000s
> > sys 0m0.020s
> >
> >
> > # after apply this patch
> > vagrant at debian-jessie:/vagrant$ for i in {0..2}; do time cat
> > large-file>/dev/null ; done
> >
> > real  0m0.140s
> > user  0m0.000s
> > sys 0m0.072s
> >
> > real  0m0.004s
> > user  0m0.000s
> > sys 0m0.000s
> >
> > real  0m0.004s
> > user  0m0.000s
> > sys 0m0.000s
> >
> > Thanks,
> >
> > --
> > Tatsuya Karino
>
> _______________________________________________
> vbox-dev mailing list
> vbox-dev at virtualbox.org
> https://www.virtualbox.org/mailman/listinfo/vbox-dev
>



-- 
Tatsuya Karino

