Subject: Re: BTRFS: Unbelievably slow with kvm/qemu
On Mon, Aug 30, 2010 at 8:59 AM, K. Richard Pixley <rich@noir.com> wrote:
> On 8/29/10 17:14, Josef Bacik wrote:
>>
>> On Sun, Aug 29, 2010 at 09:34:29PM +0200, Tomasz Chmielewski wrote:
>>>
>>> Christoph Hellwig wrote:
>>>>
>>>> There are a lot of variables when using qemu.
>>>>
>>>> The most important ones are:
>>>>
>>>>  - the cache mode on the device.  The default is cache=writethrough,
>>>>    which is not quite optimal.  You generally do want to use cache=none,
>>>>    which uses O_DIRECT in qemu (see the example below).
>>>>  - whether the backing image is sparse or not.
>>>>  - whether barriers are used, both in the host and in the guest.
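>>>>
>>>> For example, a minimal sketch (the image path, format, and the rest
>>>> of the command line are placeholders):
>>>>
>>>>   qemu-system-x86_64 -drive file=/path/to/guest.img,format=raw,cache=none ...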
>>>
>>> I noticed that when btrfs is mounted with default options, writing
>>> e.g. 10 GB in the KVM guest using a qcow2 image causes 20 GB to be
>>> written on the host (as measured with "iostat -m -p").
>>>
>>> With ext4 (or btrfs mounted with nodatacow), a 10 GB write in the
>>> guest produces a 10 GB write on the host.
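>>>
>>> (One way to watch this on the host is something like
>>> "iostat -m -p sda 5", which prints per-partition transfer totals in
>>> megabytes every five seconds; the device name and interval here are
>>> just examples.)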
>>
>> Whoa, 20 GB?  That doesn't sound right; COW should just mean we get
>> quite a bit of fragmentation, not that everything is written twice.
>> What exactly is qemu doing?  Thanks,
>
> Make sure you build your file system with "mkfs.btrfs -m single -d single
> /dev/whatever".  You may well be writing duplicate copies of everything.
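>
> To check whether an existing filesystem already duplicates data or
> metadata (the mount point below is a placeholder), something like:
>
>   btrfs filesystem df /mnt
>
> should show the allocation profiles (e.g. DUP vs. single) for data
> and metadata.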
>
There is little reason not to use duplicate metadata. Only small
files (less than 2 KB) are stored inline in the metadata tree, so
there is no need to worry about images being written twice unless
data duplication was explicitly enabled at mkfs time.
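
For completeness, the nodatacow case measured above corresponds to
mounting the filesystem with that option (device and mount point are
placeholders):

  mount -o nodatacow /dev/sdX /mnt

Note that nodatacow also implies no data checksumming for those
writes.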
