 
Date: 2010-03-17
From: Chris Webb
Subject: Re: [PATCH][RFC/T/D] Unmapped page cache control - via boot parameter
Avi Kivity <avi@redhat.com> writes:

> On 03/15/2010 10:23 PM, Chris Webb wrote:
>
> >Wasteful duplication of page cache between guest and host notwithstanding,
> >turning on cache=writeback is a spectacular performance win for our guests.
>
> Is this with qcow2, raw file, or direct volume access?

This is with direct access to logical volumes. No file systems or qcow2 in
the stack. Our typical host has a couple of SATA disks, combined in md
RAID1, chopped up into volumes with LVM2 (really just dm linear targets).
Performance measured outside qemu is excellent. Inside qemu-kvm it is also fine
until multiple guests try to access their drives at once, at which point
everything starts to grind badly.
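
For concreteness, each guest gets one of these logical volumes as its whole
disk, passed to qemu-kvm with something along these lines (the volume group
and guest names are made up for illustration, but we do use virtio and
cache=writeback):

  qemu-kvm -drive file=/dev/vg0/guest-a,if=virtio,cache=writeback ...

so there is no image format or host file system anywhere in the path, just
the dm linear target over md.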

> I can understand it for qcow2, but for direct volume access this
> shouldn't happen. The guest schedules as many writes as it can,
> followed by a sync. The host (and disk) can then reschedule them
> whether they are in the writeback cache or in the block layer, and
> must sync in the same way once completed.

I don't really understand what's going on here, but I wonder if the
underlying problem might be that all the O_DIRECT/O_SYNC writes from the
guests go down into the same block device at the bottom of the device mapper
stack, and thus can't be reordered with respect to one another. For our
purposes,

  Guest A    Guest B      Guest A    Guest B      Guest A    Guest B
  write A1                write A1                           write B1
             write B1     write A2                write A1
  write A2                           write B1     write A2

are all equivalent, but the system isn't allowed to reorder in this way
because there isn't a separate request queue for each logical volume, just
the one at the bottom. (I don't know whether nested request queues would
behave remotely reasonably either, though!)
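
If it would help to test that hypothesis outside qemu, something like the
sketch below is roughly what I have in mind: run one copy per "guest" volume
and see whether the md device underneath grinds in the same way. The device
paths are illustrative, and of course it scribbles over the volume.

  /*
   * Rough sketch, not our actual workload: issue O_DIRECT|O_SYNC writes one
   * at a time to a logical volume, approximating the per-guest pattern
   * described above.  Run one instance per volume, e.g.
   *   ./odwrite /dev/vg0/guest-a & ./odwrite /dev/vg0/guest-b &
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s /dev/vgX/lvY\n", argv[0]);
          return 1;
      }

      int fd = open(argv[1], O_WRONLY | O_DIRECT | O_SYNC);
      if (fd < 0) { perror("open"); return 1; }

      void *buf;
      if (posix_memalign(&buf, 4096, 4096)) {
          fprintf(stderr, "posix_memalign failed\n");
          return 1;
      }
      memset(buf, 0xaa, 4096);

      /* Each write completes before the next is issued, so the queue at the
       * bottom of the stack never holds more than one of our requests and
       * has nothing from this process to reorder. */
      for (int i = 0; i < 10000; i++) {
          off_t off = (off_t)(rand() % 10000) * 4096;
          if (pwrite(fd, buf, 4096, off) != 4096) { perror("pwrite"); return 1; }
      }

      close(fd);
      return 0;
  }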

Also, if my guest kernel issues (say) three small writes, one at the start
of the disk, one in the middle, one at the end, and then does a flush, can
virtio really express this as one non-contiguous O_DIRECT write (the three
components of which can be reordered by the elevator with respect to one
another) rather than three distinct O_DIRECT writes which can't be permuted?
Can qemu issue a write like that? cache=writeback + flush allows this to be
optimised by the block layer in the normal way.
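
To make that concrete, the pattern I'm hoping the writeback path can
effectively reach is something like the sketch below, using Linux AIO
directly (device path and offsets are made up, error handling pared down):
three scattered writes handed to the block layer as one batch which the
elevator can reorder internally, then a single flush.

  /*
   * Sketch: three scattered O_DIRECT writes submitted as one batch via
   * Linux AIO (libaio), followed by one flush.  The elevator is free to
   * reorder the three writes relative to each other; only the fdatasync()
   * is an ordering point.  Build with -laio.
   */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <libaio.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/vg0/guest-a", O_WRONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      io_context_t ctx = 0;
      if (io_setup(8, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

      /* One write near the start of the disk, one in the middle, one near
       * the end (offsets illustrative, sector-aligned). */
      long long offsets[3] = { 0, 1LL << 30, 2LL << 30 };
      struct iocb cbs[3], *cbp[3];
      void *buf;
      if (posix_memalign(&buf, 4096, 4096)) {
          fprintf(stderr, "posix_memalign failed\n");
          return 1;
      }
      memset(buf, 0x55, 4096);

      for (int i = 0; i < 3; i++) {
          io_prep_pwrite(&cbs[i], fd, buf, 4096, offsets[i]);
          cbp[i] = &cbs[i];
      }

      /* All three requests reach the block layer together... */
      if (io_submit(ctx, 3, cbp) != 3) { fprintf(stderr, "io_submit failed\n"); return 1; }

      struct io_event events[3];
      if (io_getevents(ctx, 3, 3, events, NULL) != 3) {
          fprintf(stderr, "io_getevents failed\n");
          return 1;
      }

      /* ...and only now do we issue the equivalent of the guest's flush. */
      if (fdatasync(fd)) { perror("fdatasync"); return 1; }

      io_destroy(ctx);
      close(fd);
      return 0;
  }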

Cheers,

Chris.

