Date: 2011-06-15
From: Anthony Liguori
Subject: Re: [ANNOUNCE] Native Linux KVM tool v2

    On 06/15/2011 05:20 PM, Anthony Liguori wrote:
    > On 06/15/2011 05:07 PM, Alexander Graf wrote:
    >>
    >> On 16.06.2011, at 00:04, Anthony Liguori wrote:
    >>
    >>> On 06/15/2011 03:13 PM, Prasad Joshi wrote:
    >>>> On Wed, Jun 15, 2011 at 6:10 PM, Pekka Enberg<penberg@kernel.org>
    >>>> wrote:
    >>>>> On Wed, Jun 15, 2011 at 7:30 PM, Avi Kivity<avi@redhat.com> wrote:
    >>>>>> On 06/15/2011 06:53 PM, Pekka Enberg wrote:
    >>>>>>>
    >>>>>>> - Fast QCOW2 image read-write support beating Qemu in fio
    >>>>>>> benchmarks. See the following URL for test result details:
    >>>>>>> https://gist.github.com/1026888
    >>>>>>
    >>>>>> This is surprising. How is qemu invoked?
    >>>>>
    >>>>> Prasad will have the details. Please note that the above are with Qemu
    >>>>> defaults, which don't use virtio. The results with virtio are a little
    >>>>> better but still in favor of tools/kvm.
    >>>>>
    >>>>
    >>>> The qcow2 image used for testing was copied on to /dev/shm to avoid
    >>>> the disk delays in performance measurement.
    >>>>
    >>>> QEMU was invoked with following parameters
    >>>>
    >>>> $ qemu-system-x86_64 -hda <disk image on hard disk> -hdb
    >>>> /dev/shm/test.qcow2 -m 1024M
    >>>
    >>> Looking more closely at native KVM tools, you would need to use the
    >>> following invocation to have an apples-to-apples comparison:
    >>>
    >>> qemu-system-x86_64 -drive
    >>> file=/dev/shm/test.qcow2,cache=writeback,if=virtio
    >>
    >> Wouldn't this still be using threaded AIO mode? I thought KVM tools
    >> used native AIO?
    >
    > Nope. The relevant code is:
    >
    >>         /* blk device ?*/
    >>         disk = blkdev__probe(filename, &st);
    >>         if (disk)
    >>                 return disk;
    >>
    >>         fd = open(filename, readonly ? O_RDONLY : O_RDWR);
    >>         if (fd < 0)
    >>                 return NULL;
    >>
    >>         /* qcow image ?*/
    >>         disk = qcow_probe(fd, readonly);
    >>         if (disk)
    >>                 return disk;
    >>
    >>         /* raw image ?*/
    >>         disk = raw_image__probe(fd, &st, readonly);
    >>         if (disk)
    >>                 return disk;
    >
    > It uses a synchronous I/O model similar to qcow2 in QEMU with what I
    > assume is a global lock that's outside of the actual implementation.
    >
    > I think it lacks some of the caching that Kevin's added recently, though,
    > so I assume that if QEMU was run with cache=writeback, it would probably
    > do quite a bit better than the native KVM tool.
    >
    > It also turns out that while they have the infrastructure to deal with
    > FLUSH, they don't implement it for qcow2 :-/
    >
    > So even if the guest does an fsync(), the native KVM tool will never
    > actually sync the data to disk...
    >
    > That's probably why it's fast, it doesn't preserve data integrity :(

    Actually, I misread the code. It does unstable writes but it does do
    fsync() on FLUSH.
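
    For illustration, here is a minimal, hypothetical C sketch of the
    behaviour described above (this is not the tools/kvm code; the request
    structure and handler are made up): ordinary writes complete once the
    data is in the host page cache, and only a virtio-blk FLUSH request,
    e.g. triggered by fsync() in the guest, forces it to disk with a host
    fsync().

    /*
     * Hypothetical sketch of "unstable writes, but fsync() on FLUSH".
     * Not the tools/kvm implementation.
     */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Request types with the values used in <linux/virtio_blk.h>. */
    #define VIRTIO_BLK_T_IN     0
    #define VIRTIO_BLK_T_OUT    1
    #define VIRTIO_BLK_T_FLUSH  4

    /* Hypothetical stand-in for a virtio-blk request. */
    struct blk_request {
            uint32_t        type;
            uint64_t        sector;
            void            *buf;
            size_t          len;
    };

    static int handle_request(int fd, struct blk_request *req)
    {
            switch (req->type) {
            case VIRTIO_BLK_T_OUT:
                    /*
                     * "Unstable" write: the data lands in the host page cache
                     * and completion is reported without forcing it to disk.
                     */
                    return pwrite(fd, req->buf, req->len, req->sector * 512) < 0 ? -1 : 0;
            case VIRTIO_BLK_T_IN:
                    return pread(fd, req->buf, req->len, req->sector * 512) < 0 ? -1 : 0;
            case VIRTIO_BLK_T_FLUSH:
                    /*
                     * Only an explicit FLUSH from the guest (e.g. the result
                     * of a guest-side fsync()) syncs the data to stable
                     * storage.
                     */
                    return fsync(fd);
            default:
                    return -1;
            }
    }

    int main(void)
    {
            char data[512] = "hello";
            struct blk_request write_req = { VIRTIO_BLK_T_OUT, 0, data, sizeof(data) };
            struct blk_request flush_req = { VIRTIO_BLK_T_FLUSH, 0, NULL, 0 };
            int fd = open("/tmp/test-disk.img", O_RDWR | O_CREAT, 0644);

            if (fd < 0)
                    return 1;
            handle_request(fd, &write_req); /* acked, but may only be in the page cache */
            handle_request(fd, &flush_req); /* now actually synced to disk */
            close(fd);
            return 0;
    }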

    Regards,

    Anthony Liguori

