    Date: 2009-06-18
    From: Gregory Haskins
    Subject: Re: [KVM PATCH v7 2/2] KVM: add iosignalfd support
    Avi Kivity wrote:
    > On 06/16/2009 04:42 PM, Gregory Haskins wrote:
    >> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an
    >> eventfd signal when written to by a guest. Host userspace can
    >> register any arbitrary IO address with a corresponding eventfd and
    >> then pass the eventfd to a specific end-point of interest for
    >> handling.
    >>
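    (A minimal sketch of the registration flow described above, in the
    spirit of the patch but with hypothetical struct/ioctl names; the real
    ABI is whatever this series defines:)

        #include <stdint.h>
        #include <sys/eventfd.h>
        #include <sys/ioctl.h>

        #define KVM_IOSIGNALFD 0  /* placeholder; the real ioctl number
                                     comes from the patch's uapi header */

        /* Hypothetical ABI; field names are illustrative only. */
        struct kvm_iosignalfd {
                uint64_t addr;   /* PIO/MMIO address the guest writes to */
                uint32_t len;    /* width of the registered region */
                int32_t  fd;     /* eventfd to signal on a matching write */
                uint32_t flags;  /* e.g. PIO vs MMIO, data-match vs wildcard */
        };

        int register_doorbell(int vm_fd, uint64_t addr)
        {
                int efd = eventfd(0, 0);  /* counter-style eventfd, starts at 0 */
                struct kvm_iosignalfd args = {
                        .addr = addr, .len = 4, .fd = efd, .flags = 0,
                };
                if (ioctl(vm_fd, KVM_IOSIGNALFD, &args) < 0)
                        return -1;
                return efd;  /* pass this fd to the end-point of interest */
        }
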
    >> Normal IO requires a blocking round-trip since the operation may
    >> cause side-effects in the emulated model or may return data to the
    >> caller. Therefore, an IO in KVM traps from the guest to the host,
    >> causes a VMX/SVM "heavy-weight" exit back to userspace, and is
    >> ultimately serviced by qemu's device model synchronously before
    >> returning control back to the vcpu.
    >>
    >> However, there is a subclass of IO which acts purely as a trigger
    >> for other IO (such as to kick off an out-of-band DMA request, etc).
    >> For these patterns, the synchronous call is particularly expensive
    >> since we really only want to get our notification transmitted
    >> asynchronously and return as quickly as possible. All the
    >> synchronous infrastructure that ensures proper data-dependencies are
    >> met in the normal IO case is just unnecessary overhead for
    >> signalling. It adds computational load on the system, as well as
    >> latency to the signalling path.
    >>
    >> Therefore, we provide a mechanism for registration of an in-kernel
    >> trigger point that allows the VCPU to only require a very brief,
    >> lightweight exit just long enough to signal an eventfd. This also
    >> means that any clients compatible with the eventfd interface (which
    >> includes userspace and kernelspace equally well) can now register to
    >> be notified. The end result should be a more flexible and
    >> higher-performance notification API for the backend KVM hypervisor
    >> and peripheral components.
    >>
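    (The consumer side needs nothing beyond standard eventfd(2) semantics:
    the fd becomes readable when signalled, and read() returns the 8-byte
    counter and resets it. A sketch:)

        #include <poll.h>
        #include <stdint.h>
        #include <unistd.h>

        /* Block until the doorbell fires, then collect the ring count. */
        uint64_t wait_for_doorbell(int efd)
        {
                struct pollfd pfd = { .fd = efd, .events = POLLIN };
                poll(&pfd, 1, -1);                /* wait for a signal */
                uint64_t count = 0;
                read(efd, &count, sizeof(count)); /* read + reset counter */
                return count;                     /* rings since last read */
        }
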
    >> To test this theory, we built a test-harness called "doorbell". This
    >> module has a function called "doorbell_ring()" which simply
    >> increments a counter each time the doorbell is signaled. It supports
    >> signalling from either an eventfd or an ioctl().
    >>
    >> We then wired up two paths to the doorbell: one through QEMU via a
    >> registered IO region and the doorbell ioctl(), and the other
    >> directly via iosignalfd.
    >>
    >> You can download this test harness here:
    >>
    >> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
    >>
    >> The measured results are as follows:
    >>
    >> qemu-mmio: 110000 iops, 9.09us rtt
    >> iosignalfd-mmio: 200100 iops, 5.00us rtt
    >> iosignalfd-pio: 367300 iops, 2.72us rtt
    >>
    >> I didn't measure qemu-pio, because I would have to figure out how to
    >> register a PIO region with qemu's device model, and I got lazy.
    >> However, for now we can extrapolate: based on the NULLIO-run deltas
    >> of +2.56us for MMIO and -350ns for HC, we get:
    >>
    >> qemu-pio: 153139 iops, 6.53us rtt
    >> iosignalfd-hc: 412585 iops, 2.37us rtt
    >>
    >> These are just for fun for now, until I can gather more data.
    >>
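    (Spelling out the extrapolation, assuming the NULLIO deltas are
    relative to PIO: qemu-pio ~= 9.09us - 2.56us = 6.53us and
    iosignalfd-hc ~= 2.72us - 0.35us = 2.37us, with iops taken as roughly
    1/rtt.)
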
    >> Here is a graph for your convenience:
    >>
    >> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
    >>
    >> The conclusion to draw is that we save about 4us by skipping the
    >> userspace hop.
    >>
    >>
    >> +config KVM_MAX_IOSIGNALFD_ITEMS
    >> +       int "Maximum IOSIGNALFD items per address"
    >> +       depends on KVM
    >> +       default "32"
    >> +       ---help---
    >> +         This option influences the maximum number of fd's per PIO/MMIO
    >> +         address that are allowed to register
    >> +
    >>
    >
    > Is there a per-VM limit on iosignalfds? If not, userspace can exhaust
    > kernel memory that way.

    Yeah, it's already naturally limited by the maximum number of MMIO/PIO
    devices we can register (today this is 6 per VM), so with the default
    of 32 items per address we're bounded at 192 iosignalfds per VM. I
    should have documented that fact somewhere, though.
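
    (Not shown in the excerpt above, but presumably the assign path also
    enforces the per-address cap from KVM_MAX_IOSIGNALFD_ITEMS with a
    check along these lines; group->nr_items is an illustrative name:)

        /* Reject registration once a group is already full. */
        if (group->nr_items >= CONFIG_KVM_MAX_IOSIGNALFD_ITEMS)
                return -ENOSPC;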

    >
    > We could just limit the total number of iosignalfds; it's somewhat
    > more natural.
    >> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
    >> index daece36..a4b427f 100644
    >> --- a/virt/kvm/Kconfig
    >> +++ b/virt/kvm/Kconfig
    >> @@ -12,3 +12,5 @@ config HAVE_KVM_EVENTFD
    >>
    >>  config KVM_APIC_ARCHITECTURE
    >>         bool
    >> +
    >> +
    >>
    >
    > Spurious, please drop.

    Ack

    >> +/*
    >> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
    >> + * aggregates one or more iosignalfd_items. Each item points to
    >> + * exactly one eventfd, and can be registered to trigger on any write
    >> + * to the group (wildcard), or to a write of a specific value. If more
    >> + * than one item is to be supported, the addr/len ranges must all be
    >> + * identical in the group. If a trigger value is to be supported on a
    >> + * particular item, the group range must be exactly the width of the
    >> + * trigger.
    >> + */
    >> +
    >> +struct _iosignalfd_item {
    >> +       struct list_head list;
    >> +       struct file *file;
    >> +       unsigned char *match;
    >> +       struct rcu_head rcu;
    >> +};
    >>
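    (Illustrating the design note above with made-up values: a group
    registered at 0x1000 with len 2 could hold one item matching the
    16-bit value 0x0001 plus a wildcard item; a guest write of 0x0001
    would then signal both eventfds, while a write of 0x0002 would signal
    only the wildcard. An item with a different addr/len range could not
    join that group.)
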
    >
    > Why not u64 match?

    Well, tbh it was primarily because it was starting to make my head hurt
    w.r.t. endianness ;). For instance, if someone wanted a u16 match, I
    would presumably have to understand the relevant endianness of the u64
    so I can compare the appropriate bytes against the data register coming
    in from the [MM|P]IO. Using a pointer, I simply copy/memcmp the
    specified number of bytes and never have to worry about endianness.

    As a minor bonus, item->match == NULL tells me it's a wildcard. If I
    had item->match as a u64, I'd need a different state flag for
    "wildcard". NBD, but thought I would point it out.
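
    (To make that width-agnostic comparison concrete, here is a sketch of
    the idea, building on the struct quoted above; this is not the literal
    patch code:)

        /* item->match stores the trigger value as raw bytes captured at
         * registration time, so a plain memcmp works for 1/2/4/8-byte
         * accesses alike, with no endianness conversion anywhere. */
        static bool bytes_match(struct _iosignalfd_item *item,
                                const void *val, int len)
        {
                return item->match && !memcmp(item->match, val, len);
        }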

    >
    >> +static int
    >> +iosignalfd_is_match(struct _iosignalfd_group *group,
    >> +                   struct _iosignalfd_item *item,
    >> +                   const void *val,
    >> +                   int len)
    >> +{
    >> +       if (!item->match)
    >> +               /* wildcard is a hit */
    >> +               return true;
    >> +
    >> +       if (len != group->length)
    >> +               /* mis-matched length is a miss */
    >> +               return false;
    >>
    >
    > Should check length before match (i.e. require correctly sized access).

    Perhaps, but my thinking is that group->length only matters for
    data-matching. You could conceivably register a larger window if you
    are using all wildcards (e.g. a 4-byte doorbell region where a write
    of any width should trigger). Not sure if this is really useful, but
    it's the reason the code is the way it is today.

    Thanks Avi,
    -Greg

