    Subject: Re: [RFC PATCH 00/17] virtual-bus
    Rusty Russell wrote:
    > On Wednesday 01 April 2009 22:05:39 Gregory Haskins wrote:
    >> Rusty Russell wrote:
    >>> I could dig through the code, but I'll ask directly: what heuristic do
    >>> you use for notification prevention in your venet_tap driver?
    >> I am not 100% sure I know what you mean by "notification prevention",
    >> but let me take a stab at it.
    > Good stab :)
    >> I only signal back to the guest to reclaim its skbs every 10
    >> packets, or if I drain the queue, whichever comes first (note to self:
    >> make this # configurable).
    > Good stab, though I was referring to guest->host signals (I'll assume
    > you use a similar scheme there).
    Oh, actually no. The guest->host path only uses the "bidir napi" thing
    I mentioned. The first packet hypercalls the host immediately with no
    delay, schedules my host-side "rx" thread, disables subsequent
    hypercalls, and returns to the guest. If the guest sends another
    packet before the host has drained all queued skbs (in this case, 1),
    it simply queues it to the ring with no additional hypercall. As in
    typical napi ingress processing, the host leaves hypercalls disabled
    until it finds the ring empty, so this can continue indefinitely until
    the host catches up. Once the ring is fully drained, the host
    re-enables the hypercall channel and subsequent transmissions repeat
    the original process.
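
    For illustration, here is a minimal C sketch of that guest->host path.
    All names here are made up for the example; the real logic lives in
    the venet guest driver and the venet-tap host model:

        /*
         * Sketch of the "bidir napi" transmit path: the first packet of
         * a burst signals the host, the rest ride the ring for free.
         */
        #include <stdbool.h>

        struct tx_ring {
                int  pending;           /* skbs queued, not yet drained   */
                bool hypercall_enabled; /* may the guest signal the host? */
        };

        /* hypothetical stand-in for the real guest->host signal */
        static void hypercall_notify_host(struct tx_ring *r)
        {
                /* would schedule the host-side "rx" thread */
        }

        /* Guest: queue a packet, signaling only on the first of a burst. */
        static void guest_xmit(struct tx_ring *r)
        {
                r->pending++;                   /* queue skb to the ring  */

                if (r->hypercall_enabled) {
                        r->hypercall_enabled = false; /* mask signals     */
                        hypercall_notify_host(r);     /* first pkt: signal */
                }
                /* else: host is already draining; no extra hypercall */
        }

        /* Host: drain until the ring is empty, then re-arm signaling. */
        static void host_drain(struct tx_ring *r)
        {
                while (r->pending > 0)
                        r->pending--;           /* consume one queued skb */

                r->hypercall_enabled = true;    /* ring empty: re-arm     */

                /*
                 * A real implementation would re-check the ring here to
                 * close the race with a packet queued in the meantime,
                 * as in standard napi.
                 */
        }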

    In summary, infrequent transmissions will tend to have one hypercall per
    packet. Bursty transmissions will have one hypercall per burst
    (starting immediately with the first packet). In both cases, we
    minimize the latency to get the first packet "out the door".

    So really the only place I am using a funky heuristic is the modulo-10
    operation for tx-complete going host->guest. The rest are fairly
    standard napi event-mitigation techniques.

    > You use a number of packets, qemu uses a timer (150usec), lguest uses a
    > variable timer (starting at 500usec, dropping by 1 every time but increasing
    > by 10 every time we get fewer packets than last time).
    > So, if the guest sends two packets and stops, you'll hang indefinitely?
    Shouldn't, no. The host waits at most 10 packets before sending a
    tx-complete interrupt, and if it drains the queue before that
    threshold is reached it sends a tx-complete immediately, right before
    it re-enables hypercalls. So there is no hang, and there is no delay.
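
    As a rough sketch of that tx-complete policy (hypothetical names, not
    the driver's actual symbols):

        /*
         * Interrupt the guest at most once per 10 reclaimed skbs, plus
         * once more when the queue fully drains.
         */
        #define TXC_BATCH 10

        struct txc_state {
                int since_last_irq; /* skbs reclaimed since last interrupt */
        };

        static void inject_tx_complete_irq(void)
        {
                /* would raise the tx-complete interrupt in the guest */
        }

        static void host_reclaim(struct txc_state *s, int queued)
        {
                while (queued-- > 0) {
                        /* free one transmitted skb, then maybe signal */
                        if (++s->since_last_irq == TXC_BATCH) {
                                inject_tx_complete_irq(); /* batch case */
                                s->since_last_irq = 0;
                        }
                }

                if (s->since_last_irq) {
                        inject_tx_complete_irq(); /* drained: signal now */
                        s->since_last_irq = 0;
                }
                /* ...then re-enable the hypercall channel (links below) */
        }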

    For reference, here is the modulo-10 signaling
    (./drivers/vbus/devices/venet-tap.c, line 584):
    ;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l584

    Here is the one that happens after the queue is fully drained (line 593):
    ;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l593

    and finally, here is where I re-enable hypercalls (or system calls if
    the driver is in userspace, etc.) (line 600):
    ;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l600

    > That's why we use a timer, otherwise any mitigation scheme has this issue.

    I'm not sure I follow. I don't think I need a timer at all with this
    scheme, but perhaps I am missing something?
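
    For contrast, here is a rough C sketch of the lguest-style variable
    timer Rusty describes above; the names are purely illustrative:

        /*
         * Adaptive mitigation timeout: drop by 1us every round, and grow
         * by 10us whenever a round sees fewer packets than the last one.
         */
        struct mitigation_timer {
                unsigned int usec;   /* current timeout, starts at 500 */
                int last_pkts;       /* packets seen in previous round */
        };

        static void timer_round(struct mitigation_timer *t, int pkts)
        {
                if (t->usec > 1)
                        t->usec -= 1;     /* dropping by 1 every time   */
                if (pkts < t->last_pkts)
                        t->usec += 10;    /* fewer than last time: +10  */

                t->last_pkts = pkts;
        }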

    Thanks Rusty!
