    Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
    On Thu, Jun 03, 2010 at 10:52:51AM +0200, Andi Kleen wrote:
    > > Fyi - I have an early patch ready to address this issue. Basically I am using
    > > host-kernel memory (mmap'ed into guest as io-memory via ivshmem driver) to hint
    > > host whenever guest is in spin-lock'ed section, which is read by host scheduler
    > > to defer preemption.
    >
    > Looks like a nice simple way to handle this for the kernel.

    The idea is not new. It has been discussed for example at [1].
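
    For concreteness, here is a minimal sketch of what the guest side of such a
    hint could look like. All names here (pv_hint, pv_hint_lock_enter etc.) are
    illustrative, not from the actual patch; the only assumption is a per-vcpu
    flag living in the page mmap'ed from the host via ivshmem:

    #include <linux/io.h>
    #include <linux/smp.h>

    /* One hint slot per vcpu, living in the page shared with the host
     * (the ivshmem BAR mapped as io-memory). Hypothetical layout. */
    struct pv_lock_hint {
    	u32 in_spinlock;	/* non-zero while in a spin-lock'ed section */
    };

    static struct pv_lock_hint __iomem *pv_hint;	/* set up at ivshmem probe */

    static inline void pv_hint_lock_enter(void)
    {
    	writel(1, &pv_hint[smp_processor_id()].in_spinlock);
    }

    static inline void pv_hint_lock_exit(void)
    {
    	writel(0, &pv_hint[smp_processor_id()].in_spinlock);
    }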

    > However I suspect user space will hit the same issue sooner
    > or later. I assume your way is not easily extensible to futexes?

    I had thought that most userspace lock implementations avoid spinning for long
    periods - i.e. they spin for a short while and sleep beyond a threshold?
    If that is the case, we shouldn't be burning a lot of cycles unnecessarily
    spinning in userspace.
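
    (For reference, the adaptive pattern I mean looks roughly like the sketch
    below - spin a bounded number of iterations, then sleep via futex. This is a
    deliberately simplified two-state lock, so unlock always issues a wake, and
    SPIN_LIMIT is an illustrative threshold:

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SPIN_LIMIT 100			/* illustrative */

    static void adaptive_lock(atomic_int *l)
    {
    	for (int i = 0; i < SPIN_LIMIT; i++) {
    		int free = 0;
    		if (atomic_compare_exchange_weak(l, &free, 1))
    			return;		/* got the lock while spinning */
    	}
    	for (;;) {			/* beyond the threshold: sleep */
    		int free = 0;
    		if (atomic_compare_exchange_weak(l, &free, 1))
    			return;
    		syscall(SYS_futex, l, FUTEX_WAIT, 1, NULL, NULL, 0);
    	}
    }

    static void adaptive_unlock(atomic_int *l)
    {
    	atomic_store(l, 0);
    	syscall(SYS_futex, l, FUTEX_WAKE, 1, NULL, NULL, 0);
    })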

    > So do you defer during the whole spinlock region or just during the spin?
    >
    > I assume the first?

    My current implementation just blindly defers by a tick and checks whether it is
    safe to preempt at the next tick - otherwise it gives more grace ticks until the
    threshold is crossed (after which we forcibly preempt it).
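
    In pseudo-C, the per-tick decision is roughly the following (all names are
    hypothetical; the real check would sit in the host scheduler's tick path):

    #define MAX_GRACE_TICKS	3	/* illustrative threshold */

    /* Called from the scheduler tick when a vcpu thread is due to be
     * preempted; returns true to defer preemption by one more tick. */
    static bool vcpu_defer_preempt(struct vcpu *v)
    {
    	if (!readl(&v->hint->in_spinlock)) {	/* not in a lock'ed section */
    		v->grace_ticks = 0;
    		return false;
    	}
    	if (v->grace_ticks >= MAX_GRACE_TICKS)
    		return false;		/* threshold crossed: preempt anyway */
    	v->grace_ticks++;		/* grant one more grace tick */
    	return true;
    }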

    In the future, I was thinking that the host scheduler could hint back to the guest
    that it was given some "grace" time, which the guest can use to yield when it
    comes out of the locked section.
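
    Extending the guest-side sketch above, and assuming the shared struct grows
    a (hypothetical) grace_used field that the host fills in, the unlock path
    could then yield voluntarily:

    static inline void pv_hint_lock_exit(void)
    {
    	int cpu = smp_processor_id();

    	writel(0, &pv_hint[cpu].in_spinlock);
    	if (readl(&pv_hint[cpu].grace_used)) {	/* host says we overstayed */
    		writel(0, &pv_hint[cpu].grace_used);
    		yield();	/* give the cpu back promptly */
    	}
    }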

    - vatsa

    [1] http://l4ka.org/publications/2004/Towards-Scalable-Multiprocessor-Virtual-Machines-VM04.pdf

