    Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
    On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
    > > Guest side:
    > >
    > > static inline void spin_lock(spinlock_t *lock)
    > > {
    > >         raw_spin_lock(&lock->rlock);
    > > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
    > > }
    > >
    > > static inline void spin_unlock(spinlock_t *lock)
    > > {
    > > +       __get_cpu_var(gh_vcpu_ptr)->defer_preempt--;
    > >         raw_spin_unlock(&lock->rlock);
    > > }
    > >
    > > [similar changes to other spinlock variants]
    > Great, this is a nice way to improve it.
    > You might want to consider playing with first taking a ticket, and
    > then, if we fail to acquire the lock immediately, incrementing
    > defer_preempt before we start spinning.
    > The downside of this would be if we waste all our slice on spinning
    > and then get preempted in the critical section. But with ticket locks
    > you can easily see how many entries are in the queue in front of you.
    > So you could experiment with starting to defer preemption when we
    > notice we are getting toward the head of the queue.
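
    (Purely as an illustration of that "near the head of the queue" variation -
    none of this is from the patch: my_ticket_lock, DEFER_WHEN_WITHIN and the
    field layout below are made up, and only the gh_vcpu_ptr->defer_preempt
    counter is taken from the snippet quoted above. The idea is to take the
    ticket first and only start asking for preemption deferral once few waiters
    remain ahead of us:)

    struct my_ticket_lock {
            unsigned short next;    /* next ticket to hand out */
            unsigned short owner;   /* ticket currently holding the lock */
    };

    #define DEFER_WHEN_WITHIN 2     /* arbitrary threshold, for illustration */

    static inline void my_ticket_lock_defer(struct my_ticket_lock *lock)
    {
            unsigned short me = __sync_fetch_and_add(&lock->next, 1);
            int deferring = 0;

            while (ACCESS_ONCE(lock->owner) != me) {
                    /* start deferring preemption only near the head of the queue */
                    if (!deferring &&
                        (unsigned short)(me - ACCESS_ONCE(lock->owner)) <= DEFER_WHEN_WITHIN) {
                            __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
                            deferring = 1;
                    }
                    cpu_relax();
            }

            /* got the lock without deferring yet: defer for the critical section only */
            if (!deferring)
                    __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
    }

    Either way the counter is bumped exactly once per acquisition, so the
    matching unlock can still decrement it once, as in the quoted spin_unlock().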

    Mm - my goal is to avoid long spin times in the first place (because the
    owning vcpu was descheduled at an unfortunate time, i.e. while it was holding
    a lock). In that sense, I am targeting preemption-defer of the lock *holder*
    rather than of the lock acquirer. So ideally, whenever somebody tries to grab
    a lock it should be free most of the time; it can be held only if the owner is
    currently running, which means we won't have to spin too long for the lock.

    > Have you also looked at how s390 checks if the owning vcpu is running
    > and, if so, spins, otherwise yields to the hypervisor? Something like
    > turning it into an adaptive lock. This could be applicable as well.
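
    (A rough sketch of that adaptive idea, not s390's actual implementation:
    vcpu_is_running() and hypervisor_yield_to() are assumed hypervisor
    interfaces, and the made-up lock below records its holder's vcpu id so
    waiters know whom to yield to:)

    struct my_adaptive_lock {
            unsigned short next;    /* next ticket to hand out */
            unsigned short owner;   /* ticket currently holding the lock */
            int holder_vcpu;        /* vcpu id recorded by the current holder */
    };

    static inline void my_adaptive_spin_lock(struct my_adaptive_lock *lock)
    {
            unsigned short me = __sync_fetch_and_add(&lock->next, 1);

            while (ACCESS_ONCE(lock->owner) != me) {
                    /* holder_vcpu may be slightly stale; it is only a hint */
                    int holder = ACCESS_ONCE(lock->holder_vcpu);

                    if (vcpu_is_running(holder))
                            cpu_relax();                    /* holder is on a cpu: keep spinning */
                    else
                            hypervisor_yield_to(holder);    /* holder preempted: donate our slice */
            }
            lock->holder_vcpu = smp_processor_id();         /* record ourselves as holder */
    }

    Spinning while the holder is on a physical cpu keeps the common case cheap;
    yielding only when the holder has been preempted avoids burning the rest of
    our slice on a lock that cannot be released yet.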

    I don't think even s390 does adaptive spinlocks. Also, afaik s390 z/VM does
    gang scheduling of vcpus, which reduces the severity of this problem very
    much - essentially the lock acquirer and holder are running simultaneously on
    different cpus all the time. Gang scheduling is on my list of things to look
    at much later (although I have been warned that it's a scalability
    nightmare!).

    - vatsa
