    Subject: [tip:x86/spinlocks] xen, pvticketlock: Allow interrupts to be enabled while blocking
    Commit-ID:  38eddb85894561ab32c1de4171e1c1582f0efa78
    Gitweb: http://git.kernel.org/tip/38eddb85894561ab32c1de4171e1c1582f0efa78
    Author: Jeremy Fitzhardinge <jeremy@goop.org>
    AuthorDate: Tue, 6 Aug 2013 17:14:12 +0530
    Committer: H. Peter Anvin <hpa@linux.intel.com>
    CommitDate: Thu, 8 Aug 2013 16:07:01 -0700

    xen, pvticketlock: Allow interrupts to be enabled while blocking

    If interrupts were enabled when taking the spinlock, we can leave them
    enabled while blocking to get the lock.

    Enabling interrupts while waiting is safe even in the nested case:
    if we take an interrupt before entering the poll, and the handler
    takes a spinlock which ends up going into the slow state
    (overwriting this CPU's per-cpu "lock" and "want" values), then when
    the interrupt handler returns the event channel will still be
    pending, so the poll returns immediately and we drop back out to the
    main spinlock loop to retry with freshly set up (lock, want) values.

    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/20130806114412.20643.84141.sendpatchset@codeblue.in.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    ---
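
    [Note: the invariant the patch depends on is the (lock, want)
    publication order: a waiter must never be observable with the new
    "lock" pointer paired with a stale "want" ticket. Below is a minimal
    stand-alone sketch of that protocol, using C11 release/acquire
    atomics as a portable stand-in for the kernel's smp_wmb() and read
    ordering; the names (struct waiter, publish, matches) are
    illustrative, not from the patch.]

    #include <stdatomic.h>
    #include <stddef.h>

    /* Illustrative stand-in for struct xen_lock_waiting. */
    struct waiter {
    	_Atomic(void *) lock;	/* non-NULL only while 'want' is valid */
    	atomic_uint	want;
    };

    /*
     * Waiter side: clear "lock" first so no observer can pair the old
     * pointer with the new ticket, update "want" while the entry is
     * invisible, then re-publish.  Each release store keeps the stores
     * before it visible first, mirroring the smp_wmb() calls in
     * xen_lock_spinning().
     */
    static void publish(struct waiter *w, void *lock, unsigned int want)
    {
    	atomic_store_explicit(&w->lock, NULL, memory_order_release);
    	atomic_store_explicit(&w->want, want, memory_order_release);
    	atomic_store_explicit(&w->lock, lock, memory_order_release);
    }

    /*
     * Kicker side: read "lock" before "want", mirroring the write
     * order.  If the acquire load sees the published pointer, it is
     * guaranteed to also see the "want" value written before it.
     */
    static int matches(struct waiter *w, void *lock, unsigned int next)
    {
    	if (atomic_load_explicit(&w->lock, memory_order_acquire) != lock)
    		return 0;
    	return atomic_load_explicit(&w->want, memory_order_relaxed) == next;
    }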
    arch/x86/xen/spinlock.c | 46 ++++++++++++++++++++++++++++++++++++++++------
    1 file changed, 40 insertions(+), 6 deletions(-)

    diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
    index 546112e..0438b93 100644
    --- a/arch/x86/xen/spinlock.c
    +++ b/arch/x86/xen/spinlock.c
    @@ -142,7 +142,20 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
     	 * partially setup state.
     	 */
     	local_irq_save(flags);
    -
    +	/*
    +	 * We don't really care if we're overwriting some other
    +	 * (lock,want) pair, as that would mean that we're currently
    +	 * in an interrupt context, and the outer context had
    +	 * interrupts enabled.  That has already kicked the VCPU out
    +	 * of xen_poll_irq(), so it will just return spuriously and
    +	 * retry with newly setup (lock,want).
    +	 *
    +	 * The ordering protocol on this is that the "lock" pointer
    +	 * may only be set non-NULL if the "want" ticket is correct.
    +	 * If we're updating "want", we must first clear "lock".
    +	 */
    +	w->lock = NULL;
    +	smp_wmb();
     	w->want = want;
     	smp_wmb();
     	w->lock = lock;
    @@ -157,24 +170,43 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
     	/* Only check lock once pending cleared */
     	barrier();
     
    -	/* Mark entry to slowpath before doing the pickup test to make
    -	   sure we don't deadlock with an unlocker. */
    +	/*
    +	 * Mark entry to slowpath before doing the pickup test to make
    +	 * sure we don't deadlock with an unlocker.
    +	 */
     	__ticket_enter_slowpath(lock);
     
    -	/* check again make sure it didn't become free while
    -	   we weren't looking */
    +	/*
    +	 * check again make sure it didn't become free while
    +	 * we weren't looking
    +	 */
     	if (ACCESS_ONCE(lock->tickets.head) == want) {
     		add_stats(TAKEN_SLOW_PICKUP, 1);
     		goto out;
     	}
    +
    +	/* Allow interrupts while blocked */
    +	local_irq_restore(flags);
    +
    +	/*
    +	 * If an interrupt happens here, it will leave the wakeup irq
    +	 * pending, which will cause xen_poll_irq() to return
    +	 * immediately.
    +	 */
    +
     	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
     	xen_poll_irq(irq);
     	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
    +
    +	local_irq_save(flags);
    +
     	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
     out:
     	cpumask_clear_cpu(cpu, &waiting_cpus);
     	w->lock = NULL;
    +
     	local_irq_restore(flags);
    +
     	spin_time_accum_blocked(start);
     }
     PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
    @@ -188,7 +220,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
     	for_each_cpu(cpu, &waiting_cpus) {
     		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
     
    -		if (w->lock == lock && w->want == next) {
    +		/* Make sure we read lock before want */
    +		if (ACCESS_ONCE(w->lock) == lock &&
    +		    ACCESS_ONCE(w->want) == next) {
     			add_stats(RELEASED_SLOW_KICKED, 1);
     			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
     			break;
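
    [Note: ACCESS_ONCE() above is the volatile-cast macro from
    include/linux/compiler.h in kernels of this era. It forces the
    compiler to emit exactly one load at that point and keeps volatile
    accesses in source order, but emits no hardware barrier; on x86 the
    CPU does not reorder loads against other loads, so constraining the
    compiler is sufficient for the read-lock-before-want rule in
    xen_unlock_kick(). A small stand-alone sketch, where the struct and
    function names are made up for illustration:]

    /* ACCESS_ONCE() as defined in include/linux/compiler.h at the time. */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    /* Illustrative stand-in for the per-cpu waiter record. */
    struct waiting_entry {
    	void *lock;
    	unsigned int want;
    };

    static int kick_matches(struct waiting_entry *w, void *lock,
    			unsigned int next)
    {
    	/*
    	 * The volatile casts force two real loads; the short-circuit
    	 * "&&" additionally sequences the "want" load after a
    	 * successful "lock" comparison.
    	 */
    	return ACCESS_ONCE(w->lock) == lock && ACCESS_ONCE(w->want) == next;
    }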
