    Subject: [PATCH] membarrier: Document scheduler barrier requirements
    Document the membarrier requirement for a full memory barrier in
    __schedule(), after coming from user-space and before storing to
    rq->curr. This barrier is currently provided by
    smp_mb__before_spinlock().
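    For reference, the membarrier side of this pairing looks roughly as
    follows. This is a simplified sketch based on the expedited private
    command series this patch applies on top of, not the exact
    implementation; names and details may differ:

    	/* Simplified sketch of membarrier_private_expedited(), for illustration. */
    	static void membarrier_private_expedited(void)
    	{
    		cpumask_var_t tmpmask;
    		int cpu;

    		/* ... allocation and single-CPU fast path elided ... */
    		smp_mb();	/* system call entry is not a full barrier */
    		cpus_read_lock();
    		for_each_online_cpu(cpu) {
    			struct task_struct *p;

    			if (cpu == raw_smp_processor_id())
    				continue;
    			rcu_read_lock();
    			/*
    			 * Skipping a CPU whose rq->curr is not a thread of
    			 * current->mm relies on the scheduler executing a
    			 * full barrier between that CPU's last user-space
    			 * accesses and its store to rq->curr, which is the
    			 * requirement documented by this patch.
    			 */
    			p = task_rcu_dereference(&cpu_rq(cpu)->curr);
    			if (p && p->mm == current->mm)
    				__cpumask_set_cpu(cpu, tmpmask);
    			rcu_read_unlock();
    		}
    		/* ipi_mb() executes smp_mb() on each selected CPU. */
    		smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
    		cpus_read_unlock();
    		smp_mb();	/* order subsequent accesses after the IPIs */
    	}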

    Document that membarrier requires a full memory barrier on the
    transition from a kernel thread to a userspace thread; it is
    currently provided implicitly by the atomic_dec_and_test() in
    mmdrop().
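    As an illustrative scenario (not part of the patch): on x86, when a
    kernel thread that borrowed a user mm as its active_mm is followed
    by a user thread of that same mm, the switch can look like this:

    	CPU0: kernel thread K runs with K->mm == NULL and
    	      K->active_mm == mm (borrowed via mmgrab())
    	CPU0: __schedule() picks user thread U with U->mm == mm
    	        context_switch()
    	          -> switch_mm_irqs_off(mm, mm, U) takes the same-mm
    	             fast path: no mm_cpumask update, no load_cr3(),
    	             hence no full barrier from this step
    	        finish_task_switch()
    	          -> mmdrop(mm) drops the borrowed reference;
    	             atomic_dec_and_test() provides the full barrier
    	             membarrier needs before U runs user-space code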

    On x86, the full barrier required in switch_mm_irqs_off() is
    currently provided both by the mm_cpumask update operations and by
    load_cr3(). However, since load_cr3() may be done lazily in the
    future, move the documentation of the membarrier requirement from
    load_cr3() to cpumask_set_cpu().
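    For completeness, a hypothetical user-space caller that these
    barriers ultimately serve. The MEMBARRIER_CMD_* names below are
    assumed from the expedited private command series and may change:

    	/* Hypothetical example, not part of this patch. */
    	#include <linux/membarrier.h>
    	#include <sys/syscall.h>
    	#include <unistd.h>
    	#include <stdio.h>

    	static int membarrier(int cmd, int flags)
    	{
    		return syscall(__NR_membarrier, cmd, flags);
    	}

    	int main(void)
    	{
    		int cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);

    		if (cmds < 0 || !(cmds & MEMBARRIER_CMD_PRIVATE_EXPEDITED)) {
    			fprintf(stderr, "private expedited membarrier not supported\n");
    			return 1;
    		}
    		/*
    		 * Acts as if every running thread of this process had issued
    		 * a full memory barrier; the scheduler-side barriers described
    		 * above keep that guarantee across context switches.
    		 */
    		membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
    		return 0;
    	}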

    [ Applies on top of "membarrier: expedited private command (v4)". ]

    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    CC: Peter Zijlstra <peterz@infradead.org>
    CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@gmail.com>
    CC: Andrew Hunter <ahh@google.com>
    CC: Maged Michael <maged.michael@gmail.com>
    CC: gromer@google.com
    CC: Avi Kivity <avi@scylladb.com>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Paul Mackerras <paulus@samba.org>
    CC: Michael Ellerman <mpe@ellerman.id.au>
    CC: Dave Watson <davejwatson@fb.com>
    ---
    arch/x86/mm/tlb.c        | 7 +++++--
    include/linux/sched/mm.h | 4 ++++
    kernel/sched/core.c      | 9 +++++++++
    3 files changed, 18 insertions(+), 2 deletions(-)

    diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
    index bb103d693f33..a79e72691cf1 100644
    --- a/arch/x86/mm/tlb.c
    +++ b/arch/x86/mm/tlb.c
    @@ -105,6 +105,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
     	this_cpu_write(cpu_tlbstate.loaded_mm, next);
     
     	WARN_ON_ONCE(cpumask_test_cpu(cpu, mm_cpumask(next)));
    +	/*
    +	 * The full memory barrier implied by mm_cpumask update
    +	 * operations is required by the membarrier system call.
    +	 */
     	cpumask_set_cpu(cpu, mm_cpumask(next));
     
     	/*
    @@ -132,8 +136,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
     	 * due to instruction fetches or for no reason at all,
     	 * and neither LOCK nor MFENCE orders them.
     	 * Fortunately, load_cr3() is serializing and gives the
    -	 * ordering guarantee we need. This full barrier is also
    -	 * required by the membarrier system call.
    +	 * ordering guarantee we need.
     	 */
     	load_cr3(next->pgd);
     
    diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
    index 2b24a6974847..fe29d06e2800 100644
    --- a/include/linux/sched/mm.h
    +++ b/include/linux/sched/mm.h
    @@ -38,6 +38,10 @@ static inline void mmgrab(struct mm_struct *mm)
     extern void __mmdrop(struct mm_struct *);
     static inline void mmdrop(struct mm_struct *mm)
     {
    +	/*
    +	 * The full memory barrier implied by atomic_dec_and_test() is
    +	 * required by the membarrier system call.
    +	 */
     	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
     		__mmdrop(mm);
     }
    diff --git a/kernel/sched/core.c b/kernel/sched/core.c
    index 4f85494620d7..0e36d9960d91 100644
    --- a/kernel/sched/core.c
    +++ b/kernel/sched/core.c
    @@ -2649,6 +2649,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
     	finish_arch_post_lock_switch();
     
     	fire_sched_in_preempt_notifiers(current);
    +	/*
    +	 * When transitioning from a kernel thread to a userspace
    +	 * thread, mmdrop()'s implicit full barrier is required by the
    +	 * membarrier system call, because the current active_mm can
    +	 * become the current mm without going through switch_mm().
    +	 */
     	if (mm)
     		mmdrop(mm);
     	if (unlikely(prev_state == TASK_DEAD)) {
    @@ -3290,6 +3296,9 @@ static void __sched notrace __schedule(bool preempt)
     	 * Make sure that signal_pending_state()->signal_pending() below
     	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
     	 * done by the caller to avoid the race with signal_wake_up().
    +	 *
    +	 * The membarrier system call requires a full memory barrier
    +	 * after coming from user-space, before storing to rq->curr.
     	 */
     	smp_mb__before_spinlock();
     	rq_lock(rq, &rf);
    --
    2.11.0