Subject: Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
On Wed, Jan 28, 2015 at 04:50:44PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 01:24:11AM +0100, Frederic Weisbecker wrote:
> > +++ b/kernel/sched/core.c
> > @@ -2760,7 +2760,6 @@ static void __sched __schedule(void)
> > struct rq *rq;
> > int cpu;
> >
> > - preempt_disable();
>
> Implies barrier();
>
> > cpu = smp_processor_id();
> > rq = cpu_rq(cpu);
> > rcu_note_context_switch();
> > @@ -2822,8 +2821,6 @@ static void __sched __schedule(void)
> > raw_spin_unlock_irq(&rq->lock);
> >
> > post_schedule(rq);
>
> implies barrier();
>
> > -
> > - sched_preempt_enable_no_resched();
> > }
> >
> > static inline void sched_submit_work(struct task_struct *tsk)
>
> > @@ -2883,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
> > static void preempt_schedule_common(void)
> > {
> > do {
> > - preempt_count_add(PREEMPT_ACTIVE);
> > + preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
>
> Does _NOT_ imply barrier();
>
> > __schedule();
> > - preempt_count_sub(PREEMPT_ACTIVE);
>
> idem.

It looks like preempt_count_add/inc() mostly implies entering a context that we want
to be seen right away (thus wants a barrier() after it), while preempt_count_sub/dec()
mostly wants the previous work to be visible before re-enabling interrupts, preemption,
etc. (thus wants a barrier() before it).
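
For reference, this is roughly how the current callers place the barrier() today
(from include/linux/preempt.h, ignoring the debug and tracing variants):

/* entering the context: barrier() after the increment */
#define preempt_disable() \
do { \
	preempt_count_inc(); \
	barrier(); \
} while (0)

/* leaving the context: barrier() before the decrement */
#define sched_preempt_enable_no_resched() \
do { \
	barrier(); \
	preempt_count_dec(); \
} while (0)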

So maybe these functions (the non-underscored ones) should imply the barrier()
themselves rather than their callers (preempt_disable() and the others). Inline
functions instead of macros would do the trick (if header hell lets us do that),
along the lines of the sketch below.
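
Roughly something like this in include/linux/preempt.h (completely untested sketch;
the CONFIG_DEBUG_PREEMPT out-of-line variants would need the same treatment):

/* entering the new context must be visible to the compiler right away */
static __always_inline void preempt_count_add(int val)
{
	__preempt_count_add(val);
	barrier();
}

/* previous work must be visible before the context is left */
static __always_inline void preempt_count_sub(int val)
{
	barrier();
	__preempt_count_sub(val);
}

preempt_count_inc()/dec() keep mapping to these, and preempt_disable() and friends
could then drop their explicit barrier().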

Note that the underscored implementations are all inline currently, so this happens to
work by chance for direct calls to preempt_count_add/sub() outside preempt_disable().
If the non-underscored callers are turned into inline functions too, I don't expect
performance issues.

What do you think? Does it make sense?


>
> > + preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> >
> > /*
> > * Check again in case we missed a preemption opportunity

