Subject: Re: [RFC PATCH v2] membarrier: expedited private command
On Fri, 28 Jul 2017 17:06:53 +0000 (UTC)
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:

> ----- On Jul 28, 2017, at 12:46 PM, Peter Zijlstra peterz@infradead.org wrote:
>
> > On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
> >> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
> >> > they'll probably need a barrier in switch_mm() in any case.
> >>
> >> As I pointed out in my other email, I plan to do this:
> >>
> >> --- a/kernel/sched/core.c
> >> +++ b/kernel/sched/core.c
> >> @@ -2636,6 +2636,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
> >> vtime_task_switch(prev);
> >> perf_event_task_sched_in(prev, current);
> >
> > Here would place it _inside_ the rq->lock, which seems to make more
> > sense given the purpose of the barrier, but either way works given its
> > definition.
>
> Given its naming "...after_unlock_lock", I thought it would be clearer to put
> it after the unlock. Anyway, this barrier does not seem to be used to ensure
> the release barrier per se (unlock already has release semantics), but rather
> to ensure a full memory barrier wrt memory accesses that are synchronized by
> means other than this lock.
>
> >
> >> finish_lock_switch(rq, prev);
> >
> > You could put the whole thing inside IS_ENABLED(CONFIG_SYSMEMBARRIER) or
> > something.
>
> I'm tempted to wait until we hear from powerpc maintainers, so we learn
> whether they deeply care about this extra barrier in finish_task_switch()
> before making it conditional on CONFIG_MEMBARRIER.
>
> Having a guaranteed barrier after context switch on all architectures may
> have other uses.

I haven't had time to read the thread and understand exactly why you need
this extra barrier; I'll do that next week. Thanks for cc'ing us on it.

An smp_mb is pretty expensive on powerpc CPUs. Removing the sync from
switch_to increased thread switch performance by 2-3%. Putting it in
switch_mm may be a little less painful, but we still have to weigh it
against the benefit of this new functionality. Would that be a net win
for the average end-user? Seems unlikely.

But we also don't want to lose sys_membarrier completely. Would it be too
painful to make MEMBARRIER_CMD_PRIVATE_EXPEDITED return an error, or have it
fall back to a slower path, if we decide not to implement it?
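
If it simply returned an error where the architecture opts out, callers could
fall back themselves. A minimal user-space sketch (assumptions: the command
names come from the uapi header proposed by this RFC, which is not in any
released kernel yet, and barrier_my_threads() is a hypothetical helper):

#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>	/* needs the RFC's proposed uapi additions */

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

/*
 * Prefer the expedited private command; fall back to the existing
 * (slower, but always available) shared command when the expedited
 * one is not implemented on this kernel or architecture.
 */
static int barrier_my_threads(void)
{
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) == 0)
		return 0;
	if (errno == EINVAL || errno == ENOSYS)
		return membarrier(MEMBARRIER_CMD_SHARED, 0);
	return -1;
}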

Thanks,
Nick
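
For context, the placement being debated in the quoted hunk looks roughly like
the sketch below (illustrative only, not Mathieu's actual patch). It shows
Peter's suggested variant: the barrier issued via smp_mb__after_unlock_lock()
while rq->lock is still held, i.e. just before finish_lock_switch() drops the
lock, and guarded by the IS_ENABLED() conditional Peter suggests (using the
existing CONFIG_MEMBARRIER symbol Mathieu refers to). Mathieu's version places
the same barrier after the unlock instead, which also works given the helper's
definition: it expands to a full barrier on powerpc and to nothing on
architectures where unlock followed by lock is already fully ordered.

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ ... @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	vtime_task_switch(prev);
 	perf_event_task_sched_in(prev, current);
+	/*
+	 * The membarrier system call (private expedited) requires a
+	 * full memory barrier after storing to rq->curr and before
+	 * this task returns to user-space, so that it can skip IPIs
+	 * to CPUs observed to be running a different mm.
+	 */
+	if (IS_ENABLED(CONFIG_MEMBARRIER))
+		smp_mb__after_unlock_lock();
 	finish_lock_switch(rq, prev);

Placing it inside the lock ties the barrier to the rq->curr update it is meant
to order; placing it after the unlock matches the helper's name. Either way,
the cost question raised above is essentially a powerpc one.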
