Subject: [PATCHv2] mm/slub: fix lockups on PREEMPT && !SMP kernels
On Fri, Mar 13, 2015 at 04:29:23PM +0000, Christoph Lameter wrote:
> On Fri, 13 Mar 2015, Mark Rutland wrote:
>
> > */
> > - do {
> > - tid = this_cpu_read(s->cpu_slab->tid);
> > - c = raw_cpu_ptr(s->cpu_slab);
> > - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
> > + c = raw_cpu_ptr(s->cpu_slab);
> > + tid = READ_ONCE(c->tid);
> >
>
> Ok that works for the !SMP case. What about SMP and PREEMPT now?
>
> And yes code like this was deemed safe for years and the race condition is
> very subtle and difficult to trigger (also given that PREEMPT is rarely
> used these days).

Do you mean the case where READ_ONCE(c->tid) gives us a torn value that
happens to match a future value of c->tid?

Are you happy to retain the loop, but with the c->tid access replaced
with READ_ONCE(c->tid)?

If torn values are an issue for the raw access, then the loop doesn't
guarantee that c and tid were read on the same CPU, as the comment above
it implies. The cmpxchg saves us, given that a torn value would have to
match some currently active tid; I guess the loop just saves a pointless
cmpxchg when it does detect a mismatch.
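
For reference, a stripped-down sketch of how the sampled tid feeds into
the commit step of the slab_alloc_node() fast path (node matching, stats
and prefetching elided, so treat this as an illustration rather than the
exact mainline code):

redo:
	c = raw_cpu_ptr(s->cpu_slab);
	tid = READ_ONCE(c->tid);

	object = c->freelist;
	if (unlikely(!object))
		return __slab_alloc(s, gfpflags, node, addr, c);

	/*
	 * The double-word cmpxchg only succeeds if both the freelist
	 * pointer and the tid on the executing CPU still hold the
	 * values sampled above, so a stale or torn tid simply makes
	 * it fail and we retry.
	 */
	if (unlikely(!this_cpu_cmpxchg_double(
			s->cpu_slab->freelist, s->cpu_slab->tid,
			object, tid,
			get_freepointer_safe(s, object), next_tid(tid))))
		goto redo;

	return object;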

Mark.

