Subject: Re: [PATCH] lib: fix data race in rhashtable_rehash_one
On Mon, 2015-09-21 at 17:10 +0200, Dmitry Vyukov wrote:
> On Mon, Sep 21, 2015 at 4:51 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > On Mon, 2015-09-21 at 06:31 -0700, Eric Dumazet wrote:
> >> On Mon, 2015-09-21 at 10:08 +0200, Dmitry Vyukov wrote:
> >> > rhashtable_rehash_one() uses plain writes to update entry->next,
> >> > while it is being concurrently accessed by readers.
> >> > Unfortunately, the compiler is within its rights to (for example) use
> >> > byte-at-a-time writes to update the pointer, which would fatally confuse
> >> > concurrent readers.
> >> >
> >> This is bogus.
> >>
> >> 1) Linux certainly would not work if some arch or compiler did not do
> >> single-word writes. WRITE_ONCE() would not help at all to enforce this.
> >>
> >> 2) If the new node is not yet visible, we don't care what kind of
> >> operation we use to write entry->next.
> >>
> >> So the WRITE_ONCE() is not needed at all.
> >>
> >>
> >>
> >> > + WRITE_ONCE(entry->next, head);
> >>
> >>
> >> The rcu_assign_pointer() immediately following is enough in this case.
> >>
> >> We have hundreds of similar cases in the kernel.
> >>
> >>
> >
> > The changelog and comment are totally confusing.
> >
> > Please remove the bogus parts in them, and/or rephrase.
> >
> > The important part here is that we rehash an item, so we need to make
> > sure to maintain a consistent ->next field, and need to prevent the
> > compiler from using ->next as a temporary variable (see the sketch at
> > the end of this message).
> >
> > ptr->next = 1UL | ((base + offset) << 1);
> >
> > is dangerous because the compiler could issue:
> >
> > ptr->next = (base + offset);
> >
> > ptr->next <<= 1;
> >
> > ptr->next += 1UL;
> >
> > Frankly, all this looks like an oversight in this code.
> >
> > Not sure why the NULLS value is even recomputed.
>
> I have not looked in detail yet, but the NULLS recomputation uses
> new_hash, which obviously wasn't available when the value was
> previously computed. Don't know yet whether it is important or not.


Well, head already contains the right value, set in bucket_table_alloc():

	for (i = 0; i < nbuckets; i++)
		INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);

Think of this nulls value as a special NULL pointer.

If the hash table is properly allocated/initialized, all the chains
correctly end with such a proper NULL pointer.
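
For readers who have not seen the scheme, here is a rough sketch of how
such a nulls marker can be encoded (simplified, with assumed helper
shapes; the in-tree helpers operate on struct hlist_nulls_node pointers,
see Documentation/RCU/rculist_nulls.txt). Bit 0 set marks the value as
an end-of-chain marker rather than a real pointer, and the remaining
bits identify the chain, which is what lets an RCU reader notice that it
has been moved onto a different chain during a rehash and restart:

/*
 * Rough sketch of a nulls end-of-chain marker (simplified; mirrors the
 * list_nulls helpers, which take struct hlist_nulls_node pointers).
 */
#define NULLS_MARKER(value)	(1UL | (((long)(value)) << 1))

/* Bit 0 set => end-of-chain marker, not a real pointer. */
static inline int is_a_nulls(const void *ptr)
{
	return (unsigned long)ptr & 1;
}

/* Recover the chain identifier encoded in the marker. */
static inline unsigned long get_nulls_value(const void *ptr)
{
	return (unsigned long)ptr >> 1;
}

A lookup that walks a chain and reaches a marker can compare
get_nulls_value() with the bucket it started from and retry if they
differ.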
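
Back to the store ordering debated above: a minimal sketch of the
intended pattern, assuming the relevant bucket locks are already held
and using a made-up function name (this is not the exact
rhashtable_rehash_one() code). The value destined for ->next is read
into a local first and written with one single store, and only then is
the entry published into the new bucket with rcu_assign_pointer(),
whose barrier orders the ->next initialization before the entry becomes
reachable through the new chain:

#include <linux/rhashtable.h>

/* Hypothetical helper, simplified from rhashtable_rehash_one(). */
static void rehash_publish_one(struct bucket_table *new_tbl,
			       struct rhash_head *entry,
			       unsigned int new_hash)
{
	struct rhash_head *head;

	/*
	 * Current head of the new chain: either a real entry or the
	 * nulls marker installed by bucket_table_alloc(), so no
	 * recomputation of the nulls value is needed here.
	 */
	head = rht_dereference_bucket(new_tbl->buckets[new_hash],
				      new_tbl, new_hash);

	/*
	 * One store of a fully computed value. The entry can still be
	 * reached through the old chain, so ->next must never hold an
	 * intermediate value like the ones in the transformation quoted
	 * above.
	 */
	RCU_INIT_POINTER(entry->next, head);

	/*
	 * rcu_assign_pointer() orders the ->next store above before the
	 * entry becomes visible to readers of the new chain.
	 */
	rcu_assign_pointer(new_tbl->buckets[new_hash], entry);
}

Whether the ->next store also needs WRITE_ONCE() is exactly the question
in this thread; the sketch only illustrates why a single store of an
already-computed head, followed by rcu_assign_pointer(), avoids the
in-place recomputation shown to be dangerous in the quoted example.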
