lkml.org 
[lkml]   [2017]   [Aug]   [16]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
    /
    From
    Date
    SubjectRe: [PATCH 1/2] sched/wait: Break up long wake list walk
    Linus Torvalds <torvalds@linux-foundation.org> writes:

    > On Tue, Aug 15, 2017 at 3:57 PM, Linus Torvalds
    > <torvalds@linux-foundation.org> wrote:
    >>
    >> Oh, and the page wait-queue really needs that key argument too, which
    >> is another thing that swait queue code got rid of in the name of
    >> simplicity.
    >
    > Actually, it gets worse.
    >
    > Because the page wait queues are hashed, it's not an all-or-nothing
    > thing even for the non-exclusive cases, and it's not a "wake up first
    > entry" for the exclusive case. Both have to be conditional on the wait
    > entry actually matching the page and bit in question.
    >
    > So no way to use swait, or any of the lockless queuing code in general
    > (so we can't do some clever private wait-list using llist.h either).
    >
    > End result: it looks like you fairly fundamentally do need to use a
    > lock over the whole list traversal (like the standard wait-queues),
    > and then add a cursor entry like Tim's patch if dropping the lock in
    > the middle.
    >
    > Anyway, looking at the old code, we *used* to limit the page wait hash
    > table to 4k entries, and we used to have one hash table per memory
    > zone.
    >
    > The per-zone thing didn't work at all for the generic bit-waitqueues,
    > because of how people used them on virtual addresses on the stack.
    >
    > But it *could* work for the page waitqueues, which are now a totally
    > separate entity, and is obviously always physically addressed (since
    > the indexing is by "struct page" pointer), and doesn't have that
    > issue.
    >
    > So I guess we could re-introduce the notion of per-zone page waitqueue
    > hash tables. It was disgusting to allocate and free though (and hooked
    > into the memory hotplug code).
    >
    > So I'd still hope that we can instead just have one larger hash table,
    > and that is sufficient for the problem.

    If increasing the hash table size fixes the problem I am wondering if
    rhash tables might be the proper solution to this problem. They start
    out small and then grow as needed.

    Eric

    \
     
     \ /
      Last update: 2017-08-17 01:24    [W:4.776 / U:0.084 seconds]
    ©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site