    Subject: Re: [PATCH v13 00/18] per memcg lru lock

    On Fri, 19 Jun 2020 16:33:38 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:

    > This is a new version, based on linux-next, which merges many
    > suggestions from Hugh Dickins, from a compaction fix to fewer
    > TestClearPageLRU calls, reordered comments, etc. Thanks a lot, Hugh!
    >
    > Johannes Weiner has suggested:
    > "So here is a crazy idea that may be worth exploring:
    >
    > Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
    > linked list.
    >
    > Can we make PageLRU atomic and use it to stabilize the lru_lock
    > instead, and then use the lru_lock only to serialize list operations?

    I don't understand this sentence. How can a per-page flag stabilize a
    per-pgdat spinlock? Perhaps some additional description will help.
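
    Is the intent something like the sketch below?  TestClearPageLRU()
    is mentioned in the cover letter, but the lruvec locking helpers are
    names I'm assuming, so treat this as illustrative only, not as what
    the patches actually do:

	/*
	 * Sketch: isolate a page from its LRU.  Clearing PageLRU
	 * atomically first means only one caller can "win" the page;
	 * after that the page cannot move to another memcg, so the
	 * (per-lruvec) lock need only cover the list manipulation.
	 */
	static bool lru_isolate_sketch(struct page *page)
	{
		struct lruvec *lruvec;

		if (!TestClearPageLRU(page))	/* lost the race */
			return false;

		get_page(page);
		/* assumed helper: lock the lruvec this page belongs to */
		lruvec = lock_page_lruvec_irq(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
		unlock_page_lruvec_irq(lruvec);
		return true;
	}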

    > ..."
    >
    > With the new memcg charge path and this solution, we can isolate LRU
    > pages and visit them exclusively in compaction, page migration,
    > reclaim, memcg move_account, huge page split and other scenarios,
    > while keeping each page's memcg stable. It then becomes possible to
    > change per-node lru locking to per-memcg lru locking. As for the
    > pagevec_lru_move_fn funcs, it is safe to let pages remain on the lru
    > list; the lru lock guards them for list integrity.
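
    For the pagevec case, I imagine the loop has to drop and retake the
    lock whenever successive pages belong to different lruvecs.  Roughly
    like this (relock_page_lruvec_irqsave() and
    unlock_page_lruvec_irqrestore() are names I'm guessing at, not
    confirmed against the patches):

	/*
	 * Sketch: pages stay on the LRU while move_fn() shuffles them
	 * between lists.  A pagevec may hold pages from many memcgs,
	 * so the per-memcg lock is swapped whenever the lruvec changes
	 * from one page to the next.
	 */
	static void pagevec_lru_move_fn_sketch(struct pagevec *pvec,
		void (*move_fn)(struct page *page, struct lruvec *lruvec))
	{
		struct lruvec *lruvec = NULL;
		unsigned long flags = 0;
		int i;

		for (i = 0; i < pagevec_count(pvec); i++) {
			struct page *page = pvec->pages[i];

			/* assumed: unlocks old lruvec, locks page's lruvec */
			lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
			(*move_fn)(page, lruvec);
		}
		if (lruvec)
			unlock_page_lruvec_irqrestore(lruvec, flags);
		release_pages(pvec->pages, pagevec_count(pvec));
		pagevec_reinit(pvec);
	}
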
    >
    > The patchset includes 3 parts:
    > 1, some code cleanup and minimum optimization as a preparation.
    > 2, use TestClearPageLRU as page isolation's precondition
    > 3, replace per node lru_lock with per memcg per node lru_lock
    >
    > The 3rd part moves the per-node lru_lock into the lruvec, thus giving
    > each memcg a lru_lock on each node. So on a large machine, memcgs
    > no longer have to contend for the per-node pgdat->lru_lock; each can
    > go fast with its own lru_lock.
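
    Structurally I read that as something like the following; the field
    placement matches the description above, but the locking helper is my
    own sketch, not necessarily what the patchset implements:

	/* Sketch: the lock moves from pgdat into each per-memcg lruvec. */
	struct lruvec {
		struct list_head	lists[NR_LRU_LISTS];
		/* per-lruvec lru_lock, replacing pgdat->lru_lock */
		spinlock_t		lru_lock;
		/* other fields omitted */
	};

	/* assumed helper: look up and lock the lruvec a page belongs to */
	static struct lruvec *lock_page_lruvec_irq(struct page *page)
	{
		struct lruvec *lruvec;

		/* page's memcg must be stable here, e.g. PageLRU cleared */
		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
		spin_lock_irq(&lruvec->lru_lock);
		return lruvec;
	}
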
    >
    > Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in
    > 104 containers on a 2-socket * 26-core * HT box with a modified case:
    > https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
    >
    > With this patchset, the readtwice performance increased by about 80%
    > in concurrent containers.
    >
    > Thanks to Hugh Dickins and Konstantin Khlebnikov; they both raised
    > this idea 8 years ago. Thanks also to those who gave comments:
    > Daniel Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.
    >
    > Thanks for the testing support from Intel 0day and from Rong Chen,
    > Fengguang Wu, and Yun Wang. Hugh Dickins also shared his kbuild-swap
    > case. Thanks!
    >
    > ...
    >
    > 24 files changed, 500 insertions(+), 357 deletions(-)

    It's a large patchset and afaict the whole point is performance gain.
    80% in one specialized test sounds nice, but is there a plan for more
    extensive quantification?

    There isn't much sign of completed review activity here, so I'll go
    into hiding for a while.
