Subject: Re: [PATCH] mm: mlock: remove lru_add_drain_all()
On 10/19/2017 04:47 AM, Shakeel Butt wrote:
> Recently we have observed high latency in mlock() in our generic
> library and noticed that users have started using tmpfs files even
> without swap and the latency was due to expensive remote LRU cache
> draining.

With and without this patch I don't see much difference in the number
of instructions executed in the kernel for the mlock() system call on
the POWER8 platform just after reboot (though the pagevecs might not
have been filled by then). There is an improvement, but it is very small.
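
For reference, this is roughly the structure of lru_add_drain_all()
in this era's mm/swap.c (a simplified sketch; the mutex, the CPU
hotplug locking and several of the pagevec checks are omitted). Note
that drain work is only queued on CPUs whose pagevecs actually hold
pages, which would explain why a freshly booted system shows little
difference:

void lru_add_drain_all(void)
{
	static struct cpumask has_work;
	int cpu;

	cpumask_clear(&has_work);
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		/* Only CPUs with non-empty pagevecs get drain work. */
		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
		    need_activate_page_drain(cpu)) {
			INIT_WORK(work, lru_add_drain_per_cpu);
			queue_work_on(cpu, mm_percpu_wq, work);
			cpumask_set_cpu(cpu, &has_work);
		}
	}

	/* The expensive part: block until every remote CPU has drained. */
	for_each_cpu(cpu, &has_work)
		flush_work(&per_cpu(lru_add_drain_work, cpu));
}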

Could you share your latency numbers and explain how this patch
improves them?
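
If it helps, a minimal user-space timing loop like the following is
what I would compare with (a sketch; the 64MB anonymous mapping is an
arbitrary choice of mine, and mapping a file on tmpfs would be closer
to your reported case):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define LEN	(64UL << 20)	/* 64MB, arbitrary */

int main(void)
{
	struct timespec t0, t1;
	void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(buf, 1, LEN);	/* fault pages in, fill the pagevecs */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mlock(buf, LEN)) {
		perror("mlock");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mlock: %.3f ms\n",
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	return 0;
}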

>
> Is lru_add_drain_all() required by mlock()? The answer is no and the
> reason it is still in mlock() is to rapidly move mlocked pages to
> unevictable LRU. Without lru_add_drain_all() the mlocked pages which
> were on pagevec at mlock() time will be moved to evictable LRUs but
> will eventually be moved back to unevictable LRU by reclaim. So, we

Won't this affect performance during reclaim?

> can safely remove lru_add_drain_all() from mlock(). Also there is no
> need for local lru_add_drain() as it will be called deep inside
> __mm_populate() (in follow_page_pte()).
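
For completeness, the "rescue" referred to above happens in vmscan
when a page that is not evictable turns up on an evictable LRU;
schematically (not the literal shrink_page_list() code):

	/*
	 * Schematic only: a page with PG_mlocked set, or mapped by a
	 * VM_LOCKED vma, is culled back to the unevictable LRU
	 * instead of being reclaimed.
	 */
	if (unlikely(!page_evictable(page))) {
		putback_lru_page(page);	/* lands on the unevictable LRU */
		continue;
	}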

The following commit, which originally added lru_add_drain_all()
during mlock() and mlockall(), has a similar explanation.

8891d6da ("mm: remove lru_add_drain_all() from the munlock path")

"In addition, this patch add lru_add_drain_all() to sys_mlock()
and sys_mlockall(). it isn't must. but it reduce the failure
of moving to unevictable list. its failure can rescue in
vmscan later. but reducing is better."

Which sounds like we either have to handle the stray pages' movement
to the unevictable LRU during reclaim, or do it here up front to speed
up reclaim later on.
