From: Michal Hocko <mhocko@kernel.org>
Date: Thu, 19 Oct 2017
Subject: Re: [PATCH] mm: mlock: remove lru_add_drain_all()
On Thu 19-10-17 12:19:26, Shakeel Butt wrote:
> On Thu, Oct 19, 2017 at 5:32 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Wed 18-10-17 16:17:30, Shakeel Butt wrote:
> >> Recently we have observed high latency in mlock() in our generic
> >> library. We noticed that users have started using tmpfs files even
> >> without swap, and that the latency was due to expensive remote LRU
> >> cache draining.
> >
> > some numbers would be really nice
> >
>
> On a production workload, customers complained that a single mlock()
> call took around 10 seconds on mapped tmpfs files, and the perf profile
> showed lru_add_drain_all as the culprit.

Draining can take some time, but I wouldn't expect it to be on the order
of seconds, so perf data would definitely be helpful in the changelog.
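
For reference, a minimal reproducer along these lines would be useful to
put next to the numbers (just a sketch; the tmpfs path, file size and
access pattern below are assumptions for illustration, not taken from the
report):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 1UL << 30;	/* 1 GiB, arbitrary */
	int fd = open("/dev/shm/mlock-test", O_RDWR | O_CREAT, 0600);
	struct timespec t0, t1;
	char *p;

	if (fd < 0 || ftruncate(fd, len))
		return 1;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* fault the pages in first so they sit on the LRU/per-cpu caches */
	memset(p, 0x5a, len);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mlock(p, len))	/* the call that reportedly takes seconds */
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mlock took %.3f s\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}

Running something like that under perf record -g and showing where the
cycles go would back up the lru_add_drain_all claim in the changelog.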

[...]
> > Is this really true? lru_add_drain_all will flush the previously cached
> > LRU pages. We are not flushing after the pages have been faulted in so
> > this might not do anything wrt. mlocked pages, right?
> >
>
> Sorry for the confusion. I wanted to say that if the pages being
> mlocked are on the caches of remote cpus, then lru_add_drain_all will
> move them to their corresponding LRUs, and then the rest of mlock's
> work will move them again from their evictable LRUs to the unevictable
> LRU.

Yes, but the point is that we are draining pages which might not be
directly related to the pages which _will_ be mlocked by the syscall. In
fact, those will stay on the cache. This is the primary reason why this
draining doesn't make much sense.
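
To illustrate the ordering, the relevant part of mm/mlock.c looks roughly
like this (a condensed paraphrase, not verbatim source):

static __must_check int do_mlock(unsigned long start, size_t len,
				 vm_flags_t flags)
{
	...
	/* drains the per-cpu LRU caches of _all_ cpus */
	lru_add_drain_all();	/* flush pagevec */
	...
	/* marks the vmas VM_LOCKED under mmap_sem */
	error = apply_vma_lock_flags(start, len, flags);
	...
	/*
	 * Only here are the pages faulted in and mlocked, so pages
	 * faulted by this very syscall land on the per-cpu caches
	 * _after_ the drain above has already run.
	 */
	error = __mm_populate(start, len, 0);
	...
}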

Or am I still misunderstanding what you are saying here?
--
Michal Hocko
SUSE Labs
