Date: Wed, 1 Mar 2017
From: Michal Hocko
Subject: Re: [PATCH V5 6/6] proc: show MADV_FREE pages info in smaps
On Wed 01-03-17 13:31:49, Johannes Weiner wrote:
> On Wed, Mar 01, 2017 at 02:36:24PM +0100, Michal Hocko wrote:
> > On Fri 24-02-17 13:31:49, Shaohua Li wrote:
> > > Show MADV_FREE pages info for each vma in smaps. The interface is
> > > for diagnosis or monitoring purposes; userspace can use it to
> > > understand what happens in the application. Since userspace can
> > > dirty MADV_FREE pages without the kernel noticing, this interface
> > > is the only place we can get accurate accounting info about
> > > MADV_FREE pages.
> >
> > I have just gotten around to testing this patchset and noticed
> > something a bit surprising:
> >
> > madvise(mmap(len), len, MADV_FREE)
> > Size: 102400 kB
> > Rss: 102400 kB
> > Pss: 102400 kB
> > Shared_Clean: 0 kB
> > Shared_Dirty: 0 kB
> > Private_Clean: 102400 kB
> > Private_Dirty: 0 kB
> > Referenced: 0 kB
> > Anonymous: 102400 kB
> > LazyFree: 102368 kB
> >
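For reference, the test above is essentially the following (a rough
sketch: 4kB pages assumed, MADV_FREE needs to be defined by your
headers, error handling omitted):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 100 << 20;		/* 100M, matching the numbers above */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char cmd[128];

	memset(buf, 1, len);		/* fault everything in */
	madvise(buf, len, MADV_FREE);	/* mark it lazily freeable */

	/* dump the smaps entry for this vma, including LazyFree: */
	snprintf(cmd, sizeof(cmd),
		 "grep -A 20 ^%lx- /proc/self/smaps", (unsigned long)buf);
	system(cmd);
	return 0;
}
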
> > It took me some time to realize that LazyFree is not accurate because
> > there are still pages on the per-cpu lru_lazyfree_pvecs. I believe this
> > is an implementation detail which shouldn't be visible to the userspace.
> > Should we simply drain the pagevec? A crude way would be to simply
> > lru_add_drain_all after we are done with the given range. We can also
> > make this lru_lazyfree_pvecs specific but I am not sure this is worth
> > the additional code.
> > ---
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index dc5927c812d3..d2c318db16c9 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -474,7 +474,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
> > madvise_free_page_range(&tlb, vma, start, end);
> > mmu_notifier_invalidate_range_end(mm, start, end);
> > tlb_finish_mmu(&tlb, start, end);
> > -
> > + lru_add_drain_all();
>
> A full drain on all CPUs is very expensive and IMO not justified for
> some per-cpu fuzz factor in the stats. I'd take hampering the stats
> over hampering the syscall any day; only a subset of MADV_FREE users
> will look at the stats.
>
> And while the aggregate error can be large on machines with many CPUs
> (notably the machines on which you absolutely don't want to send IPIs
> to all cores each time a thread madvises some pages!),

I am not sure I understand. Where would we trigger IPIs?
lru_add_drain_all relies on workqueues.
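
For reference, this is roughly the shape of lru_add_drain_all()
(a simplified sketch, not the exact mm/swap.c code, so details may
differ):

void lru_add_drain_all(void)
{
	static DEFINE_MUTEX(lock);
	static struct cpumask has_work;
	int cpu;

	mutex_lock(&lock);
	get_online_cpus();
	cpumask_clear(&has_work);

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		/* only bother CPUs that actually have something queued */
		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
		    pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
		    need_activate_page_drain(cpu)) {
			INIT_WORK(work, lru_add_drain_per_cpu);
			schedule_work_on(cpu, work);
			cpumask_set_cpu(cpu, &has_work);
		}
	}

	for_each_cpu(cpu, &has_work)
		flush_work(&per_cpu(lru_add_drain_work, cpu));

	put_online_cpus();
	mutex_unlock(&lock);
}

The per-cpu work items go through the normal workqueue machinery
rather than a broadcast IPI.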

> the pages of a
> single process are not likely to be spread out across more than a few
> CPUs.

Then we can simply flush only lru_lazyfree_pvecs, which should reduce
the unrelated noise from the other pagevecs.
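
Something along these lines is what I have in mind; a completely
untested sketch, and the lru_lazyfree_drain_* names are made up for
illustration:

/*
 * Hypothetical, untested sketch (names made up): drain only the
 * lazyfree pagevec of the CPU this work item runs on.  A matching
 * lru_lazyfree_drain_all() would reuse the lru_add_drain_all()
 * skeleton, but schedule work only on CPUs where
 * pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) != 0.
 */
static void lru_lazyfree_drain_per_cpu(struct work_struct *dummy)
{
	struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);

	if (pagevec_count(pvec))
		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
	put_cpu_var(lru_lazyfree_pvecs);
}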

> The error when reading a specific smaps should be completely ok.
>
> In numbers: even if your process is madvising from 16 different CPUs,
> the error in its smaps file will peak at 896K in the worst case. That
> level of concurrency tends to come with much bigger memory quantities
> for that amount of error to matter.
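(Spelling that out: 16 CPUs * 14 pages per pagevec * 4kB per page =
896kB, assuming the usual PAGEVEC_SIZE of 14.)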

It is still an unexpected behavior IMHO and an implementation detail
which leaks to userspace.

> IMO this is a non-issue.

That said, I will not insist if there is a general consensus on this
and the behavior is documented.

--
Michal Hocko
SUSE Labs
