Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
Hi KOSAKI,
On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>> Because your test program is too artificial. 20sec/100000 times =
>>> 200usec. And your program repeats mlock and munlock on the exact same
>>> address. So, yes, if lru_add_drain_all() is removed, it becomes nearly
>>> a no-op, but it's a worthless comparison: no practical program uses
>>> mlock in such a strange way.
>> Yes, I should say it is artificial. But mlock did cause a problem in
>> our production system, and perf shows that mlock consumes far more
>> system time than anything else. That's why we created this program: to
>> test whether mlock really is that slow. We then compared the result
>> with RHEL5 (2.6.18), which runs much faster.
>>
>> And from the commit log you described, we can safely remove
>> lru_add_drain_all() here, so why add it? At the very least, removing it
>> makes mlock much faster compared to the vanilla kernel.
>
> If we remove it, we lose a way to test mlock: the "Mlocked" field of
> /proc/meminfo can very easily show an inaccurate number. So, if the
> 200usec is unavoidable, I'll ack you. But I'm not convinced yet.
Since you don't think removing lru_add_drain_all() is appropriate, I
have created another patch set to resolve it. It adds a new per-CPU
counter that records the number of pages sitting in each CPU's
pagevecs; if the counter is 0, we skip draining that CPU entirely. A
rough sketch follows below. Does that make sense to you?
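
Roughly, the idea looks like this (just a sketch, not the actual patch:
lru_pagevec_count is an illustrative name, the counter updates would be
hooked into the pagevec add/drain paths, and the real code would need
locking around has_work; lru_add_drain_per_cpu() is the existing
per-CPU drain callback in mm/swap.c):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>
	#include <linux/percpu.h>
	#include <linux/workqueue.h>

	/*
	 * Illustrative per-CPU counter of pages currently held in
	 * pagevecs. Pagevec add paths would do this_cpu_inc(), and
	 * drain paths would subtract whatever they flushed.
	 */
	static DEFINE_PER_CPU(unsigned long, lru_pagevec_count);

	static DEFINE_PER_CPU(struct work_struct, lru_drain_work);

	void lru_add_drain_all(void)
	{
		static struct cpumask has_work;
		int cpu;

		cpumask_clear(&has_work);
		get_online_cpus();

		for_each_online_cpu(cpu) {
			struct work_struct *work =
				&per_cpu(lru_drain_work, cpu);

			/* Nothing queued on this CPU: skip it. */
			if (per_cpu(lru_pagevec_count, cpu) == 0)
				continue;

			INIT_WORK(work, lru_add_drain_per_cpu);
			schedule_work_on(cpu, work);
			cpumask_set_cpu(cpu, &has_work);
		}

		/* Wait only for CPUs we actually scheduled work on. */
		for_each_cpu(cpu, &has_work)
			flush_work(&per_cpu(lru_drain_work, cpu));

		put_online_cpus();
	}

The win is that mlock on a mostly idle system no longer pays for
scheduling and waiting on work items on every online CPU, while the
"Mlocked" accounting stays accurate whenever pages really are queued.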

Thanks
Tao

