Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
On 01/10/2012 07:58 AM, KOSAKI Motohiro wrote:
> (1/6/12 1:46 AM), Tao Ma wrote:
>> On 01/06/2012 02:33 PM, KOSAKI Motohiro wrote:
>>> (1/6/12 1:30 AM), Tao Ma wrote:
>>>> On 01/06/2012 02:18 PM, KOSAKI Motohiro wrote:
>>>>> 2012/1/6 Tao Ma <tm@tao.ma>:
>>>>>> Hi Kosaki,
>>>>>> On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>>>>>>>> Because your test program is too artificial. 20 sec / 100000 times =
>>>>>>>>> 200 usec. And your program repeats mlock and munlock on the exact
>>>>>>>>> same address. So, yes, if lru_add_drain_all() is removed, it becomes
>>>>>>>>> nearly a no-op, but it's a worthless comparison. No practical
>>>>>>>>> program uses mlock in such a strange way.
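[The test program itself does not appear in this thread. A minimal sketch of
the pattern being described, mlock/munlock of the same buffer in a tight loop
with the average cost per pair reported, might look like the following; the
buffer size and iteration count are arbitrary assumptions.]

	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/time.h>

	int main(void)
	{
		const size_t len = 4096;	/* one page */
		const int iterations = 100000;
		struct timeval start, end;
		double usec;
		int i;

		/* anonymous mapping that will be locked and unlocked repeatedly */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		buf[0] = 1;	/* fault the page in before timing */

		gettimeofday(&start, NULL);
		for (i = 0; i < iterations; i++) {
			if (mlock(buf, len) || munlock(buf, len)) {
				perror("mlock/munlock");
				return 1;
			}
		}
		gettimeofday(&end, NULL);

		usec = (end.tv_sec - start.tv_sec) * 1e6 +
		       (end.tv_usec - start.tv_usec);
		printf("%.2f usec per mlock+munlock pair\n", usec / iterations);
		return 0;
	}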
>>>>>>>> Yes, I should say it is artificial. But mlock did cause the problem
>>>>>>>> in our production system, and perf shows that mlock uses far more
>>>>>>>> system time than anything else. That's the reason we created this
>>>>>>>> program to test whether mlock really sucks. And we compared the
>>>>>>>> result with RHEL5 (2.6.18), which runs much, much faster.
>>>>>>>>
>>>>>>>> And from the commit log you described, we can remove
>>>>>>>> lru_add_drain_all() safely here, so why add it? At least removing it
>>>>>>>> makes mlock much faster compared to the vanilla kernel.
>>>>>>>
>>>>>>> If we remove it, we lose a way to test mlock: the "Mlocked" field of
>>>>>>> /proc/meminfo can easily show an inaccurate number. So, if the
>>>>>>> 200 usec is not avoidable, I'll ack you. But I'm not convinced yet.
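[For context, not from the original mail: the check being described amounts
to watching the "Mlocked:" line of /proc/meminfo while a region is locked.
A trivial sketch, assuming that field name:]

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[128];
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* print only the Mlocked accounting line */
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "Mlocked:", 8))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}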
>>>>>> Did you find anything new on this?
>>>>>
>>>>> No.
>>>>>
>>>>> Or more exactly, 200 usec was my calculation mistake. Your program calls
>>>>> mlock 3 times per iteration, so the correct cost is 66 usec.
>>>> Yes, so mlock can only do about 15000 calls/s; a single call takes longer
>>>> than a whole I/O on some not-very-fast SSDs, and I don't think that is
>>>> tolerable. I guess we should remove it, right? Or do you have any other
>>>> suggestion that I can try?
>>>
>>> Read the whole thread.
>> I have read the whole thread, and you just described the test case as
>> artificial; there is no suggestion or patch for how to resolve it. As I
>> have said, it is very time-consuming, the penalty grows with the number of
>> CPU cores, and an I/O on an SSD can be faster than it. So do you think
>> 66 usec is OK for a memory operation?
>
> I don't think you've read the thread at all. Please read akpm's comment.
>
> http://www.spinics.net/lists/linux-mm/msg28290.html
Oh, your patch set wasn't cc'ed to me, so my mail filter moved it to
another directory.
Sorry, and I will read the whole thread. Thanks again for your time.

Thanks
Tao

