Subject: Re: Found the commit that causes the OOMs
From: Minchan Kim <minchan.kim@gmail.com>
Date: Thu, 2 Jul 2009

On Thu, Jul 2, 2009 at 9:43 PM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> On Thu, Jul 02, 2009 at 03:41:06PM +0800, Minchan Kim wrote:
>>
>>
>> On Tue, 30 Jun 2009 20:57:47 +0100
>> David Howells <dhowells@redhat.com> wrote:
>>
>> > Minchan Kim <minchan.kim@gmail.com> wrote:
>> >
>> > > David, does the OOM still happen if you revert my patch?
>> >
>> > It does happen, and indeed happens in v2.6.30, but requires two adjacent runs
>> > of msgctl11 to trigger, rather than usually triggering on the first run.  If
>> > you interpolate the rest of LTP between the iterations, it doesn't seem to
>> > happen at all on v2.6.30.  My guess is that with the rest of LTP interpolated,
>> > there's either enough time for some cleanup or something triggers a cleanup
>> > (the swapfile tests perhaps?).
>> >
>> > > Before I left for my trip, I made a debugging patch in a hurry.  Mel and I
>> > > suspect that a wrong page is being put on the LRU list.
>> > >
>> > > The goal of this patch is to print the details of pages on the active anon
>> > > LRU when the OOM happens.  Maybe you could expand your log buffer size.
>> >
>> > Do you mean to expand the dmesg buffer?  That's probably unnecessary: I capture
>> > the kernel log over a serial port into a file on another machine.
>> >
>> > > Could you show me the information from the OOM, please?
>> >
>> > Attached.  It's compressed as there was rather a lot.
>> >
>> > David
>> > ---
>>
>> Hi, David.
>>
>> Sorry for the late response.
>>
>> I looked over your captured data when I got home, but I didn't find any problem
>> in the LRU page-moving scheme.
>> As Wu, KOSAKI and Rik discussed, I think this issue is also related to a process fork bomb.
>
> Yes, I think so.
>
>> When I tested msgctl11 on my machine with 2.6.31-rc1, I found the following:
>
> Were you testing the no-swap case?

Yes.

>> 2.6.31-rc1
>> real  0m38.628s
>> user  0m10.589s
>> sys   1m12.613s
>>
>> vmstat
>>
>> allocstall 3196
>>
>> 2.6.31-rc1-revert-mypatch
>>
>> real  1m17.396s
>> user  0m11.193s
>> sys   4m3.803s
>
> It's interesting that (sys > real).

My test environment is a quad core, so sys time is summed across all four
CPUs and can exceed the wall-clock time. :)

>> vmstat
>>
>> allocstall 584
>>
>> Sometimes I got an OOM with 2.6.31-rc1, sometimes not.
>>
>> Anyway, the test took a much shorter time on the current kernel than with my patch reverted.
>> In addition, the current kernel shows a smaller allocstall (direct reclaim) count.
>>
>> As you know, my patch just removed the call to shrink_active_list() in the no-swap case.
>> shrink_active_list() is an expensive function.
>> The old call could throttle the forking processes by chance.
>> But with that call removed by my patch, there is a high
>> probability of creating a process fork bomb. Wu, KOSAKI and Rik, does that
>> make sense?
>
> Maybe, but I'm not sure how to explain the time/vmstat numbers :(
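
To recap what my patch changed, here is a minimal sketch (illustrative
userspace C, not the actual mm/vmscan.c code; some names mirror the
kernel's, but every definition below is a stand-in):

#include <stdio.h>

/* stand-in for the kernel's per-zone LRU bookkeeping */
struct zone {
	unsigned long nr_active_anon;
	unsigned long nr_inactive_anon;
};

static unsigned long total_swap_pages;	/* 0 models the no-swap case */

static int inactive_anon_is_low(struct zone *z)
{
	/* aim for roughly a 1:1 active:inactive anon ratio */
	return z->nr_inactive_anon < z->nr_active_anon;
}

static void shrink_active_list(unsigned long nr, struct zone *z)
{
	/* stands in for the expensive aging pass that moves pages
	 * from the active anon list to the inactive anon list */
	printf("aging %lu anon pages\n", nr);
}

static void age_active_anon(struct zone *z, unsigned long nr_to_scan)
{
	if (!total_swap_pages)
		return;		/* my patch: no swap, skip the aging entirely */
	if (inactive_anon_is_low(z))
		shrink_active_list(nr_to_scan, z);	/* old throttling point */
}

int main(void)
{
	struct zone z = { 1000, 100 };

	/* with no swap configured, this returns at once, so the forking
	 * processes never stall inside shrink_active_list() */
	age_active_anon(&z, 32);
	return 0;
}

The early return is the whole change: the work that used to slow the
forkers down by accident is simply skipped.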

I think we can verify this as follows.
For example, every time msgctl11 forks another 1000 processes,
we look at vmstat and check the elapsed time (sketched below).

I think the current kernel may take a very short time but show many allocstalls,
while the reverted one may take a rather long time but show only a small
allocstall increase after some time (maybe when inactive_anon is low).
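
A rough sketch of the sampling (illustrative userspace C; read_allocstall()
is just a helper made up for this mail, and the fork batch itself is elided):

#include <stdio.h>
#include <string.h>
#include <time.h>

/* pull the allocstall counter out of /proc/vmstat */
static unsigned long read_allocstall(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long val;

	if (!f)
		return 0;
	while (fscanf(f, "%63s %lu", name, &val) == 2) {
		if (strcmp(name, "allocstall") == 0) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return 0;
}

int main(void)
{
	unsigned long stalls = read_allocstall();
	time_t start = time(NULL);

	/* ... run the next batch of 1000 msgctl11 forks here ... */

	printf("elapsed: %lds  allocstall delta: %lu\n",
	       (long)(time(NULL) - start),
	       read_allocstall() - stalls);
	return 0;
}

Comparing the per-batch deltas between the two kernels should show whether
the extra allocstalls really line up with the shorter runtime.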

In addition, we can measure the elapsed time in shrink_active_list() when
inactive_anon is low.

>
>> So I think you were just lucky thanks to an unnecessary routine.
>> Anyway, AFAIK, Rik is working on page reclaim throttling.
>> I think it can solve your problem.
>
> Yes, with good luck :)
>
> Thanks,
> Fengguang
>



--
Kind regards,
Minchan Kim