Date: 11 Jan 2002
Subject: Re: Q: behaviour of mlockall(MCL_FUTURE) and VM_GROWSDOWN segments
Manfred Spraul wrote:
>
> If an app has a VM_GROWS{DOWN,UP} stack and calls
> mlockall(MCL_FUTURE|MCL_CURRENT), which pages should the kernel lock?
>
> * Grow the vma to the maximum size and lock it all.
> * Just lock according to the current size.
>
> What should happen if the segment is extended by more than one page
> at once (e.g. a function with 100 kB of local variables)?
>
> * Just allocate the pages needed to handle the page faults.
> * Always fill holes immediately.
>
> Right now segments are not grown during the mlockall syscall. Some
> code paths fill holes (find_extend_vma()), most don't (the page fault
> handlers).
>
> What's the right thing (tm) to do?
> I don't care which implementation is chosen, but IMHO all
> implementations should behave identically.

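For concreteness, here is a minimal user-space sketch of the situation
Manfred describes (illustrative only, not from the original mail; the
100 kB figure follows his example, and which of the newly created stack
pages end up locked is exactly the open question above):

/* lock-then-grow.c: mlockall(), then extend the stack by many pages at once */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static int grow_stack(void)
{
	/* ~100 kB of locals: the stack vma must grow by roughly 25
	 * pages (at 4 kB per page) when this frame is touched. */
	char buf[100 * 1024];

	memset(buf, 1, sizeof(buf));
	return buf[sizeof(buf) - 1];
}

int main(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");	/* typically needs root or CAP_IPC_LOCK */
		return 1;
	}

	printf("%d\n", grow_stack());
	return 0;
}
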
This was a problem we encountered when taking a libpthread-based
application from 2.4.7 to 2.4.15. It ran fine with mlockall()
under 2.4.7, but under 2.4.15 everything wedged up, presumably
because 2.4.15 faulted in and locked every pthread stack in full
at mlockall() time. We ended up not using mlockall() at all.

Strictly speaking, the 2.4.15 behaviour is correct, but it is
undesirable: it requires each thread to know a priori what its
maximum stack use will be. (I'm assuming there is a way of setting
a thread's stack size in libpthread.)
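There is indeed such an interface: POSIX pthread_attr_setstacksize().
A minimal sketch of bounding a thread's stack up front (the 64 kB
figure and the worker function are illustrative only):

/* bounded-stack.c: give a thread an explicit, small stack so that
 * faulting and locking the whole thing at mlockall() time stays cheap. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
	return arg;
}

int main(void)
{
	pthread_attr_t attr;
	pthread_t tid;
	int err;

	pthread_attr_init(&attr);
	pthread_attr_setstacksize(&attr, 64 * 1024);	/* a priori bound on stack use */

	err = pthread_create(&tid, &attr, worker, NULL);
	if (err != 0) {
		fprintf(stderr, "pthread_create: %s\n", strerror(err));
		return 1;
	}
	pthread_join(tid, NULL);
	pthread_attr_destroy(&attr);
	return 0;
}
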

So in this case, the behaviour I would prefer is MCL_FUTURE for
all vmas *except* the stack: stack pages should be locked
only when they are faulted in. Hard call.
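One user-space approximation of that preference (a sketch only, not
something the kernel provides): skip mlockall() altogether and mlock()
just the regions that must stay resident, leaving the thread stacks to
be faulted in, and paged, as usual. Names and sizes are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define CRITICAL_SIZE (1024 * 1024)	/* 1 MB of latency-critical data */

int main(void)
{
	void *buf = malloc(CRITICAL_SIZE);

	if (buf == NULL)
		return 1;

	/* Lock only this buffer; thread stacks remain unlocked. */
	if (mlock(buf, CRITICAL_SIZE) != 0) {
		perror("mlock");
		free(buf);
		return 1;
	}

	/* ... time-critical work on buf ... */

	munlock(buf, CRITICAL_SIZE);
	free(buf);
	return 0;
}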
