From: "Manfred Spraul" <>
Subject: Re: Hashing and directories
Date: Wed, 7 Mar 2001 16:56:36 +0100
Jamie wrote:
> Linus Torvalds wrote:
> > The long-term solution for this is to create the new VM space for the
> > new process early, and add it to the list of mm_struct's that the
> > swapper knows about, and then just get rid of the pages[MAX_ARG_PAGES]
> > array completely and instead just populate the new VM directly. That
> > way the destination is swappable etc, and you can also remove the
> > "put_dirty_page()" loop later on, as the pages will already be in their
> > right places.
> >
> > It's definitely not a one-liner, but if somebody really feels strongly
> > about this, then I can tell already that the above is the only way to do
> > it sanely.
> Yup. We discussed this years ago, and nobody thought it important
> enough. mm->mmlist didn't exist then, and creating it _just_ for
> this feature seemed too intrusive. I agree it's the only sane way to
> completely remove the limit.
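For anyone following along without the source handy, here is a minimal
userspace sketch of the staging scheme Linus describes, purely
illustrative and not actual kernel code; only MAX_ARG_PAGES and
put_dirty_page() come from the thread, everything else is a hypothetical
stand-in:

	/*
	 * Model of the pages[MAX_ARG_PAGES] staging: argument strings
	 * are first copied into a bounded set of page-sized buffers,
	 * then installed into the new image. In the real kernel these
	 * staging pages are unswappable, which is part of the problem.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define PAGE_SIZE     4096
	#define MAX_ARG_PAGES 32	/* hard cap on argv+envp size */

	int main(int argc, char *argv[])
	{
		char *pages[MAX_ARG_PAGES] = { 0 };	/* staging area */
		size_t pos = 0;

		/* Stage each argument string into the page array. */
		for (int i = 1; i < argc; i++) {
			size_t len = strlen(argv[i]) + 1;
			if (pos + len > (size_t)MAX_ARG_PAGES * PAGE_SIZE) {
				fprintf(stderr, "argument list too long\n");	/* E2BIG analogue */
				return 1;
			}
			for (size_t off = 0; off < len; off++, pos++) {
				size_t pg = pos / PAGE_SIZE;
				if (!pages[pg] && !(pages[pg] = malloc(PAGE_SIZE)))
					return 1;
				pages[pg][pos % PAGE_SIZE] = argv[i][off];
			}
		}

		/*
		 * In the kernel, a put_dirty_page() loop would now move
		 * each staged page into the new process's VM. Populating
		 * that VM directly instead removes both the extra copy
		 * step and the MAX_ARG_PAGES limit.
		 */
		printf("staged %zu bytes across %zu page(s)\n", pos,
		       (pos + PAGE_SIZE - 1) / PAGE_SIZE);
		for (int i = 0; i < MAX_ARG_PAGES; i++)
			free(pages[i]);
		return 0;
	}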
I'm not sure that this is the right way: it means that every exec() must call dup_mmap(), usually only to copy a few hundred bytes. But I don't see a sane alternative. I won't propose creating a temporary file in a kernel tmpfs mount ;-)
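The "few hundred bytes" figure is easy to sanity-check from userspace;
the trivial program below just totals the argv and envp strings an
exec() carries, and for a typical shell invocation this lands between a
few hundred bytes and a few KB:

	#include <stdio.h>
	#include <string.h>

	extern char **environ;	/* POSIX: environment handed to this exec() */

	int main(int argc, char *argv[])
	{
		size_t total = 0;

		for (int i = 0; i < argc; i++)
			total += strlen(argv[i]) + 1;	/* +1 for each NUL */
		for (char **e = environ; *e; e++)
			total += strlen(*e) + 1;
		printf("argv+envp payload: %zu bytes\n", total);
		return 0;
	}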
--
Manfred