Subject: Re: [patch 2/5] mm: fault vs invalidate/truncate race fix
Andrew Morton wrote:

>On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
>Nick Piggin <> wrote:
>>--- linux-2.6.orig/mm/filemap.c
>>+++ linux-2.6/mm/filemap.c
>>@@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
>> unsigned long size, pgoff;
>> int did_readaround = 0, majmin = VM_FAULT_MINOR;
>>+ BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
>> pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
>> size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
>> if (pgoff >= size)
>> goto outside_data_content;
>>@@ -1416,7 +1417,7 @@ retry_all:
>> * Do we have something in the page cache already?
>> */
>> retry_find:
>>- page = find_get_page(mapping, pgoff);
>>+ page = find_lock_page(mapping, pgoff);
>Here's a little problem. Locking the page in the pagefault handler takes
>our deadlock while writing from a mmapped copy of the page into the same
>page from "extremely hard to hit" to "super-easy to hit". Try running
>write-deadlock-demo.c from
>It conveniently deadlocks while holding mmap_sem, so `ps' gets stuck too.
>So this whole idea of locking the page in the fault handler is off the
>table until we fix that deadlock for real.

OK. Can it sit in -mm for now, though? Or is this deadlock less theoretical
than it sounds? At any rate, thanks for catching this.
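
For the archives, the deadlock case boils down to something like this
(reconstructed from the description above - not the actual
write-deadlock-demo.c):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        char *map;
        int fd = open("deadlock-test", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd < 0 || ftruncate(fd, 4096) < 0) {
                perror("setup");
                return 1;
        }

        /* Map the first page of the file but don't touch it, so the
         * pte is not present when write() copies from it. */
        map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Source and destination are the same pagecache page: write()
         * locks the page, then faults on 'map' while copying from it,
         * and the fault handler tries to lock that same page. */
        if (write(fd, map, 4096) != 4096)
                perror("write");

        return 0;
}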

> Coincidentally I started coding
>a fix for that a couple of weeks ago, but spent too much time with my nose
>in other people's crap to get around to writing my own crap.
>The basic idea is
>- revert the recent changes to the core write() code (the ones which
> killed writev() performance, especially on NFS overwrites).
>- clean some stuff up
>- modify the core of write() so that instead of doing copy_from_user(),
> we do inc_preempt_count();copy_from_user_inatomic(). So we never enter
> the pagefault handler while holding the lock on the pagecache page.
> If the fault happens, we run commit_write() on however much stuff we
> managed to copy and then go back and try to fault the target page back in
> again. Repeat for ten times then give up.
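
In code, the write loop would become something like this (just a sketch
of the scheme described above, not the actual patches - structure and
error handling are my guesses, and the zeroing question below is
ignored):

#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/preempt.h>
#include <asm/uaccess.h>

static long sketch_write_onto_page(struct file *file, struct page *page,
                                   const char __user *buf,
                                   unsigned offset, unsigned bytes)
{
        const struct address_space_operations *a_ops =
                                        file->f_mapping->a_ops;
        unsigned copied = 0;
        int retries = 0;

        while (bytes) {
                unsigned long left;
                void *kaddr;
                int status;

                lock_page(page);
                status = a_ops->prepare_write(file, page, offset,
                                              offset + bytes);
                if (status) {
                        unlock_page(page);
                        return status;
                }

                /* Copy with pagefaults disabled: a not-present source
                 * page makes the copy come up short instead of
                 * re-entering the fault handler under the page lock. */
                kaddr = kmap_atomic(page, KM_USER0);
                inc_preempt_count();
                left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
                dec_preempt_count();
                kunmap_atomic(kaddr, KM_USER0);

                /* Commit however much we managed, then drop the lock. */
                status = a_ops->commit_write(file, page, offset,
                                             offset + (bytes - left));
                unlock_page(page);
                if (status)
                        return status;

                copied += bytes - left;
                buf    += bytes - left;
                offset += bytes - left;
                bytes   = left;

                if (left) {
                        volatile char c;

                        /* Fault the source back in with no locks held,
                         * then retry; give up after ten attempts. */
                        if (++retries > 10)
                                break;
                        if (get_user(c, buf) ||
                            get_user(c, buf + bytes - 1))
                                break;
                }
        }
        return copied ? copied : -EFAULT;
}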

Without looking at any code, perhaps we could instead run get_user_pages
and copy the memory that way.

We'd still want to try the initial copy_from_user, because the TLB entry is
quite likely to exist, or at least the pte will, so the low-level TLB
refill can reach it - so we don't want to walk the pagetables manually if
we can help it.

At that point, if we end up doing the get_user_pages thing, do we even need
to do the intermediate commit_write()? Or just do the whole copy (the
copied data is going to be in cache on physically indexed caches anyway, so
it will be very low cost to copy again). And it should be a reasonably
unlikely path... but I'll instrument it.
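
Roughly (again just a sketch, names and structure are my guesses; the
pinning has to happen without the destination page lock held, because
get_user_pages can itself end up in filemap_nopage):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/string.h>

/* Pin the source pages, then copy into the locked destination page;
 * with the source pinned the copy cannot fault.  Assumes
 * offset + bytes <= PAGE_SIZE, so the source spans at most two
 * user pages. */
static int sketch_pin_and_copy(struct page *dst_page, unsigned offset,
                               const char __user *buf, unsigned bytes)
{
        struct page *src[2];
        unsigned long start = (unsigned long)buf & PAGE_MASK;
        unsigned long src_off = (unsigned long)buf & ~PAGE_MASK;
        int nr_pages = (src_off + bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
        unsigned copied = 0;
        int nr, i;

        down_read(&current->mm->mmap_sem);
        nr = get_user_pages(current, current->mm, start, nr_pages,
                            0 /* read */, 0 /* no force */, src, NULL);
        up_read(&current->mm->mmap_sem);
        if (nr < nr_pages)
                goto out_release;

        /* Only now take the destination page lock (prepare_write and
         * commit_write around this are elided). */
        lock_page(dst_page);
        for (i = 0; i < nr && copied < bytes; i++) {
                unsigned chunk = min_t(unsigned, bytes - copied,
                                       PAGE_SIZE - src_off);
                void *s = kmap(src[i]);
                void *d = kmap(dst_page);

                memcpy(d + offset + copied, s + src_off, chunk);

                kunmap(dst_page);
                kunmap(src[i]);
                copied += chunk;
                src_off = 0;
        }
        unlock_page(dst_page);

out_release:
        for (i = 0; i < nr; i++)
                page_cache_release(src[i]);
        return copied == bytes ? 0 : -EFAULT;
}

If the first in-atomic copy succeeds (the common case, since the source
pte is usually present), none of this runs, so it only has to be
correct, not fast.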

> It gets tricky because it means that we'll need to go back to zeroing
> out the uncopied part of the pagecache page before
> commit_write+unlock_page(). This will resurrect the recently-fixed
> problem where userspace can fleetingly see a bunch of zeroes in pagecache
> where it expected to see either the old data or the new data.
> But I don't think that problem was terribly serious, and we can improve
> the situation quite a lot by not doing that zeroing if the page is
> already up-to-date.
>Anyway, if you're feeling up to it I'll document the patches I have and hand
>them over - they're not making much progress here.

Yeah I'll have a go.

