From: Andrea Arcangeli
Date: Fri, 25 Jun 1999
Subject: Re: [patch] truncate inode page sleeps
On Fri, 25 Jun 1999, Ingo Molnar wrote:

>this should not happen, because when we do the lock_page() we have done a
>get_page() - so shrink_mmap() cannot possibly do a remove_inode_page().
>
>if it does then it's a shrink_mmap() bug. Can you see this happen in the
>original kernel tree as well?

No, sorry, you are right (I was thinking too much about my tree): the
pre-2.3.9-3 shrink_mmap is safe because both truncate_inode_pages and
shrink_mmap run with the big kernel lock held. Correct. (It was my local
problem, where shrink_mmap runs without the big kernel lock.)

Thinking about my problem, I am not sure I really need to grab the
pagecache_lock in shrink_mmap just to stay synchronized with the page
count of the page and avoid racing with truncate_inode_pages. My
shrink_mmap bases all of its locking on the per-page PG_locked bitflag
and on a separate spinlock that I use only to synchronize the pagemap-lru
list management. So it looks to me that checking page->inode in
truncate_inode_pages() is the right thing to do in my case. Do you think
there are many other places that may get confused by my logic, as
truncate_inode_pages was (so I could fix them too)? page->inode can't
change while the page is locked down, so if I check page->inode after I
have the page locked, then in shrink_mmap I can avoid checking the
page_count of the page with the pagemap_lock held (and so I can really
SMP-thread shrink_mmap with respect to everything else, as I am doing now).
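To make the ordering concrete, here is a minimal user-space sketch (not
kernel code) of the pattern described above: the reclaim side clears
page->inode only while holding the per-page lock, so the truncate side can
take the page lock and recheck page->inode afterwards instead of
serializing on a global pagecache lock. All the struct and function names
(fake_page, reclaim_page, truncate_check) are hypothetical stand-ins, and a
pthread mutex plays the role of PG_locked.

	/*
	 * Hypothetical user-space sketch of the locking idea discussed
	 * above.  A pthread mutex stands in for the per-page PG_locked
	 * bit; the names are made up for illustration only.
	 */
	#include <pthread.h>
	#include <stdio.h>

	struct fake_inode { int ino; };

	struct fake_page {
		pthread_mutex_t lock;		/* plays the role of PG_locked */
		struct fake_inode *inode;	/* plays the role of page->inode */
	};

	/* "shrink_mmap" side: detach the page from its inode, but only
	 * while holding the per-page lock, so nobody can observe a
	 * half-removed page. */
	static void reclaim_page(struct fake_page *page)
	{
		pthread_mutex_lock(&page->lock);
		page->inode = NULL;		/* the remove_inode_page() step */
		pthread_mutex_unlock(&page->lock);
	}

	/* "truncate_inode_pages" side: take the page lock first, then
	 * recheck page->inode.  If reclaim got there first, skip the page
	 * instead of racing with it; no global pagecache lock is needed
	 * for this check, because page->inode cannot change while the
	 * page lock is held. */
	static int truncate_check(struct fake_page *page, struct fake_inode *inode)
	{
		int still_ours;

		pthread_mutex_lock(&page->lock);
		still_ours = (page->inode == inode);
		pthread_mutex_unlock(&page->lock);
		return still_ours;
	}

	int main(void)
	{
		struct fake_inode inode = { .ino = 1 };
		struct fake_page page = {
			.lock = PTHREAD_MUTEX_INITIALIZER,
			.inode = &inode,
		};

		printf("before reclaim: truncate sees page? %d\n",
		       truncate_check(&page, &inode));
		reclaim_page(&page);
		printf("after reclaim:  truncate sees page? %d\n",
		       truncate_check(&page, &inode));
		return 0;
	}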

Comments?

Andrea Arcangeli

