Date: 2006-12-04
From: Russell Cattelan <cattelan@xfs.org>
Subject: Re: [PATCH] prune_icache_sb
Wendy Cheng wrote:

> Andrew Morton wrote:
>
>> On Thu, 30 Nov 2006 11:05:32 -0500
>> Wendy Cheng <wcheng@redhat.com> wrote:
>>
>>>
>>> The idea is that, instead of unconditionally dropping every buffer
>>> associated with the particular mount point (which defeats the purpose
>>> of page caching), the base kernel exports the "drop_pagecache_sb()"
>>> call that allows the page cache to be trimmed. More importantly, it is
>>> changed so that rather than purging buffers at random, it only drops
>>> the ones that appear to be unused (i_state is NULL and i_count is
>>> zero). This will encourage filesystems to proactively respond to VM
>>> memory shortage if they choose to do so.
>>>
>>
>> argh.
>>
> I read this as: "It is OK to give system admins a command (which is
> what this drop_pagecache_sb() call is all about) to drop the page
> cache. It is, however, not OK to give filesystem developers this very
> same function to trim their own page cache if the filesystems choose
> to do so"?
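
For reference, a rough sketch of the kind of per-superblock trimming
described above -- illustrative only, not the actual patch, and it
assumes the 2.6.19-era global inode_lock and invalidate_inode_pages():

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>	/* extern spinlock_t inode_lock */

/*
 * Sketch: walk the superblock's inode list and drop clean pagecache
 * pages only for inodes that look unused (no pending state, no active
 * references), instead of purging every buffer on the mount point.
 */
static void trim_unused_inodes(struct super_block *sb)
{
	struct inode *inode;

	spin_lock(&inode_lock);		/* global inode lock in this era */
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		if (inode->i_state || atomic_read(&inode->i_count))
			continue;	/* dirty, freeing, or in use: skip */
		invalidate_inode_pages(inode->i_mapping);
	}
	spin_unlock(&inode_lock);
}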
>
>> In Linux a filesystem is a dumb layer which sits between the VFS and the
>> I/O layer and provides dumb services such as reading/writing inodes,
>> reading/writing directory entries, mapping pagecache offsets to disk
>> blocks, etc. (This model is to varying degrees incorrect for every
>> post-ext2 filesystem, but that's the way it is).
>>
> The Linux kernel, particularly the VFS layer, is starting to show signs
> of inadequacy as the software components built on top of it keep
> growing. I doubt it can keep up and handle this complexity with a
> development policy like the one you just described (the filesystem is a
> dumb layer?). Aren't the DIO_xxx_LOCKING flags inside
> __blockdev_direct_IO() a perfect example of why trying to do too many
> things inside the VFS layer for so many filesystems is a bad idea? By
> the way, since we're on this subject, could we discuss the VFS rename
> call a little (or I can start another discussion thread)?
>
> Note that Linux's do_rename() starts with the usual lookup logic,
> followed by lock_rename(), then a final round of dentry lookups, and
> finally reaches the filesystem's i_op->rename call. Since lock_rename()
> only takes VFS-layer locks that are local to this particular machine,
> on a cluster filesystem there is a huge window between the final lookup
> and the filesystem's i_op->rename call in which the file could be
> deleted from another node before the filesystem can do anything about
> it. Would it be possible to add a new function pointer (lock_rename) to
> the inode_operations structure so a cluster filesystem can do proper
> locking?
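
To make the proposal concrete, something along these lines is
presumably what is being asked for. Note that lock_rename() /
unlock_rename() below are NOT existing inode_operations members; this
is a purely hypothetical sketch of the hook:

#include <linux/fs.h>

/*
 * Hypothetical hooks only -- they do not exist in mainline.  The point
 * is that the cluster-wide lock is taken before the VFS does its final
 * dentry lookups, so another node cannot delete the source in between.
 */
struct cluster_rename_ops {		/* illustrative container */
	int  (*lock_rename)(struct inode *old_dir, struct inode *new_dir);
	void (*unlock_rename)(struct inode *old_dir, struct inode *new_dir);
};

static int rename_with_cluster_lock(struct cluster_rename_ops *ops,
				    struct inode *old_dir,
				    struct inode *new_dir)
{
	int error;

	error = ops->lock_rename(old_dir, new_dir);	/* cluster-wide lock */
	if (error)
		return error;

	/*
	 * Here the VFS would take its local locks (lock_rename()), do the
	 * final dentry lookups, and call i_op->rename(), all while the
	 * cluster lock is still held.
	 */

	ops->unlock_rename(old_dir, new_dir);
	return 0;
}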

It looks like the ocfs2 guys have a similar problem:

http://ftp.kernel.org/pub/linux/kernel/people/mfasheh/ocfs2/ocfs2_git_patches/ocfs2-upstream-linus-20060924/0009-PATCH-Allow-file-systems-to-manually-d_move-inside-of-rename.txt

Does this change help fix the GFS lock-ordering problem as well?
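
As I read that patch, a filesystem opts in via an fs_flags bit so the
VFS skips its own d_move() and the filesystem moves the dentry itself
from ->rename() while still holding its cluster locks -- roughly like
the sketch below (the names are illustrative, not actual gfs code):

#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/module.h>

/*
 * FS_RENAME_DOES_D_MOVE tells the VFS not to d_move() the dentry
 * itself; the filesystem does it from ->rename() while its cluster
 * locks are still held.
 */
static struct file_system_type example_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "examplefs",
	.fs_flags	= FS_REQUIRES_DEV | FS_RENAME_DOES_D_MOVE,
	/* .get_sb and .kill_sb as usual */
};

static int example_rename(struct inode *old_dir, struct dentry *old_dentry,
			  struct inode *new_dir, struct dentry *new_dentry)
{
	/* ... take cluster locks, perform the on-disk rename ... */

	d_move(old_dentry, new_dentry);	/* done here, under the cluster lock */

	/* ... drop cluster locks ... */
	return 0;
}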


-Russell Cattelan
cattelan@xfs.org
