Subject: Re: [PATCH] mm: prevent concurrent unmap_mapping_range() on the same inode
On Thu, Jan 20, 2011 at 01:30:58PM +0100, Miklos Szeredi wrote:
> From: Miklos Szeredi <mszeredi@suse.cz>
>
> Running a fuse filesystem with multiple open()'s in parallel can
> trigger a "kernel BUG at mm/truncate.c:475"
>
> The reason is that unmap_mapping_range() is not prepared for more than
> one concurrent invocation per inode. For example:
>
> thread1: going through a big range, stops in the middle of a vma and
> stores the restart address in vm_truncate_count.
>
> thread2: comes in with a small (e.g. single page) unmap request on
> the same vma, somewhere before restart_address, finds that the
> vma was already unmapped up to the restart address and happily
> returns without doing anything.
>
> Another scenario would be two big unmap requests, both having to
> restart the unmapping and each one setting vm_truncate_count to its
> own value. This could go on forever without any of them being able to
> finish.
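
To make that first interleaving concrete, here is a tiny userspace model
of it. The names, the single shared restart cursor, and the "page gets
faulted back in" step are invented for illustration and do not correspond
to the mm/memory.c code: one caller walking a big range is forced to stop
halfway and records its progress, and a second caller asking for a single
page below that point trusts the stale cursor and returns without
unmapping anything.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NPAGES 16

static bool mapped[NPAGES];     /* which "pages" are currently mapped */
static unsigned long restart;   /* shared restart cursor, the toy vm_truncate_count */

/* Unmap [start, end), trusting the restart cursor the way the buggy path does.
 * stop_at simulates being forced to drop the lock and bail out mid-walk. */
static void toy_unmap_range(unsigned long start, unsigned long end,
                            unsigned long stop_at)
{
        if (restart >= end)     /* "already unmapped up to here": skip everything */
                return;
        if (restart > start)
                start = restart;
        for (unsigned long p = start; p < end && p < stop_at; p++) {
                mapped[p] = false;
                restart = p + 1;        /* record progress for a later restart */
        }
}

int main(void)
{
        for (int p = 0; p < NPAGES; p++)
                mapped[p] = true;

        toy_unmap_range(0, NPAGES, NPAGES / 2); /* thread1: big range, stops halfway */
        mapped[3] = true;                       /* page 3 is faulted back in */
        toy_unmap_range(3, 4, NPAGES);          /* thread2: small request, skipped */

        printf("page 3 still mapped after unmap request: %d\n", mapped[3]);
        assert(!mapped[3]);     /* fires: the toy analogue of the mm/truncate.c BUG */
        return 0;
}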
>
> Truncate and hole punching already serialize with i_mutex. Other
> callers of unmap_mapping_range() do not, and it's difficult to get
> i_mutex protection for all callers. In particular ->d_revalidate(),
> which calls invalidate_inode_pages2_range() in fuse, may be called
> with or without i_mutex.
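
As I read it, the fix being proposed amounts to dedicating a lock to this
path instead of relying on i_mutex, roughly along the lines of the sketch
below. "i_unmap_mutex" stands in for whatever field the patch actually
adds (the name is invented here) and would have to be initialized along
with the rest of the inode:

static void unmap_mapping_range_serialized(struct address_space *mapping,
                                           loff_t holebegin, loff_t holelen,
                                           int even_cows)
{
        struct inode *inode = mapping->host;

        /* one invocation per inode at a time, regardless of the caller */
        mutex_lock(&inode->i_unmap_mutex);
        unmap_mapping_range(mapping, holebegin, holelen, even_cows);
        mutex_unlock(&inode->i_unmap_mutex);
}

That removes the interleaving of the restart bookkeeping, but it means
every inode in the system carries the extra mutex.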


Which I think is mostly a fuse problem. I really hate bloating the
generic inode (into which the address_space is embedded) with another
mutex for deficits in rather special-case filesystems.
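
If the serialization really belongs on the fuse side rather than in the
generic inode, one could imagine keeping the mutex in struct fuse_inode
and taking it around fuse's own invalidation path, along the lines below.
The unmap_mutex field and the wrapper are hypothetical, and whether a
fuse-local lock can cover every unmap_mapping_range() caller (truncate
and hole punching still serialize only on i_mutex) is exactly the open
question here:

static int fuse_invalidate_pages(struct inode *inode, pgoff_t start, pgoff_t end)
{
        struct fuse_inode *fi = get_fuse_inode(inode);
        int err;

        mutex_lock(&fi->unmap_mutex);           /* hypothetical per-fuse-inode lock */
        err = invalidate_inode_pages2_range(inode->i_mapping, start, end);
        mutex_unlock(&fi->unmap_mutex);
        return err;
}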


