    Subject: [PATCH 4.8 40/57] vfs,mm: fix a dead loop in truncate_inode_pages_range()
    Date: 2016-10-21

    4.8-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Wei Fang <fangwei1@huawei.com>

    commit c2a9737f45e27d8263ff9643f994bda9bac0b944 upstream.

    We triggered a dead loop in truncate_inode_pages_range() on a 32-bit
    architecture with the test case below:

    ...
    fd = open();
    write(fd, buf, 4096);
    preadv64(fd, &iovec, 1, 0xffffffff000);
    ftruncate(fd, 0);
    ...

    Then ftruncate() never returns.

    The filesystem used in this case is ubifs, but the bug can be triggered
    on many other filesystems.
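
    For reference, a self-contained user-space version of the reproducer
    might look like the sketch below; the file name, open flags and buffer
    contents are illustrative, not taken from the original report:

    #define _GNU_SOURCE	/* for preadv64() */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
    	char buf[4096];
    	struct iovec iovec = { .iov_base = buf, .iov_len = sizeof(buf) };
    	int fd = open("testfile", O_RDWR | O_CREAT, 0600);

    	if (fd < 0)
    		return 1;
    	memset(buf, 0xaa, sizeof(buf));
    	write(fd, buf, sizeof(buf));
    	/* offset 0xffffffff000 >> PAGE_SHIFT (12) = page index 0xffffffff */
    	preadv64(fd, &iovec, 1, 0xffffffff000);
    	ftruncate(fd, 0);	/* hangs here on an affected 32-bit kernel */
    	return 0;
    }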

    When preadv64() is called with offset=0xffffffff000, a page with
    index=0xffffffff will be added to the radix tree of ->mapping. Then
    this page can be found in ->mapping with pagevec_lookup(). After that,
    truncate_inode_pages_range(), which is called in ftruncate(), will fall
    into an infinite loop:

    - find a page with index=0xffffffff; since index >= end, this page won't
    be truncated

    - index++, and index becomes 0

    - the page with index=0xffffffff is found again

    The data type of index is unsigned long, so on a 64-bit architecture
    index won't wrap around to 0 in this case, and the dead loop can't
    happen.
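
    The wraparound itself can be demonstrated in isolation with a
    user-space sketch (assuming a 32-bit unsigned long, matching pgoff_t
    on the affected kernels; illustrative code, not the actual kernel loop):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	uint32_t index = 0xffffffff;	/* page index left behind by preadv64() */
    	uint32_t end = 0;		/* ftruncate(fd, 0): truncate from index 0 */

    	if (index >= end)		/* page beyond 'end' is not truncated... */
    		printf("page 0x%08x skipped\n", index);
    	index++;			/* ...and the increment wraps to 0 */
    	printf("next lookup starts at 0x%08x and finds the same page again\n",
    	       index);
    	return 0;
    }

    With a 64-bit unsigned long the increment yields 0x100000000 instead,
    so the scan moves past the page and terminates.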

    Since truncate_inode_pages_range() runs while holding inode->i_rwsem,
    any operation that needs this lock will block, and a hung task results,
    e.g.:

    INFO: task truncate_test:3364 blocked for more than 120 seconds.
    ...
    call_rwsem_down_write_failed+0x17/0x30
    generic_file_write_iter+0x32/0x1c0
    ubifs_write_iter+0xcc/0x170
    __vfs_write+0xc4/0x120
    vfs_write+0xb2/0x1b0
    SyS_write+0x46/0xa0

    The page with index=0xffffffff added to ->mapping is useless. Fix this
    by checking the read position against the filesystem's maximum file
    size (s_maxbytes) before allocating pages.

    Link: http://lkml.kernel.org/r/1475151010-40166-1-git-send-email-fangwei1@huawei.com
    Signed-off-by: Wei Fang <fangwei1@huawei.com>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: Dave Chinner <david@fromorbit.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    mm/filemap.c | 4 ++++
    1 file changed, 4 insertions(+)

    --- a/mm/filemap.c
    +++ b/mm/filemap.c
    @@ -1687,6 +1687,10 @@ static ssize_t do_generic_file_read(stru
     	unsigned int prev_offset;
     	int error = 0;
     
    +	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
    +		return -EINVAL;
    +	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
    +
     	index = *ppos >> PAGE_SHIFT;
     	prev_index = ra->prev_pos >> PAGE_SHIFT;
     	prev_offset = ra->prev_pos & (PAGE_SIZE-1);
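
    To make the arithmetic behind the new check concrete, here is a small
    user-space sketch (PAGE_SHIFT of 12 is assumed, and s_maxbytes is an
    illustrative value, not taken from any particular filesystem):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
    	uint64_t ppos = 0xffffffff000ULL;	/* read offset from the reproducer */
    	uint64_t s_maxbytes = 0x7ffffffffffULL;	/* illustrative 32-bit fs limit */

    	/* do_generic_file_read() derives the page index from *ppos: */
    	printf("page index = 0x%llx\n",
    	       (unsigned long long)(ppos >> PAGE_SHIFT));	/* 0xffffffff */

    	/* With the fix, a read starting at or past s_maxbytes is rejected
    	 * before any page is allocated, so the index above never enters
    	 * ->mapping; shorter reads that would cross the limit are clamped
    	 * by iov_iter_truncate(). */
    	if (ppos >= s_maxbytes)
    		printf("read rejected with -EINVAL\n");
    	return 0;
    }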
