Subject: Re: [PATCHv2 1/1] mm: fix unproperly folio_put by changing API in read_pages
From: David Hildenbrand <david@redhat.com>
On 03.04.24 13:08, Zhaoyang Huang wrote:
> On Wed, Apr 3, 2024 at 4:01 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 03.04.24 07:50, Zhaoyang Huang wrote:
>>> On Tue, Apr 2, 2024 at 8:58 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 01.04.24 10:17, zhaoyang.huang wrote:
>>>>> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>>>>>
>>>>> A VM_BUG_ON in step 9 of [1] can happen because the refcnt is dropped
>>>>> improperly during read_pages()->readahead_folio->folio_put.
>>>>> This was introduced by commit 9fd472af84ab ("mm: improve cleanup when
>>>>> ->readpages doesn't process all pages").
>>>>>
>>>>> Key steps of [1] in brief:
>>>>> 2'. Thread_truncate gets the folio into its local fbatch via
>>>>> find_get_entries in step 2
>>>>> 7'. The last remaining refcnt is not, as expected, the one from
>>>>> alloc_pages, but the one from thread_truncate's local fbatch in step 7
>>>>> 8'. Thread_reclaim succeeds in isolating the folio because of the wrong
>>>>> refcnt (wrong in meaning, not in value) in step 8
>>>>> 9'. Thread_truncate hits the VM_BUG_ON in step 9
>>>>>
>>>>> [1]
>>>>> Thread_readahead:
>>>>> 0. folio = filemap_alloc_folio(gfp_mask, 0);
>>>>> (refcount 1: alloc_pages)
>>>>> 1. ret = filemap_add_folio(mapping, folio, index + i, gfp_mask);
>>>>> (refcount 2: alloc_pages, page_cache)
>>
>> [not going into all details, just a high-level remark]
>>
>> page_cache_ra_unbounded() does a filemap_invalidate_lock_shared(), which
>> is a down_read_trylock(&mapping->invalidate_lock).
>>
>> That is, all read_pages() calls in mm/readahead.c happen under
>> mapping->invalidate_lock in read mode.
>>
>> ... and ...
>>
>>>>>
>>>>> Thread_truncate:
>>>>> 2. folio = find_get_entries(&fbatch_truncate);
>>>>> (refcount 3: alloc_pages, page_cache, fbatch_truncate)
>>
>> truncation, such as truncate_inode_pages() must be called under
>> mapping->invalidate_lock held in write mode. So naive me would have
>> thought that readahead and truncate cannot race in that way.
>>
>> [...]
>>
> Thanks for the reminder. But I can't find the spot where
> "mapping->invalidate_lock" is held when checking the callstack of
> 'kill_bdev()->truncate_inode_pages()->truncate_inode_pages_range()'.
> Or is the lock held further up the call chain?

Well, truncate_inode_pages() documents:

"Called under (and serialised by) inode->i_rwsem and
mapping->invalidate_lock."

If that's not the case, then that's either (a) a BUG or (b) an
undocumented exception to the rule, whereby other mechanisms are in
place to prevent any further pagecache magic.
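
To make the documented rule concrete, the expected caller pattern would
look roughly like the sketch below. This is a minimal illustration, not
copied from any in-tree caller; the helper name is made up, but
filemap_invalidate_lock()/filemap_invalidate_unlock() are the real
helpers that take mapping->invalidate_lock in write mode:

/*
 * Minimal sketch of the documented locking around truncation; the
 * function name is hypothetical, the locking calls are the real ones.
 */
static void truncate_whole_mapping_locked(struct address_space *mapping)
{
	filemap_invalidate_lock(mapping);	/* write mode */
	truncate_inode_pages(mapping, 0);	/* drop all pagecache for this mapping */
	filemap_invalidate_unlock(mapping);
}

If truncation really is called like that, readahead's read-mode
acquisition of the same lock could not run concurrently, which is what
I was getting at above.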

I mean, kill_bdev() documents " Kill _all_ buffers and pagecache , dirty
or not..", so *something* has to guarantee that nothing can concurrently
fill the pagecache again; otherwise kill_bdev() could not possibly do
anything reasonable.

For example, blkdev_flush_mapping() is called when bd_openers goes to 0,
and my best guess is that nobody should be able to make use of that
device at that point.

Similarly, changing the blocksize sounds like something that wouldn't be
done at arbitrary points in time ...

So kill_bdev() already has a "special" smell to it, and I suspect (b)
applies, where concurrent pagecache action is not really a concern.

But I'm not an expert and I looked at most of that code right now for
the first time ...

>>
>>>>
>>>> Something that would help here is an actual reproducer that triggers this
>>>> issue.
>>>>
>>>> To me, it's unclear at this point if we are talking about an actual
>>>> issue or a theoretical issue?
>>> Thanks for the feedback. The above callstack is a theoretical issue so
>>> far, which arose from an ongoing analysis of a practical livelock issue
>>> triggered by folio_try_get_rcu and related to an abnormal folio
>>> refcnt state. So do you think this callstack makes sense?
>>
>> I'm not an expert on that code, and only spent 5 min looking into the
>> code. So my reasoning about invalidate_lock above might be completely wrong.
>>
>> It would be a very rare race that has not been reported in practice so far.
>> And it certainly wouldn't be the easiest one to explain, because the
>> call chain above is a bit elaborate and does not explain which locks are
>> involved and how they fail to protect us from any such race.
>>
>> For this case in particular, I think we really need a real reproducer to
>> convince people that the actual issue does exist and the fix actually
>> resolves the issue.
> Sorry, it is still only theoretical, according to my understanding.

Okay, if you find a reproducer, please share it and we can investigate
if it's a locking problem or something else. As of now, I'm not
convinced that there is an actual issue that needs fixing.

--
Cheers,

David / dhildenb

