Subject: Re: [PATCH] mm/ksm: Fix race with ksm_exit() in VMA iteration

On 07.03.23 21:59, Liam R. Howlett wrote:
> ksm_exit() may remove the mm from the ksm_scan between the unlocking of
> the ksm_mmlist and the start of the VMA iteration. This results in the
> mmap_read_lock() not being taken and a report from lockdep that the mm
> isn't locked in the maple tree code.

I'm confused. The code does

mmap_read_lock(mm);
...
for_each_vma(vmi, vma) {
mmap_read_unlock(mm);

How can we not take the mmap_read_lock()? Or am I staring at the wrong
mmap_read_lock()?

>
> Fix the race by checking if this mm has been removed before iterating
> the VMAs. __ksm_exit() uses the mmap lock to synchronize the freeing of
> an mm, so it is safe to keep iterating over the VMAs when it is going to
> be freed.
>
> This change will slow down the mm exit during the race condition, but
> will speed up the VMA iteration in the non-race scenario, which should
> be much more common.

Would leaving the existing in-loop check in place also help, just to stop
scanning faster in that case?
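
I.e., something like this (untested sketch on top of your change, keeping
the old check as well):

	mmap_read_lock(mm);
	/* Bail out early if we are already racing against ksm_exit(). */
	if (ksm_test_exit(mm))
		goto mm_exiting;

	for_each_vma(vmi, vma) {
		/* Stop scanning ASAP once the mm starts exiting. */
		if (ksm_test_exit(mm))
			break;
		if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
			continue;
		err = unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end);
		if (err)
			goto error;
	}

mm_exiting:
	remove_trailing_rmap_items(&mm_slot->rmap_list);
	mmap_read_unlock(mm);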

>
> Reported-by: Pengfei Xu <pengfei.xu@intel.com>
> Link: https://lore.kernel.org/lkml/ZAdUUhSbaa6fHS36@xpf.sh.intel.com/
> Reported-by: syzbot+2ee18845e89ae76342c5@syzkaller.appspotmail.com
> Link: https://syzkaller.appspot.com/bug?id=64a3e95957cd3deab99df7cd7b5a9475af92c93e
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: heng.su@intel.com
> Cc: lkp@intel.com
> Cc: <Stable@vger.kernel.org>
> Fixes: a5f18ba07276 ("mm/ksm: use vma iterators instead of vma linked list")
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
> mm/ksm.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 525c3306e78b..723ddbe6ea97 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1044,9 +1044,10 @@ static int unmerge_and_remove_all_rmap_items(void)
>
> mm = mm_slot->slot.mm;
> mmap_read_lock(mm);

Better add a comment:

/*
* Don't iterate any VMAs if we might be racing against ksm_exit(),
* just exit early.
*/

> + if (ksm_test_exit(mm))
> + goto mm_exiting;
> +
> for_each_vma(vmi, vma) {
> - if (ksm_test_exit(mm))
> - break;
> if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
> continue;
> err = unmerge_ksm_pages(vma,
> @@ -1055,6 +1056,7 @@ static int unmerge_and_remove_all_rmap_items(void)
> goto error;
> }
>
> +mm_exiting:
> remove_trailing_rmap_items(&mm_slot->rmap_list);
> mmap_read_unlock(mm);
>
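For reference, the synchronization the commit message relies on, as I read
mm/ksm.c (paraphrased from memory, so please double-check):

	static inline bool ksm_test_exit(struct mm_struct *mm)
	{
		/* The mm counts as exiting once all users are gone. */
		return atomic_read(&mm->mm_users) == 0;
	}

and __ksm_exit(), when it cannot free the mm_slot immediately, takes and
drops the mmap write lock:

	mmap_write_lock(mm);
	mmap_write_unlock(mm);

which cannot complete while we still hold the read lock above, so iterating
the VMAs of an exiting mm under the read lock stays safe.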



--
Thanks,

David / dhildenb